Reading Minds and Building Moats
This week’s update:
🧠 AI is decoding brain signals
🏰 Google has no moat
🖨️ Replacing Jobs
⌨️ StarCoder release
👴 Godfather of AI
Latest Updates
Mind-Reading AI
The idea of “reading minds” is infinitely complex, given that our thoughts are not pages in a book. But we seem to be making rapid progress in deciphering what people are thinking: “In a study published in the journal Nature Neuroscience, the researchers described an A.I. that could translate the private thoughts of human subjects by analyzing fMRI scans.”
Google Has No Moat
In a leaked internal document from Google, one of their researchers admits the company has no moat, but argues that neither does any other AI company: “We have no secret sauce. Our best hope is to learn from and collaborate with what others are doing outside Google. We should prioritize enabling 3P integrations.”
IBM Plans to Replace 7,800 Jobs with AI
It’s hard to tell whether AI is simply a scapegoat for layoffs at this point, or whether it is beginning to have an impact on hiring. I expect it’s the former, and the latter will come eventually. “IBM Chief Executive Officer Arvind Krishna has revealed plans to pause hiring for about 7,800 positions that could be replaced by artificial intelligence systems over time, according to a Bloomberg news report published Monday.”
The World Economic Forum also projects that AI, among other economic drivers, will contribute to job losses over the next five years: “Artificial Intelligence and other economic drivers will result in 83 million job losses over the next five years, amid “structural labour market churn” according to the World Economic Forum’s new Future of Jobs report.”
StarCoder: A State-of-the-Art LLM for Code
Hugging Face and ServiceNow released their open source answer to GitHub’s Copilot: StarCoder. “StarCoder and StarCoderBase are Large Language Models for Code (Code LLMs) trained on permissively licensed data from GitHub, including from 80+ programming languages, Git commits, GitHub issues, and Jupyter notebooks.”
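If you want to kick the tires, here’s a minimal sketch of how you might prompt StarCoder through the Hugging Face transformers library. The bigcode/starcoder checkpoint id and the surrounding setup are my assumptions based on how Hugging Face models are typically loaded, not something spelled out in the announcement:

```python
# Minimal sketch: asking StarCoder for a code completion via the
# Hugging Face transformers library. The "bigcode/starcoder" checkpoint
# id is an assumption; the model is gated, so you may first need to
# accept its license on the Hub and log in with `huggingface-cli login`.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder"  # assumed checkpoint id

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)  # large download

# Ask the model to continue a function definition.
prompt = "def fibonacci(n: int) -> int:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```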
Other Links
KidGeni - A generative image tool built specifically for kids. You can even order shirts featuring the best things your kids create, or just let them be creative, which is pretty neat.
Chatg - Get information about companies in industry segments. I’ve been consulting for a company regarding software for internal tools, and I’ve experimented with this (along with other tools) to inform some of our decision-making.
Notion - Send tasks to Notion with your voice. I use Notion for everything, so I’m excited to try this out.
SlackGPT - Slack is bringing AI into its tools in a big way.
AudioPen - A way to convert messy thoughts into clear text. I tend to ramble when I speak, so this has amazing potential. I’m eager to give it a try.
Deep Dive
The Godfather of AI Leaves Google and Warns of Danger Ahead
Geoffrey Hinton, a long-time AI researcher, left Google this week so he could speak freely about his concerns around AI without hurting the company.
“But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
‘It is hard to see how you can prevent the bad actors from using it for bad things,’ Dr. Hinton said.”
In another interview, he discussed more of his concerns:
“I have suddenly switched my views on whether these things are going to be more intelligent than us,” he said in an interview with MIT Technology Review. “I think they’re very close to it now and they will be much more intelligent than us in the future.... How do we survive that?”
A big part of the problem is the unknown. We simply don’t know what AI will be capable of. Right now we can harness its power for incredible things, but what happens when the AI we’ve trained becomes smarter than we are or slips outside the guardrails we create? What happens when the technology begins to manipulate or kill on its own, or when bad actors team up with incredibly intelligent AI to take down governments, societies, or humanity?
And the timetable is also unknown. In an article from The Guardian:
I’ve been shaken by the realisation that digital intelligence is probably much better than biological intelligence
“I thought it would happen eventually, but we had plenty of time: 30 to 50 years. I don’t think that any more. And I don’t know any examples of more intelligent things being controlled by less intelligent things – at least, not since Biden got elected.
“You need to imagine something more intelligent than us by the same difference that we’re more intelligent than a frog. And it’s going to learn from the web, it’s going to have read every single book that’s ever been written on how to manipulate people, and also seen it in practice.”
He now thinks the crunch time will come in the next five to 20 years, he says. “But I wouldn’t rule out a year or two. And I still wouldn’t rule out 100 years – it’s just that my confidence that this wasn’t coming for quite a while has been shaken by the realisation that biological intelligence and digital intelligence are very different, and digital intelligence is probably much better.”
Of course, we simply don’t know:
There’s still hope, of sorts, that AI’s potential could prove to be overstated. “I’ve got huge uncertainty at present. It is possible that large language models,” the technology that underpins systems such as ChatGPT, “having consumed all the documents on the web, won’t be able to go much further unless they can get access to all our private data as well. I don’t want to rule things like that out – I think people who are confident in this situation are crazy.”
So now certainly seems like the time to get a handle on this, while the technology is still in its infancy. But will we be able to?