Godfather of AI says humans no match for it, resigns from Google to talk about AI danger
Geoffrey Hinton devoted his career to the advancement of artificial intelligence (AI) and worked at Google for ten years. Along with two of his students (one of whom became the chief scientist at OpenAI), he created a neural network that eventually formed the basis of AI-driven chatbots such as ChatGPT, Bing, and Bard. He was also a joint recipient of the 2018 Turing Award with Yann LeCun and Yoshua Bengio for his work on deep learning. For his contributions to the world of AI, he came to be known as the ‘Godfather of AI’.
While some individuals believe that AI will aid in enhancing human productivity and efficiency, others believe that it will result in the destruction of humanity. Dr. Hinton, despite contributing significantly to the development of AI tech, belongs to the latter group. Since he couldn’t talk about the dangers of AI while being employed by Google, he quit his job to warn people of the many dangers of the emerging tech.
Here is the full story in five points:
- The 75-year-old scientist said that he left his job at Google because he now wants to focus more on ‘philosophical work’ centred on the dangers of artificial intelligence. “I want to talk about AI safety issues without having to worry about how it interacts with Google’s business,” he told MIT Technology Review in an interview, adding that as long as Google was paying him, he couldn’t do that. He also talked about his future plans and said that one of his priorities is to try to work with leaders in the technology industry to see if they can come together and agree on what the risks are and what to do about them. He added that an international ban, like the one on chemical weapons, might be a way to curb the dangers of AI.
- Turning to the dangers of AI, Hinton said that he has become more concerned about them ever since GPT-4 launched. It made him realise that AI machines are actually on their way to becoming a lot smarter than he had anticipated, and that he is ‘scared’ about the consequences. “Sometimes I think it’s as if aliens had landed and people haven’t realised because they speak very good English,” he told the publication.
- According to Hinton, AI is on track to surpass human intelligence in the not-too-distant future. He believes that we need to start thinking seriously about the potential consequences of creating machines that are more intelligent than humans. “Our brains have 100 trillion connections,” Hinton says, adding, “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.” He also says that there are two types of intelligence in the world: animal brains and neural networks, and that artificial intelligence is a completely different, new and better form of intelligence.
- Further, Hinton added that he was concerned about the dangers of unregulated AI and is afraid that such technology could be used to manipulate or harm humans who are not prepared for its power. He told the publication that AI will surpass human intelligence in the future, posing a serious threat to our survival. He also warned about the potential misuse of AI by bad actors, such as politicians who might want to use it to manipulate elections or win wars, and expressed concern about the development of autonomous weapons, which could be programmed to carry out immoral actions without human intervention.
- According to Hinton, the next step for smart machines is the ability to create their own subgoals, which might be used to carry out tasks that are inherently immoral or harmful. He also warned that such technology could be used by individuals like Putin to create hyper-intelligent robots with the goal of killing people, without micromanaging them, instead letting them figure out how to do it on their own.
Earlier, in an interview with The New York Times, Hinton had said that enhanced versions of AI could prove fatal for humanity, as such systems have a tendency to learn ‘unexpected behaviour from the vast amounts of data that they analyse’. He also talked about how this could become an issue in the future, as companies and individuals will not only allow AI to generate its own code but actually let it run that code on its own.
He then added that people had earlier thought this possibility was far off in the future. However, given the rapid pace at which AI is advancing, that is no longer the case.