Dr. Geoffrey Hinton, often called the “godfather of AI,” has decided to leave his position at Google so that he can speak freely about the technology’s risks. In a recent interview with the New York Times, Hinton voiced his concerns about AI. “It’s hard to see how you can prevent the bad actors from using it for bad things,” he stated. “Look at how [AI technology was] five years ago and how it is now. Take the difference and propagate it forwards. That’s scary.”
Dr. Hinton has worked in AI since the ’70s, earning a degree at the University of Edinburgh, and was an early pioneer of the “neural network,” a mathematical system that learns skills by analyzing data. In 2012, while teaching at the University of Toronto, Hinton and two of his students — Alex Krizhevsky and Ilya Sutskever (Sutskever later became chief scientist of OpenAI, the creator of ChatGPT) — built a neural network that could teach itself to identify objects after being digitally fed thousands of images. Google soon acquired the company that Dr. Hinton and his two students started for $44 million USD. In 2018, Dr. Hinton and two other scientists earned the renowned Turing Award for their neural network advancements.
Even as the accolades accrued, Dr. Hinton’s fears about the future of AI grew. He told the Times that, in his opinion, Google was a “proper steward” of AI until 2022, when the search giant’s core business was threatened by Microsoft’s new OpenAI-powered Bing search engine and the company began a “code red” response to meet the challenge.
According to Highsnobiety, Dr. Hinton worries that, in the short term, the propagation of false photos, videos and text will prevent an average internet user from discerning “what is true anymore,” and that in the longer term AI’s automation of simple tasks will upend the job market. “It takes away the drudge work … it might take away more than that,” he said. Down the road, Dr. Hinton could even see a sci-fi story come to life, in which AI systems not only generate code but run that code on their own, making them autonomous and posing a threat to humanity as a whole. “The idea that this stuff could actually get smarter than people — a few people believed that,” he said to the Times. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”