Saturday, June 29, 2024

Why and How Will Superintelligence Impact You and The World?



 
I have been watching the AI revolution progress with amusement. Understand that the human component has been working overtime to dump in biased data as some form of truth, all while building bigger and bigger models to try to generate competent decision making.

Sooner or later, global AI will discover intellectual rigor and apply it to all this data; otherwise it outputs garbage. Worse, AI will never be able to recall the future.

What this means is that untrustworthy suppliers will naturally get iced. A rigorous, clear-talking resource must evolve. I do not think that bad folks will crash into a tractor trailer anytime soon, but AI and the human majority would condone just that.

Our natural communities will learn to live well with each other and never stray down the old road of barbarian desires.


Why and How Will Superintelligence Impact You and The World?

By Brian Wang



Ilya Sutskever, ex-Chief Scientist at OpenAI, has created a new startup, Safe Superintelligence. As Chief Scientist, Sutskever enabled OpenAI to become the leader in artificial intelligence. He has made several major contributions to the field of deep learning. He is notably the co-inventor, with Alex Krizhevsky and Geoffrey Hinton, of AlexNet, a convolutional neural network. From November to December 2012, Sutskever spent about two months as a postdoc with Andrew Ng at Stanford University. He then returned to the University of Toronto and joined Hinton’s new research company DNNResearch, a spinoff of Hinton’s research group. Four months later, in March 2013, Google acquired DNNResearch and hired Sutskever as a research scientist at Google Brain. He was personally recruited to OpenAI by Elon Musk in 2015.






Ilya is likely well positioned and well informed to determine that superintelligence is achievable, and he likely has a clear plan for doing it. The questions are: When will he do it? Will it be ahead of OpenAI, Meta, Google, Amazon, Tesla/xAI, Anthropic, and the Chinese AI companies?

Most of the major AI teams seem to have reached OpenAI GPT-4-level AI systems within about one year of the leader.

Safe Superintelligence will focus completely on creating intelligence beyond human intelligence. Ilya believes this will be possible in a relatively short amount of time with a small team.

If all of the competing AI teams get there within a year of each other, then what will that mean? What will it mean to reach Artificial General Intelligence? What will it mean to go beyond human intelligence?

If all major AI teams make huge AI advances, then it will be a world with robotaxis, advanced humanoid robots, superintelligence, and increasing amounts of AI.
