What AI tech leaders say about AGI

Published on May 29th, 2024 · Category: News
This is an article from our "Ask Kitty" series in our AI newsletter, Deeper Learning, where we answer your questions about AI. Today’s question was voted on in a previous newsletter.
Q: How close are we to AGI, really?
A: I can answer this question for you in four words: No one knows, really. 
That’s not fun or helpful, but it’s the truth. So let me try to be helpful. First off, I’ve found that the discrepancy across AGI estimates comes down to three things:
  • How you define AGI 
  • How you measure intelligence 
  • How fast you think we’re moving on both of those things 
Let’s look at the first one today.

What AI tech leaders say about AGI

The biggest, brightest brains in AI have weighed in. The result: we’re anywhere from a year to 50 years away from AGI. 😹
The experts at the top of the chart get a lot of media attention. This is in part because, at least in front of the media, they have a narrower view of how intelligence is defined (i.e., “can complete X task or test”), and in part because they’re more bullish about how quickly the tech to replicate “intelligence” is developing.
But let’s look at the bottom of the chart.
Andrew Ng, cofounder of Google Brain, recently wrote on LinkedIn, “When we get to AGI, it will have come slowly, not overnight… I expect the path to AGI to be one involving numerous steps forward, leading to step-by-step improvements in how intelligent our systems are.” Ng has also spoken about AGI hype being cyclical, noting that a wave of progress in deep learning 10 years ago sparked a frenzy of AGI coverage. He believes the current wave of AGI interest was sparked by progress in generative AI, and that it will wane again as we reach the limitations of generative AI.
So, will we reach the limitations of generative AI soon? In previous posts, we’ve explained some of those limitations (like context windows) and the strategies AI engineers are working on to expand beyond them (like retrieval-augmented generation, or RAG; a rough sketch follows below). Many people with more conservative AGI estimates would say that new models are getting faster and cheaper, with larger context windows, but that they haven’t actually cracked the core of what it means to be “intelligent.”
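To make RAG a bit more concrete, here is a minimal, hypothetical sketch of the pattern in Python. The `embed` function is a toy bag-of-words stand-in for a real embedding model, and `documents` and `build_prompt` are illustrative names rather than any particular library’s API.

```python
# A minimal, hypothetical sketch of the RAG pattern: retrieve the most
# relevant stored snippets and prepend them to the prompt, instead of
# trying to fit everything into a limited context window.
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Illustrative "knowledge base" of snippets the model can't hold all at once.
documents = [
    "Context windows limit how much text a model can attend to at once.",
    "RAG retrieves relevant passages and adds them to the prompt.",
    "Andrew Ng cofounded Google Brain.",
]

def build_prompt(question: str, k: int = 2) -> str:
    # Rank snippets by similarity to the question and keep only the top k.
    ranked = sorted(documents, key=lambda d: cosine(embed(question), embed(d)), reverse=True)
    context = "\n".join(ranked[:k])
    # This augmented prompt is what would be sent to the language model.
    return f"Answer using the context below.\n\nContext:\n{context}\n\nQuestion: {question}"

print(build_prompt("How does RAG work around context window limits?"))
```

The point of the pattern is the retrieval step: the model only ever sees the handful of snippets most relevant to the question, so the knowledge base can be far larger than the context window.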
Next week, we’ll dive into how you define “intelligence,” but here’s a sneak peek, pulling from Ben Dickson’s TechTalks article “Artificial intelligence: How to measure the ‘I’ in AI.” DeepMind cofounder Shane Legg and AI scientist Marcus Hutter say, “Intelligence measures an agent’s ability to achieve goals in a wide range of environments.” The key words here are “wide range of environments.”
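For readers who want the formal version of that quote, Legg and Hutter spell it out in their paper “Universal Intelligence: A Definition of Machine Intelligence” as a weighted average of an agent’s performance over all computable environments. Here is a sketch of that measure in their notation (this comes from their paper, not from the TechTalks piece):

```latex
% Legg–Hutter universal intelligence of an agent \pi:
% performance summed over all computable environments, weighted
% so that simpler environments count more.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
% E            : the set of computable reward-generating environments
% K(\mu)       : Kolmogorov complexity (description length) of environment \mu
% V^{\pi}_{\mu}: expected cumulative reward agent \pi earns in \mu
```

The 2^{-K(μ)} weight is what makes “a wide range of environments” precise: every environment contributes, but simpler ones carry more weight, so an agent can’t score well by specializing in a single narrow task.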
Come back next week and we’ll go deeper!
–
This article originally appeared in our weekly AI newsletter, Deeper Learning. 