According to Dr. Nando de Freitas, a lead researcher at DeepMind, humanity is on the verge of solving artificial general intelligence (AGI) within our lifetimes.

In response to an opinion piece penned by yours truly, the scientist posted a thread on Twitter that began with what is perhaps the boldest statement we have seen from anyone at DeepMind concerning its current progress toward AGI:

Someone’s opinion article. My opinion: It’s all about scale now! The Game is Over! It’s about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, INNOVATIVE DATA, on/offline, … 1/N https://t.co/UJxSLZGc71

— Nando de Freitas 🏳️‍🌈 (@NandoDF) May 14, 2022

Here's the full text of de Freitas' thread:

Someone’s opinion article. My opinion: It’s all about scale now! The Game is Over! It’s about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, INNOVATIVE DATA, on/offline, … 1/N

Solving these scaling challenges is what will deliver AGI. Research focused on these problems, eg S4 for greater memory, is needed. Philosophy about symbols isn’t. Symbols are tools in the world and big nets have no issue creating them and manipulating them 2/n

Finally and importantly, [OpenAI co-founder Ilya Sutskever] @ilyasut is right [cat emoji]

Rich Sutton is right too, but the AI lesson ain’t bitter but rather sweet. I learned it from [Google researcher Geoffrey Hinton] @geoffreyhinton a decade ago. Geoff predicted what was predictable with uncanny clarity.

There's a lot to unpack in that thread, but “It’s all about scale now” leaves little room for interpretation.

How did we get here?

DeepMind recently published a research paper and blog post on its new multi-modal AI system, Gato, which is capable of performing hundreds of different tasks.

The company dubbed Gato a “generalist” system, but stopped short of saying it was capable of general intelligence.
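
To give a rough sense of what “multi-modal” means here, below is a minimal sketch in Python of the core idea the Gato paper describes: every modality is serialized into one shared stream of tokens so a single network can be trained on all of them at once. The tokenizers, offsets, and function names are hypothetical stand-ins, not DeepMind's actual implementation.

```python
# Toy illustration (not DeepMind's code; all names are hypothetical) of the
# "generalist" idea behind Gato: text, images, and game actions are all
# serialized into one flat stream of integer tokens, so one network can
# consume every task through the same interface.

def tokenize_text(s: str) -> list[int]:
    # Toy text tokenizer: one token per byte, range 0-255.
    return list(s.encode("utf-8"))

def tokenize_image(pixels: list[int]) -> list[int]:
    # Toy image tokenizer: quantize pixels into 256 bins, offset by 256
    # so image tokens never collide with text tokens.
    return [256 + (p % 256) for p in pixels]

def tokenize_action(action_id: int) -> list[int]:
    # Toy action tokenizer for control tasks (e.g., Atari button presses).
    return [512 + action_id]

def build_sequence(kind: str, payload) -> list[int]:
    # Whatever the task -- captioning, chatting, or playing a game --
    # the model sees the same thing: one flat token sequence.
    if kind == "text":
        return tokenize_text(payload)
    if kind == "image":
        return tokenize_image(payload)
    if kind == "action":
        return tokenize_action(payload)
    raise ValueError(f"unknown modality: {kind}")

# One interface, many tasks:
print(build_sequence("text", "stack the blocks"))   # [115, 116, 97, ...]
print(build_sequence("image", [0, 17, 255, 34]))    # [256, 273, 511, 290]
print(build_sequence("action", 3))                  # [515]
```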

It's easy to confuse Gato with AGI. The difference: a general intelligence could learn to do new things on its own.

In my piece, I compared Gato to a gaming console:

Gato’s ability to perform multiple tasks is more like a video game console that can store 600 different games, than it’s like a game you can play 600 different ways. It’s not a general AI, it’s a bunch of pre-trained, narrow models bundled neatly.

That’s not a bad thing, if that’s what you’re looking for. But there’s simply nothing in Gato’s accompanying research paper to indicate this is even a glance in the right direction for AGI, much less a stepping stone.

De Freitas disagreed. That's not surprising, but what I found shocking was the second tweet in his thread:

Solving these scaling challenges is what will deliver AGI. Research focused on these problems, eg S4 for greater memory, is needed. Philosophy about symbols isn’t. Symbols are tools in the world and big nets have no issue creating them and manipulating them 2/n

— Nando de Freitas 🏳️‍🌈 (@NandoDF) May 14, 2022

The bit about symbols up there might well have been written in response to my opinion piece. And those who follow the world of artificial intelligence know that mentioning symbols and AGI together is a good way to summon Gary Marcus.

Enter Gary

Marcus has advocated for a new approach to AGI for several years. He co-authored the best-selling book "Rebooting AI" with Ernest Davis, and he believes the entire field needs to change its core methodology if it hopes to build AGI.

He's debated his ideas with everyone from the University of Montreal's Yoshua Bengio to Facebook's Yann LeCun.

In the first edition of his new Substack newsletter, Marcus took on de Freitas' statements with a fiery but respectful rebuttal.

Marcus refers to these hyper-scaled systems as attempts at “Alt Intelligence.”

Of DeepMind's exploration, he writes:

There’s nothing wrong, per se, with pursuing Alt Intelligence.

Alt Intelligence represents an intuition (or more properly, a family of intuitions) about how to build intelligent systems, and since nobody yet knows how to build any kind of system that matches the flexibility and resourcefulness of human intelligence, it’s certainly fair game for people to pursue multiple different hypotheses about how to get there.

Nando de Freitas is about as in-your-face as possible about defending that hypothesis, which I will refer to as Scaling-Uber-Alles. Of course, that name, Scaling-Uber-Alles, is not entirely fair.

De Freitas knows full well (as I will discuss below) that you can’t just make the models bigger and hope for success. People have been doing a lot of scaling lately, and achieved some great successes, but also run into some road blocks.

Marcus goes on to describe the problem of incomprehension that plagues the AI industry's giant-sized models. He seems to be arguing that no matter how awesome and amazing these systems get, they remain incredibly unreliable.

He writes:

DeepMind’s newest star, just unveiled, Gato, is capable of cross-modal feats never seen before in AI, but still, when you look in the fine print, remains stuck in the same land of unreliability, moments of brilliance coupled with absolute discomprehension.

Of course, it’s not uncommon for defenders of deep learning to make the reasonable point that humans make errors, too.

But anyone who is candid will recognize that these kinds of errors reveal that something is, for now, deeply amiss. If either of my children routinely made errors like these, I would, no exaggeration, drop everything else I am doing, and bring them to the neurologist, immediately.

That's certainly worth a laugh, but there's a serious undertone to it. When a DeepMind researcher declares the game is over, it conjures a vision of the immediate or near-term future that doesn't quite make sense.

AGI? Really?

Gato, DALL-E, and GPT-3 aren't robust enough for unfettered public consumption. None of them can output consistently solid results, and all of them require hard filters to keep them from skewing toward bias. We haven't figured out the secret sauce for coding AGI, because human problems are often hard, and they don't always have a single, trainable solution.
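
For context on what a “hard filter” looks like in practice, here's a minimal hedged sketch: a blocklist check bolted on after generation, rather than safe behavior the model learned on its own. The `generate()` call and the blocklist contents are hypothetical placeholders, not any lab's actual moderation stack.

```python
# Minimal sketch of a "hard filter": a post-hoc check wrapped around a
# model's output. generate() and BLOCKLIST are hypothetical placeholders.

BLOCKLIST = {"badword1", "badword2"}  # stand-ins for a real moderation list

def generate(prompt: str) -> str:
    # Placeholder for a call to a large model like GPT-3.
    return "some generated text"

def safe_generate(prompt: str, max_retries: int = 3) -> str:
    for _ in range(max_retries):
        output = generate(prompt)
        # Accept the output only if no word appears on the blocklist.
        if not set(output.lower().split()) & BLOCKLIST:
            return output
    # If the model keeps producing flagged text, refuse outright
    # rather than pass it along.
    return "[output withheld by filter]"

print(safe_generate("tell me a story"))
```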

And it's not clear how scaling, even paired with breakthroughs in logic, would fix these issues.

That doesn't mean giant-sized models aren't useful.

What DeepMind, OpenAI, and similar labs are doing is very important; it's cutting-edge science.

But to declare the game over? To suggest that AGI will arise from a system whose contribution is the way it serves up models? Gato is amazing, but that feels like a stretch.

And nothing in de Freitas' spirited rebuttal changed my opinion.

Gato's creators are clearly brilliant, and I'm not pessimistic about AGI because Gato isn't mind-blowing enough. Quite the opposite.

I fear AGI is decades or more away precisely because of Gato, DALL-E, and GPT-3. Each demonstrates a breakthrough in our ability to manipulate computers.

It's amazing to see a machine pull off feats of misdirection and prestidigitation, especially when you realize that the machine is dumber than a mouse.

But, to me, it's obvious that it will take more than a single card trick to get modern artificial intelligence from here to AGI.

As Marcus writes at the close of his newsletter:

If we are to build AGI, we are going to need to learn something from humans, how they reason and understand the physical world, and how they represent and acquire language and complex concepts.

It is sheer hubris to believe otherwise.