Podcast: Can you teach a machine common sense?

November 11, 2020, 5:35 p.m.

Artificial intelligence has become such a big part of our lives, you’d be forgiven for losing count of the algorithms you interact with. But the AI powering your weather forecast, Instagram filter, or favorite Spotify playlist is a far cry from the hyper-intelligent thinking machines industry pioneers have been musing about for decades.

Deep learning, the technology driving the current AI boom, can train machines to become masters at all sorts of tasks. But it can only learn one at a time. And because most AI models train their skill sets on thousands or millions of existing examples, they end up replicating patterns within historical data, including the many bad decisions people have made, like marginalizing people of color and women.

Still, systems like the board-game champion AlphaZero and the increasingly convincing fake-text generator GPT-3 have stoked the flames of debate regarding when humans will create an artificial general intelligence–machines that can multitask, think, and reason for themselves.

The idea is divisive. Beyond the answer to how we might develop technologies capable of common sense or self-improvement lies yet another question: who really benefits from the replication of human intelligence in an artificial mind?

“Most of the value that’s being generated by AI today is returning back to the billion dollar companies that already have a fantastical amount of resources at their disposal,” says Karen Hao, MIT Technology Review’s senior AI reporter and the writer of The Algorithm. “And we haven’t really figured out how to convert that value or distribute that value to other people.”

In this episode of Deep Tech, Hao and Will Douglas Heaven, our senior editor for AI, join our editor-in-chief, Gideon Lichfield, to discuss the different schools of thought around whether an artificial general intelligence is even possible, and what it would take to get there.

Check out more episodes of Deep Tech here.

Full episode transcript:

Gideon Lichfield: Artificial intelligence is now so ubiquitous, you probably don’t even think about the fact that you’re using it. Your web searches. Google Translate. Voice assistants like Alexa and Siri. Those cutesy little filters on Snapchat and Instagram. What you see–and don’t see–on social media. Fraud alerts from your credit-card company. Amazon recommendations. Spotify playlists. Traffic directions. The weather forecast. It’s all AI, all the time.

And it’s all what we might call “dumb AI”. Not real intelligence. Really just copying machines: algorithms that have learned to do really specific things by being trained on thousands or millions of correct examples. On some of those things, like face and speech recognition, they’re already even more accurate than humans.

All this progress has reinvigorated an old debate in the field: can we create actual intelligence, machines that can independently think for themselves? Well, with me today are MIT Technology Review’s AI team: Will Heaven, our senior editor for AI, and Karen Hao, our senior AI reporter and the writer of The Algorithm, our AI newsletter. They’ve both been following the progress in AI and the different schools of thought around whether an artificial general intelligence is even possible and what it would take to get there.

I’m Gideon Lichfield, editor in chief of MIT Technology Review, and this is Deep Tech.

Will, you just wrote a 4,000-word story on the question of whether we can create an artificial general intelligence. So you must've had some reason for doing that to yourself. Why is this question interesting right now?

Will Douglas Heaven: So in one sense, it’s always been interesting. Building a machine that can think and do things that people can do has been the goal of AI since the very beginning, but it’s been a long, long struggle. And past hype has led to failure. So this idea of artificial general intelligence has become, you know, very controversial and very divisive, but it’s having a comeback. That’s largely thanks to the success of deep learning over the last decade. And in particular systems like AlphaZero, which was made by DeepMind and can play Go and shogi, a kind of Japanese chess, and chess. The same algorithm can play all three games. And GPT-3, the large language model from OpenAI, which can uncannily mimic the way that humans write. That has prompted people, especially over the last year, to jump in and ask these questions again. Are we on the cusp of building artificial general intelligence: machines that can think and do things like humans can?

Gideon Lichfield: Karen, let’s talk a bit more about GPT-3, which Will just mentioned. It’s this algorithm that, you know, you give it a few words and it will spit out paragraphs and paragraphs of what looks convincingly like Shakespeare or whatever else you tell it to do. But what is so remarkable about it from an AI perspective? What does it do that couldn’t be done before?

Karen Hao: What’s interesting is I think the breakthroughs that led to GPT-3 actually happened quite a number of years earlier. In 2017, the main breakthrough that triggered a wave of advancement in natural language processing occurred with the publication of the paper that introduced the idea of transformers. And the way a transformer algorithm deals with language is it looks at millions or even billions of examples of sentences, of paragraph structure, maybe even of code structure. And it can extract the patterns and begin to predict, to a very impressive degree, which words make the most sense together, which sentences make the most sense together, and then therefore construct these really long paragraphs and essays. What I think GPT-3 has done differently is that there are just orders of magnitude more data now being used to train this transformer technique. So what OpenAI did with GPT-3 is they’re not just training it on more examples of words from corpora like Wikipedia, or from articles like the New York Times, or Reddit forums; they’re also training it on sentence patterns and paragraph patterns, looking at what makes sense as an intro paragraph versus a conclusion paragraph. So it’s just getting way more information and really starting to mimic very closely how humans write, or how music scores are composed, or how code is written.

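The scoring idea Hao describes, predicting which words "make the most sense together," rests on the attention mechanism at the heart of transformers. Here is a deliberately tiny sketch of self-attention: every number and "word embedding" below is made up for illustration, and real transformers use learned query/key/value projections that this toy omits.

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def self_attention(vectors):
    """Toy self-attention: each word's output is a weighted mix of all
    word vectors, weighted by how strongly the words relate (dot product).
    Real transformers learn separate query/key/value projections; here the
    raw word vectors play all three roles for simplicity."""
    d = len(vectors[0])
    outputs = []
    for q in vectors:
        scores = [dot(q, k) / math.sqrt(d) for k in vectors]
        weights = softmax(scores)
        mixed = [sum(w * v[i] for w, v in zip(weights, vectors))
                 for i in range(d)]
        outputs.append(mixed)
    return outputs

# Three made-up 2-d "word embeddings": the first two are similar,
# so they attend mostly to each other.
words = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
out = self_attention(words)
```

Because the first two vectors are similar, the mixed output for the first word stays close to them and far from the third, which is the sense in which attention learns "what goes together."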
Gideon Lichfield: And before transformers, which can extract patterns from all of these different kinds of structures, what was AI doing?

Karen Hao: Before, natural language processing was much more basic. So transformers are kind of a self-supervised technique, where the algorithm is not being told exactly what to look for in the language. It’s just looking for patterns by itself, for what it thinks are the repeating features of language composition. But before that, there were a lot more supervised approaches to language, much more hard-coded approaches, where people were teaching machines: “these are nouns, these are adjectives, this is how you construct these things together.” And unfortunately that is a very laborious way to curate language, where every word kind of has to have a label and the machine has to be manually taught how to construct these things. And so it limited the amount of data that these techniques could feed off of. And that’s why language systems really weren’t very good.
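Hao's contrast, labels supplied by human annotators versus patterns the data supplies itself, can be illustrated with the simplest possible "self-supervised" language model: a bigram counter whose training signal is just the next word in raw, unlabeled text. The corpus and function names below are invented for illustration, and this is nothing like the scale of a transformer, only the same label-free principle.

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Self-supervised in the loosest sense: the 'label' for each word
    is simply the word that follows it in raw text, so no human
    annotation (parts of speech, grammar rules) is needed."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word` seen in training."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A made-up, tiny "corpus" standing in for Wikipedia-scale text.
corpus = "the cat sat on the mat . the cat chased a mouse ."
model = train_bigram_model(corpus)
```

Calling `predict_next(model, "the")` returns `"cat"`, because that pattern occurs most often in the raw text; no one ever told the model what a noun is.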

Gideon Lichfield: So let’s come back to that distinction between supervised and self-supervised learning, because I think we’re going to see it’s a fairly important part of the advances towards something that might become a general intelligence. Will, as you wrote in your piece, there’s a lot of ambiguity about what we even mean when we say artificial general intelligence. Can you talk a bit about what the options are there?

Will Douglas Heaven: There’s a sort of spectrum. On one end, you’ve got systems which, you know, can do many of the things that narrow AI, or dumb AI if you like, can do today, but sort of all at once. And AlphaZero is perhaps the first glimpse of that: this one algorithm that can train itself to do three different things. But an important caveat there: it can’t do those three things at once. So it’s not like a single brain that can switch between tasks. As Shane Legg, one of the co-founders of DeepMind, put it, it’s as if, when we wanted to play chess, you or I had to swap out our brain and put in our chess brain.

That’s clearly not very general, but we’re on the cusp of that kind of thing: a kind of multi-tool AI, where one AI can do several different things that narrow AI can already do. And then moving up the spectrum, what probably more people mean when they talk about AGI is, you know, thinking machines, machines that are “human-like” in scare quotes, that can multitask in the way that a person can. You know, we’re extremely adaptable. We can switch from, say, frying an egg to writing a blog post to singing, whatever. Still, there are also folk, going right to the other end of the spectrum, who would rope machine consciousness into talk of AGI. You know, that we’re not going to have true general intelligence, or human-like intelligence, until we have a machine that not only can do things that we can do, but knows that it can, that has some kind of self-reflection in there. I think all those definitions have been around since the beginning, but it’s one of the things that makes AGI difficult to talk about and quite controversial, because there’s no clear definition.

Gideon Lichfield: When we talk about artificial general intelligence, there’s this sort of implicit assumption that human intelligence itself is also absolutely general. It’s universal. We can fry an egg or we can write a blog post or we can dance or sing. And that all of these are skills that any general intelligence should have. But is that really the case or are there going to be different kinds of general intelligence?

Will Douglas Heaven: I think, and I think many in the AI community would also agree, that there are many different intelligences. We’re sort of stuck on this idea of human-like intelligence, largely, I think, because humans have for a long time been the best example of general intelligence that we’ve had, so it’s obvious why they’re a role model. You know, we want to build machines in our own image. But just look around the animal kingdom and there are many, many different ways of being intelligent: from the sort of social intelligence that ants have, where they can collectively do really remarkable things, to octopuses, which we’re only just beginning to understand, but which are intelligent in a very alien way compared to ourselves. And even our closest cousins, like chimps, have intelligences which are different to yours and mine; they have different skill sets than humans do.

So I think the idea that machines, if they become generally intelligent, need to be like us is nonsense; it’s going out the window. The very mission of building an AGI that is human-like is perhaps pointless, because we already have human intelligences, right? We have ourselves. So why do we need to make machines that do those things? It’d be much, much better to build intelligences that can do things that we can’t do, that are intelligent in different ways, to complement our abilities.

Gideon Lichfield: Karen, people obviously love to talk about the threat of a super-intelligent AI taking over the world, but what are the things that we should really be worried about?

Karen Hao: One of the really big ones in recent years has been algorithmic discrimination. This is a phenomenon we started noticing where, when we train algorithms, small or large, to make decisions based on historical data, they end up replicating patterns within that data that we might not necessarily want replicated, such as the marginalization of people of color or the marginalization of women.

Things in our history that we would rather do without as we move forward and progress as a society. But because algorithms are not very smart, and they extract these patterns and replicate them mindlessly, they end up making decisions that discriminate against people of color, against women, against particular cultures that are not Western-centric.

And if you observe the conversations that are happening among people who talk about some of the ways we need to think about mitigating threats around superintelligence, or around AGI, whatever you want to call it, they will talk about this challenge of value alignment. Value alignment being defined as: how do we get this super-intelligent AI to understand our values and align with them? If it doesn’t align with our values, it might go do something crazy. And that’s how it sort of starts to harm people.

Gideon Lichfield: How do we create an AI, a super intelligent AI, that isn’t evil?

Karen Hao: Exactly. Exactly. So instead of talking in the future about trying to figure out value alignment a hundred years from now, we should be talking right now about how we failed to align the values with very basic AIs today and actually solve the algorithmic discrimination problem.

Another huge challenge is the concentration of power that AI naturally creates. You need an incredible amount of computational power today to create advanced AI systems and break the state of the art. And the only players that really have that amount of computational power now are the large tech companies and maybe the top-tier research universities. And even the top-tier research universities can barely compete with the large tech companies anymore.

So the Googles, Facebooks, and Apples of the world. Another concern that people have for a hundred years from now is: once super-intelligent AI is unleashed, is it actually going to benefit people evenly? Well, we haven’t figured that out today either. Most of the value that’s being generated by AI today is returning back to the billion-dollar companies that already have a fantastical amount of resources at their disposal. And we haven’t really figured out how to convert that value or distribute that value to other people.

Gideon Lichfield: OK, well, let’s get back then to that idea of a general intelligence and how we would build it if we could. Will mentioned deep learning earlier, which is the foundational technique of most of the AI that we use today, and the current boom in it is only about eight years old. Karen, you talked to Geoffrey Hinton, often called the godfather of deep learning, at our EmTech conference recently. And he thinks that deep learning, the technique that we’re using for things like translation services or face recognition, is also going to be the basis of a general intelligence when we eventually get there.

Geoffrey Hinton [From EmTech 2020]: I do believe deep learning is going to be able to do everything. But I do think there’s going to have to be quite a few conceptual breakthroughs that we haven’t had yet. // Particularly breakthroughs to do with how you get big vectors of neural activity to implement things like reasoning. But we also need a massive increase in scale. // The human brain has about a hundred trillion parameters, that is, synapses. A hundred trillion. What are now called really big models, like GPT-3, have 175 billion. It’s thousands of times smaller than the brain.

Gideon Lichfield: Can you maybe start by explaining what deep learning is?

Karen Hao: Deep learning is a category of techniques that is founded on this idea that the way to create artificial intelligence is to create artificial neural networks that are based off of the neural networks in our brain. Human brains are the smartest form of intelligence that we have today.

Obviously Will has already talked about some challenges to this theory, but assuming that human intelligence is sort of the epitome of intelligence that we have today, we want to try to recreate artificial brains in the image of a human brain. And deep learning is that: a technique that uses artificial neural networks as a way to achieve artificial intelligence.

What you were referring to is that there are largely two different camps within the field around how we might go about building artificial general intelligence. The first camp holds that we already have all the techniques we need; we just need to scale them massively, with more data and larger neural networks.

The other camp says deep learning is not enough: we need something else, which we haven’t yet figured out, to supplement deep learning in order to achieve things like common sense or reasoning that have so far been elusive to the AI field.

Gideon Lichfield: So Will, as Karen alluded to just now, the people who think we can build a general intelligence off of deep learning think that we need to add some things to it. What are some of those things?

Will Douglas Heaven: Among those who think deep learning is the way to go, as well as loads more data, like Karen said, there are a bunch of techniques that people are using to push deep learning forward.

You’ve got unsupervised learning. Traditionally, many deep learning successes, like image recognition, to use the clichéd example of recognizing cats, work because the AI has been trained on millions of images that humans have labeled with “cat”: you know, this is what a cat looks like, learn it. Unsupervised learning is when the machine goes in and looks at data that hasn’t been labeled in that way and tries to spot patterns by itself.

Gideon Lichfield: So in other words, you would give it like a bunch of cats, a bunch of dogs, a bunch of pecan pies, and it would sort them into groups?

Will Douglas Heaven: Yeah. It essentially has to first learn what the distinguishing features between those categories are, rather than being prompted. And that ability to identify, by itself, what those distinguishing features are is a step towards a better way of learning. And it’s practically useful, because of course the task of labeling all this data is enormous.

And we can’t continue along this path, especially if we want the system to train on more and more data; we can’t continue on the path of having it manually labeled. And even more interestingly, I think, an unsupervised learning system has the potential of spotting categories that humans haven’t. So we might actually learn something from the machine.
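Gideon's cats-dogs-and-pecan-pies question above is essentially a description of clustering. The classic example is k-means, sketched minimally below: the 2-d points and the seed centroids are made-up numbers (real systems would cluster learned image features, not hand-picked coordinates), and the one-seed-per-blob initialization just keeps this toy run deterministic.

```python
def kmeans(points, centroids, steps=10):
    """Plain k-means: repeatedly assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points. No labels
    are ever supplied; the groups emerge from the data."""
    for _ in range(steps):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda j: (p[0] - centroids[j][0]) ** 2
                                      + (p[1] - centroids[j][1]) ** 2)
            clusters[nearest].append(p)
        centroids = [(sum(p[0] for p in c) / len(c),
                      sum(p[1] for p in c) / len(c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Three unlabeled blobs standing in for "cats", "dogs", and "pecan pies"
# in a made-up 2-d feature space.
points = [(0, 0), (0.1, 0.2), (0.2, 0.1),
          (5, 5), (5.1, 4.9), (4.9, 5.2),
          (10, 0), (9.8, 0.1), (10.2, 0.2)]
centroids, clusters = kmeans(points, [(0, 0), (5, 5), (10, 0)])
```

The algorithm is never told which point is which; it recovers the three groups purely from how the points sit relative to one another.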

And then you’ve got things like transfer learning, and this is crucial for general intelligence. This is where you’ve got a model that has been trained on a set of data in one way or another, and you want to be able to transfer what it’s learned in that training to a new task, so that you don’t have to start from scratch each time.

So there are various ways you’d approach transfer learning, but for example you could take some of the values from one trained network and sort of preload another one, in a way that, when you ask it to recognize an image of a different animal, it already has some sense of, you know, what animals have: legs and heads and tails, what have you. So you just want to be able to transfer some of the things learned from one task to another. And then there are things like few-shot learning, which is where, as the name implies, the system learns from very few training examples. And that’s also going to be crucial, because we don’t always have lots and lots of data to throw at these systems to teach them.
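The "preload another network with values from a trained one" idea can be sketched schematically. The dictionary-based "model" and its layer names below are purely hypothetical stand-ins for real weight tensors; the point is only the shape of the operation: copy the learned feature layers, reinitialize just the task-specific head.

```python
import copy

def transfer(pretrained):
    """Start a new model from a pretrained one: keep the learned
    feature layers, reset only the task-specific head. The model here
    is a toy dict, not a real network."""
    model = copy.deepcopy(pretrained)
    model["head"] = [0.0] * len(model["head"])  # fresh head for the new task
    model["frozen"] = ["features"]              # only the head gets retrained
    return model

# Hypothetical weights "learned" on an earlier animal-recognition task.
pretrained = {"features": [0.3, -1.2, 0.8], "head": [0.5, 0.1]}
new_model = transfer(pretrained)
```

The new model inherits everything the old one learned about legs, heads, and tails (the feature weights) and only has to learn the final mapping for its new categories.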

I mean, they’re extremely inefficient when you think about it, compared to humans. You know, we can learn a lesson from one example, two examples. You show a kid a picture of a giraffe and it knows what a giraffe is. We can even learn what something is without seeing any example.

Karen Hao: Yeah. If you think about it, kids... if you show them a picture of a horse and then a picture of a rhino, and you say a unicorn is something in between a horse and a rhino, then maybe when they first see a unicorn in a picture book they’ll be able to know that it’s a unicorn. And so that’s how you kind of start learning more categories than the examples you’ve seen. And this is the inspiration for yet another frontier of deep learning, called low-shot or “less than one shot” learning. It’s the same principle as few-shot learning: if we are able to get these systems to learn from very, very tiny samples of data, the same way that humans do, that can really supercharge the learning process.
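Hao's unicorn example maps naturally onto nearest-prototype classification: a class for which no example was ever seen gets a prototype described in terms of known classes. The 3-d "features" below are invented numbers, purely for illustration, and this sketch is far simpler than actual less-than-one-shot-learning research.

```python
def midpoint(u, v):
    """A 'unicorn' described only as halfway between two known animals."""
    return [(a + b) / 2 for a, b in zip(u, v)]

def nearest_prototype(x, prototypes):
    """Classify x as the label of the closest prototype (squared distance)."""
    return min(prototypes,
               key=lambda label: sum((a - b) ** 2
                                     for a, b in zip(x, prototypes[label])))

# Made-up 3-d features: roughly (size, horn length, build).
horse = [0.8, 0.0, 0.6]
rhino = [1.0, 0.9, 1.0]
prototypes = {
    "horse": horse,
    "rhino": rhino,
    # No unicorn image was ever seen; its prototype comes from description.
    "unicorn": midpoint(horse, rhino),
}

picture_book_unicorn = [0.85, 0.5, 0.7]
```

Given those numbers, the never-seen unicorn prototype is the closest match for the picture-book unicorn, so the classifier "recognizes" a category it had zero training examples for.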

Gideon Lichfield: For me, this raises an even more general question; which is what makes people in the field of AGI so sure that you can produce intelligence in a machine that represents information digitally, in the forms of ones and zeros, when we still know so little about how the human brain represents information. Isn’t it a very big assumption that we can just recreate human intelligence in a digital machine?

Will Douglas Heaven: Yeah, I agree. In spite of the massive complexity of some of the neural networks we’re seeing today, in terms of their size and their connections, we are orders of magnitude away from anything that matches the scale of a brain, even a rather basic animal brain. So yeah, there’s an enormous gulf between where we are and that idea that we’re going to be able to do it, especially with the present technology, the present deep learning technology.

And of course, even though, as Karen described earlier, neural networks are inspired by the brain, by the neurons in our brains, that’s only one way of looking at the brain. I mean, brains aren’t just lumps of neurons. They have discrete sections that are dedicated to different tasks.

So again, this idea that just one very large neural network is going to achieve general intelligence is a bit of a leap of faith, because maybe general intelligence will require some breakthrough in how dedicated structures communicate. So there’s another divide among those chasing this goal.

You know, some think that you can just scale up neural networks. Other people think we need to step back from the specifics of any individual deep learning algorithm and look at the bigger picture. Maybe neural networks aren’t the best model of the brain, and we can build better ones, ones that look at how different parts of the brain communicate, so that the whole is greater than the sum of its parts.

Gideon Lichfield: I want to end with a philosophical question. We said earlier that even the proponents of AGI don’t think it will be conscious. Could we even say whether it will have thoughts? Will it understand its own existence in the sense that we do?

Will Douglas Heaven: In Alan Turing’s 1950 paper, Computing Machinery and Intelligence, which asks “Can machines think?”, written when AI was still just a theoretical idea, before we’d even addressed it as an engineering possibility, he raised this question: how do we tell if a machine can think? And in that paper he addresses this idea of consciousness. Maybe some people will come along and say machines can never think, because we won’t ever be able to tell that they can think, because we won’t be able to tell they’re conscious. And he sort of dismisses that by saying: well, if you push that argument far enough, then you have to say the same thing about the fellow humans you meet every day. There’s no ultimate way that I can say that any of you aren’t conscious. The only way I could know that is if I experienced being you. And you get to the point where communication breaks down, and it’s sort of a place we can’t go. So that’s one way of dismissing that question. I mean, I think the consciousness question will be around forever. One day I think we will have machines which act as if they could think, and could mimic humans so well, that we might as well treat them as if they’re conscious. But as to whether they actually are, I don’t think we’ll ever know.

Gideon Lichfield: Karen, what do you think about conscious machines?

Karen Hao: I mean, building off of what Will said: do we even know what consciousness is? I guess I would draw on the work of a professor at Tufts, actually, who approaches artificial intelligence from the perspective of artificial life. Like, how do you replicate all of the different things?

Not just the brain, but also the electrical pulses, the electrical signals, that we use within the body to communicate, and that have intelligence too. If we are fundamentally able to recreate every little thing, every little process in our bodies, or in an animal’s body eventually, then why wouldn’t those beings have the same consciousness that we do?

Will Douglas Heaven: You know, there’s a wonderful debate going on right now about brain organoids, which are little clumps of stem cells that are made to grow into neurons; they can even develop connections, and in some of them you see electrical activity. And there are various labs around the world studying these little blobs of brain to understand human brain diseases better. But there’s a really interesting ethical debate going on about, you know, at what point does this electrical activity raise the possibility that these little blobs in Petri dishes are conscious? And that shows that we have no good definition of consciousness, even for our own brains, let alone machine ones.

Karen Hao: And I want to add: we also don’t really have a good definition of “artificial.” So that just adds to it. I mean, if we talk about “artificial general intelligence,” we don’t have a good definition of any of the three words that compose that term. So, going to the point that Will made about these organoids grown in Petri dishes: is that considered artificial? If not, why? Do we define artificial as things that are just not made out of organic material? There’s just a lot of ambiguity in the definitions of all the things we’re talking about, which makes the consciousness question very complicated.

Will Douglas Heaven: It also makes them fun things to talk about.

Gideon Lichfield: That’s it for this episode of Deep Tech. And it’s also the last episode we’re doing for now. We’re working on some other audio projects that we’re hoping to launch in the coming months. So please keep an eye out for them. And if you haven’t already, you should check out our AI podcast called In Machines We Trust, which comes out every two weeks. You can find it wherever you normally listen to podcasts.

Deep Tech is written and produced by Anthony Green and edited by Jennifer Strong and Michael Reilly. I’m Gideon Lichfield. Thanks for listening.
