Richard Jacobs: Hello, this is Richard Jacobs with the Finding Genius podcast. I have Irina Higgins. She's a senior research scientist at Google's DeepMind. We're going to be talking about the AI they're working on there and what's new and interesting about it. So Irina, thanks for coming.
Irina Higgins: Thank you for having me.
Richard Jacobs: Yeah. So what’s a little bit of your history, how did you end up working for Google and DeepMind?
Irina Higgins: It was a bit of a convoluted story, actually. I studied psychology for my degree and computational neuroscience for my Ph.D. But I wasn't actually so sure I wanted to stay in academia, so during my Ph.D. I explored a lot of careers. I did about five internships. I was pretty seriously considering going into finance and working as a quant, but at the very last moment a friend recommended that I try tech instead. So I applied to Google and got into a Google research internship, and it was just about the time when Google had acquired DeepMind. There was a lot of excitement about it, and they recommended me to DeepMind. Once I met with the people and really understood what they were doing, it was a very easy choice to go there.
Richard Jacobs: Okay. And what kind of work are you doing right now? What’s happening over at DeepMind?
Irina Higgins: Well, as you probably know, DeepMind is kind of a mix between academia and industry. We essentially do research, and we're trying to build something called Artificial General Intelligence, which is an AI system that can solve pretty much any task a human can solve, at least as well as any human could do it. That's where the whole company is trying to go. My personal role in that mission is bringing my expertise in neuroscience, and actually I've started dabbling in physics recently as well. So I'm trying to bring a multidisciplinary look at intelligence and representation learning to get us closer to the AGI goal.
Richard Jacobs: Well, right now artificial intelligence seems really narrow, even within any given task. And I don't know if people are stringing together various narrow AIs to accomplish more complicated tasks, but how would you even approach Artificial General Intelligence? What are some of the thoughts on how to do it?
Irina Higgins: Yeah. So I guess no one really knows how to do it yet, and that's why it's so exciting. And you're absolutely correct, there is a huge camp of people who believe that once you have enough narrow AIs that can each do a single task really, really well, you just put them together and you have what you need. But from what I've seen so far of these incredible algorithms that beat human champions at certain games, or at image recognition, they are still really brittle. Changing the image by just a little bit of noise can completely throw an algorithm off, even though a human is completely fine. They really struggle with small variations of the game or of the task. So given how complex the natural world is, I just don't think we can ever come up with enough narrow AIs to really cover the space of all the tasks an agent could potentially be faced with. My approach is to try to understand what the commonalities between tasks are, and how the brain exploits those commonalities to become as general as it is. It seems like there are some very fundamental principles behind this interplay that we can try to uncover, and then build models that replicate them.
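That brittleness has a standard demonstration. Below is a minimal sketch, assuming PyTorch, of a fast-gradient-sign-style perturbation (the technique from Goodfellow et al.'s adversarial-examples work, not anything DeepMind-specific); the untrained toy classifier is just a stand-in for a real image model, where on a trained network a perturbation this small routinely flips the prediction even though a human sees essentially the same image.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a trained image classifier (untrained, for illustration).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in "image"
label = torch.tensor([3])                             # its assumed true class

# Take the gradient of the loss with respect to the input, not the weights.
loss = loss_fn(model(image), label)
loss.backward()

# Nudge every pixel a tiny amount in the direction that increases the loss.
epsilon = 0.05
adversarial = (image + epsilon * image.grad.sign()).detach().clamp(0, 1)

# On a trained network, the prediction often flips under this perturbation.
print("clean prediction:    ", model(image).argmax(dim=1).item())
print("perturbed prediction:", model(adversarial).argmax(dim=1).item())
```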
Richard Jacobs: Well, what are the first stabs at AGI? Are there particular tasks where, if a machine could do them, it would be a very significant step towards AGI?
Irina Higgins: I think anything to do with generalization or transfer. For example, we have algorithms that can play Atari games really well, and some of the most famous games involve some sort of paddle-and-ball situation, like Breakout or Pong. Even though the games are visually different, we as humans understand that what's going on is an abstract notion of a paddle and an abstract notion of a ball, and the goal of the game is to keep the ball in the air. So we can very, very quickly generalize skills between different versions of this kind of setup. If we can have an algorithm with the same generalization and transferability that we have as humans, I think that will be a significant breakthrough.
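To make that concrete, here is a hedged sketch in Python: if each game supplies its own encoder from pixels into a shared abstract paddle-and-ball state, a single policy written against that abstraction transfers across visually different games. The state fields, the encoder names, and the toy policy are illustrative assumptions, not DeepMind's actual method.

```python
from dataclasses import dataclass

@dataclass
class PaddleBallState:
    """Abstract state shared by Breakout, Pong, and similar games."""
    paddle_x: float  # our paddle's position along its axis, in [0, 1]
    ball_x: float    # the ball's position along the same axis, in [0, 1]

def keep_ball_up(state: PaddleBallState) -> str:
    """Move the paddle toward the ball; works for any game that can be
    encoded into PaddleBallState, regardless of its pixels."""
    if state.ball_x > state.paddle_x + 0.05:
        return "right"
    if state.ball_x < state.paddle_x - 0.05:
        return "left"
    return "stay"

# Each game would only need its own (hypothetical) pixel encoder:
#   state = encode_breakout(frame)   or   state = encode_pong(frame)
print(keep_ball_up(PaddleBallState(paddle_x=0.3, ball_x=0.7)))  # -> "right"
```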
Richard Jacobs: So you have computers playing games and they're getting really good at some of them. What's going on inside? I've heard that AI is like a black box and it's hard to figure out what's happening during learning. What insights do you have into that?
Irina Higgins: Yeah. So that's one of the big criticisms of deep learning, that as a field we still understand very little of why our algorithms are as successful as they are. Part of my input to this whole journey towards AGI is trying to think explicitly about representations: what should the algorithm learn inside to allow it to solve all of these different tasks easily and efficiently, and to generalize well? This is essentially what neuroscience is about, thinking about what the brain does with the sensory inputs it receives. Obviously images arrive at our eyes, on the retina, and the question is how they are processed through subsequent stages of the cortex to give rise to all the intelligent behavior we have. I'm trying to apply similar reasoning to AI systems, and the possible benefits are not just improved algorithms. The models also become more interpretable, and we may figure out what lies behind intelligence in general, whether biological or artificial.
Richard Jacobs: Well, what can you go into about some of the new learnings in artificial intelligence? Are there new models that seem to be working better, or are you seeing new behaviors from AI systems that you didn't see before?
Irina Higgins: Yeah, so actually one of the last conferences that still happened in person before the unfortunate virus situation was AAAI in February, and they were lucky to have Yann LeCun, Geoffrey Hinton, and Yoshua Bengio as speakers. I don't know if you're aware, but they're Turing Award winners, which is essentially the Nobel Prize of computer science, and they're kind of the godfathers of AI. If you look at what they said, all of them agreed that the future of AI is in something called unsupervised, or self-supervised, learning. Even though we don't really know how to do it yet, it seems like if we want any hope of getting the generalization we want, we need to think about unsupervised learning, and representation learning is part of that.
Richard Jacobs: Well, what does that mean for a layperson? With unsupervised learning, what are some of the key things involved, and how do the algorithms work versus supervised learning?
Irina Higgins: Oh, sure. So basically all the successes we may have heard of come from either supervised learning or reinforcement learning, and the way these work is that for every input to the model there is typically some sort of teaching signal. We know what the model should produce, and we can correct its errors pretty much at every step. With unsupervised learning, as the name suggests, there is no supervision. All we have is the data. That's why it's so hard: what do you want your model to do? We don't know. And this is why it's so important to look at other fields for inspiration. For example, we can look at how the brain processes the same data, how it reformats the information it receives, and try to capture that in our models so that they learn similar representations. Then we can feed those models into some sort of supervised learning setup and potentially solve all the same tasks as before, but with the extra properties of being more robust and more generalizable.
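A minimal sketch of the contrast, assuming PyTorch: in the supervised case the loss compares the model's output to a label, while in the unsupervised case the objective has to be manufactured from the data itself. Here the unsupervised stand-in is a plain autoencoder reconstruction loss, one common choice among many.

```python
import torch
import torch.nn as nn

x = torch.rand(32, 784)          # a batch of data, e.g. flattened images
y = torch.randint(0, 10, (32,))  # labels exist only in the supervised case

# Supervised: the teaching signal y says exactly what to produce.
classifier = nn.Linear(784, 10)
supervised_loss = nn.CrossEntropyLoss()(classifier(x), y)

# Unsupervised: no labels, so the objective comes from the data alone.
encoder = nn.Linear(784, 32)     # compress to a 32-dimensional code
decoder = nn.Linear(32, 784)
reconstruction = decoder(torch.relu(encoder(x)))
unsupervised_loss = nn.MSELoss()(reconstruction, x)

# The learned 32-d code can later feed a supervised setup, as described above.
print(supervised_loss.item(), unsupervised_loss.item())
```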
Richard Jacobs: Well, are there any projects you can talk about that Google is working on? Particular tasks or things that would really be a big milestone of success?
Irina Higgins: Well, I can't really talk about any unpublished work, but some of the projects I have been involved with have to do with something called disentangling. Disentangling is intuitively the idea that for every scene we see in the world, there is some sort of intuitive way to describe it. For example, one way to describe a scene of objects is in terms of the shapes of those objects, their positions, their sizes, their colors. What we're trying to do through disentangling is build a model that can look at many, many images of 3D scenes, for example, and through that learn all of these intuitive generative factors that are represented in the data. So if you show it a new image, it will tell you: okay, this image has two objects, a circle and a square; the square is on the right, the circle is on the left; one is blue, the other one is red, et cetera. It gives you interpretability, and it gives you really nice representations to work with. We have a whole line of work in that direction.
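One published objective from this line of work is the beta-VAE (Higgins et al., ICLR 2017): a variational autoencoder whose KL term is scaled by a factor beta greater than one, which empirically pushes individual latent dimensions to capture individual generative factors such as position, size, or color. Below is a sketch of just the loss, assuming PyTorch and treating the encoder's outputs (mu, log_var) and the reconstruction as given.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Reconstruction term plus a beta-weighted KL divergence between the
    diagonal-Gaussian posterior N(mu, exp(log_var)) and a unit Gaussian prior."""
    recon = F.mse_loss(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + beta * kl

# Illustrative call with placeholder tensors standing in for a real model.
x = torch.rand(8, 784)
mu, log_var = torch.zeros(8, 10), torch.zeros(8, 10)
print(beta_vae_loss(x, torch.rand(8, 784), mu, log_var).item())
```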
Richard Jacobs: So are there any other AI efforts out there that you think are close to a breakthrough? Do you believe AGI is a long way in the future, or do you think it's going to be possible soon? What's your general feeling, since you're in the industry?
Irina Higgins: Oh, that's a very hard question. I guess no one really knows. My personal feeling is that we're still quite far. I would say that we need another breakthrough, a kind of major paradigm shift, and we probably won't reach AGI with just the methods that currently exist.
Richard Jacobs: Yeah. What do you think is needed? So it's not just a matter of adding more layers to a neural network, or just more data. You're saying a breakthrough is needed, a paradigm shift, in order to get to AGI.
Irina Higgins: Exactly. So when you build a model there are three major components, and you've already mentioned two of them: the data and the architecture. But I think the main ingredient is the objective the model is trying to learn. What is the loss function it's trying to optimize? What's the goal for the model? For supervised or reinforcement learning algorithms this part is relatively easy, because you just want to match the teaching signal. But for unsupervised learning we don't know what that goal is. So figuring out the right objective for unsupervised learning, one that gives us the kind of representations that would move us closer to AGI, that I think is what's needed.
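A sketch of those three ingredients, assuming PyTorch: the data and the architecture are easy to swap in and out, and the open question lives entirely in the objective argument, which supervised learning fills trivially and unsupervised learning leaves as research. The function names here are illustrative.

```python
import torch
import torch.nn.functional as F

def train_step(model, objective, batch, optimizer):
    """One generic step: data (batch) + architecture (model) + objective."""
    optimizer.zero_grad()
    loss = objective(model, batch)  # the objective is the key design choice
    loss.backward()
    optimizer.step()
    return loss.item()

# Supervised objective: easy, just match the teaching signal.
def supervised_objective(model, batch):
    x, y = batch
    return F.cross_entropy(model(x), y)

# Unsupervised objective: the open question, left here as a placeholder.
def unsupervised_objective(model, batch):
    x, _ = batch
    raise NotImplementedError("what should the model optimize, given only x?")
```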
Richard Jacobs: So even with as much data as you could ever want, and as much computing power as you could ever want, what do you think the limits would be? If you could use the fastest supercomputer network and ridiculous amounts of data, how much better do you think current systems would be than they are now on the given problems? Would it be only a small increase or a big one? What do you think would be possible?
Irina Higgins: I think we could solve a lot of tasks that don't require reasoning or abstract thinking or analogy-making. We would be very good at repetitive tasks, the kinds of tasks a human can do really fast in a way that doesn't require much thinking. But things like solving a math problem, or figuring out how to run a democracy optimally, things that really require abstract thought and reasoning, I think those are what we would still struggle with.
Richard Jacobs: Yeah. It's weird to think that even with, let's say, a huge quantum computer and tons and tons of data, there are still a lot of things that just can't be figured out by machines.
Irina Higgins: Exactly.
Richard Jacobs: So are there any problems you've found that are surprisingly easy for people to solve, but that no matter how powerful the computer or the dataset, machines can't solve? Anything that jumps out at you as funny or weird, where a computer can't solve it but it's so easy for a person? Any examples?
Irina Higgins: Yeah, I guess there are loads. In fact, pretty much everything a human can do without thinking at all, a computer currently really struggles with. For example, you can show a human an image and ask, what color is the sky? A lot of algorithms might tell you whether the image contains a cat or a dog or a zebra, but they might actually struggle to tell you the color of the sky, just because they weren't trained on that. That's the kind of generalization outside of the training data that they can't cope with.
Richard Jacobs: Okay. Well, what do you think is ahead over the next few years in the world of AI? Are there any big events coming? Are there any theoretical frameworks that you think may be useful for the future of AI? Are you able to talk about any of that?
Irina Higgins: In terms of events, it's hard to think of anything right now. I guess, like every other field, we're going through this transition of figuring out how to communicate and work in a world where we can't travel and everything has to be virtual. So we'll see how that works out. In terms of upcoming research directions, for me it's very exciting to see this nascent convergence between neuroscience and machine learning, bringing in unsupervised learning ideas, and to see how many big AI labs and companies are putting out challenges for the field to do with learning better representations, solving multiple tasks and generalizing between them, or transferring knowledge between them better. So I'm excited to see where we'll be as a field in a couple of years in that direction.
Richard Jacobs: Very good. What’s the best way for people to keep tabs on Google DeepMind and other AI efforts? What do you recommend to people?
Irina Higgins: So most companies and labs, DeepMind as well, have blogs, which are really useful ways of keeping track of things, because we try to explain our most exciting papers in understandable language that relates to the real world rather than being purely academic. Twitter is also a very good place: if you find research scientists you like, you follow them. And I think DeepMind is going to release some very interesting online learning resources very soon, as a way to give back to the community while we're all stuck at home. So social media and blogs are a very good starting point.
Richard Jacobs: Very good, Irina. Thanks for coming, I appreciate it. It'll be interesting to see what comes of AI, and whether it can really be like human intelligence. So thanks for being here.
Irina Higgins: Thank you.