Will robots ever become so powerful that they see the mere existence of humans as a waste of energy resources and wipe us out of existence? This is called the gorilla problem in AI. We think of ourselves as intelligent, while we regard other primates, like gorillas, as having naive minds. If we develop sufficiently sophisticated AI systems, they might see us the way we now see gorillas, and destroy us. Basically, the plot of every science fiction movie about artificial intelligence runs along these lines. Is this possible? If it is, what should we do about it? Where should we even start thinking to make sure this doesn't happen?
Many AI researchers, including big names like Andrew Ng and Yann LeCun, think that this is not a problem worth solving right now. They are of the opinion that it has been blown out of proportion by the media and that we are nowhere close to building superintelligent systems. Russell is bothered by this. Russell is no ordinary scientist: he co-wrote Artificial Intelligence: A Modern Approach, the standard textbook in the field. He is probably in a better position than any of us to judge the progress of AI. He thinks that we need to focus on proactively making AI human-compatible; otherwise we are doomed.
Russell says that the standard model of AI, where the human specifies the objective function for the machine, is flawed. Not only that, he proposes an alternative: machines, according to Russell, should always try to understand what the human wants. Their objective function should account for our preferences and carry out the task at hand accordingly. He also critiques deep learning, the new kid on the block, as naive. Most of his criticisms ring true. Researchers tend to agree that although these deep learning systems are an engineering marvel, we don't know much about how to put them together to mimic intelligence. We seem to have the fundamental building blocks, but the jigsaw puzzle remains unsolved.
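The contrast between the two models can be sketched in a toy example. Everything here is hypothetical: the action names, reward numbers, belief probabilities, and the `ask_human` deferral rule are all made up for illustration, not taken from Russell's book. The point is only the structural difference: the standard model maximizes a single hard-coded objective, while the alternative keeps a distribution over candidate objectives and hedges when they disagree.

```python
# Toy contrast: standard model (fixed objective) vs. a machine that is
# uncertain about the human's true preferences. All values are made up.

ACTIONS = ["fetch_coffee", "clean_desk", "do_nothing"]

# Standard model: the designer hard-codes one objective function.
FIXED_REWARD = {"fetch_coffee": 1.0, "clean_desk": 0.5, "do_nothing": 0.0}

def standard_model_choice():
    """Maximize the single, fully specified objective."""
    return max(ACTIONS, key=lambda a: FIXED_REWARD[a])

# Alternative model: the machine maintains a belief over several
# candidate objectives that the human *might* actually hold.
CANDIDATE_OBJECTIVES = [
    {"fetch_coffee": 1.0, "clean_desk": 0.5, "do_nothing": 0.0},
    # Second hypothesis: the human actually hates being interrupted.
    {"fetch_coffee": -2.0, "clean_desk": 0.5, "do_nothing": 0.0},
]
BELIEF = [0.6, 0.4]  # probability assigned to each candidate objective

def uncertain_model_choice(ask_threshold=0.5):
    """Maximize expected reward under preference uncertainty; defer to
    the human when the candidates disagree too much about the choice."""
    def expected_reward(action):
        return sum(p * obj[action]
                   for p, obj in zip(BELIEF, CANDIDATE_OBJECTIVES))
    best = max(ACTIONS, key=expected_reward)
    # How much the candidate objectives disagree about the chosen action:
    spread = (max(obj[best] for obj in CANDIDATE_OBJECTIVES)
              - min(obj[best] for obj in CANDIDATE_OBJECTIVES))
    return "ask_human" if spread > ask_threshold else best

print(standard_model_choice())   # fetch_coffee
print(uncertain_model_choice())  # clean_desk
```

Under the fixed objective, the machine confidently fetches coffee; with uncertainty over what the human wants, the risky action's expected reward drops and the machine picks the safe one instead. That cautious, deference-prone behavior is the flavor of Russell's proposal.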
I feel that this book is mistimed. I don't think we are anywhere close to building general artificial intelligence systems. Russell compares the gorilla problem to nuclear chain reactions and gene editing: in both cases, scientists thought it was impossible, and in both cases they were proved wrong. I think this comparison is unfair. We knew a lot of nuclear physics before the chain reaction was deemed possible. With AI, I don't think we have even scratched the surface or asked the right questions to begin with.
Depending on how you read it, this book can mean different things to you. If you are on the "too early to discuss general AI" bandwagon, nothing will change for you. If you already think AI is going to destroy us, this book offers hope that we can mitigate the danger. It is a subtle but important contribution on the current state of affairs in artificial intelligence. As a working practitioner in machine learning, I immensely liked the book for its concise description of the open questions in AI.