The Future of Machine Learning (and the End of the World?)

October 31, 2012
(This article was originally published at Normal Deviate, and syndicated at StatsBlogs.)


On Thursday (Oct 25) we had an event called the ML Futuristic Panel Discussion. The panelists were Ziv Bar-Joseph, Steve Fienberg, Tom Mitchell, Aarti Singh and Alex Smola.

Ziv is an expert on machine learning and systems biology. Steve is a colleague of mine in Statistics with a joint appointment in ML. Tom is the head and founder of the ML department, Aarti is an Assistant Professor in ML, and Alex, who is well known as a pioneer in kernel methods, is joining us as a professor in ML in January. An august panel, to say the least.

The challenge was to predict what the next important breakthroughs in ML would be. It was also a discussion of where the panelists thought ML should be going in the future. Based on my notoriously unreliable memory, here is my summary of the key points.

1. What The Panelists Said

Aarti: ML is good at important but mundane tasks (classification etc) but not at higher level tasks like thinking of new hypotheses. We need ML techniques that play a bigger role in the whole process of making scientific discoveries. The more machines can do, the more high level tasks humans can concentrate their efforts on.

Ziv: There is a gap between the advances in systems biology and its use on practical problems, especially medicine. Each person is a repository of an unimaginable amount of data. An unsolved problem in ML is how to use all the knowledge we have developed in systems biology and use it for personalized medicine. In a sense, this is the problem of bridging information at the cell level and information at the level of an individual (consisting of trillions of interacting cells).

Steve: We should not forget the crucial role of intervention. Experiments involve manipulating variables. Passive ML methods are only part of the whole story. Statistics and ML methods help us learn, but then we have to decide what experiments to do, what interventions to make. Also, we have to decide what data to collect; not all data are useful. In other words, the future of ML has to still include human judgement.

Tom: He joked that his mother was not impressed with ML. After all, she saw Tom grow from an infant who knew nothing, to an adult who can do an amazing number of things. Tom says we need to learn how to “raise computers” in analogy to raising children. We need machines that can learn how to learn. An example is the NELL project (Never Ending Language Learning), which Tom leads. This is a system that has been running since January 2010 and is learning how to read information from the web. See also here. Amazing stuff.

Alex: More and more, computing is done on huge numbers of highly connected, inexpensive processors. This raises many questions about how to design algorithms. There are interesting challenges for systems designers, ML people and statisticians. For example: can you design an estimator that can easily be distributed with little loss of statistical efficiency and that is highly tolerant to failures of small numbers of processors?
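To make Alex's question concrete, here is a minimal sketch (my own illustration, not anything the panel presented) of the simplest such scheme: one-shot averaging, where each worker computes a local estimate and the survivors' estimates are averaged. For estimating a mean, losing a few workers costs little statistical efficiency. All names and the simulated-failure mechanism below are hypothetical.

```python
import random
import statistics

def distributed_mean(data, n_workers=10, failure_rate=0.1, seed=0):
    """One-shot averaging: shard the data across workers, have each
    compute a local mean, then average the means of the workers that
    did not fail. Works because each shard's mean is itself an
    unbiased estimate of the population mean."""
    rng = random.Random(seed)
    # Stride the data into roughly equal shards, one per worker.
    shards = [data[i::n_workers] for i in range(n_workers)]
    local_means = []
    for shard in shards:
        if rng.random() < failure_rate:  # simulate a failed processor
            continue
        local_means.append(statistics.fmean(shard))
    # Combine only the surviving workers' estimates.
    return statistics.fmean(local_means)

# Simulated data: 10,000 draws from N(5, 2^2).
rng = random.Random(1)
data = [rng.gauss(5.0, 2.0) for _ in range(10_000)]
estimate = distributed_mean(data)
```

Even with a tenth of the workers failing, the combined estimate stays close to the full-data mean; the harder open questions Alex had in mind concern estimators (unlike the mean) that do not decompose so neatly.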

2. The Future?

I found the panel discussion very inspiring. All of the panelists had interesting things to say. There was much discussion after the presentations. Martin Azizyan asked (and I am paraphrasing), “Have we really solved all the current ML problems?” The panel agreed that, no, we have not. We need to keep working on current problems (even if they seem mundane compared to the futuristic things discussed by the panel). But we can also work on the next generation of problems at the same time.

Discussing future trends is important. But we have to remember that we are probably wrong about our predictions. Niels Bohr said, “Prediction is very difficult, especially about the future.” And as Yogi Berra said, “The future ain’t what it used to be.”

When I was a kid, it was routinely predicted that, by the year 2000, people would fly to work with jetpacks, we’d have flying cars and we’d harvest our food from the sea. No one really predicted the world wide web, laptops, cellphones, gene microarrays etc.

3. The Return of AI

But I’ll take my chances and make a prediction anyway. I think Tom is right: computers that learn in ways closer to how humans learn are the future.

When I was in London in June, I had the pleasure of meeting Shane Legg, from DeepMind Technologies. This is a startup that is trying to build a system that thinks. This was the original dream of AI.

As Shane explained to me, there has been huge progress in both neuroscience and ML, and their goal is to bring these things together. I thought it sounded crazy until he told me the list of famous billionaires who have invested in the company.

Which raises an interesting question. Suppose someone — Tom Mitchell, the people at DeepMind, or someone else — creates a truly intelligent system. Now they have a system as smart as a human. Then all they have to do is run the system on a huge machine with more horsepower than a human brain. Suddenly, we are in the world of super-intelligent computers surpassing humans.

Perhaps they’ll be nice to us. Or, it could turn into Robopocalypse. If so, this could mean the end of the world as we know it.

By the way, Daniel Wilson, the author of Robopocalypse, was a student at CMU. I heard rumours that he kept a picture of me on his desk to intimidate himself to work hard. I don’t think of myself as intimidating so maybe this isn’t true. However, the book begins with a character named Professor Wasserman, a statistics professor, who unwittingly unleashes an intelligent program that leads to the Robopocalypse.

Steven Spielberg is making a movie based on the book, to be released April 25, 2014. So far, I have not had any calls from Spielberg.

So my prediction is this: someone other than me will be playing Professor Wasserman in the film adaptation of Robopocalypse.

What are your predictions for the future of ML and Statistics?



