Thursday, December 31, 2015

An Eventful 2015

This post continues my tradition of reviewing the year.  This time my commentary is a bit longer than in previous years, so without further ado, here is my take on 2015:
  • This year, we again have two important computing anniversaries: the bicentenaries of Ada Lovelace and of George Boole.  Ada Lovelace, daughter of Lord Byron, was the world's first programmer -- her algorithm computed the Bernoulli numbers on Babbage's Analytical Engine, a machine that existed only on paper at the time.  Incidentally, my Ph.D. advisor, Dana Angluin, wrote a nice piece on Lovelace.  George Boole, of course, established the foundations of 0-1 valued Boolean algebra, which is essential for computer science.
    left: George Boole (photo from Wikipedia), right: Ada Lovelace (photo from Eventbrite)
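For the curious, Lovelace's Note G organized the computation of Bernoulli numbers around a recurrence.  Here is a minimal modern sketch of the same computation, using the standard recurrence and exact rational arithmetic -- this is not her original program, and sign/indexing conventions for the Bernoulli numbers vary:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return [B_0, ..., B_n] using the recurrence
    sum_{j=0}^{m} C(m+1, j) * B_j = 0 (with B_0 = 1),
    solved for B_m at each step."""
    B = [Fraction(1)]  # B_0 = 1
    for m in range(1, n + 1):
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-s / (m + 1))  # Fraction arithmetic stays exact
    return B

# bernoulli(4) -> [1, -1/2, 1/6, 0, -1/30]
```

(This uses `math.comb`, available in Python 3.8+; under this convention B_1 = -1/2.)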
  • This year also marks the 25th anniversary of the World Wide Web.  Congress decided to celebrate by ... passing CISA.  Sigh.
  • The results that the latest neural nets are producing are moving quickly from the realm of impressive to the realm of scary.  On that note, I think it's time for us to start seriously thinking about the potential dangers of AI.  If you believe there is nothing special about our wetware, and that we'll keep making scientific progress, it's pretty straightforward to reach the conclusion that we will likely end up building computers that are in every way smarter and more creative than humans.  Once this happens, the cat will be out of the bag, so to speak, and the only real question is when it will happen.  This isn't necessarily a bad thing, but it is something we'll probably have only one chance to get right.
A "reimagined" version of "The Scream" by Google Brain scares me more than the original.
  • On a related note, a few high-tech luminaries founded OpenAI, a new AI research center with a billion-dollar (yes, you read that right!) endowment.  Just think of the impact the 60-million-dollar Simons centers have had, and you'll quickly see how big a deal this could be if the money is spent well.  While I think the people involved are fantastic (congrats to Ilya!), I'm surprised by the center's initial narrow focus on deep learning.  As Seb pointed out, we will need entirely new mathematical frameworks to come close to "solving AI," so such a narrow focus seems shortsighted.  Luckily, the folks at OpenAI have plenty of resources for any future course corrections.
    Sam Altman and Elon Musk, two of the founders of OpenAI, photo from Medium/Backchannel
  • Assuming the result checks out, the field of theoretical computer science has had a rare breakthrough.  Namely, Laci Babai gave a quasipolynomial-time algorithm for the graph isomorphism (GI) problem.  I was lucky enough to attend the lecture where the result was announced.  My grad student, Jeremy Kun, wrote up what is probably the best account of the lecture, and it still serves as a nice introduction to the paper, which is now up on arXiv.  One interesting thing about this result is that it neither makes real practical progress on GI (we already have fast heuristics), nor does it change our view of complexity (since the GI result is "in the direction" we expected).  It's just that GI is such a long-standing hard problem that progress on it is a very big deal.
Babai, presenting his proof.  Photo by Jeremy Kun.
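To make the contrast concrete: the obvious algorithm for GI tries all n! vertex relabelings, whereas Babai's runs in quasipolynomial time.  A minimal brute-force sketch of the naive baseline (illustrative only -- it bears no resemblance to Babai's group-theoretic machinery):

```python
from itertools import permutations

def are_isomorphic(n, edges1, edges2):
    """Decide isomorphism of two simple undirected graphs on
    vertices 0..n-1 by trying all n! relabelings -- the naive
    exponential baseline that Babai's algorithm improves on."""
    e1 = {frozenset(e) for e in edges1}
    e2 = {frozenset(e) for e in edges2}
    if len(e1) != len(e2):
        return False  # edge counts must match
    return any(
        {frozenset((p[u], p[v])) for u, v in e1} == e2
        for p in permutations(range(n))
    )
```

For example, the two labelings of a 3-vertex path are isomorphic, while a path and a triangle are not.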
  • In combinatorics, Terry Tao, building on a Polymath online collaboration, solved the Erdős discrepancy problem by proving the conjecture true.  I like that these large online collaborations have now led, or helped lead, to solutions of multiple important open problems.
    Paul Erdős and Terry Tao, photo from Wikipedia
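For those who haven't seen the statement: the conjecture, now Tao's theorem, says that for any infinite ±1 sequence x_1, x_2, ..., the partial sums |x_d + x_{2d} + ... + x_{nd}| are unbounded over choices of step d and length n.  A small sketch that computes this discrepancy for a finite prefix (illustrative only):

```python
def discrepancy(x):
    """Max over step sizes d and lengths n of
    |x_d + x_{2d} + ... + x_{nd}|, for a finite list x of
    +1/-1 entries (treated as 1-indexed)."""
    N = len(x)
    best = 0
    for d in range(1, N + 1):
        s = 0
        for i in range(d, N + 1, d):  # indices d, 2d, 3d, ...
            s += x[i - 1]
            best = max(best, abs(s))
    return best

# The alternating sequence looks balanced, but its subsequence at
# even positions is constantly -1, so the discrepancy grows linearly:
# discrepancy([1, -1] * 6) -> 6
```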
  • This isn't primarily about computer science or math, but if you haven't heard of CRISPR, go read about it.  I don't think it's an exaggeration to say that CRISPR, or its offshoots, will probably change our world rather fast.  While this technique was discovered a couple of years ago, it was named "breakthrough of the year" for 2015 by Science magazine.

Monday, December 21, 2015

Why Does the World Care about Our Math?

One day while working on bandit problems at Yahoo!, I had this strange realization that its search engine, nay the entire world, seems to have a particularly remarkable property: we can sit around doing math, and as a result, better advertisements will get served to users.  Of course, this applies not just to computational advertising, but to pretty much anything -- we can know where the planets will be, when certain epidemics will spread, how fast planes need to fly to stay airborne, and a plethora of other things, just by thinking abstractly and solving some equations.

I immediately and eagerly shared my newfound realization with others, and it impressed absolutely nobody.  I was told "How else would the world work?" and "There is lots of math that's not useful, but we choose to work on and formalize the things that are relevant to the real world."  These are, of course, perfectly good objections, and I couldn't explain why I found my realization at all remarkable, but I had a nagging feeling that I was onto something.

Fast-forward six years, and I'm at Market Fresh Books, a bookstore near UIC.  As an aside, this bookstore is really interesting -- it sells used books by the pound or for small flat fees.  I even once picked up a copy of baby Rudin for just 99¢ (plus tax) to add to my library.  Anyhow, I stumbled upon a copy of "Disturbing the Universe," Freeman Dyson's 1979 autobiography, and it looked interesting enough to buy.  That evening, while reading it, I came upon the following passage:
"Here was I ... doing the most elaborate and sophisticated calculations to figure out how an electron should behave.  And here was the electron ... knowing quite well how to behave without waiting for the result of my calculation.  How could one seriously believe that the electron really cared about my calculation one way or the other?  And yet the experiments ... showed it did care.  Somehow or other, all this complicated mathematics that I was scribbling established rules that the electron ... was bound to follow.  We know that this is so.  Why it is so, why the electron pays attention to our mathematics, is a mystery that even Einstein could not fathom."
I still don't know the answer, and I can't even state the question without it seeming silly, but at least I now know I'm in good company.

Freeman Dyson