- This year, we again have two important computing anniversaries: the bicentenaries of Ada Lovelace and of George Boole. Ada Lovelace, daughter of Lord Byron, was the world's first programmer -- her algorithm computed the Bernoulli numbers on Babbage's Analytical Engine, a machine that existed only on paper at the time. Incidentally, my Ph.D. advisor, Dana Angluin, wrote a nice piece on Lovelace. George Boole, of course, established the foundations of 0-1 valued Boolean algebra, which is essential to computer science.
left: George Boole (photo from Wikipedia), right: Ada Lovelace (photo from Eventbrite)
- This year also marks the 25th anniversary of the World Wide Web. Congress decided to celebrate by ... passing CISA. Sigh.
- The results that the latest neural nets are producing are moving quickly from the realm of the impressive to the realm of the scary. On that note, I think it's time for us to start seriously thinking about the potential dangers of AI. If you believe there is nothing special about our wetware, and that we'll keep making scientific progress, it's pretty straightforward to reach the conclusion that we will likely end up building computers that are in every way smarter and more creative than humans. Once this happens, the cat will be out of the bag, so to speak, and the only real question is when this will happen. This isn't necessarily a bad thing, but it is something we'll probably have only one chance to get right.
|A "reimagined" version of "The Scream" by Google Brain scares me more than the original.|
- On a related note, a few high-tech luminaries founded OpenAI, a new AI research center with a billion-dollar (yes, you read that right!) endowment. Just think what impact the 60-million-dollar Simons centers have had, and you'll quickly see how big a deal this could be if the money is spent well. While I think the people involved are fantastic (congrats to Ilya!), I'm surprised by the center's initial narrow focus on deep learning. As Seb pointed out, we will need entirely new mathematical frameworks to come close to "solving AI," so such a narrow focus seems shortsighted. Luckily, the folks at OpenAI have plenty of resources for any future course-corrections.
Sam Altman and Elon Musk, two of the founders of OpenAI, photo from medium/backchannel
- Assuming the result checks out, the field of theoretical computer science has had a rare breakthrough. Namely, Laci Babai gave a quasipolynomial-time algorithm for the graph isomorphism (GI) problem. I was lucky enough to attend the lecture where the result was announced. My grad student, Jeremy Kun, wrote up what is probably the best account of the lecture, and it still serves as a nice introduction to the paper, which is now up on arXiv. One interesting thing about this result is that it neither makes real practical progress on GI (we already have fast heuristics) nor changes our view of complexity (since the GI result is "in the direction" we expected). It's just that GI is such a long-standing hard problem that progress on it is a very big deal.
|Babai, presenting his proof. Photo by Jeremy Kun.|
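To make the problem concrete (this is emphatically not Babai's algorithm, just a sketch of the task it addresses), here is a naive GI checker that tries all n! vertex bijections; the graphs and function names are my own illustrative choices:

```python
from itertools import permutations

def are_isomorphic(adj1, adj2):
    """Brute-force graph isomorphism test on adjacency-set dicts.

    Tries every bijection between the vertex sets, so it takes
    O(n! * n^2) time in the worst case -- the kind of cost that makes
    a quasipolynomial bound for GI remarkable.
    """
    n = len(adj1)
    if n != len(adj2):
        return False
    vertices = range(n)
    for perm in permutations(vertices):
        # perm maps vertex i of graph 1 to vertex perm[i] of graph 2;
        # check that it preserves both edges and non-edges
        if all((j in adj1[i]) == (perm[j] in adj2[perm[i]])
               for i in vertices for j in vertices):
            return True
    return False

# Two different labelings of the 4-cycle, and a 4-vertex path
c4_a = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
c4_b = {0: {2, 3}, 1: {2, 3}, 2: {0, 1}, 3: {0, 1}}
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
```

Here `are_isomorphic(c4_a, c4_b)` succeeds (both are 4-cycles), while `are_isomorphic(c4_a, path)` fails, since a path has degree-1 vertices and a cycle does not.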
- In combinatorics, Terry Tao, building on a polymath online collaboration, solved the Erdős discrepancy problem by proving the conjecture true. I like that these large online collaborations have now led, or helped lead, to solutions of multiple important open problems.
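For reference, the conjecture Tao proved says that every ±1-valued sequence has unbounded discrepancy along homogeneous arithmetic progressions:

```latex
\[
\sup_{n, d \in \mathbb{N}} \left| \sum_{j=1}^{n} f(jd) \right| = \infty
\qquad \text{for every } f : \mathbb{N} \to \{-1, +1\}.
\]
```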
- This isn't primarily about computer science or math, but if you haven't heard of CRISPR, go read about it. I don't think it's an exaggeration to say that CRISPR, or its offshoots, will probably change our world rather fast. While the technique was discovered a couple of years ago, it was named "breakthrough of the year" for 2015 by Science magazine.
Paul Erdős and Terry Tao, photo from Wikipedia