Monday, August 22, 2011
In addition to machine learning, one of my main research interests is theoretical computer science. Following the success of MathOverflow and other Q&A sites, cstheory.stackexchange.com ("cstheory" for short) was born about a year ago, and I've been an active participant since its inception. In the last year, we've worked hard to make cstheory a valuable resource and to promote it. I'd say the efforts paid off -- the site is doing quite well, with over 5000 members.
Moreover, cstheory recently added a blog (edited by Aaron Sterling and Joe Fitzsimons), with posts on theoretical computer science and on the site itself. In one of the first posts, Suresh Venkatasubramanian, a site moderator and one of the people most responsible for its success, wrote a nice summary of cstheory's first year.
I've also volunteered to occasionally blog about the learning theory side of things. My first entry, on learning regular languages, was just posted. It touches on many interesting issues learning theorists regularly encounter. Enjoy!
Saturday, August 13, 2011
Reyzin-Srivastava Trees
When I was a grad student at Yale, Ravi Kannan taught a great sequence of graduate theory courses on approximation algorithms, randomized algorithms, and streaming algorithms, and I was lucky enough to get to take all three. The first one, approximation algorithms, Ravi taught during my first semester in grad school. We were allowed to collaborate on assignments, and I discussed many of them with my friend Nikhil Srivastava.
I remember that a problem on one of the first assignments was particularly tricky. The question was about s-t cuts in a graph, and the hard part was that solving it seemed to require efficiently keeping track of the minimum s-t cuts between all pairs of vertices. We started discussing the problem in the afternoon, and though it took us a while, by about 5am we had designed a data structure to do just that. We realized you can maintain all the cuts in just one tree. It was perfect. We jokingly called the data structure the Reyzin-Srivastava tree.
I spent the remainder of the night writing up the assignment, describing this thing in detail, proud of our cleverness. About a week later, we got the graded assignments back. Mine had one note on it, next to that particular question, of course. It said, "Next time, just reference Gomory-Hu trees when you need to use them."
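For readers who haven't run into them: a Gomory-Hu tree packs the minimum s-t cut values between all pairs of vertices of an undirected graph into a single weighted tree on the same vertex set, and the min s-t cut value is simply the smallest edge weight on the tree path from s to t. Remarkably, the tree can be built with only n-1 max-flow computations. Here is a minimal sketch of the idea using networkx; the toy graph and the helper function are my own illustration, not anything from the assignment.

import networkx as nx

# A small weighted undirected graph; 'capacity' is the edge attribute
# that networkx's flow routines read by default.
G = nx.Graph()
G.add_edge("a", "b", capacity=3)
G.add_edge("b", "c", capacity=4)
G.add_edge("a", "c", capacity=2)
G.add_edge("c", "d", capacity=5)

# One tree encodes the minimum s-t cut values for ALL pairs of vertices.
T = nx.gomory_hu_tree(G)

def min_st_cut_value(tree, s, t):
    # Min s-t cut in G = smallest edge weight on the unique s-t path in the tree.
    path = nx.shortest_path(tree, s, t)
    return min(tree[u][v]["weight"] for u, v in zip(path, path[1:]))

# Sanity check against a direct max-flow/min-cut computation.
print(min_st_cut_value(T, "a", "d") == nx.minimum_cut_value(G, "a", "d"))  # True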
I learned a couple of lessons from this. The first is that I should remember to do the required reading before starting the assignment (we used Vijay Vazirani's Approximation Algorithms textbook, which really is worth reading). Had I done the reading, I would have learned about Gomory-Hu trees and realized that the question asked us to trivially apply them to some problem, instead of reinventing them. The second, and perhaps more important, lesson was that I had a shot at doing research, at inventing something new. Yes, these trees had been invented decades before, and yes, we'd been exposed to more data structures than Gomory and Hu had been in 1961. But somehow knowing that we had figured out something nontrivial on our own really helped our confidence as first-year grad students, and as I've written before, confidence is quite important for research.
Nikhil and I went on to do some real research together, producing two joint papers the following year. Actually, both of those papers also began as classwork, for Dana Angluin's machine learning class. But that's a story better left for another post.
Labels: reflections