Saturday, August 13, 2011

Reyzin-Srivastava Trees

When I was a grad student at Yale, Ravi Kannan taught a great sequence of graduate theory courses on approximation algorithms, randomized algorithms, and streaming algorithms, and I was lucky enough to get to take all three.  The first one, approximation algorithms, Ravi taught during my first semester in grad school.  We were allowed to collaborate on assignments, and I discussed many of them with my friend Nikhil Srivastava.

I remember that a problem on one of the first assignments was particularly tricky.  The question was about s-t cuts in a graph, and the hard part was that solving it seemed to require efficiently keeping track of minimum s-t cuts between all pairs of vertices.  We started discussing the problem in the afternoon, and though it took us a while, by about 5am we had designed a data structure to do just that.  We realized you could maintain all the cuts in just one tree.  It was perfect.  We jokingly called the data structure the Reyzin-Srivastava tree.

I spent the remainder of the night writing up the assignment, describing this thing in detail, proud of our cleverness.  About a week later, we got the graded assignments back.  Mine had one note on it, next to that particular question, of course.  It said "Next time, just reference Gomory-Hu trees when you need to use them."

I learned a couple of lessons from this.  The first was that I should do the required reading before starting the assignment (we used Vijay Vazirani's approximation algorithms textbook, which really is worth reading).  Had I done the reading, I would have learned about Gomory-Hu trees and realized the question asked us to trivially apply them to some problem, instead of reinventing them.   The second, and perhaps more important, lesson was that I had a shot at doing research, at inventing something new.  Yes, these trees had been invented decades before, and yes, we'd been exposed to more data structures than Gomory and Hu had been in 1961.  But knowing that we had figured out something nontrivial on our own really helped our confidence as first-year grad students, and as I've written before, confidence is quite important for research.
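
For the curious, the structure is simple enough to sketch in code.  Below is a minimal, illustrative implementation of Gusfield's variant of the Gomory-Hu construction, written against networkx; the function names and the toy graph are my own choices, not anything from the assignment.  The idea: n-1 max-flow calls build a tree in which the minimum s-t cut value for any pair of vertices is just the lightest edge on the s-t path in the tree.

```python
import networkx as nx

def gomory_hu(G, capacity="weight"):
    """Build a flow-equivalent tree of an undirected graph G using
    Gusfield's variant of the Gomory-Hu construction (n-1 max-flows).

    Each edge of G should carry a `capacity` attribute.  The returned
    tree has the property that the minimum s-t cut value in G equals
    the lightest edge on the s-t path in the tree.
    """
    nodes = list(G.nodes())
    parent = {v: nodes[0] for v in nodes[1:]}  # start as a star on nodes[0]
    cut = {}
    for i, s in enumerate(nodes[1:], start=1):
        t = parent[s]
        cut[s], (S, _) = nx.minimum_cut(G, s, t, capacity=capacity)
        # Unprocessed nodes that shared s's parent and fell on s's side
        # of the cut get re-hung under s.
        for v in nodes[i + 1:]:
            if parent[v] == t and v in S:
                parent[v] = s
    T = nx.Graph()
    T.add_nodes_from(nodes)
    for v, p in parent.items():
        T.add_edge(v, p, weight=cut[v])
    return T

def min_cut_value(T, u, v):
    """Query any pair: the min u-v cut is the lightest tree edge between them."""
    path = nx.shortest_path(T, u, v)
    return min(T[a][b]["weight"] for a, b in zip(path, path[1:]))

if __name__ == "__main__":
    G = nx.Graph()
    G.add_weighted_edges_from([(0, 1, 3), (1, 2, 2), (0, 2, 5), (2, 3, 4)])
    T = gomory_hu(G)
    print(min_cut_value(T, 0, 3))  # 4: the cut {3} vs. the rest
```

Recent versions of networkx also ship a built-in gomory_hu_tree, which is what you'd reach for in practice; the point of the sketch is just how little machinery the structure needs.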

Nikhil and I went on to do some real research together, producing two joint papers the following year.  Actually, both of those papers also began as classwork, for Dana Angluin's machine learning class.  But that's a story better left for another post.

5 comments:

  1. Anonymous 12:23 PM

    We later invented Sstars and NED. This decreased our confidence back to what it was before (:

  2. Hi Nikhil. Luckily neither concept appeared in print (in that form)...

  3. Anonymous 9:22 PM

    Isn't the actual moral of the story that it's good you didn't read the textbook? I mean, had you read it, you would've probably forgotten all about Gomory-Hu trees by now.

  4. Nice post. I've been in your shoes. It's much more important to reinvent the wheel than to just memorize things. Perhaps you'll uncover a few new things along the way that no one noticed.

    How is post-doc research treating you?

    -Alex Yampolskiy
    http://yampolskiy.blogspot.com

  5. @Anonymous,
    Yes, I was thinking that it's not so bad we figured these things out for ourselves. On the other hand, you can't reinvent every wheel. We make progress by learning from others' past work.

    @Alex, I agree - in the case where you can reinvent something without taking forever, great. But the choice isn't between that and memorizing. When we read books, we don't memorize what's in them, but rather try to understand it. Big difference.

    Research always treats me well; I hope I'm returning the favor :)
