NIPS is broader than ICML, the other general machine learning conference, and it is certainly much broader than COLT and ALT, the learning theory conferences. I thought this would be a bad thing, as I expected that I wouldn't understand many of the talks and posters, but it turned out to be one of my favorite (if not my favorite) conference experiences.
First, because most people at NIPS can't be familiar with everything, there was no expectation that I would immediately know the model or the problem being tackled when I talked to someone. Strangely, this made things more, not less, accessible. Also, because many of the topics lay outside my research interests, I felt no internal pressure to attend every talk, which let me relax and spend more time talking to other researchers. And because NIPS draws a huge attendance, I got to reconnect with many people I hadn't seen in a while, and I had a chance to meet people whom I knew only through their research or through correspondence.
On top of all that, within my areas of research, there were quite a few really good contributions, and I wanted to point out some of the papers that I found especially interesting. This is, of course, a list biased not only toward my interests, but also toward the papers whose presentations I happened to catch.
- A Theory of Multiclass Boosting by I. Mukherjee, R.E. Schapire. This paper characterizes necessary and sufficient conditions for multiclass boosting and gives a new multiclass algorithm not based on reductions to the binary case. A nice and elegant paper, it won one of the best student paper awards.
- Online Learning: Random Averages, Combinatorial Parameters, and Learnability by A. Rakhlin, K. Sridharan, A. Tewari. Building on the work of [BPS '09], this paper extends various notions of complexity (Rademacher complexity, covering numbers, fat-shattering dimension) to the online setting and proves general online learnability guarantees based on these notions, similar in flavor to the general guarantees already known for the offline setting.
- Trading off Mistakes and Don’t-Know Predictions by A. Sayedi, M. Zadimoghaddam, A. Blum. This paper analyzes a model where the learner is allowed a limited number of prediction mistakes but is also allowed to answer "I don't know," generalizing the KWIK model [LLW '08]. They prove some nice trade-offs, and there seem to be many potentially interesting extensions. (A toy illustration of the underlying protocol appears after this list.)
- Learning from Logged Implicit Exploration Data by A. Strehl, J. Langford, L. Li, S. Kakade. This paper shows how to evaluate a new contextual bandit algorithm offline, using previously recorded exploration data. It's pretty intuitive that this should be possible, and the paper works through the math carefully. This is a problem confronted by many content-serving companies, and I imagine the analysis will be quite useful. (A sketch of the flavor of the estimator appears below.)
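For readers unfamiliar with the KWIK ("knows what it knows") setting that the Sayedi et al. paper generalizes, here is a toy sketch of the protocol. The memorizing learner and the example stream are my own illustrative inventions, not anything from the paper; they just make concrete what "predict or say I don't know" means.

```python
# Toy sketch of the KWIK protocol from [LLW '08]: on each input, the learner
# must either predict correctly or answer "I don't know" and then observe the
# true label. The learner below is a hypothetical illustration, not the
# paper's algorithm: it learns any deterministic function over a finite
# domain by memorization, answering IDK at most once per distinct input.

IDK = "I don't know"

class MemorizingKWIKLearner:
    def __init__(self):
        self.memory = {}

    def predict(self, x):
        return self.memory.get(x, IDK)

    def observe(self, x, y):
        self.memory[x] = y

def run_protocol(learner, examples):
    idk_count, mistakes = 0, 0
    for x, y in examples:
        guess = learner.predict(x)
        if guess == IDK:
            idk_count += 1
            learner.observe(x, y)   # label revealed only after an IDK
        elif guess != y:
            mistakes += 1           # forbidden in pure KWIK; budgeted in the paper
    return idk_count, mistakes

# Three distinct inputs => at most 3 IDKs and no mistakes.
stream = [("a", 0), ("b", 1), ("a", 0), ("c", 1), ("b", 1)]
print(run_protocol(MemorizingKWIKLearner(), stream))  # (3, 0)
```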
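And here is a minimal sketch of the kind of importance-weighted estimator behind offline evaluation on logged data, in the spirit of the Strehl et al. paper. The function names, the propensity floor tau, and the toy logs are my assumptions, not the paper's code; among other things, the paper also handles estimating the logging probabilities from the data itself.

```python
# A minimal sketch of offline policy evaluation on logged bandit data via
# importance weighting. Each logged record is (context, action, reward, prob),
# where prob is the (estimated) probability that the logging policy chose
# `action` on `context`.

def estimate_value(logged, policy, tau=0.05):
    """Importance-weighted estimate of the average reward the new `policy`
    would have earned on the logged contexts."""
    total = 0.0
    for context, action, reward, prob in logged:
        if policy(context) == action:
            # Count the reward only when the new policy agrees with the
            # logged action, reweighted by the inverse logging probability
            # (floored at tau to control variance).
            total += reward / max(prob, tau)
    return total / len(logged)

# Hypothetical usage: evaluate a policy that picks action 1 on positive contexts.
logs = [(+1.0, 1, 1.0, 0.5), (-1.0, 0, 0.0, 0.5), (+1.0, 0, 0.0, 0.25)]
print(estimate_value(logs, lambda x: 1 if x > 0 else 0))  # ~0.667
```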
Finally, I also attended the NIPS workshops, which were rumored to be quite good, and they certainly lived up to the rumors. I especially enjoyed the workshop on Computational Social Science and the Wisdom of Crowds -- I decided to attend a workshop in an area about which I know very little, and all the talks were really good. In fact, from Yiling Chen's and Jake Abernethy's talks, I even learned about connections between prediction markets and online learning, so this workshop ended up being more closely related to my research than I expected.
Clearly, I've been missing out by not having attended the previous NIPS meetings. I'm planning to make up for it in future years.