I’ve just returned from the 28th International Conference on Algorithmic Learning Theory (ALT 2017), which was held in Kyoto, Japan. The last time ALT was held in Japan was in 2007, exactly 10 years ago. Back then, I was a second-year grad student at my first ALT, and at one of my first conferences altogether. Now, ten years later, I got to watch my Ph.D. student Mano Vikash present his first conference paper, and I served as program co-chair, together with Steve Hanneke. What a difference 10 years makes!
[A photo I took at Yasaka Shrine in Kyoto]
Being PC chair was a lot of work, but it was quite gratifying. Steve and I worked well together, and I’m thankful that so many amazing people agreed to serve on the PC and that our invited speakers readily agreed to come all the way to Japan. We also received many strong submissions: more than in any year since my first ALT a decade ago. I am especially grateful for the quality of the reviews; the PC had to handle more papers than usual, yet the reviews were careful and detailed, catching multiple bugs and providing valuable feedback to authors. The resulting proceedings are online.
We also had two great invited speakers for ALT. Sasha Rakhlin presented, among other things, some interesting results at the intersection of Rademacher complexity and online learning that I was not previously aware of. Adam Kalai gave a great, accessible talk on "fairness" and showed a nice way to express various notions of fairness as loss functions. I must admit I’d been skeptical of this area for a while, but chatting with Adam afterwards made me less so.
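To give a flavor of the loss-function view (a toy formulation of my own, not necessarily the one from Adam’s talk): one common way to encode a fairness notion is to add a penalty for the gap between groups’ average predictions, a soft relaxation of demographic parity. The function name and data below are hypothetical.

```python
import numpy as np

def fairness_regularized_loss(y_true, y_score, group, lam=1.0):
    """Average logistic loss plus a penalty on the gap between the two
    groups' mean scores: a soft relaxation of demographic parity."""
    logistic = np.log1p(np.exp(-y_true * y_score)).mean()  # labels in {-1, +1}
    gap = abs(y_score[group == 0].mean() - y_score[group == 1].mean())
    return logistic + lam * gap

# Toy usage with made-up data:
rng = np.random.default_rng(0)
y_true = rng.choice([-1, 1], size=100)
y_score = rng.normal(size=100)
group = rng.integers(0, 2, size=100)
print(fairness_regularized_loss(y_true, y_score, group))
```

Tuning the weight lam then trades off accuracy against the fairness penalty within a single objective.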
When blogging about conferences in the past, I discussed some of my favorite papers. I don’t feel it would be especially appropriate for me to do so in this case, so I’ll just mention the paper for which the E. M. Gold student paper award was given to Tyler Dohrn, an undergraduate student at Yale. In the paper, Dana Angluin and Tyler Dohrn showed that when equivalence queries are answered randomly, the expected query complexity of exact learning is drastically lower than when they are answered adversarially. Along the way, they introduced a nice notion called the "elimination graph" of a concept space, which I expect to find further applications. This is also a problem that I and others have informally thought about, so I’m glad to see progress here.
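To get a feel for the random-versus-adversarial gap, here is a toy sketch of my own (not the paper’s model or analysis): properly learning singleton concepts over a domain of size n. An adversarial teacher can always return the unhelpful negative counterexample, forcing about n equivalence queries, while a uniformly random counterexample reveals the target with probability 1/2 per query. The names learn_singleton and pick are hypothetical.

```python
import random

def learn_singleton(n, target, pick):
    """Properly learn the singleton concept {target} over {0, ..., n-1}
    with equivalence queries; `pick` chooses which counterexample the
    teacher returns from the set of disagreement points."""
    candidates = set(range(n))    # version space: possible singleton elements
    queries = 0
    while True:
        h = min(candidates)       # propose some consistent singleton {h}
        queries += 1
        if h == target:           # no counterexample exists; we are done
            return queries
        # {h} and {target} disagree exactly on h (a false positive)
        # and on target (a false negative)
        x = pick([h, target])
        if x == target:           # positive counterexample pins down the target
            candidates = {target}
        else:                     # negative counterexample removes only {h}
            candidates.discard(h)

adversarial = lambda diffs: diffs[0]          # always the uninformative point h
uniform = lambda diffs: random.choice(diffs)  # a uniformly random counterexample

n = 1000
print(learn_singleton(n, n - 1, adversarial))  # n queries: one elimination per round
print(learn_singleton(n, n - 1, uniform))      # about 3 queries in expectation
```

In this toy case the gap is from linear to constant expected queries; the paper treats the phenomenon in much greater generality.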
Finally, I’ll note that ALT has been going through some changes lately. This year, in addition to more minor tweaks, we switched publication venues from Springer to PMLR (the new name for JMLR’s conference proceedings) in favor of open access, and we got rid of page limits. More big changes are coming next year: ALT 2018 will co-locate with AISTATS instead of DS, and PC co-chairs Mehryar Mohri and Karthik Sridharan have put out an ambitious call for papers with the goal of making ALT the "best conference in algorithmic and theoretical machine learning." (The co-location with AISTATS also means the conference is moving from fall to spring, and papers are due in a week!)
Computational learning theory has two main conferences, COLT and ALT, with COLT the larger of the two. ALT has always had strong authors and PC members, but it hasn’t grown in prestige and visibility the way COLT has; my former postdoc host John Langford once wrote, "ALT = 0.5 COLT." Yet I’ve always appreciated ALT’s breadth and its resilience to the trends that reshape the themes of some other conferences almost yearly. ALT grew this year, and I’m optimistic about its future. My hope is that ALT can meet its new ambitions while retaining its friendly and open culture.