Thursday, July 31, 2014

On AAAI 2014

I've just returned from AAAI 2014, which was held in Quebec City, Canada. It was my first time submitting to, attending, or presenting at AAAI, and I was very pleasantly surprised on many levels -- the strength of the program, the breadth of research topics covered, and the conference organization.  I also really enjoyed spending some time in Quebec City, which was a wonderful location.

a photo of Quebec City that I took from the conference hotel

Organization

AAAI is quite a large conference with around a thousand attendees and a couple hundred presenters, so it involves multiple parallel tracks, posters, etc.  Carla Brodley and Peter Stone, the co-chairs, did a wonderful job organizing everything to work smoothly.  Moreover, they introduced two innovations that I particularly liked: 
  1. Authors whose papers were selected for talks were able to choose to give a 2-minute plenary talk instead of a regular session talk.  This worked really well for papers that carried a strong conceptual message, which might appeal to a wider audience. (I chose this option: you can see my slides here.)  I was never bored at the plenary sessions -- if I found one of these talks or topics uninteresting, it was over in two minutes.  On the other hand, if I liked a talk, I could find the presenter later at the poster session (see next innovation).  The slides for these talks were all pre-loaded on one machine, and Carla and Peter somehow got everyone to stick to just two minutes, so, much to my surprise, these sessions went very smoothly.
  2. Authors who didn't give talks and authors who chose the 2-minute plenary talk option had scheduled 45-minute slots to "present" their posters during one of the multiple poster sessions.  At any given time during a poster session, only about 30 posters were officially being presented (you could also show people your poster and work at any time). This was great because I didn't feel like I had to stand by my poster for hours.  Moreover, during my 45-minute presentation, I had a much larger audience.  I really liked both innovations and hope they are incorporated into future AAAIs (and ICMLs, NIPSs, etc.). 
I also want to discuss some of the content that I enjoyed.  I should note that I left the conference a day early, attended a somewhat arbitrary selection of sessions, and pseudorandomly meandered through the poster sessions, so what follows is not nearly complete.  I know for sure that I am leaving out many good talks and papers.

Plenary Talks

I only saw two of the invited talks, but both were really great.

image from mashable

In the first talk, Adam Cheyer, one of the main inventors of the iPhone "digital assistant" Siri, discussed the history of the project, as well as some of the technology behind it.  One thing I learned from the talk was how much Siri is able to do that I didn't know about.  For instance, if you're reading an email and want to call the email's sender, all you need to do is say something like "Siri, call him," provided the contact info is in your phone.  If you think about it, this is far easier than having to leave email, go to phone contacts, search, etc.  In fact, Adam pointed out that while your hands are the best interface for getting at items on a screen, your voice is better for accessing content you can't see.  If one uses Siri with this in mind, it can be much more effective.  Adam also pointed out how immensely challenging building the system was.  For instance, a user who wants two tickets for the movie "Nine" at 8pm can say "two for nine at eight," and Siri has to figure out what to do after hearing what seems like four different numbers.  Just thinking about the difficulty of this problem should impress upon us how far AI has come!

As for Siri's history, the entire recounting was interesting, but one facet stood out the most: immediately after the iPhone was announced in 2007, Adam and two of his colleagues met to discuss how to take advantage of the new technology.  Just three months later, they were pitching VCs to fund their new idea.  By 2010 Siri was an app, and by 2011 it had been acquired and integrated into iOS, making billions of dollars for Apple and impacting millions of users.  Adam clearly had a lot of experience working on these types of systems and launching new and exciting products (CALO, IRIS, etc.), but it amazed me how much one can accomplish in a short time if one is talented, experienced, and motivated enough.

an example UI from the coloring game described below, taken from this paper

The second invited talk was by Michael Kearns, who is known for many things, including fundamental contributions to learning theory.  But in this talk, he discussed some of the user studies in network science research that he and his group at UPenn have been doing.  In these experiments, subjects come to a room, sit at computers, and perform various tasks such as coloring a graph in a distributed fashion, coming to consensus, etc.  In their typical setup, players play on an underlying graph (whose topology they don't know), with each player controlling one vertex and only seeing what their neighbors are doing.

In a coloring game, the players have to choose one of a few colors and will only get paid if their final color is different from the colors chosen by all their neighbors.  They can asynchronously switch their own color as much as they want, and the goal is to see how long it takes various graphs to get colored.  In a consensus game, the goal is for the entire graph to come to consensus, so the players need to choose colors that are the same as their neighbors' choices.  From the computational complexity standpoint, coloring is NP-hard (in the worst case) and consensus is trivial, but Kearns's group observed that the hardness of performing these tasks in the real world depends on the interplay of the graph structure and the task.  For instance, on many graphs, consensus appears much harder than coloring.
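To make the setup concrete, here is a minimal sketch of the kind of local coloring dynamics described above. This is my own toy simulation, not Kearns's experimental platform (where the players are human subjects): each player sees only its neighbors' current colors, and a conflicted player switches to a color none of its neighbors is currently using, if one exists. The greedy update rule and the function name are illustrative assumptions.

```python
import random

def local_coloring(adjacency, num_colors, max_steps=10_000, seed=0):
    """Toy asynchronous coloring dynamics on a graph given as a dict
    mapping each node to a list of its neighbors."""
    rng = random.Random(seed)
    nodes = list(adjacency)
    color = {v: rng.randrange(num_colors) for v in nodes}  # arbitrary start

    for step in range(max_steps):
        # Global check done only by the "experimenter"; the players
        # themselves never see this state.
        conflicted = [v for v in nodes
                      if any(color[v] == color[u] for u in adjacency[v])]
        if not conflicted:
            return step, color  # properly colored

        # One conflicted player moves: pick a color unused by its neighbors
        # if possible, otherwise switch at random.
        v = rng.choice(conflicted)
        taken = {color[u] for u in adjacency[v]}
        free = [c for c in range(num_colors) if c not in taken]
        color[v] = rng.choice(free) if free else rng.randrange(num_colors)

    return max_steps, color  # did not finish within the step budget

# Example: a 6-cycle, which needs only 2 colors.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
steps, final_coloring = local_coloring(cycle6, num_colors=2)
print(steps, final_coloring)
```

Flipping the payoff condition so that a player instead tries to match its neighbors' colors gives the consensus variant of the game.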

The result I found the most interesting pertained to an experiment they ran on coming to consensus in a game where different nodes have different preferences.  In this setting, the players only get paid if, at the end of a set amount of time, all of them have agreed on a single color.  However, different players are paid different amounts depending on which color they agree on.  (This was designed as a simplified model of the Obama vs. Hillary primary, where Democrats had strong reasons to come to agreement but had different preferences about whom to agree on.)  Kearns's group showed that in such a game, a small minority of well-connected players with one set of preferences overrode, in all of their trials, a much larger set of less well-connected players with different preferences, even though the minority in question had no information about the structure of the graph (or even any knowledge that they were more connected than others).

This seems like an interesting line of research that could help explain various real-world phenomena.

Papers

Finally, I also enjoyed quite a few of the papers at the conference.  One thing that surprised me was the large number of machine learning and game theory papers.  The learning and game theory papers were on average more application-driven than what one usually finds at ICML, NIPS, or EC, but it seems that these fields are becoming increasingly incorporated into AAAI.

So, here's a very incomplete list of some papers I found interesting:

  • Active Learning with Model Selection by Alnur Ali, Rich Caruana, and Ashish Kapoor -- this paper considers active learning for simultaneously choosing the best model and its parameters.  This is done by training the model on a training set and evaluating its performance on a validation set.  Both sets require labeled data.  So the active learning algorithm, in sampling a label for a new point, must also decide whether to add the point to the training set or to the unbiased validation set.  The resulting behavior of their algorithm is that it roughly first "explores" by placing points into the validation set and then "exploits" by placing them into the training set (a toy sketch of this allocation decision appears after this list).  I found this idea and interpretation pretty neat.
  • Fixing a Balanced Knockout Tournament by Haris Aziz, Serge Gaspers, Simon Mackenzie, Nicholas Mattei, Paul Stursberg, and Toby Walsh -- this paper considers a natural, previously open question.  Given that you know the probabilities with which each team will beat each other team, can you design a knockout tournament (balanced binary tree) that maximizes the probability of a given team winning?  This turns out to be NP-hard, but they also give some positive results in certain cases.
  • Preference Elicitation and Interview Minimization in Stable Matchings by Joanna Drummond and Craig Boutilier -- consider medical residents being matched to schools.  For residents and schools to choose their rankings, they need to interview with all the schools.  This is really costly.  This paper assumes that when one of two options is much better than the other, an interview probably doesn't need to be scheduled just to tell them apart.  Then, given their modeling assumptions, they try to find stable matchings without having to do too many expensive "interviews."  What I like about this paper is that it considers the sort of practical concerns that give a new twist to what seems like an already solved problem.
  • Simultaneous Cake Cutting by Eric Balkanski, Simina Brânzei, David Kurokawa, and Ariel D. Procaccia -- this paper looks at "cake cutting" in the simultaneous model, where user preferences are elicited simultaneously.  Cake cutting problems involve dividing a resource, say a yummy continuous cake, such that everyone is happy with the division (or at least considers the division fair under various notions of fairness).  They show that in the simultaneous model, one can satisfy "proportionality" but not "envy-freeness", and they also give some approximation results.  I don't know exactly why I like this model, but I have a soft spot for cake cutting.
  • Bagging by Design (on the Suboptimality of Bagging) by Periklis Papakonstantinou, Jia Xu, and Zhu Cao -- this paper gives a subsampling method that outperforms bagging, roughly in the cases where bagging itself works, e.g. by being more stable to noise than a single predictor.  Their method comes from the observation that bagging subsamples a fixed dataset, which makes it behave differently than if it had access to an unlimited supply of independent samples.
  • Recovering from Selection Bias in Causal and Statistical Inference by Elias Bareinboim, Jin Tian, and Judea Pearl -- this paper won the best paper award.  It gives complete conditions for when one can recover from selection-biased data.  They also consider settings where some unbiased data is available, along with other extensions.  I plan on reading it more carefully, as the results seem important.
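Returning to the first paper in the list, here is a toy sketch of the train-versus-validation allocation decision it studies. This is my own illustration, not the algorithm of Ali, Caruana, and Kapoor: the uniform query strategy and the crude "fill the validation set first" rule are placeholder assumptions standing in for the paper's actual active learning criteria.

```python
import random
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = random.Random(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
pool = list(range(len(X)))      # unlabeled pool; labels revealed on query
train_idx, val_idx = [], []

candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(max_depth=3, random_state=0),
}

for t in range(60):             # total label budget
    i = rng.choice(pool)        # placeholder query strategy: uniform sampling
    pool.remove(i)
    # Placeholder allocation rule: first "explore" by building an unbiased
    # validation set, then "exploit" by growing the training set.
    if len(val_idx) < 20:
        val_idx.append(i)
    else:
        train_idx.append(i)

# Fit every candidate model on the training portion and select the one
# with the best accuracy on the held-out validation portion.
scores = {}
for name, model in candidates.items():
    model.fit(X[train_idx], y[train_idx])
    scores[name] = accuracy_score(y[val_idx], model.predict(X[val_idx]))
print(scores, "selected:", max(scores, key=scores.get))
```

The point of the sketch is only that every queried label must be spent on one of two competing needs, fitting the candidate models or comparing them, which is what makes the explore/exploit interpretation natural.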
In sum, I enjoyed AAAI and look forward to attending it again in future years.
