Wednesday, December 28, 2022

Reflections on 2022

As we reach the end of another year, it's time for me to continue the tradition of posting some reflections. Russia’s unjust invasion of Ukraine loomed large over this year, especially for those of us with many connections to both countries.  There’s almost too much to say, so I’ll keep my comments brief: I am heartened by most of the world’s support of Ukraine and by academia’s attempts to mitigate the war's effects on its small corner of the world.

Now for more provincial concerns:

an image of IDEAL's website
  • This year I graduated my first Computer Science Ph.D. student, Neshat Mohammadi (co-advised by Tasos Sidiropoulos); all my previous students were in Mathematics. Her unconventional career path now takes her to Stanford Medical School for a postdoc; I will follow her progress with interest.  With Will Perkins, I also co-hosted a postdoctoral fellow, Aditya Potukuchi, who went on to take a tenure-track position in EECS at York University.  Congratulations go to them both!
  • ChatGPT was recently released, causing quite a stir in my community. It’s scarily impressive. I used to believe that neural networks would only take us so far before we would have to invent new methods to progress toward AGI, and while that may still be true, they’ve already taken us further than I expected and show no signs of stopping anytime soon.  Of course ChatGPT has many failure modes, and no, it doesn’t have the actual “understanding” that humans do, but I am amazed that some skeptics can only find things to criticize in this new technology.  It reminds me of this.
a ChatGPT-created riddle that I enjoyed, with a correct solution
  • My alma mater continues to disappoint me. This year’s most worrying development was Princeton’s firing of renowned classicist Joshua Katz, under a clear pretext, for his arguably immoderate speech that nonetheless should have obviously been protected.  Despite my increasing unease with Princeton’s policies, I had dutifully donated to them every year, but no longer. On occasion, we still get glimmers of hope from academia, but overall the situation remains glum.
  • NASA's James Webb telescope started producing images this July, and they are astounding. I'm looking forward to enjoying its constant stream of beautiful pictures in the coming years. 
an image of Jupiter obtained by the James Webb Space Telescope (provided by NASA)
  • Academia appears to be slowly relaxing its most restrictive COVID measures.  This year, even UIC, which seems to have some of the most draconian policies among American universities, allowed its faculty to give lectures unmasked (to a masked and distanced audience). This was not an option many of my colleagues chose to exercise, but I for one appreciated being able to better vocalize (and breathe).
  • I enjoyed attending ALT 2022, which was my first post-COVID international conference. It was held in France at the fabled ENS-Paris. I'm looking forward to continuing to be able to attend conferences next year.
ENS Paris


Here’s to a peaceful, healthy, and productive 2023!

Friday, December 31, 2021

As 2021 Ends

Another difficult year ends, this time beginning and ending under the shadow of the COVID pandemic, with no end in sight. Vaccine and booster requirements are only intensifying, mask mandates are still in effect, and many institutions, including my own university, are even starting 2022 online. We won’t return to normal until we decide it's time to start treating COVID less as a pandemic and more as an endemic virus that is here to stay, like the flu (whose dangers have lessened with exposure and time).

In the meantime, here are some other thoughts on this last year.
  • Over the summer, I got promoted to the rank of Full Professor. I’ve said the main things I wanted to say in a Twitter thread. You can read it here.
  • The ISAIM 2020 special issue of the journal AMAI, which I guest-edited, will come out in January. Despite the pandemic slowing things down, we completed the submission process, the reviews, the revisions, and the final publication of this issue in less than 2 years -- my issue foreword is already online.  On a related note, you can still register to attend ISAIM 2022 virtually January 3rd through 5th.
  • I taught my courses in 2021 in “hybrid” mode. For me, teaching in a mask (as required by UIC) was difficult and frustrating, especially when trying to cater to an in-person and online audience simultaneously.
  • I co-organized a machine learning theory workshop at the Chicago-based IMSI institute.  The talks were awesome and are available online. Also, I started dabbling in drawing and decided to sketch a portrait of each of our organizers and invited speakers, but fearing causing unintended slight, I have kept these private since the event. I have now changed my mind and am posting them below. But please attribute anything you may find unflattering in your portrait to my lack of drawing ability and not to ill intent.
my hand drawing of the IMSI ML Workshop organizers and attendees
  • My employer, UIC, has intensified its misguided policies with respect to basic faculty rights. Not unrelatedly, I have suspended donating money to various UIC funds and have instead begun donating to FIRE. I am also pleased to say that I have joined the AFA by invitation. I am thankful for the existence of these organizations and want to do what I can to help the cause of academic freedom.
  • About a year ago, fifty established researchers (including a Turing award winner and other very famous people) sent an open letter to the CACM. After acknowledging the letter’s receipt, the CACM went on to ignore it, neither publishing it nor rejecting it, and not even replying to our inquiries about its status. I can only surmise the magazine could not publish it without risking a woke backlash or reject it without embarrassing themselves. After waiting what I think is an appropriate amount of time, I decided it best to make it known what course of inaction the CACM has chosen. (Also see my 2020 end-of-year post.)
  • My former Ph.D. student Ben Fish started a tenure-track position at the University of Michigan this fall. Not to put any pressure on him or anything, but I am now eagerly awaiting academic grandchildren. (Is this how some parents feel after their children get married?)

Wishing everyone a happy and healthy 2022!

Thursday, December 31, 2020

Done with 2020

We finally made it to the end of 2020! I found myself thinking recently about how long the year felt and was reminded that Trump's impeachment trial happened at the beginning of 2020 and Biden's upcoming inauguration will take place less than a year after.  It is a bit hard for me to reconcile this observation with how I experienced the flow of time this year.

  • I'll begin with COVID-19 since starting with anything else would seem strange to me. Too many people, including those who started 2020 on a healthy note, didn't make it to 2021.  Over 350k Americans died of the coronavirus, which means that many of us (myself included) lost a friend or family member to this disease.  When this pandemic is over, I hope we can all learn some lessons for the future.  I have my own thoughts on what we did well, what we did badly, etc., but perhaps this post is not the right place for them.  For now, I will simply express hope that the newly made mRNA vaccines turn out to be as safe and effective as predicted.

  • One positive side-effect of the pandemic is that technology allowing people to work remotely improved considerably during the lockdowns.  When I switched to teaching online in Spring 2020, I had to improvise and hated it. But by Fall 2020, I ran two courses remotely, obtained the necessary tools to make the process smooth, and even learned to enjoy teaching online. Many companies had to go remote during this year and are now considering staying completely remote going forward. This may allow more people to disentangle where they live from where they work (and perhaps precipitate the collapse of Silicon Valley as the undisputed technology capital of the US). Another positive side-effect, at least for me, was that being locked down gave me more time with family and allowed time for new hobbies and for more reading.

  • I've been optimistic about the prospect of medical sciences making significant advances in the next few decades toward improving human healthspans (and lifespans), but I had been less optimistic about the ability of ML and AI to seriously help in this endeavor. AlphaFold, a product of DeepMind using technology it developed earlier, has made a dramatic improvement on the problem of protein folding. I was pleasantly surprised by the result and have thus become more optimistic in general about the ability of machine learning to have a real impact on medical science in the future. Read about it if you haven't already!
image from DeepMind's Nature article
  • I have been fortunate to advise and graduate wonderful students, and this year was no exception. My student Shelby Heinecke defended her Ph.D. (online!) and began work as a Scientist at Salesforce Research, a rather young lab that has grown impressively over the last several years.

  • I helped draft a letter to the Communications of the ACM expressing concern over the growing cancel culture and calling for vigorous argument over ideas to remain possible.  We released it to the public a few days ago, on December 29th.  The letter was quite anodyne but already caused the predictable reaction, which only served to reinforce the idea that such a stance very much needs to be taken. If you are an established professional in the computing sciences, there is still time to sign it before we send it to the CACM.
    image of CACM open letter

After a long summary for last year, that's all I have this time around.  But I am glad to be done with 2020.  

Here's to a better 2021!

Tuesday, December 31, 2019

A Look Back on 2019 and on the Decade

Today not only ends the year but also (putting debates aside) the decade.  Before I reflect on this last year, I wanted to also take a look back at this last decade.

I began the previous decade (the 00's) as still a teenager, soon to graduate from high school and ready to set out for college, and I ended it by getting married and receiving my Ph.D.  That amounted to a lot of personal, academic, and professional change and growth.

This decade (the 10's), I also feel I have a lot to be thankful for and frankly proud of.  In the last ten years, my wife and I had (and have been raising) two wonderful kids; we moved to Atlanta then to Chicago (where we bought a house); I completed two postdoctoral positions (one at Yahoo! Research NY and one at ARC at Georgia Tech); I got a tenure-track faculty job at UIC (and then tenure) and took my first sabbatical (at Northwestern); I obtained some interesting (at least to me!) results with fantastic collaborators and in the process (co-)authored about 30 papers; I mentored 6 amazing Ph.D. students and postdocs; and I incorporated the non-profit AALT and remain very active with it.

It feels strangely self-congratulatory, but I think it's good to reflect on one's life and achievements on occasion, and the end of a decade is not a bad time.  In sum, I consider myself quite fortunate, and I look forward to the adventures and opportunities the next decade will bring!

A photo I took of downtown Chicago, the city I've come to call home this last decade.

Now, on to some 2019 highlights, where I can't seem to help delving into some of the latest controversies:
  • A group of us at UIC were awarded an NSF TRIPODS grant to start an Institute on the Foundations of Data Science.  UIC even put out a press release.  (In this coming decade, I expect it will take up quite a bit of my time as its director.)  Our first activity will be an open house on January 17th -- all are welcome to attend, and you can register for the open house here.  Later, UIC, with the involvement of the new institute, will host the Midwest Machine Learning Symposium (MMLS) in 2020.
  • My student Mano Vikash Janardhanan defended a very nice dissertation this year on graph learning, which was the topic of my own Ph.D.  Mano is now doing well as an applied research scientist at Lifion by ADP.  I should also note that my former student Benjamin Fish is currently on the research job market -- he is great and you should hire him if you can.
  • UC Davis Mathematics Department Chair and Professor Abigail Thompson wrote "A word from..." in Notices of the American Mathematical Society (AMS) criticizing the use of diversity statements.  (She later wrote a very nice op-ed in the Wall Street Journal.)  Her message compared how the University of California system evaluates diversity statements to political litmus tests during the Red Scare.  I agree with the concern that diversity statements will mainly serve to filter out conservative applicants and will further discourage diversity of thought, which should be among the central concerns for universities.  I was and remain concerned at the attacks against her, and I was proud to be among the many to sign onto a letter ("letter to the editor," p. 9) in her support.

    Signatories of a different letter ("the math community values a commitment to diversity," p.2) criticized the Notices for publishing her piece -- I found that letter troubling, not only because of its assumptive claim of speaking for "the math community," but also for its apparent position that opposition to diversity statements contradicts AMS's policy supporting diversity.  Not only does Thompson support diversity, but even if she didn't, disagreeing with any given policy does not mean violating that policy per se.  Policies can recommend courses of action or express commitment to certain goals, but policies should not forbid people to disagree with them.  If the AMS does not wish to become a laughingstock (or a Church), it should refrain from declaring infallibility on any of its positions. (If it were against any policy to oppose it, we would not be able to revoke any current policy in the future.  Surely, that's not a reasonable position.)
  • The Association for Computing Machinery (ACM) apparently signed on to a letter opposing a proposed policy of making all federally funded research publicly accessible.  This is not an issue I am deeply informed on, but my initial take is that there is no reason for federally funded research not to be open to the public (unless national security or clearance issues are involved).  When we have arXiv, journals simply do not seem to provide enough value to warrant limiting access (consider e.g. "overlay journals").  This seems to be an issue of a professional society protecting its interests (the ACM limits access to some of its publications) over those of its members.
  • I've discovered the Goodreads site and app, which lets me keep track of my reading, rate books, and set reading goals.  I can't recommend it enough.  It's one of the only apps that reminds and motivates me to spend my time well (on reading) instead of wasting it online.  I recommend you try it too, especially if you have reading goals or resolutions for the upcoming year.
  • As many of you know, in 2018 there was a move to rename the Neural Information Processing Systems conference. After much debate and a confusing process, the conference was not renamed, but its acronym was changed from NIPS to NeurIPS going forward. Controversy around this re-acronyming reignited when Scott Aaronson, on his blog, posted an email by Steven Pinker politely expressing his view that the renaming was a bad idea. I do not want to get into the latest debate, but I want to point out that subsequently, on a widely read Twitter thread, Pinker was accused of "sexist behavior" for writing the email, and he and Aaronson were accused of "shutting down marginalized voices" just for stating or publishing an opinion.  Baselessly accusing Pinker of sexism is a bullying tactic that's meant to scare him and others less famous than he is into keeping quiet (which is ironically one of the accusations falsely leveled against him).  Regardless of the side we take in these debates, we should roundly reject the use of such tactics in arguments in the academic community.
  • I am the PC chair of ISAIM 2020, which begins on January 6th of 2020 in Fort Lauderdale, Florida.  We accepted some very nice papers, and we will have several exciting invited talks.  I have been attending this biennial conference since 2014, and it will be the first conference I attend this year.
  • Related to my areas of research, my department is hiring a very well-funded postdoc in data science and a tenure-track faculty member in mathematical computer science. It is not too late to apply for either position, so please consider us! We have a strong theory group that’s continuing to grow.

Monday, December 31, 2018

2018 in Review

The best thing about establishing a tradition of blogging at the end of the year is that it compels me to write down some thoughts.  So, here they are, in no particular order:
  • My very talented student Ben Fish graduated this year and is now doing a postdoc at the newly established Microsoft Research Montréal.  In his dissertation, he developed interesting new algorithms for modern data analysis, and I eagerly await the impactful work he'll continue to produce.
  • I am on sabbatical at Northwestern until Fall of 2019.  I'll be teaching a graduate class this Winter quarter and probably another one in Spring.  I've realized I'm probably happiest while teaching one course, so that's the situation I've arranged for myself.  I'm also of course excited to be in a new environment and to interact with Northwestern's fantastic group of faculty and students.
Mudd library: home of Northwestern CS and where I'll be most of next year. (Photo by me.)
  • There has been recent activity connecting logic to machine learning.  A nice paper by my colleagues at UIC relates concepts in model theory and concepts in computational learning theory.  Another interesting paper gives a machine learning problem that's independent of ZFC; I've written a Nature News and Views piece about it, which should appear sometime soon (update on 1/18/19: my paper is here).  In general, I am excited to see where these directions lead.
  • I really enjoy attending the ALT conference because unlike some of the huge machine learning conferences, it is a relatively small and intimate gathering, where it's possible to get to know fellow attendees and actually have time to discuss ideas.  In addition to chairing the local planning as ALT arrives in Chicago for 2019, I've also been leading an effort to build a legal structure around the organization of ALT.  And this November, AALT, the Association for Algorithmic Learning Theory, became incorporated as a nonprofit.  It's been a lot of work, and I'm not yet done, but it's also been very rewarding to help ensure the future of a conference I've become very fond of.  I'm also excited to see ALT improve year after year, while it continues to cover a broad array of topics within learning theory.
ALT 2019 will be in Chicago.  (Photo by Allen McGregor.)
  • I continue to worry about illiberal values gaining ground in higher education, where more and more dogmas cannot even be questioned. This trend is increasingly affecting the sciences and even applied mathematics.  The best defense that I see is for what I really hope is the majority of us who do value a diversity of ideas to speak out, and the more who do, the less risky it will be.  Outside higher education, there are also many reasons to worry, but one bright spot is the fairly new Quillette magazine, which has been fearlessly publishing thoughtful articles on controversial topics, including ones concerning academia.
  • In my last year's post, I advocated for conferences to make changes to address problems around the harassment of attendees, and I'm glad to see instances of sexual harassment being taken seriously.  But I was skeptical of the proposal to rename NIPS.  Nonetheless, after what most everyone would agree was a flawed process, NIPS ended up changing its branding to what seems to many to be something remarkably awkward.  Time may tell whether this decision was wise.
  • My department is hiring for its MCS group, and theoretical computer science, which has grown substantially at UIC in the last few years, is a priority area.  If you're interested in joining us, consider applying (preferably by 1/14)!
    UIC's math department in winter. (Photo by me.)
To a happy and productive 2019!

Thursday, June 28, 2018

Janus and Higher Education

In its recent Janus v. AFSCME decision, the Supreme Court struck down public sector union security agreements.  This relates to my professional life because UIC is a public university, and our faculty are unionized.  I also have an interest in constitutional law, and I finally have a "jurisdictional hook" to blog about it.  Moreover, I recently realized that while I often hear from pro-union faculty and various union representatives, I rarely see other perspectives, at least at work.  So, as a union non-member, I thought it might be useful to give some brief thoughts about the issues involved.

a photo I took of the Supreme Court building during a recent visit to D.C.

In Janus v. AFSCME, Janus challenged the constitutionality of charging public sector employees "fair share" agency fees.  In about half of the states (those without "right-to-work" laws), when a workforce became unionized, unions were allowed to negotiate security agreements, which gave them the power to collect agency fees from non-members.  The Supreme Court held that these agreements violate non-members' first amendment rights by forcing them to subsidize political speech they disagree with.  Unions were already not permitted to compel non-members to pay for the portion of their activities that are overtly political, but the Supreme Court ruled that in the public sector, union bargaining is inherently political because the unions negotiate with the government, which impacts public policy. Especially illuminating was one particular exchange from the oral argument, between Mr. Franklin, the Solicitor General of Illinois, and Justice Kennedy:
Mr. Franklin: ... Independent of that, we have an interest at the end of the day in being able to work with a stable, responsible, independent counterparty that's well-resourced enough that it can be a partner with us in the process of not only contract negotiation -­ 
Justice Kennedy: It can be a partner with you in advocating for a greater size workforce, against privatization, against merit promotion, against -- for teacher tenure, for higher wages, for massive government, for increasing bonded indebtedness, for increasing taxes? That's -- that's the interest the state has? 
Whether you buy this argument or not, you only have to look at the budget crisis in Illinois to understand the concern.

Turning to my own experience, when I first arrived at UIC, the faculty had just unionized.  And while UIC is not the only public research university with a unionized faculty, I viewed unionization as a worrisome development.  It especially seemed to me that tenure-track faculty at an R1 university should be able to negotiate on their own behalf when the need arises.  My concerns were further reinforced when two years later, a faculty strike almost coincided with our interviewing faculty candidates.  Imagine trying to convince someone to join your department with your colleagues holding "unhappy faculty on strike" signs.  Moreover, I thought it would be detrimental for faculty to have to worry about possible repercussions of teaching during strikes or to face potential political pressure from colleagues to join the union.

a union strike at UIC

As I already mentioned, I never joined the union; but on occasion, various union representatives have tried to get me to join and invariably made the following pitch: "You're going to have to pay the union anyway, so why not sign the membership card and have a say?"  The union wants to keep the majority of the bargaining unit as members in order to avoid facing a credible decertification effort, and thereby wants everyone to join.  And non-members were incentivized to join even if they didn't support the union, so that they could have some say in their contract.  To me, this argument for non-members to join the union seems as objectionable as the incentive members will now have to become "free riders" in the post-Janus world.

So what will happen now that security agreements are struck down? It's hard to predict.  I don't know if it's possible, but I'd like to see a workable middle ground emerge. A compromise, for example, that allows workers to unionize and allows unions to charge, represent, and negotiate on behalf of their members only, while leaving non-members alone, might be one answer.  I hope there are also other interesting possibilities to consider.  Whatever happens, the status quo is about to change, and I envisage it will be for the better.

Wednesday, December 27, 2017

A Look Back on 2017

Continuing the tradition of summarizing my year on this blog, here are some things of note from 2017.
  • I got tenure this year!  Somehow, even though my job is now about as secure as jobs get these days, I also find myself with a lot more work on my hands than ever before.  I realize it's in some sense self-imposed, but it doesn't really feel like it.  Yet I can't complain; I get to pursue exciting research of my own choosing and work with incredible colleagues and graduate students.
  • Steve Hanneke and I co-chaired ALT 2017, which was my first time chairing a conference.  We got lots of great submissions and ended up with what I consider a very strong program.  You can read about my experience here.
  • This year, I graduated two more fantastic Ph.D. students: Ádám Lelkes (jointly supervised with György Turán) defended in spring and is now at Google Research and Yi Huang defended in the summer and is now doing a postdoc at the University of Chicago.  In the spirit of the occasion, I've linked to their dissertations rather than their websites.
Left: me, Ádám, and György at Spring commencement.  Right: Yi and me at Fall commencement.
  • Li Wang, whose postdoc I hosted, became a tenure-track Assistant Professor at UT Arlington's math department! At UIC, we usually call it "mentoring" instead of "hosting," but Li needed no actual mentoring from me.  I simply had the pleasure of watching her carry out her ambitious research agenda and produce an array of impressive results.
  • AlphaZero, a more general and more advanced version of AlphaGo, beat out all other engines in a variety of two-player games, including Stockfish at chess (it even beat AlphaGo at Go).  Its chess play feels much more "human" than that of other engines, and I've spent quite a bit of time just watching its games against Stockfish.  I never expected I'd be spending any significant time watching two computers play chess against each other, but here we are.  I'm posting one of these matches, below, for your enjoyment.
A video of AlphaZero putting Stockfish into a beautiful zugzwang.
On a related note, while it's clear deep learning has had and continues to have an impressive impact on the state of the art of AI, I'm curious to what extent these advances in gameplay are due to the deep learning classifier versus the Monte Carlo Tree Search.  In particular, I wonder how good AlphaZero would be if it combined MCTS with a different classifier.  Anyone who has something interesting to say on this point is welcome to leave a comment below.
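For what it's worth, the question has a clean experimental form: vanilla MCTS treats the position evaluator as a plug-in, so one can swap evaluators and compare search strength. Below is a minimal UCT sketch on one-pile Nim (remove 1-3 stones; taking the last stone wins), which is my own toy illustration with my own names, not AlphaZero's actual algorithm.

```python
import math
import random

def legal_moves(n):
    """In this toy Nim variant, a move removes 1-3 stones; taking the last stone wins."""
    return [m for m in (1, 2, 3) if m <= n]

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state = state        # stones remaining after `move` was played
        self.parent = parent
        self.move = move
        self.untried = legal_moves(state)
        self.children = []
        self.visits = 0
        self.wins = 0.0           # total value for the player who just moved into this state

def uct_child(node, c=1.4):
    """Pick the child maximizing the UCB1 score."""
    return max(node.children,
               key=lambda ch: ch.wins / ch.visits
                              + c * math.sqrt(math.log(node.visits) / ch.visits))

def random_rollout(n):
    """Play out randomly; return 1.0 if the player who just moved into state n wins."""
    moves_made = 0
    while n > 0:
        n -= random.choice(legal_moves(n))
        moves_made += 1
    return 1.0 if moves_made % 2 == 0 else 0.0

def mcts_best_move(root_state, iters=3000, evaluate=random_rollout):
    root = Node(root_state)
    for _ in range(iters):
        node = root
        # 1. selection: descend while fully expanded
        while not node.untried and node.children:
            node = uct_child(node)
        # 2. expansion: add one untried child
        if node.untried:
            m = node.untried.pop()
            node.children.append(Node(node.state - m, parent=node, move=m))
            node = node.children[-1]
        # 3. evaluation: a random rollout here, but any position evaluator plugs in
        value = evaluate(node.state)
        # 4. backpropagation, flipping perspective at each level
        while node is not None:
            node.visits += 1
            node.wins += value
            value = 1.0 - value
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move
```

Passing `evaluate=lambda n: 1.0 if n % 4 == 0 else 0.0` (a "perfect classifier" for this game) in place of the random rollout still finds the optimal move from a pile of 5 (take 1, leaving a multiple of 4), and comparing evaluators this way is exactly the kind of experiment the question above is after.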
  • Even though I know it's arbitrary, I can't help but notice when some numbers get a significant digit added in base 10.  This year, my Twitter followers surpassed 1000, and so did my citation count.  Actually these two numbers have been tracking each other rather closely ever since both became non-negligible. A coincidence? 
  • I got to see a total solar eclipse over the Grand Tetons and took a pretty nice picture of it.  The next one over the US will be in 2024, which is rather soon as far as these things go -- I highly recommend trying to see it if it's at all possible.  This essay pretty much gets the experience right from my perspective.
totality!
  • I've also blogged about this before, but incidents involving students and faculty across multiple universities trying to stifle speech and debate continue a troubling pattern for academia.  These have included a violent attack on Charles Murray and his faculty host at Middlebury, a bizarre tribunal at Wilfrid Laurier of a TA named Lindsay Shepherd, and The Evergreen State College descending into complete madness over Bret Weinstein's opposition to issues related to an "equity" proposal.  It's also unsurprising to me that these incidents are happening at the most liberal of universities, where increasingly effete (or often even sympathetic) administrations are afraid or unwilling to stand up to some of their increasingly emboldened students.  Not all the news on this front is bad: I predict students at Claremont McKenna College will think twice before blockading a speaker again.
  • I have no new insights to add, but it seems worth noting that Bitcoin prices have gone crazy.  I can say that at no point in time have I had any interest in buying or mining Bitcoin, and that hasn't changed.
the price, in dollars, of 1 Bitcoin versus time
  • The "Me Too" movement exposed some very troubling things across many industries.  The machine learning community (and academia in general) is clearly not immune from these problems.  And we also need some institutional changes; the clearest among these is creating systems which can address harassment at gatherings like conferences, which operate outside the normal work/university setting.  I'm glad this is being taken seriously by our community, starting with a rethinking of the code of conduct at NIPS, one of the main machine learning venues.
  • Finally, my department is hiring specifically in MCS.  Applications are accepted through 1/22, so it's still not too late to apply.  We have a strong and growing theory group!
Here's to an exciting and productive 2018!

Thursday, October 19, 2017

ALT 2017

I’ve just returned from the 28th International Conference on Algorithmic Learning Theory (ALT 2017), which was held in Kyoto, Japan.  The last time ALT was held in Japan was in 2007, exactly 10 years ago.  Back then, I was a second year grad student at my first ALT, and at one of my first conferences altogether.  Now, ten years later, I got to watch my Ph.D. student Mano Vikash present his first conference paper, and I was serving as program co-chair, together with Steve Hanneke.  What a difference 10 years makes!

a photo I took at Yasaka Shrine in Kyoto

Except for handling some last-minute duties concerning session chair assignments and minor issues with the online proceedings, there was little left for me and Steve to do at the actual conference. Once the conference started, the local organizers and general chair took over, and they kept things running smoothly.  But the experience of attending was still different for me.  By the time the conference started, I had read many of the papers and was at least a little familiar with the rest of them.  I also felt a strange sense of responsibility to go to every single talk, and I enjoyed doing this more than I expected.

Being PC chair was a lot of work, but it was quite gratifying.  I appreciate that Steve and I were able to work quite well together.  I’m thankful that so many amazing people agreed to serve on the PC and that our invited speakers readily agreed to come all the way to Japan.  I am thankful that we got many good submissions — more submitted papers than in any year since my first ALT a decade ago.  And I am especially grateful for the quality of the reviews; the PC had to handle more papers than usual, yet the reviews were careful and detailed, catching multiple bugs and providing valuable feedback to authors.  The resulting proceedings are online.

We also had two great invited speakers for ALT.  Sasha Rakhlin, among other things, presented some interesting results, which I was not previously aware of, at the intersection of Rademacher complexity and online learning.  Adam Kalai gave a great and accessible talk on "fairness" and had a nice way of expressing various notions of fairness as loss functions.  I must admit I’ve been skeptical of this area for a while, but chatting with Adam afterwards has made me less so.

When blogging about conferences in the past, I discussed some of my favorite papers.  I don’t feel it would be especially appropriate for me to do so in this case, so I’ll just mention the paper that won the E. M. Gold student paper award, given to Tyler Dohrn, an undergraduate student at Yale.  In the paper, Dana Angluin and Tyler Dohrn showed that when equivalence queries are answered randomly, the expected query complexity of exact learning is drastically improved over when equivalence queries are answered adversarially.  In doing so, they introduced a nice notion called the "elimination graph" of a concept space, which I expect to find more applications.  This is also a problem I and others have informally thought about, so I’m glad to see progress in this area.

Finally, I’ll note that ALT has been going through some changes lately.  This year, in addition to more minor tweaks, we switched publication venues from Springer to PMLR (the new name for JMLR’s conference proceedings) in favor of open access, and we got rid of page limits.  More big changes are coming next year: ALT 2018 will co-locate with AISTATS instead of DS, and PC co-chairs Mehryar Mohri and Karthik Sridharan have put out an ambitious call for papers with the goal of becoming the "best conference in algorithmic and theoretical machine learning."  (The co-location with AISTATS also means that the conference is moving from Fall to Spring, and papers are due in a week!)

Computational learning theory has two main conferences: COLT and ALT, with COLT being the larger of the two.  ALT has always had strong authors and PC members, but hadn’t grown in prestige and visibility like COLT.  My former postdoc host John Langford wrote, "ALT = 0.5 COLT." Yet, I’ve always appreciated ALT’s breadth and its resilience to various trends that change the theme of some other conferences almost yearly.  ALT grew this year, and I’m optimistic about its future. And my hope is that ALT can meet its new ambitions while retaining its friendly and open culture.

Wednesday, July 12, 2017

Doing My Small Part

I've watched with astonishment as multiple universities descended into insanity in the last couple years.  Both my almae matres were affected: Princeton's president catered to protesters who took over his office, and Yale students who organized protests where a college Master was cursed and yelled at were given "leadership" awards at graduation.  I don't doubt that students, including the ones protesting here, may have legitimate grievances, but these latest movements have been bullying in their tactics and misguided in their demands.  You can read the petition I signed onto in Princeton's case.

Woodrow Wilson (photo from history.com), Princeton's most famous alumnus and a former Princeton president, was one of the targets of the protests at Princeton. Thankfully, he avoided having his name scraped off Princeton's School of Public and International Affairs.
But lately, the descent into madness seems to be accelerating. Charles Murray, who has been unfairly maligned as a eugenicist monster, came to Middlebury to deliver a lecture, only to be shouted down and physically attacked, with the offending students barely receiving any punishment. Berkeley hasn't been able to host certain conservative speakers without violence breaking out. And very recently, Bret Weinstein at Evergreen State College has been literally hunted because he didn't think he should be asked to leave campus due to his race, and he is receiving no support from the administration. (Listen to this if you want to see how bad things have gotten.)

In this post, I won't go into the reasons why I think things have gotten so out of control; I have some ideas, but I'm also dumbfounded.  I will say that these latest incidents are so obviously unacceptable that I figured most university faculty would be on the side of free expression, but it seems I may be sadly mistaken about this.  Bret Weinstein reported that the vast majority of his colleagues who have spoken out on this issue are actually calling for him to be disciplined, and only one other Evergreen professor is willing to defend him publicly.  The protesters and many faculty even blame the chaos that resulted on their campus on Weinstein's decision to go on Fox News.

So, as a university professor, I want to do my small part and publicly defend Bret Weinstein.  No group should feel entitled to ask any other group to leave campus, especially based on skin color or ethnicity.  Faculty should be able to express their opposition to such requests and to other bad policies without fear of being labeled racists. To me, it is clear that the organizers of some of these protest movements are the actual racists. And Bret Weinstein should be able to go on Fox News or any other forum to express his dismay at the situation.  Needless to say, I also condemn any efforts to silence Charles Murray, Ann Coulter, or the other people facing illiberal forces on college campuses.

It's time to stand up for free and spirited debate and for respectful discourse and common decency. If universities are to remain centers for inquiry and progress, we cannot afford to give these regressive movements another inch.

Friday, December 30, 2016

2016 is Finally Ending

2016 was, to say the least, a tumultuous year, marked by numerous conflicts across the world, the British exit from the E.U., an exhausting U.S. political campaign culminating in the election of The Donald, a sharp ascent of Putin's menacing role in the world, and too many other things to even list.  I mention this to note that I haven't missed any of these events, or their importance, but I'll skip most of these in this year's summary in favor of some more personal, or at least scientific, happenings.  (I do occasionally tweet some political opinions, and if you want to see those, you should follow me there.)

So, in no particular order, here are some things I do want to note as 2016 comes to a close.
  • It's reassuring to remember that while it may not feel like it, the world continues to become a better place as a whole.  If you don't believe me, take a look at the data.  And even if you do believe me, read this book by Steven Pinker.
  • I have been developing my amateur interest in architectural photography.  Below is a recent photo of mine of UIC's University Hall, looming over its surroundings.  While not particularly pleasing to look at, I think it captures the "socialist utopian" ideology of the brutalist architecture on display all over campus.
    University Hall, photo by me
  • It has been 25 years since Pretty Good Privacy (PGP) was developed.  Given the recent political happenings, I'd say it's a pretty good time to start using it for sensitive emails.  I installed Mailvelope, and here is my public key; its fingerprint is AC5E DCA0 76A1 F55A 4819 94A9 2FAC ADDD C766 7CB9.
  • Reports of the Russian government influencing our election highlight once again the importance of good security practices, which are horribly lacking throughout most of our companies and the government.  Until we fix this, we will continue to be at the mercy of foreign adversaries, hackers, and (mis)fortune.
    photo from glitch news
  • This year, I graduated my first Ph.D. student, Jeremy Kun.  Jeremy finished in 5 years, though he only started working with me at the end of his second year.  Before graduating, he had the option to do a postdoc in academia, work at Google, or join a startup; he decided to go the startup route and is now at 21 Inc.  He wrote an interesting blog post about his journey through grad school that I recommend to everyone considering grad school in math or CS theory. (Full disclosure: I think he makes UIC MCS seem a rather nice place, which I agree with, but it's also in my interest to promote it as such to prospective students.)  I also expect to graduate some more students in 2017.
    Jeremy Kun defending his thesis
  • The Man Who Knew Infinity, a movie about Ramanujan, was released in the U.S. this year.  Even though the movie got some biographical details wrong, and even though I found some parts a bit annoying, the mathematical parts were quite accurate.  In particular, I think this movie, more than any other that I've seen, does a pretty decent job of showing a general audience what it is that mathematicians do all day.
  • Two years ago I predicted that a computer program would be able to beat the best human Go players by the year 2020.  AlphaGo reached this milestone this year, and while this technically met my prediction, the speed at which it arrived hasn't helped allay my fears of A.I. posing an existential risk to humanity.  Those of you who haven't given this issue much thought should watch Sam Harris's TED talk on this topic.  Also, I recommend watching Westworld, which I liked both as a show and for some of the nontrivial philosophy that it presents on this topic.
    computers become better than humans at one more thing, image from quartz
  • This year Elon Musk declared that the odds are a billion to 1 that we are living in a simulation. The argument goes like this: eventually we will become advanced enough to simulate worlds ourselves, the simulated beings won't know they're being simulated (and may eventually build simulations of their own), and the number of simulated worlds will vastly outnumber real ones.  My prior doesn't allow for such odds, and I think there are quite a few hidden and probably false assumptions in his argument, but if he's right, it would mean that we're fundamentally mathematical beings, and that should at the very least make Max Tegmark happy.

Tuesday, December 20, 2016

Counting Our Losses After the Election

After Trump's victory this election, I've seen a number of posts criticizing the "data scientists," all of whom predicted a Clinton victory.  If they all got it wrong, how can they claim to be engaging in science if they won't now change their methods?  And should they change their methods?

the electoral vote outcome as of 12/20/16, image from Wikipedia

I'm not a fan of the hordes of "data scientists" running regressions and pretending they're doing "science."  In fact, I'm skeptical of any field with science in its name doing actual science, computer science included. But I want to defend some of the forecasts themselves and suggest a way forward.

(Also, while I wouldn't blame the pollsters, who have an increasingly hard job these days, one group I have no problem blaming is the political "scientists," who have all these theories about what candidates should and shouldn't do and where advertising helps and where it doesn't; Trump did none of the things he was "supposed" to do and still won.)

Blame the forecasters?

I don't think there was an honest way to look at the polling, or really, most other publicly available data, and claim that Trump was actually more likely to win than Clinton. The truth is simply that unlikely events occasionally occur, and this was one of them.

While the forecasts all agreed that Clinton was the favorite, they assigned her different win probabilities.  Sam Wang (whose forecast I repeatedly dismissed before the election) assigned something like a 99% chance to a Clinton victory.  Nate Silver assigned something like a 2/3 chance to a Clinton victory.  Does that mean that Nate is a better predictor?

Well, still not necessarily.  Unless someone assigned a 100% probability to a Clinton win, we can't know for sure.  Sam Wang could have been closer to the truth, but simply gotten unlucky.  Moreover, people should be rewarded for predicting close to 0% or 100% because those predictions are much more informative.  Nate Silver's prediction might have been well calibrated, but still quite useless.

Consider the following prediction.  I can predict that for the next 10 elections, the candidates of the two major parties have roughly a 50-50 chance of winning.  Since the Democrats and the Republicans roughly win half the time, I'll probably be well calibrated, but my prediction will remain useless.

Count your log-loss

So, ought we throw our hands up in the air and trust everyone equally next time?  No!  Statistics and machine learning have ways of evaluating precisely these things.  We can use something called a loss function (for reasons I won't go into here, I will use the "log-loss" function, but you can use others), where we assign penalties, or losses, for inaccurate predictions.  Whoever accumulates the least loss over time can be thought of as the better predictor.

The binary version of the log-loss function works as follows:
L(y,p) = -(y log(p) + (1-y)log(1-p))

So let y=1 in the event that Trump wins, let p be the probability assigned to that event, and take logs base 10.  Someone assigning this event a probability of .01 will suffer loss = -(1*log(.01)+(1-1)*log(1-.01)) = 2, whereas someone assigning it a probability of .33 will suffer a loss of approximately 0.5.  Note that had Trump lost, the losses would have been approximately .005 and .2, respectively, rewarding the confident prediction.

So, according to this metric, Sam Wang gets penalized a lot more than Nate Silver for predicting an event that didn't occur.  If he keeps doing this over time, he will be discovered to be a bad predictor.  Note that this function indeed assigns a loss of 0 for predicting a 100% probability to an event that occurs and infinite loss to assigning 0% to an event that occurs.  Big risks yield big rewards.  Also note that my scheme of assigning a 50-50 chance to each future election will simply yield a loss of about .3 each time, which shouldn't be too hard to beat.
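The arithmetic above is easy to check with a few lines of Python; this is just a sketch of the scoring scheme (using base-10 logarithms, which match the numbers in the examples):

```python
import math

def log_loss(y, p, base=10):
    """Binary log-loss: the penalty for assigning probability p to outcome y (0 or 1)."""
    return -(y * math.log(p, base) + (1 - y) * math.log(1 - p, base))

# Trump won, so y = 1.
print(log_loss(1, 0.01))  # the ~99% Clinton forecast pays a loss of 2
print(log_loss(1, 0.33))  # the ~2/3 Clinton forecast pays only about 0.48

# Had Trump lost (y = 0), the confident forecast would have been rewarded:
print(log_loss(0, 0.01))  # about 0.004
print(log_loss(0, 0.33))  # about 0.17

# The uninformative 50-50 predictor pays about 0.30 on every election:
print(log_loss(1, 0.5))
```

Summing these per-election losses over time gives exactly the cumulative score I have in mind for keeping forecasters honest.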

So, I suggest we start keeping track of the cumulative log-losses of the various people in this game to keep them honest.

Thursday, December 31, 2015

An Eventful 2015

This post continues my tradition of reviewing the year.  This time my commentary is a bit longer than in previous years, so without further ado, here is my take on 2015:
  • This year, we again have two important computing anniversaries: the bicentenaries of Ada Lovelace and of George Boole.  Ada Lovelace, daughter of Lord Byron, was the world's first programmer -- her algorithm computed the Bernoulli numbers on Babbage's Analytical Engine, a machine that existed only on paper at the time.  Incidentally, my Ph.D. advisor, Dana Angluin, wrote a nice piece on Lovelace.  George Boole, of course, established the foundations of 0-1 valued Boolean algebra, which is essential for computer science.
    left: George Boole (photo from Wikipedia), right: Ada Lovelace (photo from Eventbrite)
  • This year also marks the 25th anniversary of the World Wide Web.  Congress decided to celebrate by ... passing CISA.  Sigh.
  • The results that the latest neural nets are producing are moving quickly from the realm of impressive to the realm of scary.  On that note, I think it's time for us to start seriously thinking about the potential dangers of AI.  If you believe there is nothing special about our wetware, and that we'll keep making scientific progress, it's pretty straightforward to reach the conclusion that we will likely end up building computers that are in every way smarter and more creative than humans.  Once this happens, the cat will be out of the bag, so to speak, and the only question, really, is when this will happen.  This isn't necessarily a bad thing, but it is something we'll probably have only one chance to get right.
A "reimagined" version of "The Scream" by Google Brain scares me more than the original.
  • On a related note, a few high-tech luminaries founded OpenAI, a new AI research center with a billion dollar (yes, you read that right!) endowment.  Just think what impact the 60 million dollar Simons centers have had, and you'll quickly see how big a deal this could be if the money is spent well.  While I think the people involved are fantastic (congrats to Ilya!), I'm surprised by the center's initial narrow focus on deep learning.  As Seb pointed out, we will need entirely new mathematical frameworks to come close to "solving AI," so such a narrow focus seems shortsighted.  Luckily, the folks at OpenAI have plenty of resources for any future course-corrections.
    Sam Altman and Elon Musk, two of the founders of OpenAI, photo from medium/backchannel
  • Assuming the result checks out, the field of theoretical computer science has had a rare breakthrough.  Namely, Laci Babai gave a quasipolynomial-time algorithm for the graph isomorphism (GI) problem.  I was lucky enough to attend the lecture where the result was announced.  My grad student, Jeremy Kun, wrote up what is probably the best account of the lecture; it still serves as a nice introduction to the paper, which is now up on arXiv.  One interesting thing about this result is that it neither makes real practical progress on GI (we already have fast heuristics), nor does it change our view of complexity (since the GI result is "in the direction" we expected).  It's just that GI is such a long-standing hard problem that progress on it is a very big deal.
Babai, presenting his proof.  Photo by Jeremy Kun.
  • In combinatorics, Terry Tao, building on a Polymath online collaboration, solved the Erdős discrepancy problem by proving the conjecture true.  I like that these large online collaborations have now led, or helped lead, to solutions of multiple important open problems.
    Paul Erdős and Terry Tao, photo from Wikipedia
  • This isn't primarily about computer science or math, but if you haven't heard of CRISPR, go read about it.  I don't think it's an exaggeration to say that CRISPR, or its offshoots, are probably going to change our world rather fast.  While this technique was discovered a couple years ago, it was named "breakthrough of the year" for 2015 by Science magazine.

Monday, December 21, 2015

Why Does the World Care about Our Math?

One day while working on bandit problems at Yahoo!, I had the strange realization that its search engine, nay the entire world, seems to have a rather remarkable property: we can sit around doing math, and as a result, better advertisements will get served to users.  Of course, this applies not just to computational advertising but to pretty much anything -- we can know where the planets will be, when certain epidemics will spread, how fast planes need to fly to stay airborne, and a plethora of other things, just by thinking abstractly and solving some equations.

I immediately and eagerly shared my newfound realization with others, and it impressed absolutely nobody.  I was told "How else would the world work?" and "There is lots of math that's not useful, but we choose to work on and formalize the things that are relevant to the real world."  These are, of course, perfectly good objections, and I couldn't explain why I found my realization at all remarkable, but I still had a nagging feeling that I was onto something.

Fast-forward 6 years, and I'm at Market Fresh Books, a bookstore near UIC.  As an aside, this bookstore is really interesting -- it sells used books by the pound or for small flat fees; I once picked up a copy of baby Rudin for just 99¢ (plus tax) to add to my library.  Anyhow, I stumbled upon a copy of "Disturbing the Universe," Freeman Dyson's 1979 autobiography, and it looked interesting enough to buy.  That evening, while reading it, I came upon the following passage by Dyson:
"Here was I ... doing the most elaborate and sophisticated calculations to figure out how an electron should behave.  And here was the electron ... knowing quite well how to behave without waiting for the result of my calculation.  How could one seriously believe that the electron really cared about my calculation one way or the other?  And yet the experiments ... showed it did care.  Somehow or other, all this complicated mathematics that I was scribbling established rules that the electron ... was bound to follow.  We know that this is so.  Why it is so, why the electron pays attention to our mathematics, is a mystery that even Einstein could not fathom."
I still don't know the answer, and I can't even state the question without it seeming silly, but at least I now know I'm in good company.

Freeman Dyson
image credit, atomicheritage.org

Wednesday, December 31, 2014

2014 in Review

As 2014 comes to an end, I decided I want to continue my newfound tradition of summarizing my thoughts in a "year in review" post.  So here are some thoughts on academia, machine learning, and theory, again in no particular order.
  • Every company seems to have its own superstar leading a neural nets effort, and deep learning keeps making impressive advances.  My hope is that a nicer learning theory gets developed around this topic.
  • Computer science enrollments continue to soar, and the "sea change" may be here to stay. It's becoming a better and better time to study computer science.
  • On the other hand, research labs have continued to be vulnerable.  Perhaps we'll see a reverse-trend, with academic jobs temporarily making up for losses in the research job market.
  • It's an interesting time for online education, which has had some setbacks recently.  Yet, it seems even Yale would rather stream Harvard CS50 than hire enough faculty to teach its introductory computer science course.
  • With the release of The Imitation Game, more people than ever will know about Alan Turing.  But will their impressions be accurate?
  • My favorite "popular" AI article this year was on computer Go.  My guess is that by 2020, computers will be able to beat the best humans.
  • After teaching a learning theory course last semester, next semester I'll be teaching a graduate-level "Foundations of Data Science" course, loosely following Hopcroft and Kannan's new book.  I'll have to make some tough choices about what material to include and what to skip.  Any thoughts?