Wednesday, May 26, 2010

The Legacy of Martin Gardner

You may have heard that Martin Gardner passed away a couple days ago at the age of 95. Martin Gardner was perhaps the most famous creator of recreational mathematics. His books, columns, and articles made math fun and accessible to many.

Martin Gardner
photo from the Oberwolfach Photo Collection 

Martin Gardner was best known for his Mathematical Games column in Scientific American from 1956 to 1981. He stopped writing the column before I was even born, but others carried on his legacy. In the 1980s Douglas Hofstadter (the famous author of GEB) took over with his Metamagical Themas column, and afterwards Ian Stewart (a professional mathematician) continued the tradition with Mathematical Recreations.

Ian Stewart's wonderful column ran from 1990 to 2001, when I was just the right age to become hooked -- I probably read every one of Ian Stewart's articles after 1995. When I got to college, some of my friends pointed me to the earlier columns of Martin Gardner, and I quickly became a fan. It was not hard for me to see why Ron Graham said of him, "Martin has turned thousands of children into mathematicians and thousands of mathematicians into children." I'm sure that I owe some of my love for puzzles to Martin Gardner's legacy, and I was very sad to hear of his passing.

It would only be appropriate to end this post with a puzzle. Martin Gardner really liked this one and named it the "Impossible Puzzle."
Let x and y be two different integers. Both x and y are greater than 1, and their sum is at most 100. Sally is given only their sum, and Paul is given only their product. Sally and Paul are honest and all this is commonly known to both of them.

The following conversation now takes place:
  • Paul: I do not know the two numbers.
  • Sally: I knew that already.
  • Paul: Now I know the two numbers.
  • Sally: Now I know them also.
What are these numbers?

No cheating! I will update this post with the solution in a couple days.

A good memorial and an obituary of Martin Gardner.

The writer of the current Scientific American puzzle column, Puzzling Adventures, is Dennis Shasha.


Update (5/30/10): The answer is that the two numbers are 4 and 13. I might have written up a solution if good ones (including Martin Gardner's) didn't appear here.
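For anyone who wants to check the answer for themselves, here is a brute-force sketch in Python (the function names and structure are mine, not Gardner's). Each line of the conversation is treated as a filter that eliminates every pair inconsistent with it:

```python
from collections import defaultdict

# All valid pairs: 1 < x < y and x + y <= 100.
pairs = [(x, y) for x in range(2, 50) for y in range(x + 1, 99) if x + y <= 100]

by_sum, by_prod = defaultdict(list), defaultdict(list)
for x, y in pairs:
    by_sum[x + y].append((x, y))
    by_prod[x * y].append((x, y))

# 1. Paul (who has the product) doesn't know: his product is ambiguous.
def s1(x, y): return len(by_prod[x * y]) > 1

# 2. Sally (who has the sum) knew that: every split of her sum has an ambiguous product.
def s2(x, y): return all(s1(a, b) for a, b in by_sum[x + y])

# 3. Paul now knows: exactly one pair with his product survives statement 2.
def s3(x, y): return sum(s2(a, b) for a, b in by_prod[x * y]) == 1

# 4. Sally now knows: exactly one split of her sum survives statement 3.
def s4(x, y): return sum(s3(a, b) for a, b in by_sum[x + y]) == 1

solutions = [p for p in pairs if s1(*p) and s2(*p) and s3(*p) and s4(*p)]
print(solutions)  # [(4, 13)]
```

Note that statement 2 depends only on the sum, so once Sally's claim checks out for her actual pair it checks out for every split of her sum -- which is why statement 3 is the only thing left to discriminate in statement 4.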

Thursday, May 20, 2010

Fenno's Phenomenon

As November congressional elections approach and analysts fill the airwaves, I am reminded of a phenomenon called Fenno's Paradox. It goes like this: why is it that voters are usually dissatisfied with congress but keep re-electing their representatives at high rates?

I first heard of Fenno's Paradox as an undergraduate taking an elective course on congressional power. The professor tried to tackle the paradox by looking at the advantages of incumbency, the role of money in elections, the irrationality of voters, etc. But I never understood why this is a paradox at all.

Consider the following situation. Say each voter wants all federal spending to go to his or her own state and votes for representatives who feel the same. Each representative fights to bring all spending to his or her state, and this results in a compromise that the money is split among the states. The voters in every state are furious at the end result, and they blame congress for wasting their money. But all appreciate their respective representatives' valiant efforts.

This may not even be so far from what really happens. I haven't studied this carefully, but most people seem to be against protectionism and special deals, except when these deals favor their own states. Representatives (without national ambitions) are only answerable to their own constituents, so they have incentive to keep pushing for these deals, and in the end everyone is disgusted with congress.

The problem is that whenever I mention this to political scientists, they aren't convinced but don't really tell me why. And clearly I'm not the first person who thought of this "resolution." Perhaps in the social sciences coming up with a non-paradoxical interpretation doesn't resolve a paradox, or a paradox may just mean a seemingly contradictory statement.

Admittedly, I haven't read the literature on this phenomenon, so am I missing something? Perhaps the real resolution to this paradox is that in the future I should stick to posting only on things I know something about.

Monday, May 17, 2010

Lessons from Future Past

After reading Isaac Asimov's 1950 novel Pebble in the Sky, I started thinking about what visions of the future people had around 50 years ago. While reading the book, I was particularly struck by a mundane passage where one character (Arbin) waits for his turn at the newspaper. This passage wouldn't have been jarring to me had the book's setting not been thousands of years in Earth's future, where spaceships routinely traversed the galaxy. Asimov (at least in this book) imagined people passing a newspaper around in an age of interstellar travel. This reminded me of Captain Kirk signing notepads brought to him by his crew or Princess Leia hiding messages in droids. Just send an email!

But who can blame people for imagining the future this way? The 50s and 60s followed an exciting time in physics -- in the preceding half-century, we had gone from searching for the ever-unfindable aether to discovering relativity and quantum mechanics, inventing televisions and the atomic bomb, and much more. Fusion power providing unlimited free energy was supposed to be just around the corner. Meanwhile, computer science was still in its early stages -- we sent a man to the moon while still doing some calculations with slide rules. What happened in the next half a century blindsided everyone.

To be fair, physics has made its own remarkable advances since then (in ways people imagined) -- in everything from incredible materials to new and interesting theoretical developments. But in the last half-century, the real action was in computing. Some visionaries did foresee the rise of computers. But instant and universal access to information? Secure virtual payments? Zettabyte scales? Nobody saw that coming! People envisioned a Golden Age for spaceships and jetpacks, yet got one in computing first.

That some of our predictions would turn out wrong is of course expected, but I still can't help but wonder what the next 50 years will bring. There's a near consensus that these advances will continue, that we'll spend more and more time online and computers will do more and more for us. And it's hard for me not to believe that this will help bring about Golden Ages in other fields -- in medicine, now that we can quickly sequence genomes, run studies at unprecedented scales, and take advantage of nanotechnology; in math, as computers become more useful and we collaborate in new ways to solve open problems; even in the social sciences, as researchers get data they couldn't have dreamed of. Perhaps computers will even start to develop dreams of their own.

And while I think continued breakthroughs in computer science await (How can I not? I'm a computer scientist!), it's useful to remember that our predictions have never been perfect. Who knows where the next exciting advance will actually lie -- it might even be fusion reactors.

Friday, May 14, 2010

Caveat Surfer

A recent article got me thinking about laws and the internet. A couple days ago, the Latvian police apprehended a researcher at the University of Latvia who they claim hacked into government systems and obtained tax documents of various officials. Apparently, there was a security flaw that allowed anyone to access Latvian tax records by basically visiting the proper URL. So this alleged "hacker" probably wrote a simple shell script to get 7.5 million tax documents and sent them to a journalist. He now faces a possible 10 years in jail for essentially executing an illegal command.

Now, we don't really know what he did, and I have little knowledge of Latvian law, but it got me thinking about what I'd ideally like the law to be. On one extreme, we have people who write viruses and purposely cause lots of damage; on the other extreme are people who steal wifi from the local coffee shop (even they can get arrested).

This Latvian story falls somewhere in between -- unlike the wifi "thief," the Latvian hacker probably should have known what he was doing could get him into trouble, but do we really want to live in a society where you can be sent to jail for visiting a website? It seems one problem is that visiting a website can include everything from buffer overflow attacks to illegal currency transfers. But this incident feels more like the Latvian government put sensitive information online and then decided to arrest anyone who accessed it.

I guess it's unavoidable that laws about the digital world, even more so than laws about face-to-face interactions, will have seemingly arbitrary lines between what's legal and what's not. But there should be some burden on people and governments to reasonably protect their own data -- and freedom for all of us to poke around a little.

Reddit has some interesting comments on this story.

For a blog on these types of issues by people who have thought about them more than I, visit Freedom to Tinker.

Tuesday, May 11, 2010

Around the Galaxy in 80 Years

Stephen Hawking recently wrote a fun article on time travel. One of Hawking's ideas is to build a giant spaceship and fill it with fuel. As the ship burned the fuel, it would go faster and faster, eventually reaching speeds near the speed of light. Hawking argued that we could use such a ship to travel to the future or to the edge of our Galaxy, perhaps in only 80 years.


Leaving aside my skepticism about our ability to do that sort of thing, a bigger question should be -- how would that even help? It seems impossible to get to the edge of our Galaxy in 80 years no matter how fast we go. The Milky Way is about a hundred thousand light years in diameter, and its edge lies tens of thousands of light years from us. Even going at the speed of light, getting there would take tens of thousands of years, so what could Hawking be talking about?

The answer is again relativity. In my previous post, I talked about how objects going near the speed of light experience relativistic effects. One of those effects is time dilation: clocks of moving objects go slower when viewed by (relatively) stationary observers. Another is Lorentz contraction: an observer will measure the length of a moving object as shorter in the direction of its relative motion.

So while we can't hope to go faster than the speed of light, we wouldn't really have to traverse thousands of light years either, at least not from our point of view. Because of Lorentz contraction, we would see the entire galaxy contract into a more manageable distance. So, in some sense, we can travel a light year in under a year. And if we then turn around and come home to Earth, we will also have traveled into the future, killing two birds with one giant fuel-filled spaceship.
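The arithmetic here is easy to sketch. Working in units where distances are in light years, times are in years, and the speed of light is 1, the on-board travel time is just the contracted distance divided by the speed. (The 25,000 light years below is my own rough figure for the distance to the galaxy's edge, not a number from Hawking's article.)

```python
import math

def gamma(v):
    """Lorentz factor for a speed v given as a fraction of the speed of light."""
    return 1.0 / math.sqrt(1.0 - v * v)

def traveler_years(distance_ly, v):
    """Years experienced on board: the distance contracts by a factor of gamma(v)."""
    return (distance_ly / gamma(v)) / v

# A light year in under a year: at 90% of light speed the trip
# takes about 0.48 years from the traveler's point of view.
print(traveler_years(1.0, 0.9))

# At 99.9999% of light speed, a rough 25,000 light years to the
# galaxy's edge pass in about 35 years on board.
print(traveler_years(25_000, 0.999999))
```

So "80 years to the edge of the galaxy" is not about going faster than light -- it's about the distance itself shrinking for the traveler.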

This also answers the question from my previous post. From their point of view, muons halve only thrice between airplane height and the Earth's surface because to them it's 3000 feet, not 30000.

The Milky Way image is under a Creative Commons Attribution-Share Alike 2.5 Generic license. Its author is Digital Sky LLC.

Sunday, May 09, 2010

Saved by Relativity

Coming at you from space, flying close to the speed of light, about 1 muon passes through your head every second. Probably not the best thing for your health, but also not enough to really harm you.
image from nsf.gov / J. Yang
Muons happen to be very unstable: their half-life is near 1 millionth of a second (microsecond or μs). 100 muons become 50 in 1μs and 25 in another. They decay so fast that if every particle in the universe were a muon, within 1 millisecond (1000μs) they'd all be gone!

These muons fly toward Earth at near the speed of light, at about 1000 ft/μs. So, if every second 1 muon goes through your head on Earth's surface, calculations show (due to their decay) 2 should go through your head at 1000 feet, 4 at 2000 feet, and 2^30 at 30000 feet -- that's over 1 billion muons, enough to kill you instantly!

But planes fly at 30000 feet all the time. So where did we go wrong?

To figure that out, we need some relativity. The special theory of relativity postulates that the laws of physics are the same in all inertial (not accelerating) reference frames and that the speed of light is always constant. Its consequences include:
  1. Moving clocks appear to go slower to stationary observers.
  2. An observer will measure the length of a moving object as shorter in the direction of its relative motion.
  3. E = mc^2 (which we don't need for this problem).
The first two of these consequences have a noticeable effect only at very high velocities and a strong effect at near the speed of light.

Remembering that muons travel fast enough to experience relativistic effects, it becomes clear what's going on. From our point of view they decay (what turns out to be) 10x slower. So instead of 30 halvings, we only see them go through 3. At 30000 feet only 2^3 = 8 muons per second go through your head, and you can survive that just fine!
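Using the round numbers from this post (muons travel about 1000 ft/μs and halve once per half-life, with a time-dilation factor of roughly 10), both estimates can be checked in a few lines of Python:

```python
SPEED_FT_PER_US = 1000  # muon speed: about 1000 feet per microsecond

def muons_per_second(height_ft, at_surface=1, dilation=1):
    """Muon rate at a given height, given the rate at the Earth's surface."""
    halvings = height_ft / (SPEED_FT_PER_US * dilation)  # half-lives spent descending
    return at_surface * 2**halvings

print(muons_per_second(30_000))               # 2^30: over a billion without relativity
print(muons_per_second(30_000, dilation=10))  # 2^3 = 8 with time dilation
```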

This just leaves one puzzle: from the muons' reference frames it is our clocks that slow down, not their own. How do we explain them halving only thrice between airplane height and the Earth's surface from their point of view?

This post is inspired by a 2002 Princeton physics lecture by Peter Meyers. A similar story appears on the Stanford SLAC website.

Update (5/11/10): my following post answers the muon riddle.

Wednesday, May 05, 2010

Paradox Lost

Imagine the following game. You're given $1. Then, you keep flipping a fair coin, and every time the coin lands on heads, your money is doubled. The game ends the first time you see tails. A little thought reveals that your expected payoff is (1/2)$1 + (1/4)$2 + (1/8)$4 + (1/16)$8 + ... = $1/2 + $1/2 + $1/2 + ... = infinity! No matter how much you're willing to pay per game, if you play enough times, you'll eventually come out ahead. But how much would you pay to play this game once?

Intuitively, you'd be crazy to pay more than $20, and you'd be right not to. One reason is that your utility for money isn't linear: you just don't value 10 billion dollars 10 times more than 1 billion. Another reason is if you got lucky enough, there wouldn't be enough money in the world to pay your winnings -- capping the game at, say, a trillion dollars makes its expected value only around $20. But you wouldn't get lucky enough -- because one-in-a-trillion events just don't occur in real life.  That you shouldn't pay more than $20 to play this infinitely profitable game is known as the St. Petersburg paradox.
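The effect of the cap is easy to compute directly. Here's a sketch in Python, where a cap of 2^40 dollars (about a trillion) is my stand-in for "all the money in the world":

```python
# Payoff is 2^k dollars with probability (1/2)^(k+1), for k = 0, 1, 2, ...
# Each term contributes half a dollar, so the uncapped sum diverges.
# Capping payoffs at 2^40 dollars (about a trillion) truncates the series.
CAP_EXPONENT = 40
ev = sum(0.5**(k + 1) * 2**k for k in range(CAP_EXPONENT))  # 40 terms of $0.50
ev += 0.5**CAP_EXPONENT * 2**CAP_EXPONENT  # all longer runs pay out the cap
print(ev)  # 21.0 -- around the $20 mentioned above
```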

This paradox is related to the two envelopes problem in my previous post (required reading for the next part). The way I set it up, the two envelopes problem is also a game of infinite expectation.  Whether you exchange envelopes or not, your expected payoff a priori is infinite.  That's why you can swap envelopes without even looking at what's in yours and expect to gain about 22% -- 1.22 times infinity is still infinity! And the problem with both the St. Petersburg game and the two envelopes problem is that no matter how much money you get, it's less than the expectation.  In both games, you get a finite amount with probability 1, but what's that compared to infinity?  It's also related to why you always gain in expectation by switching.  So, can we make the game's expected value finite and still get the two envelopes paradox? The answer is no! Paradox lost.

I'm happy enough with this explanation, but I'd love to see other ideas about how to resolve these problems.  The two envelopes problem and others like it bothered me for a long time, but I'm trying to accept that often our intuitions simply break when we deal with infinity.

If you want more detail on these paradoxes, David Chalmers has some nice papers, which proved useful in making this post.

Sunday, May 02, 2010

Two Envelopes

When I was an undergrad, a friend showed me a version of the following famous paradox.  Suppose somebody puts money into two envelopes, with one envelope getting thrice as much money as the other.  You pick an envelope at random and see that it has $90.  Now you have the option to exchange the $90 for the money in the other envelope.  Should you do it?  Well, not having much information, you figure it's about as likely that the other one has $30 as it does $270 and compute that in expectation you'll get 0.5($30)+0.5($270) = $150 if you swap.  This of course works for any amount x -- you can get about 1.67x in expectation by swapping.  You choose the envelope at random but always want to swap for the money in the other envelope -- this is the paradox! Does it trouble you?

It shouldn't -- because I cheated here.  I never said how the money was placed in the envelopes. For example, because there are infinitely many choices for possible amounts of money you can place in the envelopes, you can't choose from them uniformly at random.  And there are lots of intuitive distributions that won't work; perhaps the whole setting is impossible.

So let's be more careful.  Let's say the person placing the money chooses a positive integer y with probability 2^(-y) (this is an honest distribution) and then places 3^y dollars in one envelope and 3^(y+1) dollars in the other.  Now, when you open an envelope at random and see x dollars, you can figure out the probabilities of there being x/3 and 3x in the other.  If x = 3, you know you'll win by switching because the other envelope must have $9.  Otherwise it's not hard to see that you'll get x/3 with probability 2/3 and 3x with probability 1/3, giving you about 1.22x in expectation for switching -- not exactly the 1.67x, but still a gain.
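These conditional probabilities are easy to check by simulation. Here's a sketch in Python, assuming exactly the setup above (the sample count and random seed are arbitrary choices of mine):

```python
import random

random.seed(1)

def draw():
    """Pick y = 1, 2, 3, ... with probability 2^(-y), fill the envelopes
    with 3^y and 3^(y+1) dollars, and open one of them at random."""
    y = 1
    while random.random() < 0.5:
        y += 1
    if random.random() < 0.5:
        return 3**y, 3**(y + 1)  # (opened, other)
    return 3**(y + 1), 3**y

smaller = larger = 0
for _ in range(200_000):
    seen, other = draw()
    if seen == 9:  # condition on opening an envelope with $9
        if other == 3:
            smaller += 1
        else:
            larger += 1

# The fraction of runs where the other envelope held $3 should be close to 2/3.
print(smaller / (smaller + larger))
```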

Now again we get the same paradox: you choose an envelope at random but always win (in expectation) if you switch -- this even works if you don't look at the amount in the envelope!  Does this break probability or only intuition?

A note: various write-ups of this problem exist online, including one on Gowers's weblog from which I borrowed ideas for this entry.

Update (5/9/10): my following post is also about this paradox.