Saturday, February 18, 2012
In early 2010, Google apologized for the way Google Buzz had revealed people's Gmail contacts to the world. Later that year, the company announced that its Street View cars had been recording the data being transmitted over WiFi networks they drove by. And just this week, the Wall Street Journal and privacy researcher Jonathan Mayer revealed that Google had been using cookies in a way that directly contradicted what it had been telling users to do if they didn't want cookies.
Once is an accident, and twice a coincidence, but three times is a sign of a company with a compliance problem. All three of these botches went down the same way. A Google programmer implemented a feature with obvious and serious privacy implications. The programmer's goal in each case was relatively innocuous. But in each case he or she designed the feature in a way that had the predictable effect of handling people's private information in a way that blatantly violated the company's purported privacy principles. Then--and this is the scary part--Google let the feature ship without noticing the privacy time bomb it contained.
Google was founded and is run as an engineering-driven company, which has given it amazing vitality and energy and the ability to produce world-changing products. But even as the company has become a dominant powerhouse on which hundreds of millions of people depend, it continues to insist that it can run itself as a freewheeling scrum because, er, um, Google is special, Google's values are better than the competition's, and Google employees are smarter than your average bear. All of these may be true, but adult companies have adult responsibilities, and one of them is to train and supervise their employees. Google is stuck in a perpetual adolescence, and it's getting old fast.
The only other firms I can think of with this kind of sustained inability to make their internal controls stick are on Wall Street. (See, e.g.) Google has already had to pay out a $500 million fine for running advertisements for illegal pharmaceutical imports. And the company is already operating under a stringent consent decree with the FTC from the Buzz debacle. If those weren't sufficient to convince Larry Page to put his house in order, it's hard to know what will be. Sooner or later, the company will unleash on the Internet a piece of software written by the programmer equivalent of a Jérôme Kerviel or a Kweku Adoboli and it won't be pretty, for the public or for Google.
Actually, a few years ago a batch of user searches done on Google by AOL users was released by AOL for benign research purposes. The supposedly anonymous data was soon found to refer to identifiable people, revealing their search patterns. They had allowed this simply by enabling cookies.
Google exists to make money, not to uphold ideas of good and evil. As pressures pile up, it will sell off privacy that people did not even know they had surrendered.
Posted by: Frances | Feb 20, 2012 8:45:58 AM
A UK defence contractor can declare that a document prepared for the government should be exempt from Freedom of Information requests because of (various reasons, including) commercial confidentiality. However, you mustn’t just plaster a statement to that effect on every document, or the Department concerned will conclude you haven’t thought about it and ignore *all* such statements.
Sounds as though it’s important to get it right, doesn’t it!
So the lawyers issued an instruction to all engineers to be careful to make the right decision. But when asked for clarification/help, they bailed completely and referred back to their original instruction.
So what’s a poor engineer to do? (My choice was, never put the notice on anything I produced; all far too low-level and technical to interest the press!)
Posted by: Simon | Feb 20, 2012 10:13:42 AM
The AOL search data came from AOL subscribers, so they weren't identified using cookies.
Posted by: James Grimmelmann | Feb 20, 2012 3:39:55 PM
I think you underestimate the difficulty of correct programming. Given that you're a lawyer and not a practicing computer programmer, this is understandable, so let me spell it out. The bugs you're discussing are not bugs that affected the operation of the software - they didn't stop anything from working. They resulted in data going where it wasn't supposed to, and that data was then ignored because nobody was looking for it, because it wasn't supposed to be there.
This is very hard to detect reliably; in fact, it's an entirely new class of bugs in software, one that isn't widely understood or trained for because such bugs are rare. So the usual ways software engineers tackle bugs don't help: there are no automated tools for checking this, code reviewers may miss the subtle side effects, and so on. As if programming weren't already hard enough!
Perhaps you think these technologies are simple. Why not examine this discussion of Safari's cookie policy, which takes place between senior developers working at Apple and Google (they co-operate on Safari development):
"We also believe this change will fix other bugs with the current policy, where sites attempt to log the user out by overwriting an existing cookie with an expired one. Granted, we could address those cases with a much narrower exception."
In other words, Safari's policy was weird enough and complicated enough that it broke Facebook in obscure and hard-to-understand ways; in fact, eventually the Safari team themselves had to figure out what was happening. They knew about this risk when they implemented the policy and decided that breaking things was worth it for privacy, although the developers on that thread agree that the privacy benefits are vague and impossible to measure in any meaningful way (which means there may be none at all).
Posted by: Mike | Feb 25, 2012 1:35:47 PM
Mike, I was a full-time programmer before I went to law school, and the last time I wrote code was a couple of months ago. So I am intimately, even painfully, familiar with how hard it is to write bug-free code. (Indeed, this is a significant theme in my scholarship.) Thanks for the more detailed explanation than I gave, which I'm sure other readers will also find useful.
My belief that Google should nonetheless have caught the Safari cookie bug comes from three lines of reasoning. First, Google's public statements about Safari cookie blocking raised the bar: the company made promises it didn't keep. Second, using a workaround for a Safari-specific policy to link cookies with ads should at least raise a question about how this would affect ad cookies on Safari, which targeted testing would then have flagged as a problem. And third, a privacy test suite that checked the efficacy of Google's published opt-out instructions would have picked up a regression.
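To make the third line of reasoning concrete, here is a minimal sketch of what such a privacy regression test might look like. Everything in it is hypothetical (the `serve_ad` stand-in, the cookie names); the point is simply that a published opt-out promise can be turned into an executable assertion rather than left as prose.

```python
# Hypothetical sketch: turning an opt-out promise into a regression test.
# serve_ad() is a stand-in for the ad server; it returns the Set-Cookie
# headers the server would emit for a given set of request cookies.

OPT_OUT_COOKIE = "OPT_OUT=1"   # the cookie the opt-out instructions tell users to set
TRACKING_COOKIE = "id"         # the ad-personalization cookie under test

def serve_ad(request_cookies):
    """Return the Set-Cookie headers the ad server would send."""
    if OPT_OUT_COOKIE in request_cookies:
        return []  # opted-out users must receive no tracking cookie
    return [f"{TRACKING_COOKIE}=abc123; Domain=.example.com"]

def test_opt_out_is_honored():
    """The published opt-out instructions must actually suppress tracking cookies."""
    set_cookies = serve_ad(request_cookies=[OPT_OUT_COOKIE])
    assert not any(c.startswith(TRACKING_COOKIE + "=") for c in set_cookies), \
        "regression: tracking cookie set despite opt-out"

def test_default_still_tracks():
    """Sanity check: without opt-out the cookie is set, so the test above
    is exercising a real code path rather than passing vacuously."""
    set_cookies = serve_ad(request_cookies=[])
    assert any(c.startswith(TRACKING_COOKIE + "=") for c in set_cookies)

test_opt_out_is_honored()
test_default_still_tracks()
```

A suite like this, run in a Safari-configured browser environment against the real ad servers, is the kind of targeted testing that would have flagged the workaround's effect on ad cookies before it shipped.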
Posted by: James Grimmelmann | Feb 25, 2012 1:48:20 PM
Sorry to come late to the party, but I only found this while researching something else.
The 'one rogue software engineer' line, put out by Google and in part supported by Mike above, is of course a smokescreen worthy of Rupert Murdoch's empire.
For the Street View cars to have been capable of picking up WiFi signals, they would have had to be equipped with the necessary antennas and RF signal detection/demodulation circuitry before there was any data for the software engineer to store. This means the company was clearly aware beforehand that it would be scanning for this information; otherwise it would have had no need to build in the physical (i.e. hardware) capability from which the data could be channelled onto a storage device, which was presumably the same storage that held the imagery and GPS data, so that it could all be tied together at some later stage. Google knew exactly what was going on from the outset.
Posted by: Andy J | Jun 7, 2012 8:57:33 AM