Monday, August 30, 2010

Ten Fallacies About Web Privacy

We are not used to the Internet reality that something can be known and at the same time no person knows it.

By PAUL H. RUBIN  WSJ   August 30, 2010

Privacy on the Web is a constant issue for public discussion—and Congress is always considering more regulations on the use of information about people's habits, interests or preferences on the Internet. Unfortunately, these discussions lead to many misconceptions. Here are 10 of the most important:

1) Privacy is free. Many privacy advocates believe it is a free lunch—that is, consumers can obtain more privacy without giving up anything. Not so. There is a strong trade-off between privacy and information: The more privacy consumers have, the less information is available for use in the economy. Since information helps markets work better, the cost of privacy is less efficient markets.

2) If there are costs of privacy, they are borne by companies. Many who do admit that privacy regulations restricting the use of information about consumers have costs believe those costs are borne entirely by firms. Yet consumers get tremendous benefits from the use of information.

Think of all the free stuff on the Web: newspapers, search engines, stock prices, sports scores, maps and much more. Google alone lists more than 50 free services—all ultimately funded by targeted advertising based on the use of information. If revenues from advertising are reduced or if costs increase, then fewer such services will be provided.

3) If consumers have less control over information, then firms must gain and consumers must lose. When firms have better information, they can target advertising better to consumers—who thereby get better and more useful information more quickly. Likewise, when information is used for other purposes—for example, in credit rating—then the cost of credit for all consumers will decrease.

4) Information use is "all or nothing." Many say that firms such as Google will continue to provide services even if their use of information is curtailed. This is sometimes true, but the services will be lower-quality and less valuable to consumers as information use is more restricted.

For example, search engines can better target searches if they know what searchers are looking for. (Google's "Did you mean . . ." to correct typos is a familiar example.) Keeping a past history of searches provides exactly this information. Shorter retained search histories mean less effective targeting.
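
To make the mechanism concrete, here is a minimal Python sketch, in no way Google's actual system, of how a store of past queries can drive a "Did you mean . . ." correction. The larger the retained history, the more candidate corrections exist, which is exactly why shorter histories mean weaker targeting.

```python
from collections import Counter

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # delete ca
                           cur[j - 1] + 1,               # insert cb
                           prev[j - 1] + (ca != cb)))    # substitute
        prev = cur
    return prev[-1]

def did_you_mean(typo: str, history: Counter, max_dist: int = 2):
    """Suggest the most frequent past query within max_dist edits of the typo."""
    candidates = [(q, n) for q, n in history.items()
                  if edit_distance(typo, q) <= max_dist]
    return max(candidates, key=lambda qn: qn[1])[0] if candidates else None

# The retained search history: a bigger history means more candidates.
history = Counter({"cloud computing": 300, "privacy policy": 120,
                   "price discrimination": 45})
print(did_you_mean("clowd computing", history))  # -> cloud computing
```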

5) If consumers have less privacy, then someone will know things about them that they may want to keep secret. Most information is used anonymously. To the extent that things are "known" about consumers, they are known by computers. This notion is counterintuitive; we are not used to the concept that something can be known and at the same time no person knows it. But this is true of much online information.
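
A toy sketch of that counterintuitive idea, with invented cookie IDs and topics: the profile store and the ad selection below are read only by code, while the single human-facing output is an aggregate count that names no one.

```python
from collections import Counter, defaultdict

interests_by_cookie = defaultdict(set)  # profile store, read and written only by code

def record_pageview(cookie_id: str, topic: str) -> None:
    """Machines update a profile that no employee ever reads individually."""
    interests_by_cookie[cookie_id].add(topic)

def pick_ad(cookie_id: str) -> str:
    """Machines pick an ad from the profile, again without human eyes."""
    return "travel-ad" if "travel" in interests_by_cookie[cookie_id] else "generic-ad"

def human_facing_report() -> Counter:
    """The only human-visible output: aggregate topic counts, no identifiers."""
    return Counter(t for topics in interests_by_cookie.values() for t in topics)

record_pageview("c-7f3", "travel")
record_pageview("c-9b1", "sports")
print(pick_ad("c-7f3"))       # 'travel-ad', chosen entirely by software
print(human_facing_report())  # Counter({'travel': 1, 'sports': 1})
```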

6) Information can be used for price discrimination (differential pricing), which will harm consumers. For example, it might be possible to use a history of past purchases to tell which consumers might place a higher value on a particular good. The welfare implications of discriminatory pricing in general are ambiguous. But if price discrimination makes it possible for firms to provide goods and services that would otherwise not be available (which is common for virtual goods and services such as software, including cell phone apps) then consumers unambiguously benefit.
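
A worked numeric example, with hypothetical figures, shows how differential pricing can make a virtual good viable when no uniform price covers its fixed cost:

```python
# Hypothetical app-market figures: neither uniform price covers the fixed
# cost, so without differential pricing the app is never built at all.
fixed_cost = 1500.00          # one-time cost of developing the app

high_value_buyers = 100       # each willing to pay up to $12.00
low_value_buyers = 400        # each willing to pay up to $1.50

revenue_uniform_low  = 1.50 * (high_value_buyers + low_value_buyers)         # $750
revenue_uniform_high = 12.00 * high_value_buyers                             # $1200
revenue_discriminate = 12.00 * high_value_buyers + 1.50 * low_value_buyers   # $1800

for label, revenue in [("uniform $1.50", revenue_uniform_low),
                       ("uniform $12.00", revenue_uniform_high),
                       ("differential", revenue_discriminate)]:
    viable = "viable" if revenue >= fixed_cost else "not viable"
    print(f"{label}: ${revenue:.2f} -> {viable}")
```

In this toy case every buyer pays no more than the good is worth to them, so all 500 are better off than in the no-product outcome.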

7) If consumers knew how information about them was being used, they would be irate. When something (such as tainted food) actually harms consumers, they learn about the sources of the harm. But in spite of warnings by privacy advocates, consumers don't bother to learn about information use on the Web precisely because there is no harm from the way it is used.

8) Increasing privacy leads to greater safety and less risk. The opposite is true. Firms can use information to verify identity and reduce Internet crime and identity theft. Think of being called by a credit-card provider and asked a series of questions when using your card in an unfamiliar location, such as on a vacation. If this information is not available, then less verification can occur and risk may actually increase.
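
Here is a hedged sketch of that verification logic, with invented coordinates and thresholds: the more the issuer knows about a cardholder's usual locations, the more precisely it can challenge only the genuinely unusual transactions.

```python
from math import radians, sin, cos, asin, sqrt

def distance_km(a, b):
    """Great-circle distance in km between (lat, lon) points (haversine)."""
    la1, lo1, la2, lo2 = map(radians, (*a, *b))
    h = sin((la2 - la1) / 2) ** 2 + cos(la1) * cos(la2) * sin((lo2 - lo1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def needs_verification(txn_location, usual_locations, radius_km=200):
    """Challenge a transaction only if it is far from every usual location."""
    return all(distance_km(txn_location, loc) > radius_km
               for loc in usual_locations)

home, office = (33.75, -84.39), (33.79, -84.32)  # hypothetical Atlanta spots
vacation_purchase = (48.86, 2.35)                # Paris
print(needs_verification(vacation_purchase, [home, office]))  # True: ask questions
print(needs_verification(office, [home, office]))             # False: no friction
```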

9) Restricting the use of information (such as by mandating consumer "opt-in") will benefit consumers. In fact, since the use of information is generally benign and valuable, policies that lead to less information being used are generally harmful.

10) Targeted advertising leads people to buy stuff they don't want or need. This belief is inconsistent with the basis of a market economy. A market economy exists because buyers and sellers both benefit from voluntary transactions. If this were not true, then a planned economy would be more efficient—and we have all seen how that works.

Mr. Rubin teaches economics at Emory University.

Wednesday, August 25, 2010

Federal CIOs Issue Cloud Computing Privacy Framework

Poorly planned and executed cloud computing contracts could result in security disaster, warns CIO Council.

By J. Nicholas Hoover,  InformationWeek,  Aug. 25, 2010

Although cloud computing represents a possible solution to the government's rapidly increasing on-premises storage needs, federal agencies need to be aware of "significant privacy concerns" associated with storing personally identifiable information in the cloud, the federal CIO Council says in a new document outlining a proposed policy framework on privacy and the cloud.

Federal privacy regulations control how and where federal agencies hold and process personally identifiable information, and the CIO Council warns that, without consulting their legal and privacy teams and putting a plan in place, agencies may run afoul of those regulations.

"Once an agency chooses a cloud computing provider to collect and store information, the individual is no longer providing information solely to the government, but also to a third party who is not necessarily bound by the same laws and regulations," the document says.

Federal agencies must follow laws like the E-Government Act and the Privacy Act and regulations like the National Institute of Standards and Technology's Special Publication 800-53, but cloud providers are bound only indirectly: they need comply with those rules only to the extent required to keep serving the federal government.

The risks include contractual terms of service set so loosely that they allow the provider to analyze or search the data; the possibility that the data could become an asset in bankruptcy, that foreign law enforcement may search the data pursuant to a court order or other request, or that the service provider fails to inform the government of a breach; and the possibility that the cloud provider cannot supply a full and accessible audit trail to the government.

Certain privacy laws may also make it harder for agencies to host data on the cloud. For example, the document notes, the Health Insurance Portability and Accountability Act (HIPAA) requires formal agreements before the government can share records with a cloud provider.

Despite those risks, however, the CIO Council notes that "a thoughtfully considered" cloud deployment can actually enhance privacy and make agency information more secure.

The document recommends agencies maintain a focus on contract language that meets federal privacy needs and regulations, conduct what the CIO Council terms a Privacy Threshold Analysis to determine whether a new system creates privacy risks, and then carry out a Privacy Impact Assessment to assess and help mitigate those risks.

According to the document, Privacy Threshold Analyses should address things like changes in how data is managed, consolidation of data, and new public and inter-agency access and use, while Privacy Impact Assessments should address specifics about the data itself -- what it is, why it's being collected, with whom it will be shared, and so on.

Microsoft ID guru slams 'duplicitous' Apple

Jobs' non-personal data claim 'hogwash'

By Cade Metz in San Francisco Posted in The Register, August 24, 2010

Microsoft chief architect of identity Kim Cameron has insisted that the "non-personal information" collected by Apple can be used to personally identify you – despite angry counterarguments from at least one Jobsian fanboi.

At a privacy conference in Seattle, Washington, Cameron last week gave a talk that touched on Apple's recent changes to its iPhone privacy policy, which he first flagged up [1] in late June.

Apple now says it will collect "non-personal information - data in a form that does not permit direct association with any specific individual," and it reserves the right to disclose this information "for any purpose."

This, according to Apple, includes data such as "occupation, language, zip code, area code, unique device identifier, location, and the time zone where an Apple product is used so that we can better understand customer behavior and improve our products, services, and advertising."

Cameron has long said that Apple's characterization of this data as non-personal is "unbelievably specious." And at the end of his talk, one audience member took issue with this stance. "My questioner was clearly a bit irritated with me," Cameron says in a blog post [2].

"Didn’t I realize that the 'unique device identifier' was just a GUID - a purely random number? It wasn’t a MAC address. It was not personally identifying."

But Cameron had already made his case quite clearly. "The question really perplexed me, since I had just shown a slide demonstrating how if you go to this well-known website [Whitepages.com [3]] and enter a location you find out who lives there," Cameron says.

"I pointed out the obvious: if Apple releases your location and a GUID to a third party on multiple occasions, one location will soon stand out as being your residence...Then presto, if the third party looks up the address in a 'Reverse Address' search engine, the 'random' GUID identifies you personally forever more. The notion that location information tied to random identifiers is not personally identifiable information is total hogwash."

But the fanboi was unbowed. "Is your problem that Apple’s privacy policy is so clear?" Cameron's questioner asked. "Do you prefer companies who don’t publish a privacy policy at all, but rather just take your information without telling you?"

According to Cameron, the question was shot down by a collective groan from his audience. But it stuck with him. "I personally found the question thought provoking," Cameron says. "I assume corporations publish privacy policies - even those as duplicitous as Apple’s - because they have to. I need to learn more about why."

Cameron did not immediately respond to our request for comment. But when he learns more about Apple's "duplicitous" policy writing, we'll let you know. ®
Links
  1. http://www.identityblog.com/?p=1136
  2. http://www.identityblog.com/?p=1154
  3. http://www.whitepages.com/reverse_address