Tuesday, November 1, 2011

Why Johnny Can't Opt Out: A Usability Evaluation of Tools to Limit Online Behavioral Advertising

Title: Why Johnny Can’t Opt Out: A Usability Evaluation of Tools to Limit Online Behavioral Advertising
Authors: Pedro G. Leon, Blase Ur, Rebecca Balebako, Lorrie Faith Cranor, Richard Shay, and Yang Wang
Publication Date: October 31, 2011

Abstract
We present results of a 45-participant laboratory study investigating the usability of tools to limit online behavioral advertising (OBA). We tested nine tools, including tools that block access to advertising websites, tools that set cookies indicating a user’s preference to opt out of OBA, and privacy tools that are built directly into web browsers. We interviewed participants about OBA, observed their behavior as they installed and used a privacy tool, and recorded their perceptions and attitudes about that tool. We found serious usability flaws in all nine tools we examined.

The online opt-out tools were challenging for users to understand and configure. Users tend to be unfamiliar with most advertising companies, and therefore are unable to make meaningful choices. Users liked the fact that the browsers we tested had built-in Do Not Track features, but were wary of whether advertising companies would respect this preference. Users struggled to install and configure blocking lists to make effective use of blocking tools. They often erroneously concluded the tool they were using was blocking OBA when they had not properly configured it to do so.
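The two opt-out mechanisms the abstract describes are simple at the protocol level: the browser either sends a Do Not Track header with every request, or it carries an industry opt-out cookie in place of a tracking cookie. Below is a minimal Python sketch of how an ad server might check for either signal; the function name and the OPT_OUT cookie value follow common 2011-era conventions and are illustrative, not taken from the study or any particular ad network.

def has_opted_out(headers, cookies):
    """Return True if the request carries either opt-out signal."""
    # Do Not Track: browsers with the feature enabled send the header "DNT: 1".
    if headers.get("DNT") == "1":
        return True
    # Opt-out cookies: industry opt-out pages typically replace the tracking
    # cookie with a fixed, non-unique value such as "OPT_OUT".
    if cookies.get("id") == "OPT_OUT":
        return True
    return False

# A request from a browser with Do Not Track turned on:
print(has_opted_out({"DNT": "1"}, {}))   # True

As the study notes, neither signal blocks anything by itself; each merely states a preference that an advertising company may or may not honor.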

Full Report: CMU-CyLab-11-017

--

Privacy: "Things are just getting worse"

Your phone company is selling your personal data

David Goldman @CNNMoneyTech November 1, 2011

NEW YORK (CNNMoney) -- Your phone company knows where you live, what websites you visit, what apps you download, what videos you like to watch, and even where you are. Now, some have begun selling that valuable information to the highest bidder.

In mid-October, Verizon Wireless changed its privacy policy to allow the company to record customers' location data and Web browsing history, combine it with other personal information like age and gender, aggregate it with millions of other customers' data, and sell it on an anonymized basis.

That kind of data could be very useful -- and lucrative -- to third-party companies. For instance, if a small business owner wanted to figure out the best place to open a new pet store, the owner could buy a marketing report from Verizon about a designated area. The report might reveal which city blocks get the most foot or car traffic from people whose Web browsing history reveals that they own pets.
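As a toy illustration of the kind of aggregate report described above, the sketch below groups hypothetical customer records by city block and suppresses any block with too few customers before reporting counts. The record format, field names, and minimum group size are assumptions made for illustration, not a description of Verizon's actual method.

from collections import defaultdict

# Hypothetical per-customer records: (city_block, owns_pet), where "owns_pet"
# would be inferred from browsing history.
records = [
    ("Block 12", True), ("Block 12", True), ("Block 12", False),
    ("Block 12", True), ("Block 12", False),
    ("Block 7", True), ("Block 7", False),
]

MIN_GROUP_SIZE = 5   # assumed threshold: smaller blocks are suppressed entirely

totals = defaultdict(int)
pet_owners = defaultdict(int)
for block, owns_pet in records:
    totals[block] += 1
    pet_owners[block] += owns_pet

# Only aggregate rows, never individual customers, appear in the report.
report = {block: {"customers": totals[block], "pet_owners": pet_owners[block]}
          for block in totals if totals[block] >= MIN_GROUP_SIZE}
print(report)   # {'Block 12': {'customers': 5, 'pet_owners': 3}}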
Verizon (VZ, Fortune 500) is the first mobile provider to publicly confirm that it is actually selling information gleaned from its customers directly to businesses. But it's hardly alone in using data about its subscribers to make extra cash.

All four national carriers use aggregated customer information to help outside parties target ads to their subscribers. AT&T, Sprint and T-Mobile insist that subscriber data is never actually handed over to third-party vendors; nevertheless, they all make money on it.

AT&T's (T, Fortune 500) AdWorks program, for instance, promotes AT&T's customer base to advertisers. On its AdWorks website, AT&T touts its ability to "reach customized audience segments based on anonymous and aggregate demographics." It then shows customers carefully tailored coupons, in-app ads and Web ads.

Sprint (S, Fortune 500), like Verizon, tracks the kinds of websites a customer visits on their mobile devices as well as what applications they use, according to spokesman Jason Gertzen. Sprint uses that data to help third parties target ads to customers.

That's a step further than Verizon goes. It too lets advertisers target customized messages to Verizon subscribers' mobile phones, but for that initiative, it does not incorporate its customers' Web surfing or location data, according to a company spokesman. Verizon relies on other personal information, including customers' demographic details and home address.

T-Mobile declined to answer specific questions about what kind of information it shares or sells, instead pointing CNNMoney to T-Mobile's privacy policy. The policy's open-ended terms seem to suggest that the company does not divulge customer information, but a T-Mobile spokeswoman acknowledged that the company "collects information about the websites that customers visit and their location" and that it "may use that information in an anonymous, aggregate form to improve our services."

Selling customer information is an age-old practice that is certainly not exclusive to the wireless industry. Brian Kennish, a former DoubleClick engineer who developed the advertising network's mobile ad server, noted that wireless companies have been sharing users' location data with third parties for more than a decade.

Why Apple and Google need to stalk you
But the rise of smartphones has given mobile providers an accidental treasure trove of marketable data: The gadgets are hyper-personalized tracking devices that "know" more about their owners than any other product on the market.

Wireless providers are taking advantage of their gold mine.

"At the end of the day, we're getting to a situation where customers are the products that these wireless companies are selling," said Nasir Memon, a professor of computer science at New York University's Polytechnic Institute. "They're creating a playground to attract people and sell them to advertisers. People are their new business."

There's a lot of money to be made in the largely untapped local advertising markets. A BIA/Kelsey study from March predicts that U.S. local online ad revenues will reach $42.5 billion annually in 2015.

Google (GOOG, Fortune 500) and Facebook are scrambling to sign local businesses to their new services like Facebook Places, Google Wallet and Google Places. But with smartphone customer data in their arsenal, wireless carriers are well positioned to swoop in as well.

"Verizon revealed the industry's strategy," said Jeff Chester, executive director of the Center for Digital Democracy. "This is more than the camel's nose under the tent.

With NFC [near field communication, an emerging technology for mobile payments] and GPS, there's a new digital gold rush here, and wireless companies want to reap the tremendous financial rewards that will come with dominating a local advertising market."

Chester noted that Verizon was the first to admit that it's selling customer data for local advertising and business-development purposes, but he said he believes all of the industry's players are involved in using subscriber information for that purpose.

"They're all doing this," he said. "Everyone is aware that big growth in the digital economy is mobile and location-based services."

For its part, Verizon has largely been applauded by privacy groups for at least being transparent about what it's doing and pointing users to an opt-out site if they don't wish to participate. But privacy advocates are concerned about the direction wireless companies are headed.

"The Web pages we go to and searches we do are the closest thing to our thoughts, the most private info of all, that can be recorded," said Kennish, who now heads up Disconnect, an online privacy tool. "If Verizon succeeds, I'm sure others will follow. Despite all the talk about privacy lately, things are just getting worse." 


Monday, October 31, 2011

Privacy and Security in the Implementation of Health Information Technology (Electronic Health Records): U.S. and EU Compared

      Privacy and Security in the Implementation of Health Information Technology (Electronic Health Records): U.S. and EU Compared, B.U. J. SCI. & TECH. L., Vol. 17, Winter 2011. "The importance of the adoption of Electronic Health Records (EHRs) and the associated cost savings cannot be ignored as an element in the changing delivery of health care. However, the potential cost savings predicted in the use of EHR are accompanied by potential risks, either technical or legal, to privacy and security. The U.S. legal framework for healthcare privacy is a combination of constitutional, statutory, and regulatory law at the federal and state levels. In contrast, it is generally believed that EU protection of privacy, including personally identifiable medical information, is more comprehensive than that of U.S. privacy laws. Direct comparisons of U.S. and EU medical privacy laws can be made with reference to the five Fair Information Practices Principles (FIPs) adopted by the Federal Trade Commission and other international bodies. The analysis reveals that while the federal response to the privacy of health records in the U.S. seems to be a gain over conflicting state law, in contrast to EU law, U.S. patients currently have little choice in the electronic recording of sensitive medical information if they want to be treated, and minimal control over the sharing of that information. A combination of technical and legal improvements in EHRs could make the loss of privacy associated with EHRs de minimis. The EU has come closer to this position, encouraging the adoption of EHRs and confirming the application of privacy protections at the same time. It can be argued that the EU is proactive in its approach; whereas because of a different viewpoint toward an individual’s right to privacy, the U.S. system lacks a strong framework for healthcare privacy, which will affect the implementation of EHRs. If the U.S. is going to implement EHRs effectively, technical and policy aspects of privacy must be central to the discussion."

Monday, October 24, 2011

Jim Dempsey Op-ed: The shocking strangeness of our 25-year-old digital privacy law

Op-ed: The shocking strangeness of our 25-year-old digital privacy law

By Jim Dempsey 

Op-ed: Twenty-five years after it was passed, the Electronic Communications Privacy Act still governs much of our privacy online, and the Center for Democracy and Technology argues that ECPA needs an overhaul. The opinions in this post do not necessarily reflect the views of Ars Technica.

Cell phones the size of bricks, "portable" computers weighing 20 pounds, Ferris Bueller's Day Off, and the federal statute that lays down the rules for government monitoring of mobile phones and Internet traffic all have one thing in common: each is celebrating its 25th anniversary this year.

The Electronic Communications Privacy Act (ECPA) was signed into law on October 21, 1986. Although it was forward-looking at the time, ECPA’s privacy protections have remained stuck in the past while technology has raced ahead, providing us means of communication that not too long ago existed only in the minds of science fiction writers.

Citing ECPA, the government claims it can track your movements without having to get a warrant from a judge, using the signal your mobile phone silently sends out every few seconds. The government also claims it can read your e-mail and sneak a peek at your online calendar and the private photos you have stored in “the cloud," all without a warrant.

The government admits that if it wants to seize photos on your hard drive, it needs a warrant from a judge. And if it wants to intercept your e-mail en route, well, it needs a warrant for that, too. But once the data comes to rest on the Internet’s servers, the government claims you’ve lost your privacy rights in it. Same data, different rules.

Sound illogical? Out of step with the way people use technology today? It is. Most people assume the Constitution protects them against unreasonable searches and seizures, regardless of technology. The Justice Department thinks differently. It argues that the Fourth Amendment's warrant requirement does not apply to data stored online.

That’s the same argument the government made about telephones 80 years ago. If you really wanted your privacy, the government argued, you wouldn’t use the telephone. Unfortunately, in 1928 the Supreme Court agreed and said that wiretapping was not covered by the Constitution. It took the Court 40 years to rule that ordinary telephone calls were protected.

The courts have been equally slow in recognizing the significance of the Internet. The Supreme Court still has never ruled on whether e-mail is protected by the Constitution. Next month, the Supreme Court will hear oral argument in a case involving GPS tracking; let’s hope it doesn’t tell us we have to wait 40 years for the Constitution to cover GPS. But whatever the outcome in that case, it is unlikely to resolve all the issues associated with the new technologies we depend on now in our daily lives.

Search, but with a warrant
It’s time for Congress to update ECPA to require a warrant whenever the government reads our e-mail or tracks our movements. No competent programmer would be content to release version 1.0 of a program and then just walk away, ignoring bug reports and refusing all requests for upgraded features. Why should Congress be content with version 1.0 of our digital privacy law?

The good news is that an upgrade is in the works. Leading Internet companies and public interest groups from the left and the right have founded the Digital Due Process coalition to press Congress to enact reforms to ECPA. DDP's chief request is that, just as the government needs a warrant to enter your house or seize your computer, it should get a warrant before gaining access to your private communications stored online or tracking you via your mobile phone.

Congress has taken note. Earlier this week, Senators Ron Wyden (D-OR) and Mark Kirk (R-IL) held a press conference to highlight their bi-partisan sponsorship of a bill requiring government agents to get a warrant before using technological means to track an individual. The press conference was held amid a "Retro Tech Fair" that displayed a dazzling array of 1986-era computers—highlighting just how far technology has come since ECPA was passed.

Just yesterday, Sen. Patrick Leahy (D-VT), the original author of ECPA, announced his intention to schedule a Committee markup before year's end on his ECPA reform bill.

These are encouraging steps, but you can be sure that the Justice Department will put up a fight. Prosecutors would rather act on their own, without going before a judge. They will raise all kinds of arguments about why the standard set in the Constitution over 200 years ago should not apply to the Internet.

Proponents of stronger privacy protection are gearing up, too. A left-right coalition spanning political ideologies has launched a campaign where individuals can add their name to a petition urging Congress to enact strong privacy protections.

You can get nostalgic for a 25-year-old movie, but there's nothing endearing about a 25-year-old digital privacy law.

Jim Dempsey is the Vice President for Public Policy at the Center for Democracy & Technology in Washington, DC.

Monday, October 17, 2011

The Default Choice, So Hard to Resist

By STEVE LOHR

IN the wide-open Web, choice and competition are said to be merely “one click away,” to use Google’s favorite phrase. But in practice, the power of digital distribution channels, default product settings and traditional human behavior often matters most.

In a Senate hearing last month about Google, Jeremy Stoppelman, the chief executive of Yelp, pointed to that reality in his testimony. “If competition really were just ‘one click away,’ as Google suggests,” he said, “why have they invested so heavily to be the default choice on Web browsers and mobile phones?”

“Clearly,” he added, “they are not taking any chances.”

Indeed, Google made a big bet early in its history: In 2002, it reached a deal with AOL, guaranteeing a payment of $50 million to come from advertising revenue if AOL made Google its automatic first-choice search engine — the one shown to users by default. Today, Google pays an estimated $100 million a year to Mozilla, coming from shared ad revenue, to be the default search engine on Mozilla’s popular Firefox Web browser in the United States and other countries. Google has many such arrangements with Web sites.

Most economists agree that Google’s default deals aren’t anticompetitive. Rivals like Bing, the general search engine from Microsoft, and partial competitors like Yelp, an online review and listing service for local businesses, have their own Web sites and other paths of distribution. Choice, in theory, is one click away.

But most people, of course, never make that single click. Defaults win.

The role of defaults in steering decisions is by no means confined to the online world. For behavioral economists, psychologists and marketers, defaults are part of a rich field of study that explores “decision architecture” — how a choice is presented or framed. The field has been popularized by the 2008 book “Nudge,” by Richard H. Thaler, an economist at the University of Chicago and a frequent contributor to the Sunday Business section, and Cass R. Sunstein, a Harvard Law School professor who is now on leave and is working for the Obama administration. Default choices are one kind of nudge.

In decision-making, examples of the default preference abound: Workers are far more likely to save in retirement plans if enrollment is the automatic option. And the percentage of pregnant women tested for H.I.V. in some African nations where AIDS is widespread has surged since the test became a regular prenatal procedure and women had to opt out if they didn’t want it.

A study published in 2003 showed that while large majorities of Americans approved of organ donations, only about a quarter consented to donate their own. By contrast, nearly all Austrians, French and Portuguese consent to donate theirs. The default explains the difference. In the United States, people must choose to become an organ donor. In much of Europe, people must choose not to donate.

Defaults, according to economists and psychologists, frame how a person is presented with a choice. But they say there are other forces that make the default path hard to resist. One is natural human inertia, or laziness, that favors making the quick, easy choice instead of exerting the mental energy to make a different one. Another, they say, is that most people perceive a default as an authoritative recommendation.

“All those work, and that is why defaults are so powerful,” says Eric J. Johnson, a professor at the Columbia Business School and co-director of the university’s Center for Decision Sciences.

THE default values built into product designs can be particularly potent in the infinitely malleable medium of software, and on the Internet, where a software product or service can be constantly fine-tuned.

“Computing allows you to slice and dice choices in so many ways,” says Ben Shneiderman, a computer scientist at the University of Maryland. “Those design choices also shape our social, cultural and economic choices in ways most people don’t appreciate or understand.”

Default design choices play a central role in the debate over the privacy issues raised by marketers’ tracking of online consumer behavior. The Federal Trade Commission is considering what rules should limit how much online personal information marketers can collect, hold and pass along to other marketers — and whether those rules should be government regulations or self-regulatory guidelines.

Privacy advocates want tighter curbs on gathering online behavioral data, and want marketers to have to ask consumers for permission to collect and share their information, presumably in exchange for discount offers or extra services. Advertisers want a fairly free hand to track online behavior, and to cut back only if consumers choose to opt out.

New research by a team at Carnegie Mellon University suggests the difficulty that ordinary users have in changing the default settings on Internet browsers or in configuring software tools for greater online privacy. The project, called “Why Johnny Can’t Opt Out,” has just been completed and the results have not yet been published. Forty-five people of various backgrounds and ages in the Pittsburgh area were recruited for the study.

To qualify as research subjects, they had to be frequent Internet users and express an interest in learning about protecting their privacy online. Each was interviewed for 90 minutes, and each watched a video showing how online behavioral advertising works.

Then, each person was given a laptop computer and told to set privacy settings as he or she preferred, using one of nine online tools. The tools included the privacy options on browsers like Mozilla Firefox and Microsoft’s Internet Explorer, and online programs like Ghostery and Adblock Plus, as well as Consumer Choice from the Digital Advertising Alliance.

The privacy tools typically proved too complicated and confusing to serve the needs of rank-and-file Internet users.

“The settings they chose didn’t block as much as they thought they were blocking, often blocking nothing,” says Lorrie Faith Cranor, a computer scientist at Carnegie Mellon who led the research.

Ms. Cranor says the research points to the need to simplify privacy software down to a few choices. “If you turn it on, it should be pretty privacy-protective,” she says. “The defaults are crucial.”
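For a sense of what configuring these tools involves, here is a minimal sketch of the mechanism blocking tools such as Adblock Plus and Ghostery rely on: a subscribed list of tracker domains that is consulted for every outgoing request. The two-entry list and the helper name are illustrative; the real tools ship curated lists with thousands of entries and a much richer filter syntax, which is exactly the configuration burden the study documents.

from urllib.parse import urlparse

# Illustrative blocking list; real subscriptions (e.g. EasyList) contain
# thousands of rules in a dedicated filter syntax.
BLOCKED_DOMAINS = {"ads.example.net", "tracker.example.com"}

def should_block(request_url):
    """Return True if the request's host is on the blocking list."""
    host = urlparse(request_url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(should_block("http://tracker.example.com/pixel.gif"))   # True
print(should_block("http://news.example.org/article"))        # False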

Monday, October 3, 2011

New Book (Ordered): Jeff Jarvis: Are We Too Hung Up on Privacy?

Jeff Jarvis: 'We now take it for granted that any piece of information we want is likely a search away.'

By L. GORDON CROVITZ

For many years, privacy has been evolving to become a right as fundamental as equal protection or free speech. But what if it comes at too high a cost? What if we have too much privacy when technology now makes sharing information so much easier and the value of shared information so much greater?

This is the thesis of a new book, "Public Parts: How Sharing in the Digital Age Improves the Way We Work and Live," by Jeff Jarvis, a journalism professor at the City University of New York. In contrast to privacy activists, he argues for "publicness" to make the most of modern technologies.

"Just as we now take it for granted that any piece of information we want is likely a search away," Mr. Jarvis writes, "we are coming to rely on the idea that the people we want to meet are a connection away."

The benefits of social media are leading people increasingly to be more public. Most Americans over the age of 12 now have accounts on Facebook, whose entire purpose is to connect and share personal information. "To join up with fellow diabetics or vegetarians or libertarians or Star Trek fans, we first have to reveal ourselves as members of those groups," Mr. Jarvis writes.

Mr. Jarvis details privacy fears over time arising from new technologies, from the printing press to the telephone to the microphone. A century ago, Kodak cameras made it easy for the first time to take and share photos of people; Teddy Roosevelt for a time banned cameras from parks in Washington as a privacy violation.

Mr. Jarvis is his own test case for the benefits of sharing information. A hyperactive blogger and Twitter poster, he is famous among the social media set for his frank postings about his prostate cancer and the occasional embarrassing side effects of the treatment. He says in return for being so public, he got advice from men who had undergone the same procedure and the satisfaction of urging others to seek treatment. He once complained so bitterly online about problems with his computer that he created what became known online as "Dell hell."

The more than 80,000 people who follow Mr. Jarvis on Twitter know his views on many topics. These are often cranky. He credits WikiLeaks as journalism, thinks advertising revenue will somehow again be enough to fund news reporting, and last Friday reported he had visited Occupy Wall Street demonstrators (Twitter post: "Glad to see they were eating well & a generator powering many Macs") and made a donation. Those of us who disagree with these views appreciate much of the rest of his posts, including his links to interesting articles and events.

Mr. Jarvis is not starry-eyed about the Web. "Some people turn trollish, just announcing that they don't like what I'm saying, adding nothing to the discussion but venom," he told me last week. "They sometimes accuse me of oversharing. Well, I say they're over-listening. If they don't like what I say and don't choose to enter a discussion, they shouldn't follow me—and they shouldn't try to tell me what not to say."

Mr. Jarvis argues it should be up to each person where to balance the risks and rewards of being more public. "When new technologies cause change and fear, government's reflex is to regulate them to protect the past," he says. "But in doing so, they also can cut off the opportunities for the future."

Congress is considering several privacy bills. But Mr. Jarvis calls it a "dire mistake to regulate and limit this new technology before we even know what it can do."

Privacy is notoriously difficult to define legally. Mr. Jarvis says we should think about privacy as a matter of ethics instead. We should respect what others intend to keep private, but publicness reflects the choices "made by the creator of one's own information." The balance between privacy and publicness will differ from person to person in ways that laws applying to all can't capture.

"Perhaps this will lead to what I call the doctrine of mutually assured humiliation," Mr. Jarvis says. "I won't make fun of your silly picture if you don't make fun of mine. Perhaps it will lead to a greater expectation of openness from corporations and transparency from government. Perhaps it will also lead to people being more connected, for they can no longer run away from each other as they'll always be only a link or two apart."

No one could have known when the printing press was invented that it would enable people to share and test new ideas, from democracy to the scientific method. Some day we'll know whether digital technologies have such a profound impact, but we are already altering our behavior to take advantage of what they offer. Yesterday's expectation of privacy is rapidly giving way to something new, and perhaps better.


Friday, September 23, 2011

Old data learns new tricks: Managing patient security and privacy on a new data-sharing playground


Data is quickly becoming one of the health industry’s most treasured commodities. Yet, health organizations are acutely aware that sensitive data can be easily compromised. In just the last year and a half, a breach of personal health information occurred, on average, every other day. Breaches erode productivity and patient trust. They’re costly, unpredictable, and unfortunately quite common. More than half of healthcare organizations surveyed by PwC have had at least one privacy/security-related issue in the last two years.

· Download: Old data learns new tricks (1.24 MB)
· Download: Old data learns new tricks: Chart pack (58 KB)

Monday, September 19, 2011

NYTimes on ID

Call It Your Online Driver’s License

By NATASHA SINGER  NYT   9/18/11

Consumers who still pay bills via snail mail. Hospitals leery of making treatment records available online to their patients. Some state motor vehicle registries that require car owners to appear in person — or to mail back license plates — in order to transfer vehicle ownership.

But the White House is out to fight cyberphobia with an initiative intended to bolster confidence in e-commerce.

The plan, called the National Strategy for Trusted Identities in Cyberspace and introduced earlier this year, encourages the private-sector development and public adoption of online user authentication systems. Think of it as a driver’s license for the Internet. The idea is that if people have a simple, easy way to prove who they are online with more than a flimsy password, they’ll naturally do more business on the Web. And companies and government agencies, like Social Security or the I.R.S., could offer those consumers faster, more secure online services without having to come up with their own individual vetting systems.

“What if states had a better way to authenticate your identity online, so that you didn’t have to make a trip to the D.M.V.?” says Jeremy Grant, the senior executive adviser for identity management at the National Institute of Standards and Technology, the agency overseeing the initiative.

But authentication proponents and privacy advocates disagree about whether Internet IDs would actually heighten consumer protection — or end up increasing consumer exposure to online surveillance and identity theft.

If the plan works, consumers who opt in might soon be able to choose among trusted third parties — such as banks, technology companies or cellphone service providers — that could verify certain personal information about them and issue them secure credentials to use in online transactions.

Industry experts expect that each authentication technology would rely on at least two different ID confirmation methods. Those might include embedding an encryption chip in people’s phones, issuing smart cards or using one-time passwords or biometric identifiers like fingerprints to confirm substantial transactions. Banks already use two-factor authentication, confirming people’s identities when they open accounts and then issuing depositors with A.T.M. cards, says Kaliya Hamlin, an online identity expert known by the name of her Web site, Identity Woman.
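One common way to generate the one-time passwords mentioned here is the TOTP algorithm (RFC 6238): the phone and the verifier share a secret once, at enrollment, and afterwards a short-lived code proves possession of the phone. The sketch below is a bare-bones illustration of that algorithm, not any bank's or vendor's actual implementation.

import base64, hashlib, hmac, struct, time

def totp(secret_b32, interval=30, digits=6):
    """Compute the current time-based one-time password from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval            # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Something you know (a password) plus something you have (the phone holding
# this secret) are the two factors being confirmed.
print(totp("JBSWY3DPEHPK3PXP"))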

The system would allow Internet users to use the same secure credential on many Web sites, says Mr. Grant, and it might increase privacy. In practical terms, for example, people could have their identity authenticator automatically confirm that they are old enough to sign up for Pandora on their own, without having to share their year of birth with the music site.

The Open Identity Exchange, a group of companies including AT&T, Google, Paypal, Symantec and Verizon, is helping to develop certification standards for online identity authentication; it believes that industry can address privacy issues through self-regulation. The government has pledged to be an early adopter of the cyber IDs.

But privacy advocates say that in the absence of stringent safeguards, widespread identity verification online could actually make consumers more vulnerable. If people start entrusting their most sensitive information to a few third-party verifiers and use the ID credentials for a variety of transactions, these advocates say, authentication companies would become honey pots for hackers.

“Look at it this way: You can have one key that opens every lock for everything you might need online in your daily life,” says Lillie Coney, the associate director of the Electronic Privacy Information Center in Washington. “Or, would you rather have a key ring that would allow you to open some things but not others?”

Even leading industry experts foresee challenges in instituting across-the-board privacy protections for consumers and companies.

For example, people may not want the banks they might use as their authenticators to know which government sites they visit, says Kim Cameron, whose title is distinguished engineer at Microsoft, a leading player in identity technology. Banks, meanwhile, may not want their rivals to have access to data profiles about their clients. But both situations could arise if identity authenticators assigned each user with an individual name, number, e-mail address or code, allowing companies to follow people around the Web and amass detailed profiles on their transactions.

“The whole thing is fraught with the potential for doing things wrong,” Mr. Cameron says.

But next-generation software could solve part of the problem by allowing authentication systems to verify certain claims about a person, like age or citizenship, without needing to know their identities. Microsoft bought one brand of user-blind software, called U-Prove, in 2008 and has made it available as an open-source platform for developers.
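The essence of such "user-blind" systems is that the authenticator attests to a single predicate, say "over 18," and the relying site never sees the birthdate behind it. The sketch below fakes the attestation with a shared-key HMAC so it can run as-is; real schemes like U-Prove use blind signatures and zero-knowledge proofs, so the names and the signing shortcut here are assumptions for illustration only.

import datetime, hashlib, hmac, json

AUTHENTICATOR_KEY = b"demo-key"   # stand-in for the authenticator's real signing key

def issue_age_claim(birthdate, min_age=18):
    """The authenticator checks the birthdate it holds and attests only to the predicate."""
    today = datetime.date.today()
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day))
    claim = json.dumps({"over_18": age >= min_age})    # the birthdate never leaves here
    sig = hmac.new(AUTHENTICATOR_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim, sig

def relying_site_accepts(claim, sig):
    """The music site verifies the attestation without learning the birthdate itself."""
    expected = hmac.new(AUTHENTICATOR_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and json.loads(claim)["over_18"]

claim, sig = issue_age_claim(datetime.date(1990, 5, 1))
print(relying_site_accepts(claim, sig))   # True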

Google, meanwhile, already has a free system, called the “Google Identity Toolkit,” for Web site operators who want to shift users from passwords to third-party authentication. It’s the kind of platform that makes Google poised to become a major player in identity authentication.

But privacy advocates like Lee Tien, a senior staff lawyer at the Electronic Frontier Foundation, a digital rights group, say the government would need new privacy laws or regulations to prohibit identity verifiers from selling user data or sharing it with law enforcement officials without a warrant. And what would happen if, say, people lost devices containing their ID chips or smart cards?

“It took us decades to realize that we shouldn’t carry our Social Security cards around in our wallets,” says Aaron Titus, the chief privacy officer at Identity Finder, a company that helps users locate and quarantine personal information on their computers.

Carrying around cyber IDs seems even riskier than Social Security cards, Mr. Titus says, because they could let people complete even bigger transactions, like buying a house online. “What happens when you leave your phone at a bar?” he asks. “Could someone take it and use it to commit a form of hyper identity theft?”

For the government’s part, Mr. Grant acknowledges that no system is invulnerable. But better online identity authentication would certainly improve the current situation — in which many people use the same one or two passwords for a dozen or more of their e-mail, e-tail, online banking and social network accounts, he says.

Mr. Grant likens that kind of weak security to flimsy locks on bathroom doors.

“If we can get everyone to use a strong deadbolt instead of a flimsy bathroom door lock,” he says, “you significantly improve the kind of security we have.”

But not if the keys can be compromised.

A version of this article appeared in print on September 18, 2011, on page BU4 of the New York edition with the headline: Call It Your Online Driver’s License.

Friday, September 16, 2011

Privacy Law Would Help U.S. Compete, Official Says

Privacy Law Would Help U.S. Compete, Official Says
By Juliana Gruenwald    Updated: September 15, 2011 | 5:59 p.m.  National Journal

U.S. firms would be more competitive and better able to comply with foreign privacy laws if the United States had a broad law protecting consumer privacy online, a Commerce Department official told a House panel on Thursday.

“It would be helpful and I think it would help the competitiveness of our businesses if we had baseline privacy protections that are flexible and take into account really the changing economy, [and] changing technologies,” Nicole Lamb-Hale of Commerce’s International Trade Administration told the Energy and Commerce Subcommittee on Commerce, Manufacturing and Trade.

Some privacy advocates have called on the EU to get tougher with the United States and require it to harden up the current mix of industry self-regulation and some specific privacy laws related to health and finance. They say industry self-regulation has failed to protect Internet users who are increasingly being tracked by companies that collect information for advertising purposes. The Obama administration and even some tech firms such as Intel and Microsoft have called on Congress to pass legislation that would establish baseline privacy protections.

The House panel examined how the European Union’s privacy law, which was first adopted in 1995, affects U.S. firms and what lessons it may provide U.S. policymakers. The law bars the flow of personal data about EU citizens to countries that do not have “adequate” privacy protections.

To ensure that U.S. firms would not be harmed by the law, the U.S. government negotiated a “safe harbor” in the late 1990s with the EU that allows companies to be deemed in compliance with the EU privacy law if they follow an agreed set of privacy principles.

Paula Bruening, vice president for global policy for the Center for Information Policy Leadership, said the EU law has not been implemented or enforced consistently among member states. She said it imposes burdensome administrative requirements on U.S. companies.

The EU is currently considering changes to the law to respond to some of these criticisms, but may also make it tougher. Lamb-Hale said it is unclear whether the European Union would continue to recognize the safe harbor after it revises its privacy law.

The Trans Atlantic Consumer Dialogue, a coalition of nearly 80 European and U.S. consumer groups, wrote the subcommittee earlier this week saying there is much the United States could learn from the Europeans on privacy given the rising levels of privacy breaches in the United States.

Ohio State University law professor Peter Swire, a privacy adviser in the Clinton administration, noted that countries outside of Europe have been passing privacy laws based on the EU directive. He said U.S. companies could face problems moving data out of those countries as well.

However, Consumer Data Industry Association President Stuart Pratt told National Journal after the hearing that he believes the cost of complying with a U.S. privacy law would far outweigh any benefits companies would receive from it.

Subcommittee Chairwoman Mary Bono Mack, R-Calif., said she has not decided whether Congress should pass privacy legislation. She plans more hearings to explore the issue. “My purpose in holding this hearing is not to point fingers,” she said. “Instead, my goal is to point to a better way to protect privacy online and promote e-commerce.”


Tuesday, September 13, 2011

Jeff Rosen in NYTimes: Protect Our Right to Anonymity

By Jeffrey Rosen  September 12, 2011   NYT
 IN November, the Supreme Court will hear arguments in a case that could redefine the scope of privacy in an age of increasingly ubiquitous surveillance technologies like GPS devices and face-recognition software.

The case, United States v. Jones, concerns a GPS device that the police, without a valid warrant, placed on the car of a suspected drug dealer in Washington, D.C. The police then tracked his movements for a month and used the information to convict him of conspiracy to sell cocaine. The question before the court is whether this violated the Fourth Amendment to the Constitution, which prohibits unreasonable searches and seizures of our “persons, houses, papers, and effects.”

It’s imperative that the court say yes. Otherwise, Americans will no longer be able to expect the same degree of anonymity in public places that they have rightfully enjoyed since the founding era.

Two federal appellate courts have upheld the use of GPS devices without warrants in similar cases, on the grounds that we have no expectation of privacy when we are in public places and that tracking technology merely makes public surveillance easier and more effective.

But in a visionary opinion in August 2010, Judge Douglas H. Ginsburg, of the United States Court of Appeals for the District of Columbia Circuit, disagreed. No reasonable person, he argued, expects that his public movements will be tracked 24 hours a day, seven days a week, and therefore we do have an expectation of privacy in the “whole” of our public movements.

“Unlike one’s movements during a single journey,” Judge Ginsburg wrote, “the whole of one’s movements over the course of a month is not actually exposed to the public because the likelihood anyone will observe all those movements is effectively nil.”

Judge Ginsburg realized that ubiquitous surveillance for a month is impossible, in practice, without technological enhancements like a GPS device, and that it is therefore qualitatively different than the more limited technologically enhanced public surveillance that the Supreme Court has upheld in the past (like using a beeper to help the police follow a car for a 100-mile trip).

The Supreme Court case is an appeal of Judge Ginsburg’s decision. If the court rejects his logic and sides with those who maintain that we have no expectation of privacy in our public movements, surveillance is likely to expand, radically transforming our experience of both public and virtual spaces.

For what’s at stake in the Supreme Court case is more than just the future of GPS tracking: there’s also online surveillance. Facebook, for example, announced in June that it was implementing face-recognition technology that scans all the photos in its database and automatically suggests identifying tags that match every face with a name. (After a public outcry, Facebook said that users could opt out of the tagging system.) With the help of this kind of photo tagging, law enforcement officials could post on Facebook a photo of, say, an anonymous antiwar protester and identify him.

There is also the specter of video surveillance. In 2008, at a Google conference on the future of law and technology, Andrew McLaughlin, then the head of public policy at Google, said he expected that, within a few years, public agencies and private companies would be asking Google to post live feeds from public and private surveillance cameras all around the world. If the feeds were linked and archived, anyone with a Web browser would be able to click on a picture of anyone on any monitored street and follow his movements.

To preserve our right to some degree of anonymity in public, we can’t rely on the courts alone. Fortunately, 15 states have enacted laws imposing criminal and civil penalties for the use of electronic tracking devices in various forms and restricting their use without a warrant. And in June, Senator Ron Wyden, Democrat of Oregon, and Representative Jason Chaffetz, Republican of Utah, introduced the Geolocation Privacy and Surveillance Act, which would provide federal protection against public surveillance.

Their act would require the government to get a warrant before acquiring the geolocational information of an American citizen or legal alien; create criminal penalties for secretly using an electronic device to track someone’s movements; and prohibit commercial service providers from sharing customers’ geolocational information without their consent — a necessary restriction at a time of increasing cellphone tracking by private companies.

It’s encouraging that Democrats and Republicans in Congress are coming together to preserve the expectations of anonymity in public that Americans have long taken for granted. Soon, liberal and conservative justices on the Supreme Court will have an opportunity to meet the same challenge.

If they fail to rise to the occasion, our public life may be transformed in ways we can only begin to imagine.

Jeffrey Rosen, a law professor at George Washington University, is an editor of the forthcoming book “Constitution 3.0: Freedom and Technological Change.”

Monday, August 29, 2011

The PII Problem: Privacy and a New Concept of Personally Identifiable Information

The PII Problem: Privacy and a New Concept of Personally Identifiable Information

Paul M. Schwartz
University of California, Berkeley - School of Law
Daniel J. Solove
George Washington University Law School
New York University Law Review, Vol. 86, 2011

Abstract:     Personally identifiable information (PII) is one of the most central concepts in information privacy regulation. The scope of privacy laws typically turns on whether PII is involved. The basic assumption behind the applicable laws is that if PII is not involved, then there can be no privacy harm. At the same time, there is no uniform definition of PII in information privacy law. Moreover, computer science has shown that in many circumstances non-PII can be linked to individuals, and that de-identified data can, in many circumstances, be re-identified. PII and non-PII are thus not immutable categories, and there is a risk that information deemed non-PII at one point in time can be transformed into PII at a later juncture. Due to the malleable nature of what constitutes PII, some commentators have even suggested that PII be abandoned as the means to define the boundaries of privacy law.

In this Article, Professors Paul Schwartz and Daniel Solove argue that although the current approaches to PII are flawed, the concept of PII should not be abandoned. They develop a new approach called “PII 2.0,” which accounts for PII’s malleability. Based upon a standard rather than a rule, PII 2.0 is based upon a continuum of risk of identification. PII 2.0 regulates information that relates to either an “identified” or “identifiable” individual, and it establishes different requirements for each category. To illustrate their theory, Schwartz and Solove use the example of regulating behavioral marketing to adults and children. They show how existing approaches to PII impede the effective regulation of behavioral marketing and how PII 2.0 would resolve these problems.
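A simple way to see the re-identification risk the authors describe is the classic linkage attack: records stripped of names can still be tied back to people by joining them with public data on quasi-identifiers such as ZIP code, birthdate, and sex. The records in the sketch below are fabricated purely to illustrate the mechanics.

# "De-identified" records: names removed, quasi-identifiers retained.
health_records = [
    {"zip": "15213", "birthdate": "1975-03-02", "sex": "F", "diagnosis": "asthma"},
]

# Public auxiliary data, e.g. a voter registration list.
voter_list = [
    {"name": "Jane Roe", "zip": "15213", "birthdate": "1975-03-02", "sex": "F"},
]

QUASI_IDENTIFIERS = ("zip", "birthdate", "sex")

def reidentify(records, auxiliary):
    """Attach a name to any record whose quasi-identifiers match exactly one auxiliary entry."""
    linked = []
    for rec in records:
        matches = [a for a in auxiliary
                   if all(a[q] == rec[q] for q in QUASI_IDENTIFIERS)]
        if len(matches) == 1:        # a unique match re-identifies the record
            linked.append(dict(rec, name=matches[0]["name"]))
    return linked

print(reidentify(health_records, voter_list))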

Monday, July 18, 2011

CDT Justin Brookman: Why the US needs a data privacy law—and why it might finally get one

Why the US needs a data privacy law—and why it might finally get one
By Justin Brookman | Published July 18, 2011  ARS

The general public and Congress have both discovered geolocation, data breaches, and tracking cookies—and they're worried about the privacy implications. In this op-ed, the Center for Democracy & Technology's Justin Brookman argues that this could be the moment at which everything comes together to make comprehensive privacy reform possible. The opinions in this op-ed do not necessarily represent those of Ars Technica.

With the understandable exceptions of the national debt and the deployments of our troops abroad, privacy is possibly the hottest issue in Congress today. After ten years of limited interest in the subject, we’ve recently seen a spate of legislation introduced to give consumers rights over how their information is collected and shared.

In the House of Representatives, Reps. Bobby Rush (D-IL) and Cliff Stearns (R-FL) have each introduced separate comprehensive bills. In the Senate, John Kerry (D-MA) and John McCain (R-AZ) recently introduced the "Commercial Privacy Bill of Rights" with similar goals. The (Democrat-led) Senate Commerce Committee recently held a hearing on the topic of privacy; the next week, the (Republican-led) House Energy and Commerce Committee looked at the same thing.

In a town where positions on issues are often deeply divided along partisan lines, it’s encouraging to see that there appears to be at least one issue that both parties recognize as a problem that needs to be addressed.

Not much company
Here’s why Congress is interested: today, the United States and Turkey are the only developed nations in the world without a comprehensive law protecting consumer privacy. European citizens have privacy rights, Asian citizens have privacy rights, Latin American citizens have privacy rights. In the US, however, in lieu of a comprehensive approach, we have a handful of inconsistent, sector-specific laws around particularly sensitive information like health and financial data. For everything else, the only rule for companies is just “don’t lie about what you’re doing with data.”

The Federal Trade Commission enforces this prohibition, and does a pretty good job with this limited authority, but risk-averse lawyers have figured out that the best way to not violate this rule is to not make explicit privacy promises at all. For this reason, corporate privacy policies tend to be legalistic and vague, reserving rights to use, sell, or share your information while not really describing the company’s practices. Consumers who want to find out what’s happening to their information often cannot, since current law actually incentivizes companies not to make concrete disclosures.

This has been the case for years, of course, but in the modern era of constant connectivity, social networking, and cheap data storage and processing, the stakes are remarkably higher. Before the advent of the Internet, there were only so many data points for marketers and information brokers to collect about you, and bookstores and libraries didn’t share what you were reading. Even just a few years ago, when you went to a major publisher website, there might have been a couple third-party trackers on the site who could drop a cookie on your computer to “anonymously” track you across other sites. Today, these same sites may deploy hundreds of trackers from dozens of different companies, many of which know your offline identity as well. What happens to all that information? With whom is it shared? No one really knows, and there is no framework to regulate it.
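For readers unfamiliar with the mechanics, the tracking described here rests on a simple pattern: one tracker is embedded on many publisher sites, drops a pseudonymous cookie on the first visit, and sees the same cookie come back from every other site that embeds it. The toy model below illustrates only that pattern; real trackers sit behind ad and analytics tags and build far richer profiles, and every name here is invented.

import uuid
from collections import defaultdict

class ThirdPartyTracker:
    """Toy model of cross-site cookie tracking."""

    def __init__(self):
        self.profiles = defaultdict(list)    # cookie id -> pages seen

    def on_request(self, page_url, cookie=None):
        if cookie is None:                   # first sighting: set a pseudonymous cookie
            cookie = uuid.uuid4().hex
        self.profiles[cookie].append(page_url)
        return cookie                        # the browser stores it and sends it back

tracker = ThirdPartyTracker()
c = tracker.on_request("news-site.example/politics")
c = tracker.on_request("health-site.example/diabetes", cookie=c)
print(tracker.profiles[c])   # both visits tied to the same pseudonymous ID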

Bad for business
This black box into which our data flows is bad for consumers, but it’s increasingly an impediment to US businesses as well. As Silicon Valley companies encourage consumers to store their personal data in “the cloud,” people are legitimately asking, “Why? What’s going to happen to my data there?” Today, the US is the undisputed leader in cloud computing services, but international competitors are increasingly advertising the fact that their services aren’t US-based. The Department of Commerce recently issued a report arguing that the lack of privacy protections threatens both the adoption of new technologies by worried consumers and the ability to have international data sent to the US. Last week, Forrester Research released a study showing that privacy concerns were the biggest impediment to the growth of e-commerce on mobile technologies.

Companies would be better off if they all provided meaningful privacy protections for consumers, but privacy is a collective action problem for them: many companies would love to see the ecosystem fixed, but no one wants to put themselves at a competitive disadvantage by imposing unilateral limitations on what they can do with user data. It’s fantastic to see companies endeavoring to compete on privacy (such as Google touting the privacy features of its new social network), but so far such competition has been spotty and often takes place at the margins. Many companies that touch and store consumer data don’t have consumer-facing sides (like the ever-increasing number of intermediaries in the behavioral advertising space), so it’s hard to see the Internet ecosystem fixing itself on its own.

And let’s be frank: so far, self-regulation hasn’t been enough. Increasingly, leading multinational corporations have recognized this problem, and companies like Microsoft, Intel, and HP that have heavily invested in cloud technologies have endorsed specific legislative solutions such as the Kerry-McCain and Rush bills to provide consumers with comprehensive privacy protections.

Any privacy law that is enacted doesn’t need to, and shouldn’t, prohibit data sharing or invalidate business models. However, consumers have a right to know what’s happening with their information and to have a say in how it gets shared. If a company insists on sharing data about a consumer as a condition of doing service, fine. As long as that fact is clearly conveyed, and the consumer decides to accept the terms, we shouldn’t put limits on what consumers are willing to do with their own information. Unfortunately, consumers today aren’t even told what’s happening, so they can’t exercise meaningful control over their data unless they take extreme measures to anonymize their surfing through services like Tor or block third-party content (which surely isn’t the right result for anyone).

So will a new law be passed? As with anything in Washington, it’s hard to say what will happen—Congress has a lamentable tendency to kick problems down the road for another day. However, with tremendous attention to privacy issues and widespread consumer support for basic consumer protections, we have the best opportunity in memory to enact basic rules to give people control of their personal information and to give them confidence in an increasingly complex data ecosystem. We should take advantage of this moment to develop a considered consensus on reasonable baseline protections that work for both consumers and businesses.

Justin Brookman is Director of the Consumer Privacy Project at the Center for Democracy & Technology in Washington, DC.

Privacy Isn't Dead. Just Ask Google+.

July 18, 2011, 12:59 pm
By NICK BILTON
http://bits.blogs.nytimes.com/2011/07/18/privacy-isnt-dead-just-ask-google/?smid=tw-nytimesbits&seid=auto#h[]

Some people have a very hard time trusting Facebook.

After dozens of privacy problems over the years, they’ve grown extremely wary of what the company is doing with their personal information. I, for one, rarely use Facebook anymore, beyond a rare comment or “Like.”

My Facebook fears stem from the several instances when the company added new features to the site and chose to automatically opt in hundreds of millions of users, most of whom don’t even know they’ve been signed up for the new feature. I’ve also been sapped by the company’s hyper-confusing privacy policy, which requires users to navigate a labyrinth of buttons and menus when hoping to make their personal information private.

For Facebook, these breaches of people’s personal privacy rarely result in any repercussions: the negative press is usually temporary, and users have mostly stayed with the service, saying that there isn’t a viable alternative social network to talk to family and friends.

That is, until now.

Enter Google+, which started last month and has already grown to 10 million users. Rather than focus on new snazzy features — although it does offer several — Google has chosen to learn from its own mistakes, and Facebook’s. Google decided to make privacy the No. 1 feature of its new service.

I learned this lesson accidentally last week. When I signed up for Google+, I quickly posted a link to a New York Times article I wanted to share with people. Several hours later my Google+ link lay dormant. No comments. No +1 clicks. And no resharing the link.

It wasn’t until later that I realized that my post had been made private by default; a Google+ user has to specifically say they want to share a post publicly. By doing this, Google has chosen to opt users out of being public, rather than the standard practice by most other services to automatically opt users in.

This isn't to say Google is perfect. Over the last year the company has had its fair share of privacy problems, most notably when it started Google Buzz, a social networking service that turned into a privacy disaster and resulted in calls in Congress to investigate the company.

With Google’s latest offering, it seems that the company not only learned its lesson about the importance of privacy for consumers online, but also realized that Facebook hasn’t learned about the importance of this issue either.

Wednesday, July 13, 2011

The Importance of FIPs in data exchange

Channel: RHIOs/HIEs
Source: Lorraine Fernandes, global healthcare ambassador, IBM
Date: Jul 12, 2011
http://www.nhinwatch.com/perspective/importance-fips-data-exchange


Many of the discussions over the past year by the Department of Health and Human Services' Office of the National Coordinator Privacy and Security Tiger Team have invoked the FTC's Fair Information Practices. Why? Because the Health Insurance Portability and Accountability Act (HIPAA) does not address one of today's most critical healthcare issues: data sharing. In the absence of updated regulations, the FIPs offer a comprehensive framework for moving forward.

The best way to move forward is to remove the emotion from the privacy and consent debate and instead look at this in a practical, constructive fashion. Perhaps Paul Tang, vice chair of the HIT Policy committee and member of numerous workgroups, said it best during one of the Tiger Team meetings last summer: "What would a patient expect?"

The Markle Foundation submitted a letter to the Department of Commerce on February 18, 2011, concisely articulating the importance of FIPs in today's society. As suggested in the letter, titled "The Need for a Coordinated Department of Commerce Policy on Consumer Protection and Privacy," we must look at data in a broader fashion and recognize that when we talk about data, we are really talking about consumer data, not healthcare data. This broader consumer framework paves the way for us to move away from our current prescriptive system, which focuses too much on regulations, toward a set of principles that allows us to respond to innovation and changing technology. There is a place for regulations, but let's have that dialogue after we have a solid foundation.

Let's ponder for a moment the FIPs and how we can use them to help achieve the goals of improving individual and population health.

Openness and Transparency - Consumers should be able to readily access data-usage policies, understand the collection and use of their data, and be able to limit the use of their data if they choose to do so. This can be achieved by public notices, website postings, social media and other more traditional approaches. Full transparency is crucial to building consumer trust.

Purpose Specification and Minimization - Data use should be specified at the time of collection and use should be limited only to those stated purposes. And if there is a proposed change in the use, the consumer should be notified. The classic "bait and switch" should never occur with consumer data.

Collection Limitation - This might also be coined "minimum data necessary." Don't collect more data than what is needed for the purpose at hand. This is particularly true when dealing with sensitive data like social security number, certain clinical conditions and past histories in a treatment setting. Perhaps the standard question when developing new data collection practices should be: "Do I really need this data to achieve my goals?"

Use Limitation - Data should be used only for the stated purpose. No dissemination or re-use should be undertaken unless consistent with the use limitation. For example, personally identifiable information should not be used for research unless the patient has been notified.

Individual Participation and Control - Consumers should understand how their data will be used. I think Dr. Tang's "What would the patient expect?" question really articulates a clear practice matching this FIP. Consumers should be notified on a timely basis if there is a data breach. The Phase 1 Meaningful Use requirement for patient access to their data also nicely matches this principle. Patients should be able to conduct a "consumer audit" to find out where their data has been used, and whether that data is identifiable, de-identified or limited (see the illustrative sketch after this list).

Data Integrity and Quality - Data collected (consistent with the other FIPs) should be accurate, complete and up to date. It should also include attribution (the originating source of the data). If problems are identified with the data quality, then the consumer should have remedies consistent with the FIPs.

Security Safeguards and Controls - Reasonable safeguards should be employed to protect against data theft, breach and unauthorized access. Clearly this is a problematic area, given the incidents of laptop theft that frequently expose unencrypted data.

Accountability and Oversight - Those in control of consumer information must be accountable for following the FIPs. If breaches occur, those responsible must be disciplined consistent with policies and remedies.

Remedies - Remedies should be documented, transparent and must address what happens if there is a breach or privacy violation.
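
To make a few of these principles concrete, the following is a minimal, illustrative Python sketch (not drawn from HIPAA, ONC guidance or this article; every name, field and rule is hypothetical) of how a data holder might log disclosures, refuse uses that were never stated at collection, and give a patient the kind of "consumer audit" described above.

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List

# Illustrative only: these fields and checks are hypothetical,
# not a HIPAA or ONC specification.

@dataclass
class DisclosureRecord:
    when: datetime
    data_subject: str      # whose data was disclosed
    recipient: str         # who received it
    purpose: str           # why it was disclosed
    identifiability: str   # "identifiable", "de-identified" or "limited"

class DisclosureLog:
    def __init__(self, stated_purposes: List[str]):
        # Purpose Specification: purposes are fixed when the data is collected.
        self.stated_purposes = set(stated_purposes)
        self.records: List[DisclosureRecord] = []

    def record(self, data_subject: str, recipient: str,
               purpose: str, identifiability: str) -> None:
        # Use Limitation: refuse any disclosure outside the stated purposes.
        if purpose not in self.stated_purposes:
            raise ValueError(f"Purpose '{purpose}' was not stated at collection")
        self.records.append(DisclosureRecord(
            datetime.now(timezone.utc), data_subject, recipient,
            purpose, identifiability))

    def consumer_audit(self, data_subject: str) -> List[DisclosureRecord]:
        # Individual Participation: a patient can see where their data went.
        return [r for r in self.records if r.data_subject == data_subject]

# Example: treatment and payment were stated at collection; marketing was not.
log = DisclosureLog(stated_purposes=["treatment", "payment"])
log.record("patient-123", "Dr. Smith", "treatment", "identifiable")
for entry in log.consumer_audit("patient-123"):
    print(entry)

The point is simply that Purpose Specification, Use Limitation and Individual Participation map naturally onto concrete, auditable mechanisms rather than remaining abstract principles.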

Following these basic practices and associated principles, and tying all discussions about data collection and exchange to the FIPs, would go a long way toward building consumer trust and confidence. If we used these practices as a framework, the discussions could be more rational, pragmatic, understandable and results-oriented. And we can't pick and choose; we must use the FIPs as a whole.

When the FIPs are "front and center," consumers are front and center, and that is the only path that leads to trust in electronic health records and data exchange.

Lorraine Fernandes, RHIA, is the global healthcare ambassador for IBM.

How Google and Data-Mining Drive Economic Inequality in Our Nation

Nathan Newman, July 11, 2011  Huffington Post

This is the first part in a three-part series running this week at HuffPost on why lost privacy online matters for economic equity.

Why has economic inequality increased so radically in the United States over the last generation?
General explanations range from globalization to the decline in trade unions to rising returns to education -- and therefore the loss of income to the less educated. These all no doubt play a role, but in an age of information what is unquestionably true is that control of that information is extremely unequal -- and that inequality drives broader economic inequality in our economy.

Information is power and as companies know more and more about us, while the products they sell become more opaque and complicated -- think mortgage-based Collateralized Debt Obligations (CDOs) -- inequality in information begets a massive transfer of wealth from individuals to corporations and to their shareholders. Companies figure out not just what to sell you but the maximum price you and other people like you will pay for that product.
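
As a toy illustration of that mechanism (the segments, counts and dollar figures below are invented, not taken from this article), the following Python sketch compares what a seller earns with a single posted price against what it earns when profiling lets it charge each segment close to its estimated willingness to pay.

# Hypothetical example of segment-based price discrimination.
segments = {
    # profile label: (number of buyers, estimated willingness to pay in dollars)
    "budget":   (1000, 20.0),
    "average":  (1000, 35.0),
    "affluent": (1000, 60.0),
}

def revenue_at_uniform_price(price: float) -> float:
    # Only buyers whose willingness to pay meets the price will buy.
    return sum(count * price for count, wtp in segments.values() if wtp >= price)

# Best single posted price, searched over the candidate willingness-to-pay levels.
best_uniform = max(revenue_at_uniform_price(wtp) for _, wtp in segments.values())

# With profiling, each segment is charged roughly its own willingness to pay.
segment_priced = sum(count * wtp for count, wtp in segments.values())

print(f"Best uniform-price revenue: ${best_uniform:,.0f}")    # $70,000
print(f"Segment-priced revenue:     ${segment_priced:,.0f}")  # $115,000

The difference between the two figures is surplus that, absent profiling, would have stayed with consumers.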

Privacy is About Economic Power and Inequality: The debate on privacy online is therefore not about whether you think it's creepy that corporations are tracking your online activities. You may not have a strong "ick" factor from corporate surveillance per se -- I don't myself -- but what you should care about is that lost privacy is converted by those companies into information that ultimately drives greater economic inequality in our country.

One original promise of the Internet was that "no one knows you're a dog on the Internet" but we have instead evolved through data-mining and online surveillance into a world where not only do companies know what you are, they know where you are and what you are most interested in. For the economically privileged, that may not seem like much of a problem and even a benefit since companies may be able to service your needs more effectively. But for those who already suffer discrimination and exploitation, whether because of race, poverty or other factors, it means that the Internet can just magnify and target that discriminatory treatment and exploitation.

Which brings us to the Federal Trade Commission antitrust investigation into Google. The problem with Google is not that users lack competing options for search engines, but that Google's dominance of search and other online products allows it to extract more private information from users than any other corporation. And as I described in my piece back in March, You're Not Google's Customer, You're the Product, Google's real customers are the whole array of corporations who buy access to that user information to know how to effectively market their products and increase their profits.

Google at the Nexus of the Marketing of Privacy: Google is the key nexus in the information age, pricing individual privacy and monetizing it for the benefit of global corporations. They are the dominant middleman between hundreds of millions of people -- even approaching billions globally -- and the corporations using that Google-generated profiling to market their products and extract profit for their shareholders.

And it is that global market power over private individual data by Google that antitrust regulators need to investigate in order to counteract the rising inequality in the information economy. The cost of lost privacy driven by Google is corporate data-mining and manipulated prices across a whole array of markets and the exacerbation of multiple forms of discrimination in the marketplace. Google's monopoly dominance of personal information thereby helps leverage the broader corporate dominance of our lives by the companies using its data.

Why Free is a Bad Deal: The first step in how lost privacy increases economic inequality comes at the moment users give away their private information in the first place. Google offers the enticement of free services in exchange for users turning over a whole range of basic personal data, and even their basic desires, in the form of the whole record of what they search for on Google's pages.

What could be better than free, most users think, as they take the deal offered? It's a bit like how early bank customers might have felt, being told the bank would keep their money safe for free, only later figuring out that the bank was making tons of money lending that money to other people. The free Google tools into which users drop their private information are like the vaults banks offered to store your money: not a service but a honeypot that allows both banks and Google to resell what users deposit there. Bank customers now expect actual payment in the form of interest for money deposited in banks, but most Google customers don't even recognize that their private information has monetary value at all.

To put it another way, the fact that users are de facto involved in barter with Google, trading privacy for individual tools, should tell you this is an exploitative situation. Like most barter economies, pricing is opaque and creates massive opportunities for economic arbitrage by the sophisticated side of the barter transaction -- i.e. Google. Essentially, Google users are the primitive tribes of the Internet, accepting the shiny trinkets of Gmail and free search in exchange for their privacy.

Google then takes that private information and monetizes it with advertisers, who pay very precise dollar terms in the modern part of the Google economy. And those advertisers pay prices far above what Google spends on the tools provided to users -- as highlighted by Google's massive profits year after year. That advertising side of Google's internal economy is actually a monument to converting privacy into a modern currency, with sophisticated auctions for keywords and phrases based on the particular user demographics and backgrounds the advertiser may be looking for. One analyst describes this less as the sale of privacy itself by Google than as the sale of a "privacy derivative," where companies invest in Google's appraisal of customers' needs and wants. (See Karl T. Muth's Googlestroika: Privatizing Privacy for more on how Google monetizes user privacy.)
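
As a rough sketch of the kind of auction described in the paragraph above (this is not a description of Google's actual ad auction, which the article does not detail; the advertisers, bidding rules and prices are invented), the following Python example shows a plain second-price auction in which bids depend on the user's profile.

# Hypothetical second-price ad auction where bids depend on a user profile.
from typing import Dict

def bid(advertiser: str, profile: Dict[str, str]) -> float:
    # Invented bidding rules: advertisers bid more for better-matched users.
    base = {"MortgageCo": 2.50, "PetSuppliesInc": 1.20, "LuxuryCars": 3.00}[advertiser]
    if advertiser == "MortgageCo" and profile.get("recent_searches") == "refinance":
        base *= 3    # a user actively researching refinancing is worth far more
    if advertiser == "LuxuryCars" and profile.get("income_bracket") == "high":
        base *= 2
    return base

def run_auction(profile: Dict[str, str]):
    bids = {adv: bid(adv, profile)
            for adv in ("MortgageCo", "PetSuppliesInc", "LuxuryCars")}
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]   # second-price rule: the winner pays the runner-up's bid
    return winner, price

user = {"income_bracket": "high", "recent_searches": "refinance"}
winner, price = run_auction(user)
print(f"Ad impression goes to {winner} at ${price:.2f}")   # MortgageCo at $6.00

The profile, not the ad slot alone, is what drives the price: the same impression fetches a very different amount for a high-income user researching refinancing than for an anonymous visitor.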

So the first step in the transfer of wealth via Google comes from users selling their privacy for too little and Google arbitraging user ignorance for profit. If Google had less dominance of the online advertising field, there would be far greater pressure for it to develop as sophisticated a market for compensating users for their privacy as the markets in which it resells that lost privacy.

To get some sense of the value of user information, look at the recent controversy over another big Internet player, namely Apple, when it demanded that sellers of subscriptions to apps on the iPhone had to give Apple not just 30% of sales, but sole control of user information as well. Lauren Indvik at Mashable noted that publishers like the Financial Times may not have liked the 30% cut Apple wanted from subscriptions, but "the main problem is that Apple will not share subscriber data with publishers, long one of publishers' most valuable assets, particularly to advertisers." Think about it -- your personal data is worth potentially more than 30% of the cost of what you are purchasing, and most users give it away for free to companies like Google and Apple.

And Google is looking to leverage its position at the nexus of the Internet to further expand its data collection on users -- and the opportunities for marketing that data in Internet commerce. Most recently, Google is making a play to insert what's called NFC technology into every smartphone and turn phones into wireless credit cards -- and a substitute for every other card you carry -- which would make all commerce easier for users, while giving Google information on every transaction you make and providing even more expanded data on user shopping habits. Google is marching from dominance over information about online commerce to trying to dominate information about offline shopping as well.

In part 2 of this series, I'll look at why this personal information is so valuable to advertisers and how it empowers what economists call "price discrimination" and just plain old racial discrimination. Part 3 will look at the role of Google in the subprime mortgage debacle and its aftermath, as well as the broader antitrust implications of the company's dominant role as an intermediary for behavioral targeting of consumers by advertisers.

Nathan Newman, a lawyer and Ph.D., has an extensive history of supporting local policy campaigns, from coalition organizing work to drafting legislation. Previously Executive Director of Progressive States, an Associate Counsel at the Brennan Center for Justice, Program Director of NetAction's Consumer Choice Campaign, and co-director of the UC-Berkeley Center for Community Economic Research, he has also been a labor and employment lawyer, freelance columnist and technology consultant. He received his J.D. from Yale Law School and his Ph.D. in Sociology from the University of California at Berkeley and has written extensively about public policy and the legal system in a range of academic and popular journals, including publishing a book, Net Loss: Internet Prophets, Private Profits and the Costs to Community, detailing the relationship between telecommunications public policy and local economic development. His writing and organizing have been cited in the New York Times, USA Today, San Jose Mercury News, Baltimore Sun, Wired, Village Voice, ZDNet, CNet News, San Francisco Chronicle, TheStreet.com, Chronicle of Higher Education, MIT's Technology Review, The Nation and the American Prospect. He runs his own site at www.nathannewman.org and a technology policy site, www.tech-progress.org.

Lack of Genuine Privacy Interest Doomed Vermont Drug Marketing Law

  
Deven McGraw        Monday, July 11, 2011  iHealthBeat

On June 23, the Supreme Court issued its much anticipated decision in Sorrell v. IMS Health, striking down as unconstitutional a Vermont statute that prohibited the use of drug prescribing information for marketing purposes. In a 6-3 decision, the court found that the Vermont law violated the free speech rights of drug marketers. 

A number of privacy advocates had weighed in on the case, seeing it as a showdown between privacy and corporate claims of free speech rights. The Center for Democracy & Technology was skeptical of the privacy arguments made in defense of the law, but we too were worried about its potential impact on a range of health privacy and health IT issues.

After thorough review of the opinion, it is clear that the case should not be read as a threat to well-crafted privacy laws. As interpreted by the Supreme Court, the Vermont statute was an explicit effort to control specific speech by specific speakers -- a double no-no in First Amendment jurisprudence. And, as a privacy law, it was ineffective because it allowed pharmacies to share the covered information with anyone for any reason save one: marketing by drugmakers.

Ironically, a more comprehensive regulation of prescription data -- motivated by a genuine interest in protecting privacy and drawn to serve that interest -- would have been more likely to have been upheld.

Why Did the Supreme Court Strike Down This Law?

To begin with, it is important to recognize that patient privacy was not at issue in Sorrell v. IMS Health because the data in question did not identify patients. Instead, the data identified prescribers, primarily doctors, and their prescribing patterns. In a process known as "detailing," drug company sales representatives use the data when they visit a doctor's office to persuade the doctor to buy a particular pharmaceutical, which the court noted was almost always a "high-profit brand-name" drug.

The Supreme Court found that the intent of the law was targeted solely at the marketing of brand-name drugs by drugmakers. The law prohibited the sale of prescriber-identifying data without the prescriber's consent, but the exceptions to that prohibition were so broad that they actually allowed sale to anyone except drugmakers. The law also prohibited the use of such data, absent prescriber consent, by pharmacies and drugmakers for marketing purposes. On the face of these provisions alone, the Supreme Court had no trouble finding that the law was a transparent attempt to stop pharmaceutical companies from engaging in effective marketing of their brand-name drugs. 

Matters got worse when the court looked at the findings adopted by the Vermont state Legislature when it passed the law. Those findings expressly said, "the goals of marketing programs are often in conflict with the goals of the state." Since the Supreme Court has long held that marketing is "speech" under the First Amendment, and since the whole point of the First Amendment is to protect speech that the government doesn't like, this statement alone probably doomed the law.

Normally, commercial speech is subject to a relatively weaker form of protection than non-commercial speech. But once the Supreme Court found the Vermont law was targeting a specific kind of speech -- drug marketing -- by a specific kind of speaker -- drug companies -- the law became subject to what the court calls "heightened scrutiny." On top of that, the court found the law appeared to allow the use of prescriber-identifying data to promote less-expensive generic drugs. 

So, in the Supreme Court's view, the law allowed covered information to be used for those marketing messages the state considered to be good, and only prohibited its use for marketing messages the state thought were bad. That kind of control is called "viewpoint" discrimination -- where the government is targeting only one side of an issue -- and that is the ultimate offense under the First Amendment.

Even with all of that, the Supreme Court said that the Vermont law might have withstood scrutiny if it had in fact been well crafted to serve a legitimate state interest. And the court assumed that protecting doctor privacy is a legitimate state interest. The problem was that the law totally failed to protect privacy and was not an appropriate response to the other goals the state advanced in its defense.

In rejecting the privacy claim, the Supreme Court emphasized that under the Vermont law, "pharmacies may share prescriber-identifying information with anyone for any reason save one:" marketing. The court noted that the state "all but conceded" that the statute does not advance confidentiality interests. Further, arguments that the law also was intended to protect doctors from aggressive sales tactics carried no weight with a court that had previously held that the First Amendment protects speech even when it "may move people to action, bring them to tears or inflict great pain."

The state also argued that the law advanced legitimate public policy goals by lowering the cost of health care. That is a legitimate goal, the court agreed, but the government cannot pursue it by curtailing speech. Quoting from an earlier decision, the Supreme Court said, "the fear that people would make bad decisions if given truthful information cannot justify content-based burdens on free speech." The court said that if the government wants to control health care costs, it has to do so directly, not by curtailing speech or cutting off access to information that is used in speech the state thinks exacerbates the cost problem.

In sum, because the statute discriminated both on the basis of content and viewpoint, and because it was not actually drawn to serve its stated goal of protecting doctor privacy, it could not survive scrutiny under the First Amendment.

What Are the Potential Implications of This Decision?

The Supreme Court's decision might mean that similar drug marketing laws adopted for similar reasons by Maine and New Hampshire also are unconstitutional. In addition, the case is highly relevant to other laws that try to specifically regulate advertising. Beyond that, however, the case probably sets no new standards for review of health privacy or privacy regulation in general.

Some organizations had urged the court to find that the data at issue could identify patients. This implicated the question of whether the HIPAA de-identification standard provides sufficient protection for patient privacy. The Supreme Court did not take the bait on that issue. It never questioned the premise that the data were adequately de-identified as to patients. Consequently, the important public policy considerations surrounding de-identification should be resolved by legislatures and regulatory bodies, which are better suited to handle them.

Most importantly, the case does not deal a death blow to privacy regulation. To the contrary, the Supreme Court noted that the state could have advanced its asserted privacy interest "by allowing the information's sale or disclosure in only a few narrow and well-justified circumstances." Such a statute, said the court, "would present quite a different case than the one presented here." To illustrate its point, the Supreme Court specifically cited the HIPAA regulations, suggesting they were an example of a more comprehensive privacy regime that would be upheld.

Moreover, the opinion includes strong rhetoric showing the Supreme Court is sensitive to the privacy threats posed by modern IT. In particular, the court noted that "[t]he capacity of technology to find and publish personal information, including records required by the government, presents serious and unresolved issues with respect to personal privacy and the dignity it seeks to secure." 

Like many Supreme Court opinions, Sorrell v. IMS Health includes various broad statements that could be misconstrued if taken out of context. For example, at one point, the opinion says that there is a First Amendment right to collect and disclose facts. But that does not mean that any burden on the collection and dissemination of facts is impermissible under the First Amendment.

To the contrary, as the court made clear, privacy is a legitimate state interest that can in some contexts be protected consistently with the First Amendment, if the burden on speech is carefully drawn to serve that interest. What the First Amendment will not tolerate is regulatory subterfuge. As the court said, "Privacy is a concept too integral to the person and a right too essential to freedom to allow its manipulation to support just those ideas the government prefers."

MORE ON THE WEB
·       Supreme Court Decision in Sorrell v. IMS Health
·       "Supreme Court Case on Rx Data Mining Requires Nuanced Understanding of Privacy" (McGraw, iHealthBeat, 4/19).
·       "Sorrell v. IMS Health Has Far-Reaching Privacy Implications" (McGraw, CDT blog, 5/6).
·       "Encouraging the Use of, and Rethinking Protections for, De-Identified (and "Anonymized") Health Data" (McGraw, CDT, 6/25/2009).

Read more: http://www.ihealthbeat.org/perspectives/2011/lack-of-genuine-privacy-interest-doomed-vermont-drug-marketing-law.aspx#ixzz1S3JK3Hif