Monday, July 18, 2011

CDT Justin Brookman: Why the US needs a data privacy law-and why it might finally get one

Why the US needs a data privacy law—and why it might finally get one
By Justin Brookman | Published July 18, 2011 | Ars Technica

The general public and Congress have both discovered geolocation, data breaches, and tracking cookies—and they're worried about the privacy implications. In this op-ed, the Center for Democracy & Technology's Justin Brookman argues that this could be the moment at which everything comes together to make comprehensive privacy reform possible. The opinions in this op-ed do not necessarily represent those of Ars Technica.

With the understandable exceptions of the national debt and the deployments of our troops abroad, privacy is possibly the hottest issue in Congress today. After ten years of limited interest in the subject, we’ve recently seen a spate of legislation introduced to give consumers rights over how their information is collected and shared.

In the House of Representatives, Reps. Bobby Rush (D-IL) and Cliff Stearns (R-FL) have each introduced separate comprehensive bills. In the Senate, John Kerry (D-MA) and John McCain (R-AZ) recently introduced the "Commercial Privacy Bill of Rights" with similar goals. The (Democrat-led) Senate Commerce Committee recently held a hearing on the topic of privacy; the next week, the (Republican-led) House Energy and Commerce Committee looked at the same thing.

In a town where positions on issues are often deeply divided along partisan lines, it’s encouraging to see that there appears to be at least one issue that both parties recognize as a problem that needs to be addressed.

Not much company
Here’s why Congress is interested: today, the United States and Turkey are the only developed nations in the world without a comprehensive law protecting consumer privacy. European citizens have privacy rights, Asian citizens have privacy rights, Latin American citizens have privacy rights. In the US, however, in lieu of a comprehensive approach, we have a handful of inconsistent, sector-specific laws around particularly sensitive information like health and financial data. For everything else, the only rule for companies is just “don’t lie about what you’re doing with data.”

The Federal Trade Commission enforces this prohibition, and does a pretty good job with this limited authority, but risk-averse lawyers have figured out that the best way to not violate this rule is to not make explicit privacy promises at all. For this reason, corporate privacy policies tend to be legalistic and vague, reserving rights to use, sell, or share your information while not really describing the company’s practices. Consumers who want to find out what’s happening to their information often cannot, since current law actually incentivizes companies not to make concrete disclosures.

This has been the case for years, of course, but in the modern era of constant connectivity, social networking, and cheap data storage and processing, the stakes are remarkably higher. Before the advent of the Internet, there were only so many data points for marketers and information brokers to collect about you, and bookstores and libraries didn’t share what you were reading. Even just a few years ago, when you went to a major publisher website, there might have been a couple of third-party trackers on the site that could drop a cookie on your computer to “anonymously” track you across other sites. Today, these same sites may deploy hundreds of trackers from dozens of different companies, many of which know your offline identity as well. What happens to all that information? With whom is it shared? No one really knows, and there is no framework to regulate it.
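To make the mechanics concrete, here is a minimal sketch, in Python, of how a third-party tracker typically assigns a pseudonymous identifier to a browser; the domain and cookie name are hypothetical, and real trackers are considerably more elaborate:

    # Sketch of a third-party tracking endpoint assigning a pseudonymous ID.
    # "tracker.example" and the "uid" cookie name are invented for illustration.
    import uuid
    from http.cookies import SimpleCookie

    def tracker_response_headers(cookie_header=None):
        """Return the Set-Cookie header a tracking endpoint might emit."""
        jar = SimpleCookie(cookie_header or "")
        if "uid" in jar:
            uid = jar["uid"].value              # returning browser: reuse its ID
        else:
            uid = uuid.uuid4().hex              # first visit: mint a fresh ID
        out = SimpleCookie()
        out["uid"] = uid
        out["uid"]["domain"] = "tracker.example"           # third-party scope
        out["uid"]["path"] = "/"
        out["uid"]["max-age"] = str(60 * 60 * 24 * 365)    # persist for a year
        return {"Set-Cookie": out["uid"].OutputString()}

    # First request carries no cookie, so a new identifier is assigned.
    print(tracker_response_headers(None))

Because the cookie is scoped to the tracker's own domain, the same identifier comes back on every publisher page that embeds that tracker, which is what lets a browsing history be stitched together across sites.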

Bad for business
This black box into which our data flows is bad for consumers, but it’s increasingly an impediment to US businesses as well. As Silicon Valley companies encourage consumers to store their personal data in “the cloud,” people are legitimately asking, “Why? What’s going to happen to my data there?” Today, the US is the undisputed leader in cloud computing services, but international competitors are increasingly advertising the fact that their services aren’t US-based. The Department of Commerce recently issued a report arguing that the lack of privacy protections threatens both the adoption of new technologies by worried consumers and the ability to have international data sent to the US. Last week, Forrester Research released a study showing that privacy concerns were the biggest impediment to the growth of e-commerce on mobile technologies.

Companies would be better off if they all provided meaningful privacy protections for consumers, but privacy is a collective action problem for them: many companies would love to see the ecosystem fixed, but no one wants to put themselves at a competitive disadvantage by imposing unilateral limitations on what they can do with user data. It’s fantastic to see companies endeavoring to compete on privacy (such as Google touting the privacy features of its new social network), but so far such competition has been spotty and often takes place at the margins. Many companies that touch and store consumer data don’t have consumer-facing sides (like the ever-increasing number of intermediaries in the behavioral advertising space), so it’s hard to see the Internet ecosystem fixing itself.

And let’s be frank: so far, self-regulation hasn’t been enough. Increasingly, leading multinational corporations have recognized this problem, and companies like Microsoft, Intel, and HP that have heavily invested in cloud technologies have endorsed specific legislative solutions such as the Kerry-McCain and Rush bills to provide consumers with comprehensive privacy protections.

Any privacy law that is enacted doesn’t need to, and shouldn’t, prohibit data sharing or invalidate business models. However, consumers have a right to know what’s happening with their information and to have a say in how it gets shared. If a company insists on sharing data about a consumer as a condition of service, fine. As long as that fact is clearly conveyed, and the consumer decides to accept the terms, we shouldn’t put limits on what consumers are willing to do with their own information. Unfortunately, consumers today aren’t even told what’s happening, so they can’t exercise meaningful control over their data unless they take extreme measures to anonymize their surfing through services like Tor or block third-party content (which surely isn’t the right result for anyone).

So will a new law be passed? As with anything in Washington, it’s hard to say what will happen—Congress has a lamentable tendency to kick problems down the road for another day. However, with tremendous attention to privacy issues and widespread consumer support for basic consumer protections, we have the best opportunity in memory to enact basic rules to give people control of their personal information and to give them confidence in an increasingly complex data ecosystem. We should take advantage of this moment to develop a considered consensus on reasonable baseline protections that work for both consumers and businesses.

Justin Brookman is Director of the Consumer Privacy Project at the Center for Democracy & Technology in Washington, DC.

Privacy Isn't Dead. Just Ask Google+.

July 18, 2011, 12:59 pm
Privacy Isn’t Dead. Just Ask Google+.
By NICK BILTON
http://bits.blogs.nytimes.com/2011/07/18/privacy-isnt-dead-just-ask-google/

Some people have a very hard time trusting Facebook.

After dozens of privacy problems over the years, they’ve grown extremely wary of what the company is doing with their personal information. I, for one, rarely use Facebook anymore, beyond an occasional comment or “Like.”

My Facebook fears stem from the several instances in which the company has added new features to the site and chosen to automatically opt in hundreds of millions of users, most of whom don’t even know they’ve been signed up for the new feature. I’ve also been worn down by the company’s hyper-confusing privacy policy, which requires users to navigate a labyrinth of buttons and menus when trying to make their personal information private.

For Facebook, these breaches of people’s personal privacy rarely result in any repercussions: the negative press is usually temporary, and users have mostly stayed with the service, saying there isn’t a viable alternative social network for talking to family and friends.

That is, until now.

Enter Google+, which started last month and has already grown to 10 million users. Rather than focus on new snazzy features — although it does offer several — Google has chosen to learn from its own mistakes, and Facebook’s. Google decided to make privacy the No. 1 feature of its new service.

I learned this lesson accidentally last week. When I signed up for Google+, I quickly posted a link to a New York Times article I wanted to share with people. Several hours later my Google+ link lay dormant. No comments. No +1 clicks. And no reshares.

It wasn’t until later that I realized that my post had been made private by default; a Google+ user has to specifically say they want to share a post publicly. By doing this, Google has chosen to opt users out of being public, rather than follow the standard practice of most other services and automatically opt users in.

This isn’t to say Google is perfect. The company has had its fair share of privacy problems, most recently last year when it started Google Buzz, a social networking service that turned into a privacy disaster and resulted in calls in Congress to investigate the company.

With Google’s latest offering, it seems that the company has not only learned its lesson about the importance of privacy for consumers online, but also realized that Facebook still hasn’t learned that lesson.

Wednesday, July 13, 2011

The Importance of FIPs in data exchange

Channel: RHIOs/HIEs
Source: Lorraine Fernandes, global healthcare ambassador, IBM
Date: Jul 12, 2011
http://www.nhinwatch.com/perspective/importance-fips-data-exchange


Many of the discussions held over the past year by the Department of Health and Human Services' Office of the National Coordinator Privacy and Security Tiger Team have invoked the FTC's Fair Information Practices (FIPs). Why? Because the Health Insurance Portability and Accountability Act (HIPAA) does not address one of today's most critical healthcare issues: data sharing. In the absence of updated regulations, the FIPs offer a comprehensive framework for moving forward.

The best way to move forward is to remove the emotion from the privacy and consent debate and instead look at this in a practical, constructive fashion. Perhaps Paul Tang, vice chair of the HIT Policy committee and member of numerous workgroups, said it best during one of the Tiger Team meetings last summer: "What would a patient expect?"

The Markle Foundation submitted a letter to the Department of Commerce on February 18, 2011, concisely articulating the importance of FIPs in today's society. As suggested in the letter, titled "The Need for a Coordinated Department of Commerce Policy on Consumer Protection and Privacy," we must look at data in a broader fashion and recognize that when we talk about data, we are really talking about consumer data, not healthcare data. This broader consumer framework paves the way for us to move away from our current prescriptive system, which focuses too much on regulations, toward a set of principles that allows us to respond to innovation and changing technology. There is a place for regulations, but let's have that dialogue after we have a solid foundation.

Let's ponder for a moment the FIPs and how we can use them to help achieve the goals of improving individual and population health.

Openness and Transparency - Consumers should be able to readily access data-usage policies, understand the collection and use of their data, and be able to limit the use of their data if they choose to do so. This can be achieved by public notices, website postings, social media and other more traditional approaches. Full transparency is crucial to building consumer trust.

Purpose Specification and Minimization - Data use should be specified at the time of collection and use should be limited only to those stated purposes. And if there is a proposed change in the use, the consumer should be notified. The classic "bait and switch" should never occur with consumer data.

Collection Limitation - This might also be coined "minimum data necessary." Don't collect more data than what is needed for the purpose at hand. This is particularly true when dealing with sensitive data like Social Security numbers, certain clinical conditions and past histories in a treatment setting. Perhaps the standard question when developing new data collection practices should be: "Do I really need this data to achieve my goals?" (A small sketch of this idea appears after the last principle below.)

Use Limitation - Data should be used only for the stated purpose. No dissemination or re-use should be undertaken unless consistent with the use limitation. For example, personally identifiable information should not be used for research unless the patient has been notified.

Individual Participation and Control - Consumers should understand how their data will be used. I think Dr. Tang's "What would the patient expect?" question really articulates a clear practice matching this FIP. Consumers should be notified on a timely basis if there is a data breach. The Phase 1 Meaningful Use requirement for patient access to their data also nicely matches this principle. Patients should be able to conduct a "consumer audit" to find out where their data has been used, whether that data is identifiable, de-identified or limited.

Data Integrity and Quality - Data collected (consistent with the other FIPs) should be accurate, complete and up to date. It should also include attribution (the originating source of the data). If problems are identified with the data quality, then the consumer should have remedies consistent with the FIPs.

Security Safeguards and Controls - Reasonable safeguards should be employed to protect against data theft, breach and unauthorized access. Clearly this is a problematic area, given the frequency of laptop thefts that expose unencrypted data.

Accountability and Oversight - Those in control of consumer information must be accountable for following the FIPs. If breaches occur, those responsible must be disciplined consistent with policies and remedies.

Remedies - Remedies should be documented, transparent and must address what happens if there is a breach or privacy violation.
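To make the Collection Limitation idea concrete, here is a minimal sketch in Python; the purposes and field names are hypothetical, not drawn from any particular system, and a real implementation would sit behind whatever intake forms or interfaces collect the data:

    # Hypothetical purposes and field names, purely to illustrate "minimum data
    # necessary": keep only the fields the stated purpose requires, drop the rest.
    ALLOWED_FIELDS_BY_PURPOSE = {
        "appointment_scheduling": {"name", "date_of_birth", "phone"},
        "billing": {"name", "insurance_id", "address"},
    }

    def minimize(record, purpose):
        """Return only the fields permitted for the stated collection purpose."""
        allowed = ALLOWED_FIELDS_BY_PURPOSE.get(purpose, set())
        return {field: value for field, value in record.items() if field in allowed}

    submitted = {
        "name": "Jane Doe",
        "date_of_birth": "1980-01-01",
        "phone": "555-0100",
        "ssn": "123-45-6789",   # not needed for scheduling, so it is never stored
    }
    print(minimize(submitted, "appointment_scheduling"))

The same purpose-to-fields mapping could be checked again at disclosure time, which is one way to keep Use Limitation consistent with what was promised at collection.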

Following these basic practices and associated principles, and tying all discussions about data collection and exchange to the FIPs, would go a long way toward building consumer trust and confidence. If we used these practices as a framework, the discussions could be more rational, pragmatic, understandable and results-oriented. And we can't pick and choose; we must use the FIPs as a whole.

When the FIPs are "front and center," consumers are front and center, and that is the only path that leads to trust in electronic health records and data exchange.

Lorraine Fernandes, RHIA, is the global Healthcare ambassador for IBM.

How Google and Data-Mining Drive Economic Inequality in Our Nation

Nathan Newman, July 11, 2011  Huffington Post

This is the first part in a three-part series running this week at HuffPost on why lost privacy online matters for economic equity.

Why has economic inequality increased so radically in the United States over the last generation?
General explanations range from globalization to the decline in trade unions to rising returns to education -- and therefore the loss of income to the less educated. These all no doubt play a role, but in an age of information what is unquestionably true is that control of that information is extremely unequal -- and that inequality drives broader economic inequality in our economy.

Information is power and as companies know more and more about us, while the products they sell become more opaque and complicated -- think mortgage-based Collateralized Debt Obligations (CDOs) -- inequality in information begets a massive transfer of wealth from individuals to corporations and to their shareholders. Companies figure out not just what to sell you but the maximum price you and other people like you will pay for that product.

Privacy is About Economic Power and Inequality: The debate on privacy online is therefore not about whether you think it's creepy that corporations are tracking your online activities. You may not have a strong "ick" factor from corporate surveillance per se -- I don't myself -- but what you should care about is that lost privacy is converted by those companies into information that ultimately drives greater economic inequality in our country.

One original promise of the Internet was that "no one knows you're a dog on the Internet" but we have instead evolved through data-mining and online surveillance into a world where not only do companies know what you are, they know where you are and what you are most interested in. For the economically privileged, that may not seem like much of a problem and even a benefit since companies may be able to service your needs more effectively. But for those who already suffer discrimination and exploitation, whether because of race, poverty or other factors, it means that the Internet can just magnify and target that discriminatory treatment and exploitation.

Which brings us to the Federal Trade Commission antitrust investigation into Google. The problem with Google is not that users don't have enough competing options on search engines but that Google's dominance of search and other online products allows it to extract more private information from users than any other corporation. And as I described in my piece back in March, You're Not Google's Customer, You're the Product, Google's real customers are the whole array of corporations who buy access to that user information to know how to effectively market their products and increase their profits.

Google at the Nexus of the Marketing of Privacy: Google is the key nexus in the information age, pricing individual privacy and monetizing it for the benefit of global corporations. They are the dominant middleman between hundreds of millions of people -- even approaching billions globally -- and the corporations using that Google-generated profiling to market their products and extract profit for their shareholders.

And it is that global market power over private individual data by Google that antitrust regulators need to investigate in order to counteract the rising inequality in the information economy. The cost of lost privacy driven by Google is corporate data-mining and manipulated prices across a whole array of markets and the exacerbation of multiple forms of discrimination in the marketplace. Google's monopoly dominance of personal information thereby helps leverage the broader corporate dominance of our lives by the companies using its data.

Why Free is a Bad Deal: The first step in how lost privacy increases economic inequality begins at the moment users give away their private information in the first place. Google offers the enticement of free services in exchange for users turning over a whole range of basic personal data and even what their basic desires are in the form of the whole record of what they search for on Google's pages.

What could be better than free, most users think, as they take the deal offered? It's a bit like how early bank customers might have felt, being told the bank would keep their money safe for free, only later figuring out that the bank was making tons of money lending that money to other people. The free Google tools into which users drop their private information are like the vaults banks offered to store your money: not a service but a honeypot that allows both banks and Google to resell what users deposit there. Bank customers now expect actual payment in the form of interest on money deposited in banks, but most Google customers don't even recognize that their private information has real economic value.

To put it another way, the fact that users are de facto involved in barter with Google, trading privacy for individual tools, should tell you this is an exploitative situation. Like most barter economies, pricing is opaque and creates massive opportunities for economic arbitrage by the sophisticated side of the barter transaction -- i.e., Google. Essentially, Google users are the primitive tribes of the Internet, accepting the shiny trinkets of Gmail and free search in exchange for their privacy.

Google then takes that private information and monetizes it with advertisers, who pay very precise dollar amounts in the modern part of the Google economy. And those advertisers pay prices far above what Google spends on the tools provided to users -- as highlighted by Google's massive profits year after year. That advertising side of Google's internal economy is actually a monument to converting privacy into a modern currency, with sophisticated auctions for keywords and phrases based on the particular user demographics and backgrounds an advertiser may be looking for. One analyst describes this less as the sale of privacy itself by Google than as the sale of a "privacy derivative," where companies invest in Google's appraisal of customers' needs and wants. (See Karl T. Muth's Googlestroika: Privatizing Privacy for more on how Google monetizes user privacy.)
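As a rough illustration of how such an auction turns a user profile into a price, here is a simplified second-price keyword auction in Python; this is a sketch of the general mechanism, not Google's actual system, and the bids are invented:

    # Simplified second-price auction: the highest bidder wins the ad slot but
    # pays the runner-up's bid. Richer user profiles attract more and higher bids.
    def run_auction(bids):
        """bids: list of (advertiser, dollar_bid). Returns (winner, price_paid)."""
        ranked = sorted(bids, key=lambda b: b[1], reverse=True)
        winner = ranked[0][0]
        price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]   # second-price rule
        return winner, price

    # Invented bids for a keyword shown to a user profiled as a homeowner
    # in a high-income ZIP code.
    bids = [("LenderA", 4.50), ("LenderB", 3.75), ("BrokerC", 1.20)]
    print(run_auction(bids))   # ('LenderA', 3.75)

The more precisely a user can be profiled, the more advertisers are willing to bid for that impression, which is exactly the sense in which lost privacy is converted into revenue.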

So the first step in the transfer of wealth via Google is users selling their privacy for too little and Google arbitraging their ignorance for profit. If Google had less dominance of the online advertising field, there would be far greater pressure on Google to develop as sophisticated a market for compensating users for their privacy as the markets in which it resells that lost privacy.

To get some sense of the value of user information, look at the recent controversy over another big Internet player, namely Apple, when it demanded that sellers of subscriptions to apps on the iPhone give Apple not just 30% of sales, but sole control of user information as well. Lauren Indvik at Mashable noted that publishers like the Financial Times may not have liked the 30% cut Apple wanted from subscriptions, but "the main problem is that Apple will not share subscriber data with publishers, long one of publishers' most valuable assets, particularly to advertisers." Think about it -- your personal data is worth potentially more than 30% of the cost of what you are purchasing, and most users give it away for free to companies like Google and Apple.

And Google is looking to leverage its position at the nexus of the Internet to further expand its data collection on users -- and the opportunities for marketing that data in Internet commerce. Most recently, Google has been making a play to insert what's called NFC (near field communication) technology into every smartphone and turn phones into wireless credit cards -- and a substitute for every other card you carry -- which would make all commerce easier for users while giving Google information on every transaction they make and providing even more expanded data on user shopping habits. Google is marching from dominance over information about online commerce to trying to dominate information about offline shopping as well.

In part 2 of this series, I'll look at why this personal information is so valuable to advertisers and how it empowers what economists call "price discrimination" and just plain old racial discrimination. Part 3 will look at the role of Google in the subprime mortgage debacle and its aftermath, as well as the broader antitrust implications of the company's dominant role as an intermediary for behavioral targeting of consumers by advertisers.

Nathan Newman, a lawyer and Ph.D., has an extensive history of supporting local policy campaigns, from coalition organizing work to drafting legislation. Previously Executive Director of Progressive States, an Associate Counsel at the Brennan Center for Justice, Program Director of NetAction's Consumer Choice Campaign, and co-director of the UC-Berkeley Center for Community Economic Research, he has also been a labor and employment lawyer, freelance columnist and technology consultant. He received his J.D. from Yale Law School and his Ph.D. in Sociology from the University of California at Berkeley and has written extensively about public policy and the legal system in a range of academic and popular journals, including publishing a book, Net Loss: Internet Prophets, Private Profits and the Costs to Community, detailing the relationship between telecommunications public policy and local economic development. His writing and organizing have been cited in the New York Times, USA Today, San Jose Mercury News, Baltimore Sun, Wired, Village Voice, ZDNet, CNet News, San Francisco Chronicle, TheStreet.com, Chronicle of Higher Education, MIT’s Technology Review, The Nation and the American Prospect. He runs his own site at www.nathannewman.org and a technology policy site, www.tech-progress.org.

Lack of Genuine Privacy Interest Doomed Vermont Drug Marketing Law

  
Deven McGraw | Monday, July 11, 2011 | iHealthBeat

On June 23, the Supreme Court issued its much anticipated decision in Sorrell v. IMS Health, striking down as unconstitutional a Vermont statute that prohibited the use of drug prescribing information for marketing purposes. In a 6-3 decision, the court found that the Vermont law violated the free speech rights of drug marketers. 

A number of privacy advocates had weighed in on the case, seeing it as a showdown between privacy and corporate claims of free speech rights. The Center for Democracy & Technology was skeptical of the privacy arguments made in defense of the law, but we too were worried about its potential impact on a range of health privacy and health IT issues.

After thorough review of the opinion, it is clear that the case should not be read as a threat to well-crafted privacy laws. As interpreted by the Supreme Court, the Vermont statute was an explicit effort to control specific speech by specific speakers -- a double no-no in First Amendment jurisprudence. And, as a privacy law, it was ineffective because it allowed pharmacies to share the covered information with anyone for any reason save one: marketing by drugmakers.

Ironically, a more comprehensive regulation of prescription data -- motivated by a genuine interest in protecting privacy and drawn to serve that interest -- would have been more likely to have been upheld.

Why Did the Supreme Court Strike Down This Law?

To begin with, it is important to recognize that patient privacy was not at issue in Sorrell v. IMS Health because the data in question did not identify patients. Instead, the data identified prescribers, primarily doctors, and their prescribing patterns. In a process known as "detailing," drug company sales representatives use the data when they visit a doctor's office to persuade the doctor to buy a particular pharmaceutical, which the court noted was almost always a "high-profit brand-name" drug.

The Supreme Court found that the intent of the law was targeted solely at the marketing of brand-name drugs by drugmakers. The law prohibited the sale of prescriber-identifying data without the prescriber's consent, but the exceptions to that prohibition were so broad that they actually allowed sale to anyone except drugmakers. The law also prohibited the use of such data, absent prescriber consent, by pharmacies and drugmakers for marketing purposes. On the face of these provisions alone, the Supreme Court had no trouble finding that the law was a transparent attempt to stop pharmaceutical companies from engaging in effective marketing of their brand-name drugs. 

Matters got worse when the court looked at the findings adopted by the Vermont state Legislature when it passed the law. Those findings expressly said, "the goals of marketing programs are often in conflict with the goals of the state." Since the Supreme Court has long held that marketing is "speech" under the First Amendment, and since the whole point of the First Amendment is to protect speech that the government doesn't like, this statement alone probably doomed the law.

Normally, commercial speech is subject to a relatively weaker form of protection than non-commercial speech. But once the Supreme Court found the Vermont law was targeting a specific kind of speech -- drug marketing -- by a specific kind of speaker -- drug companies -- the law became subject to what the court calls "heightened scrutiny." On top of that, the court found the law appeared to allow the use of prescriber-identifying data to promote less-expensive generic drugs. 

So, in the Supreme Court's view, the law allowed covered information to be used for those marketing messages the state considered to be good, and only prohibited its use for marketing messages the state thought were bad. That kind of control is called "viewpoint" discrimination -- where the government is targeting only one side of an issue -- and that is the ultimate offense under the First Amendment.

Even with all of that, the Supreme Court said that the Vermont law might have withstood scrutiny if it had in fact been well crafted to serve a legitimate state interest. And the court assumed that protecting doctor privacy was a legitimate state interest. The problem was that the law totally failed to protect privacy and was not an appropriate response to the other goals the state advanced in its defense.

In rejecting the privacy claim, the Supreme Court emphasized that under the Vermont law, "pharmacies may share prescriber-identifying information with anyone for any reason save one:" marketing. The court noted that the state "all but conceded" that the statute does not advance confidentiality interests. Further, arguments that the law also was intended to protect doctors from aggressive sales tactics carried no weight with a court that had previously held that the First Amendment protects speech even when it "may move people to action, bring them to tears or inflict great pain."

The state also argued that the law advanced legitimate public policy goals by lowering the cost of health care. That is a legitimate goal, the court agreed, but the government cannot pursue it by curtailing speech. Quoting from an earlier decision, the Supreme Court said, "the fear that people would make bad decisions if given truthful information cannot justify content-based burdens on free speech." The court said that if the government wants to control health care costs, it has to do so directly, not by curtailing speech or cutting off access to information that is used in speech the state thinks exacerbates the cost problem.

In sum, because the statute discriminated both on the basis of content and viewpoint, and because it was not actually drawn to serve its stated goal of protecting doctor privacy, it could not survive scrutiny under the First Amendment.

What Are the Potential Implications of This Decision?

The Supreme Court's decision might mean that similar drug marketing laws adopted for similar reasons by Maine and New Hampshire also are unconstitutional. In addition, the case is highly relevant to other laws that try to specifically regulate advertising. Beyond that, however, the case probably sets no new standards for review of health privacy or privacy regulation in general.

Some organizations had urged the court to find that the data at issue could identify patients. This implicated the question of whether the HIPAA de-identification standard provides sufficient protection for patient privacy. The Supreme Court did not take the bait on that issue. It never questioned the premise that the data were adequately de-identified as to patients. Consequently, the important public policy considerations surrounding de-identification should be resolved by legislatures and regulatory bodies, which are better suited to handle them.

Most importantly, the case does not deal a death blow to privacy regulation. To the contrary, the Supreme Court noted that the state could have advanced its asserted privacy interest "by allowing the information's sale or disclosure in only a few narrow and well-justified circumstances." Such a statute, said the court, "would present quite a different case than the one presented here." To illustrate its point, the Supreme Court specifically cited the HIPAA regulations, suggesting they were an example of a more comprehensive privacy regime that would be upheld.

Moreover, the opinion includes strong rhetoric showing the Supreme Court is sensitive to the privacy threats posed by modern IT. In particular, the court noted that "[t]he capacity of technology to find and publish personal information, including records required by the government, presents serious and unresolved issues with respect to personal privacy and the dignity it seeks to secure." 

Like many Supreme Court opinions, Sorrell v. IMS Health includes various broad statements that could be misconstrued if taken out of context. For example, at one point, the opinion says that there is a First Amendment right to collect and disclose facts. But that does not mean that any burden on the collection and dissemination of facts is impermissible under the First Amendment.

To the contrary, as the Court made clear, privacy is a legitimate state interest that can in some contexts be protected consistently with the First Amendment, if the burden on speech is carefully drawn to serve that interest. What the First Amendment will not tolerate is regulatory subterfuge. As the court said, "Privacy is a concept too integral to the person and a right too essential to freedom to allow its manipulation to support just those ideas the government prefers."

MORE ON THE WEB
· Supreme Court Decision in Sorrell v. IMS Health
· "Supreme Court Case on Rx Data Mining Requires Nuanced Understanding of Privacy" (McGraw, iHealthBeat, 4/19)
· "Sorrell v. IMS Health Has Far-Reaching Privacy Implications" (McGraw, CDT blog, 5/6)
· "Encouraging the Use of, and Rethinking Protections for, De-Identified (and "Anonymized") Health Data" (McGraw, CDT, 6/25/2009)

Read more: http://www.ihealthbeat.org/perspectives/2011/lack-of-genuine-privacy-interest-doomed-vermont-drug-marketing-law.aspx

Friday, July 1, 2011

FTC: Consumer Confidence in Internet Marketplace Depends on Privacy Protections, FTC Tells Senate Commerce Committee

06/29/2011

The Federal Trade Commission today told Congress that consumers must be confident that their privacy will be protected if they are to be willing to take advantage of all the benefits offered by the Internet marketplace.

Commission testimony to the Senate Committee on Commerce, Science and Transportation, delivered by Commissioner Julie Brill, states that, “Privacy has been an important component of the Commission’s consumer protection mission for 40 years. During this time, the Commission’s goal in the privacy arena has remained constant: to protect consumers’ personal information and ensure that they have the confidence to take advantage of the many benefits offered by the dynamic and ever-changing marketplace.”

The FTC’s testimony states that the FTC has taken a three-pronged approach to preserving consumers’ privacy – law enforcement actions, consumer and business education efforts and policy initiatives.

It notes that in the last 15 years, the agency has brought more than 300 privacy-related actions, including: 34 data security cases; 84 Fair Credit Reporting Act cases; 97 spam cases; 15 spyware cases; and 16 cases enforcing the Children’s Online Privacy Protection Act.

In addition, the testimony states that the agency has distributed millions of copies of consumer and business education materials that address basic privacy issues and security and privacy threats.

Policy initiatives to advance the agency’s privacy agenda include three privacy roundtables that involved privacy experts, business representatives, and academics who examined the implications of new technologies and business practices on consumer privacy. Based on the roundtable discussions, FTC staff issued a preliminary report proposing a privacy framework with three main concepts, the testimony states.

“Staff recommended that companies should adopt a ‘privacy by design’ approach by building privacy protections into their everyday business practices, such as collecting or retaining only the data they need to provide a requested service or transaction, and implementing reasonable security for such data,” according to the testimony.

The staff report also called for companies to provide an easy way for consumers to control the collection and use of their personal information. “One example of how choice may be simplified for consumers is through a universal, one-stop choice mechanism for online behavioral tracking, often referred to as “Do Not Track.” The testimony explained that “any Do Not Track system should not undermine the benefits that online behavioral advertising has to offer, by funding online content and services and providing personalized advertisements that many consumers value.” Any Do Not Track mechanism should be “flexible” and “should allow companies to explain the benefits of tracking and to take the opportunity to convince consumers not to opt out of tracking,” and “could include an option that enables consumers to control the types of advertising they want to receive and the types of data they are willing to have collected about them, in addition to providing the option to opt out completely.” The testimony notes that the industry “appears to be receptive to the demand for simple choices.”
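One widely discussed way to implement such a mechanism is a browser-sent HTTP request header, commonly written as "DNT: 1". The sketch below, in Python with a hypothetical endpoint and cookie name, shows how an ad server could honor that signal; it illustrates the concept rather than any specification the FTC has endorsed:

    # Minimal sketch of header-based Do Not Track, assuming the "DNT: 1"
    # request-header convention. The endpoint and cookie name are hypothetical.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class AdHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            opted_out = self.headers.get("DNT") == "1"   # user asked not to be tracked
            self.send_response(200)
            if not opted_out:
                # No opt-out signal: assign or refresh a behavioral-tracking cookie.
                self.send_header("Set-Cookie", "bt_uid=abc123; Max-Age=31536000; Path=/")
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"no tracking\n" if opted_out else b"tracking\n")

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), AdHandler).serve_forever()

A "flexible" mechanism of the kind the testimony describes would layer finer-grained choices, such as which categories of advertising or data collection to allow, on top of a simple signal like this.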

In addition, the staff report recommended that “companies should improve their privacy notices so that consumers, advocacy groups, regulators, and others can compare data practices and choices across companies, thus promoting competition,” the testimony states.

The testimony notes that while the FTC has not taken positions advocating any particular legislative proposals, it favors data security legislation “that would (1) impose data security standards on companies, and (2) require companies, in appropriate circumstances, to provide notification to consumers when there is a security breach.” The testimony states that the Commission is committed to protecting consumers’ privacy, both online and off, and looks forward to working with Congress to achieve that goal.

The Commission vote to issue the testimony was 5-0, with Commissioner J. Thomas Rosch issuing a separate statement recommending that the Commission and Congress learn more about Do Not Track before proceeding.
