Wednesday, November 24, 2010

Deep Packet Inspection is back

Shunned Profiling Technology on the Verge of Comeback

By STEVE STECKLOW and PAUL SONNE WSJ, November 24, 2010

One of the most potentially intrusive technologies for profiling and targeting Internet users with ads is on the verge of a comeback, two years after an outcry by privacy advocates in the U.S. and Britain appeared to kill it.

The technology, known as "deep packet inspection," is capable of reading and analyzing the "packets" of data traveling across the Internet. It can be far more powerful than "cookies" and other techniques commonly used to track people online because it can be used to monitor all online activity, not just Web browsing. Spy agencies use the technology for surveillance.
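To make the distinction concrete, here is a minimal, illustrative Python sketch of what "deep" means: shallow inspection stops at the IP and TCP headers (addresses and ports), while deep packet inspection also parses the application payload, here pulling the Host header out of an HTTP request. The packet builder, field values and domain below are simplified assumptions for demonstration only; real DPI engines reassemble streams and classify many protocols.

```python
import struct

def inspect_packet(ip_packet: bytes) -> dict:
    """Parse an IPv4/TCP packet. Shallow inspection stops at the headers;
    the 'deep' step below reads the application payload as well."""
    ihl = (ip_packet[0] & 0x0F) * 4              # IPv4 header length in bytes
    info = {
        "proto": ip_packet[9],                   # 6 = TCP
        "src": ".".join(map(str, ip_packet[12:16])),
        "dst": ".".join(map(str, ip_packet[16:20])),
    }
    if info["proto"] == 6:
        tcp = ip_packet[ihl:]
        payload = tcp[(tcp[12] >> 4) * 4:]       # skip the TCP header too
        # The "deep" step: look inside the payload itself,
        # e.g. for the Host header of an HTTP request.
        for line in payload.split(b"\r\n"):
            if line.lower().startswith(b"host:"):
                info["http_host"] = line.split(b":", 1)[1].strip().decode()
    return info

def build_demo_packet() -> bytes:
    """Hand-assemble a fake IPv4+TCP packet carrying an HTTP GET."""
    payload = b"GET /news HTTP/1.1\r\nHost: example.com\r\n\r\n"
    ip_hdr = (bytes([0x45, 0x00]) + struct.pack("!H", 40 + len(payload))
              + b"\x00\x00\x40\x00\x40\x06\x00\x00"  # id, flags, TTL, proto=TCP, checksum
              + bytes([10, 0, 0, 1]) + bytes([93, 184, 216, 34]))
    tcp_hdr = struct.pack("!HHIIBBHHH", 52100, 80, 0, 0, 5 << 4, 0x18, 65535, 0, 0)
    return ip_hdr + tcp_hdr + payload

print(inspect_packet(build_demo_packet()))
```

A header-only tool sees just 10.0.0.1 talking to 93.184.216.34 on port 80; the payload step also recovers which site was requested, which is why the technique can monitor all online activity rather than only what a browser cookie records.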

Now, two U.S. companies, Kindsight Inc. and Phorm Inc., are pitching deep packet inspection services as a way for Internet service providers to claim a share of the lucrative online ad market.

Kindsight and Phorm say they protect people's privacy with steps that include obtaining their consent. They also say they don't use the full power of the technology, and refrain from reading email and analyzing sensitive online activities.

Use of deep packet inspection this way would nonetheless give advertisers the ability to show ads to people based on extremely detailed profiles of their Internet activity. To persuade Internet users to opt in to be profiled, Kindsight will offer a free security service, while Phorm promises to provide customized web content such as news articles tailored to users' interests. Both would share ad revenue with the ISPs.

Kindsight says its technology is sensitive enough to detect whether a particular person is online for work or for fun, and can target ads accordingly.

"If you're trying to engage in one-stop-shopping surveillance on the Internet, deep packet inspection would be an awesome tool," says David C. Vladeck, director of the Federal Trade Commission's Bureau of Consumer Protection. When deep packet inspection is used for targeted ads, the FTC has made it clear that broadband providers "should, at a minimum, notify consumers that the ISP was mining the information and obtain clear consumer consent," Mr. Vladeck says.

Kindsight, majority owned by telecommunications giant Alcatel-Lucent SA, says six ISPs in the U.S., Canada and Europe have been testing its security service this year, although it isn't yet delivering targeted ads. It declined to name the clients.

"These are tier-one ISPs we're working with," says Mike Gassewitz, Kindsight's chief executive. He says his company also has been placing ads on various websites to test the ad-placement technology and build up a base of advertisers, which now number about 100,000.

Two large ISPs in Brazil—Oi, a unit of Tele Norte Leste Participacoes SA, and Telefonica SA—currently have deals with Phorm. Oi, Brazil's largest broadband provider with about 4.5 million customers, has launched the product initially with about 10,000 people in Rio de Janeiro.

"We want to grow that," says Pedro Ripper, Oi's strategy and technology director.

A spokesman for Telefonica says it is testing the service on about 1,000 broadband customers and will evaluate the results before deciding whether to roll it out. "The user has the choice to enable or disable the service anytime he or she wants to," the company said in a statement.

Phorm is hoping to introduce its service in South Korea and eventually in the U.S. "It is designed from the ground up to ensure one thing and that is privacy," says Kent Ertugrul, Phorm's chief executive.

Kindsight and Phorm say the ISPs don't provide them with subscribers' real identities. Both also say they don't collect any personal information, read email, store users' browsing histories or monitor sensitive sites such as health blogs. Subscribers must "opt in," or give their consent to participate, both companies say.

Both the Kindsight and Phorm systems study people's behavior and interests based on the websites they visit to show them relevant ads. Mr. Gassewitz says that unlike web-based tracking methods, which generally create a single behavioral profile no matter how many people share a computer, Kindsight can "generate multiple characters per human."

"If I come online and I'm in work mode, I will show up as a very different character than when I go online Saturday morning and I'm in recreation mode," he says. The targeted ads would reflect which "character" is online.
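The article doesn't disclose how the profiling works internally. As a rough, hypothetical illustration of the idea, the sketch below splits one household connection's browsing events into separate "characters" by time of day and day of week, so weekday-daytime activity builds one profile and weekend activity another; the bucketing rule and category names are invented for the example.

```python
from collections import Counter, defaultdict
from datetime import datetime

def character_key(ts: datetime) -> str:
    """Crude stand-in for 'character differentiation': weekday business
    hours count as the work character, everything else as recreation."""
    return "work" if ts.weekday() < 5 and 9 <= ts.hour < 17 else "recreation"

def build_characters(events):
    """events: (timestamp, interest_category) pairs observed on one
    household's connection. Returns a separate interest profile per
    'character' instead of one blended profile for the whole household."""
    characters = defaultdict(Counter)
    for ts, category in events:
        characters[character_key(ts)][category] += 1
    return characters

profiles = build_characters([
    (datetime(2010, 11, 24, 10), "business-news"),  # Wednesday morning
    (datetime(2010, 11, 27, 10), "sports"),         # Saturday morning
])
# The same subscriber now has two distinct ad-targeting profiles.
```

This is what distinguishes the approach from a single browser cookie: one connection yields multiple profiles, and the ad server picks whichever "character" is currently active.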

Mr. Gassewitz calls that some of Kindsight's "secret sauce." The company this year filed a patent on its "character differentiation" technology.

A new revenue source would mark a welcome change for ISPs. The companies have been under pressure to offer ever-faster Internet services at lower prices, while Google Inc. and other companies raked in billions of dollars selling ads. Targeted ads based on people's interests or behavior generally fetch higher fees.

ISPs "feel like they have data and they ought to be able to use it," says Tim McElgunn, chief analyst at Pike & Fischer Broadband Advisory Services. "They really desperately want to."

This isn't the first time ISPs have tried this. Two years ago, ISPs in the U.S. and Britain signed deals with companies offering deep packet inspection services and a cut of ad revenue.

Those pacts fell apart after a privacy outcry. In the U.K., an uproar ensued after BT Group PLC admitted it had tested Phorm's technology on some subscribers without telling them. Last year, BT and two other British ISPs that explored deploying Phorm's service—Virgin Media Inc. and TalkTalk—abandoned it.

In the U.S., controversy erupted in 2008 over the practices of a company called NebuAd Inc., which planned to use deep packet inspection to deliver targeted advertising to millions of broadband subscribers unless they explicitly opted out of the service. At a congressional hearing, Bob Dykes, the company's founder, was grilled over its policy. NebuAd stopped doing business last year; several U.S. ISPs that signed deals with NebuAd have been hit with class-action lawsuits accusing them of "installing spyware devices" on their networks.

In an interview, Mr. Dykes said, "If I had to do things over again, I would have figured out how to architect an opt-in model."

The companies now offering ad services based on deep packet inspection believe they have learned how to make the services acceptable to privacy advocates and Internet users. This includes asking for permission up front and offering people incentives to receive targeted ads, such as Kindsight's free security service, which includes identity-theft protection. Customers can pay a monthly fee to receive no ads.

In Brazil, Phorm is emphasizing customized content on partner websites if people agree to opt in. For example, users visiting a sports website might see articles about their favorite teams (gleaned from an analysis of their surfing habits), providing an online experience different from other people.

"Receive your favorite content in an easy and practical way and without spending money!" says Oi's main opt-in screen for the Phorm service, called Navegador. "We guarantee your privacy! No personal information is input in the program, so your privacy is guaranteed!"

Oi's Mr. Ripper says more than half the subscribers offered the service in the initial launch have opted in to date. "We were very happy with it," he says. He says two outside auditors verified Phorm's privacy-protection settings.

Until 2007, Phorm was known as 121Media Inc. It delivered targeted ads, particularly pop-ups, to users who downloaded free software. The ads were "based on an anonymous analysis of their browsing behavior, which is likely to indicate their commercial and lifestyle interests," according to corporate filings.

Several Internet security companies, including Symantec Corp., flagged part of 121Media's adware system as "spyware." Microsoft's Malware Protection Center called it a "trojan," or malicious software disguised as something useful.

Facing "a combination of public perception and legal and technological challenges," 121Media said it shifted its focus in 2005 from the desktop-adware business to ISPs.

It eventually shuttered its adware business and renamed itself Phorm. The company is led by Mr. Ertugrul, a Princeton-educated former investment banker who in the early 1990s formed a joint venture with the Russian Space Agency to offer joy rides to tourists in MiG-29 fighter jets. The venture was later sold.

In February 2008, Britain's biggest ISPs—BT, Virgin Media and TalkTalk—announced plans to implement Phorm's service. Those plans quickly unraveled.

Suspicions earlier had arisen among some BT subscribers who discovered they were being routed through an unfamiliar Internet address when they tried to visit a website. Some of them contacted BT and were advised their computer might be infected with a virus, according to a person familiar with the matter.
A BT spokesman said it is "standard procedure" to take customers through "a number of steps to try and identify the issue" if they call with a question about their service.

In fact, the subscribers were part of tests BT conducted in 2006 and 2007 using Phorm's technology. When BT disclosed the testing in April 2008, the backlash was fierce, with online protests by privacy advocates and government investigations. Four members of Phorm's board of directors later resigned, including former AT&T chief executive David Dorman and ex-Coca-Cola Co. president Steven Heyer, citing differences with Mr. Ertugrul. Messrs. Dorman and Heyer declined to comment.

The three ISPs eventually bailed out. "Phorm was bad news," says David Smith, deputy commissioner of Britain's Information Commissioner's Office, which oversees data protection. He says he's not surprised Phorm is looking for clients abroad. "It was pretty clear that no one was going to touch them in the UK."

Kindsight's roots trace to an in-house project known as Project Rialto at Alcatel-Lucent, where Mr. Gassewitz once worked as a vice president of strategic planning.

A 2007 job posting on Project Rialto's website described the company's work as developing "systems that can handle [a] massive volume of data for in-depth analysis of user behavior to enable targeted advertising."

Project Rialto eventually became Kindsight, a spinoff. At an Alcatel-Lucent conference held in September 2008 in Beverly Hills, Mr. Gassewitz spoke at a session called "Merging Technology and Advertising." A summary of his comments, posted on Alcatel's website, reads in part: "Through technologies like deep packet inspection," Internet service providers "can gather even more information about consumers" than rivals such as Google or Facebook.

Mr. Gassewitz also talked about "significant privacy concerns," the summary says, and stressed that ISPs must find a way to provide measurable value to consumers "to avoid backlash."

To win over Internet users to its services, Kindsight plans to offer what it has described as a "free, always-on, always-up-to-date security service."

"Say hello to your new best friend…" it said on its redesigned website in 2008. The company later dropped the slogan. "That was early days," says Mr. Gassewitz.

Before giving away the security service free, Kindsight plans to display an opt-in screen to ISP users that explains how its technology analyzes "web sites visited and searches conducted to assign a numerical value to various interest categories." The "score" is used to deliver relevant ads.
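The mechanism described in that opt-in screen, assigning a numerical value to interest categories based on sites visited, can be sketched roughly as follows. The domain-to-category map and the simple visit counting are hypothetical stand-ins; Kindsight's actual taxonomy and scoring are not public.

```python
from collections import Counter

# Hypothetical domain-to-category map; the real taxonomy isn't public.
SITE_CATEGORIES = {
    "espn.com": "sports",
    "bloomberg.com": "finance",
    "kayak.com": "travel",
}

def interest_scores(visited_domains) -> Counter:
    """Assign a numerical value to each interest category based on the
    sites visited -- the 'score' the opt-in screen describes."""
    scores = Counter()
    for domain in visited_domains:
        category = SITE_CATEGORIES.get(domain)
        if category is not None:
            scores[category] += 1
    return scores

def best_ad_category(visited_domains):
    """Pick the highest-scoring category to decide which ad to serve."""
    scores = interest_scores(visited_domains)
    return scores.most_common(1)[0][0] if scores else None
```

A subscriber who visits espn.com twice and kayak.com once scores highest in "sports", so sports-related ads would be chosen. A production system would presumably also weight searches, recency and dwell time, and exclude sensitive categories, as both companies say they do.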

In market-research tests in North America, France and the U.K., Kindsight found that about 60% of users were willing to take the service free in exchange for receiving targeted ads, Mr. Gassewitz says. Another 10% were willing to pay for it.

Mr. Gassewitz says six ISPs have tested Kindsight's security service on subscriber groups as big as 200,000. Mr. Gassewitz says, "There was no profiling occurring, no advertising occurring, no data collection occurring."

Oi's Mr. Ripper believes that the technology's time has come. "The Internet is becoming more and more a platform to deliver very targeted messages," he says. As for deep packet inspection, "Everyone is going to get there. It's just a matter of timing."


Privacy Groups Fault Online Health Sites for Sharing User Data With Marketers

By NATASHA SINGER  NY Times  11/24/2010

QualityHealth is a popular health Web site with more than 20 million registered users that offers online medical information and e-mail newsletters on a variety of topics, including diabetes, allergies, asthma and arthritis.

But according to a complaint filed Tuesday with the Federal Trade Commission, site visitors who provide personal details about themselves might not be aware that QualityHealth collects information about people’s medical conditions, preferred medicines and treatment plans and uses it to profile its users for prescription drug marketing.

Rob Rebak, the chief executive of QualityHealth, a company also known as Marketing Technology Solutions of Delaware, did not respond to a request for comment.

QualityHealth is one of a number of companies cited in the complaint to the F.T.C. filed by four nonprofit privacy and consumer advocacy groups. In the complaint, the Center for Digital Democracy, U.S. PIRG, Consumer Watchdog and the World Privacy Forum charged that online marketing of medications, products and medical services posed fundamental new risks to consumer privacy and health because of sophisticated data collection and patient-profiling techniques.

Asserting that such techniques are unfair and deceptive, the groups asked the F.T.C. to investigate the health marketing used by some popular sites including Google, HealthCentral, Everyday Health, WebMD and Sermo, a site for medical professionals.

Some sites, the complaint said, were not transparent enough about how they tracked people through users’ online health searches and discussions or how they categorized and marketed to their conditions. Other sites may not be entirely open about how they create and use data profiles about users or blur the line between independent and sponsored content, the complaint said.

The concern, said Ed Mierzwinski, the consumer program director at U.S. PIRG, is not just about data mining and marketing that could influence patients to seek drugs they do not need or to spend more money on branded drugs rather than generics. More broadly, employers or health insurers could gain access to the consumers’ data profiles, leading to potential problems or penalties against the consumer, he said.

“You could be searching for health information about your cat or your neighbor and it could end up harming your health care in terms of denial or increased cost,” Mr. Mierzwinski said. “If people knew what kind of surveillance, eavesdropping and data mining were being used to collect information about you, encourage your use of prescription drugs and essentially use you as a research guinea pig, I think people would think twice.”

The complaint comes at a delicate moment for online and social media marketing of medical products and services. The Food and Drug Administration, which oversees drug marketing, is developing industry guidelines for digital and social media. In a two-day agency hearing on the subject last winter, many health sites, drug makers and marketing firms promoted the idea of digital health as a tool that empowered consumers, allowing patients to easily access medical information and form supportive interactive communities with other patients.

“There are clear public health benefits for health care providers and patients to be able to access truthful, scientifically accurate and F.D.A.-regulated information about medicines online from the companies that research and develop them,” said Jeffrey K. Francer, assistant general counsel of the Pharmaceutical Research and Manufacturers of America, an industry trade group. His group is still reviewing the complaint, he said.

But Jeff Chester, the executive director of the Center for Digital Democracy, said that the industry had provided the F.D.A. with a “fairy-tale version of digital health marketing.”

Online data collection techniques give “pharmaceutical companies and health marketers a kind of digital X-ray of a health consumer’s concerns, fears and behaviors,” Mr. Chester said. “There is no meaningful disclosure of how that data is collected and used.”

A notice at the bottom of QualityHealth’s member registration form, for example, provides a link to the site’s privacy policy. The policy explains that information that may or may not identify someone may be used for ads aimed at consumers.

But QualityHealth provides greater detail about its methods in its pitch to business clients — like how the site uses tailored messages, informed by patient profiling, to prompt members to seek prescriptions for specific brands from their doctors.

“We can reach consumers just before their next doctor visit,” the site says, “as well as follow up with reminders and relevant information, for maximum impact.”

The site’s privacy policy states that the company is committed to providing consumers with clear notice and choice about its practices and that visitors are asked to opt in to the company’s data use policy.

Adam Grossberg, a spokesman for WebMD, said that WebMD had always been transparent and direct with its users. “All sponsored content on our site is clearly labeled as such,” he said. He added that the company could not comment on the specifics of the complaint because it had not received it.

A spokesman for Google said the company declined to comment for this article. HealthCentral, Everyday Health and Sermo did not respond to requests for comment.

The groups decided to file their complaint with the F.T.C., Mr. Chester said, because it oversees consumer privacy issues and because the groups believed that the F.D.A., which has long overseen traditional marketing of drugs in print, radio and television, lacked the staff and the expertise to oversee online and social media drug marketing. An F.D.A. spokeswoman said the agency planned to review the complaint.

David Vladeck, the director of the F.T.C.’s Bureau of Consumer Protection, said he had not yet read the complaint. But, he said, the F.T.C. plans to soon release a report about online privacy because it is concerned about sensitive personal information.

Mr. Vladeck offered a hypothetical example of a core privacy question: “Suppose someone goes online to read about depression. Should that person get targeted with ads for antidepressants, for counseling services or books about depression?”

Tuesday, November 23, 2010

HIPAA: Holes in the fence?

By Joseph Conn  Modern Healthcare  November 23, 2010 - 12:00 pm ET

Is the primary federal privacy law up to the task of protecting patient information in the 21st century?

It's a question we put to opinion leaders in the legal, research, policy, ethics, provider and technology fields within the healthcare privacy community. It comes as hospitals and office-based physicians ramp up adoption of electronic health-record systems and join information exchanges to qualify for their share of the $27 billion in federal information technology subsidy payments available under the American Recovery and Reinvestment Act of 2009, also known as the stimulus law.

The key federal privacy law, the Health Insurance Portability and Accountability Act, was passed in 1996, an era in which the public Internet still was in its infancy.

HIPAA identified providers, payers and clearinghouses as the primary claims-creating and -handling organizations and singled them out as “covered entities” under the law, meaning they are required to comply with the law's mandates on data transaction standards and security. The HIPAA privacy protection scheme centered on them as well.

Thus, what we'll call the HIPAA paradigm sought to protect patient privacy mainly by placing a regulatory fence around this special class of organizations and individuals. Businesses that handled some of the data-processing tasks for covered entities were exempt from direct liability for privacy violations, but were contractually roped into the scheme through business associate agreements with the covered entities.

This regulatory paradigm continues to this day, with some modifications Congress enacted last year as part of the stimulus law, such as making business associates liable under HIPAA for privacy violations. By extending direct liability to business associates, in effect, the stimulus law moved the HIPAA regulatory fence out a bit, but kept covered entities in the center of the enclosure.

Federal officials have spoken often about the “foundational” importance of privacy and security. The argument goes like this: If patients don't trust that their information will be kept safe, then they won't agree to have their information stored or shared on IT systems, so the potential quality and safety and cost improvements afforded by those systems—and the government's investments in them—will come to naught.

David Blumenthal, head of the Office of the National Coordinator for Health Information Technology at HHS, said as much when he addressed an Aug. 4 meeting in Washington hosted by the Substance Abuse and Mental Health Services Administration, part of HHS.

Of the many health IT activities undertaken by his office, Blumenthal said, “none is more important than the issue that we're talking about today, generically, and that is privacy and security of healthcare information.”

“We work within the HIPAA framework, and that's extremely useful as a foundation, but we are aware that HIPAA was not constituted with the electronic age in mind, and we were tasked by the Congress with pushing beyond it,” Blumenthal said. But not everyone shares Blumenthal's faith in the usefulness of HIPAA going forward.

Hardly a week goes by when the efficacy of the HIPAA privacy paradigm in the new information age isn't called into question. For example:

·       Last year, parents sued the Texas Department of State Health Services when they learned blood samples, taken from infants for public health purposes, were used without parental consent for research. The suit led to the destruction of more than 5 million samples.
·       In February, the not-for-profit news website Texas Tribune reported the same state program also provided hundreds of the infant blood samples to the Armed Forces DNA Identification Laboratory for the creation of a genetics database to be used for military, law enforcement and security purposes.
·       In May, PatientsLikeMe, a social-networking site for patients with serious and life-ending diseases ranging from depression to ALS, discovered, according to its co-founder, that it had been scraped of members' information by an unauthorized data-collection service run by the Nielsen Co., a global marketing research firm. A Nielsen spokesman says it has halted what he called a “legacy” practice.

In all of these cases, HIPAA was not a factor because the parties, despite handling massive amounts of sensitive patient information, are not covered entities and thus operate outside the regulatory fence of the HIPAA paradigm.

In 1996, in drafting HIPAA, Congress gave itself three years to write supplemental legislation to flesh out HIPAA's bare bones structure on patient privacy. Congress failed to act in time, which triggered a HIPAA provision that transferred the responsibility for writing a complete HIPAA privacy rule to HHS. Staffers at HHS produced the initial HIPAA privacy rule in 2000. That early version called for covered entities to obtain patient consent for treatment, payment and other healthcare operations, the latter being something of a catch-all category that includes fundraising and medical underwriting.

But in 2002, HHS rewrote the HIPAA privacy rule, granting “regulatory permission” to covered entities to disclose patient information for treatment, payment and other healthcare operations without patient consent. It was a fundamental change, one that privacy advocates say may soon come back to haunt federal regulators who are now pushing hard for EHR adoption and interoperability.

The HIPAA paradigm, however, is not the only regulatory game in town, even at the federal level. Congress, for example, provided veterans with their own consent protection for records involving diagnosis or treatment of drug or alcohol abuse, HIV/AIDS or sickle cell anemia. Similarly, in 1972, Congress passed a law protecting the records of patients of federally funded treatment programs for alcohol and drug abuse.

The law is more commonly known by its location in the Code of Federal Regulations of its attendant rule, 42 CFR Part 2. Both the law and the rule apply to thousands of healthcare organizations, according to Catherine O'Neill, senior vice president and director of HIV/AIDS projects at the Legal Action Center in New York. The not-for-profit center advocates on behalf of drug- and alcohol-abuse patients and persons with criminal records.

Like the veterans' law, 42 CFR Part 2 also requires, in most cases, written patient consent for the disclosure of drug or alcohol treatment records. But unlike the veterans' law or the HIPAA paradigm, where the requirement is attached to the organization, with 42 CFR Part 2, the consent obligation flows with the data. When treatment records are moved and come into the possession of another provider or organization, the rule essentially states, “tag, you're it,” and that new provider in possession is obliged to seek patient consent to disclose those records to anyone else.
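The "tag, you're it" model, in which the consent obligation travels with the record rather than attaching to whichever organization holds it, can be illustrated with a small hypothetical data structure. This is a conceptual sketch of the idea, not an implementation of the actual regulation.

```python
from dataclasses import dataclass, field

@dataclass
class TreatmentRecord:
    """A drug/alcohol treatment record whose consent requirement is
    attached to the data itself, not to the organization holding it."""
    patient_id: str
    contents: str
    part2_protected: bool = True
    consented_recipients: set = field(default_factory=set)

def disclose(record: TreatmentRecord, recipient: str) -> TreatmentRecord:
    """Release a copy to `recipient` only with recorded patient consent.
    The copy stays protected and carries no consents forward, so the
    new holder must obtain fresh consent before any further disclosure."""
    if record.part2_protected and recipient not in record.consented_recipients:
        raise PermissionError("written patient consent required before disclosure")
    return TreatmentRecord(record.patient_id, record.contents,
                           record.part2_protected, set())  # "tag, you're it"
```

Contrast this with the HIPAA paradigm, where the obligation attaches to the covered entity: here a record handed from a treatment program to a hospital arrives still wrapped in its own consent requirement.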

Despite the elimination of consent by HHS for most healthcare information, SAMHSA fully supports retention of the consent requirement for drug and alcohol treatment records in 42 CFR Part 2, says Robert Lubran, acting director of the division of services improvement at the agency. “I think it provides a principle that people here at least feel is very important in terms of keeping this in the control of the consumer,” Lubran says.

States, too, often have special patient-consent requirements for data involving diagnosis and treatment for drug or alcohol abuse, HIV/AIDS, sexually transmitted diseases, mental health issues and sickle cell anemia.

Consent long has been considered the sine qua non of privacy.

Longtime privacy researcher Alan Westin, in his seminal book Privacy and Freedom, published in 1967, declared: “Privacy is the claim of individuals, groups or institutions to determine for themselves when, how and to what extent information about them is communicated to others.”

Westin, professor emeritus in public law and government at Columbia University, served as lead consultant on a 2007 survey by Harris Interactive on public opinion regarding privacy and healthcare research and the importance of patient consent. Westin says the removal of consent from HIPAA by federal rulemakers in 2002 “left us high and dry,” but with the improvements to HIPAA in the stimulus law, “I think the raw materials for excellence are there.” Privacy protection will depend again on HHS rulemakers, however, he says. (A proposed privacy rule addressing HIPAA modifications from the stimulus law was released by HHS in July, but a final rule is pending.) If it's not addressed, Westin says, don't be surprised if there is consumer backlash.

“I think we're at a pivotal moment,” Westin says, given the massive inflows of federal IT subsidies about to begin. “Just imagine a lawsuit as a class action with all the people who would otherwise be swept into a network saying, ‘I did not give my consent,' and asking the court to intervene.”

Meanwhile, Westin says he sees “a dangerous trend” developing in healthcare IT in which patients are regarded as “inert data elements, not conscious persons” who have the right to make informed choices regarding “how their health information is used beyond the direct care settings.” “You have to have privacy-orienting systems at the design,” he says. “If the plumbing all gets in, it's going to be very costly to tear it down and change it.”

While much of the federal privacy focus is on HIPAA, in the U.S., “what we're really talking about is a mosaic of policies,” says Ioana Singureanu, a health IT standards development consultant with her own firm, Eversolve, in Windham, N.H.

Singureanu has been a member since 1997 of Health Level Seven, a prominent healthcare IT standards development organization. Most recently, she's been working with HL7 on developing guidelines for electronic patient consent directives for EHRs and data exchanges.

While HIPAA places a privacy floor under both the states and the federal government, “the floor has to be raised, that's quite clear,” Singureanu says. “If you raise the floor, then you can make this mosaic a little bit more manageable.”

But reliance on policies as the sole protector of individual privacy won't work, either, she says. Technology itself needs to be brought to bear to create tools to aid providers in enforcing those publicly evolved privacy policies. “I think the systems of the future will have to be more proactive, to prevent you from doing what you're not allowed to do by policy,” she says.

“The technology exists already to protect certain information that meets specific criteria,” she says. “That's not too different than the quality measures that people are being asked to collect automatically. The challenge is to formulate rules in such a way that they actually live up to the spirit of the policy.”

Singureanu says Australia and the Canadian province of British Columbia, as well as the U.S. Veterans Affairs Department's health system, all “have some sort of form they use to record your preferences. These are in use now.” Other countries also do well in providing technologies for patients to revoke previously given consent, she says.

IT-enabled privacy protection functions need to be included in EHR certification criteria and their use made part of the meaningful-use criteria under the stimulus law's EHR incentive program, she says.

Kenneth Goodman, professor of medicine and philosophy and the director of the bioethics program at the University of Miami, says he sees HIPAA as part of the health IT furniture.

“Is HIPAA a good place to start for moving into the new world of ubiquitous IT?” Goodman asks. “It better be. Because starting over isn't a practical or politically viable option. I'm sure if the framers of HIPAA would have do-overs, they'd do it differently. I believe that HIPAA can be improved. It's the best we've got.”

Goodman co-authored an article on ethics, policy, EHRs and biobanking published in February in Science Progress, an online science and policy magazine of the liberal Center for American Progress, a Washington-based think tank. In it, Goodman argues that individuals have an obligation to provide access to their healthcare information for the public good and that society has both a right and a duty to use that information to improve community health.

A new challenge, Goodman says, will be to regulate against the abuse of data outside the scope of HIPAA. “You encounter personal health records, where people put their health information on a cell phone, or on Google and Microsoft, and Google and Microsoft are not covered entities. We need to figure out what the privacy framework is for personal health records and other sharing of personal information.”

Deborah Peel is a practicing psychiatrist who founded the Patient Privacy Rights Foundation in Austin, Texas. To Peel, the HIPAA paradigm is obsolete and inadequate and needs to be replaced.

“You can't draw a fence around who has sensitive health information,” Peel says. “It might have made sense 20 years ago, but it is a model that doesn't fit the realities of today. It's based on an anachronistic view of the healthcare system, as if it's totally separate from everything else in business and in life, and if technology has taught us anything, it's that that's not effective.”

Peel also says the 42 CFR Part 2 framework should be applied to all patient data. “Healthcare information, because of the Internet, is everywhere; therefore, the protections must follow the data,” she says. “If we don't say a damn word about social media and websites and the rest, we lose because that information is out there in all of those places.”

Mark Rothstein, a lawyer and the director of the Institute for Bioethics, Health Policy and Law at the University of Louisville (Ky.) School of Medicine, says he's been “a proponent of comprehensive privacy legislation for a long time, which we don't have, and nobody's talking about this. What I mean by comprehensive is we don't have it limited to a group of three covered entities. It applies to everyone who accesses and uses private health information.”

But Rothstein, who served as chairman of the subcommittee on privacy and confidentiality of the National Committee on Vital and Health Statistics, an advisory committee to HHS, from 1999 to 2008, concedes that major legislative changes to the HIPAA paradigm, however much they are needed, are unlikely.

One lesser change Rothstein suggests would be helpful is to add to HIPAA the right of an individual to sue a privacy violator in federal court. “It would certainly act as a deterrent to wrongdoing,” Rothstein says. “The wrongdoers would be at risk from civil judgment. Now, all you have to do is promise not to do it again, if it gets that far.”

“The other thing we ought to take a look at is the nonconsensual using of discarded information,” Rothstein says.

Consent is key, according to Rothstein, who cited the Texas cases in support of his argument. “One mother sued, and as a result of the lawsuit, 5.3 million blood samples were destroyed,” Rothstein says. A state law passed in the wake of the uproar gave parents the right to opt out of the collection program. “Since then, the opt-out rate is only 3%,” Rothstein says. “But they want to be asked.”

Pam Dixon, the founder and executive director of the World Privacy Forum, says HIPAA is only “a beginner framework” that “we've grown out of now.”

“There has to be an entirely new approach and it has to start with governance,” she says. Dixon, who has served as a member of the state-chartered California Privacy and Security Advisory Board since 2008, says the U.S. needs to create the position of a national data commissioner on privacy with broad authority across all industries, not just healthcare. “We're the only industrialized country that doesn't have this.”

One of the clearer windows on Internet-based threats to personal privacy is the standoff between social media site PatientsLikeMe and market researcher Nielsen.

“Nielsen posed as a depressed patient, and then they turned on a computer once they were logged in,” says Jamie Heywood, co-founder and board chairman of PatientsLikeMe. What Nielsen gathered while there was “data that was available to the community—to 70,000 people,” he says. But the point, according to Heywood, is Nielsen didn't ask, it took.

“When we sell our data, we contractually require our clients to do certain things,” he says. “They can't re-identify the data. We feel we have a moral contract with our customers to make the world better. What Nielsen did was they went in and took data that was available and sold it with none of the restrictions that we work under. So we stopped them. We sent them a cease-and-desist letter. They broke a legal contract when you sign on to our site.”

Nielsen spokesman Matt Anchin, in response to questions about the company's activities on the PatientsLikeMe website, says they were conducted by a Nielsen service called BuzzMetrics. Anchin would not say who uses BuzzMetrics data, even by type of industry, or for what purpose. “We became aware of it and we stopped it,” Anchin says.

“There is no such thing as de-identified data any longer,” Heywood says. “Anyone who has a state, age and gender and a couple of diagnoses is pretty much identifiable to every doctor and insurance company.”

Editor's note: This is an expanded version of the story published in the Nov. 22, 2010, issue of Modern Healthcare.


Monday, November 15, 2010

Poll: Huge majorities want control over health info

AUSTIN, TX – Patient Privacy Rights, the health privacy watchdog, has enlisted the help of Zogby International to conduct an online survey of more than 2,000 adults to identify their views on privacy, access to health information, and healthcare IT. The results were overwhelmingly in favor of individual choice and control over personal health information.

Ninety-seven percent of Americans believe that doctors, hospitals, labs and health technology systems should not be allowed to share or sell their sensitive health information without consent.

The poll also found strong opposition to insurance companies gaining access to electronic health records without permission. Ninety-eight percent of respondents opposed payers sharing or selling health information without consent.

"No matter how you look at it, Americans want to control their own private health information," said Deborah Peel, MD, founder of Patient Privacy Rights. "We asked the question, 'If you have health records in electronic systems, do YOU want to decide which companies and government agencies can see and use your sensitive data?' Ninety-three percent said 'Yes!'"

Ninety-one percent of Americans want to be able to decide which individual people can see and use their health information, according to the poll – which indicated that Americans are concerned not just about corporations intruding on their data, but also researchers, employees, and people with malicious intent, such as ex-spouses or abusive partners.

Patient Privacy Rights advocates for a patient-centered healthcare system, where each person controls the use of personal health data and healthcare systems put patients first. The group asserts that the most important element in a patient-centered healthcare system is an inexpensive, practical and effective method to ensure individuals can control and selectively share, with those they choose, personal health information scattered across many locations, from doctors' offices, pharmacies, labs and insurance companies to hospitals.

The group advocates a 'one-stop shop' website where consumers can set up consent directives or rules to guide the use and disclosure of all or part of their electronic health information; if a request to use or sell health data is not covered by privacy rules, they can be 'pinged' via cell phone or e-mailed for informed consent.

Patient Privacy Rights calls this solution the "Do Not Disclose" list – similar to the national "Do Not Call" list. If a patient's name is on the list, any organization that holds his or her sensitive health information, from prescriptions to DNA, must first explain how that information will be used before being granted permission.

A large majority of people surveyed, 78 percent, said they would be somewhat likely (28 percent) or very likely (50 percent) to use a website that allows them to decide who can see and use their sensitive health information.

For more information on the "Do Not Disclose" list and petition, visit

"Americans overwhelmingly believe that they are the only people in the right position to make decisions about how their information can be used," said Peel. "Researchers do not get a free pass."

Only 5 percent of those surveyed said the government should make the decision on whether corporations and researchers can see and use the information in health records without permission. Moreover, most individuals don't trust their doctors to make decisions for them.

Just 5 percent believe their physician or other health professional should have that power. When asked who should have that power, 87 percent of respondents said, "you personally."

Patient Privacy Rights urged policymakers and Congress to "think long and hard about Americans' strong, indisputable beliefs about their right to privacy, especially as they decide policies and funding for the nation's health systems."

WSJ Crovitz: Forget any 'Right to Be Forgotten'

Don't count on government to censor information about you online.

L. Gordon Crovitz  WSJ  11/15/2010

The stakes keep rising in the debate over online privacy. Last week, the Obama administration floated the idea of a privacy czar to regulate the Internet, and the European Union even concocted a new "right to be forgotten" online.

The proposed European legislation would give people the right, any time, to have all of their personal information deleted online. Regulators say that in an era of Facebook and Google, "People should have the 'right to be forgotten' when their data is no longer needed or they want their data to be deleted." The proposal, which did not explain how this could be done in practice, includes potential criminal sanctions.

Privacy viewed in isolation looks more like a right than it does when seen in context. Any regulation to keep personal information confidential quickly runs up against other rights, such as free speech, and many privileges, from free Web search to free email.

There are real trade-offs between privacy and speech. Consider the case of German murderer Wolfgang Werle, who does not think his name should be used. In 1990, he and his half brother killed German actor Walter Sedlmayr. They spent 15 years in jail. German law protects criminals who have served their time, including from references to their crimes.

Last year, Werle's lawyers sent a cease-and-desist letter to Wikipedia, citing German law, demanding the online encyclopedia remove the names of the murderers. They even asked for compensation for emotional harm, saying, "His rehabilitation and his future life outside the prison system is severely impacted by your unwillingness to anonymize any articles dealing with the murder of Mr. Sedlmayr with regard to our client's involvement."

Censorship requires government limits on speech, at odds with the open ethos of the Web. It's also not clear how a right to be forgotten could be enforced. If someone writes facts about himself on Facebook that he later regrets, do we really want the government punishing those who use the information?

UCLA law Prof. Eugene Volokh has explained why speech and privacy are often at odds. "The difficulty is that the right to information privacy—the right to control other people's communication of personally identifiable information about you—is a right to have the government stop people from speaking about you," he wrote in a law review article in 2000.

Indeed, there's a good argument that "a 'right to be forgotten' is not really a 'privacy' right in the first place," says Adam Thierer, president of the Progress and Freedom Foundation. "A privacy right should only concern information that is actually private. What a 'right to be forgotten' does is try to take information that is, by default, public information, and pretend that it's private."

There are also concerns about how information is collected for advertising. A Wall Street Journal series, "What They Know," has shown that many online companies don't even know how much tracking software they use. Better disclosure would require better monitoring by websites. When used correctly, these systems benignly aggregate information about behavior online so that advertisers can target the right people with the right products.

Many people seem happy to make the trade-off in favor of sharing more about themselves in exchange for services and convenience. On Friday, when news broke of potential new regulations in the U.S., the Journal conducted an online poll asking, "Should the Obama administration appoint a watchdog for online privacy?" Some 85% of respondents said no.

As Brussels and Washington were busily proposing new regulations last week, two of the biggest companies were duking it out over consumer privacy, a new battlefield for competition. Google tried to stop Facebook from letting users automatically import their address and other contact details from their Gmail accounts, arguing that the social-networking site didn't have a way for users to get the data out again.

When users tried to import their contacts to Facebook, a message from Gmail popped up saying, "Hold on a second. Are you super sure you want to import your contact information for your friends into a service that won't let you get it out?" The warning adds, "We think this is an important thing for you to know before you import your data there. Although we strongly disagree with this data protectionism, the choice is yours. Because, after all, you should have control over your data."

One of the virtues of competitive markets is that companies vie for customers over everything from services to privacy protections. Regulators have no reason to dictate one right answer to these balancing acts among interests that consumers are fully capable of making for themselves.

Sunday, November 14, 2010

NYTimes: The Limits of Secondary Use

I.R.S. Sits on Data Pointing to Missing Children

By DAVID KOCIENIEWSKI    NY Times   November 12, 2010

For parents of missing children, any scrap of information that could lead to an abductor is precious.

Three years into an excruciating search for her abducted son, Susan Lau got such a tip. Her estranged husband, who had absconded with their 9-year-old from Brooklyn, had apparently filed a tax return claiming the boy as an exemption.

Investigators moved quickly to seek the address where his tax refund had been mailed. But the Internal Revenue Service was not forthcoming.

“They just basically said forget about it,” said Julianne Sylva, a child abduction investigator who is now deputy district attorney in Santa Clara County, Calif.

The government, which by its own admission has data that could be helpful in tracking down the thousands of missing children in the United States, says that taxpayer privacy laws severely restrict the release of information from tax returns. “We will do whatever we can within the confines of the law to make it easier for law enforcement to find abducted children,” said Michelle Eldridge, an I.R.S. spokeswoman.

The privacy laws, enacted a generation ago to prevent Watergate-era abuses of confidential taxpayer information, have specific exceptions allowing the I.R.S. to turn over information in child support cases and to help federal agencies determine whether an applicant qualifies for income-based federal benefits.

But because of guidelines in the handling of criminal cases, there are several obstacles for parents and investigators pursuing a child abductor — even when the taxpayer in question is a fugitive and the subject of a felony warrant.

“It’s one of those areas where you would hope that common sense would prevail,” said Ernie Allen, president and chief executive of the National Center for Missing and Exploited Children. “We are talking about people who are fugitives, who have criminal warrants against them. And children who are at risk.”

About 200,000 family abductions are reported each year in the United States, most of which stem from custody disputes between estranged spouses. About 12,000 last longer than six months, according to Justice Department statistics, and involve parental abductors who assume false identities and travel the country to escape detection.

But, counterintuitive as it may seem, a significant number file one of bureaucracy’s most invasive documents, a federal tax return. A study released by the Treasury Department in 2007 examined the Social Security numbers of 1,700 missing children and the relatives suspected of abducting them, and found that more than a third had been used in tax returns filed after the abductions took place.

Criminologists say it is unclear what motivates a child abductor to file a tax return: confusion, financial desperation for a refund or an attempt to avoid compounding their criminal problems by failing to pay taxes. Whatever the reason, the details in a return on an abductor’s whereabouts, work history and mailing address can be crucial to detectives searching for a missing child.

“It doesn’t make a whole lot of sense,” said Harold Copus, a retired F.B.I. agent who investigated missing child cases, of why abductors provide such information. “But if they were thinking clearly, they wouldn’t have abducted their child in the first place.”

The law forbids the I.R.S. from turning over data from tax returns unless a parental abduction is being investigated as a federal crime and a United States district judge orders the information released. But the vast majority of parental abduction cases are investigated by state and local prosecutors, not as federal crimes, say investigators and missing children’s advocates. Even when the F.B.I. does intercede in parental abduction cases, requests for I.R.S. data are rarely granted.

When the Treasury Department study identified hundreds of suspected abductors who had filed tax returns, for instance, a federal judge in Virginia refused to issue an order authorizing the I.R.S. to turn over their addresses to investigators. The judge, Leonie M. Brinkema, declined to discuss her decision.

Advocates for missing children say that federal judges often argue that parental abductions are better suited to family court than criminal court.

“There’s this sense that because the child is with at least one of their parents, it’s not really a problem,” said Abby Potash, director of Team Hope, which counsels parents who are searching for a missing child. Ms. Potash’s son was abducted by a relative and kept for eight months before he was recovered. “But when you’re the parent who’s left behind, it is devastating. You’re being robbed of your son or daughter’s childhood.”

In Ms. Lau’s case, her search for her missing son dragged on for two years after the I.R.S. refused investigators’ request for her ex-husband’s tax return. She actually got the tip from the I.R.S., which disallowed her request to claim the boy on her own tax return because someone else had. The boy was eventually found in Utah, after his photo appeared in a flier distributed by missing children’s groups, and he was reunited with his mother at age 15 — five years after they were separated.

I.R.S. officials are quick to point out that they have worked closely with missing children’s advocates in some areas. The I.R.S.’s “Picture Them Home” program has included photos of thousands of missing children with forms mailed to millions of taxpayers since 2001. More than 80 children were recovered with the help of that program.

Still, attempts to change the law to give the tax agency more latitude have sputtered over the last decade. Dennis DeConcini, a former Democratic senator from Arizona, lobbied for the change in 2004 on behalf of a child advocacy group, but said that it never gained traction because some members of Congress feared that any release of I.R.S. data could lead to a gradual erosion of taxpayer privacy. In recent years, much of the legislation involving missing children has focused on international abductions.

One problem missing children’s advocates have wrestled with in proposing legislation is determining how much information the I.R.S. should be asked to release from a suspected abductor’s tax return. Should disclosure be required only if a child’s Social Security number is listed on a return? Should child abduction investigators be given only the address where a tax return was mailed? Or the location of an employer who has withheld taxes on a suspected abductor?

Griselda Gonzalez, who has not seen her children since 2007, holds fleeting hope that some type of information might reunite her family. Diego and Tammy Flores were just 2 and 3 years old when their father took them from their home in Victorville, Calif., for a weeklong visit and never returned. After Ms. Gonzalez reported their disappearance, a felony warrant for kidnapping was issued for the father, Francisco Flores. His financial records suggest he meticulously planned his actions for months — withdrawing money from various accounts and taking out a second mortgage — so Ms. Gonzalez doubts he would claim the children as dependents on a tax return.

But it gnaws at her that some federal laws seem more concerned with the privacy of a fugitive than with the safety of children.

“When your kids are taken from you, the hardest part is at night, thinking about them going to sleep,” she said. “You wonder who’s tucking them in, who will hug them if they have a bad dream or taking them to the bathroom if they wake up. And you ask yourself whether you’ve done everything possible to find them.”

“It would be good to know that you tried everything,” she said.

Missing children’s advocates see the I.R.S. data as a potentially powerful resource.

“There are hundreds of cases this could help solve,” said Cindy Rudometkin of the Polly Klaas Foundation. “And even if it helped solve one case — imagine if that child returned home was yours.”