Eight months after a security researcher reported a significant breach of customer data to Panera Bread, and within a day of the publication of an article laying out the nature and extent of the breach, the company acknowledged the data leak on April 2, 2018. However, it insisted that fewer than 10,000 consumers had been affected, in contrast to the more than 7 million customers several security researchers estimate were affected.

The story is not so much about the vulnerability in Panera’s online food ordering system that exposed customers’ information, nor the fact that Panera may not have been aware of the breach before the researcher contacted it, but rather about Panera’s delay in disclosing the breach and its refusal to acknowledge the magnitude of the customer information leaked. Panera is likely to become the poster child for what not to do in addressing a data breach. For example, Panera does not have a dedicated method for accepting vulnerability reports from security researchers; it ignored numerous communications from the security researcher who attempted to alert the company to the breach; and it became defensive about his report, even accusing him of being a scammer of some sort. Perhaps the greatest surprise is that it waited eight months to acknowledge the leak and to set about fixing it. In the meantime, more customers were likely affected by the disclosure of personal information. In addition, the reputational harm to Panera from its failure to respond quickly and forcefully could be significant.

A national standard that includes a set notice period for businesses to disclose data breaches to customers would have avoided the situation Panera finds itself in. The delay could create substantial risk that customers will take legal action against the company. For nearly the last ten years, numerous U.S. data security and breach notification bills have been introduced in Congress, but none have passed. Currently, at least one Senate bill and one House bill are pending: H.R. 5388, the Data Accountability and Trust Act, and S. 2179, the Data Security and Breach Notification Act. Both bills contain provisions that generally require consumers to be notified of any breach within 30 days of its discovery.

Panera is not alone in delaying the reporting of a breach. Equifax and Target are among the many in that category. In fact, in 2017 Uber actually paid two hackers to keep quiet about a cyberattack that exposed the data of 57 million Uber riders and drivers. State and federal lawmakers and security experts all agree that the lack of transparency by businesses, governmental entities and other organizations is a problem that needs to be addressed. While many state legislatures have enacted data breach notification laws with set notice periods, Congress has been unable to pass legislation to address this and other issues arising from the many significant data breaches that occur almost daily. While it is not clear that consumers have changed their online activity because of these breaches, that day may come.

Once again, we realize that we have little control over how the information we share on social media is ultimately used. The recent revelation that a data analytics firm retained by Trump’s presidential campaign used the Facebook data of more than 50 million people to target them with political ads is at once shocking and unsurprising. Facebook’s business model is built on collecting and monetizing our data, and Facebook has previously been less than forthright about its privacy policies. But I bet few people anticipated that their “likes” would be used by Trump’s political consultants to sway their vote.

While details are still emerging, it appears the basic facts are as follows. In 2015, Dr. Aleksandr Kogan, a psychology professor at the University of Cambridge, offered a personality quiz on Facebook. Approximately 270,000 users downloaded an app to take the quiz. In doing so, they gave permission for Kogan to access their Facebook profile as well as their friends’ profiles. In other words, if your friend took the quiz, your information was also shared with Kogan without you knowing.

While Kogan claimed that his app was for academic purposes, in actuality, Kogan was harvesting data for a company called Cambridge Analytica. Cambridge Analytica is a firm that does political, government and military work around the globe, including for Ted Cruz’s and Donald Trump’s election campaigns.

By getting a few hundred thousand Facebook users to take his quiz, Kogan was able to access 50 million user profiles, and he turned all of this information over to Cambridge Analytica. Of those profiles, roughly 30 million contained enough information, including places of residence, that the company could match users to other records and build psychographic profiles. Those profiles were then used by the Trump campaign to try to influence voters.

What is especially noteworthy is that Kogan’s harvesting of user data and their friends’ data was permitted under Facebook’s developer application programming interface at the time. Facebook confirmed that the information was legitimately obtained in accordance with Facebook’s rules. In other words, this was not a “breach” in the sense that information was stolen or hacked. In fact, Facebook’s initial responses to reports were quite nonchalant. Facebook claimed that everyone “knowingly” provided their information and “gave their consent”. However, based on people’s reactions, it is clear that many users feel violated and had no idea their information would be shared in this manner.

The Cambridge Analytica revelations raise many questions, including whether Facebook broke any laws.  Lawsuits have started to roll in, including a proposed class action of Facebook members and a lawsuit on behalf of Facebook investors.  The FTC is apparently looking into this matter, as well as into whether this incident violates Facebook’s 2011 settlement with the FTC over privacy complaints. And Congress has begun demanding answers.

Investigators will likely look at whether Facebook adequately disclosed its information sharing practices to users and whether it took adequate steps to protect user data. Even if Facebook believes it was completely upfront with members (and based on people’s surprise that their information could be shared through friends, this information sharing practice arguably was not clearly and conspicuously disclosed), the scandal is not going away overnight, and Facebook will need to justify its past behavior. The ultimate question may be whether users will be more circumspect about sharing information on the social media site going forward.

If you never appreciated it before, this scandal should drive home that every “click” you make on Facebook is saved and analyzed and every “harmless” survey you take is likely used to micro-target ads to you.  And if you haven’t done so already, I encourage you to go to “Settings” and then “Apps” to see what apps you have authorized to interact with your Facebook account.

On February 27, 2018, the Supreme Court heard arguments in United States v. Microsoft Corp., a case that will decide whether a digital communications provider must comply with a U.S. search warrant for user data stored outside the United States. U.S. v. Microsoft could have major consequences for digital privacy and international data sharing, especially for the cloud-computing industry.

Continue Reading U.S. v. Microsoft: Is Your Data and Privacy at Risk?

In 2016, the U.S. Supreme Court in Spokeo, Inc. v. Robins provided a potentially powerful Article III standing defense under F.R.Civ.P. 12(b)(1), seemingly applicable to a variety of privacy claims, including FCRA, FACTA, TCPA, and FDCPA statutory damage claims. The Court noted that for a plaintiff to establish standing to sue in federal court, she must show an “injury in fact” consisting of an invasion of a legally protected interest that is both particularized and concrete.

Spokeo dealt with the “concrete” portion. To be concrete, an injury must be real but may also be intangible. Congress’ intent in creating a right is instructive, but not sufficient. Allegations of a bare procedural violation likely would not suffice to maintain standing. Some injuries create harm, others do not. Thanks for that.

Continue Reading More or Less Than the Plaintiff Bargained For: Two Recent Appellate Courts Thwart Privacy Claims Based On The Contract

There was a recent headline-making story involving a Wisconsin employer that announced it was offering its employees the option to be microchipped to replace the security badges they use regularly at work. Of the 85 employees, 41 decided to have the small chip implanted in their hand. Husch Blackwell attorneys Laura Ferrari and Erik Eisenmann break down the seemingly futuristic concept of “chipped” employees and the privacy concerns it raises in a post that originated on Husch Blackwell’s Technology, Manufacturing and Transportation Industry Insider blog.

The advice we always give to clients regarding privacy policies is: “say what you do and do what you say.” It seems simple, but simplicity can be deceiving. Companies want to reassure consumers that their personal data is safe and secure; however, in today’s world, no one can make fail-safe representations of security. Uber’s recent settlement with the FTC illustrates this problem.

Continue Reading Don’t Make “Uber” Promises You Can’t Keep

With the rise of innovations like cloud technology and software-as-a-service, clients are increasingly finding that it makes business sense to outsource computerized services, from payroll processing to the storage of electronic medical records. While doing so often cuts costs, routing (frequently confidential) data through third-party service providers also implicates serious cybersecurity concerns and, in some cases, may increase potential liability. Further, one of the pillars of a commercially reasonable information security program is selecting and retaining service providers capable of maintaining appropriate safeguards. To address these concerns, and to keep data safe, clients should require service providers to furnish them with Service Organization Control (“SOC”) Reports, particularly SOC 2 Reports.

SOC Reports were developed by the American Institute of CPAs (AICPA) to provide information about the robustness and quality of a service provider’s internal controls over certain types of data. There are three types of SOC Reports, each serving separate functions.

Continue Reading SOC It To ‘Em: Securing Your Outsourced Data with SOC 2 Reports

On April 24, 2017, the Office of Civil Rights (“OCR”) announced the first HIPAA settlement based on the impermissible disclosure of unsecured electronic protected health information by a wireless service provider. CardioNet, an ambulatory cardiac monitoring service that provides remote mobile monitoring of and rapid response to patients at risk for cardiac arrhythmias, agreed to pay $2.5 million and to implement a corrective action plan.

As reported by the OCR, in 2012 CardioNet reported to the OCR the theft of a workforce member’s unencrypted laptop containing the electronic PHI (“ePHI”) of 1,391 individuals. OCR’s investigation revealed that CardioNet had insufficient risk analysis and risk management processes in place at the time of the theft. Additionally, CardioNet provided the OCR with draft policies and procedures implementing the HIPAA Security standards and was unable to produce final policies or procedures implementing the security safeguards for ePHI, including for mobile devices.

Continue Reading Mighty Fine – The High Cost ($2.5 Million) for Unsecured ePHI

I recently decided to reread Dante’s The Inferno. One would not expect guidance on IoT privacy and data security (IoTPDS) from a 700-year-old text, but The Inferno, particularly Canto III, provides significant direction on consumer IoTPDS issues.  So,

“Abandon All Hope, You Who Enter Here.”

Continue Reading Dante on IoT Security