Colorado’s Protections for Consumer Data Privacy law (“new law”) takes effect on September 1, 2018, and requires that businesses holding personal information of Colorado residents destroy the data they no longer need, protect the data they decide to keep, and disclose any security breach involving that data within 30 days of its occurrence. The new law both amends existing obligations and adds new ones for businesses holding information about Colorado residents.
A question arises more and more frequently: “What do we do about personal, sensitive, and business information owned by or residing with a financially troubled company?” Information is an intangible asset and often has significant value. Information increasingly resides with a party other than its owner and may need to be transferred in unexpected ways. Unfortunately, thinking about this question often begins only after financial distress is readily apparent, such as after a bankruptcy filing. Planning should occur much earlier, whether for the business in distress or for any business dealing with one that could suffer financial distress (hint 1 – the latter is every business). Continue Reading Information in Distress – Part 1
Your business is an international company selling products to U.S. consumers. In the last few years, you may have heard a lot about high-profile information privacy and security cases brought by the U.S. government. Should you be concerned? Most definitely.
On Feb. 23, 2016, the FTC announced that Taiwan-based computer hardware maker ASUSTeK Computer, Inc. (“ASUS”) agreed to a 20-year consent order, resolving claims that it engaged in unfair and deceptive practices in connection with routers it sold to U.S. consumers. According to the FTC’s complaint, ASUS failed to take reasonable steps to secure the software for its routers, which it marketed to consumers specifically for protecting their local networks and accessing their sensitive personal information. The FTC alleged that ASUS’s router firmware and admin console were susceptible to a number of “well-known and reasonably foreseeable vulnerabilities”; that its cloud applications included multiple vulnerabilities that would allow cyber attackers easy, unauthorized access to consumers’ files and router login credentials; and that the application encouraged consumers to choose weak login credentials. By failing to take reasonable steps to remedy these issues, ASUS subjected its customers to a significant risk that their sensitive personal information and local networks would be accessed without authorization. Continue Reading FTC v. ASUS – In the Internet age, being a foreign-based company is no defense
In this series on defining your company’s information security classifications, we’ve already looked at Protected Information under state PII breach notification statutes, and PHI under HIPAA. What’s next? Customer information that must be safeguarded under the Gramm-Leach-Bliley Act (GLBA), a concern for any “financial institution” covered by that statute.
GLBA begins with an elegant, concise statement of congressional policy: “each financial institution has an affirmative and continuing obligation to respect the privacy of its customers and to protect the security and confidentiality of those customers’ nonpublic personal information.” Sounds straightforward, doesn’t it? Things get complicated, though, for three reasons: (1) the broad scope of what constitutes a “financial institution” subject to GLBA; (2) the byzantine structure of regulators authorized under GLBA to issue rules and security standards and to enforce them; and (3) the amorphous definition of nonpublic customer information. Continue Reading Adding yet more class to Information Governance (Part 3)
The Cybersecurity Act of 2015, signed into law on Dec. 18, 2015, has four titles that address longstanding concerns about cybersecurity in the United States, such as cybersecurity workforce shortages, infrastructure security, and gaps in business knowledge related to cybersecurity. This post distills the risks and highlights the benefits for private entities that may seek to take advantage of Title I of the Cybersecurity Act of 2015 – the Cybersecurity Information Sharing Act of 2015 (“CISA”).
It’s been clear for many years that greater information sharing among companies, and between companies and the government, would help fight cyber threats. The barriers to such sharing have been (1) liability exposure for companies that collect and share such information, which can include personally identifiable information, and (2) institutional and educational impediments to analyzing and sharing information effectively.
CISA is designed to remove both of these information-sharing barriers. First, CISA provides immunity to companies that share “cyber threat indicators and defensive measures” with the federal government in a CISA-authorized manner. Second, CISA authorizes, for a “cybersecurity purpose,” both the use and sharing of defensive measures and the monitoring of information systems. CISA also mandates that federal agencies establish privacy protections for shared information and publish procedures and guidelines to help companies identify and share cyber threat information. Notably, companies are not required to share information in order to receive information on “threat indicators and defensive measures,” nor does CISA require entities to act upon information received – but that does not shield companies from ordinary “failure to act” negligence claims. Continue Reading What’s new with the Cybersecurity Information Sharing Act?
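CISA conditions its protections on sharing through authorized channels and on removing personal information known to be unrelated to the threat before an indicator is shared. As a rough illustration of that scrubbing step – with hypothetical field names, not any government-specified format – the idea can be sketched as:

```python
# Illustrative sketch only (hypothetical field names): before sharing a
# cyber threat indicator, keep only the fields directly related to the
# threat and drop personal information.

THREAT_RELEVANT_FIELDS = {"malware_hash", "attacker_ip", "exploit_signature", "timestamp"}

def scrub_indicator(indicator: dict) -> dict:
    """Return a copy containing only threat-relevant fields."""
    return {k: v for k, v in indicator.items() if k in THREAT_RELEVANT_FIELDS}

raw_indicator = {
    "malware_hash": "9e107d9d372bb6826bd81d3542a419d6",
    "attacker_ip": "203.0.113.7",
    "timestamp": "2015-12-18T00:00:00Z",
    "victim_email": "jane@example.com",  # personal info -> removed
    "employee_name": "Jane Doe",         # personal info -> removed
}

shared = scrub_indicator(raw_indicator)
print(shared)  # only the threat-relevant fields remain
```

Real-world sharing programs use standardized formats and far more nuanced review, but the design point is the same: the privacy filter runs before anything leaves the company.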
In this series on establishing security classifications for your company’s information, last week’s post looked at one aspect – the widely varying definitions of Protected Information under state PII breach notification statutes. But if your organization is a covered entity or business associate under the Health Insurance Portability and Accountability Act (HIPAA), the definition of Protected Health Information (PHI) is also a key puzzle piece for your classification scheme.
HIPAA establishes national standards for the use and disclosure of PHI, and also for the safeguarding of individuals’ electronic PHI, by covered entities and business associates. Merely having information commonly thought of as “protected health information” does not mean that HIPAA applies. And there are some surprises in which organizations are – and are not – covered by HIPAA. So, that’s the first question to answer – is your company a HIPAA covered entity or business associate?
When governing information, it works well to identify and bundle rules (for legal compliance, risk, and value), identify and bundle information (by content and context), and then attach the rule bundles to the information bundles. Classification is a great means to that end, by both framing the questions and supplying the answers. With a classification scheme, we have an upstream “if-then” (if it’s this kind of information, then it has this classification), followed by a downstream “if-then” (if it’s information with this classification, then we treat it this way). A classification scheme is simply a logical paradigm, and frankly, the simpler, the better. For day-to-day efficiency, once the rules and classifications are set, we automate as much and as broadly as possible, thereby avoiding laborious individual decisions that reinvent the wheel.
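The two-stage "if-then" paradigm described above can be sketched in a few lines of code. The classification names and handling rules here are hypothetical, chosen only to show the structure; any real scheme would use the categories and controls your rule-bundling exercise produces.

```python
# Illustrative sketch (hypothetical classifications and rules):
# upstream "if-then" maps information to a classification;
# downstream "if-then" maps the classification to its treatment.

def classify(record: dict) -> str:
    # Upstream: if it's this kind of information, then this classification.
    if record.get("contains_ssn") or record.get("contains_phi"):
        return "restricted"
    if record.get("internal_only"):
        return "confidential"
    return "public"

# Downstream: if it has this classification, then treat it this way.
HANDLING_RULES = {
    "restricted":   {"encrypt": True,  "retention_years": 7, "share_externally": False},
    "confidential": {"encrypt": True,  "retention_years": 3, "share_externally": False},
    "public":       {"encrypt": False, "retention_years": 1, "share_externally": True},
}

def handling_for(record: dict) -> dict:
    return HANDLING_RULES[classify(record)]

# Once the rules are set, this lookup runs automatically for every record,
# avoiding laborious one-off decisions.
print(handling_for({"contains_ssn": True}))
```

Note that all the judgment lives in the rule tables, not in per-record decisions – which is exactly what makes broad automation possible once the classifications are set.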
Easy so far, right? One of the early challenges is to identify and bundle the rules, which can be complicated. For example, take security rules. Defining what information fits in a protected classification for security controls can be daunting, given the various overlapping legal regimes in the United States for PII, PHI, financial institution customer information, and the like. So, let’s take a look, over several posts, at legal definitions for protected information, starting with PII under state statutes. Continue Reading Adding some class to Information Governance (Part 1)
Today the FTC announced a $100 million settlement of its most recent data security lawsuit against LifeLock, the ubiquitous provider of credit monitoring and identity theft protection to consumers. Despite years of litigation with the FTC and 35 states’ attorneys general, LifeLock has continued with a business model that taps into consumers’ visceral fear of identity theft, and consumers’ persistent belief that such exposure can magically disappear… all for “less than $10/month.” But while “Nobody can conceive or imagine all the wonders there are unseen and unseeable in the world,” LifeLock’s settlement with the FTC is a reminder that there is no perfect protection against identity theft. Continue Reading FTC v. LifeLock — sorry Virginia, there is no Security Claus
Yesterday the FTC announced it has settled its claims against Wyndham for inadequate data security, with Wyndham signing on to essentially the same consent order used by the FTC in most of its more than 50 concluded data security enforcement matters. The settlement marks the end of a three-year legal battle in which Wyndham attempted, unsuccessfully, to restrict the FTC’s authority to pursue companies for inadequate data security as an “unfair” business practice under Section 5 of the FTC Act. Continue Reading Wyndham checks out of FTC dispute
The FTC has pursued enforcement actions against more than 50 companies for inadequate data security, and to date only two, Wyndham Hotels and LabMD, have pushed back. On the heels of a Third Circuit victory in its Wyndham litigation, the FTC recently suffered a blow when its administrative complaint against LabMD was dismissed – by an FTC administrative judge, no less.
As the FTC pursues an appeal to its commissioners, are there lessons to be learned? First, reports of the death of the FTC’s Section 5 data security enforcement authority have, once again, been greatly exaggerated – the FTC will remain in the data security enforcer role post-LabMD, as strong as ever. And second, the real lesson of LabMD is what it teaches us about grey hat security firm tactics, and how businesses need to trust their gut and do their homework. Continue Reading FTC v. LabMD – 50 shades of white hat