On May 15, 2019, President Trump issued Executive Order 13873 (“E.O. 13873”), declaring a national emergency in response to increasing efforts by “foreign adversaries” to create and exploit “vulnerabilities in information and communications technology and services” supplied to the U.S. E.O. 13873 broadly prohibits persons subject to U.S. jurisdiction from engaging in information and communications technology or services transactions with “foreign adversaries” that: (i) pose undue sabotage or subversion risks to U.S. information and communications technology or services; (ii) pose an undue risk to critical U.S. infrastructure or the U.S. digital economy; or (iii) otherwise pose an unacceptable risk to U.S. national security. Within one hundred fifty (150) days of E.O. 13873, the Secretary of Commerce, in consultation with other executive agencies, is to issue rules or regulations that identify the specific “foreign adversaries” subject to E.O. 13873’s prohibitions, establish criteria for determining which types of transactions are prohibited, and establish procedures for obtaining licenses to conduct transactions that would otherwise be prohibited by E.O. 13873 and its implementing rules and regulations.

Colleges and universities frequently hire third-party vendors to provide services that involve student data—cloud storage, online education delivery, and online grade books, to name a few. Although these arrangements are common, they can run afoul of the Family Educational Rights and Privacy Act (20 U.S.C. § 1232g; 34 CFR Part 99) (FERPA) and other data privacy best practices. Colleges and universities should consider privacy and security issues when contracting with third-party vendors and include language in the service agreement that identifies exactly what information is being shared and restricts how that information can be used in the future.

Your business is an international company selling products to U.S. consumers. In the last few years, you may have heard a lot about high-profile information privacy and security cases brought by the U.S. government. Should you be concerned? Most definitely.

On Feb. 23, 2016, the FTC announced that Taiwan-based computer hardware maker ASUSTeK Computer, Inc. (“ASUS”) agreed to a 20-year consent order, resolving claims that it engaged in unfair and deceptive practices in connection with routers it sold to U.S. consumers. According to the FTC’s complaint, ASUS failed to take reasonable steps to secure the software for its routers, which it offered to consumers specifically for protecting their local networks and accessing their sensitive personal information. The FTC alleged that ASUS’s router firmware and admin console were susceptible to a number of “well-known and reasonably foreseeable vulnerabilities”; that its cloud applications included multiple vulnerabilities that would allow cyber attackers easy, unauthorized access to consumers’ files and router login credentials; and that these applications encouraged consumers to choose weak login credentials. By failing to take reasonable steps to remedy these issues, ASUS exposed its customers to a significant risk of unauthorized access to their sensitive personal information and local networks.

You may have a top-notch security incident response plan and a crack team for data breach response…but have you checked to be sure that your company’s HR policies are on the same team as you? Personnel Management is one of the most important—yet most often overlooked—of the 10 activity channels for effective data breach response. In the crunch of handling an actual data security incident, your company’s HR policies will either pave or block the road to a nimble, successful response.

Of course, various policies are important for preventing data security breaches, including policies on such matters as authorized computer systems, e-communications, and Internet use; authorized data and system access; strong passwords; use of encryption and encryption keys; mobile device safeguards; precluding or limiting storage of company data on home or other personal devices; and the like. But other policy provisions are essential for effective security breach response:

In this series on defining your company’s information security classifications, we’ve already looked at Protected Information under state PII breach notification statutes and PHI under HIPAA. What’s next? Customer information that must be safeguarded under the Gramm-Leach-Bliley Act (GLBA), a concern for any “financial institution” as GLBA defines that term.

GLBA begins with an elegant, concise statement of congressional policy: “each financial institution has an affirmative and continuing obligation to respect the privacy of its customers and to protect the security and confidentiality of those customers’ nonpublic personal information.” Sounds straightforward, doesn’t it? Things get complicated, though, for three reasons: (1) the broad scope of what constitutes a “financial institution” subject to GLBA; (2) the byzantine structure of the regulators authorized under GLBA to issue rules and security standards and to enforce them; and (3) the amorphous definition of “nonpublic personal information.”

The Cybersecurity Act of 2015, signed into law on Dec. 18, 2015, has four titles that address longstanding concerns about cybersecurity in the United States, such as cybersecurity workforce shortages, infrastructure security, and gaps in business knowledge related to cybersecurity. This post distills the risks and highlights the benefits for private entities that may seek to take advantage of Title I of the Cybersecurity Act of 2015 – the Cybersecurity Information Sharing Act of 2015 (“CISA”).

It’s been clear for many years that greater information-sharing among companies and with the government would help fight cyber threats. The barriers to such sharing have been (1) liability exposure for companies that collect and share such information, which can include personally identifiable information, and (2) institutional and educational impediments to analyzing and sharing information effectively.

CISA is designed to remove both of these information-sharing barriers. First, CISA provides immunity to companies that share “cyber threat indicators and defensive measures” with the federal government in a CISA-authorized manner. Second, CISA authorizes companies, for a “cybersecurity purpose,” to monitor information systems and to use and share defensive measures. CISA also mandates that federal agencies establish privacy protections for shared information and publish procedures and guidelines to help companies identify and share cyber threat information. Notably, companies are not required to share information in order to receive information on “threat indicators and defensive measures,” nor are they required to act upon information received – though this won’t shield companies from ordinary “failure to act” negligence claims.

Marvel fans know that Captain America’s shield is extraordinary, but exactly what it’s made of remains unknown – Vibranium? Adamantium? Unobtanium (oops, wrong movie)? For the time being, similar mystery shrouds the specifics of the new EU-U.S. Privacy Shield. Four months ago we posted on the European Court of Justice’s ruling that the U.S.-EU Safe Harbor was invalid. This Tuesday the European Commission announced that negotiations with the U.S. had successfully yielded a new vehicle for compliant cross-border transfers of EU residents’ personal data, dubbed the EU-U.S. Privacy Shield. But until its details are disclosed, the specific features of the Privacy Shield remain murky.

Not all encryption tools are created equal. Just ask the folks at Microsoft, whose researchers recently demonstrated that encrypted electronic medical record databases can leak information. It turns out that CryptDB, a SQL database add-on developed at MIT that allows searching of encrypted data, permits search queries to be combined with information in the public domain to hack the database. More on this in a minute. In the meantime, let’s consider the assumption that encryption is inviolate, infrangible, impervious to hacks. As I mentioned in an earlier post, encryption algorithms are too complex for most laypersons to understand, but we should at least wrap our heads around the concept that encryption is not a “set it and forget it” technology, nor is it foolproof.
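To make the risk concrete, here is a minimal, hypothetical Python sketch of the frequency-analysis idea behind this class of attack (it assumes nothing about CryptDB’s actual implementation; the disease names, statistics, and key are invented for illustration). A deterministic cipher maps equal plaintexts to equal ciphertexts, so an attacker who sees only the encrypted column can match its value frequencies against publicly available statistics:

```python
from collections import Counter
import hashlib

def det_encrypt(value: str, key: bytes) -> str:
    """Stand-in for a deterministic cipher: same input always yields same output."""
    return hashlib.sha256(key + value.encode()).hexdigest()[:12]

KEY = b"demo-key"  # hypothetical key; the attacker never learns it

# Hypothetical encrypted hospital column; the attacker sees only ciphertexts.
plaintexts = ["diabetes"] * 50 + ["hypertension"] * 30 + ["asthma"] * 20
ciphertexts = [det_encrypt(p, KEY) for p in plaintexts]

# Public auxiliary data, e.g., published disease-prevalence statistics.
public_stats = {"diabetes": 0.48, "hypertension": 0.32, "asthma": 0.20}

# Frequency analysis: align ciphertexts and known values by rank.
ct_by_freq = [ct for ct, _ in Counter(ciphertexts).most_common()]
pt_by_freq = sorted(public_stats, key=public_stats.get, reverse=True)
guesses = dict(zip(ct_by_freq, pt_by_freq))

for ct, guess in guesses.items():
    print(f"{ct} -> guessed plaintext: {guess}")
```

Note that rank-matching like this never breaks the cipher itself, yet the data is recovered anyway – which is exactly why “encrypted” does not automatically mean “safe.”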

In this series on establishing security classifications for your company’s information, last week’s post looked at one aspect – the widely varying definitions of Protected Information under state PII breach notification statutes. But if your organization is a covered entity or business associate under the Health Insurance Portability and Accountability Act (HIPAA), the definition of Protected Health Information (PHI) is also a key puzzle piece for your classification scheme.

HIPAA establishes national standards for the use and disclosure of PHI, and for the safeguarding of individuals’ electronic PHI, by covered entities and business associates. But merely having information commonly thought of as “protected health information” does not mean that HIPAA applies. And there are some surprises as to which organizations are – and are not – covered by HIPAA. So that’s the first question to answer: is your company a HIPAA covered entity or business associate?

When governing information, it works well to identify and bundle rules (for legal compliance, risk, and value), identify and bundle information (by content and context), and then attach the rule bundles to the information bundles. Classification is a great means to that end, by both framing the questions and supplying the answers. With a classification scheme, we have an upstream “if-then” (if it’s this kind of information, then it has this classification), followed by a downstream “if-then” (if it’s information with this classification, then we treat it this way). A classification scheme is simply a logical paradigm, and frankly, the simpler, the better. For day-to-day efficiency, once the rules and classifications are set, we automate as much and as broadly as possible, thereby avoiding laborious individual decisions that reinvent the wheel.
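As a concrete (and purely illustrative) example, here is a short Python sketch of that two-stage logic; the information types, classification labels, and handling rules below are hypothetical placeholders, not a recommended scheme:

```python
# Upstream if-then: if it's this kind of information, then this classification.
CLASSIFICATION_BY_TYPE = {
    "customer_ssn": "restricted",
    "patient_record": "restricted",
    "internal_memo": "internal",
    "press_release": "public",
}

# Downstream if-then: if it carries this classification, then treat it this way.
HANDLING_BY_CLASSIFICATION = {
    "restricted": {"encrypt_at_rest": True, "access": "need-to-know"},
    "internal": {"encrypt_at_rest": False, "access": "employees-only"},
    "public": {"encrypt_at_rest": False, "access": "anyone"},
}

def handling_rules(info_type: str) -> dict:
    """Chain the two if-thens so no one re-decides each item by hand."""
    # Fail closed: unknown types default to the most protective class.
    classification = CLASSIFICATION_BY_TYPE.get(info_type, "restricted")
    return HANDLING_BY_CLASSIFICATION[classification]

print(handling_rules("patient_record"))
# {'encrypt_at_rest': True, 'access': 'need-to-know'}
```

One design choice worth noting in the sketch: unknown information types “fail closed” into the most protective classification, since misrouting sensitive data is usually costlier than over-protecting mundane data.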

Easy so far, right? One of the early challenges is to identify and bundle the rules, which can be complicated. For example, take security rules. Defining what information fits in a protected classification for security controls can be daunting, given the various overlapping legal regimes in the United States for PII, PHI, financial institution customer information, and the like. So, let’s take a look, over several posts, at legal definitions for protected information, starting with PII under state statutes.