
A seasoned litigator of business and employment disputes in jurisdictions across the nation, Megan champions practicality in discovery as she co-leads the firm’s innovative eDiscovery Solutions team. A significant portion of Megan’s solutions is offered before litigation even hits, when she counsels clients on corporate information governance decisions and internal processes, bringing the corporate, IT and legal departments to the table together.

I write this post on the three-year anniversary (Cheers!) of Judge Andrew Peck’s opinion in Da Silva Moore v. Publicis Groupe, No. 11-1279 (S.D.N.Y. Feb. 24, 2012), widely cited as the first ruling to endorse the use of predictive coding, or “technology-assisted review” (TAR), as a discovery tool.

TAR is the process of training a computer system to make decisions about the responsiveness of documents that would otherwise be reviewed and coded by a manual reviewer. With TAR, human effort is not eliminated; rather, it is used throughout the review process to train the system on what is responsive and what is not. The documents used to train the system are called the “training set” or “seed set.” Once the system is trained, the computer reviews and codes the remaining documents.
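For readers curious about the mechanics, the sketch below illustrates the general idea in simplified form: a small, human-coded seed set trains a text classifier, which then codes the unreviewed documents and assigns each a confidence score. This is an illustration only, not a depiction of any particular TAR product or protocol; the documents, labels, and the scikit-learn model are all hypothetical choices made for the example.

```python
# Illustrative sketch of a TAR-style workflow: a human-coded seed set
# trains a classifier, which then codes the remaining documents.
# All documents, labels, and model choices here are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Seed set: documents a human reviewer has already coded
# (1 = responsive, 0 = not responsive).
seed_docs = [
    "Q3 pricing agreement with Acme, see attached contract terms",
    "Team lunch on Friday, please RSVP",
    "Draft amendment to the Acme supply contract for legal review",
    "Fantasy football league standings",
]
seed_labels = [1, 0, 1, 0]

# Unreviewed documents the trained system will code.
unreviewed_docs = [
    "Revised Acme contract pricing schedule attached",
    "Reminder: parking garage closed next week",
]

# Train on the seed set: convert text to features, then fit a classifier.
vectorizer = TfidfVectorizer()
X_seed = vectorizer.fit_transform(seed_docs)
model = LogisticRegression()
model.fit(X_seed, seed_labels)

# The trained system codes the remaining documents and reports a
# confidence score reviewers can use to prioritize or validate results.
X_new = vectorizer.transform(unreviewed_docs)
predictions = model.predict(X_new)
scores = model.predict_proba(X_new)[:, 1]

for doc, pred, score in zip(unreviewed_docs, predictions, scores):
    label = "responsive" if pred == 1 else "not responsive"
    print(f"{label} ({score:.2f}): {doc}")
```

In practice, commercial TAR tools wrap this basic train-then-predict loop in validation steps (sampling, quality control, iterative retraining) that are the focus of most defensibility disputes.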

Since Da Silva Moore, the use of TAR has gained some traction with litigants and courts. Commentary on the cost savings and increased accuracy of TAR versus human review is relatively old news, and it seems well established in the case law that, as a general matter, TAR is an appropriate method for reviewing electronic data. But the defensibility of the particular TAR process used in a specific case is not yet predictable (pun intended). For example: