Technology-assisted review (“TAR”) is a powerful tool for streamlining document review. Because data volumes are constantly increasing, TAR was designed to leverage human categorization of documents (i.e., responsive/not responsive) to train software that would, in turn, categorize additional documents based upon what the computer had “learned.”
The original TAR (commonly known as TAR 1.0) was a welcome advancement, as it redefined the way electronically stored information (“ESI”) potentially relevant to a litigation was reviewed. TAR’s ultimate purpose was to increase review throughput through advances such as predictive coding, clustering, and concept analytics. And while TAR 1.0 promoted efficiency and increased throughput, TAR 2.0, also known as Continuous Active Learning (“CAL”), has improved the review process even more. Indeed, good news for weary document reviewers: TAR 2.0 makes for an even more efficient, more precise (and therefore less costly) review process. As human reviewers make decisions about a document’s relevance, importance, confidentiality, and other categories, the TAR engine actively internalizes those decisions and continuously learns, re-prioritizing the documents yet to be reviewed so that those most likely to be relevant appear first in the queue. At some point there are diminishing returns and only non-relevant documents remain. One of the benefits of TAR 2.0, then, is that an attorney reviews all relevant documents without non-relevant documents intermixed among the materials.
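For readers curious about the mechanics, the CAL loop described above can be sketched in a few lines of Python. This is a deliberately simplified illustration, not any vendor’s actual algorithm: the keyword-overlap scorer stands in for the statistical classifier a real TAR engine would use, and all names are invented for the example. The shape of the loop, however, is the point — reviewers label the top-ranked batch, the engine learns from each decision, and the remaining queue is re-ranked.

```python
# Toy sketch of a Continuous Active Learning (CAL) review loop.
# The "model" is a trivial keyword scorer; real TAR engines use
# statistical classifiers, but the review loop has the same shape.

def score(doc, relevant_terms):
    """Score a document by how many known-relevant terms it contains."""
    return len(set(doc.lower().split()) & relevant_terms)

def cal_review(documents, is_relevant, batch_size=1):
    """Simulate CAL: a human labels the top-ranked batch, the engine
    'learns' from each relevant call, and the queue is re-prioritized."""
    relevant_terms = set()   # what the engine has learned so far
    queue = list(documents)
    reviewed = []
    while queue:
        # Re-prioritize: documents most likely relevant come first.
        queue.sort(key=lambda d: score(d, relevant_terms), reverse=True)
        batch, queue = queue[:batch_size], queue[batch_size:]
        for doc in batch:
            label = is_relevant(doc)          # the human reviewer's call
            reviewed.append((doc, label))
            if label:                          # engine internalizes it
                relevant_terms |= set(doc.lower().split())
    return reviewed

docs = [
    "merger agreement draft",
    "lunch menu for friday",
    "merger closing checklist",
    "fantasy football picks",
]
result = cal_review(docs, is_relevant=lambda d: "merger" in d)
```

In this run, after the first relevant document is coded, the engine pushes the other “merger” document to the front of the queue, so both relevant documents are reviewed before either non-relevant one — the front-loading effect the post describes.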
An additional benefit of TAR 2.0 is that the process is as accurate as a full manual review, albeit less expensive. Because the entire population of relevant documents is eventually reviewed by human reviewers, the coding designations are as accurate as humanly possible. From there, simple quality-control methods can be used to catch any outliers.
Another advantage of TAR 2.0 is that, because it is a prioritization tool, it can be used on a dataset of as few as 500 documents, whereas TAR 1.0 required a dataset of at least 50,000 documents. This means TAR 2.0 can be leveraged in a wide variety of cases.*
And so, the next time you are beginning a document review project, consider using TAR 2.0, which offers a meaningful way to reduce the volume of documents in the review queue while increasing the efficiency and throughput of the review.
*Moreover, TAR 2.0 is considered more user-friendly than TAR 1.0, which required complicated metrics to validate results.
**Thank you to first year associate, Jaclyn Ruggirello in the Firm’s Uniondale office, for her research assistance related to today’s blog.
Have questions? Please contact me at firstname.lastname@example.org.