
The Myth of the Gold Standard in eDiscovery and What to Do Instead


Written By Samishka Maharaj

Published: Nov 06, 2025


For decades, an ongoing and misleading myth has persisted in the realm of eDiscovery: that manual human review, in which reviewers meticulously pore over documents one by one, is the ultimate benchmark for accuracy, relevance, and privilege. This notion is what’s known as the "gold standard," suggesting that manual review is the most reliable, defensible, and complete method of document review.

Consilio is deeply committed to having experts guide even the AI- and TAR-based review solutions we offer, so this is not a “human versus technology” discussion.

However, courts and litigants are increasingly seeing results that suggest manual human review is no longer the gold standard. For example, in the 2023 case of In re Diisocyanates Antitrust Litigation, the court approved the use of TAR 2.0 workflows and emphasized that the parties' use of advanced analytics and continuous active learning (CAL) was both appropriate and proportional under Rule 26(b)(1).

What’s more, the court explicitly rejected the idea that human review of every document was necessary for defensibility, noting that well-implemented AI review can meet or exceed the effectiveness of linear review. Legal teams should therefore avoid relying on human review alone for eDiscovery; the myth of it being the gold standard is just that: a myth.

Human Fallibility in the Gold Standard

One issue underlying this myth is a romanticized idea of human judgment: the belief that a team of legal professionals, armed with domain knowledge and a review platform, can achieve perfect results. Research and real-world experience consistently demonstrate the opposite.

Recent analyses continue to confirm significant inconsistency among human reviewers. In the 2022 United States v. Google LLC antitrust case, TAR protocols played a central role in discovery, and court filings revealed that manual review had resulted in notable misclassifications, errors that were later caught and corrected by AI-based validation. This example reinforced what many discovery professionals already know: fatigue, ambiguity, and cognitive overload introduce risks that no amount of legal expertise can fully eliminate.

Human reviewers often diverge in their assessments even when working from the same protocols, a phenomenon sometimes referred to as “reviewer variability” or “assessor overlap.” These discrepancies can lead to inconsistent privilege tagging, missed key documents, and erroneous inclusions or exclusions from productions. At scale, these mistakes have serious consequences, including waived privilege under Federal Rule of Evidence 502(b), exposure to sanctions, or critical strategic disadvantages.
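To make reviewer variability concrete, here is a minimal sketch (reviewer names and document IDs are invented) of one common way to quantify it: the Jaccard overlap between two reviewers’ sets of relevance calls on the same documents. An overlap well below 1.0 on a shared sample is an early warning that coding instructions need clarification.

```python
# Hypothetical illustration: quantifying "assessor overlap" between two
# reviewers who coded the same document set for relevance.

def overlap(set_a: set, set_b: set) -> float:
    """Jaccard overlap: documents both marked relevant, over documents either marked."""
    if not set_a and not set_b:
        return 1.0
    return len(set_a & set_b) / len(set_a | set_b)

# Document IDs each reviewer tagged as relevant (made-up sample data).
reviewer_1 = {"DOC-001", "DOC-002", "DOC-005", "DOC-009"}
reviewer_2 = {"DOC-002", "DOC-005", "DOC-007", "DOC-009", "DOC-011"}

print(f"Overlap: {overlap(reviewer_1, reviewer_2):.2f}")  # prints Overlap: 0.50
```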

As such, even the most seasoned legal professionals are vulnerable to the very human limitations of attention, interpretation, and endurance. Technology-assisted review isn’t merely a faster option; it’s increasingly the more accurate and defensible one.

The Illusion of Control

Manual review gives legal teams a sense of wielding more control over their review process, and with it comes a kind of comfort in seeing documents reviewed by actual people. But this is merely a perception, and it does not always match the reality of the review process.

This is because, without systematic checks and measurements, there is no real visibility into how effective a legal team’s review is. As such, a legal team is not able to control some of the most critical aspects of a document review: quality assurance and checks for accuracy.

Manual review on its own cannot guarantee quality; it always warrants additional validation. Without this kind of systems-based verification, your review is exposed to risk, especially in complex or high-volume cases.

Thus, the concept of manual review being the gold standard in legal document review is more of a legacy practice than a modern-day best practice.

Consider In re Apple Inc. App Store Litigation, where both parties leveraged TAR workflows and the court emphasized the importance of transparent validation protocols over blind reliance on manual review. The judge stated that incorporating algorithmic validation provided greater confidence in the accuracy of the review process, especially given the volume and complexity of the data involved. The court made clear that defensibility comes not from who reviewed the documents, but from how review decisions were reached, documented, and verified.

Ultimately, the belief that manual review is the gold standard is rooted more in legacy than in logic. Technology-assisted methods, especially when paired with legal expertise, offer greater precision, consistency, and auditability. Human reviewers still play an essential role in applying legal nuance, but they are most effective when supported by defensible, system-based tools that guide their work.

The Costly and Time-Consuming Consequences of Clinging to Manual Document Review Alone

Beyond issues of accuracy and consistency, manual document review is prohibitively expensive and time-intensive, often making it the least practical choice in today’s data-heavy litigation and investigation environments. Legal teams that rely solely on human reviewers quickly face ballooning costs, not because their work isn’t valuable, but because the volume of data has outpaced the scalability of traditional workflows.

Billing dozens (or hundreds) of attorneys by the hour to manually review hundreds of thousands, or even millions, of documents is no longer sustainable. According to the 2024 EDRM eDiscovery Pricing Benchmark Study, document review still accounts for over 70% of total eDiscovery spend, and manual review tops the cost breakdown. These rising costs are an especially heavy burden in early case assessment (ECA), regulatory response, or internal investigations, where fast, cost-efficient decision-making is critical.

Time is another casualty. Manual review is not only expensive but also slow. Review timelines can stretch from weeks into months, delaying critical legal decisions and increasing downstream risks. In fast-moving litigation or regulatory matters, this lag can result in missed deadlines, penalties, or unfavorable procedural outcomes.

By contrast, modern review platforms that incorporate TAR and AI-driven prioritization can reduce document review costs by 40–60% and compress timelines significantly, without sacrificing defensibility. These technologies allow legal teams to focus human effort where it matters most: nuanced judgment calls, privilege assessments, and strategic analysis.

Continuing to rely on manual review alone is not just risky; it’s inefficient and financially imprudent. Legal teams need to stop viewing AI and automation as a threat to human expertise and instead embrace them as force multipliers that deliver greater value, faster results, and lower costs.

Go Beyond the Myth with Smarter Quality Control

While legal teams shouldn’t part with manual review entirely, they also shouldn’t treat it as the endpoint. Instead, modern eDiscovery teams should adopt a layered, metrics-driven approach that builds quality control into the review process.

This includes integrating traditional techniques such as second-level review and targeted searching, as well as advanced strategies like sampling, feedback loops, and technology-assisted review (TAR). Here are a few key ways to embrace quality control:

1. Second-Level Review and Targeted Searching

Two potent traditional methods to build into your review are second-level review and targeted searching. In second-level review, a portion of the documents reviewed in the first pass are re-reviewed by more senior reviewers to test for accuracy. This can range from a re-review of everything marked as relevant in smaller projects to a spot-check of 10% in larger ones. The goal is to detect patterns of error and correct them as early as possible.

Targeted searching complements this by running queries for specific terms, like attorney names or privileged language, and then verifying that those documents are coded properly. Both methods help uncover inconsistencies in review and allow your team to correct course before they escalate.
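As a simple illustration of the 10% spot-check described above (document IDs and counts are hypothetical), the sample can be drawn with a fixed random seed so the selection is reproducible and auditable:

```python
import random

# Hypothetical illustration: draw a reproducible 10% spot-check sample of
# first-pass "relevant" calls for second-level review.
first_pass_relevant = [f"DOC-{i:05d}" for i in range(1, 2001)]  # invented IDs

rng = random.Random(42)  # fixed seed so the draw can be re-run and documented
sample_size = max(1, len(first_pass_relevant) // 10)  # 10% spot-check
spot_check = rng.sample(first_pass_relevant, sample_size)

print(f"Second-level queue: {len(spot_check)} of {len(first_pass_relevant)} documents")
```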

2. Sampling: Measuring, Not Guessing

Sampling introduces objectivity into the review process. There are two main types of sampling.

Judgmental Sampling: This is an informal method in which reviewers pull an ad hoc subset of documents to “get a sense” of the content. Although it is useful for preliminary assessments, it doesn’t provide reliable metrics. The goal is to form an impression and make an intuitive assessment rather than take a specific measurement.

Formal Sampling: This is a rigorous, statistical approach used to measure performance. It involves reviewing a specified number of randomly selected documents in order to take a defined measurement at a stated confidence level. Such a measurement is taken either to test classifiers or to estimate prevalence:

  • Test Classifiers: This process tests how effective a TAR process, a search, or a human reviewer is. It allows you to quantify the accuracy and error rate of individual reviewers and teams, or to quantify the recall and precision of searches or TAR processes. In the context of quality control, these measurements can identify problem reviewers, measure overall review effectiveness, or implement lot acceptance sampling.
  • Estimate Prevalence: This involves reviewing a simple random sample of a given collection of materials to estimate how much of a given kind of thing is present. In the context of quality control, this measures how much relevant material may exist in the unreviewed remainder left after applying searches or a TAR process (a.k.a. measuring elusion).

Formal sampling allows teams to quantify error rates, identify outliers, and apply lot acceptance standards, which, in turn, ensures consistent quality across the board.
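As a rough sketch of the arithmetic behind these measurements (all counts here are invented), recall and precision come from a validation sample of reviewed documents, while elusion comes from a random sample of the unreviewed remainder, paired here with a normal-approximation 95% confidence interval:

```python
import math

# Invented counts from a random validation sample.
tp, fp, fn = 180, 20, 30  # true positives, false positives, false negatives

recall = tp / (tp + fn)      # share of truly relevant documents the process found
precision = tp / (tp + fp)   # share of flagged documents that are truly relevant

# Elusion: relevant documents found in a random sample of the unreviewed remainder.
sample_n, relevant_in_sample = 1500, 12
elusion = relevant_in_sample / sample_n

# 95% normal-approximation confidence interval for the elusion estimate.
margin = 1.96 * math.sqrt(elusion * (1 - elusion) / sample_n)

print(f"Recall:    {recall:.1%}")     # 85.7%
print(f"Precision: {precision:.1%}")  # 90.0%
print(f"Elusion:   {elusion:.2%} ± {margin:.2%}")  # 0.80% ± 0.45%
```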

3. Feedback Loops for Continuous Improvement

Document review is not a “set it and forget it” task. Rather, it is an evolving process that requires communication among all stakeholders. A strong feedback loop includes the following:

  1. Review managers regularly provide corrections and clarifications to reviewers.
  2. Weekly meetings and one-on-ones are used to address and correct recurring issues.
  3. A shared question-and-answer log ensures that everyone operates from the same understanding.

Equally important is the feedback loop between review managers and the case team. As the case evolves or new information comes to light, the review strategy must adapt with it. These feedback mechanisms transform a static review into a dynamic process that continuously learns and improves over time.

The Critical Role of Privilege Protection

Manual review is particularly risky in the context of privilege, where mistakes can be debilitating. Under FRE 502(b), courts evaluate whether "reasonable steps" were taken to prevent unintentional disclosure. It’s therefore not enough to simply assign a team of human reviewers to comb through documents unchecked.

The Committee’s Explanatory Note on Rule of Evidence 502 states that the selection and implementation of a review methodology must be defensible: it must be explained, justified, and tested. Quality assurance in privilege review therefore isn’t a nice-to-have or secondary concern; it’s a necessity.

Targeted searches, classifier testing, sampling for privilege hits, and thorough training all contribute to a defensible and effective privilege review strategy. It’s not about checking boxes; it’s about ensuring the methodology itself holds up under scrutiny.

The New Gold Standard: Embracing Technology Without Losing Human Oversight

The rise of TAR, continuous active learning (CAL), advanced analytics, and AI doesn't negate the need for human review; it enhances it. These tools are not replacements for legal practitioners; they are legal force multipliers.

When properly implemented, technology-assisted workflows have repeatedly been shown to be more accurate, more consistent, and more efficient than exhaustive manual review. Even so, these technologies still require human oversight, quality control, and feedback loops to succeed.

TAR and CAL use machine learning to prioritize and surface the most relevant documents first, enabling legal teams to focus their time and expertise on what matters most. AI can identify patterns, anomalies, and contextual signals across massive datasets at speeds no human team could match. Meanwhile, legal professionals provide the domain knowledge, interpretive nuance, and strategic thinking that machines still cannot replicate.
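To give a flavor of how a CAL loop prioritizes documents, here is a deliberately tiny sketch; it is not Consilio’s implementation, and scikit-learn, the corpus, and the simulated reviewer calls are all stand-ins:

```python
# Deliberately simplified continuous active learning (CAL) loop; scikit-learn
# and all data here are illustrative stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = [
    "merger agreement draft with outside counsel",     # 0
    "lunch menu for the quarterly offsite",            # 1
    "pricing discussion ahead of the acquisition",     # 2
    "fantasy football league standings",               # 3
    "due diligence checklist for the target company",  # 4
    "office parking policy update",                    # 5
]
labels = {0: 1, 1: 0}   # seed judgments from a human reviewer (1 = relevant)
unreviewed = {2, 3, 4, 5}

vectors = TfidfVectorizer().fit_transform(docs)

# Each round: retrain on every document labeled so far, surface the unreviewed
# document the model scores as most likely relevant, and get a human call on it.
while unreviewed:
    trained_ids = sorted(labels)
    model = LogisticRegression().fit(vectors[trained_ids],
                                     [labels[i] for i in trained_ids])
    candidates = sorted(unreviewed)
    scores = model.predict_proba(vectors[candidates])[:, 1]
    best = candidates[scores.argmax()]
    labels[best] = 1 if best in (2, 4) else 0  # simulated reviewer judgment
    unreviewed.remove(best)
    print(f"Surfaced doc {best}; coded {'relevant' if labels[best] else 'not relevant'}")
```

Real TAR 2.0 platforms wrap this loop in the validation, sampling, and stopping criteria discussed earlier.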

This partnership delivers faster, more accurate, and more cost-effective results than manual review alone ever could. In fact, most courts now accept, and often expect, some form of TAR or validation technology in large-scale matters. Organizations that adopt this hybrid approach are not just keeping pace with eDiscovery demands; they’re gaining a competitive edge.

As such, a new gold standard is making its way into the legal sphere: combining human expertise with technological advancement rather than relying on either alone.

This new model isn't just more efficient; it’s more defensible. When audit trails, sampling protocols, and validation methods are built into the review process, teams can prove the rigor of their methods without relying on the fallible memory or subjective opinions of individual reviewers.

Thus, a hybrid model, in which humans and technology work in tandem and quality is measured and managed throughout the process, is the new gold standard.

Part with the eDiscovery Myth and Embrace the Metrics with Top eDiscovery Provider Consilio

The idea that manual review is the gold standard of document review is a relic of the past. As the volume and complexity of electronically stored information (ESI) continue to grow, legal teams must move beyond intuition and embrace evidence-based practices.

By implementing structured quality controls, leveraging technology intelligently, and fostering strong feedback systems, legal teams can achieve higher accuracy, reduce costs, and defend their processes in court.

Document review today needs a combination of legal expertise and smart technology. You still need the right experts to guide the process, but that alone isn’t enough when you’re dealing with complex or high-volume matters.

That’s where tools like Guided AI Review come in, offering fast, AI-driven document review. And with AI PrivDetect, you can use multi-model technology to identify privilege more accurately. That’s why you should work with Consilio, the leading legal technology firm: we offer seasoned legal consultants alongside a breadth of tools as part of our eDiscovery software for litigation.

Work with us today.

