Consilio Advanced Learning Institute

eDiscovery Deep Dive: 6 Key Questions for Opposing Party Productions

You’re an eDiscovery practitioner and opposing counsel has just delivered a production – now what? It’s important to ask a few key questions to get your arms around the dataset.

Most eDiscovery practitioners have been in this situation: An opposing counsel has just delivered a production volume – now what?

Many eDiscovery practitioners might simply hand the complete dataset to a legal services provider or eDiscovery project manager for loading into a document review platform – ASAP.

Before having it loaded, however, it’s important to ask a few key questions to get your arms around the dataset you’re working with:

  1. What is the actual eDiscovery production format? Did the opposing party provide PDFs, native files, or TIFFs with load files?

  • PDF Production Volumes: These usually consist of PDFs endorsed with Bates numbers, often without any accompanying metadata load files or text files, although sometimes the PDFs are searchable.
  • Native Production Volumes: Some parties simply turn over native versions of documents that may or may not have been renamed to a Bates number, without more.
  • Standard TIFF Production Volumes: These typically consist of a set of Bates-numbered TIFF images with accompanying load files for the images (.opt or .lfp), numbering and metadata (.dat), and text (.lst or a text link in the .dat).

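For practitioners comfortable with a bit of scripting, a quick first-pass look at a delivered volume can hint at its format before anything is loaded. The sketch below simply tallies file extensions; the folder path is hypothetical and the checks are rough heuristics, not a substitute for your project manager's assessment.

```python
from collections import Counter
from pathlib import Path

def summarize_volume(volume_dir: str) -> Counter:
    """Tally file extensions found anywhere in a delivered production volume."""
    return Counter(p.suffix.lower() for p in Path(volume_dir).rglob("*") if p.is_file())

counts = summarize_volume("productions/VOL001")  # hypothetical unpacked volume path
print(counts)

# Rough heuristics only -- confirm the format with your project manager.
if counts[".opt"] or counts[".lfp"] or counts[".dat"]:
    print("Load files present: likely a standard image (TIFF) production.")
elif counts[".pdf"]:
    print("Likely a PDF production.")
else:
    print("Possibly a native-only production (or something else entirely).")
```
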
Ask your eDiscovery project manager to let you know the format if it’s not readily apparent to you. Why is the production volume format important? Read on.

  2. Is the eDiscovery production volume format generally as requested, agreed, or required?

What if the production should have been in one format under the applicable rules, ESI agreement, and/or requests – but arrived in a different format? You may want to address that with opposing counsel as quickly as possible before having the volume loaded to your review platform.

Any discrepancy in format could lead to additional time and expense for the production to be loaded to, or to be as useful as possible within, your review platform.

For instance, if you asked for a standard TIFF production volume but received a PDF production volume, the PDFs may need further processing to extract text or run OCR (optical character recognition) so that the documents are searchable in your review platform.
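
As a rough illustration of that check, the sketch below uses the pypdf package (an assumption; your environment may differ) to flag PDFs that yield little or no extractable text and so likely need OCR. The folder path is hypothetical.

```python
from pathlib import Path
from pypdf import PdfReader  # assumes the pypdf package is installed

def has_text_layer(pdf_path: Path, min_chars: int = 25) -> bool:
    """Return True if any page yields a meaningful amount of extractable text."""
    reader = PdfReader(str(pdf_path))
    return any(len((page.extract_text() or "").strip()) >= min_chars for page in reader.pages)

volume = Path("productions/VOL001")  # hypothetical unpacked volume path
needs_ocr = [p.name for p in volume.rglob("*.pdf") if not has_text_layer(p)]
print(f"{len(needs_ocr)} PDFs appear to lack a text layer and may need OCR")
```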

You may also want to consider sending the PDFs out for coding of values that you would normally have expected to receive in, say, a metadata .dat file, but which may now only be apparent on the face of the production PDFs, such as email header values or document titles. Or, you may need to have a single production PDF “re-unitized” by apparent document breaks for ease of review.

Similarly, if opposing counsel unexpectedly provided a native set of documents, the documents may need to be processed first so that each can be assigned a unique DocID for loading to your review platform and so that searchable text and any available metadata values can be extracted. You may also need to have TIFF images of those native records generated for use in the course of the litigation.
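
Processing tools handle this in practice, but as a simple sketch of the idea, the snippet below assigns sequential control numbers to loose native files and writes out a mapping. The folder path, prefix, and numbering scheme are all hypothetical.

```python
import csv
from pathlib import Path

def assign_doc_ids(native_dir: str, prefix: str = "OPP", start: int = 1) -> list[dict]:
    """Assign sequential control numbers to loose native files and record the mapping."""
    files = sorted(p for p in Path(native_dir).rglob("*") if p.is_file())
    return [{"DocID": f"{prefix}{n:08d}", "OriginalFileName": p.name}
            for n, p in enumerate(files, start=start)]

rows = assign_doc_ids("productions/VOL002_natives")  # hypothetical path and prefix
with open("native_docid_map.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["DocID", "OriginalFileName"])
    writer.writeheader()
    writer.writerows(rows)
```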

Further, it may not make sense to begin working with the production, and taking the time to code and annotate the record in your review platform, if a replacement set is going to be provided.

  3. Does the volume include only newly produced records, or also previously produced records?

Some parties may mix re-productions of previously produced records into productions of new data without advance notice to the recipient. A good project manager at your legal services provider can tell you whether the production consists of an entirely new Bates range, or whether there are any overlaps with Bates numbers previously used (whether inadvertent or intentional). Assuming clawbacks are not involved, you may want to have your project manager help you compare the prior and new versions of records with the same Bates numbers before simply overwriting the existing versions in the review platform.
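
If you have a simple export of Bates numbers from the new volume and from your review platform, spotting the overlap is a straightforward set comparison, as in the sketch below (the file names are hypothetical, with one Bates number per line).

```python
def load_bates(path: str) -> set[str]:
    """Read one Bates number per line into a set."""
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}

new_bates = load_bates("new_volume_bates.txt")            # hypothetical export from the new volume
existing_bates = load_bates("platform_bates_export.txt")  # hypothetical export from the platform

overlaps = new_bates & existing_bates
print(f"{len(overlaps)} records in the new volume reuse previously produced Bates numbers")
```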

  4. Does the Bates range immediately follow from the last eDiscovery production?

If this production does not start with the next Bates number in sequence after the last production, there may be an unexpected gap in the Bates range.

That is something a savvy eDiscovery practitioner would quickly follow up with the opposing party about.
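
That check is easy to script as well; the sketch below splits a Bates value into its prefix and numeric part and compares the first number of the new volume with the last number of the prior production (both values shown are hypothetical).

```python
import re

def split_bates(bates: str) -> tuple[str, int]:
    """Split a Bates value like 'ABC000123' into its prefix and numeric part."""
    match = re.match(r"([A-Za-z_\-]*)(\d+)$", bates)
    if not match:
        raise ValueError(f"Unrecognized Bates format: {bates}")
    return match.group(1), int(match.group(2))

prior_prefix, prior_num = split_bates("ABC000980")  # last Bates of the prior production (hypothetical)
new_prefix, new_num = split_bates("ABC001200")      # first Bates of the new volume (hypothetical)

if prior_prefix == new_prefix and new_num != prior_num + 1:
    print(f"Gap of {new_num - prior_num - 1} Bates numbers between productions -- worth a follow-up")
```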

  5. Even if the production format is generally as expected, are there any specific areas of concern?

For instance, even if a searchable PDF production was expected, and PDFs are received, are the PDFs provided actually searchable outside of the review platform? Or does the volume also include native Excel documents with the same Bates number as some of the PDFs? If so, the PDF may simply be a slipsheet indicating that the Excel record is being produced in native format; you may want to have your vendor load the Excel as the native file for that record in your review platform and simply image the PDF and load it as the images for the same record instead.
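
One quick way to spot likely slipsheets, assuming the PDFs and natives are named by Bates number, is to match file names across the two sets, as in the hypothetical sketch below.

```python
from pathlib import Path

volume = Path("productions/VOL003")  # hypothetical unpacked volume path
pdf_stems = {p.stem for p in volume.rglob("*.pdf")}
excel_stems = {p.stem for p in volume.rglob("*") if p.suffix.lower() in {".xls", ".xlsx", ".xlsm"}}

# PDFs that share a Bates-numbered file name with a native Excel are likely slipsheets.
likely_slipsheets = sorted(pdf_stems & excel_stems)
print(f"{len(likely_slipsheets)} PDFs appear to be slipsheets for native Excel files")
```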

Similarly, if a standard TIFF production volume was expected, and generally appears to have been received, more detailed checks may be helpful:

  • Does the count of images provided match the Bates range? If the number of images does not match up with the volume’s overall Bates range, there may be gaps to follow up with opposing counsel about (see the sketch after this list).
  • Was text provided for all records? If not, you may want to have the production images for any records without text OCRed so that searchable text can be available in the review platform for those records.
  • Were selective natives (for Excel documents, for instance) provided as expected?
  • Were the metadata fields included in the .dat file those that were expected?
  • Even if all expected metadata fields were included in the .dat, were values populated for those fields? If documents are being re-produced and values are not being re-provided in those fields in the .dat, any existing values in your review platform for those fields may be overwritten.
  • Do the metadata fields provided in the .dat file match up with the currently available fields in your review platform?  If not, you may want to specify how the fields in the .dat file should be mapped to the existing fields in your review platform, or request that new fields be created to best store those values.  You may need to have the displays in your review platform adjusted as well so that any new fields are visible during your review.
  • Was a Confidentiality field value provided that matches the confidentiality endorsements on the images? If not, you may want to have that value coded by reviewers, or have the opposing party provide an overlay with those values. Because you may want to assess the validity of the confidentiality endorsements, it can be useful to have them available as a searchable fielded value.
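
Taking the first of those checks as an example, an Opticon-style .opt file lists one image per line, so the image count can be compared against the span implied by the volume's Bates range. The file path and Bates values below are hypothetical.

```python
import re

def bates_span(first: str, last: str) -> int:
    """Number of pages implied by first and last Bates values sharing the same prefix."""
    first_num = int(re.search(r"(\d+)$", first).group(1))
    last_num = int(re.search(r"(\d+)$", last).group(1))
    return last_num - first_num + 1

with open("productions/VOL001/VOL001.opt", encoding="utf-8") as f:  # hypothetical path
    image_count = sum(1 for line in f if line.strip())

expected = bates_span("ABC001200", "ABC004150")  # hypothetical first and last Bates values
if image_count != expected:
    print(f"Bates range implies {expected} images but the .opt lists {image_count} -- possible gaps")
```
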
  6. Are there any other features available in your eDiscovery document review platform that may be useful?

Apart from some of the technical considerations above, it may also be helpful to discuss with your project manager whether there are other features in your review platform that may be useful to assess the produced data other than just by a document-by-document review.

For instance, you may want to assess whether any prior work product in your review platform can be leveraged to analyze the opposing production volume, or whether there are ways to efficiently prioritize or group records within just the new volume itself:

  • If MD5 hash values and family values were provided in the .dat file, you should be able to use those to compare the new records against any already-coded records that you have, or to group any duplicates within the volume itself together for mass coding/review, if appropriate (see the sketch after this list).
  • If metadata values have been provided with the production, you may be able to run pivots or searches in your workspace for other records with similar values.
  • If text has been provided with the volume or is subsequently obtained via processing or OCR, you also may be able to use email threading, textual near-duplicate identification, or conceptual analytics and assisted review features to compare the new records against your already-coded records, or simply to organize/prioritize the records more efficiently in the volume itself.
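
As a sketch of the first idea, the snippet below groups records in the .dat by hash value, assuming Concordance-style delimiters (the 0x14 field separator and thorn quote character) and hypothetical field names of MD5HASH and BEGBATES; the encoding and field names vary by vendor, so check the actual .dat header before relying on it.

```python
import csv
from collections import defaultdict

groups: dict[str, list[str]] = defaultdict(list)
with open("productions/VOL001/VOL001.dat", encoding="utf-8", newline="") as f:  # hypothetical path
    reader = csv.DictReader(f, delimiter="\x14", quotechar="\u00fe")  # Concordance-style delimiters
    for row in reader:
        if row.get("MD5HASH"):  # hypothetical field name -- check the .dat header
            groups[row["MD5HASH"]].append(row["BEGBATES"])

duplicate_sets = {h: bates for h, bates in groups.items() if len(bates) > 1}
print(f"{len(duplicate_sets)} hash values appear more than once within the volume")
```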

The next time you receive an opposing production volume, take a few moments to consider and confirm what components it includes, what follow-up may be needed with opposing parties, and what features may be available to assist in working more efficiently with those records in your review platform.
