http://ow.ly/bE7fG
A guest blog post by Jason Baron and Ralph Losey, Esq., posted on the e-Discovery Team® website.
This blog post provides information from Jason Baron's recent keynote speech at the Seventh Circuit Electronic Discovery Workshop on Computer-Assisted Review, held in Chicago.
The article discusses the TREC project, the RAND study of e-discovery expenses, and the impact that technology is having on the legal profession. It quotes from Mr. Baron's keynote speech: "My crystal ball is as cloudy as any other preacher on the ediscovery conference circuit, but in the time I have I’d like to sketch out three issues, in the form of questions, concerning the present and future state of e-discovery search that I trust may be of interest at least to some. First, in light of emerging case law, what does cooperation truly mean in an age of software-assisted review and the technical protocols that optimize them? Second, with the recent demise (or at least hiatus) of the TREC Legal Track, is there a need for further research to evaluate search efficacy? And third, are we finally at the point where we should be contemplating voluntary “quality” standards in the e-discovery arena?"
The article examines recent case law that has given some judicial approval to the use of technology-assisted review workflow processes such as predictive coding. It also looks at some of the scientific research conducted to substantiate the effectiveness of technology-assisted review.
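The research referenced here (TREC Legal Track and the RAND study) typically measures review effectiveness with precision and recall. As a minimal illustrative sketch (not drawn from the article itself), these metrics can be computed from a set of retrieved documents and a set of known-relevant documents:

```python
def precision_recall(retrieved, relevant):
    """Compute precision (fraction of retrieved docs that are relevant)
    and recall (fraction of relevant docs that were retrieved)."""
    retrieved, relevant = set(retrieved), set(relevant)
    true_positives = len(retrieved & relevant)
    precision = true_positives / len(retrieved) if retrieved else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    return precision, recall

# Example: a review returns 4 documents, 3 of which are relevant,
# out of 6 relevant documents in the whole collection.
p, r = precision_recall({"d1", "d2", "d3", "d9"},
                        {"d1", "d2", "d3", "d4", "d5", "d6"})
# p == 0.75, r == 0.5
```

In TREC-style evaluations, high recall at reasonable precision is the usual benchmark for whether a technology-assisted process matches traditional human review.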
The article further states: "...the RAND report’s bottom line is that software assisted review holds the potential for significant cost savings without compromising quality as compared with more traditional forms of human review.
Again let me be clear: I remain a strong cheerleader and advocate for advanced forms of search and document review. But there are dozens of open questions that remain for further research in this area, and would caution against making judicial short-cuts to findings that say this area is essentially a “solved problem” for lawyers. We ain’t there yet. Some of the kind of questions I have in mind:
- What kind of cases does software-assisted review methods work best in?
- What kind of seed set process is optimum — random sampling or some kind of judgmental sampling based on keywords, using hot docs and privileged docs?
- How good are such software assisted methods in stabilizing subsets of documents by issue tags?
- How good is the software in dealing with cryptic docs?
- How good can software-assisted methods be in really honing in on hot documents of the most material nature to a given litigation."
The article also looks at standards used to measure the effectiveness of e-discovery processes, such as ISO 9001, The Sedona Conference, and the DESI IV Workshop, and provides links to information about the referenced organizations. In addition, the blog post links to another informative article, "Evaluation of Information Retrieval in E-Discovery," co-authored by Jason Baron.