Honeywell’s Ron Harry (Manager, Electronic Data Discovery) shows a model in which Honeywell divides matters into three tiers — small, medium and large. Those tiers are then split across six stages: identification, collection, pre-process, pre-review, processing and production. There are in-house, out-sourced and hybrid models for each cell this matrix produces, and Honeywell aims to save as much money as possible by slotting the right model into each cell.
Gartner has a slightly broader model: we set a certain threshold for matters per year and then advise in-house or out-sourced handling (for the left side of the EDRM) depending on which side of the threshold a company falls.
Harry mentions that they’re looking at the EDRM, asking what the company needs to accomplish in each cell, and then asking what level of sophistication they need there. An interesting point from Christina Ayiotis, Group Counsel (E-Discovery and Data Privacy) at Computer Sciences Corporation — how to work out the impact of existing software and service assets on e-discovery, and which purchases (she mentioned enterprise search, and I didn’t even give her any baked goods) can be used in multiple places.
Harry says that Honeywell is also seeking to decommission enterprise information systems that are simply too old to trust. That could mean eliminating the systems or migrating the data elsewhere. What he says they are trying to do at Honeywell is involve his group in the actual process of equipment collection and requisitioning, which is intriguing — in other words, trying to get a DNR tattoo on the servers that need it.
Harry said he attended a meeting at Honeywell where an administrator gave a speech on protecting the company from Web 2.0 technologies by barring them. “I just sat in my chair and I was going insane inside my head…you have to work with the technology as it comes along.” Much of his work has been on mitigating the concomitant risks.
KPMG’s Keith Andrzejewski (Principal) is now discussing models for working with information BEFORE e-discovery enters the litigation phase — what we think of as Data Loss Prevention.
Andrzejewski said enterprise search is a huge boon in identifying genuinely valuable information, but at the same time produces an explosion of results that aren’t sufficiently narrowed. “The challenge has been I couldn’t find anything…now, I find everything. Now we get an automated, overwhelming response.”
Jim Lynch (partner, Latham & Watkins): “You’re saying garbage in, garbage out. The best ones preserve chain of custody, and that’s a critical issue. If the data collection is not done properly, then you can have a real problem from a spoliation perspective. The best enterprise search tools eliminate that risk by capturing it in a way that is documented and defensible.” (not sure that he said “defensible”)
Ayiotis mentions that one of the problems is content like public Internet sites and private intranet sites, because they’re hard or nearly impossible to manage. (Something that came up in the discussion with Autonomy this morning apropos of the potential synergies between itself and Interwoven.)
Good job by Lynch, the moderator, in guiding the discussion.
Glad I went to this panel. Everything a pleasure.