Sure, I’ll get you a beer, but you need to answer a quick question first: “what is The Meaning of Life?” (no, it is not 42). Are you getting your beer any time soon? Not likely… This imperfect metaphor is what comes to my mind when I hear that you “need” to have a formal data classification effort before deploying content-aware DLP technology.
If a simple task (and, no, I am not implying that successfully deploying DLP is simple) has a prerequisite, and that prerequisite is itself an impossible task, the simplicity of the original task doesn’t really matter. Therefore, making DLP depend on data classification makes DLP success unlikely, however counter-intuitive that might sound.
Now, I have seen examples where data classification, whether heavy-weight or selective and lightweight, worked really well. However, I would venture a guess that most of my readers here agree that it is not for everybody. To quote our recent research on this, classification efforts “receive mixed success” and “elaborate multitier classification schemes tend to fall back to simplistic efforts in practice (creating a disconnect between policy and reality).” In brief, you can guess what wins when “getting the job done” and “following the classification policy” collide…
Here is what some organizations did instead – and it worked:
- A lightweight and targeted “mini-classification,” focused solely on the types of data to be protected by the DLP tool, can be defined and implemented.
- Content-aware DLP itself can help an organization gain clarity about the types, volumes and value (with a human reviewing what the tool finds) of the data that is stored, transmitted and used; in this way, classification becomes a side benefit of DLP, not a prerequisite for it.
- Finally, tactical DLP without any data classification is possible and may occasionally be successful, especially when the scope of protected data is REALLY narrow (such as specific, known card numbers from a database; see the sketch below).
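To make the last bullet more concrete, here is a minimal sketch of what such “tactical DLP” could look like: exact matching of outbound text against a pre-hashed list of the known card numbers, with no classification scheme involved. The file name (known_cards.txt) and the candidate-matching rule are illustrative assumptions of mine, not the behavior of any particular DLP product.

```python
# Hypothetical sketch of narrow-scope, exact-match detection for a fixed list of
# known card numbers. File name and matching rule are illustrative assumptions.
import hashlib
import re

# Pre-compute hashes of the known card numbers (e.g., exported once from the database),
# so the raw numbers never need to live inside the monitoring script itself.
with open("known_cards.txt") as f:
    KNOWN_HASHES = {
        hashlib.sha256(line.strip().encode()).hexdigest()
        for line in f
        if line.strip()
    }

# Deliberately narrow detection rule: any 13-19 digit run (allowing spaces/dashes).
CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def find_known_cards(text):
    """Return candidate card numbers from `text` that exactly match the known list."""
    hits = []
    for match in CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if hashlib.sha256(digits.encode()).hexdigest() in KNOWN_HASHES:
            hits.append(digits)
    return hits

if __name__ == "__main__":
    sample = "Please charge 4111 1111 1111 1111 for the order."
    # Non-empty only if that exact number appears in known_cards.txt.
    print(find_known_cards(sample))
```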
In essence, a security data classification program can help your DLP effort a lot, but it is NOT a set-in-stone prerequisite. Finally, here is some additional guidance on creating a classification scheme.
3 Comments
What’s the difference between classification and mini-classification?
By classification I understand:
* defining types of information
* knowing who can access which types
* knowing where a type of info is created and stored
* knowing through which channels it can legitimately go out of its storage.
Am I naive about it?
Anton – companies should listen to this; keeping data classification simple is salient advice and gives the business a quick return. Companies could solve 80% of their problems with a simple way to enforce their DC policy – assuming they have one – and tackle that 80% with a DLP solution. Then, and only then, as they understand their data over time, they can finesse their policies based on knowledge, not theory, and use their existing investment in technology to tackle the remaining 20%.
@chris Thanks for the comment! Simply the scale and the approach:
* defining types of SOME information
* knowing (in SOME cases) when info is created, etc
Applying these and other things only to the select types of info that are planned to be protected by a DLP tool (e.g. only documents from repository X, only files about project Y, etc.)
@ian Thanks for the comment as well. In this area, “starting small/simple” is indeed solid advice; otherwise the “prerequisite” DC project ties down their efforts for years and they never move on to protection, education, etc.