Heidi Wachs

A member of the Gartner Blog Network

Heidi Wachs is a Research Director on the Gartner for Technical Professionals Identity and Privacy Strategies team. She has spent one year at Gartner and six years in the IT industry.

Needed: CPOs at Non-profits

by Heidi Wachs  |  January 20, 2014  |  Comments Off

Opportunities for privacy professionals have been expanding exponentially for the past few years and, presumably, will continue to do so (thank you, Target, Neiman Marcus, Snapchat, and Starbucks). At any given time, companies across the US are hiring privacy counsels, privacy officers, privacy engineers, and Chief Privacy Officers in an amazing variety of industries: pharmaceuticals, retail, IT, government, and of course, healthcare. But there is one glaring omission from that list: non-profit organizations.

Why would a non-profit need a CPO? Well, let’s examine the type of data a sizeable, well-established non-profit collects:

  • Employee data: In order to conduct its day-to-day operations, much like the many companies in the private sector with CPOs, a non-profit has staff. Those staff members get paid and receive health benefits, and to process both, the non-profit has to collect their names, addresses, dates of birth, and Social Security numbers, among other data elements.
  • Donor data: Non-profit organizations are hotbeds of donor activity. They solicit one-time and recurring donations via credit cards. Many non-profits also engage in what is referred to as planned giving: donations made through estate-planning tools like trusts and wills. To execute both small and large donations, these organizations collect personal and financial information. They also maintain notes on donors detailing their families, their lives, and the methods most successful for soliciting donations. This, in essence, is a development professional's intellectual property.
  • Grassroots data: Advocacy organizations collect another type of data that, for lack of a better term, I'll refer to as grassroots data. This data associates supporters with their state and congressional districts so that "action alerts" can be targeted to the activist members best placed to reach out to influential state and federal representatives. These "action alerts" are distributed via phone and email blast communications. (A toy sketch of such a targeting record follows this list.)
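
For readers who haven't seen one, here is a toy Python sketch of what a grassroots targeting record can look like. All names and districts are invented; note how a single row ties an identifiable person to both a cause and a location:

    # Toy illustration of grassroots targeting; names and districts invented.
    supporters = [
        {"name": "A. Jones", "state": "VA", "district": "VA-08"},
        {"name": "B. Smith", "state": "MD", "district": "MD-04"},
        {"name": "C. Lee",   "state": "VA", "district": "VA-10"},
    ]

    def action_alert(target_districts):
        """Return the supporters whose representatives we want contacted."""
        return [s for s in supporters if s["district"] in target_districts]

    for activist in action_alert({"VA-08", "VA-10"}):
        print(f"Alert {activist['name']}: call your representative")

Exposing even this minimal record reveals where a supporter lives and which cause they back.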

Social Security and credit card numbers can be the keys to committing identity theft, but they have nowhere near the power to reveal what donor or grassroots data can. Association with a particular non-profit can reveal political, social, or religious leanings. For example, it could be professionally or personally damaging if a vocal social conservative were revealed to have made large donations to a well-known liberal organization, such as Planned Parenthood or the ACLU. This is precisely why major donors often prefer to stay anonymous.

For some non-profit organizations, especially those that work on controversial issues, the personal safety of employees and donors is at stake. These organizations receive harassing phone calls, emails, and letters threatening attacks on the people and facilities committed to carrying out their mission. Should the personal addresses of employees or donors be breached, those individuals face a risk of physical danger.

Are there privacy officers at non-profits? Yes, some exist. Some organizations designate a member of the counsel's office as the privacy official. But, unlike in those other industries, there is not a lot of opportunity, and job postings are rare.

Private sector CPOs may serve double duty. There are plenty of corporations with philanthropic arms separately incorporated as non-profit organizations, such as the Washington Nationals Dream Foundation and the Avon Foundation for Women. The privacy counsels and CPOs of those companies must protect not only the traditional employee and customer information, but also donor information.

Donor and grassroots data are not regulated the way, for example, medical diagnoses are. There is no legal or regulatory obligation to keep them secret. This is precisely why non-profit organizations should hire full-time CPOs to protect and preserve the privacy of employee, donor, and grassroots data. The implications of a breach of this data have the potential to be far more severe than those of a breach of credit card numbers, which are easily replaced.

Non-profit CPOs, where they do exist, are performing the same work as private sector CPOs, but with vastly different risks. There is no legal or regulatory framework for their data to provide a baseline for compliance. The real challenge, and skill, for non-profit CPOs is understanding a much wider set of risk factors specific to each organization's mission, donors, and activist base.


The (Cloud) Ship Has Sailed

by Heidi Wachs  |  November 6, 2013  |  Comments Off

Yesterday, the New York Times published a letter to the editor from Marc Rotenberg, president of the Electronic Privacy Information Center (EPIC), titled "Protecting Data Privacy." The opinion piece provides a nice chronology of the attempts made over the past decade or so to persuade technology companies to do a better job of preserving data privacy through methods including data minimization, destruction, segregation, and encryption. These are all core privacy principles that I have implemented in previous jobs and encouraged through my published research and client interactions.

But the conclusion of the opinion piece left me scratching my head. “Perhaps it is time to rethink the cloud computing model. The risks are too high. The safeguards are too weak. And the companies are not prepared to carry the responsibility of gathering so much user data.”

Rethink the cloud computing model? The horse has left the barn, the cat is out of the bag; take your pick of clichés, but let's be realistic. If we're going to address the privacy issues associated with cloud computing, then we need to start by accepting the current state of play and figure out how to enhance and strengthen it moving forward.

  • Cloud computing is far more pervasive than the average consumer or citizen can possibly realize.  Thanks to both free and fee-driven providers and services like Amazon Web Services, Salesforce.com, and of course Google, the cloud is the foundation for how large segments of the global economy conduct business.
  • The security features, controls, and personnel supporting these cloud services are far more advanced and skilled than many companies could achieve in-house.  Think about it: their core function is to keep stuff safe and secure in the cloud.  If they fail at it, they go out of business. For more on this, see “Managing Privacy Risks in the Public Cloud.”
  • Even if we concede that the government will find a way around whatever security controls we put in place to protect privacy, that doesn't eliminate the need to address the legal, ethical, and moral privacy concerns of business-to-business and business-to-consumer relationships.

We can better protect data privacy through contracts with enhanced privacy protections, increased security controls, and greater transparency about data handling. We need to have open, frank negotiations with cloud service providers to clearly establish where data is being stored, how it is being protected, who is accessing it, how it is being used for marketing purposes or resold to third parties, and how it is being destroyed. Expectations for notification when data is inappropriately accessed or exposed also need to be set. All of these factors combined will lead to better data privacy in a cloud-centric world, and will do far more good than starting over from scratch.


“The car is the thing on the road that takes you back to your abode”

by Heidi Wachs  |  October 8, 2013  |  Comments Off

Here’s the privacy problem with a lot of location-aware technology: most of the time we are coming and going from the same handful of starting points and destinations.  Whether it’s driving with a GPS or EZ Pass, commuting with a reusable fare card, or even using an app to track walking or running, this data creates distinct patterns. We’re basically telling all sorts of companies and government agencies where we live and work, where we run our errands, where we eat, and anything else we do with some regularity.
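
To see how quickly those patterns give us away, consider a minimal Python sketch. The trip log is invented for illustration; a real system would work from timestamped GPS traces or toll-plaza reads rather than pre-labeled places:

    from collections import Counter

    # Invented trip log: (origin, destination) pairs standing in for GPS or toll records.
    trips = [
        ("home", "office"), ("office", "home"),
        ("home", "gym"), ("gym", "home"),
        ("home", "office"), ("office", "daycare"), ("daycare", "home"),
    ]

    # The endpoints visited most often give away where we live and work.
    endpoint_counts = Counter(place for trip in trips for place in trip)
    print(endpoint_counts.most_common(2))  # [('home', 6), ('office', 4)]

Even without labels, the two most frequent clusters in almost anyone's trace are home and work.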

Sometimes, we provide this information in exchange for an advantage or perk.  Take, for example, Pay As You Drive (PAYD), the latest trend in auto insurance. Once you’ve plugged that nifty device into your car to track your activity in exchange for lower rates, you’ve also provided your insurance company with a treasure trove of your location-based information.

Researchers at the University of Denver found that “driving habits data such as speed, time of travel, number of miles driven, braking and acceleration data” meshed with publicly available maps enabled them to predict top destinations for a trip. In 60% of their cases, the actual destination was one of their top three predictions.  This is not so dissimilar to the trade-off of using EZ Pass.  Our routes are tracked in exchange for a more efficient drive and less waiting at toll plazas. (Although it turns out that, at least in New York City, toll plazas aren’t the only place our EZ Pass devices get read.)

If you’re thinking to yourself “there ought to be a law about this!” I have bad news for you.  At least in the US, there are no laws, regulations, requirements, or even best practices specific to geolocation data. And likely, there won’t be any for some time. Consumers love their discounts and flying through automated toll plazas!

But that doesn’t mean that insurance companies, EZ Pass, or anyone else collecting geolocation data shouldn’t have good data privacy policies and practices. And it wouldn’t hurt to provide some solid customer education and notice covering not only what data is collected as part of these programs and how it will be used, but also how it could be used. Geolocation technology is being deployed much faster than technologists and privacy professionals can assess the ramifications, but as we sort through these issues and begin to figure things out, we should also be teaching consumers just what personal information they’re exchanging for discounts and convenience.


The iPhone 5s, Fingerprints, and Privacy

by Heidi Wachs  |  September 13, 2013  |  2 Comments

The truth about what a product can actually deliver lies somewhere in between the hysteria and the dismissive attitudes that accompany its launch. I believe the privacy implications of the fingerprint scanner arise from the collection and use of the fingerprint by Apple, third parties, or anyone else (law enforcement?). How well the fingerprint protects the data on the phone, by contrast, is a question of security controls.

Let’s acknowledge that this is not the first time devices have been sold with fingerprint scanning technology. My company-provided laptop seven years ago had a fingerprint scanner and you know what? It sucked. But this is the first time a fingerprint scanner is being incorporated into a device that will surely be this popular with consumers.

The newest member of the GTP IdPS team, Anne Robins, has some more information and thoughts on the technology behind the fingerprint scanner. Anne is such a recent addition that she doesn’t have a blog of her own yet, so I’m going to share some of her expertise here:

The Reader: The fingerprint reader has been cleverly located within the iPhone home button, which will make using the reader a very natural thing for users. A typical single-finger reader would have a platen of approximately 30 mm x 30 mm, but it would appear that the capture surface for "Touch ID" is more like 12-15 mm square. This affects accuracy (fewer data points available per capture) and usability (finger positioning becomes more important). Apple seems to be addressing the platen size issue at enrolment, with people reporting 20-60 seconds to enrol each finger as they are asked to move their finger around on the reader, almost doing a roll-style capture, to ensure that sufficient data is captured at enrolment. The downside of integrating the fingerprint reader with the home button will be the robustness and utility of the reader in common usage conditions: dirty hands, phones in pockets and bags, food and goo on the device (http://cals.arizona.edu/spotlight/why-your-cellphone-has-more-germs-toilet). Data from large-scale fingerprint deployments shows that system performance is very strongly correlated with the quality of enrolment AND the quality of ongoing captures (clean fingers and clean readers are a major part of this). I find it hard to believe that those clean conditions will hold in common iPhone usage.

Multi-fingers, Multi-user and Multi-factor: The iPhone 5s will allow up to five fingers to be enrolled, and those can be five of yours or single fingers from up to five different people. From a usability perspective it is certainly practical to have at least two fingers enrolled, just in case you injure one and it cannot be used on the reader. However, Apple also seems to be targeting the iPhone user who shares their device with family members. Enrolling your child's fingerprint to enable access could well be preferable to giving them your access code. However, if you have also enabled iTunes purchases using Touch ID (the other promoted use case for the fingerprint reader), you have now enabled your child to make iTunes purchases, including in-app purchases, without needing your iTunes password, all at the touch of a finger. A final comment here is that the addition of a fingerprint reader to the iPhone had the potential to improve security on the device, and you could argue that if some of the high proportion of people who don't use an access code at all switched to using Touch ID, that would be an improvement. But providing this as an either/or option (you can use your access code OR your fingerprint) misses the opportunity to offer an option requiring your access code AND your fingerprint, which would be a much stronger two-factor authentication option.

Tuned for Usability and not Security: When tuning a biometric system, the options are to minimise the False Negatives (the number of times the correct person is not granted access) or to minimise the False Positives (the number of times an unauthorised person is granted access). It seems highly likely that Apple will have tuned the Touch ID system to have very low False Negatives; this improves usability and reduces the likelihood of you needing to scan your finger multiple times to gain access. But the flipside of usability here is security. When your system is tuned for very low False Negatives, it will report correspondingly high False Positives, which means the chances of someone other than you gaining access to the device are increased. At what point does an impostor gaining access to your phone through a False Positive become at least as likely as someone guessing your access code (perhaps from the tell-tale smudges on the screen)?
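
To make the tuning trade-off Anne describes concrete, here is a minimal Python sketch. The score distributions and thresholds are synthetic, invented purely for illustration; they imply nothing about how Touch ID is actually tuned:

    import random

    random.seed(42)

    # Synthetic match scores: genuine attempts tend to score high, impostors low.
    genuine = [random.gauss(0.80, 0.10) for _ in range(10000)]
    impostor = [random.gauss(0.30, 0.12) for _ in range(10000)]

    def rates(threshold):
        """Return (false negative rate, false positive rate) at a given threshold."""
        fnr = sum(s < threshold for s in genuine) / len(genuine)
        fpr = sum(s >= threshold for s in impostor) / len(impostor)
        return fnr, fpr

    for t in (0.45, 0.55, 0.65):
        fnr, fpr = rates(t)
        print(f"threshold={t:.2f}  FNR={fnr:.4f}  FPR={fpr:.4f}")

Lowering the threshold drives the false negative rate toward zero, which is the usability-first choice, while the false positive rate, the security cost, climbs correspondingly.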


As for the privacy issues, here are some of the questions that came to mind as I combed through some of the media coverage:

  • Apple states that the fingerprint data is encrypted and stored locally, not in iCloud or anywhere else, but where on the device is it being stored? Is it being correlated with other things on the phone so that, say, if an app's data is accessed, there is a backdoor way of getting to the fingerprint? Is it associated with the IMEI or UDID?
  • Can using the fingerprint somehow eliminate anonymity? I know this is a stretch, but let's say the fingerprint is used for authentication to a service. The fingerprint can be directly, and uniquely, traced back to a specific human being (especially if, say, their fingerprints are already on file somewhere; fingerprints are collected and stored all the time, and I had to get fingerprinted at the local police station for a job at the YMCA in high school). What if someone used the fingerprint to authenticate to an app that indicates a health issue, where before it was a login or email address and password that could not be confirmed to be associated with a specific individual?
  • Setting aside some of these questions, I can see a use case where the fingerprint reader could be used to enhance information protection, but again from a security perspective. (Swipe once to unlock the phone, swipe again to access an enterprise container, and swipe a third time, or combine it with another factor, to access a specific document, file, or app within the container; see the sketch after this list.) This is certainly an increased level of protection for information stored on the device, or for apps that access sensitive information, compared with just the device passcode or a login and password combination.
  • Usability is paramount to the fingerprint scanner's success. If the fingerprint doesn't work efficiently enough, or feels like an extra burden, users will reject it. And if users are annoyed by it and can deactivate it, they create information protection risk when enterprise data is stored locally on the device.
  • Will the devices now be subpoenaed as a way of getting fingerprint evidence for criminal activity?
  • From a security perspective, will the fingerprint reader make the iPhone less of a crime target? If you have the fingerprint reader turned on, does that imply that there is no other way of unlocking the phone, or will either mechanism (a passcode OR the fingerprint) work? (And why has Apple still not announced that they will brick stolen/lost phones!?!?)
  • As a corollary to the above question, can you jailbreak the fingerprint reader? And if the phone is jailbroken, does that disable the reader or expose the saved, supposedly locally encrypted scan of the fingerprint?
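
The layered, swipe-per-tier idea in the third bullet above can be modeled as simple step-up authentication. This is a purely hypothetical sketch; the tier names and functions are my invention and correspond to no real Apple or MDM API:

    # Hypothetical step-up model: deeper tiers of data demand more verified swipes.
    REQUIRED_SWIPES = {"device": 1, "container": 2, "document": 3}

    class Session:
        def __init__(self):
            self.verified_swipes = 0

        def swipe_fingerprint(self, scanner_accepts=True):
            # In a real system this would be the biometric match decision.
            if scanner_accepts:
                self.verified_swipes += 1

        def can_open(self, tier):
            return self.verified_swipes >= REQUIRED_SWIPES[tier]

    s = Session()
    s.swipe_fingerprint()
    print(s.can_open("device"))     # True after one verified swipe
    print(s.can_open("container"))  # False until a second swipe verifies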

Some of these questions will be answered pretty quickly once the device ships, like the usability factor and how jailbreaking the device affects the reader. The privacy questions will probably take a little bit longer to come to light, but once they do, I have no doubt that they will garner a lot of attention.



Blurred Lines

by Heidi Wachs  |  August 21, 2013  |  5 Comments

Blurred Lines is not only one of this summer’s breakout songs (and the subject of a copyright lawsuit) but also an apt theme for the commingling of enterprise and personal data on mobile devices.

My current research explores the technical controls that can be successfully deployed to balance enterprise information protection needs with employee personal privacy expectations on mobile devices, as well as best practices for Corporate-Owned Personally Enabled (COPE) and Bring Your Own Device (BYOD) programs.  Interestingly, as I get deeper and deeper into the research I’m finding that many of my assumptions were just plain wrong:

  • Companies are not using solely one approach or the other; many seem to have a hybrid COPE/BYOD environment
  • Many organizations do not have mature, developed programs rolled out but rather are still in the preliminary planning/test phases

But that’s good, because that’s the whole point of doing thorough research. I’m learning a lot about this area including, perhaps most importantly, that the technology industry as a whole is only at the tip of the COPE/BYOD iceberg. And not just in the US, but across the globe. To complicate the subject further, both the devices and the available apps and software are constantly evolving. Talk about a moving target.

And that’s just from the technical perspective.  There are big privacy issues to be addressed here too.  How has your organization grappled with:

  • What happens when employees install apps with personal information or use corporate-provided file-syncing resources to store personal photos and documents on a COPE device? 
  • What if an employee opts in to a BYOD program only for both parties to learn after the fact that her geolocation data is now available to her employer?
  • Should organizations develop secondary privacy policies governing the data being collected through their BYOD programs in addition to their external facing privacy policies? 

There are many more questions than answers, but if your organization is discussing, or has resolved, any of these issues, I’d love to hear about it! For now, back to the research. Keep an eye out for the final product sometime this fall!


Get ready, get set…GO! It’s Catalyst Time!

by Heidi Wachs  |  July 23, 2013  |  Comments Off

A week from today, Gartner’s Catalyst Conference will be in full force. You might wonder, “what does that even mean?” A few years ago I had the opportunity to present at and attend Catalyst as an outsider, and what I remember about that week is a whirlwind of information: learning, discussion, presentations and, perhaps most exciting, healthy debate.

I am excited to return to the Catalyst stage this year representing Gartner’s Identity and Privacy Strategies team. I’m just a few months short of my one-year anniversary, but the research I’ve done during that time has built on my prior experience and given me that much more to share with attendees. My sessions next week include:

  • Saving Privacy in the Public Cloud (Monday, 7/29, 3:10pm): discussing how to address the privacy risks associated with moving enterprise data into the public cloud.
  • Roundtable Discussion on Privacy Risks in the Public Cloud (Tuesday, 7/30, 1pm): an opportunity for attendees to discuss where they are in the public cloud adoption process and their privacy challenges.
  • Workshop on Implementing a SSN Remediation Plan (Tuesday, 7/30, 3:30pm): a step-by-step tutorial on how to remediate SSNs across your organization (a flavor of the discovery step is sketched below).
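
As a flavor of what that workshop's discovery step involves, here is a minimal Python sketch that flags SSN-formatted strings in text files. The regex and directory are illustrative only; real remediation must also catch unformatted nine-digit runs, rule out invalid area and group numbers, and cover far more file types:

    import re
    from pathlib import Path

    # Matches 123-45-6789 style strings only; see the caveats above.
    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

    def scan_tree(root):
        """Yield (file, line number) pairs where an SSN-like string appears."""
        for path in Path(root).rglob("*.txt"):
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue  # unreadable file; a real scan would log this
            for n, line in enumerate(text.splitlines(), 1):
                if SSN_PATTERN.search(line):
                    yield path, n

    for path, line_no in scan_tree("/data/shared"):  # hypothetical directory
        print(f"{path}:{line_no}")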

If you’re attending Catalyst (and if you’re not, you should seriously consider booking your last-minute airfare now), take the opportunity to schedule some one-on-one time with me. We can discuss information protection and data privacy, or dive deeper into Social Security Number remediation and addressing privacy risks in the public cloud. I’d love the chance to hear about the challenges your organization is facing, and what you see coming down the road.

 


“Explain it to me like I’m a four-year-old…”

by Heidi Wachs  |  April 10, 2013  |  1 Comment

I attend a lot of conferences. I mix and mingle with technologists, educators, attorneys, and privacy professionals and I can’t tell you how many times I’ve heard them all say “we need a translator.” This may have been true as all of these fields collided, but in 2013 we shouldn’t need a secret decoder ring to communicate with each other.

In every project, meeting, or crisis I’ve confronted, a team of people with diverse backgrounds was required to bring the issue to resolution. Privacy professionals are often unwillingly thrust into the translator role, but I don’t think it should be one person’s job. Effective communication involves each person breaking down their own vocabulary so that things can be easily explained, regardless of whether the letters following the name on your business card are CISSP, CIPP, PhD, or Esq.

When I think about ways to overcome this communication barrier, I’m often reminded of the movie Philadelphia. Denzel Washington’s character, attorney Joe Miller, asks people throughout the movie to explain things to him as if he’s a four- or six-year-old. Children, and attorneys, don’t need to understand the intricate details of an ERP system, but they can understand that there is software on the computer that tracks how package A moves from location X to location Y. And children, and technologists, don’t need to understand the differences among all the various data privacy laws, but they can certainly understand that there is information about individuals that needs to be protected from misuse, and that when misuse happens, we have to tell people so they can protect themselves.

Changing the way we communicate overnight is far too daunting, so achieving this goal takes small steps. For starters, the next time you send an e-mail, take a moment to reread it before hitting send. The most powerful question to anticipate when preparing communications in these cross-functional situations is “why?” Don’t simply state that something needs to be done. Explain the cause and effect, the rationale.

Broad questions are often met with broad answers. Consider communicating with more detail, explaining why you need an answer. I’ve found that bullet points in e-mails can be extremely helpful in focusing the exchange. Find a partner in this process who works in a different field or part of the organization than you and develop your communications with each other. As you’re discussing things ask, “Was that clear? How could I have explained it better?”

The challenge here is not to expect one person to translate among all of our languages, but rather for each of us to choose our words more carefully when we’re collaborating. I know this is not an easy task. Putting this level of effort into communication does not come easily to most people. But technology, privacy, and the law no longer operate in silos. If you want to be part of the solution, you have to be able to speak to the rest of the team with words they understand.


Lessons from Amazon S3: Default isn’t a bad word

by Heidi Wachs  |  April 8, 2013  |  Comments Off

Default often has a negative connotation – what we settle for when there’s no better option. In information classification terms, we reinforce the idea of a default classification for things that don’t fit nicely into other categories. But in the case of Amazon’s S3 buckets the default is a good thing. So how did so many customers screw it up?

That’s right, this time it was the customers’ fault. A researcher from security firm Rapid7 discovered that one in six of Amazon’s S3 buckets is configured for public access. Not every object in those buckets is publicly accessible, but the names of the first 1,000 objects in each bucket are visible. In some cases, the objects themselves were also publicly accessible. These buckets contained objects ranging from benign to highly sensitive: pictures for social media sites, source code, sales data, and employee data. My colleague Kyle Hilgendorf explains the incident on his blog in a bit more detail and agrees: Amazon is not to blame.

Amazon’s default setting for the buckets, after all, is private. Somewhere along the way, the customers controlling these buckets adjusted the settings and switched them from private to public. While in some cases this may have been intentional, let’s hope that where a bucket contained employee data or source code, it was not.

The bottom line: when organizations move enterprise data to the public cloud, they must be vigilant about that data’s privacy. The lesson here is not that this data shouldn’t have been stored in the cloud, or that each of these customers is on the hook for a breach notification (although some of them might be). Amazon and other cloud providers offer a wide range of tools and controls for customers to protect the privacy and security of data stored on their servers. Amazon has developed a catalog full of documentation and guidance on how to use all of the security features. But the cloud providers can only do so much. At the end of the day, the customers bear responsibility for their settings.

So what can organizations do, or do better, to protect the privacy of their enterprise data in the cloud? Here’s a checklist to get started:

  • Determine who is the point person for each and every cloud deployment. If you don’t have a person in that role, designate one. Make sure this person is intimately familiar with the full suite of controls offered by the cloud provider.
  • Map the required privacy and security settings for each type of data stored in the cloud and align the controls and settings with each cloud provider accordingly.
  • Educate the entire user community on how to preserve the privacy of data in the cloud. Provide step-by-step guidance on appropriate data handling and settings.
  • Check and double-check the controls on a regularly scheduled basis to ensure that there haven’t been any unintentional modifications, such as switching from private to public (a minimal audit sketch follows this list).
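
The last checklist item can be partly automated. Here is a minimal sketch using the boto3 SDK that flags S3 buckets whose ACLs grant rights to everyone. It assumes AWS credentials are already configured, and it is a starting point, not a complete audit:

    import boto3

    # Grantee URI that marks an ACL entry as open to all users.
    ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

    s3 = boto3.client("s3")

    # Flag any bucket that has drifted from the private default to public.
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        grants = s3.get_bucket_acl(Bucket=name)["Grants"]
        public = [g["Permission"] for g in grants
                  if g.get("Grantee", {}).get("URI") == ALL_USERS]
        if public:
            print(f"PUBLIC: {name} grants {public}")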


Privacy Pros: A Work in Progress

by Heidi Wachs  |  March 13, 2013  |  Comments Off

The International Association of Privacy Professionals (IAPP) has only been around for 13 years. Compare that, for example, to the American Bar Association, which was founded in 1878, or the Institute of Electrical and Electronics Engineers (IEEE), which traces its roots back to 1884. But for a profession still in its infancy, there already seem to be some established “generations.”

I view the emerging generation as the fourth generation. The opportunities available for them as privacy professionals are unprecedented: undergraduate and graduate coursework, privacy-centric graduate degrees, fellowships, and internships with established privacy departments. But they face the same question that the generations before them faced: is privacy a viable career?

The first generation, the founders of privacy as a profession, are predominantly attorneys who pioneered a new field. They found creative ways to define privacy and established the position of Chief Privacy Officer, a high-level point person essential to preserving the integrity of data and preventing it from being inappropriately or inadvertently shared. The immense respect for these luminaries is evident among their fellow privacy professionals, but their career paths are varied and unique. There is no discernible pattern for a student to emulate.

A second generation “came of age” as federal and state legislators established a new set of data protection laws.  Privacy Officer positions increased throughout the public and private sectors and this second generation was ready, willing, and able to take on the challenges of privacy in the mobile and digital age.  This generation drew the outline for a career path in privacy and is eager to mentor those in their wake, always generous with their time and advice.

The third generation of privacy professionals, of which I consider myself a member, was the first to seek out privacy as a career. Our options have flourished: in addition to privacy counsels and privacy officers, we now have privacy analysts and engineers. But the path is still not well-trodden. We struggle to craft our resumes wisely and to map a long-term privacy career. When members of the fourth generation ask us for advice, we want desperately to help, but we are often at a loss, since we still rely so heavily on the second generation for advice and networking and obsess over making the right career moves ourselves.

So how do we help the fourth generation define themselves and, by extension, a traditional privacy career path? The IAPP can facilitate mentoring opportunities by bringing all the generations together as often as possible. As a community, we need to define what privacy career paths look like, from undergraduate study through retirement. Most importantly, we need to ensure that no generation rests on its laurels. In building privacy as a viable career, we must invite the fourth, fifth, and sixth generations to stand on our shoulders and continue to build on our foundation.


If I click my heels three times….

by Heidi Wachs  |  February 7, 2013  |  Comments Off

Time flies when you’re having fun, or so it would seem as I’m now entering my fifth month with Gartner and working on “finding my groove” as an analyst.

My fantastic GTP Identity & Privacy Strategies teammates wasted no time throwing me into the mix. I was on the road more than at home in November and December, which left me feeling a bit like Dorothy in the twister. The difference between Dorothy and me, however, is that what swirled around me were not cows, houses, and witches but vendor briefings, client dialogues, and research interviews.

In my previous life, I advised on identity-related projects from a privacy perspective, but now I am fully engaged in identity as well. The journey I’m embarking on with Gartner will take me down a new yellow brick road, and I even have a Wizard and a Glinda to help guide my way.

In the meantime, I’m focusing my research and writing on privacy, a subject in which I’m well-steeped. My first document, A Guidance Framework for Implementing a Social Security Number Remediation Program, was published on February 1st and is available to GTP subscribers. My next area of research explores how organizations can mitigate privacy risks in the public cloud, and how to rein in shadow IT that has already moved there.

If you’d like to join me on this adventure, you can check back here for updates or follow me on Twitter, @hlwachs.
