Happy Data Privacy Day! Or if you are in the EU – Happy Data Protection Day!
In my last post I talked about the desires of all the parties involved in this new style of relationship, one in which not only you and I are involved, but also your apps. One thing is clear – better controls are needed.
Controls for me
To attain the kind of control that I want, there are two things I need. First, I need a language to express my wishes with respect to information about me. I must be able to express myself easily even though the expressions may be complex. The manner in which I build these expressions must be miraculously usable. This is not a problem for developers to solve, nor is it a problem for privacy professionals to solve. This is a problem of big “D” design. Here designers and design firms such as Jonathan Ive, Charlie Lazor, and IDEO must be enlisted to tackle the problem.
Second, the language and methods used to express how I want information about me to be handled must not only be simple, they must be applicable in multiple contexts. A person will not dream up every way that information about them can be used in just one sitting. Thus, how someone wants their information used must be malleable enough to apply in new, unforeseen situations.
Controls for you
You want to obey social norms if for no other reason than that thousands of years of societal development have constructed a reward system for compliance. Simply put – the more trustworthy you are, the more trusted you become. And you want your apps to be as trustworthy as you are.
To accomplish this you need usable controls that govern how your apps use information about your relationships. You need to understand how apps use this information. You also need to know how apps build information through relationship-based interactions. Lastly, you need a means to control both relationship information use and construction. These controls must be usable, complete, granular, and non-invasive. Again, this is a problem of big “D” design.
One approach – trustworthy agents
Copies of information are bad. Copies of information about me lead to ownership issues. They lead to authenticity issues. They lead to freshness issues and they lead to disclosure issues. Instead of copying information about me, it would be far better for all parties to reference information about me.
But the question is from where should this information be referenced?
I believe the answer is – from a trustworthy agent. I provide the trustworthy agent with information about me and the terms under which this information can be shared. The agent then acts on my behalf. This agent is persistent and available. It can be consulted regardless of whether I am at my computer, on a bus in Bhutan, or on the operating table. This agent is a polyglot and knows how to express my information usage wishes to all parties. This agent is portable and can be consulted anywhere, in both online and real-world situations.
In a world of trustworthy agents, you still express your preferences about information sharing to your apps. When an app wants to use information about me, it first checks your preferences to see if that use is okay. Then the app is directed to my trustworthy agent, which in turn determines if that use is okay. Then, and only then, does my trustworthy agent share the relevant piece of information. In this way your wishes are respected, my wishes are respected, and the app developer’s need for information is respected, while minimizing the amount of information copied from place to place.
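The two-stage check described above can be sketched in a few lines of code. This is a purely hypothetical illustration – none of the class or method names below correspond to a real API; they simply model "check your preferences first, then consult my agent":

```python
# Hypothetical sketch of the two-stage permission flow.
# All names here are illustrative, not a real API.

class YourPreferences:
    """Your sharing preferences, as expressed to your app."""
    def __init__(self, allowed_uses):
        self.allowed_uses = set(allowed_uses)

    def permits(self, use):
        return use in self.allowed_uses


class TrustworthyAgent:
    """My agent: holds information about me and my terms for sharing it."""
    def __init__(self, info, terms):
        self._info = info      # e.g. {"email": "me@example.com"}
        self._terms = terms    # e.g. {"email": {"send-invitation"}}

    def request(self, field, use):
        # Share the information only if my terms allow this specific use.
        if use in self._terms.get(field, set()):
            return self._info[field]
        return None


def app_fetch(field, use, your_prefs, my_agent):
    # Step 1: the app checks *your* preferences for this use.
    if not your_prefs.permits(use):
        return None
    # Step 2: the app is directed to *my* agent, which applies my terms.
    return my_agent.request(field, use)


prefs = YourPreferences(allowed_uses={"send-invitation"})
agent = TrustworthyAgent(
    info={"email": "me@example.com"},
    terms={"email": {"send-invitation"}},
)

print(app_fetch("email", "send-invitation", prefs, agent))  # me@example.com
print(app_fetch("email", "ad-targeting", prefs, agent))     # None
```

Note that the app never holds a copy of my terms – it only learns whether a single, specific use of a single field is permitted right now.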
What is out there
So does such a fantastical trustworthy agent exist? … not exactly, but technology is beginning to catch up with our collective need for controls. First, User Managed Access (UMA) aims to provide a framework in which an individual can control access and use of information about them. Leveraging OAuth 2.0, UMA provides the kinds of controls that are needed. Second, the idea of a trustworthy agent, known more commonly as a personal data service, is starting to gain real traction. Projects stemming from ProjectVRM such as Personal Data Ecosystem and Mydex have begun to further explore how such data services could work. XDI has shown potential as the underlying protocol for such services. Again, a personal data service would be a way for your apps to respect my preferences: I’m happy and you and your apps appear trustworthy.
What isn’t out there
Unfortunately, your needs and the needs of the app developers aren’t addressed by either UMA or personal data services. In order to meet these needs, device and platform makers must build “concern for the other” into their products. This is a big “D” design problem that requires not just user-experience intelligence but also classically trained design expertise. Baking “concern for the other” into products can be used to gain a competitive advantage in a market. By acknowledging that referencing information, and pulling it from the source when needed, is superior to copying it, app developers have an opportunity both to mitigate their risks and to provide better controls.
Because societal norms are formed at social speeds, it will take some time before society can provide guidance when your app does me wrong. To assist us all, technology and design must be enlisted and the needs of all parties must be considered.
Furthermore, both you and I are at the mercy of platform providers. Your apps run on a social network platform whose provider may have different goals, values, and principles than either of us. Similarly, your devices are likely connected to communication platforms over which neither of us has a great deal of leverage.
While social and legal norms play catch-up, we need to make our apps as trustworthy as ourselves. Until concepts such as UMA and personal data services are more widespread, we are at the mercy of the paltry controls that app developers provide. Apps are not uniformly trustworthy, and their use of information far exceeds what is visible to and expected by their users. The reality, for the foreseeable future, is this – I like you but I hate your apps.
Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.