In the development of trust and privacy technologies, one of the goals has been to provide people with the means to protect their personal data: personal data is mine! This naturally led to the idea of personal lockers and personal data stores: if all of my data is in my personal locker, then I can decide who has access to it and under which conditions (for how long, to do what, etc.). Personal data stores rapidly became the Holy Grail for the most advanced actors in the field of personal data management, from ePortfolios to personal health records. Personal lockers and personal data stores helped us understand that an Internet based on a clear separation between the storage of personal data and the services that create and exploit it would revolutionise the Internet. Empowered users would be at the centre of an ecosystem they control. "The Semantic Web & the Power of Pull", by David Siegel, admirably describes the transformations one should expect from the systematic use of personal lockers.
However radical and transformative, personal lockers and personal data stores have their limits. One is to be found in the initial statement: personal data is mine! Data, the product of social interactions and processes, is generally shared with other people and organisations: I share the names of my parents, the review of a submitted paper with reviewers and conference organisers, the diagnosis of my illness with a doctor, a laboratory and a drugstore. Even my intimate thoughts can be shared when I commit a Freudian slip… If most data is shared with others, then we might want to rewrite the initial statement as: personal data is ours! Translating this statement into technology might lead to something radically different from personal data stores as personal information silos.
Wouldn’t it be wonderful if we were able to exploit the natural property of data as a connection to people (places, ideas etc.) while preserving the need for privacy, anonymity and enabling trust? Could Shared Data Stores or Shared Lockers be a solution?
Another problem with personal data lockers is in the name itself. If they are personal, that means they contain information that renders their owners identifiable. If they are lockers, it means there will always be someone ready to break in to steal data: who would be stupid enough to break into a safe if money grew on trees in public parks? And is there not a contradiction between aiming to create a trust environment while basing it on highly protected safes and lockers? In an environment I trust, I am not afraid to leave my wallet on the table… So, if personal lockers are not that safe, is the only alternative a choice between abandoning the idea of privacy altogether and developing technologies that build ever higher and thicker walls around our personal lockers? Is there an escape from an alternative that can only lead to an escalation in the development of distrust technologies?
Starting from the premises above, can we design an architecture that is at the same time natively social (data is ours!) and natively anonymous (I share my data but you cannot connect this data with the real me)? Anonymity is extremely hard to implement, which is why it should be a native feature and not an add-on, as anonymisation or encryption are.
Imagine that, instead of storing our data in personal lockers, we store it in Public Anonymous Data Stores (PADS). When I store a piece of data in a PADS, anonymously, I receive in exchange a key that allows me to edit it. Associated with this data is a kind of mailbox, so if someone wants to contact me, they can leave a message in the box. My data can be distributed over a number of PADS, and I am the only one who knows it is mine. For the rest of the world, my data is just a drop in an ocean of anonymous data.
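The storage contract described above can be sketched in a few lines. The class and method names below are illustrative assumptions, not a specification: a record is stored with no link to its owner, an edit key handed back at storage time is the only capability to modify it or empty its mailbox, and anyone may drop a message into the mailbox attached to a record.

```python
import secrets

class PADS:
    """A toy Public Anonymous Data Store (a sketch, not a real protocol).

    Records carry no owner identity. The edit key returned by store()
    is the only way to modify a record or collect its mailbox; the
    record id is a public handle anyone can use to leave a message."""

    def __init__(self):
        # record_id -> {"data": ..., "edit_key": ..., "mailbox": [...]}
        self._records = {}

    def store(self, data):
        record_id = secrets.token_hex(8)    # public, anonymous handle
        edit_key = secrets.token_hex(16)    # private capability, owner only
        self._records[record_id] = {"data": data,
                                    "edit_key": edit_key,
                                    "mailbox": []}
        return record_id, edit_key

    def edit(self, record_id, edit_key, new_data):
        record = self._records[record_id]
        if not secrets.compare_digest(record["edit_key"], edit_key):
            raise PermissionError("wrong edit key")
        record["data"] = new_data

    def leave_message(self, record_id, message):
        # Open to everyone: contacting a record never reveals its owner.
        self._records[record_id]["mailbox"].append(message)

    def collect_mailbox(self, record_id, edit_key):
        record = self._records[record_id]
        if not secrets.compare_digest(record["edit_key"], edit_key):
            raise PermissionError("wrong edit key")
        messages, record["mailbox"] = record["mailbox"], []
        return messages
```

Note that the store never needs to know who an owner is: possession of the edit key is the whole relationship between a person and their record.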
Putting personal data in PADS allows fine search granularity while respecting anonymity. Let's say that someone is looking for a professional in the region of Chablis (not far from where I live) who has some expertise in ePortfolios. The enquirer leaves a message in the PADS mailboxes of all the people who have declared that they live in Chablis, and of all those who have declared ePortfolio expertise. When people collect their mail from their PADS, only those who match both criteria are notified*. The enquirer does not know whether anyone matches the query until the target(s) decide to signal a match; and even then the target(s) remain fully anonymous.
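One way to read the matching step above: since a person's data is scattered over several anonymous records, a query broadcast separately per criterion will land in several of that person's mailboxes, and only the owner can see that the same query reached all of them. The records, predicates and query id below are invented for illustration.

```python
from collections import Counter

# Hypothetical anonymous records spread across PADS; r1 and r2 happen
# to belong to the same person, but nothing in the store says so.
records = {
    "r1": {"data": {"lives_in": "Chablis"}, "mailbox": []},
    "r2": {"data": {"expertise": "ePortfolio"}, "mailbox": []},
    "r3": {"data": {"lives_in": "Paris"}, "mailbox": []},
}

def broadcast_query(query_id, text, predicate):
    """The enquirer drops the same query into every mailbox whose public
    data satisfies one criterion, without learning who owns the record."""
    for record in records.values():
        if predicate(record["data"]):
            record["mailbox"].append({"query_id": query_id, "text": text})

text = "Looking for an ePortfolio expert near Chablis"
broadcast_query("q42", text, lambda d: d.get("lives_in") == "Chablis")
broadcast_query("q42", text, lambda d: d.get("expertise") == "ePortfolio")

def matching_queries(my_record_ids, required_hits):
    """Run by the owner after collecting their own mailboxes: a query
    that reached `required_hits` of their records matched every criterion."""
    counts = Counter(msg["query_id"]
                     for rid in my_record_ids
                     for msg in records[rid]["mailbox"])
    return [qid for qid, n in counts.items() if n >= required_hits]

# The anonymous owner of r1 and r2 sees q42 in both mailboxes: a full match.
print(matching_queries(["r1", "r2"], required_hits=2))  # ['q42']
```

Only the owner, who knows which records are theirs, can compute the intersection; neither the store nor the enquirer ever sees it.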
Of course, when we make a query we expect timely, if not instant, feedback. It is very unlikely that people will collect their mail at the same time, and even less likely that they will want to spend any time validating more or less relevant queries. We need something more, something able to take decisions on our behalf. A software agent or proxy could do the trick: when someone queries the Internet, it is the agents acting on our behalf that validate, or not, the visibility of a match. PADS + agents/proxies give us the power to control our visibility on the Internet.
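Such an agent could be sketched as a policy applied to the owner's collected mail: it decides, without human intervention, which matches to reveal through an anonymous reply channel. The policy and message shapes below are assumptions made for the sake of the example.

```python
def make_agent(policy):
    """A minimal personal agent: given the owner's collected messages,
    reveal only the matches the policy approves. The enquirer receives
    a reply but still learns nothing about the owner's real identity."""
    def handle(messages):
        replies = []
        for msg in messages:
            if policy(msg):
                replies.append({"query_id": msg["query_id"], "reply": "match"})
        return replies
    return handle

# Illustrative policy: only answer queries about topics the owner cares about.
agent = make_agent(lambda msg: "ePortfolio" in msg["text"])

inbox = [
    {"query_id": "q42", "text": "Looking for an ePortfolio expert near Chablis"},
    {"query_id": "q43", "text": "Cheap watches!!!"},
]
replies = agent(inbox)  # reveals only the ePortfolio query
```

The agent runs on the owner's side, so visibility is decided where the identity lives, not where the data is stored.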
Going back a few years, we first advocated that every citizen should have an ePortfolio, then that every citizen should have a personal data store. We would now like to explore how to provide every citizen with a personal agent or proxy operating on our behalf, in a space where our personal data is stored in PADS, and to ask the question:
To create a trustworthy Internet respectful of privacy, shouldn’t we simply make our personal data public?
It is one of the discussions that will run in the background of the 9th International ePortfolio and Identity Conference. You are welcome to contribute to it.