UK IGF Identity & Trust Workshop – 10th September 2013
This workshop was presented by the BCS and covered a number of areas related to the work of the BCS Identity Assurance Working Group. The speakers were:
- Andy Smith, BCS
- John Bullard, IdenTrust
- William Heath, mydex
Andy introduced the workshop and the work the BCS is doing in this area.
William gave a short introduction on the incentives to go online. He opened with the hypothesis that we are seeing the emergence of a new personal data ecosystem, one in which an individual’s control of their personal data will play a significant and valuable role. This will have many benefits, ranging from new business opportunities to better protection of human rights.
To do this we need incentives for three sets of actors: the individuals, the organisations that provide services over the Internet, and the new breed of application developers, such as those writing applications for iPhone and Android. Together they will form the ecosystem that will allow individuals to protect and manage their information.
At the moment the move to ‘Digital by default’ is being driven by cost reduction and the desire to provide better services. To achieve this, organisations need to be able to prove that people are entitled to the services they are asking for, through some sort of attribute verification. Coupled with this, application developers want a predictable environment in which to create their applications.
One method of doing this is to use a trusted third party to provide identity provision, linking the individual to the organisations via new applications and tools while ensuring privacy and data protection under the control of the individual.
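The trusted-third-party arrangement described above can be sketched in code. This is a purely illustrative assumption of how an identity provider might vouch for a single attribute without exposing the rest of a person's data: the function names, the attribute, and the shared-key trust anchor are all hypothetical, not a description of any scheme presented at the workshop.

```python
import hmac
import hashlib
import json

# Illustrative trust anchor: a secret shared between the identity provider
# and the relying party. A real scheme would use public-key signatures.
IDP_KEY = b"idp-relying-party-shared-secret"

def idp_issue_assertion(attribute: str, value: str) -> dict:
    """Identity provider signs a minimal assertion about one attribute,
    releasing only what the transaction needs."""
    payload = json.dumps({"attribute": attribute, "value": value}, sort_keys=True)
    sig = hmac.new(IDP_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def relying_party_verify(assertion: dict) -> bool:
    """Relying party checks the provider's signature; it trusts the
    provider rather than inspecting the individual's full identity."""
    expected = hmac.new(IDP_KEY, assertion["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, assertion["signature"])

assertion = idp_issue_assertion("over_18", "true")
print(relying_party_verify(assertion))  # True
```

The design point is that the individual's relationship is with the identity provider alone; the relying party only ever sees a signed, minimal attribute, which is the privacy property the speakers describe.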
John then looked at trust and liability. He started with the question “what is meant by identity?” He stated that it means “you have absolute certainty you know who you are dealing with”: you can check and validate that this is the case, and someone guarantees that the person asserting the identity really is who they claim to be. There is also the need for a clear resolution process in case things go wrong.
He explained how the parties interrelate and operate, making the point that in the global world of the Internet it is not possible to have a single organisation that provides identity services or a guarantee of trust for everyone. You need a third party to provide identity assurance and validate a person’s identity, but you also need a method of doing this globally.
The finance industry has been doing this for many years. There is a trust model that works across the financial sector globally: institutions have to perform “know your customer” checks, and for financial transactions to work they have to be sure who all the parties in a transaction are.
He hypothesised that this capability could be used to provide the same capability for use of identity in any other transaction, with a trust model and liability model that is already in place today being expanded in scope to cover other areas where trust in an identity is required.
He emphasised that this would have to be done via the regulated financial industries, as they already have the legal and regulatory models in place and can implement the necessary validation and liability models for use of identity in the future. This would include any regulated financial institution being able to validate an identity to any relying party via another regulated institution that the relying party trusts.
This does not need a global regulator, only that the regulated bodies trust each other, which they do today. It would require new rules and governance structures to take this model into the Internet era, where you are not just talking about financial transactions but also about other transactions, with a liability model based on assured identity.
Andy then talked about the value of identity. Initially he covered the point that a person’s identity attributes have value. Even though people think they are getting free ‘stuff’ on the Internet, they are actually paying for it by giving away identity attributes and information about themselves, which can then be used for targeted marketing, sold or data-mined for various purposes.
The information about who you are, what you buy and where you shop is all collected and used to support business on the Internet. However, if the large organisations could not do this and did not have access to such information to drive their business models, they would have to find some other way of funding the services and software they provide on the Internet. A simple example is software for Android smartphones, where there is usually a free version that contains advertising and a paid-for version that does not. Other examples are social media sites and search engines.
He stated that what we require is to ensure that this stays in balance: data protection and privacy must not become so onerous that they disrupt the funding of the Internet, but equally the collection and mining of personal information must not invade people’s privacy or become uncontrolled. In the worst case such activity could become dangerous, with people being targeted for nefarious activities.
He made clear that this is a balancing act, and at the moment it is not in balance. If large organisations that provide popular services could not collect and sell personal information or use it for targeted marketing, those services would either become expensive or disappear. He then went on to show a number of forms used to collect information online, with examples showing that organisations are already collecting far more personal information than they need to offer their services, which is counter to the principle of a right to privacy.
He then made the point that filling in forms online with lots of personal information can be dangerous if you do not have a machine with good anti-virus software, as a keyboard logger on the machine could collect that information and send it to someone who could then steal or misuse your identity.
The final point covered the issues around aggregation and data mining: because electronic databases are easy to search and cross-correlate, it becomes much easier to build up a picture of someone’s life, or even to discover information such as their name and address from other attributes about them.
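The aggregation risk described above can be shown concretely. This is a hypothetical sketch with invented records: an “anonymised” data set that retains quasi-identifiers (postcode, birth year, sex) can be re-identified simply by joining it against a second, public data set that shares those attributes.

```python
# Invented records for illustration only: names removed from the first
# data set, but quasi-identifiers kept.
anonymised_health = [
    {"postcode": "SW1A 1AA", "birth_year": 1970, "sex": "F", "condition": "asthma"},
    {"postcode": "EC1A 1BB", "birth_year": 1985, "sex": "M", "condition": "diabetes"},
]

# A second, public data set, e.g. an electoral roll or profile dump.
public_register = [
    {"name": "A. Example", "postcode": "SW1A 1AA", "birth_year": 1970, "sex": "F"},
    {"name": "B. Sample", "postcode": "EC1A 1BB", "birth_year": 1985, "sex": "M"},
]

def reidentify(health, register):
    """Cross-correlate the two data sets on the shared quasi-identifiers,
    re-attaching names to the supposedly anonymous records."""
    keys = ("postcode", "birth_year", "sex")
    index = {tuple(r[k] for k in keys): r["name"] for r in register}
    return [
        {"name": index[tuple(h[k] for k in keys)], "condition": h["condition"]}
        for h in health
        if tuple(h[k] for k in keys) in index
    ]

print(reidentify(anonymised_health, public_register))
```

Even this trivial join recovers every name, which is why removing names alone does not anonymise a data set if other attributes remain.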
He then introduced the discussion section where questions were taken from the audience.
The first question was about what topics would be covered at the UN IGF. Andy explained that the BCS is running a workshop on the first day, and the topics covered during this UK IGF workshop, including input from the discussion session, would be fed into that workshop.
Another questioner observed that some of the statements made during the talks, such as the quote “if the product is free, you are the product”, were a bit sweeping. The example given was the BBC News website, which is free yet does not collect personal information or carry advertising. However, the point was made that the BBC News website is not actually free: it is paid for by the TV licence fee, which answers the question of why there is no advertising on the website.
A point made was that the collection of unnecessary information is already covered by data protection legislation, and we do not need more laws, just better enforcement of the current ones. William made the point that we want an online economy based on an honest premise, and people’s rights and protections should be fundamental to this.
The next question was around online jurisdictions and the models that had been discussed: if lawyers got together and agreed a model, how would this work? John pointed out that it would be based on the law of contract, with different layers providing a local perspective for users or organisations but covered by a global contractual model similar to those currently used by Visa and Mastercard. The only contractual relationship the person would have would be with their identity provider.
There was a discussion on the UK Government GDS identity trust framework, which is a good concept and has gone a long way towards striking the balance between providing assured identity and providing only those attributes needed for a transaction. This reduces secondary use of personal information, especially where the ability to make such use is hidden in long, complex online privacy statements.
The next contributor used to work for a newspaper and commented that when people signed up for an online subscription, all of that data was collected and sold to marketing companies; each person’s subscription data was worth about 12 pence. The question was: is there a way that this value could be split between the organisation and the individual? William, who had also worked in this area, said he understood the model and it was possible to share the value, but this was not the individual’s primary motivation; what they wanted was the subscription.
A company in the US tried this as a business model, but it did not work: once individuals realised their data was valuable, they wanted to retain the whole value. William also made the point that the value is not just in the personal information attributes; it is in a person’s preferences, what they like, where they shop and so on. This information has value for targeted marketing.
There was then a discussion on supermarket loyalty cards and the pros and cons of these. At least with such loyalty cards people know who they are sharing the information with and for the most part what it is being used for. They expect targeted marketing from the supermarket.
There was a short conversation on new application types such as heart rate monitors, which can record a person’s heart rate over extended periods and store this information online. This information is also being sold, supposedly as anonymised data, but in some instances personal attributes have been included in the data sets, allowing the individual to be identified through data mining. These are the sorts of accidental secondary uses that need to be better controlled.
Andy then introduced one of the topics that will be covered at the UN IGF, which is the balance between security, privacy and anonymity. This elicited a useful discussion and, as usual, strong opinions on both sides of the argument.
The point was made that security and privacy are actually mutually supporting and are both good things. It is anonymity and its ability to support nefarious activities that is the bad thing. Andy pointed out that the underlying problem is that there is too much personal information on the Internet and once something is published it is virtually impossible to redact or remove it. This means the Internet is a huge data warehouse that can be mined. He stated that we need to improve privacy online, but that does not mean we need to make things anonymous.
The discussion went on to identify that anonymity and privacy are very contextual, depending on the transaction in each case. The view was that this whole area is far more fragile and nuanced than current discussions acknowledge. Much more debate is needed, moving away from arguments about what is good and bad towards a more nuanced view of the context around the use of anonymity and privacy and how they interrelate. There was agreement that there are very strong opinions both ways and this will not change any time soon.
A comment was made that attribution is something that needs to be taken into account here. Anonymity and attribution are interrelated and can be used to improve the balance. Being able to attribute an action to a person may be necessary in one context such as solving a crime, but this may not be needed in general use in which case the attribution could be anonymous.
William pointed out that a good challenge had been set by one of the audience to be taken to UN-IGF, and that a simple proposition should be put forward that users should have more control of their personal data and governance of the Internet should address this specific issue. This does not stop business exploiting personal information, but it would be more under control of the individual.
Andy made a comment about the scale of the Internet and the fact information is virtually never deleted. This makes aggregation and data mining all the more effective and dangerous. He asked the audience for their views on the ability to withdraw consent. Everyone agreed that this was a good idea, but the practical aspects around this would be very difficult to implement.
William’s point was that we now have a totally organisation-centric structure on the Internet, built up over many years. We need to start thinking about user-centric data models and move to a more balanced view, with individual-centric aspects seen as just as important as organisational ones. There is the ability to get copies of all the information held about you, but there is currently no way to enforce the removal or redaction of personal information online. Another point was that, given all the copies, caching and archiving, it would be very difficult to implement data removal.
There are also other aspects to consider, such as legislation covering know-your-customer and record retention, which may prevent removal. However, postings on social networks and newsgroups should not retain information that has been deleted by the individual.
The discussion moved on to the ability of online organisations such as social networks to change their privacy policies without users’ consent. Andy answered this, making the point that one social network he had been a member of kept changing its privacy policy; the last change meant that it owned all your photos, at which point he removed his account, which was the only choice other than agreeing to the policy.
However, most people will not have read the policy and will not realise that all of their pictures are now owned by the social network. Most teenagers may not care about this today, but they may in the future when such pictures affect their livelihood or ability to get a job. There is a whole area that needs to be addressed around protecting the naïve from themselves.
William made the point that there is a dichotomy for some organisations as they are stewards of personal data on the one hand, but have an obligation to maximise profits for shareholders on the other which can lead to a conflict of interests. This means in some instances they cannot be regarded as responsible stewards of personal data. Even if they have the best intentions they may in the future be forced to sell the personal information as an asset of the organisation.
The last point made was that privacy can be thought of as security by obscurity: for the most part it prevents access to the information, but where access is required, such as for law enforcement, it can be obtained, and most legislation, such as the Data Protection Act, has clauses to allow for this. This also means that anonymity online is extremely difficult to achieve, as everything from the end IP address onward is recorded somewhere and can be obtained with the relevant authority.
At that point the discussions were closed.