
  • MR RICHARD BEECROFT ALLAN (affirmed).

  • Mr Allan, good afternoon. Could you tell us your full name, please?

  • My full name is Richard Beecroft Allan.

  • Are the contents of your witness statement true and correct to the best of your knowledge and belief?

  • You tell us that you are the director of public policy for Europe, Middle East and Africa for Facebook?

  • You're responsible for the company's involvement in matters of public policy across the region, including the United Kingdom. Your team works on a broad portfolio of issues, including privacy, online child safety, freedom of expression, e-commerce regulation and public sector uses of social media.

    Before joining Facebook in June 2009, you were the European government affairs director for Cisco and you've been an academic visitor at the Oxford Internet Institute.

    You also between 2008 and 2009 chaired the UK Cabinet Office's power of information taskforce, working on improving the use of government data. You were the Member of Parliament for Sheffield Hallam between 1997 and 2005, and you were appointed to the House of Lords in 2010.

  • Can I ask you now a little bit about the product Facebook. You tell us that in essence the company develops technologies that facilitate the sharing by individuals of their information through what you call the social graph, the digital mapping of people's real-world social connections. Anyone over the age of 13 can sign up, but you wish to emphasise that Facebook does not itself produce the content that is shared via the service.

    You tell us a little bit more detail about the platform. It's made up of core site features and applications. Fundamental features include a person's home page and timeline. There is also a news feed -- it's not news in the sense that we may have been using it, but this is news about a user's friends and what they are posting?

  • About their activities. And the application also has photos, events, videos, groups and pages, which are ways of connecting one user to another.

    There are various other communication channels: chat, personal messages, wall posts, pokes or status updates. Is that right?

  • There is a development platform, which enables companies and developers to integrate their own applications and services with Facebook, and you tell us that the net result of offering these services is that there are 800 million active users globally, including some 30 million in the United Kingdom alone, and that that number is not just people who have had accounts, but people who have returned to the site in the last 30 days?

  • Well, what percentage of the population is that -- I was about to say that's one in two, but it's more than one in two, because, of course, you can only get into it when you're 13.

  • It is, sir, yes. For the population aged 13 plus, it's more than 50 per cent of the UK population.

  • Facebook employs 3,000 people worldwide. A lot of private and public sector organisations use Facebook services. For example, you tell us Facebook partnered with the Electoral Commission in the run-up to the last General Election in this country to encourage young people to register to vote, and the monarchy made extensive use of the service during last year's royal wedding celebrations.

    It's a service which is free to use at the point of use, and funding is derived mainly from advertising, but there is also supplementary revenue from the sale of Facebook Credits.

  • Moving now to the corporate structure, Facebook's international headquarters are in Dublin, and the global headquarters are in Menlo Park, California. Can you help us with the distinction between international headquarters and global headquarters?

  • Yes, the headquarters operation in Dublin consists of around 400 people carrying out a very broad range of functions, including those which are directly related to users. Anyone using the service outside the US and Canada has a contract with Facebook Ireland for the delivery of that service to them. Then Facebook Ireland in turn has a number of subsidiary offices around the European Union. Of particular relevance here, it has an office in the UK, which provides a much more limited set of functions, primarily related to marketing and sales support.

  • Indeed, you explain that in your witness statement, that Facebook UK Limited is really a small and supporting operation, and that the user is actually contracting with Facebook Ireland Limited?

  • Having dealt with the product in outline, and the corporate structure, can I ask you, as I did with the witnesses from Google, a little bit about Facebook's approach to privacy in principle, please. Can we start with the document at tab 11 of the bundle. It's an article published by the Guardian on 11 January 2010, so just over two years ago, reporting the words of the Facebook founder, Mark Zuckerberg, and he was saying that he thought that privacy was no longer a social norm. He's quoted as saying -- I'm looking at the third paragraph:

    "People have really gotten comfortable not only sharing more information and different kinds, but more openly and with more people. That social norm is just something that has evolved over time."

    Can I ask you: what is Facebook's approach in principle to the privacy of information?

  • So Facebook has created a platform whose express purpose is to allow people to connect with other people, be that family or friends or organisations of interest to them, and then to share information with that group of connections. So our core raison d'etre is to give people the ability to share personal information with others. But crucial to that is the notion that the individual controls what information they're sharing and who they may share it with, so they control both the content and the audience. So for us, privacy is a notion which is very much at the heart of what we're trying to do, but very much a notion that's allied with that concept of control.

    I guess we would contrast it with a notion of secrecy, keeping information entirely to yourself and not sharing it with anyone, where clearly a platform like ours is of no use to somebody who's not interested in sharing information with a group of people. So it's very much about sharing what you wish to share with the group with whom you wish to share it, and that's articulated when you use the service by a set of very clear controls.

    If I go on to the Facebook site, I'm offered the ability to share whether it's a photo or a textual comment or a link to something else and right in front of me is a little icon that says, "Do you want to share this with the whole world? Do you want to share it with all of your friends or do you just want to share it perhaps with a subset of them, your family or your closest friends?"

  • Thank you. As with a lot of very large media companies, privacy has been a controversial issue for Facebook, and if I could take you to the last tab in the bundle, tab 12, we have an article there dating from February 2009, which reports the interest of the American regulator, the FTC, in changes to Facebook's privacy policies, which rather widened the uses to which Facebook could put information.

    Can you help us, please, with what the outcome of that FTC involvement was?

  • I'm pleased to be able to tell you that we reached a settlement with the FTC in November of last year, with a series of undertakings that we agreed to with them to ensure that, for example, we have clearly defined privacy officers both on the product side and the policy side within the company, that we will report regularly back to the FTC on what it is that we're doing, and that, for example, we will undertake certain forms of engagement with our users beyond those which we already do and which are very extensive, when we make certain forms of changes to the platform.

    So that agreement is there with the FTC, and I think it does -- I mean, the fact that this happened reflects the fact that as a platform Facebook is under an enormous amount of scrutiny, and that huge user base of 800 million users means that people are very willing to come forward if they have concerns or criticisms about the platform, and I would say equally we're willing to meet them and to try and find an agreement and a settlement.

  • In addition to American regulation, of course, Facebook Ireland is subject separately to the regulation of the Irish authorities, and in particular their data protection commissioner. You have in the bundle the report of a recent audit, dated 21 December 2011, into Facebook's activities from a data protection point of view. Is that right?

  • I won't go into the details of that just at the moment, but to continue with the legal theme, it's right, isn't it, that Facebook, like Google whom we heard from, tries to comply as a matter of policy with the laws of the lands where it operates?

  • Can I ask about the way in which the agreement between the individual user and Facebook Ireland works? As I understand it, at the core of the agreement is a statement of rights and responsibilities, and this sets out what it is that the user is promising to do and not to do. This is exhibited to your witness statement, and perhaps if we look at page 12, following the pagination at the bottom of the page --

  • 54812. Is that what you mean?

  • I see. So 12 on the internal numbering?

  • There's a section of the statement of rights and responsibilities. Paragraph 5, "Protecting other people's rights":

    "We respect other people's rights and expect you to do the same."

    I won't go through all of it, but perhaps I could pick up on number 1:

    "You will not post content or take any action on Facebook that infringes or violates someone else's rights or otherwise violates the law.

    "2. We can remove any content or information you post on Facebook if we believe that it violates this statement ...

    "8. You will not post anyone's identification documents or sensitive financial information on Facebook."

    Does that give you the contractual underpinning to remove illegal material?

  • That's precisely the purpose of those clauses, yes.

  • There is a range of other material, sir, set out in our community standards, which is a separate document covering areas like nudity and pornography. So nudity and pornography that would otherwise be legal in many jurisdictions will be removed from Facebook as a matter of policy, because we don't want that material on the site. So it does go way beyond the illegal into other forms of content that are simply regarded as unsuitable under our terms for the audience that we have.

  • So that might include, for example, bullying?

  • Precisely, and there is -- you're right, sir, there are specific clauses on bullying and harassment, nudity and pornography, excessive violence, hate speech and other forms of content which we would regard as unsuitable for what we have, which is a general audience, 13 plus, across multiple cultures and jurisdictions.

  • Because the problem is, of course, that unlike Google, which only provides you with an index -- and I don't intend to belittle the importance of an index -- you are hosting content and to that extent have a responsibility not for the content, because you're not putting it on and you haven't got the people to read it all, but you have some measure of control over it.

  • That's correct, sir, and I would say, and I think it's hopefully clear from the evidence we've given, that we fully accept that responsibility and have taken the necessary measures to make sure we can discharge it.

  • If we turn back one page from the section we were looking at in the exhibit, to section 3, we have the safety section, which contains many of the prohibitions to which you've just referred. 6 is the prohibition on bullying, intimidating or harassing any user, 7 deals, amongst other things, with pornography, violence or threats, and 10:

    "You will not use Facebook to do anything unlawful, misleading, malicious or discriminatory."

  • An important feature of Facebook is that you have to use your real identity, don't you?

  • In the bundle -- we needn't turn it up -- is an example from a PCC report of a case in which a reporter had used a false identity to create a Facebook account. Can I take it from the terms that we've just looked at that that would be a breach of the Facebook terms and conditions and, if you'd been aware of it in advance, the account would have been closed?

  • Absolutely. If I can elaborate on that just a little, the real identity culture is at the core of what Facebook has done. You may be aware that there are a wide range of services on the Internet that offer a similar functionality, that people can connect with each other and form into groups. We believe that Facebook has been so successful precisely because it has enforced very robustly a policy that says: if you're coming on the platform, you must present yourself as yourself, so that when others engage with you, they can have a reasonable confidence that you are who you say you are. That means that people typically have 100 or 200 connections of people they know in the real world, and a much richer engagement, we think, than they would have on many of the other spaces on the Internet where you're talking with people operating under pseudonyms, made-up names --

  • Most of the verification we get is precisely that social verification. If you come onto the platform and don't present yourself under your real identity, you don't have a meaningful experience. Conversely, if you do present yourself under your real identity -- if, for example, you connected with me -- I would be able to see that you have an ecosystem of friends and family around you, and therefore have reasonable confidence that you are who you say you are. If you had no friends at all, or simply a random set of friends, then I would have a lot less confidence that you were who you said you were.

  • Do you have mechanisms available to you -- I'm not going to ask what they are -- to check up on that sort of thing?

  • We have a security team who are constantly looking for the people trying to get around the system, and indeed, in many of the hard cases we might be looking at, the sort of people who are carrying out malicious behaviour will use fake identities quite deliberately because they feel less accountable for doing so.

    So we have systems precisely to try and pick that up because we don't want those people on our platform, we don't want those identities on our platform. Yes, there are some systems in place, and we actually find the strongest protection, again, is that community of users. We effectively have an 800 million strong Neighbourhood Watch community of people who will very happily report to us if they think someone is a fake identity or behaving strangely.

  • Since Facebook took on the policy of real identity and enforced it rigorously, has there been any discernible change in the amount of objectionable content that's been posted and had to be removed?

  • Just to be clear, real identity has been at the core of what Facebook's done since the beginning, and we firmly believe that that's why the typical user of Facebook can be using it day in, day out, month in, month out, and never come across objectionable content. It really is a rare experience to come across content that is problematic on the Facebook platform, and that's because most people feel accountable. When they do something on Facebook, it's literally in front of their friends and family, and therefore -- people will overstep the mark, but they're much less likely to do so.

    What we've also found with our partners is that this has been one of the reasons Facebook has been taken up to such a high degree. For example, many newspapers now will use Facebook identities for people wanting to comment on the site. So you read an article, and instead of commenting as "Angry of Tunbridge Wells", you now comment as Richard Allan, and they have found that people commenting under their real identities will engage in a better discussion than they did as Angry of Tunbridge Wells.

  • Can I ask you now about what mechanisms there are for dealing with posts which readers and users find objectionable? I understand there are various mechanisms, so perhaps we can deal with them one at a time.

    First of all, if we deal with the horizontal controls, if I can call them that, between users: it's right, isn't it, that there are mechanisms for one user to object to the post of another directly and ask them to remove it?

  • That's right. We've created a system called social reporting, really for two reasons. One is that, being very conscious of the scale at which people are posting phenomenal amounts of content, you're always looking for the most effective way of resolving a dispute. So we have mechanisms where, if somebody posts a photo of me, I simply let them know, and in most cases that will resolve the dispute. You don't need to escalate it to Facebook or to a regulator or to a court if we make it very simple for people to fix things between themselves.

  • So this is as simple as "I don't like that, please will you take it down"?

  • "Please remove it". And the second part of that is people do learn, and if I tell somebody that I don't like them posting photos of me, hopefully they're going to stop posting photos of me in future, because they'll have learnt from me. Whereas if an anonymous source simply removed that content, they may never get the message that it's me who's upset about it.

  • And is there a function for allowing an intermediary to get involved in the user-to-user disagreement?

  • Precisely. We've also recognised that in some cases -- and you might think particularly of instances of bullying of a younger person -- it would be helpful to bring in a teacher, a parent or some other trusted adult and make them aware of the dispute, because you need to resolve that dispute between individuals in a physical space; you can't resolve it just online. So the social reporting feature also allows you to say, "Please send this report to a third party because I want to get them engaged in my dispute".

  • There is though still an option to go straight to Facebook, isn't there, and complain about content?

  • Exactly. So there are reporting buttons right across the site and this design essentially tries to deal with it in a tiered way: resolve it between yourselves if you can, perhaps escalate to somebody else if that's appropriate. If the dispute is still going on, then escalate it to us and we can remove the content or remove the user, and of course in very extreme circumstances you may wish to escalate it to the public authorities in your country because there's something that requires their intervention.

  • Just to be clear, does the user have to start at the bottom or can the user go straight to Facebook?

  • They get the choice. They get offered the different options, they can come straight to us if they choose to do so.

  • When a complaint comes to you, whether it's after a failed attempt below or direct to you, what test does Facebook apply to the post of a UK user in deciding whether or not to take down the content?

  • The primary test is conformance with the statement of rights and responsibilities, and we actually find that most of the incidents reported to us -- even including many where there may be an allegation of illegality -- are generally resolved because of some other breach of the statement of rights and responsibilities. Somebody may be using a fake identity to post the information, there may be nudity or pornography involved, or there may be forms of hate speech that are unacceptable under our terms, so the situation can be resolved, if you like, by reference to the statement of rights and responsibilities rather than requiring a technical legal analysis.

  • Sorry, carry on.

  • I was going to say that for cases where it's clearly a question of illegality or legal compliance in the UK specifically, we would apply the test, I think similar to many other companies, of saying: if it's not in conformance with UK law and it's been posted by a user in the UK, then that user has breached our terms of service by making that posting, and then we'll take the appropriate action.

  • You've explained the tools, the weapons in your arsenal, if I may put it that way. One is simply to remove a post. At a more serious level, you can prevent a user using the system at all. There's a third way, blocking content. You are technically able, if needs be, to block certain content to certain destinations; is that right?

  • Yes. I think it's perhaps important to understand the distinction between services that use different national domain names to create different national entities -- as you heard in evidence earlier that Google and some other service providers do -- and Facebook, which is a single global community. It's designed so that I can speak with my cousin in the United States, so it makes no sense to have a UK Facebook and an American Facebook. There is one Facebook.

    Given that we have that structure, that design goal, to have a single global community, there are sometimes exceptional circumstances where we get a report of content that is illegal in one jurisdiction and not in others, and there are technical means available to restrict the access to some of the content on Facebook on the basis of the person who is viewing it. It's not something we do by preference, and as I say our experience is that it's not something that we commonly have to do, because most of the breaches are breaches of our terms of service that are global breaches and therefore actionable globally.

  • I was talking earlier about the situation where one user is objecting to the material posted by another. Can I ask you now about the situation where a non-user, a third party, learns that objectionable material has been posted by a Facebook user. How does such a person complain to you about that?

  • So we offer an extensive help centre on the service, which contains material directed both to people who use the service and to people who don't. They can go there and carry out searches on some of the common terms you might think of, like defamation, invasion of privacy and so on, and they will find material that directs them towards getting help. Typically they may need to use a web form in order to report things, because as non-users they can't report it directly.

    We also find in practice that again because of the large number of people now using Facebook, that in practice people will simply find somebody else who is a user of Facebook and get them to report it for them.

  • Does that third-party reporting system allow complaints of defamation and privacy invasion to be made?

  • So we have a generic reporting form designed to allow people to give us notice of potentially illegal content, and the kinds of things they give us notice of are typically a combination of intellectual property violations -- copyright, trademark, et cetera -- and issues like defamation and invasion of privacy. So there is a form available on the site that people can use to report content that they believe is illegal and to put us on notice of it.

  • The Inquiry has heard some evidence about the speed at which new media companies are able to deal with complaints, and complaints that they're not dealt with quickly enough. Are you able to help us with how quickly Facebook is able to turn around complaints of privacy and defamation made by UK users?

  • Yes. In common with what you'll hear from companies generally, we will operate a system where we can't entirely control the inputs, because they will be responsive to particular pressures at a particular time, but we do have some targets and I checked with the legal team who deal with this class of violation, the material that comes in as a form of notice, including defamatory material, and their expected turnaround time is 24 to 48 hours. That's what they aim to do.

  • We heard from the Google witnesses that they have lawyers adjudicating on whether or not material is defamatory and making decisions as to whether or not it should be taken down. Do you do the same?

  • We have teams both in Dublin and in our California offices who are a combination of lawyers and non-lawyers. Our front-line staff are known as our user operations staff, and we train a set of those staff particularly in these kinds of violations. In many cases it can be fairly obvious: a trademark or a copyright violation, for example, can be very straightforward, as can some forms of defamation, particularly where the case is well known. Those staff are trained to identify and deal quickly with the cases that are obvious, and then are able to escalate to, if you like, the more legally trained staff, and even through to outside counsel, if necessary, for very specific cases where there's some area of contention or doubt.

  • Can I ask you now about more complicated cases? Take, for example, a photograph which is a gross invasion of privacy, which goes viral throughout the Internet, but including very many Facebook users. If you received a complaint about such a photograph from a UK user, what can Facebook do about that?

  • The system that we operate is a notice and take-down system, and the notice relates to a specific item of content on the site rather than to, if you like, a generic piece of content. So again, I think similar to a response you may have heard elsewhere, we don't have in place a system that allows us to say this photo should be removed from every place on which it occurs on the site, but we do have in place reporting links on every photo on the site, so people can report them individually.

  • For a photograph that has gone around thousands or even millions of users, that means that the subject of the intrusion has to make, if it's a million copies of the photograph, a million separate requests for it to come down. Is that right?

  • I think that's correct for the Internet generally, and yes, correct for Facebook.

  • Lord Allan, could you speak a bit more slowly, please, because we're trying to keep a track of it.

  • Does that mean for all practical purposes that there are some viral transmissions of images or texts which, once out there, are almost impossible to put back into the bottle?

  • I think practically on the Internet, yes. There is a much broader debate, shall we say, on the Internet, of which the issues before this Inquiry are very much a part, around how one stops content of all sorts -- content that is either grossly illegal or, for example, copyright-infringing -- from spreading across the Internet. I think this is a common challenge faced in all of those debates: the ability to copy digital material instantaneously does represent a new set of challenges.

  • Has this debate reached any conclusion?

  • It hasn't, if I may say so respectfully. I mean, it is an incredibly fierce debate, particularly around the copyright area. I'd say that's where it's become most advanced, and there are huge debates in many countries around the world about how to deal with it.

  • It's not just written copyright, of course; it's also -- one knows about music, films, the rest of it. Everything.

  • Precisely. I think that's where, if I can suggest it, there may be some interesting material for your Inquiry, because those debates are looking at similar issues, like how one stops a particular film clip, or a photo, being copied across the Internet. Some of the technical issues and the philosophical issues -- what is the responsibility of the person who posted it, what is the responsibility of the intermediary, how do we prevent this without adversely impacting freedom of expression -- I think some of those debates are consistent with some of the issues that you're examining.

  • The other problem is that you may get a book or an article, but if they're copied in just a slightly different form, you have to have some sort of mechanism to identify them, which I would have thought quite difficult.

  • Precisely. That's another area which again has become very current in these broader debates: does one simply create an incentive for the clever technologist to find a technological work-around of a regulatory measure designed to prevent something? In trying to get to the right solution for creating good order across the Internet, I think all of these factors are relevant.

  • Is there any guidance given to users which might inform them about when they should think twice before further disseminating material?

  • Yes. One of the innovations that we've been working on -- and again, to be very clear, we regard our success as being dependent on a number of factors. I've already talked about real identity as one of them. Providing a safe and orderly environment, in which your daily experience is not one of coming across illegal material or material that is offensive under our terms of service, is another of them. So we're constantly trying to assist the people who use our service to understand what the limits are, what they can and can't do.

    One of the innovations that we're working on at the moment is, where we've had to remove a piece of content, to post a message, so that when that user next logs on, they get a message right in front of them that says, "Hey, you've breached our terms of service, this is what you did, you must click here to acknowledge that you breached the terms of service before you can carry on using Facebook". That kind of thing, which we call an educational checkpoint, makes people stop and take some education, and is an example of the kind of innovation that we think can provide for a very safe environment, and one in which hopefully people get better behaved over time because they understand the rules better.

  • Another problem to bowl at you, and which you touch upon in your witness statement, is what happens when you have a link in a post which is a link to material which is very largely not a problem, but includes some objectionable content. What could you do in that circumstance?

  • All the time, our starting point -- and again, I think the starting point for most of our peers -- is that we've created a platform on which people should be free to speak, as long as they do that within our rules. So if part of their speech is that they're interested in linking to a newspaper site, for example the New York Times, and discussing material on there, that should be fine. You could imagine circumstances in which somebody has a problem with one particular article on the New York Times, and in those circumstances we would regard it as disproportionate to remove all links to that publication because of the one article.

    Again, I think there's a very comparable debate going on in the copyright space about at what point a site that someone might link to becomes wholly or primarily illegal, and therefore subject to some form of action or removal, and at what point a site that's otherwise perfectly legitimate, but happens to have a very small amount of illegal material, should be treated reasonably permissively in terms of links to it.

  • Can I ask you now about regulation? What is Facebook's view about decisions of domestic regulators? For example, in this country, we have the Press Complaints Commission. Would you regard a decision of the PCC as being conclusive or at least very cogent proof that material was objectionable?

  • I looked at the examples that you kindly sent from the PCC, and I think what was interesting to me was that they seemed to be rather examining the behaviour of newspapers in taking material from Facebook and using it, rather than directed at things that were posted on Facebook.

  • We'll certainly come to that aspect in a moment. But have you come across PCC decisions being used to support an application to take material down?

  • No, not that I'm aware of. The cases we've been aware of have rather been of that nature -- people taking material from Facebook elsewhere rather than putting material onto Facebook -- with PCC judgments in some way being seen as part of a complaint to Facebook. Again, looking at it structurally, I would imagine that if the PCC have found against a newspaper and they've published a correction, then anyone on Facebook who linked to that newspaper would, one would hope, see the corrected version rather than the original version that was subject to complaint.

  • Indeed and one would expect compliance by someone who was within the PCC scheme with the judgment.

    Can I ask you now about what Facebook's position might be if there were to be, and I stress the "if", a future media regulator in this country dealing with press complaints, if it were to find content objectionable and say so and it was material posted on Facebook. Is Facebook likely to be receptive to such decisions and prepared to take such material down?

  • It's not surprising, I think, to say that any Internet provider would want to give this considerable thought, but just to start that process off: it does seem to me, looking at the PCC judgments, that in most of those cases citizens typically place different stock on a piece of content on the basis of whether it's posted on a social network like Facebook or printed in a newspaper.

    In other words, people were in some cases very comfortable to have material online on a social network service like ours, it wasn't causing them a problem, but the moment that it was put into an editorialised authoritative source like a newspaper, it became significantly problematic for them.

    So I think that if one is moving towards the kind of model that you've discussed with ourselves and other witnesses, for us it would be important to distinguish editorialised published content from what one might call chatter on the Internet, so that an adjudication about editorialised published content would in turn feed through to the Internet platforms. If the original material were corrected, that in turn would feed through to anyone who linked to that original material in an editorialised publication.

    If the model is to somehow make judgments about the kind of chatter that people engage in on Internet sites, I think my starting point would be to have concerns about whether that's workable and whether it's proportionate to the offence that's being caused.

  • Not least given the number of users?

  • Yes, and the amount of content that's simply on the site.

  • Can I explore just a step further: if there were to be a future regulator to whom a person could apply directly and make a complaint about a Facebook posting, and Facebook was expected then to respond to that complaint and be the subject of a binding adjudication by that body, what would Facebook's response be to such a proposal?

  • We would look at the proposal, but, I think, issue some words of caution. We are familiar with dealing with disputes between people about content at very large scale, and getting to the point where we feel confident about dealing with those complaints has been very challenging, as it is for any Internet service, in terms of building systems that can cope with the number of conversations now taking place across the Internet.

    If you were setting out to create something similar as a regulatory body, then I would offer some words of caution about the thresholds you apply before you start investigating, so you're not ending up simply unable to cope with the volume, and if you do decide to proceed, then we could offer some expert guidance on how to cope with volumes of complaints on the Internet.

  • But is there a search mechanism on Facebook?

  • There is a search mechanism. It's not the same as the Google-type search mechanism, because it's generally just searching public content. Again, sir, one of the crucial distinctions between a social network like ours and general searchable Internet content is that a very large amount of the content is only published between small groups of individuals.

  • Rather than to the whole world, and therefore not searchable, I should say, for that reason.

  • Yes. And not at the core of the issues that concern me, because what you're really saying is that Facebook is one giant conversation -- for children, a playground conversation, or for other groups, a collective communal conversation?

  • I think that's a very, very good analogy, yes. A lot of the conversation is the chatter in the pub, if you're an adult, or the chatter in the playground if you're younger. It happens to be online and digital, but the way in which people approach those conversations is very similar to any other kind of conversation.

  • Could I ask a question which may reduce the impact of all this: does it have a shelf life? In other words, if you've put some material on Facebook, is it there forever?

  • So it's the individual themselves who decides when to put the content on and when to take it off. We're very clear: our terms of service again state very clearly that you own the content on Facebook. We're just undergoing a transition at the moment to a different way of displaying the content that a user posts, something called Timeline. When we've gone through that transition, any user will be able to see all the content they've ever posted on Facebook and be able, with a click, to delete it, restrict the audience or change it.

    So we, Facebook, don't put the shelf life on, but we give individuals the tools to decide --

  • The opportunity to decide their own shelf life. I've got it.

  • I said I would return to the question of others using or misusing material which has been posted on Facebook, and one of the articles that we've put into your bundle concerns the survivors of the Dunblane massacre and how Facebook material which they had posted was used in a story about the anniversary of the tragedy at Dunblane. Is there anything that Facebook can do to prevent that sort of misuse of Facebook-posted material?

  • The primary way in which we approach this is to offer the user education and tools. The kinds of tools they have are the ability to choose who is on their friends list and which audience they have for a piece of content, and to block people: if someone they don't like is trying to access their data, they can create a block so that person can never access them.

    So we've given them the toolkit to do that, but of course sometimes people will get around it and get hold of the content.

    I'm afraid that once they've taken the photo, or taken the content, and copied it off somewhere else, there's nothing at that stage that Facebook can do to recover it. It is an area where, I guess, as a citizen, I can see there is potentially a gap now in the individual citizen's ability to take action about the misuse of their data where it's been copied digitally from the Internet, but I think it's not something the service provider can do once the content is in another environment.

  • Returning to your witness statement, is it right that Facebook works with domestic institutions such as the Advertising Standards Authority and the Information Commissioner's Office?

  • I think that's all I have for you. Thank you very much indeed.

  • Sorry, one of the great issues that the Inquiry is facing is the extent to which what might be described as the traditional media is being impacted by social media and other similar types of publication online, and the concern that information that they are not permitted to publish can spin around social media sites in a way that puts them at a commercial disadvantage, but also could prejudice proceedings or whatever.

    I'm not saying that Facebook were at all involved in, for example, identifying the name of somebody who had sought an injunction, and whether that came through Facebook doesn't really matter, because it could equally come through any one of these routes.

    Has your industry given any thought to how that position can be regularised or made better, because one of the arguments that is presented to me is -- they don't put it in this way, but I will: "Why are you hitting me, because however you control me, there are a whole load of other people out there who are just poking their thumbs up at you and there's nothing you can do about it"?

  • I would make two points on that.

    Firstly, I think, just to put in context the relationship between the media and social media, that actually we are becoming one of the major distribution channels for traditional editorialised media content. So somebody like the Guardian has now over 5 million people using an application where they bring the Guardian content into Facebook, and we drive traffic for them and help them to share their material across social media.

    We certainly see it as a much more complementary set of things that we each offer. They offer great content; we don't produce content. We offer great distribution, and distribution can be a challenge for them through their traditional websites. So I think the relationship is hopefully less confrontational than --

  • No, I don't think they were saying it's confrontational. It's used to me as a confrontation, with me saying, "It's all very well you having a go at us, but" --

  • To put that on one side and come then to your comments about what people say in our environment and whether, as I understood it, that should be equalised with what people can say in their environment: again, to come back to that analogy of the chatter, the conversation in a social space, to us that would be like saying you should equalise what people are allowed to say in a pub with what people can say in a newspaper. They are just different ways of speaking, and of course people do gossip in pubs and spread names and so on in the same way that they do online. Without unravelling the Internet and shutting down or severely curtailing these kinds of services, I find it hard myself to see how one can deal with that.

  • Because there is no mechanism whereby you can, even if you wanted to, really control content, save for individually looking at a particular post and saying, "That shouldn't be there, it's off"?

  • Exactly. And the kind of measures that one could take to control content, you know, at a deeper network level, I think are ones that most people would regard as disproportionate and excessive.

  • That's the point. Because as soon as you have to insert a human being into the process of making a decision, you have made it extremely labour-intensive and utterly incompatible with trying to service the needs of 30 million users across the UK.

  • Yes, well, I think I'm trying to summarise your evidence rather than make some new suggestion. Lord Allan, thank you very much indeed.

  • We have one witness left. Let's just have five minutes now before we take the witness. Thank you.

  • (A short break)

  • Good afternoon, sir. The final witness today is Ms Camilla Wright from Popbitch.