The transcripts of the official inquiry into the culture, practices and ethics of the press.


  • Ms Keller, could I start with you first, please. Could you tell the Inquiry your full name?

  • My full name is Daphne Hija de Primavera Keller.

  • And could you confirm that the contents of your witness statement are true and correct to the best of your knowledge and belief?

  • You tell us that you are the legal director and associate general counsel for Google Inc. and you've been an associate of Google for seven years.

    Mr Collins, could you give the Inquiry your full name?

  • David John Collins.

  • You tell us that you are the vice-president of global communications and public affairs?

  • For Europe, Middle East and Africa, yes.

  • Could you give us a little bit more information about your professional background in new media, please?

  • Yes. I've worked at Google for five and a half years. I advise the company predominantly on public policy issues, and before that I spent between 10 and 11 years in public policy and communications.

  • I think, Ms Keller, I'm right in saying that you've come from America to give evidence. I'm very grateful to you. Mr Collins, I gather from where you're based you've probably not come from America.

  • I've come from Victoria in London.

  • Then I won't extend the same thanks to you, but I certainly will thank Ms Keller for taking the time to come.

  • I'm happy to be here.

  • Before I descend into the detail, could I ask you some broad questions about Google's approach to privacy in principle? Can I start with a document which is at tab 21 of the bundle. It's an article in a publication called The Register, which was published on 7 December 2009. I think I need only read the headline. It quotes the Google chief executive officer, Mr Eric Schmidt, with the summary:

    "Only miscreants worry about net privacy."

    The quotation being:

    "If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place."

    Can I ask, is that representative of Google's approach to privacy in principle?

  • It's -- I think I'll answer in two ways, if I may. It's not representative of -- the headline is not representative of the point that Eric Schmidt, now our chairman, was making. The obvious point that he was making was if you share information online, you are sharing it, and it's then shared and it's out there online. But obviously the headline is not representative of our privacy principles.

    Google takes privacy extremely seriously, and it's governed by essentially three broad principles. Firstly, transparency, so making it incredibly clear to the user, someone accessing our services, what data is being collected, how the data is being stored, but also how they can then access, delete and/or remove that data.

    Secondly, choice, so when the user is using the product, they have a very granular level of choices about what settings they want to make using that product.

    Thirdly, control, and this is really important, so when the user is using the product, they have -- ultimately they have the control over how that data is being used. If I may give an example, if I open a Google account and I want very personalised search results, most relevant to me, I can turn that setting off and on at any stage. It's not Google that sets it, it's the user.

    So to go back to the principles, they really are transparency, choice and control, and just to emphasise it, the headline in The Register is not representative of either Eric's view or the company's view on privacy.

  • Over the page at tab 22, there's a BBC News item, no doubt with the unwelcome headline "Google ranked worst on privacy", reporting that back in 2007 a rights group, Privacy International, having rated a number of media companies, ranked Google worst on privacy. I'm very conscious that that's an article which is now some years old. You've provided to us a copy of the current Google privacy policy, which is dated 20 October 2011, and I understand that there is a further privacy policy which is shortly going to supersede the existing policy.

    Is it fair to say that Google has made considerable efforts in recent years to concentrate on privacy and its approach to privacy?

  • Absolutely. If I can refer to the BBC News article that you mentioned, I think if you spoke to Privacy International now, their view of Google and privacy would be very, very different to the view that they had then. I would also not have agreed with their position then, and I think I remember having conversations with them at the time --

  • -- but their view would be very different now.

    I think it's right to say that Google has always taken privacy very, very seriously. It's not just taken privacy seriously from a very strict legal compliance position; it's taken privacy seriously because ultimately the trust that we have with our users is incredibly important.

    Over time, the way in which our privacy governance model is built, and the way that then plays out in the engineering or product design decisions that we make, has certainly improved; we make a very big effort at that. I'm very happy to go into some of those processes now, or later on, if you like.

  • What I would like to ask you about that is: what consultation has there been with the United Kingdom in formulating the forthcoming privacy policy?

  • The forthcoming privacy policy will be announced this week. We talked to the ICO beforehand, and we talked to many data protection authorities around the world, plus also privacy advocates and activists. But it's important to emphasise that this isn't just because of this privacy policy change: we have an ongoing, very regular dialogue with many of them throughout the whole of the year. The reason we do that, again, isn't because there's a legal obligation to do it (it turns out there isn't); it's because we want the benefit of their wisdom. Google does not have all of the wisdom on privacy. We want to hear from other people to make sure that we get the decisions right, and we welcome the input that we get.

  • There was a well publicised problem when Google Street View cars accidentally collected some private data. That resulted in the Information Commissioner's Office here looking into the matter. It concluded there had been a breach of the law, but that it was accidental, and it exercised its discretion not to take legal proceedings. But as a result, I think I'm right to say that Google submitted to a data protection audit?

  • Yes, we did, and it's important to emphasise that we profoundly regretted that incident. As soon as we discovered the incident, we announced it very, very publicly and we immediately contacted the ICO in the UK, but also data protection authorities around Europe and around the rest of the world. Part of the agreement that we reached with the Information Commissioner's Office was to submit ourselves to an audit, which we obviously did, and the ICO audit report makes clear that we made a number of very significant changes and improvements to our privacy governance model internally.

    I think one of the most important of those was the construction of what we call our privacy working group, headed by a director. That brings together the different functions internally to ensure that our privacy principles are being constantly enacted internally and we've welcomed the Information Commissioner's audit as affirmation that we were heading in the right direction on the specific issues that they raised.

  • The audit is in the bundle at tab 10, dated August of last year. It was obviously reporting on a Google privacy report, no doubt dealing with the matters you've just outlined, and the overall conclusion that I'm looking at reads:

    "The audit has provided reasonable assurance over the accuracy and findings of the privacy report as provided by Google Inc. to the Information Commissioner. It has also provided reasonable assurance that Google have implemented the privacy process changes outlined in the undertaking."

    It went on to identify some scope for further improvement and records the fact of improvements which had taken place.

    Are those recommended improvements matters which have been taken into account in the October policy and in the forthcoming new March privacy policy?

  • Well, to go back: one of the core elements of the audit report was the construction of the privacy working group, and absolutely the privacy working group was very much part of the privacy policy change that we announced this week, which of course is part of an ongoing process of improving our privacy policies.

    If I may, sir, I'll just outline very briefly what we are trying to do this week with our privacy policy change, because it's very relevant to the idea that we discuss our privacy policies with outside parties.

    Part of the feedback that we had had from data protection authorities was that we had too many privacy policies. It turned out that we had over 70 covering our different products. Each of those privacy policies was accurate, it gave users really useful information, but the fact that there were so many of them probably didn't help the average user understand exactly what those privacy policies were intended to do.

    So we took that feedback on board and produced one simplified privacy policy, and we were very pleased yesterday that Viviane Reding, the European Commissioner in charge of privacy, who published new regulations around online privacy yesterday, said that she applauded it, so I think it's very much a product of the feedback that we've had, the privacy principles that we -- that govern our approach on privacy, and also the fact that we take privacy very seriously.

  • Thank you. That's all I want to ask you about privacy in principle. If we could move now to look at the corporate structure of Google and Google's operations in the United Kingdom, Ms Keller, you tell us in your witness statement that in the United Kingdom Google has over 1,000 staff working on advertising sales, software development and other functions in London and Manchester. They're employed by a subsidiary company, Google UK Limited, which is incorporated under English and Welsh law. Importantly, however, your search engine services are owned and operated by Google Inc., which is a Californian company, isn't it?

  • Yes, that's correct.

  • You tell us a little bit about in practice where your computer servers are actually based, and there's an exhibit which tells us where the various data centres are. It's right, isn't it, that none of them are in the United Kingdom?

  • Yes. The list of data centres that we submitted is correct. None of those are in the UK.

  • Although Google Inc. is an American company and your servers are located outside of our jurisdiction, you tell us that it is Google policy to operate your UK-directed services consistently with UK law, and users of your service will be familiar with the address Is that the vehicle for achieving that aim?

  • It's a vehicle for achieving a larger product aim, which is providing a search service which is particularly useful for users in the UK, so that the service is tailored to be as relevant and useful as possible for UK users. So, for example, if you were to search for "football" on that service, it would show results about Manchester United or about what Americans would call soccer, whereas a search for "football" on our US-directed service, on .com, is going to turn up results for US football.

    Really, it's, as I said in the written statement, on the UK service we structure it to comply with UK law. This is where the UK law-based removals happen, but it's not just about that, it's about providing a service that's the best for UK users.

    And I would add that we try very actively to channel users to that service, so if a UK user types into their browser, we automatically redirect them to, because we think that's the best service here and that's the one that we operate in compliance with UK law.

  • But it is possible for a user here to log onto, isn't it?

  • And that is a different site?

  • That is the site that we operate as targeted to the United States, and it's operated in compliance with US law, much as is the site targeted to Germany and operated under German law, and so forth.

  • That becomes relevant because you go on to tell us a little bit about your removals policy in relation to your website. I'm going to explore that in some detail in a moment, but before I do that, it might be helpful if we explore in the most summary terms what it is that the Google search engine does. I'm going to try what I'm sure is a rather ham-fisted summary, and you can tell me whether, broadly speaking, I'm correct.

    Is it right that your service works by first of all crawling through Internet web pages, indexing those pages and then, when a user enters search terms, drawing from the index using algorithms those sites which you think best match the search terms that the user has inputted?

  • That sounds perfectly right.

  • I'm not sure about the word crawling, given the speed at which it operates, but ...

  • I'm certainly going to quit while I'm ahead there, sir.

    If we move on now to removal, if someone applies for something to be removed from the search engine, what they're in fact asking you to do is to remove it from the index that I've just mentioned?

  • Right. They're asking us to remove it from the search results that they'll see if they enter a search term that would have brought up that web page as a result.

  • I see. Can I ask exactly how that works? If someone wants to complain about a search result which is being thrown up because the content of the web page which is being offered is, for example, defamatory, how does a user go about doing that?

  • Yeah, so I'm glad to explain that. I think I can clear up a lot about how that works.

    Let me start by saying that obviously Google is not the Internet so what I'm going to describe isn't a way to make a website come down. What we are doing is reflecting in our index the content that came from these third-party sites that are put up by someone else that we have no editorial control over and so forth. We're just attempting to sort of neutrally index them. So the process I'm going to describe is the way to stop a search result from showing up on Google, on sort of our little corner of the Internet, but it doesn't change the fact that it's out there and that a user might find it by following a link, you know, from Facebook or Twitter or from an email.

    So there are two basic processes that I'll go through and each has a different public-facing tool that can be used to get something removed from Google's search results.

    The first is a process for webmasters, so this is for the actual operator of the website, the newspaper in the case of a news website. If a webmaster puts something up and does not want it to appear in our search index, it's really important to us to make sure that we honour that intention. It's a fundamental tenet of our business and I think of every big search engine, of every responsible search engine's business to honour that webmaster's intent.

    In the first place, if they don't want it indexed, there's a technical standard they can use to say, "Hey, Google, don't put me in your search results".

  • That's for a particular story or a particular web page?

  • It's for any particular page, or for an entire site. The webmaster can choose, at whatever level he or she wants to, to say whether or not something should be indexed in our search results.

    But supposing they didn't do that, they published something and they want to retract it, the sort of slowest and easiest option is they just take it down off of their website, and the next time we crawl the website, the next time we visit it, our results will be refreshed and we'll show that it's gone, or that the page has different text now.

    Assuming that there's a more urgent need than that to take it down, we offer a public-facing tool that's called the cache removal tool, and I don't think that was in the evidence we submitted but I'm happy to get you the url or a screen shot.

    The webmaster can go to that tool, type in the web address, a little more information and click a button and say, "Google, get this out of search results as soon as possible", and we do that, we get it out quickly.

    If that, for some reason, were to fail, we have people who can help to accelerate this, because, as I said, it's really important for us to do what webmasters want, to not index them if they don't want to be indexed, and also I would say that for a person who is the victim of defamation or of bad content online, this is by far the best option because it means you've gone to the webmaster, they've taken down the content where it sits, they've solved the problem at its root, and at that point getting it out of Google's index is sort of clean-up.

    That's the first scenario.
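    [Editor's note: the "technical standard" mentioned in that answer is the Robots Exclusion Protocol. A minimal illustration, with invented paths, of how a webmaster can opt a whole section of a site, or a single page, out of indexing:]

```
# robots.txt, served at the site root; asks crawlers not to fetch
# anything under the listed path (path is an invented example):
User-agent: *
Disallow: /retracted-stories/

# Alternatively, a single page can opt out via a meta tag in its
# HTML <head>:
#   <meta name="robots" content="noindex">
```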

    The second scenario, and what I think you've heard about here, is if the content is appearing on a website -- again, a third-party website that Google indexed and has no other relationship with, other than being an indexer -- and the individual who is the victim of, say, defamation on that site wants to get it taken down and the webmaster isn't responding, for example. Then they can come to Google, using the tool that's called, I think, "removing content from Google", the one we had a screen shot of in our evidence submission. That also is just sort of: you fill out a form, you name the url, you can tell us which Google product you're talking about and what the basis is, and click a button and submit it. That comes in to a team in Mountain View, and we review it and we take things down in compliance with UK law.

    I should also elaborate that although we think it's sort of best for people to use that site, because it's a very efficient process, it automatically gets added to a queue for review, we really have our ears open everywhere to pick up complaints. So if somebody were to send a paper letter, say, to the UK entity, they would send it to me in Mountain View and we would follow up and apply UK law.

    And so that is the mechanism for getting web search removals in the UK.

    The final thing I would add there is of course we get complaints that are in the form of somebody saying, "Hey, take that site out of your index, it's defaming me", but what we also get, and what is better, I think, as a policy matter, is people sending us court orders -- not against us, but orders against third parties, saying, "Look, Google, I went to court and a judge looked at the facts of this case, a judge weighed a public interest defence" or whatever other complex questions of law might be raised there, "and the judge said that this is defamatory", and it's our clear policy to honour those court orders and to process removals based on that, and it's very helpful to us because it takes us out of the sort of looking at this "he said, she said" situation.

    We submitted in our evidence the Metropolitan Schools case, where Mr Justice Eady discusses exactly this issue, the sort of difficulty of having a technology intermediary confronted with making a decision about a defamation claim.

  • A judgment rather than a clear answer.

  • If we can get a judgment from a court, that's so much better because it tells us what to do.

  • Yes, I understand. What you want to avoid having to do is yourself making a judgment, because you're not in a position to do that.

  • But do I gather that each of the examples you've given me requires the knowledge of a url?

  • Yes. Yes, they do.

  • So if somebody were to come to you and say, "Listen, I've been hideously defamed, and as a result a story has gone around the world about me and I can prove that it's in breach of my privacy rights or whatever, but I can't identify every url, that would take me forever because I can't find them, or whatever", actually you can only work on urls? You can't then do your own search to find out where it is?

  • We do get people coming and asking for that, and as you can imagine, we are not in a very good position to look at every url and figure out --

  • I understand, I understand.

    The thing is -- yeah, so getting urls is sort of the starting point, and it lets us know that a person in a position to make a judgment -- maybe it's just the complainant, you know, the person being defamed -- has looked at this and said, "This is one of the ones that's bad, and this is one that's bad, and here, Google, take it down".

  • I'm going to interrupt one more minute, Mr Barr, and go down a bit of a siding for the Inquiry, perhaps, but I can't resist the opportunity.

    In this country, there is a real issue about what juries learn in criminal cases, and before Google and before this ability to search, it's true that you could go to a newspaper archive and flick back through all the old pages and find out about the criminal history of a defendant, and find out what it was said he'd done or not done, but of course nobody did that.

    But now, where we don't necessarily allow our jurors to know about background history of our defendants, it's very easy for somebody to go on a search engine, type in the name of the defendant and then find all sorts of details, and you may be aware that only this week, in this country -- you may not be aware, but Mr Collins may be -- a juror got into a great deal of trouble for doing just that during the course of a trial and so disrupting the trial.

    Now -- I'm sorry to everybody else -- is there anything that can be done about that problem, or is that just in the too difficult box because of the reasoning you've just explained?

  • Yeah. I think -- so we have the same issue in the United States and the same question about what information should be accessible to jurors. I have not heard a proposal more technically tailored than the idea that one might disappear content from the entire Internet, or from the entirety of Google search results, so that no person can see it, in order to protect this one juror from violating a sworn obligation not to go look for it. So as a technical matter, I'm not aware of any proposals that narrowly get at that one juror.

    As a legal matter, I've heard a little bit about this UK case, where I believe the juror was found in contempt. There are legal obligations on the jurors already and consequences for going out and doing this, so I should hope that the answer lies there.

  • Yes, that's how we're dealing with it, but I'm interested that the same problem arises in the States. There isn't a technical way through, is what you're saying to me?

  • Not that I know of.

  • All right. I apologise for that, but it's rather topical.

  • Certainly no need to apologise, sir.

    You explained that it is useful to you, when receiving a complaint from a third party about content and deciding whether to remove it from the index, to have a court judgment. Can either of you recall coming across a case where a complainant had submitted a decision of the Press Complaints Commission?

  • I'm not aware of one.

  • Would that be sufficient? A regulator -- that raises a question, but if there was a regulator who made an order saying that the newspaper had infringed the privacy in this way or that way, would that be sufficient for your purposes?

  • To be honest, I'm not familiar with the Press Complaints Commission, so we would have to look at it if it came.

  • Can I ask you then about the case which you would rather not see but presumably do sometimes see, which is when a person writes in and says, "Look, your search engine is throwing up results directing users to defamatory material about me. I say it's defamatory because ..." and that's all you get. Do you have a legal team who will consider that, applying UK law and deciding whether or not in their opinion it is defamatory?

  • Yes. We operate in a regulatory framework that includes things like the E-Commerce Directive and implementing legislation for that, and we follow a notice and take down system.

    So if we receive a notice without a court order, which we certainly do, then we look at it and we apply UK law for the UK service, obviously, as best we can. That's me, that is my team in Mountain View doing that, but of course taking extensive advice from outside counsel and counsel in the UK.

  • Can I ask -- you touched upon it in your answer, you said taking down from the UK site or from the UK search engine. Does that mean that when you make a search on for the defamatory article, it won't be produced by the search?

  • But if you deliberately circumvent the automatic redirection and go to, you will still be able to find the defamatory material?

  • Assuming that it is lawful under US law and we haven't received a complaint under US law, assuming those things, then yes.

    Let me talk a little bit about why I think that is the right outcome as a policy matter. You can imagine a world in which we or other Internet companies undertook to apply all countries' laws to all versions of our service, so that a user in the UK on the domain would see search results that had been filtered effectively for the laws of Japan and the laws of Chile and the laws of France and so forth. So third-party websites that show up in our search index, and that are perfectly lawful for a UK citizen to see, would all be missing. It would be a lowest common denominator of lawful speech.

    This isn't an outcome that I think most people want to see, and this is the basis for our dividing our services in the way that we've described.
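    [Editor's note: the policy contrast drawn in that answer -- each country-specific service applying only its own law, versus one global service filtered by every country's law at once -- can be sketched with invented data:]

```python
# Illustrative sketch with invented pages and removal lists; this is
# not Google's implementation, just the policy contrast described above.
results = {"page_a", "page_b", "page_c"}

# What each jurisdiction's law would (hypothetically) require removing.
removals = {
    "uk": {"page_a"},
    "de": {"page_b"},
    "us": set(),
}

def per_country(country):
    """Each country's service applies only that country's own law."""
    return results - removals[country]

def lowest_common_denominator():
    """One global service filtered by every country's law at once."""
    return results - set().union(*removals.values())
```

    Under the second model, every user worldwide loses access to any result that any one jurisdiction restricts.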

  • Understanding the rationale for the system, if we take the example of someone who is famous internationally -- we've heard evidence from a man, Mr Mosley, about whom a video which invaded his privacy went viral and spread globally -- would it mean that somebody in his position, as well as having to try and deal with the individual websites that were posting the material that was offending, so far as Google was concerned, to have it removed from your search results, would have to make an application in respect of each jurisdiction in which that content was illegal?

  • It does. I would hope that wouldn't be a terribly difficult thing to do, and I can tell you that in his case we have removed hundreds of urls, although I agree -- you referenced him going to the individual sites and trying to get them down, and I have to say that because Google isn't the Internet, taking it down out of our search results doesn't make it disappear, that is the right way to get at it and get the content to actually come down from the sites that did put it up.

  • Would it be right that if the video was considered legal in any of the countries in which you operate, it would remain accessible using the Google site for that country?

  • Yes. If there's a country whose law says that that should stay up, then in that country we would comply with that law.

  • So effectively the opposite effect of what you described as the lowest common denominator, if someone is prepared to look in the right country?

  • I suppose you could put it that way.

  • Can you help us with some indication of how quickly you are able to deal with notices asking for material to be removed? No doubt there is a variation according to whether or not it's obvious whether thought needs to be given and so on and so forth, but can you give me a range from best to worst of how long the process takes, please?

  • I don't have specific numbers. I can tell you that we've been getting steadily faster. We've made a lot of improvements both to the public tools and to our internal processes. Actually, often a lot of the volume that can keep us busy comes from copyright complaints, and over the past year we launched what we call our fast-track process for copyright, with new technologies that have greatly accelerated the intake and processing of those complaints, and that speeds everything up greatly.

  • I think I probably should have refined my question. We're primarily interested in privacy and defamation complaints. What sort of turnaround would you expect for those?

  • Sorry, I bring in the copyright thing only because it adds to the queue. We process all of the complaints as they come in, and if there's sort of a glut from one source, that would cause it to slow down.

    But because of that tool, we've gotten considerably faster. Of course we're constantly expanding the team that does this within the legal department, and we have -- we improve tools like the user form that I submitted with the testimony. So it's getting steadily faster, but I don't have the exact figure.

  • For a case which involved a submission with a judgment, are we talking hours, days, weeks or months?

  • I think we're talking days.

  • And for a submission which wasn't backed by a legal judgment, it was just a submission to you that something was defamatory?

  • I think in those cases we're also talking days.

  • It's right, isn't it, that whenever you receive a notice asking you to remove content from your search results, you send a copy of the application to Chilling Effects?

  • That's correct, with the personal information of the sender redacted.

  • Of course. And the purpose of that is?

  • So that -- Chilling Effects is a third-party public interest organisation, and as their somewhat loaded name suggests, they have a mission to document the ways in which content disappears from the Internet, or at least -- I think they apply it more broadly to the Internet. I know that they are trying to document the ways in which results disappear from our corner of the Internet, namely the search results. And so they maintain a database of the removal requests that we've received.

  • In short, these are people who are monitoring censorship of the Internet?

  • I don't know if I would put it that way, because they're monitoring removals, whether legitimate or illegitimate. I don't want to sort of put a cast on whether it counts as censorship or not.

  • I would note there's been a tremendous amount of scholarship that's come out of their database. They recently submitted an Amicus brief in a case and the brief was basically a three-page string citation to different academic articles written using their information.

    One that may be relevant here: this was years ago, a couple of outside scholars looked at -- it was actually Google's copyright removals, but this observation would apply to other kinds as well -- looking at the copies of letters that were on Chilling Effects, and they concluded that over 30 per cent of the letters received and processed were from competitors trying to use the law as an excuse to take down each other's websites.

    So they're documenting both totally legitimate uses of the law to remove things from search and also ways in which the law can be abused as an excuse to try to take down lawful speech.

  • So, as the name suggests, looking at chilling effects. What I want to know is if Google is working in close partnership with an organisation which is considering chilling effects on the Internet, what does it do to help look into the destructive effects of abuses of the Internet on individuals? Is there any equivalent activity that Google is involved in?

  • We've had a number of efforts recently to help users protect their privacy online. I think DJ can probably speak to these more --

  • I can speak to a couple of examples. In the UK recently, we ran a very extensive, widely publicised campaign called "Good to know", and this was a very simple set of tools for people to remain safe online, whether it's privacy protection or securing their email accounts -- not specific, by the way, to Google, but generally how to maintain their identity on the Internet.

    So in terms of the investments that we're making, we ran the same campaign in Germany, where, as I think you'll know if you've ever been to Germany, discussions around privacy generally in society are very intense. We've just launched the same campaign in the US. We'll be doing the same in Italy by the end of next month.

    So in terms of the tools, the investment, the advice and the education that we provide for users to maintain their identity online, I think that in many ways outweighs the relationship we have with the website that you mentioned.

  • That's privacy that you're talking about. I'm really interested in the prevention of illegal destructive content. Is there any research or monitoring work there?

  • I would have to -- I mean, I actually work with a team that commits significant investment to a large amount of very long-term academic research in all areas of Internet regulation and Internet policy. Rather than give you an answer now that is incorrect, I would want to go back and look at the investment that we're making in academic research, which, as Daphne has said, is relevant to the website that you mentioned, and then supply it afterwards, if that's okay.

  • If there is any relevant research we would be very grateful to receive it. Thank you.

  • You provide in your exhibit, Ms Keller, at tab 3, some statistics -- I'm afraid certainly in my bundle they're very difficult to read. A shot from a screen -- about requests from the United Kingdom for content to be removed. And these are, as I understand it, requests from all sides of the UK state, including courts; is that right?

  • I'm not going to go into great detail --

  • That's good. I can't read it either.

  • Certainly a very high percentage of requests appear to be complied with. 65 content removal requests with an 82 per cent compliance rate. It would seem, certainly in the period that this was referring to, January to June of last year, that the single biggest category appears to have been national security matters. Matters of privacy also feature reasonably strongly, is that fair?

  • I think that is fair. I'm sure if you can read it that it's correct.

  • Can I now pick up a little bit on the question the Chairman asked you a moment ago, about Google's attitude towards domestic regulators, media regulators? You've explained that you haven't come across the PCC in practice, but can I ask you about the future?

    First of all, and this is in relation to your search engine, if there was to be a future regulator of the British media, which was to consider a complaint by an individual about, say, a newspaper and its online content, and to rule against the newspaper, what is Google's attitude likely to be to the weight that it would attach to the ruling of such a body, if it was applying an agreed press code, if such a ruling was deployed to support a request to remove a site or an article from your results?

  • It is an incredibly interesting question. I think it gets to the nub of what the Inquiry is looking at. I don't want to get into the sort of position of speculating about what the regulation might look like, or whether it's backed by law or not.

    I think, with a process like that, we would look for exactly the same things that you would look for, which were robustness, that justice is being done, that there's fairness, that there's -- that people get to the truth of the issue.

    I don't want to speculate what our submission to that idea might be, or our reaction to it might be, because I'd want to look at it in a great deal of detail.

    The one point I would make is that we obviously, with our UK services, comply with UK law, but I would want to have a very serious think about a process like that before giving you a full answer. So maybe as you develop the ideas for a process like that during the Inquiry, then we could give you some evidence written into the Inquiry --

  • I can certainly accept that it's unfair to ask you to go into any detail, but would it be fair that from the principles you've enunciated, that if it was something that was working to UK law, you would in principle be content?

  • As we said in our submission, we comply with UK law in this country. As I said, I would want to look at the process in some detail and give you a really full answer.

  • It follows, therefore, doesn't it, that what was backed by law would be more effective in that regard than something that wasn't backed by law?

  • -- in summary, as Daphne said, again I don't want to speculate on something that hasn't been fully developed, but as Daphne said at the start, our preference for removing results from our search index is that it's much better for users if those judgments have been made by essentially a court or a legal process that has weighed all of the evidence, that has been robust, that has been fair and where justice is done, and then the result is, by the way, not just handed to a search engine, but handed to the webmaster and the other entry points to the web.

    I think there is just one point I would like to make, Mr Barr. Google is, as Daphne said, Google is not the Internet. We're also not the only entry point to the Internet. There are now multiple entry points to the Internet. I think it's fair to say there are more entry points to the Internet now than there were when Google was started 12 years ago. So whatever robust system that you recommend will have to cover all those multiple entry points, not just a search engine.

  • Yes, I quite understand that, and I appreciate that you are rightly careful. That's entirely appropriate. Of course, to some extent I have a chicken and egg here, don't I, because if something is going to be more effective one way, then that might drive me more in that direction. If it's going to be less effective, then I'm going to be moving away from that all other things being equal, which of course they're not.

    But it may be that in your answer, you've identified something of significance, because you're right, there are many, many different search engines, and many different entry points to the Internet. Of course whatever order was made would bite the webmaster, because if it was a newspaper or whether published in print or not in print, or just online, you'd have wanted them to be part of the debate. But how one transmits that to everybody is a slightly different problem.

  • And therefore may require, for that reason, somewhat more authoritative backing, if I put it like that.

  • It's a very interesting question, sir. The first principle, as Daphne has rightly set out, is that ultimately the person that publishes that content to the Internet is ultimately responsible for the content that they've published. I think that's the first principle. But it's a very interesting question, and as I said, sir, as you develop your proposals around the system that you just outlined, we'd be very happy to submit some written evidence in time, if you asked us to.

  • Hang on, let me just work that out. So if I have some provisional view, then I could ask you to provide some provisional response to my provisional view?

  • It sounds very provisional, but --

  • Well, it is because this is back to my chicken and egg. I will need to know that whatever I suggest is going to work, and it won't help me if six months after I've published a view, you come along and say, "Well, actually, this doesn't work, but if you'd done it this way, it might have worked".

  • Just to reiterate a point that DJ made, first, ultimately, we will comply with what UK law requires. But what we would hope to see in such a process are the same things that I'm sure you're thinking about already, you know, an opportunity for the publisher to defend himself, an adversarial process, a collection of facts, application of public interest defence. This is not news to you.

  • It may not necessarily be adversarial, it may be inquisitorial. I'm sure you understand the difference. This Inquiry is inquisitorial. Of course, there are serried ranks of the press here to make sure their interests are protected, but it's a question of how best to achieve the result when a complainant might not have the benefit of legal representation, and therefore there's a mismatch of power.

  • But there would have to be a process that was fair, that was fully compliant with the right to be heard, and that comported with a set of principled -- I won't say laws, but rules, which themselves were grounded in respect for all the elements that you would want to see: privacy, freedom of speech, freedom of expression, everything. I don't believe that there would be anything that we would suggest that you would not find entirely compatible with your concerns of fairness, although whether it goes quite as far as your First Amendment is different, but that's a UK position rather than --

  • Of course, a different country, different laws.

  • If I can, I think in summary, as your recommendations apply to services like our own, of course we'll take a very close interest and I'm sure we'll -- if you ask us for our advice, then we would very happily provide it, sir.

  • Can I ask you, given that you have a multinational portfolio, either of you if you can answer, does Google take into account the decision of any foreign media regulators when considering removal notices?

  • I cannot recall ever seeing an example of a media regulator being a basis for a removal, so I think it just hasn't come up.

  • Okay. Can I now move on from the questions I've been asking about your search engine to look at some, but not all, of your other products? I'm going to start with those which are closely related to your search engine because they also work on search principles.

    First of all, Google Images, which is a product for searching for images. What I'd like to ask, first of all, is if someone wants to ask for an image to be removed from your search engine, from Google Images, is the process the same as the one that you've outlined for Google Search?

  • Yes, it's exactly the same, the same web form, the same team at the back end assessing the request.

  • If the request relates to a url, as I think I've understood correctly it must, what happens if the image is being hosted by multiple websites, or if someone is prepared to repost the same image at another url as soon as the original url is removed from the search results? Is there anything that you can do about that?

  • Much as I described for Web Search, because we're not the Internet, we're just reflecting what's out there on these third-party sites, and we are not in a position to assess whether each of -- what the legal defence is for each of them. We undertake to remove based on the complainant, or a court order, identifying the urls.

    I think what you're getting at is maybe the idea that there could be a way to identify if the same image exists on multiple urls and sort of automatically make them all disappear from our search results at the same time.

  • The first part of the answer is we don't have a switch that we can flip, or a button we can push to make that happen. But I think a second and important part of the answer is I'm not sure on policy grounds that you would want such a thing to exist, because while our algorithms, our computer programs are quite good at identifying when a page is relevant to a query, the kinds of things we work on, they're not good at making the kind of judgments that the judge or a court or a human would make about the context in which something appears.

    So they won't necessarily distinguish between a particular image or a particular text phrase used in news reporting or scholarship or art criticism compared to when used in some other context. So if there were to be this sort of -- the switch that you flipped to make all duplicates disappear, I think that an inevitable result would be overfiltering and would be the suppression of perfectly lawful content to the detriment of the webmasters who put up that content. A small business, a small newspaper, losing its traffic from one of the major search engines and losing a lot of readers because of sort of overbreadth of technical filtering.

    If I could offer a personal anecdote on this, I am a mother of two young children, and I miss them when I travel like this, so last night I used my mobile phone to try to look at some pictures of them, which my husband uploaded -- you know, they're our pictures, we took them, and my husband uploaded them to Flickr, which is a photo hosting site owned by Yahoo, and the mobile carrier gave me a message saying that I couldn't see them unless I attested that I was over 18 and then another message saying I was not allowed to attest that I was over 18. So I was technologically blocked from seeing pictures of my own children that I took and that my husband uploaded. This is, I think, an example of the kind of technical error and overbreadth of filtering that can arise through perfectly good intentions.

  • Accepting the technical difficulties and the potentially unwanted results which you've just explained, does Google have, if we take perhaps Google Videos as an example, in the Max Mosley case, if one was trying to search for the Max Mosley video on Google Videos, is there a way of blocking certain combinations of search words, so that it would be quite acceptable to allow through Max Mosley Formula 1, but you wouldn't get a result if you put in Max Mosley and then words to try and single out the offending video?

  • That also is something that we don't have. We couldn't throw a switch and do that, although I assume that an engineer could build it in theory, but I think that that has perhaps even greater potential for overbreadth -- we submitted in our evidence the Metropolitan Schools case which talks about a very similar case to filter all results for a particular pairing of words, and the court noted that there were a number of totally unrelated and totally innocent sites that would have disappeared had it been possible to implement that request.

    I think in the Max Mosley case, obviously there's been all kinds of news coverage about this very Inquiry, and other coverage that is legitimate and that you wouldn't want to disappear from search results.

  • Can I move now to Google News, please. First of all, I'd like to get a summary as to just what the process is. If one goes to Google News, what's happening behind the scenes, in a nutshell?

  • Okay, I'll attempt not to be overtechnical.

    For instance, if I wanted to find out news about this Inquiry, I could either go to the Google Search home page and put in "Leveson Inquiry" -- within that page of search results, some of them would be news search results -- or I could go to Google News, which is dedicated to making queries amongst news content.

    So I put in that query. We then serve back to you what we think is the most relevant information linked to the query that you've made.

    To be more specific, if I put in, say, maybe I'm interested in a particular football team and I follow a particular player and I want to track whether that person is injured for Saturday's game or not amongst the news, I put the name of that person into Google News, and then at the back end our algorithm works very hard to serve back to you links to newspaper or other news content websites that are most relevant to that query.

    It's also important to explain what we're not doing. We're not producing that news ourselves. We're merely producing the relevant links to the most relevant information that we think you're looking for.

  • I was about to say "merely", but I don't say "merely". It is a subset of the general search of Google?

  • It's part of Search.

  • It's a restricted search on news --

  • It's a more refined search, but it's essentially part of Google Search, absolutely.

  • If you're not creating the news, writing it yourselves, as has been pointed out, part of the search technology, the algorithm, is very important, isn't it? It operates as something of a remote automated editor, doesn't it, in what is served up to the user?

  • Can I ask you, does Google accept payment to promote particular news results in response to searches from news organisations?

  • Absolutely not. That's absolutely not what we do. Also, if I may just pick up on the word you used -- sorry, two words, "remote editor" -- it's really important to emphasise this isn't an editor in the sense of many of the people who have given evidence to the Inquiry. We don't have an editorial board for Google News; we don't have an editor saying, "I'd really like to promote that particular link" or "Let's push that particular piece of content up the rankings because my sense is that's what people are looking for".

  • It's a computer programme?

  • Absolutely right. But just to re-emphasise, we absolutely do not take payment for rankings in Google News, just as we don't take payment for rankings in other parts of our natural Web Search results.

  • And so is there any other way by which Google will filter out particular news content or otherwise promote one sort of news over another except for simply trying to match the search terms?

  • I will come to Daphne's world in a second, but as you said at the end, what we're trying to do is to provide information that is the most relevant to the query that you've made -- and again I want to emphasise: not to you, but to the query that you've made. That's the criterion. We don't say, "We don't like this particular newspaper this week"; nobody sits in an office and says, "Let's just take those people out". That's not how it works.

    In terms of content, which is the subject of the discussion Daphne's been having with you around removals, obviously there's a process for that form of content, and if you're the webmaster of an online newspaper or a newspaper's online site, then you use the tools that Daphne has outlined to refresh content -- for instance, if you've taken something down because it's been found to be defamatory. But I want to underline that the central premise of Google News, just as the central premise of our overall search service, is relevance, not whether we like a particular newspaper or not. It doesn't come into it.

  • Just to fill that out, we do legally based removals from Google News if that comes up as well on exactly the same model I've described before. I can't tell you the number of times I've looked at the results for the word "Google" on Google News and there have been a number of things that I disagree with. But we don't have people making choices about that.

  • You don't censor your own content?

  • That's reassuring.

  • Can I move now to, which is owned by Google, isn't it?

  • Yes, it's, but yes.

  • It's a service which allows a user to set up and run a blog?

  • Who do you regard as publishing the content on the blogs? Is it the user or is it Google or is it both?

  • It's the user, and sort of to make the comparison to Web Search, as I described before, Web Search is us being an intermediary, a technical indexer of third-party content that's hosted on third-party machines. Blogger is us providing a hosting platform for third-party content that's hosted on our machines. So it is different. It's on our machines. We didn't create it, we didn't write any of it, we certainly don't have time to read it, given the scale at which it's uploaded, but we do host it and have the power to take it down, and do when appropriate.

    What's the same about Web Search and Blogger is the notice and takedown framework that I described. In both cases, the same web form that I've shown you, where you can check the box to say, "My complaint is about Search", you can also check the box to say, "My complaint is about Blogger", and consistent with the E-Commerce Directive notice and takedown framework and the implementing legislation in the UK, we operate the same kind of notice and takedown process.

  • The process is the same but the result is different because this time you actually kill it, you take it down.

  • Right. So, I mean, unless there happens to be a different copy that someone has hosted somewhere else, it actually is solved at the root.

  • Do you permit anonymous blogs, or do they have to be blogged in the real name of the person posting the content?

  • They are pseudonymous?

  • I don't want to give you the wrong answer. I will check and then come back to the Inquiry afterwards.

  • But I'm sure we have bloggers blogging under names that are not --

  • -- their real names.

  • -- I want to give you the right answer.

  • It's certainly a very popular service. From a recent judgment I've taken the fact that it has half a trillion words on it and 250,000 words are added every minute. Do they sound like familiar statistics?

  • That sounds plausible.

  • So one would understand why you can't read it all.

  • All of that is accessible to UK users, is it, wherever in the world the blog is posted?

  • Does that mean, again, we have the same trans-national issues, where if a blog is contrary to the law of one country, you would take it down on, for example, the UK domain? Or does it work differently: is it just, or do you have country-specific domains?

  • At present, just because of the way that product was technologically designed, it only has the domain.

  • So which law do you apply when deciding whether to take down a post?

  • If we determine that something is in violation of UK law, we do take it down.

  • If it's a UK post. What would you do if it was a French post saying something defamatory about an English person?

  • I don't think we would draw a distinction based on the origin of the post. I should double-check that and get back to you, but I'm fairly confident -- sorry, I represent the web search product, so I'm reaching a little here.

  • Fine. I'm asking a searching question, if you'll forgive the pun, and if you wish to put the answer in writing, that would be helpful, but I'm interested in what the test for jurisdiction is, as to which law you apply.

    If, in relation to Blogger, you are also an Internet host, could I now pose another hypothetical future regulatory question to you based on your Blogger service? If there was to be a future regulatory body in this country, which was going to adjudicate on defamation and privacy complaints, would Google -- and I'm not going to hold you to a firm answer -- consider being part of that system, and what sort of considerations would you apply to the question of whether Google would be prepared to respond within the regulatory system to complaints about blog posts on Blogger?

  • Again it's a very interesting question. I think there are two essential parts to this. Firstly, there is a very clear set of regulations which apply to technical intermediaries and hosting platforms. It's called the E-Commerce Directive, and it does place a number of responsibilities on us around removal of content. I know that you're very aware of it.

    It's important to make the distinction, in the system that you've outlined, between someone who provides a hosting platform for other people to create and post content, and a publisher. Blogger, or other products that attempt to form a community around the product -- YouTube, et cetera -- don't make us a publisher; we remain a hosting platform. So I think in whatever system you devise, it's important to retain that distinction, because not only is there already a very clear set of regulations around those principles placing responsibilities on us, but it retains a very essential balance online, which is: where does that responsibility lie? We have our responsibilities, which we fulfil; the person that produces and uploads that content has his or her responsibilities as well.

    So again, it's -- this is obviously a hypothetical scenario, and something I know that you're working through as you work through your evidence. Again, I would give the same answer that I gave to the Lord Justice, that, as you develop your system and as that regulatory proposal affects our services, we would be happy to supply written evidence if you asked for it.

  • A short question about YouTube. It's owned by Google, isn't it?

  • That service -- would your answer be the same? -- do you regard yourself as hosting the content rather than publishing it?

  • When you go on to YouTube, you do get an image on the screen, which you then click on to watch the video. Do you regard yourself in any way as publishing at least the thumbnail image or do you regard yourself purely as the vehicle for somebody else's publication?

  • I think I would give the same answer that I gave before -- and as Daphne gave around Google Images -- which is, again, we're the technical intermediary, we're the hosting provider. We're not ourselves publishing that content, and I think the E-Commerce Directive and the judgment you referred to earlier sort of underpin that.

  • The thumbnail is part of the processing that we provide as the host. I don't know if this has come up under UK law, but it has under the sort of analogous provisions for copyright in US law, and I think there's a general understanding that hosting includes showing a thumbnail of a video --

  • -- or whatever the sort of normal processing would be.

  • Sir, I've very nearly finished; will you indulge me for a few minutes, please?

  • Can I come on now to a separate topic. It's dealt with at the end of your witness statement, Ms Keller. It's about the concept of self-regulatory traditions, which are developing on the Internet. I've been posing questions about more formal regulation, but perhaps you could tell us a little bit about what you mean when you say that the Internet has also developed global self-regulatory traditions?

  • I will give you a couple of examples; I think DJ may be able to provide some others. I'll start from the bottom up, with self-regulatory traditions that came from the engineers who built the Internet, and the primary one that really affects us is the robots.txt protocol, which I mentioned earlier. There are actually, technically, two varieties: you can use robots.txt, or something called metatags, which is text in the source code of a page that's not visible to the user. Used following a standardised protocol that every webmaster can follow the same way and every search engine can understand, these let webmasters give instructions saying, "Don't index me", or they can vary a little -- they can say, "Index me but don't show a snippet", or things like that. So that's a sort of foundational example in the world we operate in.
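    (For reference, a minimal sketch of the robots.txt and metatag directives the witness describes; the site and paths are hypothetical:)

```
# robots.txt, served at the root of a site, e.g.
# "Don't index me": keep all crawlers out of /private/
User-agent: *
Disallow: /private/

# "Index me but don't show a snippet" is expressed instead as a
# metatag in the page's HTML source, invisible to the reader:
#   <meta name="robots" content="nosnippet">
```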

    An example that comes closer to the kinds of things we think of as regulation is the work done by groups like the IWF, the Internet Watch Foundation in the UK, and some comparable groups like the BPjM in Germany and NCMEC in the US. The IWF is primarily private -- I think almost all of its funding comes from corporate members -- and it creates standards for creating and disseminating lists of urls with child abuse content, very abhorrent content, so that we can get those lists disseminated quickly to everyone who might need to act on them. It's a pretty effective example of a self-regulatory body that came together through industry agreement.

    DJ, I think you --

  • Yes, I would say a couple of things. Firstly, the Internet is very well regulated, in two ways. There is regulation -- and we've been discussing what future regulation might look like -- and that is really embodied by principles around the E-Commerce Directive; the European Commission has published its proposals for online privacy regulation. So it's important to emphasise that we don't think the Internet should just be self-regulated. There is already a body of very tight regulation, particularly in areas around data.

    But self-regulation is also important, because regulation doesn't cover everything, and we see ourselves as a responsible company. We work very closely with the IWF, but also other bodies. I can think of an example in the UK that we've been involved in with the Advertising Standards Authority, so as to make sure that online advertising is being checked in the right sort of way, and we play an active part in that. But also, in terms of global governance of the Internet, there are very well established forums such as the Internet Governance Forum, the IGF, where every year a collection of governments and NGOs and Internet companies come together and work out where the new responsibilities should lie. So both traditions are important, but neither of us wants to imply that in some way we think it should just be self-regulated, because --

  • The regulation you're talking about is slightly different. You're using the word in a slightly different sense. What you're saying is that there need to be common standards, common agreements, common mechanisms that work for everybody, because if they work for everybody, then they will work for everybody. If everybody goes in their own direction, there is a risk for chaos.

  • But there isn't anybody to police it, it's what you all do in order to achieve the common good for all. Would that be fair?

  • Correct. I think it's robust in some areas and I think, if I may, if you look back at the technology developments for the last hundred years, this has been a very common theme; it's not just relevant to the Internet. But a lot of my work and a lot of my team's work, and I work very closely with some of the people that you're going to be hearing from this afternoon, and we take these issues very seriously, because I come back to something I said right at the start, that the trust that we have with our users is very, very important, and in some cases the regulation goes obviously so far, and -- but we don't always just rest with where that regulation lies. Sometimes we think we can tighten up even further, or because our technology understanding sometimes runs a little ahead of some regulatory bodies, we want to do it before there is a need for statutory regulation.

  • Thank you very much, both of you. Those are all the questions that I have.

  • Thank you very much indeed, and thank you again for coming. I'm sorry to deprive you of the sight of the pictures of your children.

  • I'll see them soon. Thank you.

  • (The luncheon adjournment)

  • Our first witness this afternoon is Mr Allan from Facebook.