INTRODUCTION AND BACKGROUND
The Costeja ruling
The “right to be forgotten” (RTBF) —perhaps more accurately described as a “right of selective de-indexing”1 or “right to delisting”,2 since the “forgetting” involved is not and cannot be absolute3— first emerged into mainstream public consciousness and practice with the European Court of Justice’s (EUCJ) May 2014 ruling in the Google Spain SL, Google Inc. v. Agencia Española de Protección de Datos, Mario Costeja González case.4 In that case, the Court took up the question of whether the provisions of the 1995 Data Protection Directive5 dealing with “data controllers” and “data processors” (since superseded by the General Data Protection Regulation, adopted in April 2016 and in effect as of May 25, 2018) also applied to search engines like Google. In an opinion that, surprisingly, chose not to heed the earlier recommendations of the EUCJ’s Advocate General6 assigned to research and report on the issue, the court held that the directive did apply to search engines.7 As a result, EU citizens like Costeja González could now request that certain Internet search engine results pertaining to a search for their name be removed from those results. The underlying details of the Costeja case were predicated on Costeja’s request to Google in particular to remove a specific URL result, an announcement of bankruptcy proceedings in the Spanish newspaper La Vanguardia that the newspaper had been legally obligated to publish,8 from online searches for his name.
The EUCJ ruling stated, controversially,9 that going forward, once a request like Costeja’s had been made, it should initially be the search engines themselves10 that would decide whether or not to honour and act on the request,11 according to a list of criteria provided by the court,12 with rejected requests appealable to the local national data protection authorities.13
The immediate response to the ruling
Google, because of its dominant market share14 as a search engine within the EU region, was most immediately affected by the ruling. Facing potentially enormous fines,15 and likely also needing a “win” in the court of public opinion, Google debuted16 its new RTBF-specific request form17 only a few months after the Costeja ruling, just before18 the EU released its own RTBF guidelines.19 This timing was controversial,20 and spoke volumes about how seriously Google was taking the ruling, as well as the resources Google considered necessary to invest —and had available— in addressing the implications of the ruling. The new tool also immediately brought to light both predictable21 and unforeseen obstacles.22 Private companies emerged both to facilitate EU citizens’ participation in the process and to offer insight into its mechanics as it evolved, and other search engines affected by the ruling also created their own mechanisms for making removal requests, although the scale of the requests subsequently sent to them was far smaller than that of those sent to Google.23
For example, ReputationVIP, a reputation management company, launched a service called “ForgetMe” that compared and cross-referenced Google Transparency Report results24 with the requests filed through ReputationVIP’s services, in order to accumulate and generate statistics about the nature of the removal requests sent. Although ForgetMe’s statistics page mentions data collected on requests sent to both Google and Bing, the discussion and underlying request data appear to deal almost exclusively with Google;25 Bing is mentioned only once. ForgetMe’s initial data from 2014 show Bing receiving fewer than one thousand requests, compared to Google’s nearly two hundred thousand, somewhat to Bing’s chagrin.26 ReputationVIP released a follow-up analysis in 201627 with similar and more extensive data, as well as a retrospective report in May 2018, in which it announced that it was “closing” the service, due in part to the GDPR and the maturation of the RTBF.28
Subsequent developments
The contours and scope of the RTBF request and removal process, especially how search engines are to decide what online material may be deemed suitable for removal and why,29 have continued, and will likely continue, to evolve,30 especially in the wake of the EU’s General Data Protection Regulation having become law as of May 25, 2018,31 and especially because the GDPR offers removal of information as an entitlement to EU citizens, one with privileges over a number of services beyond search engines. Most notably, more recent high-profile litigation, such as the various Equustek v. Google cases32 and the on-going CNIL lawsuit against Google in France,33 has grappled with the question of whether a particular country’s rulings on de-indexing should have global reach. Ironically, the original Costeja case and its plaintiff have become so iconic and embedded in public consciousness that the Spanish Data Protection Authority subsequently ruled that Mr. Costeja had become a sufficiently public figure that he no longer qualified as someone who could exercise his right to be removed from search results,34 a perfect example of the Streisand Effect in action.35
HISTORICAL ANTECEDENTS AND POSSIBLE EXPANSION
Historical Antecedents
Although the Costeja case and ruling have come to stand as convenient symbols of the “right to be forgotten”, and although the case and its ruling’s aftermath have almost certainly brought the concept and its practical application to popular consciousness for the first time, the concept of an individual member of society having some kind of right or ability to exercise a degree of control over publicly available information about themselves has existed in a number of countries beyond the EU,36 dating back prior to the 2014 ruling, prior to Mr. Costeja’s 2010 lawsuit, and possibly earlier. Indeed, the Costeja case itself did not purport to create a new right, only to include search engines and their results as relevant to and controlled by the EU’s 1995 data protection law.37 Both legal academia38 and the courts have discussed an analogous right since at least 2009,39 if not since much earlier, perhaps even since the early part of the 20th century or before,40 using a more expansive definition of the cultural concerns underlying RTBF.
Roughly similar concepts have been called a right to “practical obscurity”41 in United States jurisprudence; a “droit d’oubli” in France42 and in French-speaking Canada prior to the EUCJ’s Costeja ruling;43 and a “right of cancellation” in Mexico’s data protection law,44 to name only a few. However, most prior discussion of similar ideas fell under the rubric of privacy45 or defamation law. In fact, questions of reputation underlie many of the current RTBF or GDPR right-of-erasure requests being made, with privacy concerns (especially for businesses) as the other top category of request.46
Possible Expansion
Whatever the broad concept behind RTBF has been or is now called, there is currently a powerful global demand by individuals for the ability to control the nature and tenor of their online “identity”, or presence, even in countries, such as the United States, that place a high value on unrestricted speech.47 In fact, a 2015 poll of US citizens found that 88% either somewhat or strongly supported the idea of a law that would allow US citizens to remove personal information from search engine results.48 This is somewhat surprising in light of the First Amendment to the United States49 Constitution, which guarantees, among other things, the freedom of speech, and which informs virtually all discussions of content regulation in the United States, especially since it is often considered a legal “outlier” among global free speech laws.50 Although there is as yet no true RTBF jurisprudence in the United States, there have been a few key cases that address the rights, obligations, and liability of search engines, at least in the context of the First Amendment and intellectual property disputes.
Quite early in Google’s existence, in Search King v. Google Inc.,51 a court ruled that Google’s rankings were protected opinion. Later, Zhang v. Baidu52 addressed a search engine’s own right of expression in 2013 (just prior to the Advocate General’s advisory opinion in Costeja), somewhat controversially53 finding that a search engine’s decisions regarding results are themselves expressive content protected by the First Amendment.
This desire for an ability to manage one’s online reputation may be swiftly making its way into the US and global zeitgeist for a variety of reasons: because more and more of the citizenry’s social interaction takes place online; because online material is more “permanent”, at least in a chronological sense; because yesterday’s or last year’s headline news is potentially just as accessible as today’s, and any associated harm is similarly evergreen;54 because digitisation of archives and records is gradually eliminating obscurity-based privacy; because the ease of access and ubiquity of the web mean that online information always has a potentially global audience, making it increasingly difficult to leave one’s past behind by virtue of the passage of time or a change in geography; or, most likely, because of some combination of all of these. Social expectations, mores, and coping mechanisms have not been able to evolve as rapidly as technology, its affordances, and its consequences.
OTHER TECHNIQUES FOR THE REMOVAL OF ONLINE CONTENT
In tandem with this history and context of a de facto “right to be forgotten”, and the desire for a power of removal or obfuscation, whether rooted in privacy, reputation, or simple control, an ecosystem of mechanisms and practices has developed that is intended for, and used to achieve, some kind of “forgetting”, at least in the form of the removal (whether partial or complete) or obscuring of online materials. Because of the historical accident that many, if not most, of the global internet’s largest companies were founded in the United States,55 this ecosystem exists within, and is defined to a great extent by, the context of United States law, although it is used globally. This removal ecosystem’s tools range from “standard” lawsuits and injunctions, usually having to do with defamation, to other official affordances for the removal or obscuring of online material, such as those rooted in copyright. The almost nine million requests56 for removal of online information found within the Lumen database provide an intriguing and informative window into the use of these various tools, as well as into the mind-set and motivations of the individuals using them. Whether it is the pre-Costeja, pre-GDPR practice of using copyright takedown requests to attain the (even temporary) removal of critical material or of information not subject to copyright; the post-Costeja, post-GDPR error of using non-RTBF removal affordances to seek RTBF removal; or continued reliance on, or gaming of, the judicial system,57 the Lumen database has an example of each. A closer look at these may offer some insight into what sort of legislative action the United States (or another country currently without a legally defined RTBF) might take in the future with respect to creating its own version of a legal right to be forgotten.
EXAMPLES OF LUMEN NOTICES ROUGHLY ANALOGOUS TO RTBF REQUESTS
What is the Lumen project?
Lumen is an independent research project, based at the Berkman Klein Center for Internet & Society at Harvard University, that studies requests sent to platforms, search engines, and others to remove materials created or uploaded by Internet users, whether on legal or extra-legal theories. Founded in or around 2001 by Wendy Seltzer as the Chilling Effects Clearinghouse, the project aims to educate the public; to facilitate research into the different kinds of complaints and requests for removal —both legitimate and questionable— that are sent to online publishers and service providers; and to provide as much transparency as possible about the “ecology” of such notices in terms of who sends them, why, and to what effect. Lumen’s database contains notices from a wide range of recipients and senders. Although the majority of the entities that share with Lumen copies of the removal requests they receive are based in the United States, the senders of those removal requests can be, and are, from all over the world, and visitors to Lumen come from virtually every country.58
Any comprehensive, detailed examination of the entirety of Lumen’s data corpus —approximately nine million notices as of this article’s publication— is beyond the capacity of a single individual. Even with the assistance of machine learning techniques, only broad aggregate and statistical conclusions are possible.59 With that in mind, the discussion that follows is of necessity limited in scope: a preliminary overview, or pilot inquiry, restricted to an initial pass over the Lumen corpus relying on the search tools Lumen makes available to the public.60 It is unquestionably not methodologically rigorous or comprehensive. It is to be hoped that other researchers will have the time and inclination to take up this gauntlet and carry out a more exhaustive examination of one or more of these topics. That being said, even at the low level of granularity of this first-pass approximation, several broad categories of attempts to achieve the “forgetting” or functional obscurity of pieces of online material clearly emerge.
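For readers who wish to attempt a broader pass of their own, queries against Lumen’s public search tools can be scripted. The following minimal Python sketch only constructs keyword-search URLs of the kind the public interface accepts; the base URL and the `term` and `page` parameter names are assumptions modelled on the public site’s search pages, not a documented API contract.

```python
from urllib.parse import urlencode

# Base URL of Lumen's public notice search. Assumption: the public
# site exposes keyword search via "term" and "page" query parameters.
LUMEN_SEARCH_BASE = "https://lumendatabase.org/notices/search"

def build_search_url(term, page=1):
    """Build a URL for a keyword search over the Lumen notice corpus."""
    query = urlencode({"term": term, "page": page})
    return f"{LUMEN_SEARCH_BASE}?{query}"

# Example queries mirroring the keyword searches discussed in this article.
for keyword in ["privacy", "reputation", "right to be forgotten"]:
    print(build_search_url(keyword))
```

Actually fetching and paginating through the results, and respecting Lumen’s rate limits and terms of use, is left to the reader.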
THREE CATEGORIES OF NOTICES IN THE LUMEN DATABASE USING CURRENTLY EXISTING REMOVAL MECHANISMS TO ATTAIN THE “FORGETTING” OF ONLINE MATERIAL
“Misapplied” copyright notices
Broadly, this category includes any removal requests using US copyright law’s Digital Millennium Copyright Act (DMCA) notice-and-takedown (N&TD) system to seek or attain takedowns where no copyright interest is involved, or where, even if a valid copyright claim could be made, traditional copyright interests are only tangentially at stake or not at stake at all.61 These notices do not mention forgetting, but their broader underlying goal is the same: to have specific content items made unavailable to the public.
Within this group of “misapplied” DMCA notices, at least three distinct subsets are readily apparent. Only two of these are requests seeking removals in order to “forget”, but the details of the third offer useful parallels for considering the expansion of removal tools beyond their originally intended use, especially within a legal regime like that of the United States.
DMCA notices with a weak or absent underlying copyright interest
Lumen’s database contains DMCA notice requests for the removal of material in which there is no underlying copyright. The material in question might be explicitly non-copyrightable;62 the sender might not possess a copyright to enforce;63 the request might seek the removal of material unrelated to the copyright;64 or the underlying material might be more appropriately protected by enforcing another right, such as trademark (see below for more on this). These notices represent either a misunderstanding of the scope and boundaries of the DMCA65 —not everyone is an expert on US federal copyright law—66 or possibly a knowing misuse of the DMCA system, whether malicious or well-intentioned, in recognition of the scale at which many OSPs, such as Google or Twitter, receive DMCA notices, as well as of the toothless penalties for false submission67 and, therefore, the likelihood that sending such a notice will still achieve a successful, if temporary, removal. This is probably the largest sub-category of RTBF-analogous notices. DMCA-based notice-and-takedown is the largest, most prominent, best known (if not best understood), and most widely instantiated N&TD mechanism currently available to members of the public. It is also the mechanism that has been available the longest, dating back to 1998, only a few years after the explosion of the Internet and Web into common usage and parlance. As one final element leading to DMCA notices being seen as a near-universal removal tool, there is the near-ubiquitous notification at the bottom of many Google search result pages.68 Google is the primary search engine for many Internet users,69 and it is all too easy to skip right past “copyright” —itself a potentially confusing topic— to “removals”.
Typical examples of DMCA notice submissions seeking the removal of non-copyrighted material include those aimed at removing government documents or other public materials, and requests motivated by reputational or privacy concerns. As of this writing, there are several thousand DMCA notices in the Lumen database that contain the word “privacy”. Compared to the millions of DMCA notices overall, this is an insignificant fraction, but it is important to recall that each of these notices may well represent an individual and their story. Seen another way, several thousand people have tried to use the DMCA to remove what they considered to be private information from the Web, and subtracting out the notices that contain “Electronic Communications Privacy Act” in boilerplate70 still leaves several hundred.71 Additionally, there are several hundred DMCA notices in the Lumen database that contain the word “reputation”, including, as just a few examples, notices 10115339,72 1945884,73 12747534,74 12766591,75 and 12731263.76
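The boilerplate-subtraction step described above can be illustrated with a small text filter over notice texts. This is a purely hypothetical sketch: the sample notices below are invented for illustration, and a real analysis would apply the same logic, with more robust matching, over the actual corpus.

```python
# Illustrative filter: count notices that mention "privacy" somewhere
# other than inside "Electronic Communications Privacy Act" boilerplate.
# The sample notice texts below are invented, not real Lumen notices.
BOILERPLATE = "electronic communications privacy act"

sample_notices = [
    "Please remove this page; it violates my privacy.",
    "Notice sent under the Electronic Communications Privacy Act.",
    "This post exposes private medical details and harms my privacy.",
    "Standard DMCA takedown request for a copyrighted song.",
]

def mentions_privacy_substantively(text):
    """True if the notice mentions 'privacy' outside ECPA boilerplate."""
    lowered = text.lower()
    # Remove the boilerplate phrase, then check for remaining mentions.
    stripped = lowered.replace(BOILERPLATE, "")
    return "privacy" in stripped

substantive = [n for n in sample_notices if mentions_privacy_substantively(n)]
print(len(substantive))  # prints 2 for the invented sample above
```

Even a crude filter of this kind makes the distinction in the text visible: the second sample notice mentions “privacy” only inside statutory boilerplate and is excluded from the count.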
DMCA notices seeking removal of non-consensually distributed intimate images (“NCII”)
NCII images are more colloquially known as “revenge porn”, but this is a misleading label;77 NCII is a more accurate term.78 The subject of a non-consensually distributed intimate image may or may not hold the copyright in the image in question.79 It is true that the right to control the distribution of copies is a baseline right within copyright law doctrine,80 especially for works not yet (or never to be) published, but the concerns motivating that right within traditional copyright law doctrine are not usually linked to a desire to remove a copyrighted work from public consciousness because of the reputational concerns associated with its content.81 Those concerns are more traditionally associated with privacy.82 NCII images (and how to handle them) are a charged and contentious topic,83 with the issue prominently featured in late 2014 when the so-called “Celebgate”84 data theft took place.85 At least some of the celebrities involved launched on-going legal campaigns to have their images removed from the many locations at which copies were stored, as well as from Google search, and there are many related notices in Lumen’s database.86 Additionally, there are over eight hundred DMCA notices that contain the word “nude” and over six hundred containing “naked”.87 It seems inarguable that were an RTBF to exist in the USA, individuals would be sending similar notices, with different labels, to remove these images and links to them. Whether the DMCA is working effectively to do so, or whether it is the correct tool with which to do so, is an open question.88 It should also be noted that, because of the unusual nature and growing frequency of NCII removal requests, some OSPs have created special dedicated reporting mechanisms for them. For example, Google has a form for submitting NCII requests89 and does not share copies of those requests with Lumen or include them in the Google Transparency Report.90
DMCA notices seeking to enforce trademark (“TM”) rights
There is no streamlined N&TD process in the US for trademark as there is for copyright.91 Although removal requests enforcing TM are not within the conceptual space of RTBF, it is worthwhile to consider this subset of “mistaken” DMCA removal requests because they are a perfect example of individuals or corporate entities re-purposing existing removal tools when they find that the available modalities are insufficient for their perceived needs. It is of course possible to bring and enforce a trademark claim, even with respect to online material, but doing so requires a lengthy and expensive court proceeding.92 It is no wonder that trademark complainants, seeing the ease with which a DMCA complaint can be sent and the powerful effect it can have, might try to play a little fast and loose with the DMCA process in an attempt to fold trademark concerns in, especially given the close proximity of the two sets of rights. It seems more than plausible that analogous parallels may continue to emerge between RTBF-analogous complaints and tools related to defamation or private-information93 claims, to say nothing of copyright. Some excellent preliminary work on this topic was performed by Lumen team members in 2013.94 A few more recent examples of this notice type include notices 14504359,95 which does not even pretend to be bringing a copyright claim; 12767693;96 12746343;97 and 12755816.98
Erroneous attempts to submit a legitimate RTBF request
The second major type of notice is the erroneous attempt by individuals or companies to make use of the EU’s data protection rights as described by the Costeja ruling. Searching Lumen for “right to be forgotten” with all words required yields approximately forty thousand results. Given that no search engines currently share copies of RTBF requests with Lumen, this represents a significant number of notices possibly linked in some way to the concept of RTBF, even allowing for DMCA and other notices that contain all four of those words by coincidence. The actual RTBF-related notices in this set likely represent a misunderstanding of RTBF and its mechanisms, but they also represent public awareness, however nascent, of a new way to remove online material and a desire to use it. Public perception and understanding of existing modalities, especially on the part of legislators, may well shape the next incarnation of an RTBF system.
The larger set includes DMCA notices that are in fact attempts to submit RTBF requests. Some of them were even retroactively categorized as “official” data protection notices by Google, their recipient.99 Still others reflect a kind of “kitchen sink” approach, citing any and every conceivable ground for removal, in the hope that something will work.100
In another subset, some “court order” notices reference RTBF but are not actually documents from courts of law, or at least do not have an order attached or included. These notices come from a range of countries, including the US,101 and include at least one from the US where the notice sender claims to be an EU citizen.102 A few senders seem to have chosen the route of excess, and have sent many identical “court order” requests.103 Others are simply making sure their request is seen or acted on.104
Other notice senders do not understand the complexities of the RTBF as instantiated,105 and apparently think that performatively,106 or almost talismanically, invoking or naming RTBF as a right is sufficient, whether or not they actually use the words “right to be forgotten” or “data protection”. This contrasts with the correct use of the form provided by Google or other search engines for this purpose, which would allow the request to be evaluated according to the EUCJ’s criteria and developing precedent.107 These confused requestors could be EU citizens or their agents using the wrong tools to submit what might otherwise be a valid claim.108 Still others have sent RTBF notices directly to Lumen,109 possibly to “troll” Lumen.110 Finally, at least a few curious notices from the same sender were sent as defamation notices but reference data protection, with the URLs themselves containing the text “right to be forgotten”.111
Questionable court orders for removal
This broad category of notices includes court orders for the removal of online material, for whatever reason, that do not qualify as “appropriate”, or at their core legal, court orders. There are two typical types: first, court orders generated through appropriate legal processes whose expansiveness may, though only on close inspection, exceed the bounds of what is legally permissible;112 and second, deliberately falsified orders. Both typically seek to achieve the removal of material for which there would be no other legal recourse, and which would therefore otherwise remain online absent a valid court order and its implicit determination of illegality. Both types exploit the weak points of the judicial system, as well as those of OSP removal mechanisms generally.
Professor Eugene Volokh of UCLA School of Law is performing remarkable research with the Lumen database on the topic of falsified US court order removal notices. His work has revealed quite a few intriguing examples.113 Some are amateurish.114 More concerning, there appear to be entire businesses whose revenue model is built around removing online materials that are critical of their clients, specifically by operationalising falsified court orders.115
The practical implications of these developments regarding US court orders should be of great concern to any legislative drafter working on an RTBF law. Previously, a properly presented court order was a kind of “gold standard” for removals.116 If this can no longer be presumed, and if it is not only possible but likely that as many as ten per cent or more of US court orders may have issues, then OSPs like Google, their most typical recipients, will have to handle any court order they receive very carefully.117
The dilemma seems clear. If the removals mechanism is left largely, or at least initially, to data controllers and processors,118 there are questions regarding appropriate transparency, as well as an accumulation of power in the wrong hands, and a lack of a strong voice for the affected public at large.
But, if removals are left to courts, what assurance will the recipients of court orders have that the document with which they have been presented is not a forgery? What safeguards will need to be put in place, and who will bear their cost? The additional burden on the courts alone, even if those courts were bodies newly constituted for the purpose of adjudicating RTBF claims, would be substantial. A hybrid model that strikes an appropriate balance will be complex and challenging to get right.119 This is an especially ripe area for further analysis, both in the US and the EU, given the role that local data protection authorities play in the latter jurisdiction with respect to adjudicating initially rejected requests for removal under RTBF and now the GDPR.120
THE FUTURE OF POSSIBLE “RIGHT TO BE FORGOTTEN” LEGAL REGIMES IN COUNTRIES CURRENTLY WITHOUT ONE
The official existence, since 2014, of a “right to be forgotten” in the EU, the “right of erasure” in the GDPR, and the robust development of the corresponding official mechanisms for requesting and acting on the removal of online materials has radically and permanently changed the nature of the debate surrounding instantiating a similar regime in other countries, or globally. The Overton Window121 has been moved substantially toward a general acceptance of some form of an RTBF. Even in the United States, lawmakers who previously dismissed any discussion of an RTBF as incompatible with US concepts of freedom of speech have changed their minds.122 However, as is already being seen in the heated debates over the appropriate scope and scale of de-indexing, and the reach of a particular country’s court orders to a global online intermediary,123 the differing cultural and legal norms in countries or jurisdictions around the world will likely prevent the easy adoption of a mutually agreed-upon removal framework.
Any RTBF regime in the United States will have to exist in harmony with the centuries of jurisprudence surrounding that country’s First Amendment,124 although some of the most recent US Supreme Court rulings on that topic have caused legal analysts and commentators to point out that many of the current doctrine’s roots are actually quite shallow.125 Many other countries are signatories to the Universal Declaration of Human Rights, whose Article 19 specifically enumerates a freedom to “seek, receive and impart information and ideas through any media”,126 meaning that in theory a “right to be forgotten” must successfully balance itself against those interests. The EUCJ obviously believes itself to have done so with the Costeja ruling, and while there are some who disagree,127 subsequent analyses of various subsets of RTBF and GDPR request data by Google128 and Reputation.com,129 among others, reveal that many of these concerns were exaggerated or unjustified.130
Countries that have experienced radical upheaval or regime change, especially recently, may have a vested cultural interest131 in preventing influential politicians and other figures from sanitising or otherwise obscuring their role.132 On the other hand, those same countries may well also want to be able to move forward without being anchored by a dark or unpleasant past, and it is not at all clear where the balance between truth and reconciliation commissions133 and a right to be forgotten is to be struck, nor in whose favour. Finally, autocratic or dictatorial regimes may well seek to weaponise or otherwise misuse removal tools to censor political opponents, critics, and dissidents.134 This entire discussion must take place within the context of a theoretically and, at least potentially, global Internet. The democratisation of publication and the lowering of nearly all barriers to entry have given a networked individual access to unprecedented levers of power. It is not going too far to say that humanity’s social norms regarding the diffusion, accessibility, and permanence of information have not yet adapted appropriately to digitisation and networked connectivity.135 Our inherent cognitive biases,136 especially those related to memory and reputation, may well never be able to do so. Humanity’s traditional behaviours and inherent frailties and biases may mean that no reconciliation between these various interests is ever possible. It may be that a fully functional and global RTBF will require the effective crippling or deliberate shackling of technology or more likely, a “splinternet”137 that belies the original bright promise of the Web.138
It can reasonably be argued that the DMCA is a de facto global removal regime.139 Although copyright law and its enforcement are still territorial, the Berne Convention140 requires foreign copyright holders to be given local resident status when enforcing their rights. Since a majority, or at least a plurality, of the Internet’s largest companies originate in the United States,141 a DMCA request sent to Google Search, for instance, will achieve practical global removal, via de-indexing, of the content in question under the United States’ Digital Millennium Copyright Act [CITE; harmonisation].142 However, there are already several widely acknowledged problems with the DMCA, including the sheer scale at which it now operates,143 with questionable success; false positives;144 broad-brush automation of the process;145 obvious errors of inclusion;146 a lack of opportunity for recipients to challenge claims prior to removal of the material in question; and deliberate147 or even fraudulent misuse.148 All of this has led to widespread calls for reform from stakeholders on all sides,149 and the US Congress recently held a series of hearings150 to examine to what extent the DMCA should be revised. Given this level of tumult, developing an RTBF-analogous removals regime in the United States predicated on existing copyright mechanisms seems problematic at best, especially given that the material most commonly the subject of RTBF removal requests is either journalistic or the critical expression of other individuals, in which no copyright has been or can be asserted by the requester.151 Not only that, but also, “[T]he trouble with comparing copyright law to privacy, though, is that the United States and Europe broadly agree on what constitutes copyrighted content, but the boundaries of private information are far more nebulous”.152
With that in mind, it may well be that the first mover from a regulatory perspective will, by default, largely set the tone and terms for future laws elsewhere.153 Consider, as just one example in this space, Facebook’s announcement, which came shortly after Mark Zuckerberg testified before the US Congress with respect to the Cambridge Analytica scandal, that Facebook would extend EU-style GDPR privacy controls to everyone, not just EU citizens.154
Use of copyright (or other existing means of takedown), rather than a new RTBF, will have several potential categories of outcome: some positive, some negative, some yet to be determined. First, it seems plausible to surmise that the more effective the repurposing of an existing removal regime proves to be in a given country, the less pressure there will be to create a wholly separate RTBF-analogous one, whether via judicial interpretation of existing law or by drafting a new law. To give a specific example, if it proved to be the case that individuals wishing to be “forgotten” could accomplish all that they wanted by filing DMCA notices or obtaining court orders for removal, then they simply would. We would then be having a discussion about a “de facto” RTBF, or the “copyright safety valve”155 for reputation management, or perhaps celebrating the expansive inclusiveness of existing privacy and defamation law.
Another removal mechanism that is already in use and gaining traction is reliance on the Terms of Use of the most popular private platforms, especially social media. See, for example, the work of OnlineCensorship.org156 in trying to document the true extent of these private removals. A full discussion of the implications of relying on private ordering as a mechanism for the removal of online content is beyond the scope of this paper, but suffice it to say that there are both advantages and disadvantages to relying on companies to police themselves rather than on a single comprehensive legal or regulatory regime: on the one hand, greater flexibility and theoretical responsiveness to stakeholders; on the other, the potential for a complete lack of transparency or accountability, as well as for “capture” by economically powerful stakeholders at the expense of individuals or the public.157 The lack of explicit guidance for search engines in the immediate aftermath of the Costeja ruling, combined with how directly they were affected and the role they were expected to play, provided an unusual opportunity for the development of a public-private hybrid regulatory regime, with an unusually high degree of autonomy for the private actors in question.158
However, it already seems fairly obvious that existing mechanisms are not completely satisfying public demand. The “street finding its own uses for things”159 notwithstanding, would-be users criticise the inadequacies of relying on laws and mechanisms not designed for their new and unforeseen purposes,160 and the cognoscenti worry about the corrupting effect unintended uses might have on an existing law’s ability to do what it was, in fact, designed to do.161 As Eugene Volokh has put it, “Any system attracts parasites”.162 And, by contrast, the more copyright law is inappropriately expanded, misunderstood, and misused, the more effective it proves to be at removing material never originally envisioned as falling within the purview of copyright’s incentive structure. So there are, and will continue to be, countervailing pressures against the continued expansion of copyright’s or other regimes’ removal tools, which in turn will drive the adoption of other, especially more targeted, removal regimes.
Another foreseeable downstream effect of the expansion of other existing tools will be an increase in the costs of compliance with one or more regimes, whether in terms of execution, oversight, or consequences. The falsified court orders identified by Eugene Volokh offer a perfect case study. Preliminary estimates by Professor Volokh suggest that as many as 10% of the court orders sent to Google for the removal of online material are in some way questionable.163 Assuming those numbers are reasonably accurate and hold in the future, the implications for effective oversight by Google and other court order recipients are substantial, if only from the perspective of allocating resources to monitoring. On the side of recipients, must they now individually verify the provenance of every order instead of presuming validity? If so, how? Given the patchwork quilt of electronic court records in the United States, to say nothing of the rest of the world, it seems clear that satisfactorily verifying each court order would require the full attention of a human being for some period of time, to call courthouses or email judges who are themselves already overworked. The consequences of ignoring a genuine court order are too great for even a recipient the size of Google to treat any of them as less than serious. So this simply cannot scale. On the side of the senders of court orders, as utopian a vision of civic engagement as it might be to imagine that any citizen could easily obtain a court order for the removal of online material when such an order is legally merited, it is just that: utopian. The costs in both money and time of engaging with the legal system will put court orders out of the reach of many, if not most, relegating that removal tool to the wealthy or legally savvy, and thereby rendering it an inappropriate tool for the general removal of objectionable material. Something more universal is to be desired.
It seems most likely, then, that, at least in the United States, the country’s robust tradition of federalism and legislative experimentation at the state and even local level164 will lead to a patchwork of state laws165 attempting to create some form of an RTBF, with varied success, but that, given the obstacles a true national RTBF law would have to overcome, any specific US federal RTBF legislation is a long way away.166