Monday, 1 June 2020

The “Executive Order” on Internet Platforms Seen from Europe


The Executive Order (EO) of the US President on the control of online content and the liability of Internet platforms (“Preventing Online Censorship”), adopted last Thursday and widely echoed in the media these days, addresses issues whose treatment within the European Union is also under review. In short, the EO (whose text I reproduce as an Annex at the end of this post) concerns two main issues. First, the debate about which rules it is appropriate to impose on platforms with regard to their operation, insofar as the position achieved by some of them makes them an essential element for the exercise of fundamental rights such as freedom of expression or freedom of information, which may precisely justify limiting the provider's freedom to conduct a business when configuring its service, given its special social relevance. Second, the EO seeks to revise the liability regime of platforms with respect to the content their users disseminate through them.

An essential objective of the EO is to link these two issues, on the view that the exemption of platforms from liability for their users' content should benefit only those platforms whose operation does not impose restrictions on the content their users disseminate. Although the legal framework governing the liability of Internet intermediaries such as platforms differs substantially between the US and the EU (leaving aside the specific treatment of copyright infringement, which in any event falls outside the scope of the EO), it is clear that the situation in the US exerts a strong influence on the EU, particularly since the most significant platforms have expanded from the US, seeking to transfer to the rest of the world, and to Europe in particular, a business model designed around the US regulatory framework, in a context in which the effective enforcement of the (more restrictive) standards laid down in other legal systems has shown significant shortcomings. Beyond discussing the content and the limited practical effect of the EO within US law, it may be useful to reflect on whether the change it seeks to bring about in the US corresponds to the situation in the EU.

Since the existing US legal framework cannot be amended by means of the EO, the core of this instrument, which corresponds to its section 2, promotes a particular interpretation of that framework to be applied at the administrative level and in the eventual adoption of implementing rules. Given that this legal framework reflects the paramount importance attached in the US system to the freedom of speech enshrined in the First Amendment to its Constitution, its interpretation, already the subject of abundant case law, does not seem likely to be determined by the EO or by the administrative measures it may give rise to, whose compatibility with the First Amendment itself has already been questioned by its critics (see, for example, here) and is something on which the courts will eventually have to rule.

It is well known that the basic US legal framework on the liability of Internet intermediary service providers, such as platforms, is contained in two instruments: section 230(c) of the Communications Decency Act (CDA) and the Digital Millennium Copyright Act (DMCA). The latter lays down a specific regime to foster the protection of copyright, which inspired the European legislator when drafting the rules of Directive 2000/31 on electronic commerce (ECD; in particular its Article 14, transposed into Article 16 of the Spanish Law 34/2002 on information society services, LSSI), which, unlike the DMCA, apply horizontally (that is, regardless of the subject matter to which the liability relates) and not only to intellectual property. Earlier, in 1996, the CDA, building on the importance attached to freedom of speech in the US and with the aim of fostering the development of the Internet, established a very broad liability exemption in favour of Internet intermediaries with respect to the content disseminated through their services, which applies generally, except in the field of copyright. The text of section 230(c) of the CDA reads as follows:

(c) Protection for “Good Samaritan” blocking and screening of offensive material
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability. No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

Section 230(c)(1) CDA adopts, in principle, a standard very different from that of Article 14 ECD (Articles 16 and 17 LSSI), in that it grants the intermediary practically absolute protection exempting it from liability for content disseminated by its users, by establishing that it shall in no case be treated as the publisher or speaker of information provided by third parties. In contrast, under the model prevailing in Europe, where the balancing of the fundamental rights that may be involved, for example freedom of expression and the right to honour or to privacy, is struck differently (mainly because of the different scope attributed to freedom of expression), it is necessary in practice to assess, in light of the circumstances of the case, whether the intermediary seeking to benefit from the liability limitation acted with the level of diligence required of it in order to find that it had no actual knowledge of the unlawfulness of the content disseminated by the user through its services. This is a standard whose application, shaped by the evolution of business models, requires a case-by-case analysis (for example, the greater the risk inherent in the intermediary's business model, the higher the level of diligence required), as follows from the fundamental contributions of the CJEU and the ECtHR.

However, in the application of these liability exemptions there is another fundamental aspect: determining who may benefit from them. In the EU, it is key, according to the case law of the CJEU since its L'Oréal judgment and to recital 42 of the ECD itself, that the activity of the service provider be “of a mere technical, automatic and passive nature” in order for it to benefit from the protection regime for intermediaries. How this requirement applies to Internet platforms is of great practical importance and still raises questions, which the several cases currently pending before the Court of Justice on this point should help to resolve. In any event, with respect to third-party content that a platform assesses, ranks or recommends to its users, there are grounds to argue that it does not carry out a merely passive or neutral activity, as the ECD requires, which would mean that it could not benefit from the liability limitation of Article 14 ECD. That said, not benefiting from the immunity is not tantamount to being liable; liability must be assessed in each case under the rules applicable in the relevant field.

The EO seeks an evolution of the existing US framework as regards determining who may benefit from the immunity of section 230(c)(1) CDA, as a way to combat what it calls Internet censorship by platforms. It takes the view that reading subparagraph (1) of section 230(c) together with subparagraph (2) should lead to the conclusion that those who restrict access to certain content beyond what is provided for in subparagraph (2) cannot benefit from the exemption and must be treated as content providers, so that they may be held liable for what others disseminate through their services. In principle, this approach may be regarded as similar, despite all the differences, to the one prevailing in the EU with its requirement that the service provider's activity be of a mere technical, automatic and passive nature. In practice, however, significant differences remain between the EO's approach and the situation in the EU. The EO seeks to encourage the absence of content control by platforms, which does not correspond to the due-diligence-based standard of the ECD. In any event, the absence of control in the US model rests on treating the safeguards provided for in subparagraphs (1) and (2) of section 230(c) as independent guarantees, a position the EO calls into question, as its critics have pointed out (here).

Beyond advocating non-intervention by platforms (although determining when restrictive measures may be regarded as "taken in good faith" and justified will in any event remain controversial), the EO does not include specific measures on the operation of platforms of the kind contained in Regulation (EU) 2019/1150 (here) or of those the EU will foreseeably adopt in relation to other platform users (here).


ANNEX

EXECUTIVE ORDER
- - - - - - -
PREVENTING ONLINE CENSORSHIP

By the authority vested in me as President by the Constitution and the laws of the United States of America, it is hereby ordered as follows:

Section 1. Policy. Free speech is the bedrock of American democracy. Our Founding Fathers protected this sacred right with the First Amendment to the Constitution. The freedom to express and debate ideas is the foundation for all of our rights as a free people.

In a country that has long cherished the freedom of expression, we cannot allow a limited number of online platforms to hand pick the speech that Americans may access and convey on the internet. This practice is fundamentally un-American and anti-democratic. When large, powerful social media companies censor opinions with which they disagree, they exercise a dangerous power. They cease functioning as passive bulletin boards, and ought to be viewed and treated as content creators.

The growth of online platforms in recent years raises important questions about applying the ideals of the First Amendment to modern communications technology. Today, many Americans follow the news, stay in touch with friends and family, and share their views on current events through social media and other online platforms. As a result, these platforms function in many ways as a 21st century equivalent of the public square.

Twitter, Facebook, Instagram, and YouTube wield immense, if not unprecedented, power to shape the interpretation of public events; to censor, delete, or disappear information; and to control what people see or do not see.

As President, I have made clear my commitment to free and open debate on the internet. Such debate is just as important online as it is in our universities, our town halls, and our homes. It is essential to sustaining our democracy.

Online platforms are engaging in selective censorship that is harming our national discourse. Tens of thousands of Americans have reported, among other troubling behaviors, online platforms "flagging" content as inappropriate, even though it does not violate any stated terms of service; making unannounced and unexplained changes to company policies that have the effect of disfavoring certain viewpoints; and deleting content and entire accounts with no warning, no rationale, and no recourse.

Twitter now selectively decides to place a warning label on certain tweets in a manner that clearly reflects political bias. As has been reported, Twitter seems never to have placed such a label on another politician's tweet. As recently as last week, Representative Adam Schiff was continuing to mislead his followers by peddling the long-disproved Russian Collusion Hoax, and Twitter did not flag those tweets. Unsurprisingly, its officer in charge of so-called "Site Integrity" has flaunted his political bias in his own tweets.

At the same time online platforms are invoking inconsistent, irrational, and groundless justifications to censor or otherwise restrict Americans' speech here at home, several online platforms are profiting from and promoting the aggression and disinformation spread by foreign governments like China. One United States company, for example, created a search engine for the Chinese Communist Party that would have blacklisted searches for "human rights," hid data unfavorable to the Chinese Communist Party, and tracked users determined appropriate for surveillance. It also established research partnerships in China that provide direct benefits to the Chinese military. Other companies have accepted advertisements paid for by the Chinese government that spread false information about China's mass imprisonment of religious minorities, thereby enabling these abuses of human rights. They have also amplified China's propaganda abroad, including by allowing Chinese government officials to use their platforms to spread misinformation regarding the origins of the COVID-19 pandemic, and to undermine pro-democracy protests in Hong Kong.

As a Nation, we must foster and protect diverse viewpoints in today's digital communications environment where all Americans can and should have a voice. We must seek transparency and accountability from online platforms, and encourage standards and tools to protect and preserve the integrity and openness of American discourse and freedom of expression.

Sec. 2. Protections Against Online Censorship. (a) It is the policy of the United States to foster clear ground rules promoting free and open debate on the internet. Prominent among the ground rules governing that debate is the immunity from liability created by section 230(c) of the Communications Decency Act (section 230(c)). 47 U.S.C. 230(c). It is the policy of the United States that the scope of that immunity should be clarified: the immunity should not extend beyond its text and purpose to provide protection for those who purport to provide users a forum for free and open speech, but in reality use their power over a vital means of communication to engage in deceptive or pretextual actions stifling free and open debate by censoring certain viewpoints.

Section 230(c) was designed to address early court decisions holding that, if an online platform restricted access to some content posted by others, it would thereby become a "publisher" of all the content posted on its site for purposes of torts such as defamation. As the title of section 230(c) makes clear, the provision provides limited liability "protection" to a provider of an interactive computer service (such as an online platform) that engages in "'Good Samaritan' blocking" of harmful content. In particular, the Congress sought to provide protections for online platforms that attempted to protect minors from harmful content and intended to ensure that such providers would not be discouraged from taking down harmful material. The provision was also intended to further the express vision of the Congress that the internet is a "forum for a true diversity of political discourse." 47 U.S.C. 230(a)(3). The limited protections provided by the statute should be construed with these purposes in mind.

In particular, subparagraph (c)(2) expressly addresses protections from "civil liability" and specifies that an interactive computer service provider may not be made liable "on account of" its decision in "good faith" to restrict access to content that it considers to be "obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable." It is the policy of the United States to ensure that, to the maximum extent permissible under the law, this provision is not distorted to provide liability protection for online platforms that -- far from acting in "good faith" to remove objectionable content -- instead engage in deceptive or pretextual actions (often contrary to their stated terms of service) to stifle viewpoints with which they disagree. Section 230 was not intended to allow a handful of companies to grow into titans controlling vital avenues for our national discourse under the guise of promoting open forums for debate, and then to provide those behemoths blanket immunity when they use their power to censor content and silence viewpoints that they dislike. When an interactive computer service provider removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct. It is the policy of the United States that such a provider should properly lose the limited liability shield of subparagraph (c)(2)(A) and be exposed to liability like any traditional editor and publisher that is not an online provider.

(b) To advance the policy described in subsection (a) of this section, all executive departments and agencies should ensure that their application of section 230(c) properly reflects the narrow purpose of the section and take all appropriate actions in this regard. In addition, within 60 days of the date of this order, the Secretary of Commerce (Secretary), in consultation with the Attorney General, and acting through the National Telecommunications and Information Administration (NTIA), shall file a petition for rulemaking with the Federal Communications Commission (FCC) requesting that the FCC expeditiously propose regulations to clarify:

(i) the interaction between subparagraphs (c)(1) and (c)(2) of section 230, in particular to clarify and determine the circumstances under which a provider of an interactive computer service that restricts access to content in a manner not specifically protected by subparagraph (c)(2)(A) may also not be able to claim protection under subparagraph (c)(1), which merely states that a provider shall not be treated as a publisher or speaker for making third-party content available and does not address the provider's responsibility for its own editorial decisions;

(ii) the conditions under which an action restricting access to or availability of material is not "taken in good faith" within the meaning of subparagraph (c)(2)(A) of section 230, particularly whether actions can be "taken in good faith" if they are:

(A) deceptive, pretextual, or inconsistent with a provider's terms of service; or
(B) taken after failing to provide adequate notice, reasoned explanation, or a meaningful opportunity to be heard; and

(iii) any other proposed regulations that the NTIA concludes may be appropriate to advance the policy described in subsection (a) of this section.

Sec. 3. Protecting Federal Taxpayer Dollars from Financing Online Platforms That Restrict Free Speech. (a) The head of each executive department and agency (agency) shall review its agency's Federal spending on advertising and marketing paid to online platforms. Such review shall include the amount of money spent, the online platforms that receive Federal dollars, and the statutory authorities available to restrict their receipt of advertising dollars.

(b) Within 30 days of the date of this order, the head of each agency shall report its findings to the Director of the Office of Management and Budget.

(c) The Department of Justice shall review the viewpoint-based speech restrictions imposed by each online platform identified in the report described in subsection (b) of this section and assess whether any online platforms are problematic vehicles for government speech due to viewpoint discrimination, deception to consumers, or other bad practices.

Sec. 4. Federal Review of Unfair or Deceptive Acts or Practices. (a) It is the policy of the United States that large online platforms, such as Twitter and Facebook, as the critical means of promoting the free flow of speech and ideas today, should not restrict protected speech. The Supreme Court has noted that social media sites, as the modern public square, "can provide perhaps the most powerful mechanisms available to a private citizen to make his or her voice heard." Packingham v. North Carolina, 137 S. Ct. 1730, 1737 (2017). Communication through these channels has become important for meaningful participation in American democracy, including to petition elected leaders. These sites are providing an important forum to the public for others to engage in free expression and debate. Cf. PruneYard Shopping Center v. Robins, 447 U.S. 74, 85-89 (1980).

(b) In May of 2019, the White House launched a Tech Bias Reporting tool to allow Americans to report incidents of online censorship. In just weeks, the White House received over 16,000 complaints of online platforms censoring or otherwise taking action against users based on their political viewpoints. The White House will submit such complaints received to the Department of Justice and the Federal Trade Commission (FTC).

(c) The FTC shall consider taking action, as appropriate and consistent with applicable law, to prohibit unfair or deceptive acts or practices in or affecting commerce, pursuant to section 45 of title 15, United States Code. Such unfair or deceptive acts or practice may include practices by entities covered by section 230 that restrict speech in ways that do not align with those entities' public representations about those practices.

(d) For large online platforms that are vast arenas for public debate, including the social media platform Twitter, the FTC shall also, consistent with its legal authority, consider whether complaints allege violations of law that implicate the policies set forth in section 4(a) of this order. The FTC shall consider developing a report describing such complaints and making the report publicly available, consistent with applicable law.

Sec. 5. State Review of Unfair or Deceptive Acts or Practices and Anti-Discrimination Laws. (a) The Attorney General shall establish a working group regarding the potential enforcement of State statutes that prohibit online platforms from engaging in unfair or deceptive acts or practices. The working group shall also develop model legislation for consideration by legislatures in States where existing statutes do not protect Americans from such unfair and deceptive acts and practices. The working group shall invite State Attorneys General for discussion and consultation, as appropriate and consistent with applicable law.

(b) Complaints described in section 4(b) of this order will be shared with the working group, consistent with applicable law. The working group shall also collect publicly available information regarding the following:

(i) increased scrutiny of users based on the other users they choose to follow, or their interactions with other users;

(ii) algorithms to suppress content or users based on indications of political alignment or viewpoint;

(iii) differential policies allowing for otherwise impermissible behavior, when committed by accounts associated with the Chinese Communist Party or other anti-democratic associations or governments;

(iv) reliance on third-party entities, including contractors, media organizations, and individuals, with indicia of bias to review content; and

(v) acts that limit the ability of users with particular viewpoints to earn money on the platform compared with other users similarly situated.

Sec. 6. Legislation. The Attorney General shall develop a proposal for Federal legislation that would be useful to promote the policy objectives of this order.

Sec. 7. Definition. For purposes of this order, the term "online platform" means any website or application that allows users to create and share content or engage in social networking, or any general search engine.

Sec. 8. General Provisions. (a) Nothing in this order shall be construed to impair or otherwise affect:

(i) the authority granted by law to an executive department or agency, or the head thereof; or

(ii) the functions of the Director of the Office of Management and Budget relating to budgetary, administrative, or legislative proposals.

(b) This order shall be implemented consistent with applicable law and subject to the availability of appropriations.

(c) This order is not intended to, and does not, create any right or benefit, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person.

DONALD J. TRUMP
THE WHITE HOUSE,
May 28, 2020.