“No one should be at the mercy of an algorithm that they do not control, that was not designed to safeguard their interests and that tracks their behavior to collect personal data and keep them hooked,” the Secretary-General of the United Nations declared on Monday at the presentation of the United Nations Global Principles for Information Integrity.
The principles are a framework for international action to make information spaces safer, at a time when disinformation and incitement to hatred are fueling conflicts, threatening democracy and human rights, and undermining public health and climate action.
The proliferation of such content is now fueled by the rapid rise of artificial intelligence (AI) technologies, increasing the risks for groups targeted in information spaces, including children.
“Opaque algorithms push people into information bubbles and reinforce prejudices such as racism, misogyny and discrimination of all kinds. Women, refugees, immigrants and minorities are common targets,” said António Guterres.
The five principles are societal trust and resilience; independent, free and plural media; healthy incentives; transparency and research; and public empowerment. They are grounded in an overarching vision of a more humane information ecosystem, the UN chief added.
Responsibility of governments and the private sector
Guterres noted that the principles mark a clear path forward, firmly rooted in human rights, including the rights to freedom of expression and opinion.
In this context, he urgently called on governments, technology companies, advertisers and the public relations industry to take responsibility for their role in the dissemination and monetization of harmful content.
“Recognize the harm your products inflict on people and communities,” Guterres told big tech companies.
The erosion of information integrity jeopardizes the United Nations' own missions, operations and priorities, including vital humanitarian and peacekeeping efforts. “Our staff are facing a tsunami of falsehoods and absurd conspiracy theories,” said the Secretary-General in this regard.
The Principles are the result of extensive consultations with Member States, the private sector, youth leaders, the media, academia and civil society.
The recommendations they contain are designed to promote healthier and safer information spaces that defend human rights, peaceful societies and a sustainable future.

Secretary-General António Guterres briefs journalists at the launch of the Global Principles for Information Integrity.
A secure and private digital world
The proposals include:
Governments, technology companies, advertisers, media and other stakeholders must refrain from using, supporting or amplifying disinformation and hate speech for any purpose.
Governments must facilitate timely access to information, ensure a free, independent and plural media landscape, and guarantee strong protection for journalists, researchers and civil society.
Technology companies must guarantee security and privacy by design in all their products, along with consistent implementation of policies across countries and in all languages, paying particular attention to the needs of groups that are frequently targeted online. They must improve crisis response and take measures to support the integrity of information around elections.
All parties involved in the development of AI technologies must take urgent, inclusive and transparent steps to ensure that all applications are designed and used in a safe and ethical manner that respects human rights.
Ethical advertising and protection of children
Technology companies must move away from business models that rely on programmatic advertising and prioritize engagement over human rights, privacy and security, and should give users greater choice and control over their online experience and personal data.
Advertisers should demand transparency in the technology sector's digital advertising processes to help ensure that advertising budgets do not inadvertently fund disinformation or hate speech or undermine human rights.
Technology companies and AI developers must ensure meaningful transparency, allow researchers and academics access to data while respecting user privacy, commission publicly available independent audits, and develop industry accountability frameworks.
Governments, technology companies, AI developers and advertisers must take special measures to protect children, and governments must provide resources for parents, guardians and educators.