
Do we need a Bill of Digital Rights?

March 13, 2024

A Bill of Digital Rights

Jeremy Peckham, Research Lead at the AI, Faith and Civil Society Commission

Mohammed Ahmed, Research Manager at the AI, Faith and Civil Society Commission

Defining human rights

The UN Human Rights Charter and the US Declaration of Independence do not adequately define what inalienable rights are, especially in the context of modern technology and its impact on human nature. The idea of a Bill of Digital Rights would be to define the fundamental rights that humans should have in the digital world in which we now live.

A Bill of Digital Rights needs to cover several areas where AI applications can harm humanity. Some of these areas are introduced briefly below; please see our other case studies for more detail.

Cognitive acuity

Cognitive acuity is diminished through reliance on decision support systems (e.g. in financial, judicial and medical areas), and over time we are likely to lose it. In addition, these systems incorporate bias and lack transparency in respect of any decision reached; they deliver only a probability. It is naïve to think that such algorithms can be made transparent or free of bias, given that they are usually stochastic, not rule based, and data will always be biased because humans are biased. Decisions that impact individuals and groups must always have human oversight and a right of appeal, with that process involving human assessment and judgement only.

Relationships

Relationships are damaged through over-engagement with, and reliance on, digital assistants. The drive to create ever more realistic simulations of humanness is affecting relationships and communication, as well as encouraging gender stereotyping. Whilst there shouldn't be an outright ban on such devices, it should be a requirement that users always know they are interacting with an artefact, not a human. More empirical research is needed into the harms to humanity in this area, and into methods of ensuring that such artefacts do not appear human (e.g. the use of non-human voices). The evaluation of a user's emotions, personality and character by AI-based artefacts simulating a dialogue (e.g. interviewing systems) should be banned.

Freedom & privacy

Privacy and freedom are lost through the use of private data and the surveillance of citizens, whether by the state or by private companies. The use of AI to monitor, track and identify citizens from facial or other personal attributes is unprecedented in any civilisation and is quite different from the use of other biometrics such as fingerprints. The European Commission and many national governments are well aware of these dangers, and urgent action is required to prevent mass surveillance from becoming normalised.

Although not involving AI, the deployment of Covid-19 tracking apps has brought this prospect even closer. There should be an outright ban on the state's use of AI-based surveillance technologies. An even greater level of surveillance has already been established in the private sector through Big Tech's use of browsing data, shopping activity and data from a host of other gatherers such as Fitbit health monitors. Much of humanity has already lost its freedom and autonomy. GDPR legislation needs strengthening to prevent the extraction and exploitation of personal data. The practice of companies providing free services or products in exchange for data should be banned unless users give explicit and informed consent. Subscription models for these services, ones that do not rely on manipulative AI algorithms, would help to preserve our privacy and freedom.

Moral agency

In assigning moral agency to artefacts such as autonomous weapons and self-driving vehicles, humans are losing their own moral agency. Such delegation should be banned, and human decision making should be required wherever life is at risk.

Dignity of work

AI systems, including robotics, are already changing the workplace and displacing jobs. There is human dignity in work, and the provision of alternative work must be a condition of job replacement by AI systems and robotics, except where such systems preserve life by carrying out hazardous tasks.

Loss of reality

Through the overuse of Augmented and Virtual Reality systems we are in danger of losing a sense of what is real and embodied, and of becoming addicted to such immersive technology. Research is needed in this area to provide more empirical evidence and to inform potential health warnings for the use of such devices. Strong human oversight and a cautious approach to application development are required to protect humanity from the harms of losing touch with the real world and real relationships.

