Does AI signify the end for human rights?
As we stand at the cusp of a technological revolution, Industry 4.0, history quickens its pace under the impact of scientific progress. Artificial Intelligence (AI) is transforming industry and society into ones increasingly mediated by robotics and machinery. AI is an umbrella term that covers Machine Learning (ML), Natural Language Processing (NLP), algorithms, big data analytics, and much more. Because human intelligence is marked by inherent biases in decision-making, the same traits surface in AI products built on human-developed intelligence. These biases and discriminatory attitudes, rooted in a range of technologies and enmeshed in social frameworks, pose a worrying threat to universal human rights. It is beyond debate that AI affects the human rights of vulnerable individuals and groups by enabling discrimination, creating a new avenue of oppression through technological advancement.
AI as a tool of discrimination
As AI spreads through our (particularly urban) societies, discrimination and systematized racism have become increasingly prominent in political discourse about technological advancement. Article 2 of the Universal Declaration of Human Rights (UDHR) and Article 2 of the ICCPR both guarantee every individual all rights and freedoms without exception. This is, of course, difficult in practice, especially given the opinions and rhetoric that facilitate discriminatory attitudes, serve as a breeding ground for them, and are further exacerbated by oppression within human communication. Some hold a naïve view of AI as the answer to this obstacle, a wondrous tool that liberates humanity from the biases of human decisions, but such views ignore that AI's intelligence is an extension of human intelligence itself. After all, AI is a human creation.
In fact, AI algorithms and facial-recognition systems have consistently fallen short of a basic standard of equality, most notably in their discriminatory treatment of people of African origin. In 2015, Google Photos, widely regarded as a state-of-the-art recognition tool, labelled an image of two people of African origin as gorillas. When queries such as 'Black girls' were entered into Google Search, the algorithm returned sexually provocative content. Analysts have also found that an algorithm used to identify patients in need of extra clinical care underestimated the needs of patients of African origin.
Facial recognition systems are now being deployed in the justice systems of several jurisdictions, including India, Denmark, China, and Hong Kong, to identify suspects for predictive policing. Critics have pointed out that rather than improving police work, these algorithms reinforce and entrench existing law enforcement and legal practices rooted in discrimination.
The untested bias of these tools puts people of African origin at greater risk of being perceived as likely offenders, further reinforcing racist undertones in legal systems. The discrimination ingrained in AI undermines its revolutionary applications in society and violates the rights to equality and protection.
While society at large is now recognizing the rights of people of African origin, as manifested in the Black Lives Matter movement, the growing implementation of artificial intelligence across society is breeding cyber bias and recreating the very malaise we are fighting. Viewed this way, this advancement disproportionately harms the vulnerable by amplifying discrimination inherent in society at large.
Technology as the origin of unemployment
The right to work and safeguards against unemployment are guaranteed by Article 23 of the UDHR, Article 6 of the ICESCR, and Article 1(2) of the ILO. Although the rapid proliferation of AI has transformed organizations and personal lives by making infrastructure and services more efficient, it has also ushered in an age of unemployment through the displacement of labor. Reflecting on this technological evolution, Robert Skidelsky writes in his book 'Work in the Future' that "Sooner or later, we will exhaust our jobs." The point was reinforced by a groundbreaking study from Oxford researchers Frey and Osborne, which forecast that nearly half of U.S. jobs are susceptible to automation by AI technologies.
Several years ago, Changying Precision Technology, a Chinese factory manufacturing mobile phones, replaced 90% of its workforce with machinery, producing a 250% rise in productivity and a notable 8% drop in defective output. Likewise, Adidas is moving towards robot-only factories to improve efficiency. Economic and business growth is thus no longer dependent on human labor; indeed, the numbers suggest that human involvement may even hold production back. Notably, technology has hit low- and middle-skilled staff hardest, shrinking their job opportunities and wages and driving job polarization. Moreover, as technology continues to progress, many jobs we currently consider safe from automation will eventually be taken over by AI.
For instance, AI-driven digital assistants like Cortana, Siri, Alexa, and Google Assistant have gradually replaced personal assistants, foreign-language translators, and other business functions that historically depended on human presence.
The COVID-19 pandemic has already cost millions of jobs, and a new wave of AI advances may intensify the situation. As AI is embedded ever more deeply in employment sectors, the old adage of the poor growing poorer and the rich growing richer will be reinforced. AI is a new variant of capitalism that pursues profit at all costs without generating new jobs; rather, a human workforce is treated as an obstacle to business growth. There is therefore a pressing need to address AI's impact on social and economic rights through a techno-social governance framework that safeguards the right to work in the age of AI.
Population control and movement
Freedom of movement is enshrined in several international declarations and has been classified as a fundamental right by many nation-states. AI's capacity to restrict this right stems chiefly from its use for surveillance, for example in border management. A study from the Carnegie Endowment for International Peace found that 75 of 176 countries worldwide actively use AI for security purposes. Concerns have been raised about the disproportionate impact of such monitoring on populations that already face discrimination from law enforcement, such as people of African origin, refugees, and irregular migrants, since predictive policing tools end up incorporating "dirty data" that reflect conscious and overt biases.
The Guardian reported that, to curb illegal immigration, several towers fitted with laser-driven imaging equipment were set up at the US-Mexico border in Arizona. In addition, the US government employed a facial recognition tool to capture images of people in vehicles entering and leaving the country.
Technological change has also affected the military and humanitarian sectors. The escalating use of armed drones on the battlefield, particularly by the USA in Pakistan and Afghanistan, has been repeatedly condemned as a violation of international humanitarian law, notably in a 2010 UN report. An investigation by the Intercept into US military operations against the Taliban and al Qaeda in the Hindu Kush revealed that almost 90% of those killed in drone strikes were not the intended targets. The rapid development of automation and AI has produced fully autonomous weapons such as killer robots, which raise a variety of moral, legal, and security concerns. These machines' inability to make ethical judgments casts doubt on their reliability and margin of error, which may cause unintended deaths and rapidly escalating conflicts. A troubling article by Zachary Kallenborn illustrates the inability of these weapons to distinguish between targets and non-targets.
Further, the proliferation of humanitarian drones, whereby military technologies may be repurposed for humanitarian ends, has given rise to ethical dilemmas about how these technologies may harm populations in need. The consequences for vulnerable groups can be severe where private data has exposed them to further violence. Biometrics has been used by the UNHCR to register refugees; while it is assumed to be an objective means of identification, there is ample evidence that these tools merely codify discriminatory tendencies. For instance, biometric data collected from Rohingya refugees in the Indian subcontinent was used to enable their repatriation rather than their integration into society, further intensifying the duress experienced by this community.
The rapid rise of entities relying on AI for social control during the current pandemic has also raised troubling questions about privacy. Aarogya Setu, deployed by the Indian state, is a worrying combination of health data and electronic surveillance. These technologies pose a potent threat to fundamental human rights and can be used as tools of exploitation and oppression. Indeed, if the use of AI continues unregulated, the human rights of vulnerable populations will be eroded further.
We are living in the age of AI. The friction between AI and human rights grows ever more visible as these technologies become more fundamental to our daily lives and the operation of society. Although AI is typically viewed as a crowning accomplishment of modern society, the absence of information protection policies hands tech organizations a global village ripe for cyber exploitation. With minimal regulatory controls or accountability, these organizations willfully intrude into the private lives of citizens and increasingly violate basic human rights.
From fuelling discrimination to intrusive surveillance, AI has proved itself a threat to equal protection, economic rights, and fundamental freedoms. To turn this around, adequate legal provisions and standards must be integrated into our evolving societies. Greater transparency in AI decision-making, stronger accountability for technology organizations, and the capacity of civil society to challenge the deployment of these technologies in their communities are the need of the hour. AI literacy should also be cultivated by investing in public awareness and education initiatives, helping societies understand not just how AI functions but how it influences our everyday lives.
At the end of the day, we must remember that AI is man-made, and man's creation can only be as mature as its maker. Just as humanity needs regulation and control at the societal and community level, these disruptive technologies that radically alter the landscape of society need similar regulatory standards and controls, so that the integrity of humanity is maintained in this exciting era of incredible progress and potentially catastrophic danger.
The call for humanity to evolve through initiatives such as the Black Lives Matter movement is correlated with the need to regulate technologies such as AI, blockchain, and big data; indeed, they are one and the same. Human beings are regulated because we crave direction: we need guidelines, we need to know what is right and what is wrong, and why. It stands to reason that the technologies created by the very people who crave order and direction need the same order and direction if they are to be used appropriately and if the abuses that might occur along the road to unprecedented innovation are to be prevented.
Technology is merely a reflection of humanity, and never more so than today, with technologies such as deep learning, AI, artificial neural networks, and machine learning.
Unless adequate policies are put in place to protect the interests of human society, the future of human rights in the age of AI remains an open question.