AI Positives and Negatives
Introduction to AI:
AI has been successful in creating its own space in almost every walk of life. It is a versatile tool that helps to analyse data and blend information creatively to enhance decision-making.
What is AI?
The phrase “artificial intelligence” (AI) refers to the overarching objective of enabling “computer systems to perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation.”[1] Sub-fields of AI include:
- a) Machine Learning: ML is a sub-field of AI in which computers analyse input data and generate predictions that improve with experience (see the sketch after this list).
- b) Deep Learning: A subset of ML in which several layers of computerized neural networks work together to produce a single output from a large number of inputs.
- c) Cognitive Computing: It aims to enhance the interaction between machines and humans so that machines can adopt behaviour inspired by humans.
- d) Computer Vision: As the name suggests, computer vision enables computers to recognize, evaluate, and interpret visual data.
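To make the machine learning idea above concrete, here is a minimal, purely illustrative sketch (the library, scikit-learn, and the toy pass/fail data are assumptions, not material from this article): a model is fitted on example data and then produces a prediction for an input it has not seen before.

```python
# Minimal illustration of the ML idea described above: a model "learns" from
# example data and then generates predictions for unseen inputs.
# The library (scikit-learn) and the toy data are assumptions for illustration.
from sklearn.linear_model import LogisticRegression

# Toy training data: feature vectors (hours studied, classes attended)
# and labels (1 = passed, 0 = failed).
X_train = [[2, 3], [4, 6], [8, 9], [1, 2], [7, 8], [3, 1]]
y_train = [0, 1, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)      # the model analyses the input data

print(model.predict([[5, 7]]))   # and predicts an outcome for a new input
```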
AI’s Dependence on Data:
AI-backed systems collect enormous amounts of data. Consider a case study: on April 10, 2018, Mark Zuckerberg was called to testify at a joint hearing of the Senate Judiciary and Commerce Committees.
During the hearing, Senator Orrin Hatch questioned, “How do you sustain a business model in which users do not pay for your service?” To which Zuckerberg merely replied, “Senator, we run ads.”
John Wanamaker famously remarked that half the money he spent on advertising was wasted; the trouble was that he did not know which half. Facebook’s ability to target particular demographics and its extensive user data are what make it such an appealing platform for placing advertisements. This targeting significantly increases the efficiency and efficacy of the advertising.[2]
The clash between AI and Human Rights:
Many nations use cutting-edge technologies, e.g., biometric tracking and video surveillance, to prevent illegal and harmful activity, including terrorist acts.
Our lives are safeguarded thanks to these government initiatives, which also serve to deter criminal activity. However, these same technologies actively follow and monitor ordinary citizens, which is a breach of their privacy, and in the future, may lead to discrimination on the grounds of political views, health issues, or even their religious beliefs.
Dignity, the guiding principle behind all human rights, and the idea of the inherent equality of all people are challenged by this technological advance. The preamble and first two paragraphs of the 1948 UDHR, as well as the United Nations Charter, demonstrate concern for dignity. All people have inalienable rights, which ensure natural equality. Algorithmic prejudices can result in social discrimination, so developing an ethical code for data becomes crucial to avoid amplifying discrimination.
Compliance with GDPR (General Data Protection Regulation)[3]:
GDPR is a set of guiding principles that direct companies and organizations to handle their customers’ personal data with the utmost integrity.
Personal data is any data sufficient to identify a person; such data is frequently used to build business models and marketing campaigns.
- a) Adherence to Ethical Standards: Data must not be gathered merely to be stored and used for some undefined future purpose; processing needs to have a clear goal. Be sincere, upfront, and straightforward about how you utilize data. Organisations are required to possess and preserve the necessary documents as proof of their adherence to the rules (a minimal sketch illustrating these principles follows this list).
- b) Legal Compliance: Personal data must be used lawfully, and adherence to the legally established norms is a must.
- c) Individual’s Rights: The individual whose data is gathered must be informed of the purposes for which their data will be utilized.
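As a minimal sketch of how these principles might be reflected in application logic (the record structure, field names, and checks below are assumptions made purely for illustration, not requirements quoted from the GDPR), a documented processing record can capture the clear purpose, the lawful basis, and the notice given to the data subject before any processing is allowed:

```python
# Hypothetical sketch: encode the GDPR principles above (clear purpose,
# lawful use, informed individual) as a documented record that doubles as
# compliance evidence. All names and fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ProcessingRecord:
    data_subject: str        # whose personal data is processed
    purpose: str             # the clear, documented goal of processing
    lawful_basis: str        # e.g. "consent" or "contract"
    subject_informed: bool   # was the individual told how the data is used?

def may_process(record: ProcessingRecord) -> bool:
    """Allow processing only when purpose, lawful basis, and notice are documented."""
    return bool(record.purpose) and bool(record.lawful_basis) and record.subject_informed

record = ProcessingRecord(
    data_subject="user-1042",
    purpose="order fulfilment",
    lawful_basis="contract",
    subject_informed=True,
)
print(may_process(record))  # True: processing may proceed and the record is kept as proof
```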
“Draft Ethics Guidelines for Trustworthy AI”:
The European Commission, in its communication in 2018, set forth its vision for artificial intelligence (AI), which encourages the development of ethical, safe, and cutting-edge AI “made in Europe.”
The Commission’s objective is underpinned by three pillars:
(i) increasing public and commercial investments in AI to encourage adoption;
(ii) preparing for socio-economic changes; and
(iii) offering an appropriate ethical and legal framework to promote European values.
Building trust
AI can assist in attaining sustainable development objectives, combating climate change, and assisting in better resource utilisation. It supports the optimization of our mobility and transportation infrastructures as well as our capacity to track development in relation to sustainability and social coherence metrics.
AI is thus a tool to improve both individual and societal well-being rather than an end in itself. For individuals and society to create, implement, and employ artificial intelligence, trust is a must. Unless AI is demonstrably trustworthy, adverse outcomes might occur and its adoption by consumers and the public could be hampered, impeding the achievement of AI’s enormous economic and societal advantages. The goal is to create an environment that will support the development and use of AI most effectively.
Understanding Data Ethics
Data science is a fascinating domain: on the one hand, it provides ample opportunities to improve the standard of living, for instance in the planning of smart cities; on the other hand, it often fails drastically to comply with ethical norms.
When reliance is placed solely on the algorithms to operate on sensitive data and analyze it to make choices, there is a reduction in human oversight over such AI-backed decisions, which naturally raises concerns about putting human rights at stake, among other things. Data science advancement and data ethics must go hand in hand.
The Love-Hate Relationship of AI and Data Protection Laws
- a) Problem with AI: AI brings a lot with it. Looking at the current trend, AI will probably eliminate the need for workers in some professions in the foreseeable future, even though it is already helping those in many others. In particular, if the data utilised in AI development represents only a small portion of the population or reflects pre-existing social bias, it may result in bias and new kinds of discrimination. Traditional ideas of urban and residential planning, which dedicate substantial areas to parking lots and garages, are expected to be challenged by AI. Likewise, if the data required for its development is concentrated in the hands of a small number of firms, AI may present significant antitrust difficulties.
- b) AI Boosting Data Protection: Scholars argue that AI can be used as a surveillance tool not only to monitor the use of an individual’s data, but also to respond in real time and prevent any wrongful usage or harnessing of such data for unintended purposes. Companies are deploying AI-based privacy solutions such as privacy policy scanners, which aim to comprehend these policies and present them in simple words that users can understand (a toy sketch follows this list). Last but not least, AI is helping businesses to deploy solutions that are more private and user-protective.
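To illustrate the privacy-policy-scanner idea mentioned above, the following is a deliberately toy sketch (the keyword list and sample policy text are invented for illustration; real products rely on far more sophisticated natural-language processing). It simply surfaces the sentences of a policy that mention sharing data with others.

```python
# Toy sketch of a "privacy policy scanner": flag the sentences of a policy
# that talk about sharing or selling user data. The keyword list and the
# sample policy below are assumptions made purely for illustration.
import re

SHARING_TERMS = ("share", "sell", "third party", "third-party", "advertiser")

def flag_sharing_clauses(policy_text: str) -> list:
    """Return policy sentences that mention sharing data with others."""
    sentences = re.split(r"(?<=[.!?])\s+", policy_text)
    return [s for s in sentences if any(term in s.lower() for term in SHARING_TERMS)]

policy = (
    "We collect your email address to create your account. "
    "We may share aggregated usage data with third-party advertisers. "
    "You can delete your account at any time."
)
for clause in flag_sharing_clauses(policy):
    print("Needs attention:", clause)
```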
The Data Protection Regulation’s Purview in the Context of AI
Personal Data
Instances involving personal data are governed by data protection legislation. Unfortunately, the connections and conclusions that may be drawn from aggregated data sets have significantly eroded the distinction between what is “personal” and what is not. Data users and regulators alike must make the tough decision of whether data should be regulated, since information that was formerly thought to be non-personal now has the potential to be personal data.
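To see how aggregation erodes the boundary between “personal” and “non-personal” data, consider a minimal, hypothetical linkage example (the datasets, column names, and names below are invented for illustration): two tables that each look innocuous can be joined on quasi-identifiers such as postcode and birth year to single out an individual.

```python
# Hypothetical linkage attack: neither table alone links a name to a medical
# condition, but joining them on quasi-identifiers (postcode + birth year)
# re-identifies the individual. All data below is invented for illustration.
import pandas as pd

# "Anonymised" health records released without names.
health = pd.DataFrame({
    "postcode": ["110001", "110002"],
    "birth_year": [1990, 1985],
    "diagnosis": ["diabetes", "asthma"],
})

# A public register that does contain names.
register = pd.DataFrame({
    "name": ["A. Sharma", "B. Verma"],
    "postcode": ["110001", "110002"],
    "birth_year": [1990, 1985],
})

# The join turns supposedly non-personal data into personal data.
linked = health.merge(register, on=["postcode", "birth_year"])
print(linked[["name", "diagnosis"]])
```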
Black-Box Problem
Advances in deep learning for computer vision have fostered a widespread view that the most accurate models for any given data science problem must be inherently complex and uninterpretable.
A “black-box” model in ML is derived from data by an algorithm, and even its designers are not in a position to comprehend and explain how the variables are combined to generate results. Even with the full list of input variables in hand, no human can feasibly trace how those variables interact to reach a final prediction. Interpretable models stand in contrast to black-box models.[4]
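As a minimal sketch of that contrast (scikit-learn and its bundled toy dataset are assumptions made for illustration, not tools discussed in the cited work), a shallow decision tree can be printed and read as explicit if/then rules, whereas a multi-layer neural network trained on the same data offers no comparable human-readable account of how its inputs combine.

```python
# Contrast sketch: an interpretable model whose decision rules can be printed,
# versus a neural network whose learned weights do not read as an explanation.
# scikit-learn and the iris toy dataset are assumptions made for illustration.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

# Interpretable model: the fitted tree is a readable set of if/then rules.
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree))

# Black-box model: it can be accurate, but its thousands of learned weights
# offer no human-readable account of how the inputs were combined.
mlp = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=2000).fit(X, y)
print("MLP training accuracy:", mlp.score(X, y))
```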
India’s Revolution with AI
An approach centred around the idea that AI can improve lives and make society more equitable has placed the development, use, and promotion of AI high on the list of priorities for the Indian Government. In 2018, the Union government allocated a significant amount of funds, a 100% increase over prior investments, to research, training, and skill development in new technologies.
- An AI-powered smartphone anthropometry tool being tested by Wadhwani AI would enable medical professionals to detect low-birth-weight infants without the use of specialized equipment.
- A firm called NIRAMAI has created a handy, non-intrusive non-contact AI-based technology for early-stage breast cancer screening.
- IIT Madras researchers want to utilise artificial intelligence (AI) to anticipate the likelihood that expectant mothers will drop out of healthcare programs, to enhance targeted treatments, and to improve healthcare facilities for newborns and mothers.
Artificial Intelligence for All: NITI Aayog’s National Strategy[5]
According to estimates, AI has the potential to boost India’s GDP in 2035 by $957 billion, or 15% of current gross value added.[6] Yet India still lacks comprehensive AI-related legislation.
The Personal Data Protection Bill, 2019 (PDP Bill) is a piece of legislation dealing comprehensively with different facets of privacy regulation, with which decisions deployed through AI must comply, or at least strive to remain closely aligned.
It lays down restrictions on data processing, effective security measures to prevent the misuse of data and curtail data breaches, and specific rules dealing with vulnerable users, including minors. Additionally, it calls for dynamic data protection legislation in which the law is supported by rules and norms of conduct, facilitating the evolution of privacy protection in line with new technological developments.
IT Act
The Information Technology Act, 2000 is the cornerstone of India’s data protection laws. The IT Act’s provisions and the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 create a technology-neutral framework intended to protect the personal and sensitive information handled by corporate bodies.
Suggestions to Tackle the Ongoing Issues Pertaining to AI
- a) Principle of safety and reliability: For individuals putting their personal and sensitive data at stake, there must be monitoring to ensure that the risks associated with the AI system are identified first, and redressal mechanisms must be in place to deal with unexpected harm.
- b) Principle of Equality: The AI system must not be unjust; it must be deployed in such a manner as to eradicate bias and discrimination of any sort among individuals in similar circumstances.
- c) Inclusivity and Non-discrimination: AI systems must be free from unreasonable bias and must not deny opportunity to a qualified and eligible person on the grounds of his/her identity. They should ensure that unfair exclusion from services or benefits does not happen. In case of an adverse decision, an appropriate grievance redressal mechanism should be designed so that it is accessible and affordable to every individual.
- d) Privacy and Security: AI must ensure the privacy and security of the data of every individual and entity harnessed for training the system. Access must be given strictly to those authorized, with necessary and sufficient protections.
- e) Transparency: The design and functioning of the AI system should be documented and made available for external scrutiny, and audits should be carried out to the utmost extent to ensure that deployment is honest, fair, impartial, and accountable.
- f) Accountability: Every individual involved in the design, development, and deployment of the AI system must be held responsible for their actions regarding it. These individuals must conduct risk and impact assessments of the direct and indirect effects that AI systems can have on end users.
- g) Protection and reinforcement of positive human values: AI must promote positive human values and must not disturb social harmony among communities.
Conclusion
From the above discussion, it is clear that the impact of AI-based technology is enormous and irreversible; it is growing at a rapid pace, its nature is highly opaque, and it lacks transparency. Transparency in AI and the black-box problem in ML are inextricably linked. AI regulation, policy drafting, and future deliberations must be shaped by multiple disciplines on an equal footing, given the complex terrain of challenges posed by AI systems. If concerns about the protection of personal data and privacy are not adequately examined and effectively handled, the integration and growth of AI algorithms in the commercial and social sectors critical to improving India’s economic health will be disrupted.
The question of imposing civil or criminal liability where the use of AI results in damage or an offence harmful to the interests of society is of utmost importance in this context. Resolving it would help ensure harmony between law, ethics, and AI and ease the interaction among them.
Author: Saumya Singh, A student at Guru Gobind Singh Indraprastha University (GGSIPU), Delhi, in case of any queries please contact/write back to us via email to [email protected] or at IIPRD.
[1] Anant Manish Singh & Wasif Bilal Haju, Artificial Intelligence, IJRASET (2022), https://www.ijraset.com/research-paper/paper-on-artificial-intelligence.
[2] Christoph Bartneck et al., Privacy Issues of AI, in An Introduction to Ethics in Robotics and AI 61 (2021), https://doi.org/10.1007/978-3-030-51110-4_8.
[3] GDPR Summary, https://www.gdprsummary.com/gdpr-summary/?gclid=CjwKCAjw3POhBhBQEiwAqTCuBsW3C-L6YBJgfQZYta_hIC_iJZl6UuF2eSAXLyFvVYuaVXuOfINwTxoC8uYQAvD_BwE.
[4] Cynthia Rudin & Joanna Radin, Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From an Explainable AI Competition (2019), https://hdsr.mitpress.mit.edu/pub/f9kuryi8/release/8.
[5] Responsible AI #AIforAll: Approach Document for India, Part 1 – Principles for Responsible AI (2021), https://www.niti.gov.in/sites/default/files/2022-11/Ai_for_All_2022_02112022_0.pdf.
[6] Rewire for Growth (2021), https://www.accenture.com/in-en/insights/consulting/artificial-intelligence-economic-growth-india.