“Artificial Intelligence (AI) has the potential to perpetuate and amplify discrimination,” the Chief Justice of India remarked recently while speaking at the 60th convocation of IIT Madras. AI uses algorithms that can shape, and significantly manipulate, human decisions on education, employment, credit, health insurance, and more. Because these systems lack Artificial General Intelligence (something akin to human common sense), their algorithmic decision-making is opaque, complex, and prone to bias, error, and discrimination.
The “right to explanation” outlined in the European Union’s (EU) data protection law, the General Data Protection Regulation (GDPR), offers a remedy by governing decision-making by the complex algorithms AI relies on. Despite being the world’s most populous country, India lacks AI regulation to address the challenges the technology poses for society, politics, data privacy, crime, and welfare.
Global research points to biases in data-driven systems. A recent European Union Agency for Fundamental Rights report highlighted ethnic and gender biases in AI-based offensive and hate speech detection: social media posts by a certain religious community were disproportionately flagged and taken down as ‘potentially offensive’ compared with posts by others. Other asymmetric AI behaviours have been reported as well, such as camera-based face recognition systems being markedly more accurate on fair-skinned persons than on dark-skinned persons.
Further, in his article for the Washington Post, Justin Jouvenal discusses “predictive policing” using PredPol, a crime-prediction system. He shows that the system’s crime predictions are biased against minorities. Another major worry with AI’s dominance is automated decision-making (without human involvement) and profiling.
The first of these, automated decision-making about individuals, carries significant influence in various sectors: a recruitment aptitude test scored by programmed algorithms against a set of criteria, an online decision to award a loan, and so on. These automated decisions need scrutiny, because algorithmic bias can affect the fundamental rights of specific sections of people.
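As a minimal sketch of what “solely automated” means in practice, the following Python fragment decides a loan with fixed, programmed criteria; the thresholds and feature names are entirely hypothetical, not drawn from any real lender:

```python
# A minimal sketch of solely automated decision-making, with entirely
# hypothetical criteria and thresholds. No human reviews the outcome.

def automated_loan_decision(credit_score: int, annual_income: int, defaults: int) -> str:
    """Approve only when every programmed criterion is met."""
    if credit_score >= 650 and annual_income >= 300_000 and defaults == 0:
        return "approved"
    return "refused"

# An applicant just below one threshold is refused, with no reason given.
print(automated_loan_decision(credit_score=640, annual_income=450_000, defaults=0))
```

The applicant learns only the verdict, not which criterion failed, which is precisely the opacity the GDPR provisions discussed below are meant to address.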
The second, profiling, is defined in Article 4(4) of the GDPR as the processing of personal data to analyse a person’s performance, economic situation, health, preferences, behaviour, and so on. Organisations might collect personal information from internet searches, lifestyle and behaviour data, buying habits, and the like, and use algorithms or machine learning to classify people into groups or segments. Again, the algorithms and machine learning doing this classification can be biased.
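To make the mechanics concrete, here is a sketch of such classification using the scikit-learn library on invented behavioural features; the data, features, and number of groups are all assumptions for illustration only:

```python
# A sketch of profiling: sorting people into segments from behavioural data.
# The features, values, and group count are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Each row describes one person:
# [hours online per day, purchases per month, health-related searches per week]
behaviour = np.array([
    [1.0,  2, 0],
    [6.5, 14, 1],
    [7.0, 12, 9],
    [0.5,  1, 8],
])

# Unsupervised learning assigns each person to a segment they never chose,
# and the segments may silently correlate with sensitive traits.
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(behaviour)
print(segments)
```

The person being profiled typically never sees these segments, even though downstream decisions about offers, prices, or risk may hinge on them.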
The key difference between a decision taken by a human alone and one assisted by AI is that, with a human, the person affected can seek an explanation from the decision-maker. With AI-involved decisions, there may be no one to ask, which is why they demand explicit mechanisms for explanation and accountability.
Recognising the risks posed by biased AI algorithms, the EU’s GDPR establishes a comprehensive and robust algorithmic accountability regime. It provides a “right to explanation”: Recital 71 offers interpretative guidance on the rights related to automated decision-making, while Articles 13 and 14 give individuals the “right to be informed” of the existence of solely automated decision-making, meaningful information about the logic involved, and its significance and consequences for them. Together with Article 22, this restricts the use of solely automated decision-making (without any human involvement) and profiling.
At the same time, it ensures that a person can obtain human intervention, express their point of view, get an explanation of the decision, and challenge it, securing transparency and accountability. Consider a person who applies for a loan from a bank, only to be rejected by an automated algorithm, leaving them unaware of the decision’s rationale.
The “right to explanation” changes this. It lets the person demand an explanation, allowing them to protect their rights actively. Moreover, ‘process-based’ explanations will help the public have a fruitful discussion about AI and its associated risks and benefits, and will keep AI human-centric by continuously improving it with user feedback.
The “right to explanation” has been labelled by some as ‘impractical’ and a harmful restriction on the advancement of AI. This mischaracterisation overlooks the socio-political framework within which the right operates. First, take the concern that highly technical algorithms cannot be explained: even as a first step, an explanation can enhance general understanding of how intricate socio-technical systems such as bank lending and credit management function.
An explanation can be a means to the ends a governance system pursues, enabling recipients to exercise their legal rights. It can give customers the detail they need to recognise that they have been wronged and to appeal a decision successfully. For instance, an individual customer must know the reasons a loan was refused and, ideally, what change in the input data would have produced a different output. Producing such explanations requires cooperation between experts with normative expertise (ethics, law) and computer science practitioners. Additionally, for lay users, the explanation has to be different: accessible and readily understandable.
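One technical form this takes is a counterfactual explanation: the smallest change to the input that would have flipped the outcome. A sketch under a hypothetical loan rule (the thresholds and the single searched feature are assumptions for illustration):

```python
# A sketch of a counterfactual explanation under a hypothetical loan rule:
# search for the smallest credit-score increase that flips the decision.

def decide(credit_score: int, annual_income: int, defaults: int) -> str:
    ok = credit_score >= 650 and annual_income >= 300_000 and defaults == 0
    return "approved" if ok else "refused"

def counterfactual_credit_score(credit_score, annual_income, defaults,
                                step=5, ceiling=850):
    """How much higher did the credit score need to be? None if no amount helps."""
    trial = credit_score
    while trial <= ceiling:
        if decide(trial, annual_income, defaults) == "approved":
            return trial - credit_score
        trial += step
    return None  # changing this feature alone cannot flip the outcome

delta = counterfactual_credit_score(640, 450_000, 0)
print(f"Refused; a credit score {delta} points higher would have been approved.")
```

A real system would search across many features and query the deployed model itself, but even this toy version shows the shape of an actionable, lay-readable explanation.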
Second, technological advancement should never undermine people’s welfare and fundamental rights. For instance, new technology recently introduced into the Mahatma Gandhi National Rural Employment Guarantee Scheme (MGNREGS) resulted in many beneficiaries being unfairly struck off the rolls. The new mandatory Aadhaar requirement has led to denial of work and delayed payments, adversely affecting these individuals’ right to livelihood and social security.
Third, algorithmic bias arises from learning on data collected from the world, which means human influence persists despite automation. Furthermore, many of these systems are imported or manufactured by private entities, rendering them vulnerable to unauthorised access and storage. Because these machines are also vulnerable to cyber-attacks, a decision may rest on a manipulated calculation and cannot simply be trusted. This further underscores the need to focus on ‘process explanation’ rather than ‘product explanation’.
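One concrete ingredient of such a process explanation is a group-wise audit of outcomes. The sketch below computes a disparate-impact ratio (the ‘four-fifths rule’ used in US employment-discrimination practice) on invented decisions; the groups and outcomes are fabricated purely for illustration:

```python
# A sketch of a group-wise outcome audit on invented decisions. The
# disparate-impact ratio compares favourable-outcome rates across groups;
# values below ~0.8 fail the 'four-fifths rule' of US employment law.

decisions = [  # (group, outcome) pairs from a hypothetical automated system
    ("A", "approved"), ("A", "approved"), ("A", "refused"), ("A", "approved"),
    ("B", "refused"),  ("B", "approved"), ("B", "refused"), ("B", "refused"),
]

def approval_rate(group: str) -> float:
    outcomes = [o for g, o in decisions if g == group]
    return sum(o == "approved" for o in outcomes) / len(outcomes)

ratio = approval_rate("B") / approval_rate("A")
print(f"Disparate-impact ratio: {ratio:.2f}")  # 0.33 here, far below 0.8
```

Audits of this kind examine the process that produces decisions rather than any single decision, which is exactly the emphasis the article argues for.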
Until every step of the process becomes explainable, algorithmic decisions should be treated as predictive rather than conclusive; only such explainability can uphold the integrity of the technology and assure transparency, reliability, and accountability.
