Right to Explainability under the Right to Object to Automated Decision Making and whether the PDPB must Incorporate such a Provision
Author: Divya Pinheiro,
IIIrd year,
School of Law, Christ (Deemed to be University)
The technological revolution has brought with it developments that were unheard of before. Development in this regard has been all-encompassing, covering a range of sectors. Machines today can perform a wide range of activities that were long considered solely human functions; one such function is decision-making. These decisions include whether to extend credit, whether to hire or fire somebody, what educational track to put a student in, and so on. Algorithmic decision making is often preferred on the premise that it replaces something worse: human decision making. Human decision making is often considered biased, shaped by the experiences and natural biases that exist in the human mind, and human decisions often fail to rely equally on all pieces of information before them. Algorithmic decision making does have its merits, especially where the decision relies on clear mathematical data, transparent inputs and easily verifiable outputs.[1] But the certainty of this output can change drastically when the decision involves data that is limited and filled with proxies, and results that are difficult to test.[2]
The dominant reasoning in favour of regulating algorithmic decision making therefore stems from the fact that such decisions can be erroneous, based on incorrect facts and derived from incorrect inferences. The algorithms can be biased, reflecting the bias of the programmer.[3] Decision-making algorithms can also be programmed to be inherently discriminatory, or to hide biased decisions behind rational fronts (for example, the United States Department of Housing and Urban Development charged Facebook with violating the Fair Housing Act by using machine learning to discriminate in offering housing advertisements to users based on their membership in certain protected classes).[4] Regulatory concerns about algorithmic decision making can be categorised into three kinds: dignitary concerns, which propose that humans control algorithmic decision making to protect human dignity and autonomy; justificatory concerns, which advocate assessment of the legitimacy of algorithmic reasoning; and instrumental concerns, which call for regulation to prevent downstream problems such as error and bias.[5]
The European Union has attempted to address this policy vacuum through the introduction of the General Data Protection Regulation (GDPR),[6] which came into effect in the EU on May 25, 2018. The GDPR contains a significant set of rules on algorithmic accountability, imposing transparency, process, and oversight requirements on the use of computer algorithms to make significant decisions about human beings.[7] The GDPR is further substantiated by its recitals. The recitals are the extensive preambular provisions of the Regulation and do not have the direct force of law in the EU.[8] This is because a recital is supposed to “cast light on the interpretation given to a legal rule but cannot by itself constitute such a rule”;[9] recitals are thus an authoritative interpretation that can be referred to where the GDPR is vague.
There are four Articles of the GDPR that specifically address algorithmic decision-making. Article 22 of the GDPR addresses “automated individual decision-making, including profiling.”[10] Articles 13, 14, and 15 each contain transparency rights around automated decision-making and profiling.[11]
Article 13 establishes a series of notification rights and requirements when information is collected directly from individuals.[12] Article 14 establishes a similar set of notification rights and requirements when information about individuals is collected from third parties.[13] Article 15 creates an individual right of access to information held by a company that can be invoked “at reasonable intervals.”[14] All three Articles contain an identical provision requiring disclosure of “the existence of automated decision-making, including profiling.”[15] These provisions stipulate that, in the case of automated decision-making, the data subject has the right to access “meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.”
The language of these provisions has sparked debate. While Articles 13 and 14 require companies to notify individuals when data is obtained, Article 15 creates access rights that can be invoked at almost any time. Articles 13 and 14 might require an overview of a system prior to processing, but Article 15’s access right could provide deeper disclosure, including insight into a particular decision affecting a particular individual. There is also debate over the meaning of the phrase “meaningful information about the logic involved” and whether all three Articles refer to the same information or to different things. The text of the GDPR does not resolve this conflict one way or the other.
The text of the GDPR thus creates both transparency and process rights around algorithmic decision-making. The text itself, however, leaves considerable room for interpretation. But both accompanying and subsequent interpretative documents attempt to narrow and clarify the GDPR’s text, resolving a number of the conflicts discussed above.
Article 22 states that individuals “have the right not to be subject to a decision based solely on automated processing.”[16] The terms “meaningful information” and “significance” in Articles 13-15 can be understood in the light of Articles 21 and 22: the former lays down that a data subject has the right to object to the processing of his or her data, while the latter provides that the data subject has the right not to be made subject to a decision based solely on automated processing which produces legal effects concerning, or similarly significantly affecting, the data subject. This suggests that a data subject is entitled to such information about the automated system as would allow him or her to make an informed decision to opt out. Recital 71,[17] which explains the Regulation in non-binding language, states that automated processing must be subject to suitable safeguards, including the right to obtain specific information, to receive human intervention, to obtain an explanation of the decision reached after such assessment, and to challenge that decision. The right under Article 22 could be interpreted either as a right to object to such decisions or as a general prohibition on significant algorithmic decision-making. The former is the narrower interpretation; in practice it would allow companies to use algorithms routinely in significant decision making and to change their behaviour only if individuals invoke their rights. The latter interpretation would require every company using algorithmic decision making to determine for itself which exception it falls under and to implement safeguards to secure individual rights.[18]
There are three exceptions to the Article 22 right/prohibition. The first is when the automated decision is “necessary for … a contract.”[19] The second is when a Member State of the European Union has passed a law creating an exception.[20] The third is when an individual has explicitly consented to algorithmic decision-making.[21] Even when an exception to Article 22 applies, a company must implement “suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests ….”[22]
One aspect of algorithmic decision making that has received considerable attention is the likelihood of discrimination. Problems include data being collected in a non-representative manner, data that bakes in pre-existing human biases, and sets of input variables that are more predictive for one group than for another.[23] Most of the flaws found in algorithmic decision making are directly related to the data rather than to some technical detail of the algorithm itself that makes it perform worse for certain classes than for others.[24] Most legal scholarship refers to decisions taken by machines as a ‘black box’, with most people, including programmers, unable to understand the manner in which the algorithm processes data and produces results.[25] This further establishes the need for automated decision making to be made more transparent. There are four ways in which machine learning algorithms can be made explainable.
Under the first approach, explainability means that it must be possible to establish exactly what trait or characteristic of an individual or subject caused its prediction.[26] In other words, it is the ability to say exactly how a change in a certain input variable’s value for that individual would have yielded a different prediction, holding all of the individual’s other input variable values constant. This is a largely unattainable level of explainability: one cannot ask, for example, how a hypothetical criminal defendant’s prediction would differ if he were instead a thirty-five-year-old female, holding everything else about him constant, because it is not in fact possible to hold everything else constant outside of a true scientific experiment. A second, less useful way of asking for an explanation of automated decision making is to ask, in a general sense, why a programme made the decision it made. The predictable but vague and insufficient answer is that the algorithm’s predictions result from optimising the algorithm’s objective function.[27]
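By way of illustration only, the following sketch shows what a “hold everything else constant” query looks like when it is directed at the model itself rather than at the real world: a single input value for a hypothetical individual is changed and the model is asked to predict again. The model, the feature names (age, prior offences, employment) and the data are invented for the example and use scikit-learn; such a probe reveals only how the algorithm would respond to a different input, which, as noted above, is not the same as the unattainable real-world counterfactual.

# Illustrative sketch only: probing a hypothetical trained model by varying one
# input while holding the rest of the individual's record constant.
# The model, feature names, and values are assumptions made for demonstration,
# not taken from any system discussed in the text.
from sklearn.ensemble import RandomForestClassifier
import numpy as np
import pandas as pd

# Hypothetical training data: age, prior offences, employment status.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "age": rng.integers(18, 70, 500),
    "prior_offences": rng.integers(0, 10, 500),
    "employed": rng.integers(0, 2, 500),
})
y = (X["prior_offences"] > 4).astype(int)  # toy outcome variable

model = RandomForestClassifier(random_state=0).fit(X, y)

# The individual whose decision we want to probe.
person = pd.DataFrame([{"age": 22, "prior_offences": 3, "employed": 0}])

# Counterfactual probe: change only 'age', keep every other value the same.
counterfactual = person.copy()
counterfactual["age"] = 35

print("original prediction:      ", model.predict_proba(person)[0, 1])
print("counterfactual prediction:", model.predict_proba(counterfactual)[0, 1])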
There are two more attainable and useful versions of reason-giving. One approach is to attempt to describe how important the different input variables are to the resulting prediction, that is, how important a certain input was to the algorithm’s accuracy during training across many individuals. The output of such a method is displayed in what is known as a variable importance plot, which graphically displays the relative importance of different inputs.[28] Although such a method has not yet been widely applied to complex machine learning algorithms, the aim of such a system is to explain which variables mattered most in reaching an individual decision.[29] The last of the reason-giving methods attempts to describe how increases or decreases in the various input variables translate into changes in the outcome variable.[30] This attempt to reveal the “functional form” of the relationship between an input variable and the outcome variable is a useful way of understanding the correlations the algorithm relies on when producing a result. The explanation is presented by plotting the outcome variable as a function of the given input variable.[31]
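A rough, purely illustrative sketch of these two reason-giving methods is set out below, using scikit-learn on an invented dataset with hypothetical feature names. It computes a permutation-based variable importance score for each input (how much the model’s accuracy suffers when that input is shuffled) and then plots a partial dependence curve showing the predicted outcome as a function of a single input, which is one common way of approximating the “functional form” described above. It is not a reproduction of any method cited in the footnotes.

# Illustrative sketch of the two reason-giving methods: permutation-based
# variable importance and a partial dependence plot of the outcome as a
# function of one input. Dataset, model, and feature names are hypothetical.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance, PartialDependenceDisplay

rng = np.random.default_rng(1)
X = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, 1_000),
    "debt_ratio": rng.uniform(0, 1, 1_000),
    "age": rng.integers(18, 75, 1_000),
})
# Toy outcome loosely driven by debt_ratio and income.
y = ((X["debt_ratio"] > 0.6) & (X["income"] < 45_000)).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# 1. Variable importance: how much shuffling each input hurts accuracy.
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(X.columns, imp.importances_mean):
    print(f"{name:12s} importance: {score:.3f}")

# 2. Functional form: how the predicted outcome changes as one input varies.
PartialDependenceDisplay.from_estimator(model, X, ["debt_ratio"])
plt.show()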
The Committee of Experts on Data Protection under the chairmanship of Justice Srikrishna, in its report, refers to the principle of transparency in the light of the lack of information provided to data principals on how their information is to be processed. Accordingly, a data fiduciary has a duty to give the data principal notice of the collection of such information. It is also suggested that the information provided must include the basis of processing and the ability to withdraw consent. The White Paper on Data Protection in India notes that automated decision making may have adverse consequences and suggests that a practically enforceable right may be carved out to address them.
In my view, the PDPB must make efforts to incorporate the right to explainability and the right to object to an automated decision in its efforts towards ensuring a truly inclusive regime for the protection of data. While India does lag behind other jurisdictions in terms of technological development, it is submitted that the right to explainability remains an important right and a step in the right direction for the development of technology rights in the country. As laid down in the Puttaswamy judgement, the concept of privacy is “founded on the autonomy of the individual. The ability of an individual to make choices lies at the core of the human personality. The notion of privacy enables the individual to assert and control the human element which is inseparable from the personality of the individual.”
[1] Margot E. Kaminski, Binary Governance: Lessons from the GDPR’s Approach to Algorithmic Accountability, 92 S. Cal. L. Rev. 1529, 1540 (2019).
[2] Id.
[3] Danielle Keats Citron & Frank Pasquale, The Scored Society: Due Process for Automated Predictions, 89 WASH. L. REV. 1, 16-18 (2014).
[4] Facebook, Inc., FHEO No. 01-18-0323-8 (U.S. Dep’t of Hous. & Urban Dev. Mar. 28, 2019), https://www.hud.gov/sites/dfiles/Main/documents/HUD_v_Facebook.pdf (20 Jan 2020, 10 PM).
[5] Kaminski, supra note 1, at 1529.
[7] Regulation (EU) 2016/679, of the European Parliament and the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), 2016 O.J. (L 119) 1 at arts. 22, 13, 14, 15 [hereinafter GDPR].
[8] Sandra Wachter, Brent Mittelstadt & Luciano Floridi, Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation, 7 INT’L DATA PRIVACY L. 76, 80 (2017).
[9] Margot E. Kaminski, The Right to Explanation, Explained, 34 Berkeley Tech. L.J. 189, 194 (2019).
[10] GDPR, supra note 7, at art. 22.
[11] Id. at arts. 13(2)(f), 14(2)(g), 15(1)(h).
[12] Id. at art. 13.
[13] Id. at art. 14.
[14] Id. at art. 15.
[15] Id. at arts. 13(2)(f), 14(2)(g), 15(1)(h).
[16] Id. at art. 22(1).
[17] GDPR, supra note 7, at recital 71.
[18] Kaminski, supra note 9, at 196-97.
[19] GDPR, supra note 7, at art. 22(2)(a).
[20] Id. at art. 22(2)(b).
[21] Id. at art. 22(2)(c).
[22] Id. at arts. 22(2)(b), 22(3).
[23] Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 CALIF. L. REV. 671, 677-87 (2016).
[24] David Lehr & Paul Ohm, Playing with the Data: What Legal Scholars Should Learn About Machine Learning, 51 U.C. Davis L. Rev. 653, 704 (2017).
[25] Nicholson Price II, Black-Box Medicine, 28 HARV. J.L. & TECH. 419, 421 (2015); Michael L. Rich, Machine Learning, Automated Suspicion Algorithms, and the Fourth Amendment, 164 U. PA. L. REV. 871 (2016).
[26] Lehr & Ohm, supra note 24, at 707.
[27] Id.
[28] Lehr & Ohm, supra note 24, at 708.
[29] Anupam Datta et al., Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems, 2016 IEEE SYMP. ON SECURITY & PRIVACY 598, 601, 608-09.
[30] Lehr & Ohm, supra note 24, at 709.
[31] Id. at 710.