ETHICAL DILEMMAS IN AI – An overview into the challenges in AI

3rd Year BBA LLB (Hons.),
School of Law, Christ (Deemed to be University), Bangalore.


“Artificial Intelligence” marks a new phase of the technological revolution, one that has made the world rethink what only humans were assumed capable of. AI today performs social functions that were previously subject to human judgment alone; although by definition it presupposes something artificial, the scope of its applications is remarkable. Yet however far AI has surpassed human beings in particular fields, from the standpoint of ethics its applications suffer from clear shortcomings. To quote Winston Churchill, “The price of greatness is responsibility.” In that spirit, the need of the hour is to bring genuine responsibility, transparency, auditability, incorruptibility, and predictability into modern AI systems, so as to bridge the gap between their efficient application and the ethical consequences that arise from it.

This paper therefore analyses ethical issues raised by modern applications of AI: algorithms that take over social functions must be predictable to those they govern; they must be robust against manipulation; their decision-making should extend beyond narrow task specificity to significantly general issues; and machines may eventually hold moral status. The paper then considers proposed innovations aimed at building a genuinely human element into AI, such as whole brain emulation (“uploading”) and the theory of superintelligence. Throughout, it draws parallels to trends in the automated self-driving car market and the issues that arise there.
I. Introduction
As recently as two decades ago, the proposition that one could create machines that think was dismissed as naïve and unviable; yet a world full of possibilities belongs only to those willing to dream. That dream has today become reality, with machines thinking well ahead of human brains and steadily shrinking the odds of impossibility.

However ideal and progressive such a scenario may seem in today's racing world, it must be taken with a measure of caution: these thinking machines pose specific ethical issues in their day-to-day operations. To open the discussion, consider a bank using a machine learning algorithm to recommend mortgage applications for approval. Gone are the days when applicants had to run from pillar to post through red tape and the unconcealed human bias of the former banking system, spending days of effort merely getting an application into the processing queue. With the advent of new technology, banks have deployed machine learning algorithms that process applications rapidly, accurately, and supposedly without bias. Yet it has been observed that even such an algorithm was not free of bias: statistics revealed a steady decline in the bank's approval rate for black applicants. It was far from easy for programmers to understand why an algorithm based on a complicated neural network or a genetic algorithm would exercise such racial bias, since inspection of such models is coarse and opaque; the issue is better pursued from the perspective of a machine learner using decision trees or Bayesian networks, whose reasoning is transparent to inspection. Viewed that way, it emerged that the machine had been basing its decisions on the address information of applicants who were born in, or had previously resided in, predominantly poverty-stricken areas. The example thus underlines the increasing importance of developing AI algorithms that are not just powerful and scalable but also transparent to inspection. Further, when AI algorithms take up cognitive work with social dimensions, they inherit the social requirements attached to that work before being put into general operation. It would indeed be frustrating to find that no bank in the world offers an application process free of such class-cutting bias.
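The inspection step described above can be sketched in code. The following toy example is entirely hypothetical (the feature names, thresholds, and tree are invented for illustration and imply no real bank's model), but it shows why a decision tree is transparent: we can both run it and walk it to list every feature it consults, exposing reliance on an address-based proxy.

```python
# Hypothetical hand-built decision tree for mortgage approval.
# All feature names and thresholds are illustrative assumptions.
tree = {
    "feature": "area_poverty_rate",     # a proxy for the applicant's address
    "threshold": 0.4,
    "left":  {"label": "approve"},      # poverty rate <= 0.4
    "right": {"feature": "income",
              "threshold": 30_000,
              "left":  {"label": "reject"},
              "right": {"label": "approve"}},
}

def predict(node, applicant):
    """Walk the tree for one applicant and return its decision."""
    while "label" not in node:
        branch = "left" if applicant[node["feature"]] <= node["threshold"] else "right"
        node = node[branch]
    return node["label"]

def features_used(node, acc=None):
    """Enumerate every feature the tree can consult -- the inspection step."""
    acc = set() if acc is None else acc
    if "label" not in node:
        acc.add(node["feature"])
        features_used(node["left"], acc)
        features_used(node["right"], acc)
    return acc

print(predict(tree, {"area_poverty_rate": 0.7, "income": 25_000}))  # reject
print(features_used(tree))  # the address proxy is plainly visible
```

A neural network offers no such straightforward enumeration of the features that drive its decisions, which is the asymmetry the paragraph above turns on.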

Transparency is not the only desirable feature of AI. Another vital ethical requirement is that AI algorithms taking over social functions be predictable to those they govern. An analogy makes the point: consider the legal principle of stare decisis, which binds judges to follow past precedent whenever possible. To an engineer, such deference to the past may seem flagrant when technology is driving every other domain toward the future; yet thinking futuristically is no vice, and the core of any legal system lies in providing a predictable environment in which human beings can optimize their lives[1].

It has also become increasingly important that AI systems be robust against manipulation. A machine vision system that scans airline luggage for bombs must be hardened against human adversaries searching for exploitable flaws in its algorithm; for example, a shape placed next to a pistol in one's baggage might neutralize its recognition. In light of such setbacks, it is worth noting the current controversy over automated self-driving cars: many critics regard basing their decisions on the conventional trolley problem as an open invitation to precisely this kind of manipulation.
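The manipulation worry can be illustrated with a minimal numpy sketch of an adversarial perturbation against a toy linear detector. The weights, input, and step size below are all invented for illustration, and the technique shown is the idea behind gradient-sign attacks in general, not any deployed screening system.

```python
import numpy as np

# Toy linear "threat detector": an item is flagged when w . x > 0.
w = np.array([2.0, -1.0, 0.5])   # hypothetical learned weights
x = np.array([1.0, 0.5, -1.0])   # an input that is correctly flagged

score = w @ x                    # 2.0 - 0.5 - 0.5 = 1.0 -> flagged

# An adversary who knows w nudges each component of the input against
# the weights (the gradient-sign idea), by a small epsilon.
eps = 0.6
x_adv = x - eps * np.sign(w)     # [0.4, 1.1, -1.6]

adv_score = w @ x_adv            # 0.8 - 1.1 - 0.8 = -1.1 -> slips through
print(score, adv_score)
```

The point is not the specific numbers but the structure of the attack: a small, deliberately chosen change to the input flips the detector's decision, which is exactly the "shape next to a pistol" failure mode.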

Another crucial social criterion for AI systems is being able to identify the person responsible for getting something done, and the person liable for the repercussions arising from such a command. When confronted with this question, modern bureaucrats often take refuge in established procedures that distribute responsibility so widely that, quite literally, the dilution defeats the object of deterrence. Of late, discussions have been sparked about automated self-driving cars and the imposition of sanctions such as no-fault liability on their owners, despite the cars having specific programmed override systems in case of failure[2].

Thus, all the criteria applied to humans performing social functions, namely responsibility, transparency, auditability, incorruptibility, and predictability, should apply equally to an algorithm intended to replace human judgment in those functions, lest innocent victims be left to scream in helpless frustration.
II. Artificial General Intelligence
There is near consensus among AI professionals that modern artificial intelligence falls short of human capability in some critical sense, even though AI systems have beaten humans in many specific domains such as chess. Some professionals have observed that as soon as a task is figured out, it ceases to be regarded as requiring intelligence. Chess was regarded as an epitome of human intelligence until Deep Blue defeated world champion Garry Kasparov, yet even in that momentous achievement AI researchers found the system missing the characteristic of generality[3]. Though Deep Blue became the chess champion, its exceptionality is limited to that one game: AI algorithms that match or exceed human performance do so in a deliberately programmed specific domain, and cannot perform in another domain, even one as close as checkers, let alone drive a car or make a scientific breakthrough.

An analogy illustrates this restriction to a specific domain: a bee exhibits competence at building hives and a beaver at building dams, but a human being observing both activities can perform both. The analogy raises the question whether human intelligence is truly general, or merely better at certain cognitive tasks than others; in any case, human intellect is surely and significantly more generally applicable than nonhominid intelligence.[4]
However, it remains undisputed that domain-specific AIs can outperform a human being within their own domain. Consider Deep Blue again: had it been programmed to assess specific local game behaviour, the programmers would have had to feed an enormous table of moves into the database, given the behemoth sample space of chess. Rather than face that herculean task, the programmers rested their confidence on Deep Blue's non-local criterion of optimality, namely preferring moves that steer the future of the board into the winning region as defined by the rules of chess. Deep Blue computed this non-local game map so intricately that the programmers themselves could not anticipate its response to particular moves.
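A non-local criterion of optimality, judging moves by where they steer the future of the game rather than by a stored table of responses, is the minimax idea. Below is a minimal sketch on an invented three-move toy game; the tree and its payoffs are hypothetical, and real chess engines add search depth limits, heuristics, and pruning on top of this skeleton.

```python
def minimax(node, maximizing=True):
    """Value a position by where it can lead, not by a move table.

    A node is either a terminal outcome (+1 win, -1 loss, 0 draw, from
    the program's point of view) or a list of successor positions.
    """
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Hypothetical tiny game: three available moves, opponent replies once.
game = [[+1, -1], [0, 0], [-1, -1]]
print(minimax(game))   # 0 -- best achievable against optimal play
```

Note that nothing in the code enumerates "good moves"; the move table the Deep Blue programmers avoided is replaced by evaluating consequences, which is why they could not anticipate the program's specific responses.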

Further, it is relatively easy to envisage the safety issues that may arise from an AI operating in a single specific domain, but it is a qualitatively different class of problem to handle an AGI operating across many domains at once.[5] A mundane example is the toaster: building one involves envisioning bread and designing a heating element suited to it. The toaster itself is unaware of the purpose for which it was built (to toast bread), so when some alien substance is placed in it instead of bread, it blindly applies the same heating algorithm that was programmed for the composition of bread. The failure arises because the design executes in an unenvisioned context with unenvisioned side effects.[6] The same issue surfaces with automated self-driving cars: as a plethora of cases across the world suggests, they perform the meticulous task of driving with ease in a controlled or disciplined environment such as a highway, but have more often failed in complex driving environments such as city traffic, where a more general kind of computation is required. That view is borne out by the fatal accident in Arizona in 2018 caused by a judgement error on board an automated Uber test vehicle.

Considering all of the above, to build an AI that acts safely across many domains, with many consequences, including problems its creators never specifically envisioned, one must specify good behaviour in terms such as “X, such that the consequence of X is not harmful to humans.” This kind of computation is non-local in nature and involves extrapolating the indirect consequences of actions. Inspecting such a cognitive design is also the crux of admitting it into real society: creators must be able to verify its assurances so as to distinguish sound ones from wishful ones.
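As a purely illustrative sketch (the world model, actions, and labels are all invented), the constraint "X such that the consequence of X is not harmful" amounts to filtering candidate actions through a predicted-consequence model rather than a task-local rule, as the toaster would need:

```python
# Hypothetical predicted-consequence model: action -> expected outcome.
WORLD_MODEL = {
    "heat_element_on": "bread_toasted",
    "heat_element_on_with_foil": "fire",
    "stay_idle": "nothing",
}

HARMFUL = {"fire"}

def is_permissible(action):
    """Permit X only if the predicted consequence of X is not harmful.

    Actions whose consequences the model cannot predict are also
    refused -- an unenvisioned context is treated as unsafe.
    """
    return WORLD_MODEL.get(action, "unknown") not in HARMFUL | {"unknown"}

print([a for a in WORLD_MODEL if is_permissible(a)])
```

The hard part, of course, is the world model itself: the paragraph's point is that this consequence-extrapolating computation is non-local and must itself be open to verification by the system's creators.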

One should also bear in mind that purely hopeful expectations have been a recurring concern in the recent past. Engineers must therefore aim to build an AGI that in a real sense thinks like a human being, with ethical cognition, rather than a futile product of ethical engineering.
III. Machines with Moral Status
Futuristic AI systems with moral capabilities raise a plethora of ethical issues. Our dealings with beings possessing moral status are not mere matters of instrumental rationality: for good reasons, human beings hold certain prerequisite notions that require them to treat such beings in certain ways, and to refrain from treating them in others. Francis Kamm[7] propounded the following definition, which will be used to highlight the essence of moral status in what follows:

X has moral status = because X counts morally in its own right, it is permissible/impermissible to do things for its own sake.

Against this definition of moral status, contrast a physical object such as a rock with a human being. A rock we may crush, pulverize, or subject to any treatment we please without wronging it. A human being, by contrast, must be treated in the ways she prefers: her physical property protected, her safety from harm by others ensured, and so on. In terms of the definition above, we can therefore conclude that human beings have moral status. Attribution of moral status has previously been a decisive factor in ethical debates in areas such as abortion and the status of the embryo, animal experimentation, and the legality of euthanasia, among other contested contexts. It follows that applying the same analysis to the modern AI systems in question should yield an efficient solution.

However, it is widely agreed that current AI systems have no moral status, as reflected in the undisputed permissibility of changing, copying, terminating, deleting, and using such computer programs as we please. That said, it should not be ignored that the moral constraints on our dealings with contemporary AI systems are all grounded in our responsibilities to other beings, not in any duties to the systems themselves.

While it is generally agreed that current AI systems lack moral status, it is far less clear what exactly confers moral status as such. Two criteria, however, are commonly proposed as linked to moral status, and they can be assessed individually or conjointly:
Sentience: the capacity for phenomenal experience or qualia, such as the capacity to feel pain and suffering.
Sapience: a set of capacities associated with higher intelligence, such as self-awareness and being a reason-responsive agent.
On these two criteria, a common view is that many animals have qualia and therefore possess some moral status, while human beings additionally possess sapience, which justifiably confers on them a higher moral status than animals. Moral status is further debated in the so-called borderline cases, which concern conferring moral status on specific categories such as "marginal humans" (newborn children and infants, who tend to lack sapience or awareness of their environment), whose full moral status is often disputed by analysts.[8]

This picture of moral status suggests that, in a hypothetical scenario, an AI that had qualia and thus possessed sentience would fare considerably better than the borderline cases just mentioned. The crux of the argument is that a sentient AI, though it may lack language and higher cognitive abilities, still exhibits reactions, in contrast to its earlier standing as a mere inanimate object. In the ideal situation where an AI system is both sentient and sapient, it can accordingly be regarded as a subject with full moral status.
IV. Minds with Exotic Properties
It is an accepted premise that AI systems may possess minds with exotic properties. In such technically advanced cases, ethical issues arise concerning the subjective rate of time in AIs and the problems of AI reproduction.[9]
The issue of the subjective rate of time arises when an AI's rate deviates drastically from that characteristic of a biological human brain. The concept is best explained by reference to a new-age, still hypothetical, technological process called "whole brain emulation," colloquially termed "uploading." The process transfers a human or animal intellect from its original neural implementation to a digital computer. It is usually carried out by performing a high-resolution scan of a particular brain (destroying the original physical structure by dissecting the brain into thin slices) using high-throughput microscopy combined with automated image recognition; this three-dimensional map of the brain's components and their interconnections is then combined with a library of advanced neuroscientific theory specifying the computational behaviour of the various kinds of neurons and synaptic junctions; lastly, the computational structure and its associated algorithmic behaviour are emulated on a powerful computer. The resulting upload may inhabit a simulated virtual reality, or it may be given a physical form with which to interact with the natural environment, thereby replicating the essential functional characteristics of the human brain. It is when such an upload succeeds that the real challenges begin. They plausibly include questions about the feasibility of the process, whether the resulting computer mind would be sentient of its physical surroundings, and how personal identity would survive the possibility of two like minds running in parallel. These questions have not received due attention to date, but the problem of the subjective rate of time has been discussed time and again.
The issue of the subjective rate of time is best described by an example. Suppose an upload processes its inputs many times faster than a human brain; the external world would then appear, in its outputs, to be slowed by that same factor. The upload might, for instance, watch a coffee mug fall slowly to the ground while sending off a couple of emails and finishing the morning newspaper in the meantime. One second of objective time would thus correspond to seventeen minutes of subjective time. Objective time is defined as the lapse of real time, whereas subjective time is perceived time, shaped by a confluence of perception-distorting factors. A person under the influence of a drug, for example, may perceive objective time as flowing slowly; in reality objective time is unaltered, while the subjective time is produced by the brain under that influence.
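The arithmetic implied by the example is straightforward: seventeen subjective minutes per objective second is a speed-up factor of 17 × 60 = 1020. A small helper makes the conversion explicit (the half-second fall time assumed for the mug is an invented figure):

```python
# The paper's figure: 1 s of objective time = 17 min of subjective time.
speedup = (17 * 60) / 1           # subjective seconds per objective second
print(speedup)                    # 1020.0

def subjective_seconds(objective_seconds, speedup=1020):
    """Convert elapsed real time to time as perceived by the fast upload."""
    return objective_seconds * speedup

# Assumed ~0.5 s of real fall time for the coffee mug:
print(subjective_seconds(0.5) / 60)   # 8.5 subjective minutes to watch it fall
```

At that rate, a half-second event stretches to eight and a half perceived minutes, which is what makes the emails-and-newspaper scenario coherent.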
Another critical exotic property of artificial intelligence relates to reproduction. Many of its empirical preconditions in humans simply do not apply to artificial intelligence. Natural biological reproduction is a confluence of DNA from both parents, and the process is exceedingly long, spanning a comprehensive period of nine months from the conception of the baby in the embryo to delivery. Humans have accordingly evolved a complex set of emotional adaptations around reproduction, nurturing, and the blissful child–parent relationship. It is therefore pertinent to note that these empirical conditions are entirely absent from the process of reproducing a machine intelligence.
To understand why reproduction is so distinctive in the context of AI, consider the possibility of rapid replication. Given access to computer hardware, it is highly viable for an existing AI to duplicate itself, complete with the computations it has been programmed with. Moreover, since the copy is completely identical to the original, it is born as a completely mature unit, immediately capable of replicating itself at the same expeditious pace. Absent hardware limitations, AIs could therefore grow exponentially, with a doubling time on the order of minutes rather than the decades or centuries required by human populations. Left unregulated, such a scenario could lead to the domination of AIs through sheer mass production, and to a resource crisis as explosive population growth outstrips already scarce resources.
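The contrast in doubling times can be made concrete. The sketch below assumes a purely hypothetical AI that copies itself every ten minutes with no hardware limits; the numbers are illustrative, not a prediction:

```python
def population_after(minutes, doubling_time_minutes, initial=1):
    """Copies of a self-replicating AI after repeated complete doublings."""
    doublings = minutes // doubling_time_minutes
    return initial * 2 ** doublings

# Hypothetical: one AI copying itself every 10 minutes.
print(population_after(60, 10))        # 64 copies within an hour
print(population_after(24 * 60, 10))   # 2**144 copies within a day
```

A single day of unchecked ten-minute doublings yields 2^144 (more than 10^43) copies, which is why the hardware constraint, rather than anything analogous to gestation, is the only brake on the process.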
V. Conclusion
Although current AI presents few moral issues not already present in the design of automobiles or power plants, the approach of AI algorithms toward increasingly human-like behaviour forecasts foreseeable difficulties. Social roles filled by AI algorithms impose new design requirements, such as transparency and predictability. Sufficiently general AI algorithms may no longer execute in predictable contexts, requiring new kinds of safety assurance and the engineering of artificial ethical considerations. AIs with sufficiently advanced mental states, or the right sort of states, may plausibly attain moral status and to that extent qualify as persons, though very unlike biological persons, to be governed by legislation of a higher standard. Finally, the prospect of AIs with superhuman intelligence and superhuman abilities presents the extraordinary challenge of stating an algorithm that outputs superethical behaviour.

[1] The Cambridge Handbook of Artificial Intelligence 316–332 (Keith Frankish & William M. Ramsey eds., 2014).

[2] John W. Campbell & Isaac Asimov, Runaround, Astounding Science Fiction (1942).

[3] Nick Bostrom, Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards, 9 Journal of Evolution and Technology (2002).

[4] Nick Bostrom, The Future of Human Evolution (last visited Feb 22, 2019).

[5] David John Chalmers, The Conscious Mind: In Search of a Fundamental Theory (2007).

[6] Ben Goertzel, Artificial General Intelligence (2006).

[7] F. M. Kamm, Intricate Ethics: Rights, Responsibilities, and Permissible Harm (2018).

[8] Trevor Hastie, Robert Tibshirani & J. H. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction (2004).

[9] Lawrence A. Hirschfeld & Susan A. Gelman, Mapping the Mind: Domain Specificity in Cognition and Culture (1994).
