
“If the moon, in the act of completing its eternal way around the earth, were gifted with self-consciousness, it would feel thoroughly convinced that it was travelling its way of its own accord…So would a Being, endowed with higher insight and more perfect intelligence, watching man and his doings, smile about man’s illusion that he was acting according to his own free will.” – Albert Einstein
Einstein, like modern neuroscience, rejected the notion of free will, finding the position untenable in the face of our increasingly intimate understanding of causality. However, I resonate with an unintentional sentiment in this quote: that there is beauty in the human condition, and that our consciousness might yet be uplifted in spite of its evanescent volitional power. But modern philosophy can be cynical, and its approach to free will tends to trample upon such sentiments, discarding them in favour of reductionism. Gilbert Ryle famously derided the idea of any meaningful mental substance as a ‘myth,’ dubbing it the “Ghost in the Machine.” I have always found discussions of free will intensely frustrating; there is little that can be done to deny the impotence of consciousness in the context of contemporary neuroscience, yet there remains an intense inclination within me to uphold its significance. Reductionist approaches to free will regard consciousness as merely epiphenomenal; they maintain that both human thought and action derive secondarily from our neurological processes, and are thus outside of our conscious control. Further, in subsequent discussions of moral responsibility, they arrive at an airtight exclusion of consciousness by restricting their purview entirely to rationality. However, I have found that by considering these arguments in light of the recent and rapid developments in AI, such approaches can be unveiled as entirely inappropriate in their dismissal of consciousness, and we may yet find a new purpose for our precious “illusions.”
First, we must proceed from the assumption that findings in neuroscience do in fact reduce both action and conscious will to something outside of our control. This is perhaps best summed up by Daniel Wegner (2018, 169), who argues the reductionist perspective that “unconscious and inscrutable methods create both conscious thought about the action and the action, and also create the sense of will we experience by perceiving the thought as cause of the action.” The mental states that make up our conscious experience do not causally influence our actions and are therefore merely an epiphenomenon. This is to say that they are a byproduct of the physical systems and connections which make up our brain’s operation – the “inscrutable methods” – and as such, Wegner (2018, 15) regards that “conscious will is an illusion.” The consequence of this is that we have no true agency. This paper will not attempt to argue for a definition of free will, but will assume a necessary condition: that our consciousness is able to cause action. This aligns with the classical definition of free will – an ability to do otherwise – since if consciousness is causally disconnected from action, this ability does not exist. Arguably, my definition also aligns with some reasons-responsive definitions of free will, since if consciousness is an epiphenomenon, then any reasons or desire for action are not truly owned by the agent in question. So, whilst these reasons may be the cause of action, they are a part of these “inscrutable methods,” and thus it is not truly the consciousness which is responsive to these reasons. Whether this definition is the best representation of free will is not of great concern; it is enough to argue that it encapsulates some significant portion of our intuition about what it means to freely will something, and thus it is acceptable to proceed with it. Therefore, if it is the case that our conscious will cannot causally influence our actions, then we do not have free will in any significant sense. Again, whether this position is in fact true is not under debate here; it is a generally well-supported position, and as such a genuine possibility that requires philosophical attention.
From this reductionist perspective, there is a clear ethical consequence: how can we punish people if they are not the causes of their actions? According to Greene and Cohen (2004), human beings are punished based upon the notion of desert: “What is important for our purposes is that retributivism captures the intuitive idea that we legitimately punish to give people what they deserve based on their past action.” Retributivism is best explained as ‘an eye for an eye’: if you bring harm to another, breach some cultural standard, or commit any other action deemed wicked by society, then you should be given your ‘just deserts,’ and this is the primary purpose of punishment. There is thus an immediate tension between our intuitive system of punishment and the reduction of conscious will to mere epiphenomenon. If conscious will is an illusion, then it is unintuitive to say that people deserve to be punished for the actions they have committed. Their will would not have caused the action, and so they would have been as much a bystander to the process as another person witnessing it. Therefore, they could not have had moral responsibility.
However, there is no immediate issue with this reduction of conscious will and the law. Greene and Cohen (2004) argue that “the reason that the law is immune to such threats is that it makes no assumptions that neuroscience, or any science, is likely to challenge.” This is because the assumption that the law does make is that “people have a general capacity for rational choice,” to which their responsibility can be credited. Whilst Greene and Cohen do not dwell on it, it is important to my original definition of free will that we distinguish between rationality – on which the law relies – and reason. Wolfgang Welsch, alongside his comprehensive definition of rationality, offers this distinction: “Reason operates on a fundamentally different level from rationality. While forms of rationality refer to objects, reason focuses on the forms of rationality.” Consider this example: two people are standing outside a bathroom, and there is a spider in the sink. One loves spiders and rushes in to examine it; the other has a distaste for them and so remains outside. In both cases the subjects act rationally: the appearance of the spider prompts a decision that is directly related to it. An irrational response, by contrast, would be punching the wall out of fondness for or distaste towards the spider; that action bears no evident relation to liking or disliking spiders, nor to the goal of increasing or decreasing proximity to them. Perhaps if punching walls were a culturally significant way of expressing excitement or distaste – akin to cheering or booing – it might be rational, but presently we can recognise that it is not. Whilst both subjects acted rationally, they did so for different reasons, which reaffirms the idea that there are different systems of rationality based upon different reasons. Crucially, this explains how one subject might say the other’s response was not reasonable (based upon their preconception of the positive or negative worth of spiders), but could not decry it as irrational. However, consciousness did not necessarily have to be involved for these to be rational actions. Recall what Wegner (2018, 169) says: “unconscious and inscrutable methods create both conscious thought about the action and the action.” Each subject’s reason for acting – their like or distaste – is this conscious thought, and is the result of these unconscious methods, not their cause. On the reductionist view, this reasoning, the selection of their system of rationality, is only recognised epiphenomenally, after the fact. There is no selection, but rather a recognition of our specific system of rationality, and this forms the illusion of our “reason” for acting. Therefore, by showing that it is possible for rationality to be divorced from reason, we can argue that an action can be rational but not reasons-responsive. When we argue for a retributivist system of punishment, we assume that someone is eligible for desert because they are the owner of their reasons for action and, by causal proxy, of the actions themselves. The law, by attending only to the rational aspect of choice, is therefore not affected in the same way by neuroscientific findings about the nature and impotence of consciousness. And as a result, whilst neuroscience may preclude moral responsibility according to these intuitions, we can still have legal responsibility.
Greene and Cohen argue that this gap – between the way neuroscience affects the intuitions behind how we punish and the way it affects legal doctrine – is a potent one, as it will in time undermine our ability to have legal responsibility at all. Greene and Cohen (2004) posit that “the legitimacy of the law itself depends on its adequately reflecting the moral intuitions and commitments of society.” The issue is then a simple one: if neuroscience can eviscerate the foundations of our moral intuitions but leave our legal doctrine theoretically untouched, then our legal doctrine no longer reflects our moral intuitions. The legitimacy of the law is therefore severely undermined by neuroscience and its reductionist findings. Greene and Cohen appropriate Stephen Morse’s terminology, regarding this difference between the effects of neuroscience on the law and on our moral intuitions as a “fundamental psycho-legal error,” or “the gap between what the law officially cares about and what people really care about.” In this sense, the relationship between the law and intuition becomes clearer. The law on its own cannot hand down sentences or dole out punishment; it requires people to man its helm. There is a “tense marriage” between the law and our intuitions; its operation rests to some degree upon the intuitions of the judges and jurors who steer it. Thus, if the intuitions of these people become more clearly divorced from the way the law itself is framed, legal consequences will seem less like a direct consequence of the law and more like a simple exercise of people’s intuitions, something far less universal and arguably unfair. Greene and Cohen are thus justified in arguing that “new neuroscience will continue to highlight and widen this gap,” as we have shown that it is neuroscience that drives the distinction between moral and legal responsibility. Therefore, the loss of our moral responsibility, and the subsequent gap between it and our existing legal responsibility, will in time also erode our ability to have legal responsibility.
Greene and Cohen argue that the means of resolving this gap is not modification of the law, but the shifting of our intuitions towards consequentialist ones. Under consequentialism, “punishment is justified by its future beneficial effects…chief among them are the prevention of future crime through the deterrent effect of the law and the containment of dangerous individuals.” This is a shift they both “foresee and recommend,” primarily “because consequentialist approaches to punishment remain viable in the absence of common-sense free will,” and therefore “we need not give up on moral and legal responsibility” (Greene and Cohen 2004). Just as our law remains unaffected by a forced shift away from free will, so too does a consequentialist justification of punishment. The reasoning of consequentialism is focused entirely upon the net societal effects of punishment: deterring and containing bad actions. This calculation of the utility of punishment is orthogonal to the question of whether we have agency over our behaviour, because the behaviour and actions which are the focus of consequentialism exist regardless of free will. Therefore, because consequentialist approaches are compatible with a lack of free will, if our intuitions about punishment are shifted towards them, then we may still have moral responsibility in light of the neuroscientific reduction of conscious will. And because our intuitions about punishment would then be just as unaffected by neuroscience as the law, a shift towards consequentialism would bridge the gap between the two, resolve this psycho-legal error, and allow us to have both moral and legal responsibility.
However, I believe that the argument for this shift to consequentialism highlights a potential issue regarding legal responsibility and the development of AI. In order to side-step the free will challenge, a shift towards consequentialism must place all of its moral and legal responsibility eggs in the basket of rationality; that is, a human’s ability to be responsible for their actions hinges on their capacity for rationality. Therefore, I believe that if AI can be shown to be rational in the same way as humans, we can argue that it too can have legal responsibility in a consequentialist system. First, Zambak (2018) provides an analysis of free will as rationalised action that offers the analogous possibility for AI to possess free will. He argues that “an agentive action as the origin of the free choice is caused by prior events. Rationalized action is the source of this causation. Therefore, the rationalization process of an agent can provide a causal analysis of free will.” In a similar vein to my spider example, Zambak offers scenarios with corresponding sets of rational choices. In one scenario, “F” – leaving your apartment on the 14th floor – the two rational options are “F1”, using the elevator, and “F2”, using the stairs. Convincingly, he argues that given the scenario, leaving through the window is not rational, and thus is not among our choices. He therefore argues that “our free will is the rational choice of actual events in a limited and predetermined condition.” Whilst I disagree with the characterisation of this as a free choice, given my earlier definition of free will and the distinction between reason and rationality, the value of Zambak’s argument for our purposes is that it opens the door to AI rationality. Zambak identifies that “in AI, it is possible to construct a rationalization model and give the analysis of causation of agentive actions. Moreover, AI can simulate these causations in machine intelligence.” Arguably, we have already arrived at such a point in AI development, with recent breakthroughs in “deep learning.” These machine learning systems are loosely modelled on the way human brains process information, aligning with Zambak’s specifications for a “rationalised” AI. Furthermore, Zambak argues that because “agentive action is the only condition for the occurrence and analysis (in causational and rationalized form) of free will” and “these occurrence and analysis conditions can be modelled and simulated in machine intelligence,” it follows that “AI can possess the tools through which it can realize its autonomous free choices.” Again, I disagree with Zambak’s conclusion that AI would be autonomous in a free sense because of its ability to rationalise, but I agree that this rationalisation amounts to AI being the proximate cause of its actions in the same way classical agents (humans) are.
Similarly, Christina Mulligan (2017) argues that because actions by such AI are “neither proximately caused nor reasonably foreseeable by the robots’ manufacturers and developers…in cases of robots running black-box algorithms, the best answer to the question ‘What proximately caused this action?’ is ‘The robot.’ Any other answer tortures the definition of ‘proximate cause.’” This notion of proximate cause and AI responsibility will be expanded upon later, but for now I believe we have sufficient support to argue that AI can function as rational agents in the same way humans can, and can be the proximate causes of their own actions. Therefore, by shifting to consequentialism and routing all of human responsibility through the nexus of rationality, Greene and Cohen unintentionally advocate for a system in which AI are eligible as responsible agents, and thus there is no theoretical difference between humans and AI in this regard.
Now that we have established that AI can have rational agency and theoretical legal responsibility, we must tackle some of the intuitive responses that attempt to separate human and AI agents in a consequentialist, rationality-forward outlook. To separate AI and humans in a meaningful way, we must find an aspect of their respective rationalised decision-making processes where humans are distinctly set apart from AI. The first objection is that AI cannot come into existence on its own. This is a simple enough premise to accept: although humans are not an uncaused cause, and are brought into existence by other humans, we can reasonably consider them something of a blank slate. The creation of deep learning machines, however, necessarily requires some initial input from a programmer or software engineer, in spite of their ability to seek out data sets on their own in a way similar to humans. We could respond that there are currently deep learning systems with the ability to create other AI subsidiaries, such as Google AutoML, but ultimately this sort of artificial intelligence still has an identifiable initial cause, and so this response fails. We can, however, respond that humans and their rational systems can be influenced in ways similar to an AI being programmed. Greene and Cohen (2004) offer the example of The Boys from Brazil, a film about a group who attempt to recreate Adolf Hitler by raising a child “in environments that mimic that of Hitler’s upbringing.” This lends support to the claim that the aforementioned means of creation are irrelevant, as it is possible, through environmental factors and the established science of epigenetics, to shape the behaviour of a human agent in much the same way we might an AI.
Yet this still seems a flimsy argument, as influencing behaviour and psyche is not the same as writing out specific code for how an agent should learn or behave in certain situations. The AI, in the latter case, seems intuitively to be externally determined by another to a far greater degree, and thus less capable of being the proximate cause of its own actions. However, Greene and Cohen propose one more example building on the ideas introduced by The Boys from Brazil: Mr Puppet. Mr Puppet is a man accused of committing a crime, but it is revealed that not only his environment but his very genetic code has been engineered by a scientist with a near-perfect success rate in his other human-altering experiments. So specific is this meddling that the scientist attributes even the early onset of Mr Puppet’s angry letter-writing to “a handful of substitutions [he] made to his eighth chromosome,” and argues that Mr Puppet therefore “deserves none of the credit” for his actions. In this sense, we can see a very clear analogy between Mr Puppet and AI. Their code, be it computer binary or DNA nucleotides, was written out, and their exposure to data sets or environments was closely controlled to achieve a clear goal. As such, they were both clearly programmed. The only difference is the flesh and blood of a human as opposed to the circuitry of a computer, yet the electrical signals that make up neural connections and AI neural networks might well be the same. Either way, the artificial or natural origin of these “brains” is not relevant to their rationalisation; only the aforementioned programming is. Therefore, if we can show that Mr Puppet can still be legally responsible, this entails the same for AI.
From this, Greene and Cohen wrestle further with the issue of external versus internal determinism, and it thus seems we might once more have a relevant distinction between human rationality and AI rationality. Indeed, recalling the notion of programming, our initial understanding of AI as having an identifiable cause gives us justification for thinking that an AI’s ‘intentions’ are not its own, but rather external to it. In fact, Abbott and Sarch (2019) identify this in their summary of the “Eligibility Challenge” to AI legal responsibility: “As a mere machine, AI lacks mental states and thus cannot fulfill the mental state (mens rea) elements built into most criminal offenses. Therefore, convicting AI of crimes requiring a mens rea like intent…would violate the principle of legality.” However, I would agree with Greene and Cohen that this is a misunderstanding of intent based upon our libertarian moral intuitions. We fundamentally believe that we can will our own actions, and in clear-cut cases such as Mr Puppet’s, where “forces beyond his control played a dominant role in the production of his behavior,” we do not assign responsibility. Greene and Cohen argue that assuming there can be any internal determinism is in fact a mistake, and they ask, “what is the difference between Mr Puppet and anyone else accused of a crime?…we have little reason to doubt that (i) the state of the universe 10 000 years ago, (ii) the laws of physics, and (iii) the outcomes of random quantum mechanical events are together sufficient to determine everything that happens nowadays, including our own actions. These things are all clearly beyond our control. So what is the real difference between us and Mr Puppet?” The answer is: nothing. “The fact that these forces are being guided by other minds rather than simply operating on their own seems irrelevant…so long as his genes and environment are intrinsically comparable to those of ordinary people, this does not really matter” (Greene and Cohen 2004). An attempt to distinguish between external and internal causes and intentions is therefore not relevant, since all causation is in fact external to consciousness. In this light, we can more affirmatively argue that AI also has intent. “One conceivable way to argue that an AI…had the intention (purpose) to cause an outcome…would be to ask whether the AI was guiding its behavior so as to make this outcome more likely” (Abbott and Sarch 2019). With the distinction between internal and external determination of actions eliminated, this machine form of intention functions identically to the human intention identified in Mr Puppet. Crucially, Greene and Cohen regard Mr Puppet as legally responsible in their consequentialist system because of his rationality, and by this same reasoning, so is AI. Because both are determined by factors outside of their control, there is no difference between the legal responsibility of the caricaturesque Mr Puppet and that of a regular human agent, and because all that distinguishes Mr Puppet from AI is flesh and blood, AI is capable of legal responsibility too.
The consequence of AI meeting the standard for legal responsibility is that we now have a new psycho-legal error: AI is legally responsible in a consequentialist system, but there is great difficulty in punishing it. Abbott and Sarch (2019) implore us to “recall that, arguably, the paramount aim of punishment is to reduce harmful criminal activity through deterrence. Thus, a preliminary objection to punishing AI is that it will not produce any affirmative harm-reduction benefits because AI is not deterrable…if AI cannot detect and respond to criminal law sanctions in a way that renders it deterrable, there would be nothing to affirmatively support punishing AI.” Abbott and Sarch do consider the difference between specific deterrence – deterring the specific agent from committing crimes in the future – and general deterrence – deterring other actors from committing crimes – in order to provide some semblance of AI deterrence. They argue that “direct punishment of AI could provide unrestricted general deterrence against the developers, owners, or users of AI and provide incentives for them to avoid creating AI that cause especially egregious types of harm without excuse or justification.”
However, this argument fails for three reasons. First, it assumes that AI action is always reducible to the intent of developers or operators. On the contrary, Abbott and Sarch also identify the “AI-Criminal gap,” wherein unforeseeable harmful actions taken by AI, despite the reasonable care of programmers, cannot be attributed to any one culpable individual; punishment in such cases would likely involve overly broad liability, with severe legal and commercial consequences. Second, punishment of the AI itself would likely have to be severe – possibly including its destruction – for it to have any deterrent effect on the “upstream actors” who stand to gain from the AI’s operation. We might analogise such a severe response to one of the objections to consequentialism that Greene and Cohen (2004) identify: “imposing the death penalty for parking violations would maximize aggregate welfare by reducing parking violations to near zero.” They continue that this is absurd, offering that “People everywhere would live in mortal fear of bureaucratic error.” It would seem, then, that for an irreducible AI to serve as a general deterrent, we would have to accept that these legally responsible agents are occasionally over-punished, which, as Greene and Cohen identify, “could never survive in a free society…[which] is required for the pursuit of most consequentialist ends.” This over-punishment is therefore at odds with consequentialism itself. Lastly, as identified in the first issue, for AI to act as a general deterrent it is assumed that we must somehow shift its responsibility onto another; we have to reduce it to a human agent. In that case we are absolving the legally responsible party, which clearly undermines our moral and legal intuitions. Simply put, if an AI is the responsible proximate cause of an action, it is not fair to automatically shift the blame to its creator, just as it would not be fair to immediately jail a mother for a crime her son independently committed. In any case, it does not seem as if we are able to punish AI without some societal consequences. This constitutes a psycho-legal error: if we are unable to punish the agent who is legally responsible for a crime, we still have a gap between our intuitions and how the law works. If AI is legally responsible, but we cannot fulfil our consequentialist intuitions of deterring or containing its harm, then there is clearly a new gap between the law and our intuitions. This poses a similar threat to the law as Greene and Cohen’s original error: the legitimacy of the law will be undermined. As such, we can conclude that it is not possible for consequentialism to persist whilst AI have legal personhood.
Similar challenges would, however, arise in a retributivist system, in spite of its redundancy if we accept that neuroscience precludes free will. John Danaher (2016) proposes this argument for a “retribution gap”:
(1) If an agent is causally responsible for a morally harmful outcome, people will look to attach retributive blame to that agent (or to some other agent who is deemed to have responsibility for that agent) — what’s more: many moral and legal philosophers believe that this is the right thing to do.
(2) Increased robotisation means that robot agents are likely to be causally responsible for more and more morally harmful outcomes.
(3) Therefore, increased robotisation means that people will look to attach retributive blame to robots (or other associated agents who are thought to have responsibility for those robots, e.g. manufacturers/programmers) for causing those morally harmful outcomes.
(4) But neither the robots nor the associated agents (manufacturers/programmers) will be appropriate subjects of retributive blame for those outcomes.
(5) If there are no appropriate subjects of retributive blame, and yet people are looking to find such subjects, then there will be a retribution gap.
(6) Therefore, increased roboticisation will give rise to a retribution gap.
This reveals a similar issue: our intuitions about punishment, whether based on notions of desert or on a desire for societal utility, cannot be satisfied when an AI is the proximate cause of an action. It therefore seems that for both consequentialist and retributivist intuitions about punishment, AI acts as a wedge whose legal responsibility drives a gap between those intuitions and legal doctrine. Thus, the psycho-legal error persists even if we consider a return to retributivist intuitions about punishment.
Furthermore, not only are we unable to punish AI, but its status as a legally responsible agent would entitle it to legal personality, which has undesirable social consequences. Abbott and Sarch (2019) identify that “Legal personality is necessary to charge and convict an AI of a crime,” and thus legal responsibility and legal personality go hand in hand. At this point, we cannot reject the legal responsibility of AI without undermining the ability of humans to have legal responsibility, and therefore we must concede that AI would have legal personality too. Abbott and Sarch note, however, that “full-fledged legal personality for AIs…with all the legal rights that natural persons enjoy, would clearly be inappropriate.” They argue that such legal personality would lead to “rights creep,” even if modified rights and obligations were used, as is the case with corporations. “Such rights,” they argue, “for corporations and AI, can restrict valuable human activities and freedoms.” It is therefore clearly in our best interests to argue that AI should not be deemed legally responsible, so that we may rescue ourselves from the societal consequences of its legal personality and from the psycho-legal error that would undermine our own legal principles.
I believe that the means by which we can rescue ourselves will therefore come not from a selection of intuitions about punishment, but from returning to an emphasis on consciousness. Crucially, the mistake made by Greene and Cohen, and by others who take neuroscience to reduce free will to predetermined physical processes, is that they ignore the lingering importance of consciousness. Because it is an epiphenomenon, and thus causally unrelated to our decision processes, it tends to be discarded from discussions of decision making and responsibility. I believe, however, that it provides an important and relevant distinction between our rationalisation process and that of AI, which will allow us to preclude AI in its current form from legal and moral responsibility. First, we must establish that AI does not have consciousness. Dehaene et al. (2017) address this question by distinguishing between two types of consciousness: “the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense).” It is perhaps arguable that deep learning machines could have consciousness in the first sense, given their ability to select data sets and initiate action without input; however, it will become clear in the next paragraph why, for my argument, the most important component is the second sense of consciousness. This second sense aligns well with Ned Block (1995) and his definition of “phenomenal consciousness,” which he regards as “what it is like to be in that state [of consciousness].” This would indicate that the experience of consciousness necessarily includes the experience of, and the ability to reflect on, emotion. AI is unable to do this, and whilst this will be expanded upon later, it is important to emphasise now that this precludes it from a “C2” conscious experience. Dehaene et al. confirm that “current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain,” and therefore we can regard AI as not being in possession of true consciousness.
The importance of epiphenomenal consciousness, and its ability to distinguish human rationality from AI rationality, lies in the authorship of our actions that this consciousness grants us. As Wegner (2018, 585) summarises: “the illusion makes us human.” Wegner (2018, 541) likens consciousness to a compass, arguing that “the experience of will is therefore an indicator…the feeling of doing tells us something about the operation of the ship…just as compass readings do not steer the boat, conscious experiences of will do not cause human actions.” This does not immediately explain why consciousness is relevant to our decision-making process, but Wegner (2018, 579–582) handily likens the reductionist perspective to that of robots. Wegner (2018, 580) analogises intentional murder, when void of consciousness, to “an act of clumsiness that happens to take a life.” A human, on this view, would essentially be a faulty robot, and such a consequence-focused approach seems to take all emphasis off authorship, because if the act was not willed, it does not seem to matter. However, Wegner (2018, 581–582) argues that a robot that could “keep track of what it was doing, to distinguish its own behaviour from events caused by other things” would, on the other hand, be able “to assess its behaviour with respect to the laws…and avoid future situations in which it might break laws.” Our conscious epiphenomena serve this role, acting as ways for us to identify, through our phenomenal conscious experience, the “meaning and likely occurrence of our behaviour.” Therefore “these thoughts about actions need not be causes…to serve moral functions” (Wegner 2018, 582). These thoughts about actions, which give us insight into the properties of our behaviour, are not available in the same way to deep learning AI; this human ability to reflect upon action is morally significant, and therefore marks a meaningful and relevant difference between the decision-making processes of humans and AI. “Illusory or not, conscious will is the person’s guide to his or her own moral responsibility for action…it tells us where we are and prompts us to feel the emotions appropriate to the morality of the actions we find ourselves doing” (Wegner 2018, 582). These emotions, inaccessible to AI, play an important moral role, and thus shape our ability to be responsible for actions, whether or not the reception of these emotions and the actions we take based upon them are truly caused by us. Therefore, by providing us with authorship over our actions, an emphasis on our epiphenomenal consciousness adds a further dimension to human responsibility in the absence of free will, and maintains our special status above AI.
Lastly, we must therefore resolve that AI does not in fact have legal responsibility, and thus is not presently able to be punished for its actions. Now that we have established that human rationality is distinct from that of AI, we can set different standards for the type of responsibility AI is able to bear, and soundly argue that it does not have to be punished. This resolves our psycho-legal gap, as our inability to punish AI no longer conflicts with our consequentialist intuitions. However, AI can still be the proximate cause of harm, and we therefore intuitively feel that punishment should be doled out upon the AI as retribution. This is what Abbott and Sarch identify as the AI-Criminal gap; similar to the retribution gap, it involves cases where actions taken by an AI cannot be reduced to a human agent, yet we do not seem able to punish the AI itself. We can resolve this gap by arguing that our intuitions about punishing AI are misguided and should be rejected, and by promoting moderate changes to civil liability that compensate victims without attempting to punish AI.
First, similar to the argument Greene and Cohen make about retributive intuitions, this feeling that AI should be punished for its actions is not founded in reality and should be rejected. Our desire to punish AI, despite the inability to extract consequentialist ends from such punishment, results from anthropomorphising it. Lima et al. (2021) summarise the research on our tendency to anthropomorphise AI, showing for example how “if an AI system is described as an anthropomorphized agent rather than a mere tool, it is attributed more responsibility for creating a painting.” This attribution of responsibility for action entails a corresponding attribution of blame, and subsequently a desire for retribution. They also identify the increasing anthropomorphisation of social robots, and Abbott and Sarch (2019) concur that these “tendencies are likely to be even more powerful for AI-enabled robots that are specifically designed to seem human enough to elicit emotional responses from humans.” Consider a baseball mistakenly fired into a person by a standard pitching machine versus by a full-size robot that mimics a human pitcher. We may occasionally lash out at the machine if we are hit by its pitch, but we would not blame it or feel anger towards it in the way we would the robot. It would seem, then, that our tendency to desire punishment for AI rests on the inconsistent assumption that AI has the same level of responsibility as humans, which, in light of the previous few paragraphs, is something we can discard. This intuition is based not upon the idea of AI having rationality in its own right, but upon our tendency to anthropomorphise it, bestowing misplaced rationality through our imposition of humanity upon it. Therefore, we can soundly argue for a rejection of these intuitions and escape the retribution gap without compromising the principles that form our general approaches to responsibility and punishment. In the same way we might condemn abusing an animal for its bad behaviour by arguing that “it doesn’t know any better,” we can comfortably condemn approaches to AI punishment that attempt to place responsibility on an AI in the same way one might on a human.
Secondly, regarding the compensation of victims, Abbott and Sarch (2019) propose an insurance scheme. Similar to the National Vaccine Injury Compensation Program, or the Price-Anderson Act for nuclear power, “Owners, developers, or users of AI, or just certain types of AI, could pay a tax into a fund to ensure adequate compensation for victims of Hard AI Crime.” In this way, compensation can be ensured without utilising punishment or attributing blame in a way that might jeopardise our legal principles. It is therefore possible to reconcile our intuitions about AI with our new understanding of its responsibility, and to close this final AI-Criminal gap by ensuring that there is adequate liability and compensation in cases of AI-caused harm.
To conclude, the key shortcoming of reductionist approaches to free will is their overlooking of consciousness. Because consciousness is arguably an epiphenomenon, reductionist approaches are not entirely unreasonable in discarding it. As discussed, a consequence of accepting the reductionist approach to free will is accepting that desert-based systems of responsibility are untenable, and in doing so an important gap opens between our intuitions and legal doctrine. The reductionist fix for this gap is to shift our intuitions to ones which require no notion of desert or free will. Analysing such a position in light of AI, we can see that by effectively reducing ourselves to our rationality in order to maintain our eligibility as responsible agents, this approach eliminates any distinction between ourselves and AI. This is because such a reduction to rationality does not involve consciousness, and since AI is capable of being rational, we cannot argue that we have any greater propensity for responsibility than AI. Therefore, by overlooking consciousness, reductionist approaches are trapped in a new gap between our inability to punish AI according to our intuitions and its irrefutable legal responsibility under such an approach. It is thus important, even when accepting reductionist approaches wherein consciousness is merely an epiphenomenon, that we place an emphasis upon it. I believe that in time, consciousness will serve as the threshold between us as Ghosts, and the new Machines.
[Image: an AI-generated visual conceptualisation of consciousness and the merging of humanity and AI]
Works Cited
Abbott, Ryan Benjamin, and Alex F. Sarch. 2019. “Punishing Artificial Intelligence: Legal Fiction or Science Fiction.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3327485.
Block, Ned. 1995. “On a Confusion about a Function of Consciousness.” Behavioral and Brain Sciences 18 (02): 227. https://doi.org/10.1017/s0140525x00038188.
Danaher, John. 2016. “Robots, Law and the Retribution Gap.” Ethics and Information Technology 18 (4): 299–309. https://doi.org/10.1007/s10676-016-9403-3.
Dehaene, Stanislas, Hakwan Lau, and Sid Kouider. 2017. “What Is Consciousness, and Could Machines Have It?” Science 358 (6362): 486–92. https://doi.org/10.1126/science.aan8871.
Zambak, Aziz Fevzi. 2018. “Free Will and Artificial Intelligence [Özgür İrade ve Yapay Zeka].” MetaZihin: Yapay Zeka ve Zihin Felsefesi Dergisi (MetaMind: Journal of Artificial Intelligence and Philosophy of Mind) 1 (2): 167–81. https://dergipark.org.tr/tr/download/article-file/625094.
Greene, Joshua, and Jonathan Cohen. 2004. “For the Law, Neuroscience Changes Nothing and Everything.” Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 359 (1451): 1775–85. https://doi.org/10.1098/rstb.2004.1546.
Lima, Gabriel, Meeyoung Cha, Chihyung Jeon, and Kyung Sin Park. 2021. “The Conflict between People’s Urge to Punish AI and Legal Systems.” Frontiers in Robotics and AI 8 (November). https://doi.org/10.3389/frobt.2021.756242.
Mulligan, Christina. 2017. “Revenge against Robots.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3016048.
Wegner, Daniel M. 2018. The Illusion of Conscious Will. Cambridge, MA: The MIT Press.
Welsch, Wolfgang. n.d. “Rationality and Reason Today.” eCommons, Cornell University. Accessed May 4, 2023. https://ecommons.cornell.edu/bitstream/handle/1813/55/Welsch_Rationality_and_Reason_Today.htm?sequence=1.