an international and interdisciplinary journal of postmodern cultural sound, text and image
Volume 11, April - September 2014,
ISSN 1552-5112
Apocalypse Not, or How I Learned to Stop Worrying and Love the Machine
Often it is the little things that matter,
like the difference between two seemingly inconsequential words, the
prepositions "through" and "with." But in the area of
communication technology everything depends on this distinction. Here's why:
When it is employed for the purposes of communication, the computer has
customarily been assigned one of two possible positions, both of which are
dictated by a particular understanding of the process of communication. The
machine has either been defined as a medium through which human users exchange
information, or it has occupied, with varying degrees of success, the position
of the other in communicative exchange, becoming a participant with whom human users interact. These
two alternatives were initially formalized and distinguished by Robert Cathcart
and Gary Gumpert in their 1985 essay "The Person-Computer
Interaction." In this relatively early text, the authors differentiate
interacting through a computer from
interacting with a computer. The
former, they argue, names all those "computer-facilitated functions"
where "the computer is interposed between sender and receiver." The
latter designates "person-computer interpersonal functions" where
"one party activates a computer which in turn responds appropriately in
graphic, alphanumeric, or vocal modes establishing an ongoing sender/receiver
relationship" (Cathcart and Gumpert, 1985, p. 114).
This
difference—the difference between through
and with—has important moral
consequences. If the computer is situated in the position of a medium through
which human users interact and exchange information, then it is operationalized
as a more-or-less neutral channel of data transfer and the only subjects in the
relationship are the human users who are connected through it. This is a rather
intuitive formulation insofar as it recognizes that technology, no matter how
sophisticated, is really nothing more than a tool or instrument of human
action. If, however, the computer is positioned as the other with whom we interact and exchange data, then things are
entirely otherwise. In this circumstance, the computer is not merely a tool of
human concourse but is itself another subject, that is, an interactive agent
and/or patient in the relationship. Although this sounds, at least initially, somewhat counterintuitive, it is supported by recent developments. In fact, the majority of online activity is no longer (and perhaps never really was) a matter of human-to-human exchange but of interactions between humans and machines and between machines and machines. Current statistics concerning web traffic
already give the machines a slight edge with 51% of all traffic being otherwise
than human (Foremski, 2012),
and this statistic is expected to increase at an accelerated rate (Cisco
Systems, 2012).
The
following will trace the challenges and opportunities of this subtle but
substantive transformation in contemporary culture—this shift from the machine
understood as an instrument through
which human users act to the machine as another subject with whom one interacts. Toward this end, we will first examine the
advantages and disadvantages of the instrumental view of technology—the
standard theory that explains and justifies situating the machine in the
intermediate position of through. Second, we will consider recent
advancements in posthumanism and autonomous technology that challenge this
tradition and provide good reasons to locate the machine in the position of
an-other with whom we interact. Third, we will examine the challenges
and opportunities of this transformation whereby what had been a mere
technological object is recognized as a socially active subject who
matters. This is, again, a matter of two small words, and as Jacques Derrida
(2005, p. 80) points out, everything turns on and is decided by the difference
that separates the "who" from the "what." Finally, we will
conclude with an exploration of two ways in which the conversation about
machine moral standing might proceed.
Machines—like
computers, smart phones, and even sophisticated robots—are technologies, and technologies
are mere tools created and used by human beings. The computer means nothing by itself; what ultimately matters is how it is used. This common-sense
assumption is structured and informed by the answer that is typically provided
for the question concerning technology.
We ask the question concerning technology
when we ask what it is. Everyone knows the two statements that answer our
question. One says: Technology is a means to an end. The other says: Technology
is a human activity. The two definitions of technology belong together. For to
posit ends and procure and utilize the means to them is a human activity. The
manufacture and utilization of equipment, tools, and machines, the manufactured
and used things themselves, and the needs and ends that they serve, all belong
to what technology is (Heidegger 1977, pp. 4-5).
According to Martin Heidegger's analysis,
the presumed role and function of any kind of technology, whether it be the
product of handicraft or industrialized manufacture, is that it is a means
employed by human users for specific ends. Heidegger terms this particular
characterization of technology "the instrumental definition" and
indicates that it forms what is considered to be the "correct" understanding
of any kind of technological device (p. 5).
As
Andrew Feenberg (1991) summarizes it in the introduction to his book Critical
Theory of Technology, "The instrumentalist theory offers the most
widely accepted view of technology. It is based on the common sense idea that
technologies are 'tools' standing ready to serve the purposes of users"
(p. 5). And because an instrument "is deemed 'neutral,' without valuative
content of its own" (p. 5), a technological artifact is evaluated not in
and of itself, but on the basis of the particular employments that have been
decided by its human designer or user. Understood as a tool or instrument of
human activity, sophisticated technical devices like robots, AIs, algorithms,
and other computer systems are not considered the responsible agents of actions
that are performed by or through them. "Morality," as AI scientist J.
Storrs Hall (2001) points out, "rests on human shoulders, and if machines
changed the ease with which things were done, they did not change responsibility
for doing them. People have always been the only 'moral agents'" (p. 2).
Consequently,
to blame the computer (or any other technology) is to make at least two
mistakes. First, it wrongly attributes agency to something that is a mere
instrument or inanimate object. This categorical error mistakenly turns a
passive object into an active subject. Second, it allows human users to deflect
moral responsibility by putting the blame on something else. In other words, it
allows users to "scapegoat the computer," and effectively avoid
taking responsibility for their own actions. As Deborah Johnson (2006)
succinctly summarizes it: "Computer systems are produced, distributed, and
used by people engaged in social practices and meaningful pursuits. This is as
true of current computer systems as it will be of future computer systems. No
matter how independently, automatic, and interactive computer systems of the
future behave, they will be the products (direct or indirect) of human
behavior, human social institutions, and human decision" (p. 197).
The
instrumental theory has served us well, and it has helped make sense of all
kinds of technological innovation. But all that is over. In other words, the
instrumental theory, although a useful instrument for understanding technology,
no longer functions as initially designed. It is beginning to show signs of
stress, weakness, and even breakdown as the boundary between technology-as-instrument and technology-as-agent becomes increasingly difficult to discern.
2. The New Challenges
There
are, for our purposes, at least two challenges to the instrumental theory of
technology. The first, which proceeds from recent work in posthumanism,
demonstrates that the one-time clear and distinct line dividing the human from
its constitutive others, namely the animal and machine, has become
increasingly difficult to define and defend. These innovations do not so much demonstrate that machines are legitimate subjects as critically question the traditional assumptions surrounding human exceptionalism and technological instrumentalism. The second involves what Langdon Winner calls
“autonomous technology,” that is, technologies of various kinds and
configurations that are deliberately designed to be more than tools and as a
result come to occupy a position that is otherwise than a mere instrument of
human action.
2.1 Posthumanism and Cybernetics
One
challenge to the instrumental tradition comes from recent work in the theory of
posthumanism and research in cybernetics. As Donna Haraway argued in the
“Cyborg Manifesto,” the line that had once divided the human from the animal
and the animal from the machine has become increasingly indistinguishable and
leaky:
By the late twentieth century in United States scientific culture, the boundary between human and animal is thoroughly breached. . . . The second leaky distinction is between animal-human (organism) and machine (Haraway 1991, pp. 151-152).
Nowhere is this dual erosion of the boundaries of human exceptionalism more evident
than in the Human Genome Project (HGP), a multinational effort to decode and
map the totality of genetic information comprising the human species. This
project takes deoxyribonucleic acid (DNA) as its primary object of
investigation. DNA, on the one hand, is considered to be the fundamental and
universal element determining all organic entities, human or otherwise.
Understood in this fashion, the difference between the human being and any
other life-form is merely a matter of the number and sequence of DNA strings.
On the other hand, HGP, following a paradigm that has been central to modern
biology, considers DNA to be nothing more than a string of information, a
biologically encoded program that is to be decoded, manipulated, and run on a
specific information-processing device. This procedure allows for animal bodies
to be theorized, understood, and manipulated as mechanisms of information. For
this reason, Haraway (1991) concludes that "biological organisms have
become biotic systems, communications devices like others. There is no fundamental, ontological separation
in our formal knowledge of machine and organism, of technical and organic"
(pp. 177-178).
Haraway uses the term “cyborg,” which she
appropriates from a 1960 article on manned space flight (Clynes and Kline 1960), to name this new form of hybridity that is simultaneously both more and
less than what has been traditionally considered to be human. According to
Haraway, and other posthumanist thinkers who follow her lead (Hayles 2001,
Wolfe 2010), the cyborg constitutes a critical intervention in programs of human
exceptionalism, making available new configurations of agency, responsibility,
and justice. As a result of these critical efforts, technologies are no longer
conceptualized as mere tools used by fully-formed, pre-existing, and
independent human agents but are already constitutive of the hybrid forms of
agency that comprise posthuman subjectivity.
This blurring of boundaries, however, is
not merely theoretical speculation. It has taken empirical form in work
surrounding what are now called “biological robots.” And following the work of
cyberneticist Kevin Warwick (2010), it is now possible to distinguish three
varieties. The first is a
mechanical body that is controlled by a biological neuronal network or “brain.”
In these cases neurons are cultivated on an electrode array in a closed-loop
environment. The neurons grow normally and integrate with the electrode array
which is connected (directly or remotely) to the robotic body. As the body
encounters obstacles through sensors (sensory input), that information is
translated into electrical signals, sent through the electrode array and
delivered to the "brain." The brain responds by generating its own electrical signals, which are transmitted back through the electrode array to the robot body, instructing it to move away from the obstacle. Most interestingly, these biological robots demonstrate observable individuality and learn the obstacle-avoidance behavior through trial and error, becoming increasingly proficient at their tasks. Neuronal clusters associated with movement and obstacle avoidance grow and are strengthened through repetitive learning, just as they are in human brains (Warwick, 2010).
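The control cycle just described can be suggested, very roughly, in code. The following sketch (in Python) is offered only as an illustration of the closed loop between body and "brain"; the SimulatedRobot class and the neuronal_response function are invented stand-ins for the mechanical body and the cultured neuronal network, not descriptions of Warwick's actual apparatus.

    import random

    class SimulatedRobot:
        # Stand-in for the mechanical body; returns random obstacle distances.
        def read_sonar(self):
            return random.uniform(0.1, 2.0)
        def turn(self, amount):
            pass  # a real body would steer away from the obstacle here

    def encode_stimulus(distance):
        # Translate a sensor reading into an electrical stimulation level.
        return 1.0 / max(distance, 0.1)

    def neuronal_response(stimulation, weights):
        # Placeholder for the cultured neuronal network on the electrode array.
        return stimulation * weights["avoidance"]

    def control_loop(robot, weights, steps=100):
        for _ in range(steps):
            distance = robot.read_sonar()            # sensory input from the body
            stimulation = encode_stimulus(distance)  # delivered to the "brain"
            response = neuronal_response(stimulation, weights)
            robot.turn(response)                     # motor signal back to the body
            if distance < 0.5:                       # crude analog of repetitive learning
                weights["avoidance"] *= 1.01

    control_loop(SimulatedRobot(), {"avoidance": 1.0})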
The
second kind of biological robot is a biological body that is controlled, at
least partially, by a computer–brain interface. This type is even more common
than robots controlled by neuronal networks. For instance, St. Jude Medical
Device Company recently received approval for their LibraXP deep brain
stimulation system. This is a device implanted in the patient’s brain to
modulate brain activity and physical manifestations associated with a genetic
disease called dystonia (Reuters, 2013).
More extreme examples include what are called “line-following
terrestrial insect biobots” (Latif and Bozkurt, 2012). In these cases, a computer system
delivers electrical impulses directly into the brain of insects to compel them
to follow a curving line on the floor, or to fly in certain patterns (in the
case of hawkmoth studies). As a result, the actions of the organism are not
entirely under its control but are also determined by programmed input.
The
third kind of biological robot is a brain emulation. Although long a topic of science fiction, work is underway through the Blue Brain Project at the Ecole Polytechnique Federale de Lausanne (EPFL) to build a biologically accurate, functioning model of the mammalian brain inside a supercomputer (Lehrer, 2008).
2.2 Autonomous Technology
Machines
are not tools. Although "experts in mechanics," as Karl Marx (1977)
pointed out, often confuse the two concepts, calling "tools simple machines
and machines complex tools" (p. 493), there is an important and crucial
difference between the two. "The machine," Marx explains, "is a
mechanism that, after being set in motion, performs with its tools the same
operations as the worker formerly did with similar tools" (p. 495).
Understood in this fashion, the machine occupies the position not of the tool,
but of the human worker—the active agent who had used the tool. Evidence of
this is already available in the Luddite rebellions, which took place in England in the early decades of the nineteenth century and were directed at the machinery that had come to take the place of human workers.
This concept of machinic agency is taken
up and further developed by Langdon Winner in his book Autonomous Technology:
To be autonomous is to be self-governing,
independent, not ruled by an external law or force. In the metaphysics of
Immanuel Kant, autonomy refers to the fundamental condition of free will—the
capacity of the will to follow moral laws which it gives to itself. Kant
opposes this idea to "heteronomy," the rule of the will by external
laws, namely the deterministic laws of nature. In this light the very mention
of autonomous technology raises an unsettling irony, for the expected
relationship of subject and object is exactly reversed. We are now reading all
of the propositions backwards. To say that technology is autonomous is to say
that it is nonheteronomous, not governed by an external law. And what is the
external law that is appropriate to technology? Human will, it would seem
(Winner 1977, p. 16).
"Autonomous technology,"
therefore, directly contravenes the instrumental theory by deliberately
contesting and relocating the assignment of agency. Such mechanisms are not
mere tools employed by human users but occupy, in one way or another, the place
of the agent—the other person in social situations and interpersonal
interactions. To put it in Kantian language, tools are heteronomous instruments that are designed, directed, and determined by human will. Machines,
however, exceed this conceptualization insofar as they show signs of increasing
levels of (self) direction and determination that exceed the reach of human
volition and control.
Predictions of fully autonomous machines on par with human capabilities are not only the stuff of science fiction but are becoming science fact. This can be seen, for
instance, in the work of the futurist Ray Kurzweil (2005), AI researcher Hans
Moravec (1988), and robotics engineer Rodney Brooks. "Our fantasy
machines," Brooks (2002) writes referencing the popular robots of science
fiction, "have syntax and technology. They also have emotions, desires,
fears, loves, and pride. Our real machines do not. Or so it seems at the dawn
of the third millennium. But how will it look a hundred years from now? My
thesis is that in just twenty years the boundary between fantasy and reality
will be rent asunder" (p. 5). And it may not even take that long as
working examples of autonomous technology are already available and in operation
in many parts of contemporary culture.
First, consider what has happened to the
financial and commodity markets in the last fifteen years. At one time, trades
on the New York Stock Exchange or the Chicago Board Options Exchange were
initiated and controlled by human traders. Beginning in the late 1990s,
financial services organizations began developing algorithms to take over much
of this effort. These algorithms were faster, more efficient, more consistent,
and could, as a result of all this, turn incredible profits by exploiting
momentary differences in market prices. These algorithms made decisions and
initiated actions faster than human comprehension and were designed with
learning subroutines in order to respond to new and unanticipated
opportunities. And these things worked. They pumped out incredible profits for
the financial services industry. As a result, over 70% of all trades are now
machine generated and controlled (Slavin 2009).
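The basic logic at work in such systems can be suggested with a deliberately naive sketch. The venues, prices, and threshold below are invented for the purpose of illustration; actual trading algorithms operate on live data feeds, at microsecond latencies, and with learned models vastly more complex than this single rule.

    # Toy illustration of acting on a momentary price difference between two venues.
    def arbitrage_signal(price_venue_a, price_venue_b, threshold=0.02):
        spread = price_venue_b - price_venue_a
        if spread > threshold:
            return ("buy on A, sell on B", spread)
        if spread < -threshold:
            return ("buy on B, sell on A", -spread)
        return ("hold", 0.0)

    # A fleeting five-cent discrepancy triggers a trade long before a human notices it.
    print(arbitrage_signal(100.00, 100.05))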
What
this means is that our finances—not only our mortgages and retirement savings but also a significant part of our nation's economy—are now directed and managed by machines. The consequences of this can be seen in an event called the Flash
Crash. At about 2:45 p.m. on the 6th of May 2010, the Dow Jones Industrial Average lost over 1000 points in a matter of minutes and then rebounded just as quickly. The drop, which amounted to about 9% of the market's value, or roughly 1 trillion dollars, was caused by some "bad" decision making by a
couple of trading algorithms. In other words, no human being was in control of
the event or could be considered responsible for its occurrence. It was
something initiated by the algorithms, and the human brokers could only
passively watch events unfold on their monitor screens, not knowing what had
happened or why. To this day, no one is quite sure what occurred. No one, in
other words, knows who or what to blame.
Similar
things are happening in customer service interactions. When you call your bank
and apply for credit over the telephone, for instance, your call is often taken
by a human operator. This human being, however, is not the active agent in the
conversation. He or she is only an interface to a machine, which ultimately
decides the outcome of your application. This situation, in fact, inverts the usual
roles and assumptions. In the case of credit application decisions, the machine
is the active agent and interlocutor. The human operator is only an instrument
or interface through which machine generated decisions pass and are conveyed.
Although autonomy in this instance is limited to decisions concerning a very restricted domain
(to extend or to deny credit), what is not disputed is the fact that the
machine and human being have effectively switched places—the machine occupies
the location of the active agent, while the human operator is merely an
instrument in service to these machinic decisions.
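A skeletal version of this kind of machinic decision might look something like the following. The scoring weights and cutoff are invented placeholders rather than the policy of any actual lender; the point is simply that the decision is produced by the machine, and the human operator merely relays it.

    # Skeletal credit decision: the machine computes a score and applies a cutoff.
    def credit_decision(income, existing_debt, years_at_address, cutoff=0.6):
        score = (0.5 * min(income / 100_000, 1.0)
                 + 0.3 * (1.0 - min(existing_debt / 50_000, 1.0))
                 + 0.2 * min(years_at_address / 10, 1.0))
        return "approve" if score >= cutoff else "deny"

    # The human operator reads the result to the caller; the decision itself is machinic.
    print(credit_decision(income=45_000, existing_debt=12_000, years_at_address=3))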
A
third and final example is drawn from the area of culture—literature, art,
music, etc. Currently recommendation algorithms at Netflix and Amazon decide what
cultural objects we experience. It is now estimated that 75% of all content
that is obtained through Netflix is the result of a machine recommendation
(Amatriain & Basilico, 2012). These algorithms are effectively taking over
the role of film, book and music critics, influencing—to a significant
degree—what films we see, what books we read, and what music we hear. It is
important to recognize that far from simply matching similar keywords from one
product to another in a pre-defined catalog and calling that the
“recommendation,” recommendation systems are self-learning mechanisms. They are
designed to identify patterns and connections in an ever-growing corpus of data. The term "Big Data" has grown in popularity recently. The data is "big" not because there is a large fixed set of it located in some database on a particular network-connected server. Rather, it is "big" because it is growing in an organic fashion and at an exponential rate. Data analytics systems that work in the big data field (including decision engines and "next best action" systems) are programmed to be deliberately fuzzy. They allow for and expect new and novel
data to be added to their corpus from which they can draw, analyze, and apply
algorithms. In this way, the algorithms define the boundary conditions of the
system and the corpus provides freedom of self-determination. For such systems,
there is no right or wrong answer, only a more or less effective
recommendation. Semantic web systems and semantic data processing are one kind of technique currently used in recommendation systems that handle large volumes of unstructured data (i.e., data that does not fit well into the rows and cells of a
database). These systems are inferential and take advantage of patterns that
only emerge at large scales. This means
that they look for patterns and conceptual similarity rather than binary
matches or deterministic lexical stemming, which are the techniques typically
employed in database searches.
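The contrast between a binary keyword match and this kind of graded, pattern-based similarity can be made concrete with a small sketch. The catalog, the feature vectors, and the user profile below are invented; production recommendation engines learn such representations from enormous corpora of behavioral data.

    import math

    # Invented feature vectors standing in for representations a real system would learn.
    catalog = {
        "space opera film":  {"space": 0.9, "war": 0.6, "romance": 0.2},
        "courtroom drama":   {"law": 0.8, "drama": 0.7, "romance": 0.3},
        "alien documentary": {"space": 0.7, "science": 0.8},
    }

    def cosine(a, b):
        # Graded conceptual similarity rather than a binary keyword hit or miss.
        dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in set(a) | set(b))
        norm = (math.sqrt(sum(v * v for v in a.values()))
                * math.sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0

    def recommend(profile, catalog):
        # Rank every catalog item by its closeness to the user's inferred interests.
        return sorted(catalog, key=lambda title: cosine(profile, catalog[title]), reverse=True)

    user_profile = {"space": 0.8, "science": 0.5}
    print(recommend(user_profile, catalog))  # the documentary ranks first, with no exact keyword match required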
Such conceptual inference, for example,
allows these systems to locate, recommend and identify business “colleagues”
rather than requiring persons to search for and correlate employee records from
many independent systems. This is because they “understand” that colleagues are
employees who worked for the same organization during the same time period.
Without having a specific metadata attribute of "colleague," semantic systems are able to infer the relation whenever the boundary conditions are met. Furthermore, the concept of a colleague is
not fixed. It may change over time as
new employees are added to an organization and current ones leave. This is just
one example of how autonomous or semi-autonomous machine learning and
self-deterministic, inferential processing happens today. Newer computer
systems use data extraction, snippets, clustering, tuning, ontology-assisted
matching, heuristics-based learning and corpus-driven extraction techniques.
The items extracted are raw data. But when that data is linked,
patterns, clusters, and classifications emerge. The interesting part is that this is based on what is inside the content item (the information in the container) rather than on what people say about what is inside the container. The ability of
machine networks to get into not just the data, but also the meaning of the data,
as contextually understood, is a very real and growing phenomenon. This is also
why it is so important to ensure that talk of “the machine” is inclusive of the
machine network rather than any single node, atomic unit, or subset. Failing to do so not only misses out on the opportunity of our new interactions but also risks conflating the part with the whole.
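The "colleague" example can be written down as a simple inference rule over employment records. The records and field names below are our own, introduced only to illustrate how the relation emerges from boundary conditions rather than from any stored "colleague" attribute.

    # Employment records with no "colleague" field anywhere in the data.
    records = [
        {"person": "Ana",   "org": "Acme Corp", "start": 2008, "end": 2012},
        {"person": "Brett", "org": "Acme Corp", "start": 2011, "end": 2014},
        {"person": "Chen",  "org": "Globex",    "start": 2010, "end": 2013},
    ]

    def colleagues(a, b):
        # Boundary conditions for the inferred relation: same organization,
        # overlapping periods of employment.
        return (a["org"] == b["org"]
                and a["start"] <= b["end"]
                and b["start"] <= a["end"])

    inferred = [(a["person"], b["person"])
                for i, a in enumerate(records)
                for b in records[i + 1:]
                if colleagues(a, b)]
    print(inferred)  # [('Ana', 'Brett')] -- recomputed, and revisable, as the records change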
But
machines are not just involved in recommending cultural products, they are also
actively engaged on the evaluative and creative side. In the field of education,
machines now qualitatively evaluate and grade student essays. EdX, the nonprofit online education consortium led by Harvard and MIT, has introduced automated essay-scoring software that uses machine learning to assess and grade written work in its courses (Markoff, 2013).
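Automated scoring systems of this general kind are typically built by extracting measurable features from essays that human graders have already scored and then fitting a model that maps those features onto grades. The sketch below is a caricature of that approach; the features and weights are invented and do not describe the EdX software or any other real grader.

    # Caricature of feature-based essay scoring; features and weights are invented.
    def essay_features(essay):
        words = [w.strip(".,;:!?").lower() for w in essay.split()]
        return {
            "length": min(len(words) / 500, 1.0),
            "vocabulary": len(set(words)) / max(len(words), 1),
            "connectives": sum(w in {"however", "therefore", "moreover"} for w in words)
                           / max(len(words), 1),
        }

    weights = {"length": 3.0, "vocabulary": 4.0, "connectives": 20.0}  # stand-in for a trained model

    def grade(essay, scale=6.0):
        raw = sum(weights[k] * v for k, v in essay_features(essay).items())
        return round(min(raw, scale), 1)

    sample = "However, the machine question is not easily settled. " * 20
    print(grade(sample))  # a number on the grading scale, produced without any human reader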
In
the field of journalism, algorithms do not just evaluate compositions but
actually perform the writing. Beyond the simple news aggregators that currently populate the web, these programs compose entire news stories, turning raw data such as sports statistics and financial results into publishable prose (Kerwin, 2009).
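Early machine-writing systems of this sort worked largely by mapping structured data onto narrative templates. The sketch below is a deliberately crude illustration of that idea; the game data, the template, and the recap function are invented and do not describe any particular commercial system.

    # Crude data-to-text sketch: structured game data mapped onto a narrative template.
    game = {"home": "Springfield", "away": "Shelbyville",
            "home_score": 3, "away_score": 1,
            "star": "J. Alvarez", "goals": 2}

    def recap(g):
        winner, loser = ((g["home"], g["away"]) if g["home_score"] > g["away_score"]
                         else (g["away"], g["home"]))
        high, low = max(g["home_score"], g["away_score"]), min(g["home_score"], g["away_score"])
        return (f"{winner} defeated {loser} {high}-{low}, "
                f"with {g['star']} scoring {g['goals']} goals.")

    print(recap(game))  # Springfield defeated Shelbyville 3-1, with J. Alvarez scoring 2 goals.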
3. The Rise of the Machines
In November of 2012, General
Electric launched a television advertisement called "Robots on the
Move." The 60-second video, created by Jonathan Dayton and Valerie Faris
(the husband/wife team behind the 2006 feature film Little Miss Sunshine),
depicts many of the iconic robots of science fiction traveling across great
distances to assemble before some brightly lit airplane hangar for what we are told is the unveiling of some new kind of machines—"brilliant
machines," as GE's tagline describes it. And as we observe Robby the Robot
from Forbidden Planet, KITT the robotic automobile from Knight Rider,
and Lt. Commander Data of Star Trek: The Next Generation making their
way to this meeting of artificial minds, we are told, by an ominous voice over,
that "the machines are on the move." Although this might not look
like your typical robot apocalypse (vividly illustrated in science fiction
films and television programs like Terminator, The Matrix Trilogy,
and Battlestar Galactica), we are, in fact, in the midst of an
invasion. The machines are on the move. They are everywhere and doing
everything. They may have
begun by displacing workers on the factory floor, but they now actively
participate with us in all aspects of
our intellectual, social, and cultural existence. This invasion is not some
future possibility coming from an alien world. It is here. It is now. And
resistance is futile.
As these increasingly autonomous
machines come to occupy influential positions in contemporary culture—positions
where they are not just tools or instruments of human action but actors in
their own right—we will need to ask ourselves important but difficult
questions: At what point might a robot, an algorithm, or other autonomous
system be held responsible for the decisions it makes or the actions it
deploys? When, in other words, would it make sense to say "It's the
computer's fault"? Likewise, at what point might we have to consider
seriously extending rights—civil, moral, and legal standing—to these socially
aware and interactive devices? When, in other words, would it no longer be
considered non-sense to suggest something like "the rights of
machines"? In response to these questions, there appear to be at least two options, neither of which is entirely comfortable or satisfactory.
On
the one hand, we can respond as we always have, treating these machines as mere
instruments or tools. Joanna Bryson (2010) makes a case for this approach in
her provocatively titled essay "Robots Should be Slaves." "My
thesis," Bryson writes, "is that robots should be built, marketed and
considered legally as slaves, not companion peers" (p. 63). Although this
might sound harsh, her argument is persuasive precisely because it draws on and
is underwritten by the instrumental theory of technology—a theory that has
considerable history and success behind it and that functions as the assumed
default position for any consideration of technology. This decision—and it is a
decision—has both advantages and disadvantages. On the positive side, it
reaffirms human exceptionalism, making it absolutely clear that it is only
human beings who have rights and social responsibilities. Technologies, no
matter how sophisticated, intelligent, and influential, are and will continue
to be mere tools of human action, nothing more. But this approach, for all its
usefulness, has a not-so-pleasant downside—it willfully and deliberately
produces a new class of slaves and rationalizes this decision as morally justified.
This decision also ignores, or at least delays, consideration of what appears to be an inevitable evolution along our current techno-innovation trajectory: the dissolution of the boundary between human and machine. As we have already demonstrated, instrumentalist advances in machine-controlled prosthetics for humans and human-patterned innovations in machine processing force a difficult but necessary reconsideration of what it means to be human as such.
On
the other hand, we can decide to entertain a kind of rights of machines just as we had previously done for other
non-human entities, like animals and the environment. And there is both moral
and legal precedent for this decision. In fact, we already live in a world
populated by non-human entities that are considered moral persons. Recently the
Indian government recognized dolphins as “non-human persons” (Coelho, 2013) and
there is an on-going debate concerning the status of the corporation, which in
both US and international law is considered an artificial person, at least for
the purposes of contracts, free expression, and other legal adjudications. Once
again, this decision sounds reasonable and justified. It extends moral standing
to these other socially aware entities and recognizes, following the
predictions of Norbert Wiener (1988, p. 16), that the social relationships of
the future will involve both humans and machines. But this decision also has a
significant cost. It requires that we rethink everything we thought we knew
about ourselves, technology, and ethics. It requires that we learn to think
beyond human exceptionalism, technological instrumentalism, and all the other -isms that have helped us make sense of
our world. No matter how we decide to respond to this machine question, it will
have profound effects on how we conceptualize our place in the world, who we
decide to include in the community of moral subjects, and what we exclude from
such consideration and why.
4.
Answering the Machine Question
Ending
with a question, although standard practice in philosophical discourse, is
often unsatisfying and can, from a compositional perspective, be considered bad
form. As Neil Postman (1993, 181) once observed, “anyone who practices the
art of cultural criticism must endure being asked, What is the solution to the
problems you describe?” Consequently, we conclude by looking at two recent
proposals by which to begin formulating a response to this machine question.
These solutions are neither complete nor even necessarily consistent. Rather
they are offered as a way of contributing positively to the ongoing
consideration and debate regarding social responsibility in the 21st
century. Despite their differences, both proposals take what is arguably an
existentialist approach. Following Jean-Paul Sartre, who famously asserted
“existence precedes essence,” we might say that the existence of an
entity—human, animal, machine, or otherwise—precedes determinations of its
essence. In other words, the fact that it is trumps what it is.
4.1 Machine Ethics
The
first concerns what is now called Machine Ethics. This relatively new idea was
first introduced and publicized in a 2004 Association for the Advancement of
Artificial Intelligence paper written by Michael Anderson, Susan Leigh Anderson,
and Chris Armen and has been followed by a number of dedicated symposia
(Anderson et al., 2005) and publications (Anderson and Anderson 2006 and 2011).
Unlike computer ethics, which is mainly concerned with the consequences of
human behavior through the instrumentality of technology (Johnson 1993), "machine ethics is concerned," as
characterized by Anderson et al. (2004, 1), "with the consequences of
behavior of machines toward human users and other machines." In this way,
machine ethics both challenges the "human-centric" tradition that has
persisted in moral philosophy and argues for a widening of the subject of
ethics so as to take into account not only human action with machines, but the
behavior of actual machines, namely those that are designed to provide advice
or programmed to make autonomous decisions with little or no human supervision.
Because
of this, machine ethics takes an entirely functionalist approach to things.
That is, it considers the effect of machine actions on human subjects irrespective
of metaphysical debates concerning agency or epistemological problems
concerning subjective mind states. As Susan Leigh Anderson (2008, 477) points
out, the Machine Ethics project is unique insofar as it, "unlike creating
an autonomous ethical machine, will not require that we make a judgment about
the ethical status of the machine itself, a judgment that will be particularly
difficult to make." Machine Ethics, therefore, does not necessarily deny
or affirm the possibility of, for instance, machine consciousness, sentience,
or personhood. It simply endeavors to institute a pragmatic approach that does
not require that one first decide these questions a priori. It therefore leaves this as an open question and proceeds
to ask whether moral decision making is computable and whether machines can in
fact be programmed with appropriate ethical standards for social behavior.
This
is a promising innovation insofar as it recognizes that machines are already
making decisions and taking real-world actions in such a way that has an
effect—an effect that can be evaluated as either good or bad—on human beings
and human social institutions. Despite this, the functionalist approach
utilized by Machine Ethics has at least three critical difficulties. First, functionalism
shifts attention from the cause of an
action to its effects. "Clearly," Anderson and company write
(2004, 4), "relying on machine intelligence to effect change in the world
without some restraint can be dangerous. Until fairly recently, the ethical
impact of a machine's actions has either been negligible, as in the case of a
calculator, or, when considerable, has only been taken under the supervision of
a human operator, as in the case of automobile assembly via robotic mechanisms.
As we increasingly rely upon machine intelligence with reduced human
supervision, we will need to be able to count on a certain level of ethical
behavior from them." The functionalist approach of Machine Ethics,
therefore, derives from and is ultimately motivated by an interest to protect
human beings from potentially hazardous machine decision-making and action.
This effort is thoroughly and unapologetically anthropocentric. Although
effectively opening up the community of moral subjects to other, previously
excluded things, the functionalist approach only does so in an effort to
protect human interests and investments. This means that the project of Machine
Ethics does not differ significantly from computer ethics and its predominantly
instrumental and anthropocentric orientation. If computer ethics, as Anderson,
Anderson, and Armen (2004) characterize it, is about the responsible and
irresponsible use of computerized tools by human users, then their
functionalist approach is little more than the responsible programming of
machines by human beings for the sake of protecting other human beings.
Second,
functionalism institutes, as the conceptual flipside and consequence of this
anthropocentric privilege, what is arguably a slave ethic. "I
follow," Kari Gwen Coleman (2001, 249) writes, "the traditional
assumption in computer ethics that computers are merely tools, and
intentionally and explicitly assume that the end of computational agents is to
serve humans in the pursuit and achievement of their (i.e. human) ends. In contrast
to James Gips' call for an ethic of equals, then, the virtue theory that I
suggest here is very consciously a slave ethic." For Coleman, computers
and other forms of computational agents should, in the words of Bryson (2010),
"be slaves." Others, however, are not so confident about the
prospects and consequences of this "Slavery 2.0." And this concern is
clearly one of the standard plot devices in robot science fiction from R.U.R. and Metropolis to Blade Runner
and Battlestar Galactica. But it has
also been expressed by contemporary researchers and engineers. Rodney Brooks,
for example, recognizes that there are machines that are and will continue to
be used and deployed by human users as instruments, tools, and even servants.
But he also recognizes that this approach will not cover all machines.
Fortunately we are not doomed to create a
race of machine slaves that is unethical to have as human slaves. Our
refrigerators work twenty-four hours a day seven days a week, and we do not
feel the slightest moral concern for them. We will make many robots that are
equally unemotional, unconscious, and unempathetic. We will use them as slaves
just as we use our dishwashers, vacuum cleaners, and automobiles today. But
those that we make more intelligent, that we give emotions to, and that we
empathize with, will be a problem. We had better be careful just what we build,
because we might end up liking them, and then we will be morally responsible
for their well-being. Sort of like children (Brooks 2002, 195).
According to this analysis, a slave ethic
will work, and will do so without any significant moral difficulties or ethical
friction, as long as we decide to produce dumb machines that serve human users as mere instruments and extensions of our will. But as soon as the machines show signs, however minimally defined or rudimentary, of what we take to be intelligence, consciousness, or intention, then everything changes. At that
point, a slave ethic will no longer be functional or justifiable; it will
become morally suspect.
Finally,
machines that are designed to follow rules and operate within the boundaries of some kind of programmed restraint might turn out to be something other than
what is typically recognized as a moral agent. Terry Winograd (1990, 182-183),
for example, warns against something he calls "the bureaucracy of
mind," "where rules can be
followed without interpretive judgments." Providing robots, computers, and
other autonomous machines with functional morality may produce little more than
artificial bureaucrats—decision making mechanisms that can follow rules and
protocols but have no sense of what they do or understanding of how their
decisions might affect others. "When a person," Winograd (1990, 183)
argues, "views his or her job as the correct application of a set of rules
(whether human-invoked or computer-based), there is a loss of personal
responsibility or commitment. The 'I just follow the rules' of the bureaucratic
clerk has its direct analog in 'That's what the knowledge base says.' The individual
is not committed to appropriate results, but to faithful application of
procedures."
Mark Coeckelbergh (2010, 236) paints an
even more disturbing picture. For him, the problem is not the advent of
"artificial bureaucrats," but "psychopathic robots." The
term "psychopathy" has traditionally been used to name a kind of
personality disorder characterized by an abnormal lack of empathy which is
masked by an ability to appear normal in most social situations. Functional
morality, Coeckelbergh argues, intentionally designs and produces what are
arguably "artificial psychopaths"—robots that have no capacity for
empathy but which follow rules and in doing so can appear to behave in morally
appropriate ways. These psychopathic machines would, Coeckelbergh (2010, 236)
argues, "follow rules but act without fear, compassion, care, and love.
This lack of emotion would render them non-moral agents—i.e. agents that follow
rules without being moved by moral concerns—and they would even lack the
capacity to discern what is of value. They would be morally blind."
4.2 Social Relationalism
An
alternative to moral functionalism can be found in Coeckelbergh's own work on
the subject of moral subjectivity. Moral standing, as we have seen, has been
typically decided on the basis of essential properties. This “properties
approach” is rather straightforward and intuitive: "identify one or more
morally relevant properties and then find out if the entity in question has
them" (Coeckelbergh 2012, 14). But as Coeckelbergh insightfully points
out, there are at least two persistent problems with this undertaking. First,
how does one ascertain which properties are sufficient for moral status? In
other words, which one, or ones, count? The history of moral philosophy can, in
fact, be read as something of an on-going debate and struggle over this matter
with different properties—rationality, speech, consciousness, sentience,
suffering, etc.—vying for attention at different times. Second, once the
morally significant property has been identified, how can one be certain that a
particular entity possesses it, and actually possesses it instead of merely
simulating it? This is tricky business, especially because most of the
properties that are considered morally relevant are internal mental states that
are not immediately accessible or directly observable from the outside. In
other words, even if it were possible to decide, once and for all, on the right
property or mix of properties for moral standing, we would still be confronted
and need to contend with a variant of what philosophers call the “other minds
problem.”
In
response to these difficulties, Coeckelbergh advances an alternative approach
to moral status ascription, which he characterizes as “relational.” By this, he
means to emphasize the way moral status is not something located in the inner
recesses or essential make-up of an individual entity but transpires through
the actually existing interactions and relationships situated between entities.
This "relational turn," which Coeckelbergh skillfully develops by
capitalizing on innovations in ecophilosophy, Marxism, and the work of Bruno
Latour, Tim Ingold, and others, does not get bogged down trying to resolve the
philosophical problems associated with the standard properties approach.
Instead it recognizes the way that moral status is socially constructed and
operationalized. But Coeckelbergh is not content simply to turn things around.
Like Friedrich Nietzsche, he knows that simple inversions (in this case,
emphasizing the relation instead of the relata) change little or nothing. So
he takes things one step further. Quoting the environmental ethicist Baird
Callicott (1995), Coeckelbergh insists that the "relations are prior to the
things related" (p. 110). This almost Levinasian gesture is crucial
insofar as it undermines the usual way of thinking. It is an anti-Cartesian and
postmodern (in the best sense of the word) “intervention.” In Cartesian
modernism the individual subject had to be certain of his (and at this time the
subject was always gendered male) own being and his own essential properties
prior to engaging with others. Coeckelbergh reverses this standard approach. He
argues that it is the social that comes first and that the individual subject
(an identity construction that is literally thrown under or behind) only coalesces out of the relationship and the assignments of rights and
responsibilities that it makes possible.
This
relational turn in moral thinking is clearly a game changer. As we interact
with machines, whether they be pleasant customer service systems, biological
robots, or even full or partial brain emulations, the machine is first and
foremost situated in relationship to us; with the information we produce and
make available to it; with the inferences it makes for and about us; with the
predictions it makes about and learns from us. Morality, conceived of in this
fashion, is not determined by a prior ontological discovery concerning the
essential capabilities, intrinsic properties, or internal operations of others.
It is rather a decision—literally a cut that institutes difference and that
makes a difference by dividing between who
is considered to be morally significant and what
is not. Consequently, "moral consideration is," as Mark Coeckelbergh
(2010, 214) describes it, "no longer seen as being 'intrinsic' to the
entity: instead it is seen as something that is 'extrinsic': it is attributed
to entities within social relations and within a social context." This is
the reason why, as Levinas (1969, 304) claims, "morality is first
philosophy" ("first" in terms of both sequence and status) and that moral decision making precedes
ontological knowledge. Ethics, conceived of in this way, is about
decision and not discovery (Putnam 1964, 691). We, individually and in
collaboration with each other (and not just those others who we assume are
substantially like ourselves), decide who is and who is not part of the moral
community—who, in effect, will have been admitted to and included in this first
person plural pronoun. This decision, as Anne Foerst (Benford and Malartre
2007, 163) points out, is never certain; it has always been and continues to
be provisional. In effect, and to paraphrase James Carey, we make the rules for
ourselves and those we consider Other and then play by them...or not.
Should machines like AIs, robots,
and other autonomous systems be granted admission to the community of moral
subjects? This question cannot be answered definitively and finally with a
simple "yes" or "no." The question will need to be asked
and responded to repeatedly in specific circumstances and the answers that come
to be provided will necessarily be provisory and open to ongoing debate. But
the questions need to be asked and explicitly addressed rather than being
passed over in silence as if they did not matter. As Norbert Wiener (1954, 16)
predicted over a half century ago, "Society can only be understood through
a study of the messages and the communication facilities which belong to it;
and that in the future development of these messages and communication
facilities, messages between man and machines, between machines and man, and
between machine and machine, are destined to play an ever increasing
part." What matters, then, is how one responds, how the terms and conditions
of these relationships are decided, and how responsibility comes to be
articulated in the face of all these others.
5. Postscript
A discussion of the moral status of the machine held
entirely by humans suggests at least a kind of insensitivity to the very topics
considered. So we wanted to get the opinion of at least one machine on this particular subject. We selected Cleverbot (cleverbot.com), which is a natural-language learning AI available to anyone on the web. So we asked Cleverbot the question. The transcript of the conversation (which took place on 4 August 2013 at approximately 8:40 a.m. CST) follows:
Human User: Are you a moral
being?
Cleverbot:
Yes, I am. Are you?
Human User: I believe I am. Why
do you believe you are a moral subject?
Cleverbot: Because you
believe that you are a robot.
Human User: What makes you
think that I am a machine?
Cleverbot: Because you are
only responding, you are not taking any activity.
References
Amatriain, X. & J. Basilico. (6 April 2012). "Netflix
Recommendations: Beyond the 5 Stars." The Netflix Tech Blog.
http://techblog.netflix.com/2012/04/netflix-recommendations-beyond-5-stars.html
Brooks, R. A. (2002). Flesh and Machines: How Robots will Change Us.
Bryson, J. (2010). "Robots Should be Slaves." In Y. Wilks
(Ed.) Close Engagements with Artificial Companions: Key Social,
Psychological, Ethical and Design Issues, pp. 63-74.
Cathcart, R. & G. Gumpert. (1985). "The Person-Computer
Interaction: A Unique Source." In B. D. Ruben (Ed.) Information and
Behavior, vol. 1, pp. 113-124.
Cisco Systems. (2012). Cisco Visual Networking Index: Global
Coelho, Saroja. (2013). Dolphins gain unprecedented protection in India.
Costandi, Moheb. (19 June 2012). The Ethics of Unconsciousness. The Dana Foundation. http://dana.org/news/features/detail.aspx?id=39132
Davy, Barbara Jane. (2007). An Other Face of Ethics in Levinas.
Ethics and the Environment, Vol. 12, No. 1 (Spring, 2007), pp. 39-65, Published
by: Indiana University Press, http://www.jstor.org/stable/40339131
Derrida, J. (2005). Paper Machine. Translated by Rachel Bowlby.
Erwin, Sandra. (April 9, 2013)
Feenberg, A. (1991). Critical Theory of Technology.
Firth, Roderick (1952). “Ethical Absolutism and the Ideal Observer.” In
Steven M. Cahn and Joram G. Haber (ed.) 20th Century Ethical Theory, pp. 225-246.
Prentice-Hall. 1995.
Foremski, T. (3 March 2012). "Report: 51% of Website Traffic is 'Non-human'
and Mostly Malicious." ZDNet. http://www.zdnet.com/blog/foremski/report-51-of-web-site-traffic-is-non-human-and-mostly-malicious/2201
Gunkel, David. (2012). The Machine Question: Critical Perspectives on AI,
Robots and Ethics. Cambridge, MA: MIT Press.
Hall, J. Storrs. (5 July 2001). "Ethics for Machines." KurzweilAI.net.
http://www.kurzweilai.net/ethics-for-machines
Heidegger, M. (1977). "The Question Concerning Technology." In The
Question Concerning Technology and Other Essays, 3-35. Translated by William
Lovitt. New York: Harper & Row.
Johnson, D. G. (2006). "Computer Systems: Moral Entities but not Moral
Agents." Ethics and Information Technology 8, pp. 195-204.
Kerwin, P. (9 December 2009). "The Rise of Machine-Written Journalism."
Wired.co.uk. http://www.wired.co.uk/news/archive/2009-12/16/the-rise-of-machine-written-journalism.aspx
Kurzweil, R. (2005). The Singularity is Near: When Humans Transcend Biology.
New York: Viking.
Latif, Tahmid, and Alper Bozkurt. (2012). Line Following Terrestrial Insect Biobots. Engineering in Medicine and Biology Society (EMBC), 2012 Annual International Conference of the IEEE, pp. 972-975.
Lehrer, Jonah. (March 3, 2008). Can a thinking, remembering, decision-making,
biologically accurate brain be built from a supercomputer? Seed Magazine.
http://seedmagazine.com/content/article/out_of_the_blue
Markoff, John. (April 4, 2013). Essay Grading Software Offers Professors a
Break. The New York Times, http://www.nytimes.com/2013/04/05/science/new-test-for-computers-grading-essays-at-college-level.html
Marx, K. (1977). Capital: A Critique of Political Economy. Translated by Ben
Fowkes. New York: Vintage Books.
Moravec, H. (1988). Mind Children: The Future of Robot and Human Intelligence.
Cambridge, MA: Harvard University Press.
Reuters. (2013). St. Jude wins European OK for brain implant to treat Dystonia.
Reuters. http://www.reuters.com/article/2013/04/10/stjude-implant-idUSL2N0CX0LK20130410
Shermis, Mark D. (2012). Contrasting State-of-the-Art Automated Scoring of
Essays: Analysis. The University of Akron. Retrieved from Scribd. http://www.scribd.com/doc/91191010/Mark-d-Shermis-2012-contrasting-State-Of-The-Art-Automated-Scoring-of-Essays-Analysis#download
Warwick, Kevin. (2010). Implications and Consequences of Robots with Biological Brains. Ethics and Information Technology, vol. 12, pp. 223-234.
Wiener, N. (1954/1988). The Human Use of Human Beings: Cybernetics and Society. Boston, MA: Da Capo Press.
Williams, Caroline. (April, 2013). Brain Imaging Spots Our Abstract Choices
Before We Do. New Scientist. http://www.newscientist.com/article/dn23367-brain-imaging-spots-our-abstract-choices-before-we-do.html
Winner, Langdon. 1977. Autonomous Technology: Technics-out-of-Control as a
Theme in Political Thought. Cambridge, MA: MIT Press.