On the Ethical Conduct of Warfare: Predator Drones


By Prof Jim Fetzer
Global Research, February 22, 2011


“A robot may not injure a human being or, through inaction, allow a human being to come to harm”

—  Isaac Asimov’s “First Law of Robotics”


Among the most intriguing questions that modern technology poses is the extent to which inanimate machines might be capable of replacing human beings in combat and warfare.  The very idea of armies of robots has a certain appeal, even though films such as “The Terminator” and “I, Robot” have raised challenging questions about the capacity for machine mentality and the prospect that, once they attain a certain level of intelligence, these machines might turn against those who designed and built them in order to advance their own “interests”, if, indeed, such a thing is possible.  In an earlier article, “Intelligence vs. Mentality: Important but Independent Concepts” (1997), for example, I argued that, while machines may well be described as “intelligent” because of the plasticity of behavior they can display in response to different programs, they do not possess minds and may therefore be capable of simulating human intelligence but not of possessing it.


From a philosophical point of view, there are at least three perspectives that can be brought to bear upon the use of the specific form of digital technology known as “predator drones”: pilotless aircraft that can project lethal force, most commonly by missile attack, with or without any intervention by human minds.  The first is that of metaphysics, in particular the question of what kinds of things they are, especially with respect to autonomy.  The second is that of epistemology, in particular the kind of knowledge that can be obtained about their reliability on missions.  And the third is that of axiology, in particular the moral questions that arise from their use as killing machines, where, as I shall suggest, there is an inherent tension between the first and the third of these perspectives, which is considerably compounded by the second.


As a former artillery officer, I can appreciate the use of weapons that are capable of killing at a distance, with considerable anonymity about who is going to be killed.  In traditional warfare, artillery has been used to attack relatively well-defined military targets, but it has not infrequently produced civilian casualties, which today are often referred to as “collateral damage”.  An intermediate species of killing machine arises from the use of controlled drones, where human minds are an essential link in the causal chains that produce their intentional lethal effects.  Predator drones, of course, are distinct from surveillance drones in this respect, because surveillance drones can acquire information without bringing about death or devastation.  Without those capacities, however, there would be scant purpose in deploying predator drones, whose existence is predicated upon their function as killing machines.