
Improvised Explosive Robots

A common argument made in favor of using robotics to deliver (lethal) force is that the violence is mediated in such a way that it naturally de-escalates a situation. In some versions, this is because the “robot doesn’t feel emotions,” and so is not subject to fear or anger. In others, the argument is that distance in time and space allows human operators to take in more information and make better judgments, including the judgment to use less-than-lethal or nonlethal force. These debates have, until now, occurred mostly with regard to armed conflict. With the Dallas police chief’s decision to use a bomb disposal robot to deliver lethal force to the Dallas gunman, however, the discussion has entered a new domain: domestic policing.

Now, I am not privy to all of the details of the Dallas police force, nor am I going to argue that the decision to use lethal force against Micah Johnson was unjustified. The ethics of self- and other-defense would hold that Mr. Johnson’s actions, and his continued posing of a lethal and imminent threat, meant that officers were justified in using lethal force to protect themselves and the wider community. Moreover, state and federal law allows officers to use “reasonable” amounts of force, not merely the minimal amount of force needed to carry out their duties. Thus I am not going to argue the ethics or the legality of using a robot to deliver a lethal blast to an imminent threat.

What is of concern, however, is how the arguments used in favor of increased use of robotics in situations of policing (or war) fail to take into consideration psychological and empirical facts.  If we take these into account, what we might glean is that the trend actually goes in the other direction: that the availability and use of robotics may actually escalate the level of force used by officers.



The Politics of Artificial Intelligence and Automation

The Pew Research Internet Project released a report yesterday, “AI, Robotics, and the Future of Jobs,” in which it describes a somewhat contradictory vision: the future is bright and the future is bleak. The survey, issued to a nonrandomized group of “experts” in the technology industry and academia, asked about the future impacts of advances in robotics and artificial intelligence. What gained the most attention were the report’s contradictory findings on the effects of artificial intelligence (AI) and automation on jobs.

According to Pew, 48% of respondents feel that by 2025 AI and robotic devices will displace a “significant number of both blue- and white-collar workers—with many expressing concern that this will lead to vast increases in income inequality, masses of people who are effectively unemployable, and breakdowns in the social order.” The other 52% did not envision this bleak future. The optimists did not deny that the robots are coming, but they estimate that human beings will figure out new jobs to do along the way. As Hal Varian, chief economist for Google, explains:

“If ‘displace more jobs’ means ‘eliminate dull, repetitive, and unpleasant work,’ the answer would be yes. How unhappy are you that your dishwasher has replaced washing dishes by hand, your washing machine has displaced washing clothes by hand, or your vacuum cleaner has replaced hand cleaning? My guess is this ‘job displacement’ has been very welcome, as will the ‘job displacement’ that will occur over the next 10 years.”

The view is nicely summed up by another optimist, Francois-Dominique Armingaud: “The main purpose of progress now is to allow people to spend more life with their loved ones instead of spoiling it with overtime while others are struggling in order to access work.”

The question before us, however, is not whether we would like more leisure time, but whether the change in relations of production – yes, a Marxist question – will yield the corresponding emancipation from drudgery. In Marx’s utopia, where technological development reaches a pinnacle, one is free to “do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticize after dinner, just as I have a mind, without ever becoming hunter, fisherman, shepherd or critic.” The viewpoints above have this particular utopian ring to them.

Yet we should be very wary of accepting either view (technological utopianism or dystopianism) too quickly. Marx, for instance, was a highly nuanced and careful thinker when it came to theorizing about power, freedom, and economics, mostly because he recognized that any relations in the market are still, at bottom, social and political relations between people. In fact, if one assumes that increased automation automatically leads to greater personal time à la Marx, one misses the crucial point: Marx was describing his communist ideal. Until one reaches that point – if it is even possible – technological development that lessens labor time “does not in fact lead to a lessening of bound time for the producing population. Quite the contrary, the result of this unprecedented transformation and extension of society’s productive powers is the simultaneous lengthening and intensification (…) of the working day” (Booth, 1989). Thus even though I can run my dishwasher, my washing machine, and my vacuum cleaner at the same time, I am still working. In fact, given that in my household my partner or I do these tasks on the weekend or in the evenings, we are working “overtime”; so much for “spending more life time” together.

Indeed, the entire debate over the future of AI and automation is a debate we have really been having already; it just happens to wrap all of the topics neatly under one heading. For when we discuss which jobs are likely to “go the way of the dodo,” we ignore all of the power relations inherent here. Who is deciding which jobs go? Who is likely to feel the adverse effects of these decisions? Do the job destroyers have a moral obligation to create (or educate for) new jobs? Is there a gendered dynamic to the work? While I doubt that Mr. Varian’s responses were intended in gendered terms, they are in fact gendered. That this work was chosen as his example is telling. First, house cleaning is typically unremunerated work and not even counted in the “economy.” Second, these particular tasks are traditionally seen as feminized. Is it telling, then, that we want to automate “pink collar jobs” first?

When it comes to the types of work on the chopping block, we are looking at very polarized sets of skills. AI and robotics will surely be able to do some “jobs” better, where “better” means faster, cheaper, and with fewer mistakes. However, it does not mean “better” in terms of any other identifiable characteristic of the end product. A widget still looks like a widget. Thus “better” is defined by the owners of capital, who decide what to automate. We are back to Marx.

The optimistic crowd cites the fact that technological advances usher in new types of jobs, and thus innovation is tantamount to job creation. However, unless there is a concomitant plan to educate the new—and old—class of workers whose jobs are now automated, we are left with an increasing polarization of skills and income inequality. Increasing polarization means that the stabilizing force in politics, the middle class, is also shrinking.

The optimism, in my opinion, is the result of sitting in a particularly privileged position. Most of those touting the virtues of AI and robotics are highly skilled, usually white men, regarded as experts. Expertise entails a skill set, a good education, and a job that probably cannot be automated. As Georgina Voss argues, “many of the jobs resilient to computerization are not just those held by men; but rather the structure and nature of these jobs are constructed around specific combinations of social, cultural, educational and financial capital which are most likely to be held by white men.” Moreover, that these powerful few are dictating the future technological drives also means that the technological future will be imbued with their values. Technology is not value neutral; what gets made, who it gets made for, and how it is designed are morally loaded questions.

These questions gain even greater consequence when we consider that the creation on the other end is an AI. Artificial intelligence is an attempt at mimicking human intelligence, but human intelligence is not merely limited to memorizing facts or even inferring the meaning of a conversation. Intelligence also carries with it particular ideas about norms, ethics, and behavior. But before we can even speculate about how “strong” an AI the tech giants can make in their attempt at freeing us from our bonds of menial labor, we have to ask how they are creating the AI. What are they teaching it? How are they teaching it? And are they, from their often privileged positions, imparting biases and values to it that we ought to question?

The future of AI and robotics is not merely a Luddite worry over job loss. This worry is real, to be sure, but there is a broader question about the very values that we want to create and perpetuate in society. I thus side with Seth Finkelstein’s view: “A technological advance by itself can either be positive or negative for jobs, depending on the social structure as a whole. This is not a technological consequence; rather, it’s a political choice.”


Robots and Prejudice

At ThinkProgress Alyssa Rosenberg shares a lovely new short film about robots and prejudice:

No Robots from YungHan Chang on Vimeo.

Rosenberg draws a distinction between the representations of robots in this film and the scarier representations in much popular culture:

Often, when we see robots in popular culture, they’re actually more powerful than we are. If the Cylons were a metaphor for, say, Irish immigrants to the United States, they’d be telling a story about workers rising up from the slums and engulfing us all in whiskey and potatoes. These metaphors tend to legitimate the fears of privileged class rather than debunking them. But a movie like No Robots has a different power differential. The shopkeeper is angry at a robot who is physically smaller than he is, who is annoying rather than intimidating. He commits an act of terrible violence against that much more vulnerable actor. And then he discovers that things he’s conditioned to want to protect and find adorable—kittens—are emotionally dependent on the robot, who has been stealing milk to feed them. It’s a narrative that questions the shopkeeper’s prejudices and assumptions, rather than suggesting he’s right to be angry and afraid of a new element in his environment.

I think she may overstate the case: there are an awful lot of pop culture archetypes of robots as a vulnerable, altruistic underclass even in the West (remember AI? Wall-E?), and in Japanese culture the Terminator/Cylon archetype is far less prevalent than a view of robots as cute, cuddly, and benign. But still, on this blog at least we have certainly focused more on war-bots, and this film is a healthy reminder of the many ways robots can be used as metaphors for complex social relations and hierarchies. Kudos to the producers.


© 2021 Duck of Minerva
