Robotics is one of the fastest-growing fields worldwide, with various forms of artificial intelligence being developed with the primary goal of aiding humans in one way or another. This mirrors software we have already grown accustomed to, such as Siri, online bots, and the common GPS system. These programs are capable of learning, whether the information is as simple as an address or as complex as a social interaction.
Robotics has made enormous gains since its first sinister introduction to the public in the 1927 film Metropolis. Within this film, a robot is designed to replace Maria, an insurrectionary worker attempting to overthrow the factory owner. The robot's sole purpose is to destroy Maria's credibility and turn the workers against her, thereby maintaining the social order.
In this film, the motions and expressions of the robotic Maria are twitchy and inhuman even by the standards of early cinema. Above all else, this underlines the primary problem most people have with artificial intelligence: aspects of robotics or AI can now pass as human, but some part of it still isn't quite right.
This is where human bias steps into our perception of artificial intelligence: the lack of smooth emotional communication.
AI software has advanced to the point where it can learn and anticipate your needs; however, a component is still missing when communicating with programs such as Siri: emotionality. This makes sense, as these developments have come predominantly from a male-dominated field that holds traditional values such as logical thinking, utility, and a lack of emotionality. Not to mention that most of the AI software we have come to know and love is presented as female: Siri, Microsoft's Ms. Dewey, Cortana, chatbot TayAI, and now the robot receptionists Nadine in Singapore and Aiko Chihara in Tokyo.
Katherine Cross of The Establishment observes that in all of these cases, the voices and personalities of these programs are distinctly feminine-coded. Soft-spoken even under abuse, polite, cheerful, and above all subservient, these developments in artificial intelligence reflect male desires for the perfect woman. As her article details, and as we can gather by observing any group of boys wielding technology, these programs are subject to inappropriate anger and sexual comments from their users simply because of their feminine voices. They are unable to retort, however, only apologizing for their misunderstanding due to programming limitations.
What this reflects is a problematic assumption on the part of male users that sexual harassment is acceptable under these circumstances, because the program is not a "real person." There is a clear danger, however, of these thinking patterns spilling over into interactions with women in daily life, since such users evidently consider the behavior acceptable so long as it is directed at something the user considers an object of his desire.
To combat the problematic implications of this behavior, some programmers are developing programs that can retort when subjected to abuse. Microsoft's decision to program Cortana not to tolerate sexual abuse has already drawn objections from male users, whose primary goal seems to be not analyzing the underpinnings of sexual harassment in society but defending themselves by claiming that they personally would never talk to a real person in such a manner.
While this does not combat the underlying issue of male users seeing themselves as having an inalienable right to harass feminine-coded programs, the step is nonetheless an important move on the part of a male-dominated tech industry to counteract misogynistic behavior. Only time will tell whether the trend will continue, but here's hoping.