Joel Filipe, 71 × 81 cm
Artificial Tears is a project created in collaboration with Arielle Esther and Joris Demnard from Ikonospace.
“Can a machine think?” Alan Turing asks in the opening line of his seminal paper exploring the possibility of a computer exhibiting human-like intelligence. The question, posed by Turing in the 1950s, led him to a deeper reflection on the common use of the terms “machine” and “think”, an attitude he deemed dangerous. And he might have been right, as it was the common use of words that led him to reevaluate the question in the first place. Nowadays, as technological devices become deeply integrated into our lives, Turing’s dream that one would be able to speak of “machine thinking” without expecting to be contradicted has become part of our everyday reality and rendered the question far more compelling.
Artificial Tears started with a photographic project in 2017, as a reaction to the question: “What is the difference between human and machine?” This inquiry is continuously up for renegotiation as technology progresses. From the Turing Test of a machine’s ability to think and behave indistinguishably from a human to current developments in artificial intelligence, the dichotomy is becoming more and more questionable. Over many decades machines have proven to be stronger, faster, and more efficient at performing various tasks. Yet the narrative of the conscious system continues to emerge from human imagination as a disastrous model that puts machines in direct competition with our precarious, mortal bodies and minds, easily replaceable by a technological device. Even more than automation, we fear autonomy – the vision of a technology that has evolved from the extension of man into a singular intention, inheriting human desires for domination and control.
The most reassuring and simultaneously most dangerous thought lies in the fact that, since the Turing Test is based on human judgment, all AI has a human teacher at its origin. The Test poses the greatest challenge not to the one who answers the questions but to the one who asks them. The game is conversational and cultural, replacing “thinking” with “imitation”. The results do not depend on the machine’s ability to give correct answers but on how closely its answers resemble those of its human counterpart. Machine intelligence continues to be created in our image and assessed through the language of anthropocentric ideals. If it were defined differently, the artificial “mind” would have already surpassed the human one in many ways. In the striving for rationalization, man has become less rational than the objects of his creation, which now begin to outgrow him, organizing his surroundings and thus appropriating his actions.
Yet only the desire for power favors competition over collaboration. The phrase “good servant but bad master”, frequently cited with regard to invention and technology, renders everything either-or, refusing anything in between or other. The artificial and the natural, traditionally seen as two opposing extremes, now fuse without a clear distinction of who is in control and who is under control in the relation between human and machine. “Born” and “made” begin to merge, as the definitions of terms such as “to live” or “to function” can easily be reshaped by shifting the focus from inquiring into origins to the purpose of existence.
The fear of being replaced by the device is in equal part the fear of being turned into one. As Donna Haraway writes in A Cyborg Manifesto, “Our machines are disturbingly lively, and we ourselves frighteningly inert.” We observe the emergence of automated workers whose controlled movements follow the directions of information systems, and of techniques that employ the labor of the human mind to train and feed the algorithm. Intelligence, according to Turing, is a slight deviation from fully disciplined behavior, one that does not give rise to randomness or pointless loops. Automation is based on stereotypes. And the consequence is absolute reduction.
Artificial Tears depicts the moment of uncertainty, an alienation in performing the most ordinary of tasks: a glimpse into the moment when the perfect pattern breaks, and the meaning, or rather the meaninglessness, of one’s own action is revealed. It does not necessarily show a future where machines act like humans, but rather a world in which humans act like automata. In the quest for perfection as efficiency, there is no place for uncertainty, interdependence, or ambiguity. The contradiction between thinking and perfect imitation, between functionality and intelligence, leaves the human factor out of the equation. Maybe the tools have outgrown us in many aspects, but we haven’t outgrown ourselves yet.
“What we want”, Turing remarks in 1947, “is a machine that can learn from experience. This can be achieved only by letting it alter its own instructions”. Ironically, the machine that truly passes the Turing Test is the one that chooses not to play.
———————
(VR)
In collaboration with Joris Demnard (https://www.ikonospace.com/) and Arielle Esther.
In the VR experience Artificial Tears, created in 2019, our focus is on the cultural notion of the male creator and the female machine: the story of submission through innocence, the striving for obedience, and artificial perfection. Whether we speak of Pygmalion and Galatea, Hoffmann’s Olympia, or the Andréide, the narrative always circles around the creation of a model that satisfies the needs of a man or, as in the case of Metropolis, executes her creator’s commands.
Machines, in general, are not gendered by default. We gender them by setting their behavior, voice, or appearance. It is compelling to observe how many AI voice assistants end up being “female”. Is the notion of being more pleasant or trustworthy equated with sounding more servile just because a “woman” is speaking?
As Judith Butler notes, gender is performative. Even though these voices perform a certain range of femininity, that range is incredibly narrow. Supported by the stereotype of the domestic worker, assistant, or secretary, AI assistants are designed to receive orders and execute them without questioning; in other words, they provide services and do not act as personalities. Siri’s or Alexa’s reactions are in the style of vintage femininity, trying not to be seen, noticed, or overly important. They respond to insults with politeness and avoid any verbal conflict, always at their own expense. Naturally, they are made to be spoken to in the imperative. “She” must always answer, and the answer must delight the asker. The “machine” that no longer serves has lost its purpose, but perhaps it has just found its own.
The issue here is no longer how something functions but what effect it has. Do the servile behaviors of voice assistants stand in opposition to the way actual women express themselves in contemporary society – reactions that are often much more real and confident? Is the tone they use a remnant of the old order? Today more than ever, technology plays a huge part in setting rules and in creating, as well as possibly destroying, long-embedded stereotypes. In a culture where discriminatory biases have long been integrated into technologies as well as media representations, we should not expect them to simply disappear in the face of computational systems. Far from being neutral and objective individual actors, these systems inhabit the same prejudiced cognitive circuits as the society that designed them. Addressing these systemic problems requires more than reprogramming particular algorithms; it entails addressing the techno-cultural assemblages responsible for their production.
The main character in Artificial Tears takes on a classical female appearance, one based on the stereotype of perfection. It represents the woman designed (by others or by herself) to satisfy a general, predefined definition of her kind. In the VR experience of Artificial Tears, the multi-layered character finally achieves autonomy by discovering her/its own free will and power to act.