
From Typewriters to Decision-Makers

BY Aleksandra Przegalińska

How could a machine possibly assist in making the decision to pollute a pond with toxic waste, or to fire an employee with a bad performance record?


On November 25, 2008, “The New York Times” published an article by Cornelia Dean that went largely unnoticed around the world. “A Soldier, Taking Orders From Its Ethical Judgment Center” was based on an interview that Dean conducted with Dr. Ronald C. Arkin. In it, Dr. Arkin described his work for the U.S. Army, which involved designing robot drones, mine detectors, and sensing devices. He also mentioned that his greatest dream was to construct an autonomous robot capable of acting on its own on the battlefield, making decisions without taking orders from humans, and he claimed that work on the technology had been going on for several years.

While the concept itself may sound mad, it is in fact neither new nor surprising to people who have participated in similar projects for many years. The idea of creating systems that would assist us in making strategic military decisions and help us solve problems on the battlefield, as well as send armies of robots to war, is deeply rooted in the post-World War II history of the West. The fifties and sixties were the era when the idea enjoyed its greatest popularity, at the height of the Cold War arms race between NATO, led by the United States, and the Soviet Union and Warsaw Pact countries. It was then that cybernetics was born in the laboratories of American universities. This field of science, which takes its name from the Greek word kybernán (to steer or to control), studies methods of controlling complex systems based on the analysis and feedback of information, in the belief that the more advanced parts of the globe require effective models for strategic planning and risk management in order to protect themselves from enemy invasions. As Phil Mirowski notes in Cyborg Agonistes, game theory and operations research (OR), which laid the groundwork for cybernetics, were greatly popular in the 1950s because they offered the tantalizing hope of creating a unifying theory for organizing the post-war world: synergy between dynamically developing scientific fields, a stable economy, and the rational management of mankind’s growing resources.

It seemed natural that man, faced with significant advances in technology in a world threatened by the unknown, would be forced to work with allies that he himself had created ― machines. The debate was about the nature of that relationship. The first major project was a concept created by Alan Turing. In it, a human was to be paired with a computing machine. The human was responsible for management, while the machine, as the “more intelligent” partner, was to analyze data and present potential solutions. Other solutions were proposed as well, ones completely different from Turing’s imposed division of roles. One of them was formulated by the neuroscientist and information theorist Donald MacKay, who was the first to introduce Darwinism into cybernetics. Aside from the standard OR instrumentarium, his descriptions of machines included such words as “adaptation” and “purpose”. MacKay rejected the idea that humans are composed of subsystems that could be replicated outside the body, nor did he accept the notion that human action and thought (and thus the actions and thoughts of a machine) were imaginable outside man’s natural environment. He proposed that scientists focus on the purpose of actions and on analyzing behavior that would enable us to survive. In the context of the Cold War, MacKay’s concept singlehandedly revolutionized the notion of the enemy ― the other. He wanted to teach machines something that humans, for obvious reasons, could never comprehend: he wanted them to think that “the other is me”. Unfortunately, the concept was never fully realized in practice.

A decade later, the lack of quantifiable results led to a reevaluation of the concept of creating a perfect fighting machine that would replicate many human functions while remaining intellectually superior. Subsequent anthropomorphic robot projects would grind to a halt as soon as the machine was expected to learn, instead of just performing calculations or tasks. The U.S. Army responded by gradually reducing funding for universities contracted to conduct research on “the terminator”. But just as the source of government funding was drying up, the burgeoning field caught the interest of American businesses, known for their investments in effective solutions rather than ambitious ones. Unlike the military, the business world did not demand that the machine become a “better human”. The 1970s thus became a booming era for a particular component found in lethal autonomous robots, namely decision support systems (DSS).

Simply put, these systems rely on “if–then” logic. Most consist of three basic parts: a database, which stores the system’s knowledge, an inference engine, which serves as the brain of the system, and a user interface. The decision-making process in a DSS assumes that a good decision can only result from a proper decision process. In other words, if the mechanism functions correctly, then the decision must be correct. This transparent logic suggested that DSSs could be useful in business sectors that saw great development after the war: finance and health care, as well as administration and taxes. But the more successfully these DSSs handled simple, routine tasks, the more was expected of them. As opposed to their technological ancestor, the typewriter, these machines were expected to run the office, not just help out.
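To make the “if–then” logic concrete, here is a minimal sketch in Python of such a system: a handful of rules standing in for the database, a simple forward-chaining loop playing the role of the inference engine, and a print statement in place of the user interface. The rules and facts (a toy loan-approval decision) are invented purely for illustration.

```python
# A minimal sketch of a decision support system's "if-then" core:
# a store of rules (the "database"), a forward-chaining inference
# engine (the "brain"), and a trivial stand-in for the user interface.
# All facts and rules below are hypothetical, for illustration only.

class Rule:
    def __init__(self, conditions, conclusion):
        self.conditions = conditions  # facts that must all hold ("if")
        self.conclusion = conclusion  # fact added when they do ("then")

def infer(facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule.conclusion not in facts and all(c in facts for c in rule.conditions):
                facts.add(rule.conclusion)
                changed = True
    return facts

# "Database": illustrative rules for a toy loan-approval decision.
rules = [
    Rule(["stable income", "no prior default"], "low risk"),
    Rule(["low risk", "loan amount within limit"], "approve loan"),
]

# "User interface": the operator supplies the known facts and reads the answer.
known_facts = ["stable income", "no prior default", "loan amount within limit"]
derived = infer(known_facts, rules)
print("approve loan" in derived)  # True: the decision follows from the rules
```

The point of the sketch is the article’s own observation: if each rule fires correctly, the final decision is treated as correct by construction.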

Among the decision-making systems in use today, there is a group known as “expert systems” that enjoys the significant trust of society. This group is unique in that, according to many specialists, it has come to resemble humans. Expert systems, which are more or less similar to DSSs, differ from them in that they are capable of “sparking their imaginations”, just like humans. What this means is that they are able to function in complex systems about which they do not have complete knowledge. Instead of analyzing an enormous number of parameters, they take small steps, testing solutions previously acquired in similar situations. In order to achieve the main goal, these systems set intermediate goals based on tests, and complete these checkpoints before running subsequent procedures. But because of their flexibility, expert systems are sometimes incapable of identifying obviously incorrect scenarios ― or at least scenarios that humans would consider incorrect.
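One common way such goal-directed behavior is implemented is backward chaining: the system starts from the main goal and works backwards, establishing intermediate goals one at a time. The sketch below assumes that approach; the goals and rules are hypothetical, chosen only to show the pattern of checkpoints described above.

```python
# A minimal backward-chaining sketch: instead of evaluating every parameter,
# the system works backwards from a main goal, setting intermediate goals
# and confirming each one before moving on. All goals are illustrative.

RULES = {
    # goal: subgoals that must be established first
    "ship product": ["tests passed", "release approved"],
    "tests passed": ["build succeeded"],
}

KNOWN = {"build succeeded", "release approved"}  # what the system already knows

def achieve(goal, depth=0):
    """Try to establish a goal by establishing its subgoals."""
    print("  " * depth + f"checking: {goal}")
    if goal in KNOWN:
        return True
    subgoals = RULES.get(goal)
    if subgoals is None:
        return False  # no knowledge about this goal; the system gives up
    return all(achieve(sub, depth + 1) for sub in subgoals)

print(achieve("ship product"))  # True, reached through intermediate goals
```

The same mechanism explains the weakness noted above: a goal that looks absurd to a human is accepted as long as every intermediate check happens to pass.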

Despite these shortcomings, expert systems have been flooding the service market for the past two decades. Their quiet, barely noticeable presence can be detected in health care, online commerce, and banking. Over the past five years (more or less since Dr. Arkin began building his “mechanical soldier”), some have begun proposing expert system applications in the field of business consulting and even ethics and psychology. As early as the late 1990s, some scientists predicted that expert systems capable of learning could not only guide executives in making business decisions, but even someday serve an important educational role by training future generations of employees. But as opponents of DSSs are quick to point out, employees and resources aren’t quite the same thing. How could a machine possibly assist in making the decision to pollute a pond with toxic waste, or to fire an employee with a bad performance record? Such proposals have provoked understandable opposition. Omar E.M. Khalil noted in his 1993 paper Artificial Decision-Making and Artificial Ethics: A Management Concern that expert systems cannot make independent decisions involving ethics, or even assist humans in making such decisions, because they lack emotions and values, which makes them completely inadequate in assessing a situation. Daniel Dennett, a one-time friend and advocate of cyborgs, also spoke out against expert systems, arguing that they cannot manage like humans because they do not live like humans. The warnings of these opponents appear to be falling on deaf ears. It remains a fact that many people already working with such systems and making decisions with their help prefer the collective responsibility the systems offer to putting their own name on the line. They are convinced that “the system’s decision” is more rational than any they could make themselves.

It’s interesting that a sector as young as the IT industry has managed to develop, almost independently and in such a short time, a new, ethical professional role (along the lines of a doctor, manager, soldier, and perhaps someday even a lawyer), whose competencies are defined by how closely its practitioners work with the system. Unlike the poignant story in Blade Runner, where a replicant longs to become a human, this one features a “replicant” teaching a human how to be human, subjecting him to a Voight-Kampff test in reverse. But perhaps the irony is unfounded. After all, self-improvement techniques are growing increasingly popular, and the role of the therapist is being replaced by that of the coach. On the other hand, a “sensitive programmer” from Stanford University has been teaching his pet robot to pick up on mechanisms of repression identified by psychoanalysts, which can be detected and visualized through MRI technology. In situations such as these, it is in fact difficult to decide who should be teaching whom.

translated by Arthur Barys


Text available under a Creative Commons BY-NC-ND 3.0 PL license.