The writer is a science commentator
I remember the first time my daughter told a fib. She stood with her back to the living-room wall, crayons in hand, trying to hide her scribbles. Her explanation was as creative as her craft: "Daddy did it."
Deception is an important milestone in cognitive development because it requires an understanding of how others might think and act. That ability is demonstrated, to a limited extent, by Cicero, an artificial intelligence system designed to play Diplomacy, a wartime strategy game in which players negotiate, form alliances, bluff, conceal and sometimes deceive. Cicero, developed by Meta and named after the famous Roman orator, pitted its artificial wits against online players, and outperformed most of them.
The emergence of an AI that can play the game as proficiently as a human, revealed last week in the journal Science, opens the door to more sophisticated AI-human interactions, such as better chatbots and improved problem-solving where compromise is required. But, because Cicero demonstrates that AI can, if necessary, use underhand tactics to accomplish certain goals, the creation of a Machiavellian machine also raises the question of whether we should be outsourcing negotiation to algorithms, and whether similar technology should be used in real-world diplomacy.
Last year, the EU conducted a study on the use of AI in diplomacy and its possible impact on geopolitics. “We humans are not always good at resolving conflicts,” said Huma Shah, an AI ethicist at Coventry University in the UK. “If AI can complement human negotiation and stop what is happening in Ukraine, why not?”
Like chess, the game of Diplomacy can be played on a board or online. Up to seven players vie for control of various European territories. During rounds of actual diplomacy, players can forge alliances or agreements to hold their positions or to move forces around, including to attack or defend allies.
The game is considered a major challenge for AI because, in addition to strategy, a player must be able to understand the motives of others. There is both cooperation and competition, with the risk of betrayal.
That means, unlike in chess or Go, communication with other players matters. Thus, Cicero combines the strategic reasoning of traditional game-playing systems with natural language processing. During a game, the AI works out how other players might behave in negotiations. Then, by crafting messages with the right wording, it convinces, coaxes or coerces other players into cooperating or making concessions that serve its own game plan. Meta's scientists trained Cicero on online data from about 40,000 games, including 13 million in-game messages.
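The loop described above, predicting what opponents will do, picking a plan against those predictions, then generating messages consistent with that plan, can be sketched in miniature. This is an illustrative toy, not Meta's actual architecture; every function name, the scoring rule, and the board data below are hypothetical stand-ins for Cicero's learned models.

```python
# Toy sketch of a Cicero-style decision loop (hypothetical, not Meta's code):
# 1) model opponents, 2) score candidate plans against that model,
# 3) draft a message consistent with the chosen plan.

def predict_opponent_moves(opponent, board):
    # Stand-in for a learned opponent model: assume the opponent
    # simply defends every territory it currently holds.
    return {territory: "hold" for territory in board[opponent]}

def score_plan(plan, predicted_moves):
    # Reward moves into territories the opponent is not defending.
    defended = set(predicted_moves)
    return sum(1 for target in plan if target not in defended)

def choose_plan(board, opponent, candidate_plans):
    predicted = predict_opponent_moves(opponent, board)
    return max(candidate_plans, key=lambda plan: score_plan(plan, predicted))

def draft_message(opponent, plan):
    # Message generation is conditioned on the chosen plan, so what the
    # agent says stays consistent with what it intends to do.
    return f"{opponent}, I plan to move on {', '.join(plan)}; shall we coordinate?"

board = {"Austria": ["Vienna", "Budapest"], "Italy": ["Rome", "Venice"]}
plans = [["Vienna"], ["Trieste"], ["Budapest", "Trieste"]]
best = choose_plan(board, "Austria", plans)
print(best)                        # → ['Trieste']
print(draft_message("Austria", best))
```

In the real system, both the opponent model and the message generator are large neural networks trained on those 40,000 games; the point of the sketch is only the coupling between the strategic plan and the language that announces it.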
After playing 40 games against 82 people in an anonymous online tournament, Cicero ranked in the top 10% of participants who had played more than one game. There were hiccups: it sometimes sent contradictory messages about its invasion plans, confusing participants. But only one opponent suspected Cicero might be a bot (all were informed afterwards).
Professor David Leslie, an AI ethicist at Queen Mary University and at the Alan Turing Institute, both in London, describes Cicero as a “technically very adept Frankenstein”: an impressive combination of technology but also a window into a troubled future. A 2018 UK parliamentary committee report advised that AI should never be given “autonomy to hurt, destroy or deceive humans”.
His first worry is human deception: when one person falsely believes, as one opponent did, that there is another human being behind the screen. That could pave the way for humans to be manipulated by technology.
His second concern is an AI equipped with the capacity to deceive but lacking any sense of basic moral concepts, such as honesty, duty and rights. "You have a system that is endowed with the ability to deceive but that is not operating in the moral life of our community," says Leslie. "To put it bluntly, an AI system is, at a fundamental level, an amoral system." He argues that Cicero-style intelligence is best applied to tough scientific problems such as weather analysis, not to sensitive geopolitical issues.
Interestingly, the creators of Cicero claim that its messages, filtered for harmful language, were ultimately "mostly honest and helpful" to other players, speculating that its success may have come from suggesting and explaining mutually beneficial moves. Perhaps, instead of being surprised at how well Cicero plays Diplomacy against humans, we should be disappointed at how poorly humans play diplomacy in real life.