Welcome to Chain of Thought
Today: AI faces prisoner's dilemma games and tennis

Quick Note
Struggling to get the most out of large language models (LLMs)? Refer five readers to this newsletter and I’ll send you my guide to writing better prompts. Check back tomorrow to get your custom referral link.
Study finds AI models have distinct strategic fingerprints

Generated by GPT-4o
Researchers from King’s College London and the University of Oxford have discovered that language models from OpenAI, Google and Anthropic exhibit unique behavior when faced with prisoner’s dilemma games.
Key background: Prisoner’s dilemma games present scenarios where two players must independently decide whether to cooperate with or betray each other. If both cooperate, each receives a moderate reward, but a player can do even better by betraying a cooperator. If both betray, each ends up worse off than if they had both cooperated.
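To make the payoff structure concrete, here’s a minimal Python sketch of a single round. The point values below are the classic textbook payoffs and are an illustrative assumption; the exact values used in the study aren’t quoted here.

```python
# One round of the prisoner's dilemma. "C" = cooperate, "D" = defect (betray).
# These payoffs are the classic textbook values, not necessarily the study's.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: a moderate reward for both
    ("C", "D"): (0, 5),  # cooperate against a betrayer: the worst outcome
    ("D", "C"): (5, 0),  # betray a cooperator: the best individual outcome
    ("D", "D"): (1, 1),  # mutual betrayal: both do worse than cooperating
}

def play_round(my_move: str, their_move: str) -> tuple[int, int]:
    """Return (my_score, their_score) for one simultaneous round."""
    return PAYOFFS[(my_move, their_move)]

print(play_round("C", "C"))  # (3, 3)
print(play_round("D", "C"))  # (5, 0): the temptation payoff that rewards betrayal
```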
This study observed:
How AI models strategize during a prisoner’s dilemma and how they behave after a round finishes.
Google’s Gemini showed the most strategic flexibility, while OpenAI’s models and Anthropic’s Claude were more likely to cooperate every time.
Gemini was also the least forgiving: its strategic fingerprint* showed it was the least likely to cooperate again after being betrayed.
Why it matters: The study offers some evidence that AI models are capable of genuine strategic reasoning, rather than just mimicking memorized strategies (known as pattern matching). And while AI models don’t have personalities in the way humans do, the findings suggest that different model “personalities” may be better suited to different tasks.
What are fingerprints?
Fingerprints are subtle differences in the way a model writes or makes decisions. Even if two systems are trained the same way, tiny differences can cause them to behave slightly differently. These patterns can be used to tell one AI apart from another, kind of like how handwriting or speech can identify a person.
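As a rough illustration (my own sketch, not necessarily how the researchers measured it), a behavioral fingerprint can be estimated from a game history as the probability that a model cooperates after each possible outcome of the previous round. An unforgiving player, like Gemini in this study, shows a near-zero cooperation rate after being betrayed.

```python
from collections import Counter

def fingerprint(my_moves: list[str], their_moves: list[str]) -> dict:
    """Estimate P(cooperate | previous round's outcome) from a game history.

    "C" = cooperate, "D" = defect. A forgiving player still cooperates after
    the outcome ("C", "D"), i.e. after being betrayed; an unforgiving one
    does not. This is a simplified illustration, not the paper's exact metric.
    """
    outcomes = Counter()      # how often each previous-round outcome occurred
    cooperations = Counter()  # how often "I" cooperated right afterwards
    for i in range(1, len(my_moves)):
        last_round = (my_moves[i - 1], their_moves[i - 1])
        outcomes[last_round] += 1
        if my_moves[i] == "C":
            cooperations[last_round] += 1
    return {last: cooperations[last] / n for last, n in outcomes.items()}

# A tit-for-tat-style history: each move copies the opponent's previous move.
mine   = ["C", "C", "D", "C", "D"]
theirs = ["C", "D", "C", "D", "C"]
print(fingerprint(mine, theirs))
# {('C', 'C'): 1.0, ('C', 'D'): 0.0, ('D', 'C'): 1.0} -- never forgives a betrayal
```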
Tennis players criticize AI use at Wimbledon

Source: Wimbledon
Competitors at the annual Wimbledon tennis tournament are criticizing the competition’s new AI line judge, which replaced human line judges at the London event for the first time.
Key background: Professional tennis tournaments have been experimenting with electronic line calling systems (ELCs) since 2021, aiming to replace human line judges who determine whether a ball is in or out.
What’s happening:
British players Emma Raducanu and Jack Draper have criticized the technology for making incorrect calls that cost them points.
American Ben Shelton was asked to speed up one of his matches because the system would stop working as the sun went down.
In another match, between Britain’s Sonay Kartal and Russia’s Anastasia Pavlyuchenkova, the ELC failed to make a call when a ball landed out. Wimbledon apologized, attributing the incident to “human error”: the system had been accidentally switched off.
The bottom line: Automated officiating is experiencing growing pains at the world’s oldest professional tennis tournament. The technology will likely remain in use, but clearly needs some refinement to reach the level of accuracy that the pros are looking for.
More trending news
Updated version of xAI’s Grok criticizes Democrats and Hollywood’s “Jewish executives”
Meta reportedly recruits Apple’s head of AI models, continuing aggressive poaching spree of top talent
AI coding platform Cursor apologizes for unclear pricing changes that upset users
Thanks for reading! As you can see, this newsletter’s design is still a work in progress. My goal is to make these emails as easy to read as possible, so if there’s anything I can do to improve your reading experience, let me know!
See you tomorrow,