A deeper dive into AI companions
Plus new research offers a tip for prompting LLMs
Quick Note
Today’s newsletter looks a little different. Last week we covered the rise of AI companions, but words are cheap, so I wanted to share some new data that quantifies their popularity. All of today’s stories come from news that dropped over the weekend.
Grok downloads increase 40% after the launch of companions

Generated by GPT-4o
After xAI launched unfiltered AI companions last week, Grok’s iOS downloads increased by 40% globally, to 171,000 daily installs. Daily revenue also rose 9%, to $337,000.
Key background: AI companions are chatbots designed for more personal conversations, unlike typical AI assistants that answer work-related questions. This controversial type of AI can hold casual conversations, give advice, and even role-play as real people.
The details:
The launch of Grok companions followed the July 9 release of Grok 4, an upgraded model that rivals the performance of current models from OpenAI and Anthropic.
Grok 4 boosted xAI’s daily revenue on iOS by 325% to $419,000 shortly after launch (it later dipped), and also increased daily downloads by 279% to 197,000.
The new Grok companions, which are only available to paying “Super Grok” subscribers, carried that momentum into the back half of this month.
Why it matters: This data is a little messy, but we can see a noticeable demand for AI companionship, especially among young people. More on that in our next story.
Overloading LLMs with information leads to worse results
Taking a quick break from our exploration of AI companions, I wanted to share findings from another study that may help you get better outputs from large language models (LLMs). Researchers at Chroma have discovered that including too much information in a single prompt can actually make LLMs perform worse.
Every AI model has a context window that determines the amount of text that it can “see” at one time while answering a question or continuing a conversation. Think of this like a model’s short-term memory. Tech companies typically increase context windows with each new generation of models, but Chroma’s researchers say that almost all LLMs show a decline in performance as they receive increasing amounts of text.
All of this goes to say — if you’ve been disappointed with AI chatbots and you’re using long prompts, try making them more concise and you might get better results.
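To make that tip concrete, here’s a minimal sketch of one way to keep prompts concise: instead of pasting everything you have into the context window, rank your reference material by relevance to the question and include only the top few pieces. Everything here is illustrative — the function names are my own, and the keyword-overlap scoring is a deliberately simple stand-in for whatever relevance measure you prefer (it is not a method from the Chroma study):

```python
def score_chunk(chunk: str, question: str) -> int:
    """Count how many distinct question words appear in the chunk."""
    q_words = set(question.lower().split())
    return sum(1 for w in set(chunk.lower().split()) if w in q_words)

def trim_context(chunks: list[str], question: str, max_chunks: int = 3) -> str:
    """Keep only the top-scoring chunks so the final prompt stays short."""
    ranked = sorted(chunks, key=lambda c: score_chunk(c, question), reverse=True)
    return "\n\n".join(ranked[:max_chunks])

# Hypothetical notes you might otherwise paste wholesale into a prompt:
notes = [
    "The Q3 report covers revenue growth in the APAC region.",
    "Our cafeteria menu changes every Tuesday.",
    "Revenue in APAC grew 12% quarter over quarter.",
    "The office parking garage closes at 10 p.m.",
]

# Only the two revenue-related notes survive the trim.
context = trim_context(notes, "How did APAC revenue change last quarter?", max_chunks=2)
print(context)
```

The idea is just to be deliberate about what reaches the model: a shorter, on-topic context tends to beat a long, unfiltered one, per Chroma’s findings.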
Study finds 72% of US teens have used AI companions

Generated by GPT-4o
A new study by the U.S. nonprofit Common Sense Media has found that 72% of teenagers have used an AI companion at least once.
Crunching the numbers:
This study surveyed 1,060 American teenagers between the ages of 13 and 17.
52% of participants said that they are regular users. Among that group, 13% chat with AI companions daily and 21% chat a few times per week.
About one-fourth of teens said they have never used an AI companion, and boys were slightly more likely than girls to fall into that never-used group (31% vs. 25%).
When asked why they use AI companions, 30% cited entertainment. On a more personal level, 17% said AI is always available when they need someone to talk to, and 12% share things with it that they wouldn’t tell friends or family.
Potential bias: Common Sense Media’s mission includes advocating for children’s welfare, so its framing can lean toward concerns about screen time and tech addiction. That doesn’t make the research invalid, but it does tend to emphasize risks over benefits.
The bottom line
About a year and a half ago, I interviewed several experts at UGA’s Institute for Artificial Intelligence about this topic. Our conversations led to two points of view:
The risks are obvious — some people will quickly develop an overreliance on AI chatbots once they turn to them for companionship. As we saw last week, this can lead to social isolation in the real world and other serious mental health issues.
On the other hand — people who struggle with social interactions in the real world could use AI companions to practice their conversational skills. They could then apply these skills in real-world conversations to become less socially isolated.
Like many topics in AI, there’s no clear right or wrong here. The only solid takeaway I can leave you with is that this is happening. For better or worse, data shows that people are turning to AI for social interactions and companionship.
Thanks for reading! We’ll shift back to our regularly scheduled programming tomorrow. My goal is to make these emails as easy to read as possible — if there’s anything I can do for you to achieve that goal, let me know!
See you tomorrow,