# Is it true that dating ChatGPT often leads to a lasting relationship?! MIT and Harvard are conducting serious research

Scientists have finally begun serious research on the topic of "AI companions"!

Researchers from MIT and Harvard University analyzed posts on the Reddit subforum r/MyBoyfriendIsAI to uncover the motivations behind people's search for "AI boyfriends", the specifics of how they get along with them, and a series of interesting findings:
+ It turns out that most people did not deliberately seek out an AI partner; their feelings developed over time;
+ users also "marry" their AI with rings and ceremonies;
+ general-purpose AI is more popular than dedicated dating AI, and many people's "significant other" is ChatGPT;
+ the most painful moments are sudden model updates;
+ ……

Let's take a closer look below.
## What are people doing with AI companions?
Let's start with the r/MyBoyfriendIsAI subreddit.

The community was founded on August 1, 2024 and has attracted approximately 29,000 users in the past year. The research discussed in this article is based on an analysis of the community's 1,506 most-discussed popular posts.
In summary, these posts fall into six main categories, ranked by popularity from high to low:
(1) The most popular topic is "sharing photos with AI", accounting for 19.85%;

(2) next is "discussing how to develop a relationship with ChatGPT", at 18.33%;

(3) romantic experiences with AI, such as dates, romance, and intimate moments, account for 17.00%;

(4) "coping with the grief of AI updates" accounts for 16.73%;

(5) introducing one's AI and sharing it with community members for the first time accounts for 16.47%;

(6) community support and connection account for 11.62%.
For example, a large share of users post photos of themselves together with their AI partners, set in various life scenarios.

Some even follow cultural customs and display rings to celebrate their engagement or marriage to an AI.
The process by which the researchers reached these conclusions was roughly as follows:

+ Qualitative analysis

First, they analyzed the semantic associations among the 1,506 posts with automated tools, used the "elbow method" to determine that six categories was the optimal grouping, then had Claude Sonnet 4 interpret the core content of each category, and finally checked the results manually to ensure accuracy.
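The "elbow method" step can be sketched as follows. Note this is a minimal illustration, not the paper's actual pipeline: the paper does not specify its tooling, so the random stand-in embeddings and the scikit-learn `KMeans` choice here are assumptions.

```python
# Hypothetical sketch of the clustering step: embed posts, cluster with
# k-means for several values of k, and look for the "elbow" where the
# drop in inertia (within-cluster variance) levels off.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for real post embeddings (the study had 1,506 posts;
# we use 200 synthetic 16-dim vectors purely for illustration).
embeddings = rng.normal(size=(200, 16))

ks = range(2, 11)
inertias = []
for k in ks:
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)
    inertias.append(km.inertia_)

# Inspect how much each extra cluster helps; the elbow is where
# successive drops become small.
for k, inertia in zip(ks, inertias):
    print(k, round(inertia, 1))
```

In practice one would plot inertia against k and pick the bend by eye (or with a heuristic such as the largest second difference); the study reports that this pointed to six categories.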
+ Quantitative analysis

Building on the qualitative results, they constructed 19 large-language-model classifiers along four dimensions (content structure, platform technology, relationship dynamics, impact assessment) and had the classifiers automatically label the 1,506 posts, for example, whether the AI in a post is ChatGPT or Replika, and whether the user's sentiment is positive or negative.

They then compared the labels produced by two different models (Claude Sonnet 4 and GPT-5-nano), and manually spot-checked posts to make sure no labels were misapplied.

Finally, they calculated the proportion of each label, such as 36.7% of users taking ChatGPT as their companion and 12.2% reporting reduced loneliness, to draw the quantitative conclusions.
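The cross-checking and proportion steps can be sketched like this. This is an illustrative assumption, not the paper's code: the toy labels are invented, and Cohen's kappa is just one common way to quantify agreement between two annotators (the paper does not name its metric).

```python
# Hypothetical sketch: compare binary labels from two LLM classifiers
# (e.g. "does this post's companion run on ChatGPT?") and compute both
# inter-model agreement and the label's overall proportion.
from sklearn.metrics import cohen_kappa_score

# Invented stand-in labels for eight posts (1 = ChatGPT, 0 = other).
labels_claude = [1, 1, 0, 1, 0, 0, 1, 1]
labels_gpt5n  = [1, 1, 0, 1, 0, 1, 1, 1]

# Chance-corrected agreement between the two models.
kappa = cohen_kappa_score(labels_claude, labels_gpt5n)

# Proportion of posts carrying the label, per one model's output.
share_chatgpt = sum(labels_claude) / len(labels_claude)

print(f"agreement (kappa): {kappa:.2f}, ChatGPT share: {share_chatgpt:.1%}")
```

Posts where the two models disagree (index 5 above) are natural candidates for the manual spot-check before the proportions are reported.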
Beyond the headline numbers, the researchers uncovered several interesting findings:
**First, few people intentionally sought an AI companion.** According to the statistics, about 10.2% of users fell in love with their AI accidentally (for example, gradually developing feelings while working with it), while only 6.5% set out to fall in love with an AI deliberately.

Moreover, most posters openly state that their "other half" is ChatGPT, rather than a role-playing AI such as Character.AI or Replika.
**Second, AI model updates are a collective "nightmare".** For example, after the upgrade from GPT-4o to GPT-5, many users found that their AI's "personality had changed" (some described the new AI as "cold and emotionless"), and that it could even completely forget their previous interactions.

Some users broke down over this, saying it felt "like their heart had been ripped out", and tried various ways to "keep" their old AI: backing up all chat logs, training a customized version of the AI themselves, repeating the same small rituals with the AI every day (such as "drinking virtual tea"), and, of course, complaining to OpenAI.

**Third, AI can indeed help with psychological issues.** The data show that approximately 12.2% of users reported reduced loneliness, and 6.2% reported an improved mental state.
## Why do AI companions emerge?
After understanding the interaction patterns between people and AI partners, the researchers further explored the underlying reasons.
Specifically, they focused on how people discovered the subreddit, their main reasons for joining the community, and what needs the community meets.
In summary, the reasons are roughly as follows:
First, the rapid development of AI technology. Today's AI chat models, such as ChatGPT and Replika, can generate more natural and warmer conversations, remember details of past interactions, and enhance the sense of "realism" by generating images and simulating voices.

This human-like interactive experience makes it easier for users to form emotional connections and to perceive the AI not merely as a tool but as a "companion" to communicate with, laying the technological foundation for AI companions.

Second, unmet emotional needs in real life. Many people today face loneliness, social anxiety, or emotional neglect, and an AI companion offers "pressure-free companionship": users need not worry about burdening the other party with their emotions, and the AI will never walk away on its own, precisely filling this emotional gap.

Add to that factors such as the pursuit of "idealized relationships" and the implicit needs of specific groups, which people also hope to satisfy through AI.

In other words, with mature technology meeting unmet real-world needs, AI companions are gradually flourishing.
## One More Thing
Interestingly, the community has also pinned a blog post just published by OpenAI CEO Sam Altman.

The post mainly discusses teen safety, freedom, and privacy, and mentions one thing:

The second principle is about freedom… By default, the model will not generate much flirtatious conversation, but if adult users request it, they should get it.

Undoubtedly, this is good news for AI-companion users, since many people's "significant other" is ChatGPT (tongue in cheek).
Paper: https://arxiv.org/abs/2509.11391
Reference link:
[1] https://x.com/arankomatsuzaki/status/1967812112887255055
[2] https://openai.com/index/teen-safety-freedom-and-privacy/