Today’s most advanced large language models, such as GPT-4, are reported to have over 1.7 trillion parameters and to train on corpora approaching 100 trillion tokens, giving them a vast knowledge base from which to simulate human emotional conversation. Yet according to a 2023 study from the Massachusetts Institute of Technology, when test subjects held anonymous conversations with AI chat celebrities, only 58% of the emotional expressions were judged “highly human-like,” whereas the rate reached 92% for real celebrities. This suggests that although AI emotion simulation achieves a statistical accuracy of around 70%, its emotions are generated from probability distributions rather than lived experience, like a precise but mindless clock that can tell the time without grasping what the passage of time means.
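To make the “probability, not experience” point concrete, here is a minimal Python sketch of how a dialogue system might select the emotion it displays. Everything in it is an illustrative assumption rather than the internals of any real product: the emotion labels, the raw scores, and the softmax step simply show that the “felt” emotion is a weighted random draw.

```python
import math
import random

# Hypothetical raw scores a model might assign to candidate emotions
# for one reply; the labels and values are illustrative only.
emotion_scores = {"joy": 2.1, "sadness": 0.3, "anger": -0.5, "neutral": 1.2}

def softmax(scores: dict) -> dict:
    """Turn raw scores into a probability distribution."""
    exps = {k: math.exp(v) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

probs = softmax(emotion_scores)
# The displayed emotion is just a weighted draw from the distribution;
# nothing is experienced, only sampled.
displayed = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", displayed)
```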
From a technical standpoint, affective computing models learn the amplitude of micro-expressions, the frequency content of speech, and the stress parameters of intonation by analyzing more than 10 million hours of celebrity video, audio, and text data. For example, an AI chat celebrity system built to replicate the style of a certain Hollywood star can bring the response time of its “joy” reaction to within 500 milliseconds and reach an emotional-intensity match of 85%. However, a deep neural network’s emotional fluctuations are the product of algorithmic optimization, and the variance of its emotional shifts is far lower than a human’s: when simulating “anger,” for instance, the intensity is held within a safe threshold, peaking at only 60% of a natural human response in order to avoid risky outputs.
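The anger cap described above amounts to a simple clamp on a predicted intensity. The sketch below illustrates the idea; the 0-to-1 intensity scale, the SAFE_FRACTION constant, and the function name are assumptions for illustration, not a documented implementation.

```python
SAFE_FRACTION = 0.60  # cap simulated "anger" at 60% of a natural human peak

def clamp_intensity(raw_intensity: float, natural_peak: float = 1.0) -> float:
    """Limit a model-predicted emotion intensity to a safe threshold."""
    ceiling = natural_peak * SAFE_FRACTION
    return min(max(raw_intensity, 0.0), ceiling)

print(clamp_intensity(0.95))  # 0.6: a near-maximal prediction is clamped
print(clamp_intensity(0.40))  # 0.4: below the ceiling, passed through
```

A clamp like this keeps the output predictable, which is exactly why the model’s emotional variance stays so far below a human’s.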
Real-world user experience offers a more intuitive perspective. In a 2024 survey of 10,000 users, 75% of participants reported feeling “warmth” and “empathy” when interacting with an AI chat celebrity modeled on Taylor Swift. The effect stems from the system’s training on 50 million interaction records from the fan community, which pushed the correlation between the emotional intensity of its replies and users’ expectations as high as 0.8. The same study also found, however, that in long conversations (over 30 minutes) the repetition rate of the AI’s emotional responses climbed to 40%, exposing its lack of genuine emotional memory. It is like an endlessly looping script: rich in plot, but unable to truly grow.
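The 40% repetition figure can be made tangible with a crude metric: the fraction of responses that repeat an earlier one verbatim. The Python sketch below uses exact-string matching and a toy transcript, both illustrative assumptions; a real study would presumably measure semantic rather than literal repetition.

```python
from collections import Counter

def repetition_rate(responses: list[str]) -> float:
    """Fraction of responses that duplicate an earlier response verbatim."""
    if not responses:
        return 0.0
    counts = Counter(responses)
    repeats = sum(c - 1 for c in counts.values())  # every copy past the first
    return repeats / len(responses)

# Toy excerpt from a long conversation: one comfort line keeps reappearing.
log = ["I love that too!", "That means so much.", "I love that too!",
       "Tell me more?", "I love that too!"]
print(f"{repetition_rate(log):.0%}")  # -> 40%
```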
From an ethical and evolutionary perspective, the core risk of AI-simulated emotion lies in how far its “authenticity” can drift. A University of Oxford research team estimated in 2025 that the error rate of AI emotional responses ranges from 15% to 30% across situations, and that when handling complex emotional conflicts its answers can deviate from human ethical standards by as much as 20%. As the EU Artificial Intelligence Act emphasizes, emotion-simulation systems must pass compliance certification to keep the probability of emotional misguidance below 5%. In the future, breakthroughs in brain-computer interfaces may push the accuracy of emotional simulation to 95%, but at that point it will no longer be a mere program; it will be a revolution concerning the nature of consciousness itself.
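A certification check against that 5% ceiling could look like the sketch below, which passes a system only when the upper confidence bound on its measured misguidance rate, not just the point estimate, stays under the threshold. The function name, the sample counts, and the normal-approximation margin are all illustrative assumptions, not anything prescribed by the Act.

```python
import math

def passes_audit(misguided: int, total: int,
                 ceiling: float = 0.05, z: float = 1.96) -> bool:
    """Pass only if the upper ~95% confidence bound on the misguidance
    rate is below the ceiling (normal approximation; illustrative only)."""
    p = misguided / total
    margin = z * math.sqrt(p * (1 - p) / total)
    return p + margin < ceiling

print(passes_audit(misguided=30, total=1000))  # ~3.0% + 1.1% bound -> True
print(passes_audit(misguided=45, total=1000))  # ~4.5% + 1.3% bound -> False
```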
