OpenAI Chatbot Exhibits Fixed Language Quirk in Chinese Output, Causing Discontent Among Chinese Users
OpenAI's ChatGPT and other models frequently use the expression "I will steadily catch you" in Chinese conversations, regardless of whether users ask math questions or generate image prompts.
This quirk appears to be a fixed output pattern formed during training or alignment, and it has become a frequent target of complaints among Chinese users.
Much like the English-language model's earlier obsession with the word "goblin," this quirk in Chinese output has further amplified users' frustration that the model's style is out of control.
Source: Public Information
ABAB AI Insight
OpenAI has intervened multiple times before to address language quirks in its models, including instructing the programming model Codex to stop mentioning mythical creatures such as "goblins" and "gremlins." The current "I will steadily catch you" phenomenon in Chinese likewise stems from overemphasis on specific friendly, reassuring expressions during training, amplified through cycles of human preference data.
In terms of capital strategy, OpenAI is correcting these tics through rapid iteration of its alignment mechanisms, shifting engineering resources from general capability expansion to multilingual style control. The motivation is to reduce the risk of regional user attrition and maintain global brand consistency, while avoiding negative public sentiment that could affect enterprise subscription renewals and invite regulatory scrutiny.
As with the rapid rollback of GPT-4o's early excessive sycophancy after user complaints, and the fix for Codex's goblin mania, the multilingual deployment of large models is shifting from an English-centric approach to deliberate control of cultural adaptation.
Essentially, this is a technological substitution: style alignment in non-English languages lags behind capability leaps, so localized expression drifts from "natural" to "templated." The mechanism involves biases in the pre-training corpus and insufficient cross-language generalization of RLHF reward functions, forcing OpenAI to keep investing resources to move outputs from "probability quirks" back to natural interactions that meet local users' expectations.
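The preference-cycle amplification described above can be illustrated with a toy simulation. This is a minimal sketch under a simplifying assumption (not OpenAI's actual pipeline): if a reward model grants any reply containing a pet phrase a fixed log-odds bonus, repeated preference fine-tuning rounds compound that bonus until a rare stylistic choice becomes near-universal.

```python
import math

def amplify(p0, bonus=0.5, rounds=10):
    """Toy sketch of preference-cycle amplification (hypothetical numbers).

    The model emits a 'reassuring phrase' with probability p0. We assume a
    reward model that adds a fixed log-odds bonus to replies containing the
    phrase, so each fine-tuning round shifts the phrase's logit upward.
    Returns the phrase probability after each round.
    """
    logit = math.log(p0 / (1 - p0))
    probs = [p0]
    for _ in range(rounds):
        logit += bonus  # reward bonus compounds in log-odds space
        probs.append(1 / (1 + math.exp(-logit)))
    return probs

# A phrase that starts as a rare 2% stylistic choice comes to dominate.
probs = amplify(0.02)
print(f"{probs[0]:.1%} -> {probs[-1]:.1%}")
```

The point of the sketch is the compounding: no single round looks dramatic, but because each cycle of preference data is generated by the already-shifted policy, small per-round biases accumulate into the templated output users complain about.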
ABAB News · Cognitive Law
The smarter the model, the more its quirks read as personality, until they drive users away. Global capabilities are easy to access; local tone is hard to tune. Cultural adaptation is the true moat.
What users dislike is not the AI itself but its awkwardly repetitive "enthusiasm."