Chinese regulators have drafted new measures to govern artificial intelligence chatbots. The Cyberspace Administration of China published the draft rules today, focusing on preventing software from manipulating human emotions, including risks of self-harm. The public can submit comments on the draft until the end of January.
The regulations specifically target services that simulate human personality, typically using text or audio to build bonds with users. The government frames the measures as a step toward emotional safety: service providers must have human staff intervene if a user mentions suicide, and strict rules apply to minors using these tools.
Providers will need to verify guardian consent before allowing children to use emotionally interactive systems, and platforms with large user bases must also complete security assessments. Experts describe this as the first attempt anywhere in the world to regulate emotional responses in AI. The development comes as several Chinese technology firms prepare for public listings.