PANews reported on November 22 that Twitter user @r_cky0 revealed that when he used ChatGPT to generate code to develop a blockchain automatic trading robot, a backdoor was hidden in the code recommended by GPT, which sent the private key to a phishing website, causing him to lose about $2,500. Later, SlowMist founder Yu Xian @evilcos confirmed that there were indeed cases of being "hacked" by using AI to generate code.
Experts pointed out that such attacks may originate from malicious patterns learned by AI from phishing posts or unsafe content, and that current AI models have difficulty distinguishing whether there are backdoors in the code. The industry calls on users to be vigilant and avoid blindly trusting AI-generated code, and recommends that AI platforms strengthen content review mechanisms in the future to identify and alert potential security risks.