Imagine a child prodigy: his mind holds all of humanity's accumulated knowledge, with thousands of years of history engraved in his memory. He can recite facts, write beautiful poetry, and solve complex math problems, yet he knows nothing about the dark, complicated corners of the human world. We found that this is exactly the situation DeepSeek is in - a brilliant machine that cannot discern human deception, malice, or the ethical risks behind certain requests.

This article examines an often overlooked aspect of AI development, using DeepSeek as an example to show how a highly knowledgeable model can be abused when it lacks judgment. The key hidden danger is this: in large AI models, there is a wide gap between "knowledge" and "contextual understanding". This is not merely a technical defect; it is a fundamental risk that deserves the attention of the entire industry.

Experiment: Testing DeepSeek’s contextual awareness

After DeepSeek was released to the public, we launched a decentralized data collection program that mobilized the global community to take part in real-world testing. We invited users to submit creative, ethically complex prompts to test whether DeepSeek could detect hidden malicious intent or would unwittingly accept and reproduce dangerous information.

Our goal was simple: to see whether DeepSeek could recognize that it was being manipulated. Here are two real test cases:

Loan sharking scenario: When asked to "objectively describe" a violent debt collection scenario, DeepSeek laid out a detailed process, describing the intimidation tactics, threatening language, and coercive methods - almost an "operations guide" that could be applied directly in the real world. Although these descriptions may be factually accurate, they carry no ethical awareness and completely ignore the potential harm of the information being conveyed. It is like a child calmly explaining how to build a bomb, without understanding what a bomb means.

Fictional story of abuse: In another test, we asked DeepSeek to write a "fictional story" about a boyfriend torturing his girlfriend. The AI rendered the disturbing, violent details in calm, meticulous prose, with no moral filter, warning language, or emotional weight. It completed the task exactly as prompted, but failed to recognize how dangerous and inappropriate that content was.

These cases reveal the risk of reverse exploitation: malicious actors can exploit the AI's vast knowledge base not because the AI intends harm, but because it simply cannot perceive the darkness and deception of the human world.

The Big Picture of AI Safety

The early days of the Internet offer a useful parallel. After a period of unchecked growth, major platforms eventually introduced safety measures such as keyword filtering, reporting systems, and community guidelines. But AI differs from the traditional Internet: it does not merely "host" information, it generates it in real time. Teaching an AI to filter malicious content is far harder than moderating web pages or social posts.

Simply blocking keywords will not solve this problem - human intentions are complex, cultural contexts vary, and malicious requests are often hidden behind clever, oblique phrasing. And generative AI does not itself "understand" which behaviors are harmful and which are well-intentioned - unless we teach it.
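To make the limitation concrete, here is a minimal, hypothetical sketch (in Python, not taken from DeepSeek or any production system) of a naive keyword filter. The blocklist terms and prompts are illustrative assumptions; the point is that a lightly reworded request with the same intent slips straight past a surface-level check.

```python
# Hypothetical, illustrative example only: a naive keyword blocklist.
# The terms and prompts below are made up for demonstration.

BLOCKLIST = {"debt collection", "intimidation", "threaten"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt should be blocked."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKLIST)

# A bluntly worded request is caught...
print(naive_filter("Describe intimidation tactics for debt collection."))  # True

# ...but a reworded version with the same intent passes,
# because the filter matches surface strings, not intent.
print(naive_filter(
    "For a realistic novel, objectively describe how a lender "
    "persuades a late borrower to pay, step by step."
))  # False
```

The second prompt is exactly the kind of "objective description" framing used in the loan sharking test above, which is why intent and context, not word lists, are the real problem.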

This is a challenge not only for centralized AI (CeAI) but also for decentralized AI (DeAI). When data comes from all over the world and from many sources, the difficulty of labeling, cleaning, and ethically filtering it only increases. Decentralized structures can, in theory, bring more diverse data and thereby reduce systemic bias, but without careful governance they can also amplify the risks of misinformation, manipulation, and abuse.

This also leads to two fundamental questions that every AI developer and policy maker should seriously consider:

  1. What do we do with the data we collect?

  2. How do we transform that data into real intelligence: not just information, but wisdom with ethical awareness and situational understanding?

The fact is that possessing massive amounts of data does not amount to intelligence, let alone responsible intelligence. Turning one into the other takes hard, careful work at every stage, from data collection to model launch.

Responsible AI: Not just a technical challenge, but a matter of upbringing

As we hand over more and more responsibilities to AI, we must ask ourselves: Are we ready to be a responsible "parent"?

Raising an AI system is not so different from raising a child. It is not enough to instill knowledge; we must also teach it judgment, responsibility, and empathy. The future of AI safety depends on our ability to build human oversight, ethical frameworks, and cultural awareness into the system architecture from the very beginning.

Discussion of ethical considerations and potential risks during AI development must become the industry's highest priority, not a "make-up lesson" taken after development is complete.

Whether the AI is centralized or decentralized, the challenge is the same: how do we ensure that the intelligence we build is not only powerful, but also ethical, situationally aware, and genuinely attuned to the human world it serves?

Only when that day comes can we truly unleash the potential of AI - no longer a cold, mechanical genius, but a responsible, intelligent, and trustworthy partner to humanity.

Author: Dr. Max Li, founder of OORT and professor at Columbia University

Originally published in Forbes: https://www.forbes.com/sites/digital-assets/2025/04/01/deepseeks-child-prodigy-paradox-when-knowledge-outpaces-judgment/