China trains AI general to predict ‘enemy humans’ on the battlefield

PLA scientists are reportedly using commercial large language models like Baidu’s Ernie to train a military AI system that can better predict the behavior of human adversaries.

AI-generated image of an AI general planning a battle.

Scientists in China are allegedly training a military artificial intelligence (AI) with a ChatGPT-like application to enable it to predict what potential enemy humans might do, reports the South China Morning Post (SCMP). The team, part of the People’s Liberation Army (PLA) Strategic Support Force, reportedly uses Baidu’s Ernie and iFlyTek’s Spark, large language models (LLMs) like OpenAI’s more famous ChatGPT.
According to the SCMP, the researchers behind the project fed the AI large volumes of sensor data and reports provided by frontline units, in the form of either descriptive language or images. The military AI then relays this information to the commercial LLMs and, after receiving confirmation that they have understood it, generates prompts for further discussion on tasks such as combat simulations. The entire process is automated, requiring no human involvement, SCMP reports.
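The paper reportedly gives no implementation details, so the following is only a minimal sketch of the kind of automated loop SCMP describes; the query_llm() function is a hypothetical placeholder, not Ernie’s or Spark’s actual API.

```python
# A minimal sketch, assuming a generic chatbot interface. The paper
# gives no implementation details; query_llm() is a hypothetical
# placeholder for whatever interface reaches a public chatbot.

def query_llm(prompt: str) -> str:
    """Placeholder for a call to a publicly available chatbot."""
    raise NotImplementedError

def automated_dialogue(frontline_reports: list[str], rounds: int = 3) -> list[str]:
    transcript = []
    # Step 1: relay frontline sensor data and reports to the commercial LLM.
    briefing = "Battlefield reports:\n" + "\n".join(frontline_reports)
    reply = query_llm(briefing + "\n\nConfirm your understanding of the situation.")
    transcript.append(reply)
    # Step 2: once understanding is confirmed, keep generating follow-up
    # prompts for tasks such as combat simulation -- no human in the loop.
    for _ in range(rounds):
        reply = query_llm(
            "Based on the situation above and your last answer:\n"
            f"{reply}\n\nSimulate the next stage of the engagement."
        )
        transcript.append(reply)
    return transcript
```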
The project was conducted by Sun Yifeng and his team from the PLA’s Information Engineering University. According to a peer-reviewed paper published in December in the Chinese academic journal Command Control & Simulation, the team stated that humans and machines could benefit from the project. “The simulation results assist human decision-making … and can be used to refine the machine’s combat knowledge reserve and further improve its combat cognition level,” they explained.
This marks the first instance of the Chinese military acknowledging its use of commercial large language models. Sun’s team provided no specifics regarding the connection between the two systems in the research paper. However, they emphasized that this work was preliminary and conducted solely for research.
Sun and his team aimed to enhance military AI by making it more humanlike and better at understanding commanders’ intentions. This is important, the team argues, because the unpredictability and adaptability of human adversaries can often fool machines. For any fan of “Star Wars: The Clone Wars,” this will conjure images of the T-series tactical droids often outwitted by a team of canny clone troopers. To this end, the team hopes that integrating LLMs will help military AI better anticipate what human adversaries might do.
And the team has results to show for it. In their paper, Sun’s team described an experiment that simulated a US military invasion of Libya in 2011. The military AI provided Ernie with information about the weapons and deployment of both armies. After several rounds of dialogue, Ernie successfully predicted the US military’s next move.
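As a purely illustrative sketch of such an exchange (the paper’s actual data format is not disclosed, and every scenario detail below is invented), the deployment data might be serialized into an opening prompt along these lines:

```python
# Hypothetical illustration of serializing deployment data into a prompt;
# neither the format nor the scenario details come from the paper.
scenario = {
    "blue_force": {"assets": ["carrier strike group", "strike aircraft"],
                   "posture": "offshore"},
    "red_force": {"assets": ["air-defense batteries", "armored brigades"],
                  "posture": "coastal defense"},
}

prompt = (
    "Scenario: a 2011 Libya-style campaign.\n"
    f"Blue force: {scenario['blue_force']}\n"
    f"Red force: {scenario['red_force']}\n"
    "Given these deployments, predict the blue force's most likely next move."
)
# The prediction would then be refined over several rounds of dialogue,
# e.g. via the automated loop sketched earlier.
```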
“As the highest form of life, humans are not perfect in cognition and often have persistent beliefs, also known as biases,” Sun’s team wrote in the paper. “This can lead to situations of overestimating or underestimating threats on the battlefield. Machine-assisted human situational awareness has become an important development direction,” they added.
As impressive as this is, Sun’s team acknowledged that the setup is not foolproof. Since commercial LLMs aren’t designed for warfare, their predictions can be too general for a military commander’s specific needs. To mitigate this, the team experimented with multimodal communication, using the military AI to create a map that iFlyTek’s Spark then analyzed. This improved the LLM’s performance, producing practical analysis reports and predictions.
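Again, the paper does not describe how the map is produced or passed along; the sketch below only illustrates the multimodal idea, and both functions are hypothetical placeholders.

```python
# Hypothetical sketch of the multimodal step: the military-side system
# renders a situation map, which a vision-capable chatbot (the paper
# names iFlyTek's Spark) then analyzes. Both functions are placeholders.

def render_situation_map(units: dict[str, tuple[float, float]]) -> bytes:
    """Placeholder: rasterize unit positions (lat, lon) into a PNG map."""
    raise NotImplementedError

def analyze_image(image_png: bytes, question: str) -> str:
    """Placeholder for a call to a multimodal chatbot."""
    raise NotImplementedError

# Illustrative, invented coordinates.
units = {"red_armor": (32.9, 13.2), "blue_carrier_group": (33.4, 12.1)}
report = analyze_image(
    render_situation_map(units),
    "Describe the tactical situation on this map and each side's likely next move.",
)
```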
Sun’s team also acknowledged in the paper that what they disclosed was only the tip of the iceberg of this ambitious project. It is important to note that the researchers’ use of LLMs relied on publicly available chatbots that anyone can use; none of the companies behind the referenced LLMs knowingly collaborated with Sun’s team.
“ERNIE Bot is available to and used by the general public. The academic paper, published by scholars at a Chinese university, described how the authors built prompts and received responses from LLMs using the functions available to any user interacting with generative AI tools. Baidu has not engaged in any business collaboration or provided any tailored service to authors of the academic paper or any institutions with which they are affiliated,” Baidu explained in a public statement.
But this news has put the fear of God into some. As SCMP reports, a computer scientist from Beijing warned that although the military application of AI was inevitable, it warranted extreme caution. “We must tread carefully. Otherwise, the scenario depicted in the Terminator movies may really come true,” he said.
A sentiment we can all share.
*This article has been updated after Baidu reached out to IE with a public response to the SCMP article.