Chinese researchers linked to the People’s Liberation Army (PLA) have reportedly adapted Meta’s open-source large language model (LLM), Llama, to develop an AI tool potentially suited for military use, according to a report by Reuters and an analysis of related academic papers. This new tool, called “ChatBIT,” is based on an earlier version of Meta’s Llama 2 model. The researchers say it was created specifically to handle military dialogue and decision-making tasks, marking a new approach in China’s bid to apply AI technologies to defense.
The research, published in a June paper, details how six researchers from Chinese institutions, including two connected to the PLA’s Academy of Military Science (AMS), customized Meta’s Llama 2 13B model by incorporating additional parameters to meet military needs. These adjustments, the researchers claim, turned Llama into a tool that can gather and process intelligence efficiently and deliver reliable information for strategic decision-making in military operations. Although specifics of ChatBIT’s capabilities remain under wraps, it has reportedly been “optimized for dialogue and question-answering tasks in the military field,” outperforming some other language models and reaching nearly 90% of ChatGPT-4’s capability.
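The paper itself does not spell out the fine-tuning recipe, and the details are not public. For readers unfamiliar with how open-weight models are typically adapted to a domain, the sketch below illustrates one common approach: parameter-efficient LoRA fine-tuning of an open-weight Llama 2 13B checkpoint on a dialogue dataset, using the Hugging Face libraries. The model identifier, data file, prompt format, and hyperparameters are illustrative assumptions, not details drawn from the research described above.

```python
# Illustrative sketch only: LoRA fine-tuning of an open-weight Llama 2 13B
# checkpoint on a hypothetical dialogue dataset. Nothing here reflects the
# actual method, data, or settings used by the researchers in the article.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "meta-llama/Llama-2-13b-hf"  # open-weight base checkpoint (gated download)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # Llama 2 ships without a pad token

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")

# LoRA inserts small trainable matrices into the attention projections, so only
# a tiny fraction of the 13B parameters is updated during fine-tuning.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Hypothetical dataset: a JSONL file where each record is one question-answer
# pair, e.g. {"prompt": "...", "response": "..."}.
dataset = load_dataset("json", data_files="dialogues.jsonl", split="train")

def tokenize(example):
    text = f"### Question:\n{example['prompt']}\n### Answer:\n{example['response']}"
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="dialogue-finetune",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=3,
        learning_rate=2e-4,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The broader point is that this kind of adaptation requires only the publicly released model weights, commodity tooling, and a domain dataset, which is why open-weight releases are so difficult to fence off from downstream uses.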
Analysts suggest that ChatBIT is a groundbreaking effort by Chinese military experts to harness the potential of open-source large language models, especially those originating from U.S.-based companies like Meta. Sunny Cheung, an associate fellow at the Jamestown Foundation who specializes in dual-use technologies in China, commented that “this is the first evidence of PLA military experts systematically researching and leveraging open-source LLMs, particularly Meta’s, for military applications.”
This adaptation of Meta’s model underscores the ongoing debate about open-source AI’s implications for national security. Meta, which has a policy of releasing certain models for public use, places restrictions on their application, specifically prohibiting military, espionage, defense, nuclear, and other sensitive uses. The company also requires a special license for services with more than 700 million monthly users, intending to maintain some degree of oversight on high-impact applications. However, because Meta’s AI models are open-source, the company has few avenues to monitor or prevent specific uses in China or elsewhere.
In a statement to Reuters, Meta reiterated its stance, with Molly Montgomery, Meta’s Director of Public Policy, affirming that any PLA use of its models is “unauthorized and contrary to [Meta’s] acceptable use policy.” More broadly, a Meta spokesperson added that while open innovation is essential to U.S. competitiveness in AI, what foreign actors can do with an outdated, open-source model matters little compared with China’s own substantial AI investments. The spokesperson pointed to China’s trillion-dollar investment in AI as evidence of its position as a formidable player in the global race to develop advanced AI technologies.
The PLA’s research team includes prominent figures such as Geng Guotong and Li Weiwei from the AMS’s Military Science Information Research Center, along with collaborators from the Beijing Institute of Technology and Minzu University. These institutions are known for their contributions to defense technology and innovations that benefit China’s military ambitions. The June research paper outlines plans for ChatBIT beyond intelligence analysis, indicating that with further refinement, it could support strategic planning, simulation training, and command-level decision-making processes.
While the technical capabilities of ChatBIT have not been fully disclosed, the researchers noted that the model was trained on a relatively small dataset of 100,000 military dialogue records. Joelle Pineau, Vice President of AI Research at Meta, observed that this training volume is minimal compared with other large models, which are typically trained on trillions of tokens. Pineau questioned how operationally effective ChatBIT could be with such limited data, noting that the smaller scale may restrict its adaptability and versatility in complex military scenarios.
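A rough back-of-envelope calculation makes the scale gap concrete. The tokens-per-dialogue figure below is an assumption for illustration, not a number reported by the researchers; the pretraining figure is the roughly two trillion tokens Meta has publicly stated were used to train Llama 2.

```python
# Back-of-envelope scale comparison; tokens-per-dialogue is an assumed value.
TOKENS_PER_DIALOGUE = 1_000                # assumed average length of one dialogue
NUM_DIALOGUES = 100_000                    # figure reported in the June paper
PRETRAINING_TOKENS = 2_000_000_000_000     # ~2 trillion tokens for Llama 2 pretraining

fine_tune_tokens = TOKENS_PER_DIALOGUE * NUM_DIALOGUES
print(f"fine-tuning corpus: ~{fine_tune_tokens:,} tokens")                    # ~100,000,000
print(f"share of pretraining scale: {fine_tune_tokens / PRETRAINING_TOKENS:.4%}")  # ~0.005%
```

Even under generous assumptions, the fine-tuning corpus is several orders of magnitude smaller than the data behind the base model, which is the substance of Pineau’s skepticism.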
The United States has been paying close attention to these developments. As AI becomes a central technology in both civilian and military domains, American national security and technology experts are increasingly wary of the implications of open-source AI in adversarial hands. In October 2023, U.S. President Joe Biden signed an executive order aimed at balancing the need for AI innovation with national security concerns. The order acknowledges the transformative potential of AI but also the considerable risks, emphasizing the importance of safeguarding sensitive AI models and technologies.
In line with this executive order, the U.S. government recently announced that it is preparing new restrictions to limit American investment in specific high-risk AI and technology sectors in China that could potentially threaten national security. John Supple, a spokesperson for the U.S. Department of Defense, noted that open-source models present both “benefits and drawbacks,” underscoring the Pentagon’s commitment to closely monitor competitor advancements in AI.
ChatBIT’s emergence represents a notable shift in how China’s military is approaching AI development by integrating open-source, publicly available models from Western companies. It also highlights the complexity of open-source AI governance, where innovation goals collide with security concerns. As Chinese researchers advance ChatBIT for potential use in intelligence gathering, strategic planning, and operational decision-making, the lines between civilian and military AI applications are increasingly blurred.
The PLA’s adaptation of Meta’s Llama demonstrates the potency of open-source AI in defense contexts, raising questions about the extent to which U.S. companies can, or should, keep their models accessible to the public. With major investments in its own AI infrastructure, China is positioning itself to rival U.S. capabilities in artificial intelligence—a rivalry that extends beyond technological advances and into the geopolitical sphere. As global superpowers strive for AI dominance, the PLA’s latest efforts underscore the importance of proactive regulations and oversight in the development and dissemination of advanced AI models.