Top Chinese research institutions linked to the People's Liberation Army have used Meta's publicly available Llama model to develop an AI tool for potential military applications, according to three academic papers and analysts.
In a June paper reviewed by Reuters, six Chinese researchers from three institutions, including two under the People's Liberation Army's (PLA) leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta's Llama as a base for what they call "ChatBIT".
The researchers used an earlier Llama 13B large language model (LLM) from Meta (META.O), incorporating their own parameters to construct a military-focused AI tool to gather and process intelligence, and to offer accurate and reliable information for operational decision-making.
ChatBIT was fine-tuned and "optimised for dialogue and question-answering tasks in the military field", the paper said. It was found to outperform some other AI models whose capability was roughly 90% that of OpenAI's powerful ChatGPT-4. The researchers did not elaborate on how they defined performance or specify whether the model had been put into service.
"It's the first time there has been substantial evidence that PLA military experts in China have been systematically researching and trying to leverage the power of open-source LLMs, especially those of Meta, for military purposes," said Sunny Cheung, associate fellow at the Jamestown Foundation, who specialises in China's emerging and dual-use technologies, including AI.
Meta has embraced the open release of many of its AI models, including Llama. It imposes restrictions on their use, including a requirement that services with more than 700 million users seek a license from the company.
Its terms also prohibit use of the models for "military, warfare, nuclear industries or applications, espionage" and other activities subject to U.S. defence export controls, as well as for the development of weapons and content intended to "incite and promote violence".
However, because Meta's models are public, the company has limited ways of enforcing those provisions.
In response to Reuters questions, Meta cited its acceptable use policy and said it took measures to prevent misuse. Source: Reuters