
Generative AI Poses National Security Threat, Researchers Warn

Rand researchers claim China is developing AI models that could help create more compelling propaganda narratives used by the CCP



The exponential advancement of technology often outpaces the regulatory and other frameworks meant to provide guardrails that keep new products safe and ethical.


Artificial intelligence (AI) is no exception.


As AI technology has developed rapidly over the past few years, dozens of websites and apps now allow users to enter a prompt and have AI generate anything from legal documents and computer code to images, music, and video content.


The benefits of AI include the automation of certain tasks, which gives companies the potential for quicker product development and higher employee productivity, along with lower labor costs and greater operational efficiency. But the risks associated with the technology are evolving just as quickly.


Researchers at the Rand Corporation are now warning that generative AI poses a significant national security threat because the technology could enable malign influence operations on social media.


In a new report entitled “The Rise of Generative AI and the Coming Era of Social Media Manipulation 3.0,” researchers argue that “malign actors at the nation-state level” (like the Chinese Communist Party or Russia) can use generative AI to run influence operations through social media manipulation.


A key potential use of AI platforms would be for governments to engage in “astroturfing,” which, as the report notes, is defined by the Technology and Social Change Project at Harvard as an “attempt to create the false perception of grassroots support for an issue by concealing [actor] identities and using other deceptive practices, like hiding the origins of information being disseminated or artificially inflating engagement metrics.”


The Rand team says that these types of campaigns are able to covertly shape political conversations, thus subverting the democratic process.


“We think many malign state actors (e.g., Russia and Iran) are likely to adopt generative AI for their malign social media manipulation efforts, and we believe that, as the technology becomes more mature, more ubiquitous, and easier to implement, other nations are likely to follow suit,” the report warns.


Earlier this year, the European Union (EU) pushed through draft legislation to regulate emerging AI technology. Under the proposal, companies that use generative AI tools would have to disclose any copyrighted material used to develop their systems. Officials began working on the measure two years ago, and it would impose separate legal obligations on governments and corporations depending on the level of risk.


Rand says that China currently has “at least 30 companies, universities, and other research institutions” developing generative AI models. The report adds that although China lags behind the U.S. in the capabilities of these systems, the platforms could be weaponized to help the CCP create more compelling propaganda narratives.


The report states that generative AI will very likely be part of next-generation Chinese military information warfare.


Rand researchers say that the U.S. government, along with the broader technology and policy community, should respond to this emerging threat proactively, “considering a variety of mitigations to lessen potential harm.”
