A technological determinism at times plagues the study of terrorism and violent extremism. In summary, it characterizes violent extremists of any creed (the “red team”) as swift and skilled adopters of emerging technologies to optimize their organizational processes, scale their violence, and avoid countermeasures. Meanwhile, it portrays counterterrorism forces (the “blue team”) as overly bureaucratic, cumbersome, and slow to respond to changes in the operational environment. The conventional wisdom holds that nonstate actors will almost always outpace the security services and intelligence agencies seeking to curb their violent activity, often with catastrophic consequences.
Analysts have traditionally understood new or emerging technologies as an amplifier or force multiplier for terrorists, with metrics such as gains in operational planning, increased attack lethality, or improved volume and sophistication of propaganda. Evidence has often borne this out: Extremists built entire recruitment pipelines for years on social media platforms and chat applications before those platforms assembled content moderation teams to counter such illicit use.[1]
But this technological determinism has proven faulty too, specifically by presupposing that the new destructive tactics that an emerging technology unlocks are, by definition, paradigm-shattering or facilitate mass radicalization or violence. This may be only normal, considering that scholarship in the field of terrorism studies proliferated after the al-Qaeda terrorist attacks of September 11, 2001, turning the feats of aviation technology into a crude but spectacular weapon of mass destruction.
Twenty-five years after that devastating attack, terrorists continue to experiment with emerging technologies to enhance their operational capabilities. Artificial intelligence (AI),[2] the technology that dominates the headlines and seems poised to have a drastic impact on society writ large, is not just for drafting emails and vibe-coding. On the contrary, it is a novel yet dangerous tool at the disposal of terrorists, including Salafi-jihadists, who can weaponize it.
Since as early as 2018, fellow analysts and scholars of terrorism and technology have warned about the potential misuses of AI by nonstate actors.[3] Every wave of technology adoption has a dark, sordid underbelly, and it is possible to harness or repurpose all trappings of globalization for nefarious ends. AI is bound to be misused by the worst of humanity—our work has examined terrorists deploying AI in propaganda,[4] as well as chatbot-enabled radicalization of lone wolves.[5] Violent extremists have also used it for operational planning,[6] and it has the potential to augment various weapons systems[7] and to improve intelligence, surveillance, and reconnaissance (ISR) capabilities.
The highly decentralized nature of the two most dangerous global Salafi‑jihadist terrorist organizations, al‑Qaeda and the Islamic State (and their regional affiliates), together with legions of “inspired” attackers[8] who carry out violence in their name, makes technologies that reduce the friction of planning and preparing an attack especially useful. But current narratives of pandemonium or, on the flip side, naive predictions of AI as a tool usable only for indisputable good and ripe for “fine-tuning” into perfection, will not age well. Put simply, predictions by those wearing rose-colored glasses do not align with established knowledge about terrorist innovation.
This paper seeks to contribute to a grounded and sober understanding of the threat that AI poses in the hands of Salafi-jihadists, and especially the Islamic State, based on knowledge of their innovation patterns and adoption of technologies more broadly. It does so by assessing why terrorists innovate tactically, operationally, and strategically and how Salafi-jihadists have adopted, iterated on, and refined their use of other technologies. It then turns to AI adoption by the Islamic State and its supporters, examining what is known about their past adoption and what is plausible in the future based on their adoption patterns. The paper concludes with an overview of two issues that counterterrorism agencies should not overlook when considering the AI-enabled terrorism threat landscape: the potential for Black Swan events and the conundrum posed by open-source AI models. It finds that while AI and its various applications raise the severity of worst-case scenarios, they only moderately increase the severity of more likely scenarios, such as propaganda generation or operational planning. Even so, small incremental efficiencies can compound into large-scale effects over time.
Why Terrorists Innovate: Adapt or Die
Terrorist innovation is the process whereby a terrorist organization reformulates substantial components of its preexisting codes and norms.[9] This does not necessarily mean an escalation in violence, as many presume, and innovation need not translate into enhanced operational efficiency. It is merely a change in functioning. It can occur at the tactical level, altering the method and location of an attack, for example. But innovation can also take place at the operational and strategic levels.[10] Innovation at the operational level could take the form of shifts in a terrorist group’s structure to become more resistant to law-enforcement infiltration. It could also manifest in changes to propaganda production tools or financing methods as groups confront counterterrorism pressure. At the strategic level, innovation may assume a shifting approach to achieving the group’s goals or even a pivot in the organization’s overall objectives.
The debate over the drivers of terrorist innovation, technological adoption, and organizational learning remains unsettled. However, analysts have identified a broad set of endogenous and exogenous factors that influence terrorists’ adoption of new or emerging technologies.[11] Audrey Kurth Cronin has proposed a lethal empowerment theory to explain which new technologies are adopted by violent extremists.[12] Typically, these technologies are easy to access, relatively inexpensive, transportable, concealable, simple to use, and available commercially off the shelf. They are also part of a cluster of emerging technologies that reinforce each other. This framework can help explain why terrorists adopted certain technologies in specific plots.[13] In the Islamic State–inspired New Year’s Day attack in New Orleans in 2025, the terrorist used smart glasses to conduct reconnaissance of the French Quarter during the operational planning phase. He also used peer-to-peer rental services and leveraged an electric truck for the vehicle-ramming attack component of his overall terrorist act, perhaps looking for ways to reduce noise that would have given advance warning to pedestrians in the street.
Analysts have widely understood terrorists’ innovation, learning, or adaptation as a result of operating in an inherently hostile environment, where terrorist groups continuously seek ways to gain a competitive advantage, sparking malevolent creativity. As a Global Network on Extremism and Technology (GNET) report summarizes, “Terrorist groups who fail to innovate will either ‘be degraded to the point of irrelevance’ or fail to attract resources, recruits and supporters.”[14] Fundamentally, the question of terrorist innovation is one of survival underpinned by a straightforward cost-benefit calculation: Innovation is a strategic necessity, but it often carries inherent risk.
It is ideal to understand the adoption curve of technology by violent nonstate actors as a four-phase process.[15] Often, adoption is not immediately successful, but failure to use a new technology efficiently should be interpreted as a learning opportunity rather than a failure. Early adoption often involves flawed experimentation. A phase of iteration follows, typically centered on a combination of commercial improvements to the technology and tactical refinement, which then spurs a breakthrough and a sharp rise in effective use. Finally, this leads to competition, in which various “blue team” actors seek to curb the use of the technology by violent nonstate actors, leading to adaptation–counteradaptation dynamics.
In some ways, there is a parallel to the private sector and the wave of start-up companies that inevitably proliferate around new technologies and software development. Many adopt the mantra “Fail fast”—in other words, experiment rapidly to identify potential failures or weak points early and minimize wasted time and resources. But terrorists have different incentive structures than corporations. The latter focuses on profit, while the former are concerned with politics. For a terrorist group, innovation is about fine-tuning specific approaches to achieve a specific objective or improve a capability.
Myriad factors drive a terrorist group to innovate, as Assaf Moghadam has noted, and among these is opportunism: The emergence of a new technology can be the enabling factor in a terrorist group’s innovation.[16] Moreover, as Moghadam observed, technological innovation is more multidirectional and integrative, complementing the shift in recent years from hierarchical structures to more networked organizations, as well as the penchant among some groups to associate with terrorist entrepreneurs.[17] Michael Horowitz’s work on why some groups resort to suicide terrorism applies the concept of adoption capacity theory and suggests that, beyond ideology, a terrorist group’s internal capacity for innovation and external linkages to other groups, especially those also seeking to innovate technologically, will be determining factors.[18] Brian Jackson has emphasized the importance of complementing explicit knowledge with tacit knowledge for successful terrorist innovation.[19] An example of explicit knowledge could be a manual for making 3D-printed weapons, whereas tacit knowledge would be the actual experience, practice, and tradecraft of designing, printing, and testing these weapons, with a focus on incremental improvement through subsequent iterations. Michael Kenney’s research builds on Jackson’s and explores the differences between abstract technical knowledge and practical, experiential knowledge.[20] The latter, in Kenney’s view, is far more valuable and leads groups to prioritize “learning by doing.”
Salafi-Jihadists’ Adoption of Technology
To understand AI adoption by Salafi-jihadists, both exogenous factors and endogenous factors matter. Ideology, resources, self‑image, organizational structure, and leadership can all shape which kinds of innovation these groups view as feasible or legitimate. Exogenous factors, such as counterterrorism measures and state support or repression, can also create pressures that prompt groups to alter their behavior. The properties of a technology itself explain why and how adoption occurs,[21] but ideology, too, can encourage or constrain innovation. In a cross-ideological study of innovation by violent extremists, looking at (1) Salafi-jihadists and (2) racially and ethnically motivated violent extremists, researchers found that “Salafi‑jihadist convention states that technology, whoever develops it, is generally characterized by material neutrality.”[22] These two groups stand in contrast, as the second group views technology companies with suspicion and tends to favor grassroots, autonomous innovation. Don Rassler and Yannick Veilleux-Lepage, meanwhile, focus on the cycle of tech innovation, typically starting with properties of technology itself: Dual-use and democratized access lead to accessibility, which in turn feeds into diffusion (spread of tactics) and directionality (new threat variants).[23]
For Salafi-jihadists, the adoption of a technology often stems from a combination of the tool’s characteristics and the organization’s ideological pillars. Ideologically, it is most appropriate to understand innovation as something normatively embedded. Salafi-jihadists treat many cultural artifacts tied to non-Islamic worldviews as strictly forbidden, but treat certain technologies themselves as morally neutral. This effectively allows them to adopt technology developed by the “enemy” without ideological friction. Within both the Islamic State and al-Qaeda, official propaganda materials relay many opinions on technology that emanated from practical experimentation by fighters and supporters.[24] Innovation often emerges bottom-up; it begins with successful testing on the battlefield or by supporters, and only later do groups codify it through propaganda. Salafi-jihadist propaganda is thus a crucial linchpin in the innovation propaganda of such groups.
There are several illustrative examples of successful innovation over the past decade. The Islamic State went to great lengths in the mid-2010s to enhance its ability to construct various forms of vehicle-borne improvised explosive devices (VBIEDs) in Iraq and Syria, including by experimenting with other exploding vehicles such as scooters, tanks, armored personnel carriers, and bulldozers.[25] Islamic State members, through the trial and error of continued technical improvisation, produced increasingly lethal VBIEDs to deploy against their adversaries. However, toward the end of the Battle of Mosul in 2017, the quality of these weapons deteriorated and attack lethality ebbed, mainly because so many of their skilled engineers were being killed. Personnel turnover negatively affected institutional memory,[26] leading to the loss of both know-how and other forms of tacit knowledge related to the production of VBIEDs.
The Islamic State has also displayed a knack for innovation in the use of drones. In their study of nonstate violent drone use in the Middle East, Veilleux-Lepage and Emil Archambault outlined four specific ways the Islamic State was innovative in its approach to drones:
- Using drones as flying artillery for the purposes of explosive delivery (not as loitering munitions)
- Using drones for observation in combination with kinetic activities, such as artillery fire or VBIEDs
- Featuring drones as an integral part of the group’s propaganda
- Developing a drone program without state sponsorship, and in doing so, “displaying significant technical prowess in assembling weaponized drones”[27]
The Islamic State has thus demonstrated a remarkable ability to innovate using emerging technologies. This has led many scholars and analysts to believe the group will continue to take advantage of other technologies, including AI, which itself can serve as a force multiplier for drone attacks, media operations, and the development and deployment of various types of weapons.
Artificial Intelligence and the Islamic State
Early experimentation with generative AI by Islamic State supporters began in or around 2023. In our analysis of the sprawling Islamic State digital ecosystem, which comprises a collection of official media foundations, publications, and accounts as well as aligned media foundations and supporter networks, we first observed the use of generative AI to create propaganda imagery by the group’s supporters.[28] These supporters, active on both the “fringe” messaging applications and self-hosted forums of the Islamic State as well as on mainstream social media platforms, were the first to include AI-generated images in their messaging. Many of them have been increasingly young and have experimented with “Alt-Jihad” aesthetics that appropriate some of the aesthetics of extreme-right groups to generate appealing memes.[29] AI integration was thus a logical next step. Both individuals and networks of supporters sustain the Islamic State’s digital ecosystem, largely by creating appealing, resonant content that supports official narratives and messaging. The ecosystem’s decentralized nature appears to be the natural starting point for AI integration.
In August 2023, Qimam Electronic Foundation, a well-established but unofficial pro–Islamic State media outlet focused on technology and operational security (OPSEC), published an article on how to use ChatGPT safely. It did not comment on the permissibility of the technology—which would also not fit its role as a supporting rather than official media outlet. However, it did provide basic OPSEC pointers in the article, which was published on one of the many pastebin sites Islamic State supporters use to safeguard their content from takedowns by content moderators.[30]
A year later, as the quality of generative AI outputs improved and more AI tools for multimedia creation became available, online Islamic State supporters discussed the permissibility of using the technology. In the aftermath of the Islamic State’s deadly attack on Crocus City Hall in Moscow on March 22, 2024,[31] a supporter circulated an AI-generated video news bulletin about the attack on the group’s self-hosted Rocket.Chat communications platform. This sparked a debate within pro–Islamic State networks about whether using AI tools, a set of technologies that developers had trained on data belonging to the “disbeliever,” was justifiable. It also led Islamic State supporters to blur the face of the AI-generated newscaster in subsequent propaganda videos claiming different attacks, apparently to avoid the prohibition against creating animate beings that resemble the creations of Allah.[32]
In May 2024, an Islamic State Khorasan Province (ISKP) supporter published a similar series of news-bulletin-like AI-generated videos to claim attacks in Afghanistan. Our open-source intelligence (OSINT) research shows that supporters also experimented with text-to-speech tools and used chatbots to translate text, though they often did not explicitly acknowledge it. Accordingly, generative AI’s first and most natural adoption by IS supporters has been for generating unofficial supporter-facing propaganda, including anasheed (“chants”), compelling imagery, and news bulletins, as well as imagery to accompany text statements. Official Islamic State media institutions also appear to have used generative AI to create imagery and for translation purposes.[33]
Doctrine: AI as fard al-’ayn?
ISKP, the branch of the Islamic State that metastasized in Afghanistan and Pakistan in 2015 and has since become one of the most active provinces in terms of external operations, became the first branch to provide guidance about AI usage to its supporters. In the June 2025 edition of its flagship magazine, Voice of Khorasan, Al Azaim Media Foundation dedicated an entire spread to OPSEC, the use of generative AI, and the religious obligation to use AI as a Muslim. It is not entirely surprising that this specific propaganda magazine is the first to address the subject. It is known as a visually compelling, “young,” and very readable monthly publication,[34] especially in contrast to al-Naba, the sometimes hard-to-parse, formulaic magazine that the Islamic State’s central media apparatus Al publishes weekly. ISKP’s magazine touches on various international developments and is aesthetically pleasing: Striking, modern collages accompany articles on global politics, women’s affairs, and religious matters. The magazine’s Light of Darkness bulletin contains ISKP’s advice on technology, digital tools, and OPSEC.
In June 2025, this bulletin described AI as fard al-’ayn (فرض العين ), a personal duty incumbent upon each believer in Islam. Not long after Al Azaim Media Foundation distributed the issue across its channels, it issued a retraction along with a short statement that the magazine had designated AI as fard al-’ayn mistakenly (figure 1). Some researchers have attributed the retraction to ridicule from other jihadist groups.[35] The rest of the article, however, remained published. The next Voice of Khorasan issue likewise dedicated a Light of Darkness bulletin entirely to AI.
Figure 1. Retraction Notice of the “Fard al-Ayn” Designation of AI in Voice of Khorasan
Both bulletins indicate a shallow understanding of how generative AI works and more sophisticated methods of deploying it, though they do demonstrate some awareness of the security and privacy concerns surrounding its use. This technical naivete and amateur advice would appear to make terrorist use of AI a non-issue if the articles weren’t so insistent on its adoption— an imperative they ground in religious justification.
The first article opens with a reference to those “martyred” when Israel used AI-guided weapons in Gaza, then launches into a short reflection on the Quran’s guidance regarding thoughtful engagement with information of suspicious origin. It concludes that Muslims should use AI responsibly. While the rest of the article focuses primarily on different cloud-based AI chatbots, it reveals an understanding of AI’s importance in weapons systems, economic affairs, and geopolitics. A statement that is, interestingly, a 100% AI-generated text according to the detection tool Pangram summarizes the push for AI adoption as follows:
AI is no longer optional, it’s your shield and compass in a digital world wired with hidden threats. From hackers and spyware to automated war tools and surveillance systems embedded in apps and networks, the dangers are evolving rapidly. With AI, you can detect intrusions, protect your identity, and decode digital traps. It also grants clear access to real knowledge on warfare, education, and global systems, free from bias and manipulation. Without AI literacy, you’re exposed and misled. With it, you’re empowered, informed, and secure. In an AI-driven age, understanding it isn’t a choice, it’s a defense.
The author weaves various Quranic verses throughout the piece to address privacy and security issues related to AI chatbots (figure 2). They also tie warnings about overreliance on or careless use of the technology to religious doctrine: “As Muslims, we are commanded not just to seek beneficial knowledge, but also to avoid becoming tools of zulm (oppression), fitnah (chaos), or ghaflah (heedlessness).” The article appears to conclude with a cautious endorsement of Brave Leo AI as the most secure option. Brave, the company primarily known for its privacy-focused browser, maintains that its chatbot does not collect personal identifiers such as IP addresses or any other personal data.[36]
Figure 2. Excerpts from the ISKP Bulletins on AI in Voice of Khorasan, 2025
The second article of the edition provides a list of ways to operationalize AI chatbots. The entire section reads as humdrum and unthreatening with a distinct AI-generated flavor. Examples of permissible AI use read as banal, including one incredibly suggesting its use to generate halal business ideas, such as a modest fashion business. This stands in stark contrast to other articles in the same issue, one of which retells the “memories of a mujahid from the bloody Battle of Mosul” in gory detail. It also refers to AI’s permissible uses for media campaigns and translation of Islamic State content but does not explore them in depth.
While these short bulletins confirm a top-down insistence on AI adoption among Islamic State members and supporters—though only from ISKP at this moment—the magazine does not discuss the use of AI for operational plotting and execution of terrorist attacks. This is where the dynamic identified in the Islamic State’s previous adoption of technologies becomes especially significant: In terrorist innovation, it is often supporters who experiment with new technologies first, and legitimization by the Islamic State follows only later through official propaganda outlets. For this reason, conversations among Islamic State supporters about how they are actually exploiting AI models may offer some of the most valuable intelligence for assessing the threat landscape ahead of AI-enabled terrorist attacks.
As this section demonstrates, AI-assisted propaganda generation and translation mark an interesting technical evolution in content production. However, there is little evidence so far that these activities have produced demonstrable increases in audience size, recruitment success, or pathways to violence. Nonetheless, even marginal improvements may have lethal downstream consequences. If modestly greater content appeal and more translations of publications garner only a handful of new or newly motivated supporters, the downstream effect can still be significant.
Tactical Adoption: Operational Planning and Weapons Systems
The most worrisome signal emerging from open-source analysis of discussions among Islamic State supporters about AI is its apparent role in lowering the barriers to operations that would normally require significant technical expertise and resources. Supporters have discussed the use of generative AI for a range of different operational planning purposes.[37] One discussion suggests that AI chatbots can generate the code necessary to program 3D-printed drones to deliver a payload in an attack. Critically, the discussion indicates that participants were suggesting the concept of AI-assisted code generation rather than sharing outputs directly. This illustrates that, at present, serious operational ideation is more common in these chatrooms than demonstrated technical execution. As with other technologies, including drones and social media platforms, early experimentation may very well lead to iteration and continued practice, resulting in a wave of AI-enabled terrorist attacks that we will further examine in our forecast.
We cannot determine whether supporters successfully generated or shared any such code through more restricted channels. Other discussions we observed on the Islamic State’s Rocket.Chat server indicates that users are actively experimenting with different chatbots for guidance on creating explosives or generating code for cyberattacks in the name of the Islamic State (figure 3). While the credibility of these threats—and whether the user actually successfully leveraged AI to attempt such operational planning—is unclear, the intent is evident and worrisome for what it portends.
Figure 3. Sample of Discussions About AI Use in an Operational Context
Piecing together the scale of actual AI-enabled plots and operations is an inherently unsatisfactory endeavor: Many foiled terrorist plots never receive public discussion, and the details of investigations into successful plots are rarely fully available to researchers. Nonetheless, there have been some initial signals that Islamic State supporters use AI chatbots for operational planning. In April 2025, authorities in Vienna, Austria, arrested an 18-year-old inspired by the Islamic State for planning to target the Israeli Embassy as well as a Shia mosque. However, investigators found no imminent plot, only ChatGPT chatlogs riddled with his fantasies about violent attacks as well as queries about the production and storage of explosives.[38] Open-source reporting has not clarified whether and how the teenager was able to circumvent the AI model’s safety alignment. Reporting has also shown that extremists of other convictions (from efilist[39] to neo-Nazi) have successfully used AI chatbots for operational planning.[40]
The small number of documented AI-enabled terrorist plots in which terrorists consulted AI chatbots for planning purposes reflects the limits of our visibility into such plots, but not the absence of such plots. Moreover, AI as a technology is in its nascent stage; its proficient use by terrorists is likely to continue improving over time. Other signals also point to AI’s misuse by violent extremists. The issue of chatbot exploitation has become significant enough that OpenAI has announced it will now embed a feature in its product to redirect individuals showing signs of radicalization to appropriate resources.[41] In April 2026, the start-up Throughline announced it would introduce a feature for OpenAI that offers human- and chatbot-based support to individuals who are demonstrating signs of radicalization. Major AI companies had previously subcontracted Throughline to redirect users to crisis support when they were at risk of self-harm, domestic violence, or an eating disorder. The 2026 Tumbler Ridge mass school shooting case,[42] in which OpenAI failed to appropriately act on the shooter’s flagged chat logs, may have catalyzed this response.[43] But the underlying impetus likely extends to misuse by all kinds of extremists, including Salafi-jihadists.
The Risks of AI-Enabled Cyber-Physical Systems
The primary uses of AI have been propaganda generation and radicalization efforts. The few discussions we were able to find of Islamic State support for AI use in operational planning suggest supporters typically use it very similarly to non-malicious applications: breaking down complex material and existing knowledge into practical, digestible steps.
However, it is important to look beyond generative tools to understand the full threat landscape AI technology creates, including the specific threats posed by AI-enabled cyber-physical systems. Agentic AI consists of systems that can execute actions online or in the physical world with limited or no ongoing user input. Cyber-physical systems are systems in the physical world in which embedded computers and networks monitor and control physical processes. We can distinguish two categories of AI-enabled or agentic cyber-physical systems.
First, and of most immediate concern, are AI-enabled cyber-physical systems that are deemed dual-use, such as self-driving cars and drones. Terrorist use of drones has a long history,[44] and the Russo-Ukrainian war—marked by a boom in AI-enabled drones—is likely to further shape drone tactics by the Islamic State and its supporters. Analysts have long explored terrorists’ potential use of self-driving cars to deliver explosives, but it has not yet occurred.[45] The second category consists of autonomous weapons, such as robotic systems that can select and attack targets without human intervention. Many of these weapons are “closed” and not commercially available. Still, advancements in AI and the recruitment of technical talent mean that terrorists may assemble their own, as they have DIY-ed various drone systems in the past.[46]
Some signals have emerged of potential exploitation of AI in drone attacks by Jama’at Nusrat al-Islam wal-Muslimin (JNIM), al-Qaeda’s affiliate in the Sahel region of Africa. In their analysis of JNIM’s drone usage, Niccola Milnes and Rida Lyammouri posited that some of the latest advancements in JNIM’s drone capabilities, including ISR-guided targeting and geofencing bypass, suggest the group may already be using open-source AI models to support drone modifications. [47] These tools can enable firmware overrides, autonomous flight path optimization, and comms-free execution. More evidence is necessary to evaluate exactly whether and how terrorists are modifying drones through AI. What is certainly true is that code repositories like GitHub feature an abundance of accessible software related to “drone mod scripts, vision models, [and] language tools.”[48] Meanwhile, the lack of clear evidence of AI-enabled attacks and/or adoption of fully autonomous weapons and/or drones by the Islamic State at present brings us to the crux of the matter: distinguishing between what is possible and what is plausible.
Future Forecasting: Distinguishing the Plausible from the Possible
Our forecasting of AI use by Salafi-jihadists, particularly the Islamic State, is based on our empirical observations of how the group has adopted it so far: First, it is clear that there is organizational endorsement of the use of the technology in specific ways, as Voice of Khorasan has highlighted. Second, there is an appetite for continued experimentation among supporters, which tends to drive innovation within the Islamic State and is subsequently reinforced from the top down. Third, AI adoption so far follows an efficiency logic: removing hurdles (e.g., slow propaganda translation and complex DIY explosives manuals) to improve productivity. We also derive the forecast from the broader AI technological trajectory in past years, from early, standalone generative tools in chatbot format toward more integrated, agentic systems that can remember prior context and execute multi‑step tasks with limited human supervision or input.
The following represents our forecast of the ways in which the Islamic State (and possibly other Salafi-jihadists not affiliated with the group) may seek to use AI in the coming years. It includes developments we believe will materialize within the next 12 months as well as in the longer term.
Propaganda and Recruitment
Mass and Scale. It is highly likely that the combination of improved agentic AI capabilities and already robust generative AI tools will enable propaganda efforts at scale by the Islamic State or its supporters through vast, efficiently spun-up bot networks that promote AI-generated content. As Adam Hadley, the founder of Tech Against Terrorism, stated in July 2025, “The use of agentic AI and circumvention techniques to create tens of thousands of accounts online and create terrorist content with abundance is not so far away.”[49] Social media platforms have become adept at identifying and attributing bot networks, but state and nonstate actors continue to discover loopholes to exploit for their propaganda campaigns. As other analysts have pointed out, “Slick, AI-assisted media output may have an additional pull towards the younger generation.”[50] ISKP has, in the past, hosted classes for propagandists on content design;[51] it is highly likely that AI will feature in upcoming iterations of such workshops.
Targeting and Microtargeting. In the near term, Salafi-jihadists will likely use AI tools for more tailored messaging campaigns, meaning the adaptation of content and narratives to specific audiences. In the longer term, the Islamic State will likely experiment with microtargeting, a strategy that uses personal data to segment audiences into extremely small groups and deliver highly relevant messages, similar to the approaches that Madison Avenue advertising firms pioneered. Multiple studies have demonstrated that AI-enabled microtargeting for political purposes is both effective and scalable.[52] The 2018 multi-institutional report The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation has already articulated the theory that terrorists would exploit AI in this manner. It warns that groups can use AI to target people with “precisely the right message at precisely the right time in order to maximize persuasive potential.”[53]
In our assessment, it is highly likely that the Islamic State and its supporters will engage in microtargeting through AI tools, including by identifying profiles of individuals who may be vulnerable to recruitment or coercible into conducting an attack. This fits a larger, well-established pattern in Islamic State propaganda, which, as al-Qaeda has done for decades, tailors narratives to an audience’s specific local grievances.[54] These groups have excelled at grievance laundering, connecting parochial (often legitimate) grievances to a global propaganda campaign intended to incite and radicalize jihadist followers and supporters around the world.
Specialized Bots. The Islamic State has a long history of using Telegram bots for its operations. The most comprehensive study of this phenomenon concludes that “for the Islamic State—similar to many other violent extremist groups—bots are being used to lubricate and augment influence activities, including facilitating content amplification and community cultivation efforts. They are standing in for official Islamic State operatives and advocates, connecting people with the movement based on common behaviors, shared interests, and/or ideological proximity while minimizing risk for the broader organization.”[55]
While many of these bots are rule-based rather than AI- or machine-learning-based (it is possible to build both through Telegram’s interface), the Islamic State’s use of such bots shows an appetite for efficiency and scaling. It is therefore plausible—and increasingly likely as tools mature—that the Islamic State or its supporters will deploy more capable bots to coax users toward the group’s ideology or to provide “vetted” guidance on behalf of the group, whether regarding operations or doctrine. Such bots may continue to be hosted on messaging applications like Telegram but could also be embedded in other platforms and services.
Operational Planning and Attack
Interactive DIY. It is almost certain that more Islamic State–inspired attackers and would-be attackers will use bots not affiliated with the Islamic State to receive hyper-personalized advice on attack planning or attack tactics, techniques, and procedures (TTPs). This is the logical progression of a terrorist organization that has sought to decrease friction in committing attacks on its behalf through the production and dissemination of DIY guides. It is increasingly likely that supporters will look for more personalized guidance. This may include conversations with chatbots about which weapons to use and how (e.g., TATP synthesis[56]) as well as broader tactics such as target selection, weather conditions, synthesis of open-source information about a target’s physical security, and anything relevant to reducing uncertainty. The role of “virtual planners”[57] may increasingly fall on such chatbots.[58]
To assess the likelihood of this, an important factor will be whether Islamic State supporters will find functional but private chatbots to their liking and adopt them at scale. In The Terrorists’ Dilemma, Jacob Shapiro lays out an essential trade-off for terrorists: Time invested in OPSEC to stay alive means less time for planning and plotting attacks. This has a host of downstream trade-offs: Communication is necessary for efficient planning and organization but also gives law enforcement more signals intelligence (SIGINT) cues to dismantle the organization. Chatbot adoption for advice on attack TTPs, depending on their calibration, may very well minimize this trade-off for terrorist groups. As terrorists outsource more mundane tasks to AI—much like the modern office worker—it opens up bandwidth for them to focus their energy on planning attacks, a trend that could lead to an uptick in successful terror plots.
Weapons Systems. At least some of the drone and counter-drone innovations emerging from the war in Ukraine (e.g., techniques for operating small drones under heavy jamming) will likely diffuse to Islamic State provinces such as those in Somalia, the Sahel, and Afghanistan. This does not imply literal replication: Islamic State cells will not achieve the integrated, costly capabilities that Ukraine has fielded, such as AI-augmented drones.[59] However, they may adapt selected open-source tools, including AI-assisted or vision-based navigation techniques, to modify commercial drones so that they can at least partially operate in areas where GPS jamming degrades traditional remote navigation. A combination of “vibe-coding” and tech-savvy recruits can make that a reality.[60] However, cost-benefit calculations will determine whether this becomes a pattern of use. Terrorists may deem simple kamikaze drones more efficient.
In addition, as the very purpose of terrorism is to create an outsized impact that includes a deep psychological impact, it is important to consider the potential propaganda value of some AI-enabled weapons systems. A drone attack using DIY methods and open-source AI tools to wreak havoc may create spectacular propaganda value. First-person view (FPV) drones may prove especially appealing to record graphic aerial propaganda footage—the original purpose of the Islamic State’s drone usage.[61]
Cross-Functional
Playing Parasite on Third-Party Infrastructure and Data Poisoning. Beyond the direct use of AI tools by Salafi-jihadists, there is a distinct possibility that, with improved technical skill, some Islamic State–inspired individuals will seek to exploit AI systems that are embedded in various civilian and military infrastructures. This could manifest as the manipulation of AI-enabled systems with normally benign purposes. To some extent, this is not new. The Islamic State and its supporters have notoriously used the algorithms (i.e., recommender systems) of social media platforms to amplify their content.[62] Researchers conducting analysis of the Islamic State’s propaganda system often use an AI-enabled snowballing method to identify more accounts—once a user likes a couple of Islamic State posts, the algorithm feeds them more. In the future, this could grow more complex, with data poisoning or prompt injections used to manipulate AI-assisted platforms for various civilian, military, or private-sector purposes.[63]
AI-Enabled Financial Innovation. Some of the most innovative practices in the Salafi-jihadist ecosystem have occurred in the arena of terrorist financing. While low-tech practices have often persisted, including reliance on Hawala and cash couriers to move money, certain Islamic State provinces have adopted cryptocurrencies to move money.[64] AI may aid fundraising narratives, but it will likely also facilitate low-level cybercrime that could support terrorists’ fundraising or laundering operations.
In sum, the Islamic State and its supporters could use a host of AI capabilities, but not all scenarios are equally plausible. Cognizant that previous threat assessments, e.g., those about augmented and virtual reality,[65] failed to account for the Islamic State’s tech adoption patterns, we have limited ourselves in this article to considering the most likely AI-enabled operations.
Two further issues related to AI and the terrorist threat landscape deserve further attention: the specific threats that open-source, locally run AI models pose and the inherently unpredictable occurrence of a Black Swan.
The Open-Source Model Conundrum
One of the most worrisome but still under-analyzed issues related to the use of AI by violent extremists is the exploitation of locally runnable open-source or open-weight AI models, as opposed to purely cloud-based models. By open-weight models, this section refers to AI models whose weights (i.e., the parameters that determine how they process inputs) are downloadable and modifiable, often alongside open-source or partially open-source code. Unlike cloud-hosted systems, locally run open-weight models can typically be used without rate limits.
Much attention has focused on how malicious actors can circumvent the safety alignment of cloud-hosted proprietary models like ChatGPT via “jailbreaking” to receive answers to questions the platform otherwise refuses to answer.[66] However, from the perspective of terrorist exploitation, locally executable open-weight/open-source models are in some ways more concerning. Proponents of open-source models argue that their approach enables public scrutiny, helping identify potential biases, vulnerabilities, and ethical concerns while ensuring the responsible development of AI models. This is certainly true. However, the same openness can facilitate misuse by malicious actors and make it harder for providers and law enforcement to detect such misuse than they could in centrally hosted, closed models. This paradox has clear implications for counterterrorism.
Unlike with proprietary, cloud-based models (e.g., ChatGPT, Gemini, Claude), users can download, modify, and run open-source and open-weight models (e.g., Qwen or Llama) entirely on local hardware or, if necessary, intermediaries outside the developer’s control can host them. This offers two benefits for malicious actors:
- Sophisticated users can permanently remove safety measures through techniques like “abliteration,” which consists of modifying the model’s weights to eliminate refusal behaviors entirely.[67]
- Users could plot an attack with negligible risk of “leaks” as queries and outputs no longer transit on the provider’s infrastructure.[68]
Even non-technical people are increasingly able to customize these models (by, for example, adjusting the model weights). Often these locally running open-source models are significantly less powerful than cutting-edge proprietary models like Claude or ChatGPT. Nonetheless, this is not of concern to terrorists. As Brian Fishman has astutely observed, “The willingness to accept imprecise targeting will also unlock non-state actors to use less-sophisticated open models. These tools are unlikely to achieve the precision and capability of cutting-edge ‘powerful’ models, but they are sufficient for a wide range of discrete tasks and can run locally on relatively simple autonomous or semi-autonomous systems.”[69]
Islamic State supporters have already begun experimenting with open-source AI models. Analysis of mentions of open-source models in the Islamic State’s main Rocket.Chat server as of April 2026 shows there is already some awareness of the unique value of these models. As one user refers to locally run chatbots, “You can ask it anything without worrying that it will send your questions to eg. OpenAI or other companies that may report you to the feds.” He explicitly notes that certain models are fine-tuned not to answer “any questions about bombs,” which provides some insight into the types of questions aspiring jihadists are seeking answers to. Another user appeared to endorse the use of locally run models for privacy and security, stating, “I used it to generate code and it’s flawless.”
The broader ecosystem of open-source robotics and computer vision projects also lowers the barrier for exploitation by violent extremists. For instance, widely used autopilot stacks provide an open‑source flight‑control system for drones and other unmanned aerial vehicles, supporting autonomous operation for people with moderate technical skill and motivation. In parallel, numerous open‑source computer vision projects explicitly focus on detecting and tracking objects in aerial imagery. While developers design these systems predominantly for research and other benign or positive applications (e.g., search‑and‑rescue drones or traffic monitoring), they are inherently dual‑use and accessible to malicious users.
On Black Swans and Gray Swans
There is a spectrum that analysts need to acknowledge in threat forecasting. Beyond the known AI-enabled terrorist abuses and likely future use cases that we have already examined, there is a risk of more extreme but still foreseeable Gray Swan scenarios (lower likelihood, higher impact), and true Black Swans, i.e., unpredictable events with catastrophic effects, that would fall outside current models altogether.
The mathematical statistician Nassim Nicholas Taleb introduced the Black Swan theory in his seminal 2007 book.[70] He lays out three defining characteristics of Black Swan: “First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme ‘impact’. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.”
The 9/11 terror attacks are the archetypal example of a Black Swan terrorist attack. Following that devastating attack, the United States and its allies constructed a worldwide counterterrorism apparatus to mitigate the likelihood of further spectacular attacks. The record has been impressive in that regard. While there have been jihadist and jihadist-inspired attacks in the West, there has not been an attack anywhere near the scale and magnitude of 9/11 in the past two and a half decades. However, if some of the predictions about AI becoming a truly revolutionary technology come true, then AI could theoretically open up an entire new portfolio of potential Black Swan scenarios.
Black Swans are by their very nature not meant to be predicted—they are outliers to a relative model, and their probability is noncomputable with standard statistical tools. If it can be forecast, it stops being a Black Swan. Applying Taleb’s definition literally, a Black Swan would constitute an AI-enabled terrorist capability that has an immense impact, lies outside counterterrorist models of prediction and rigorous red-teaming and forecasting, and then looks “obvious” only in hindsight. Almost by definition, it would not have been mentioned in our forecast or any other deep analysis[71] of the potential adoption of AI by Salafi-jihadist terrorists.
Gray Swans, likewise dangerous but within the realm of predictability, should also appear in any piece on threat forecasting of AI adoption by terrorists. While these events would be shocking, they would not be conceptually earth-shattering. This includes scenarios such as an AI-optimized mass-casualty chemical, biological, radiological, nuclear, or electromagnetic attack. This would also be the case if, for example, an Islamic State cell was suddenly able to launch a “swarm shock,” a drone swarm consisting of dozens or hundreds of semiautonomous FPV attack drones launched against a target, or if Islamic State supporters launched an AI-assisted cyber-physical attack. Such scenarios fall squarely under the Gray Swan label and, as terrorists intend, can have an enormous psychological impact on target populations.
There are a host of uncertainties about how the terrorist threat landscape will continue to evolve. The only certainty is that terrorists will continue seeking new and innovative methods to improve the lethality of their attacks. As such, any technology—including AI—that can increase the chances of perpetrating a mass casualty incident will be one that terrorists experiment with and seek to use as a force multiplier in their attacks and operations.
Conclusion
Revisiting the technological determinism hypothesis that has long shaped debates around terrorist innovation, this paper finds that AI—at this nascent stage—neither confirms nor overturns it. A preliminary investigation into the Islamic State’s digital ecosystem indicates that experimentation has begun, particularly in the areas of propaganda production, translation, and recruitment. Additionally, the series of terrorist plots that have involved AI in operational planning, one of which was inspired by the Islamic State, shows the appeal of a tool that provides precise and tailored information.
At the same time, AI has functioned less as a paradigm-shifting weapon to date than as an extension of earlier practices: accelerating content production, lowering friction in planning, and marginally optimizing existing workflows rather than fundamentally transforming operational logic. AI has also begun to serve as part of a larger trend toward automation, outsourcing, and the minimization of physical and direct contact. As Telegram bots have stood in for Islamic State operatives and virtual planners for physical networks, AI may eventually overtake the virtual planner model. Incremental adoption does not mean less impact, however. Terrorists, including Salafi-jihadists, gradually adopted internet usage, beginning with static websites that displayed ideological messaging and news updates. The internet is now the backbone of much terrorist recruitment and functioning.
So far, AI use within the Islamic State ecosystem appears to cluster around low-hanging applications: generative propaganda imagery and anasheed, automated translation, basic operational ideation, and exploratory discussions about coding, drones, and cyber activity. There is not yet any proof of whether the speculative uses of AI that Islamic State supporters mention in online forums have materialized in actual attacks. More complex AI-use scenarios—e.g., high-end cyber-physical attacks or AI-directed mass-casualty plots—remain largely unrealized, but analysts and policymakers should consider them Gray Swans.
The open-source AI conundrum is particularly pressing: The same decentralization and transparency that fuel innovation across research communities and provide global access to a revolutionary technology also enable violent nonstate actors to obtain AI models that they can run locally and strip of their built-in safety measures with no concerns about information leaks.
Taken together, these dynamics demand analytical humility. It is wise to place AI-enabled terrorism in the longer historical pattern of terrorist innovation characterized by gradual appropriation, DIY-ification, uneven sophistication, and the ever-present possibility of catastrophic surprise.
In one of the earliest articles on the topic of AI and terrorism, Daveed Gartenstein-Ross pointed out, “Many analysts—and I fell prey to this error—brushed aside early concerns about the global diffusion of drone technology. The reason? We imagined that terrorists would use drones as we did and believed that superior American airpower would blast theirs from the sky. But instead of trying to replicate the Predator, the Islamic State and other militant groups cleverly adapted smaller drones to their purposes.”[72]
To prevent strategic surprise in the form of an AI-generated terrorist attack, the international counterterrorism community should resist mirror‑imaging. Instead they should focus on cultivating resilience by stress‑testing systems through intense red‑teaming that explores a wide realm of unlikely but plausible attack pathways. Ultimately, it will be necessary to implement concrete measures to limit exposure to worst‑case failures.
Endnotes
- Daveed Gartenstein-Ross and Madeleine Blackman, “ISIL’s Virtual Planners: A Critical Terrorist Innovation,” War on the Rocks, January 4, 2017, https://warontherocks.com/2017/01/isils-virtual-planners-a-critical-terrorist-innovation. ↑
- Artificial intelligence (AI) refers to machine-based systems (e.g., software, a cyber-physical system) that perceive their environment through inputs and infer, from those inputs and the explicit objectives and implicit values embedded by their developers and trainers, how best to generate outputs such as predictions, content, recommendations, decisions, or physical actions. AI systems differ in their degree of autonomy; some operate reactively, responding to user inputs (e.g., “generate a line-by-line translation of this speech to Urdu”), while others reason and act proactively (e.g., an agent that finds accounts online of struggling teenagers and, based on the specific characteristics of that profile, develops a strategy to coax them into visiting a terrorist forum). ↑
- Daveed Gartenstein-Ross, “Terrorists Are Going to Use Artificial Intelligence,” Defense One, May 3, 2018, https://www.defenseone.com/ideas/2018/05/terrorists-are-going-use-artificial-intelligence/147944. ↑
- The Soufan Center, “Terrorist Groups Looking to AI to Enhance Propaganda and Recruitment Efforts,” IntelBrief, October 3, 2024, https://thesoufancenter.org/intelbrief-2024-october-3. ↑
- Priyank Mathur, Clara Broekaert, and Colin P. Clarke, “The Radicalization (and Counter-Radicalization) Potential of Artificial Intelligence,” International Centre for Counter-Terrorism, May 1, 2024, https://icct.nl/publication/radicalization-and-counter-radicalization-potential-artificial-intelligence. ↑
- Clara Broekaert and Lucas Webber, “AI Use in Terrorist Plots and Attacks Surges in 2025,” Militant Wire, December 24, 2025, https://www.militantwire.com/p/ai-use-in-terrorist-plots-and-attacks. ↑
- Jacob Ware, “Terrorist Groups, Artificial Intelligence, and Killer Drones,” War on the Rocks, September 24, 2019, https://warontherocks.com/2019/09/terrorist-groups-artificial-intelligence-and-killer-drones. ↑
- Kim Cragin, “Taking Stock of the Islamic State,” Lawfare, December 15, 2024, https://www.lawfaremedia.org/article/taking-stock-of-the-islamic-state. ↑
- Mauro Lubrano, “Navigating Terrorist Innovation: A Proposal for a Conceptual Framework on How Terrorists Innovate,” Terrorism and Political Violence 35, no. 2 (2023): 248–63, https://doi.org/10.1080/09546553.2021.1903440. ↑
- Martha Crenshaw, “Theories of Terrorism: Instrumental and Organizational Approaches,” Journal of Strategic Studies 10, no. 4 (1987): 13–31, https://doi.org/10.1080/01402398708437313. ↑
- While emerging technologies are widely understood as new, innovative technologies that could significantly disrupt society, existing technologies finding new applications or commercial circumstances (such as lapsed patents) also fall under this umbrella. See, for example, research into why violent extremists have adopted additive manufacturing (3D printing) only in recent years. Yannick Veilleux-Lepage, “Printing Terror: An Empirical Overview of the Use of 3D-Printed Firearms by Right-Wing Extremists,” CTC Sentinel 17, no. 6 (June 2024): 37–49, https://ctc.westpoint.edu/printing-terror-an-empirical-overview-of-the-use-of-3d-printed-firearms-by-right-wing-extremists. ↑
- Audrey Kurth Cronin, Power to the People: How Open Technological Innovation Is Arming Tomorrow’s Terrorists (New York: Oxford University Press, 2020). ↑
- Clara Broekaert and Colin P. Clarke, “The New Orleans Attack: The Technology Behind IS-Inspired Plots,” Global Network on Extremism and Technology, January 30, 2025, https://gnet-research.org/2025/01/30/the-new-orleans-attack-the-technology-behind-is-inspired-plots. ↑
- Chelsea Daymon, Yannick Veilleux-Lepage, and Emil Archambault, Learning from Foes: How Racially and Ethnically Motivated Violent Extremists Embrace and Mimic Islamic State’s Use of Emerging Technologies (London: Global Network on Extremism and Technology, 2022), https://gnet-research.org/2022/06/07/learning-from-foes-how-racially-and-ethnically-motivated-violent-extremists-embrace-and-mimic-islamic-states-use-of-emerging-technologies. ↑
- Daveed Gartenstein-Ross, Colin P. Clarke, and Matt Shear, “Terrorists and Technological Innovation,” Lawfare, February 2, 2020, https://www.lawfaremedia.org/article/terrorists-and-technological-innovation. ↑
- Assaf Moghadam, “How Al Qaeda Innovates,” Security Studies 22, no. 3 (2013): 466–97, https://doi.org/10.1080/09636412.2013.816123. ↑
- A terrorist entrepreneur is someone like Khaled Sheikh Mohammed, who was technically never a formal member of al-Qaeda even as he was the brains behind the 9/11 attacks. He was something of a jihadist “free agent,” lending his services to jihadist groups that sought his expertise. ↑
- Michael C. Horowitz, “Nonstate Actors and the Diffusion of Innovations: The Case of Suicide Terrorism,” International Organization 64, no. 1 (2010): 33–64, https://doi.org/10.1017/S0020818309990233. ↑
- Brian A. Jackson et al., Aptitude for Destruction, Volume 1: Organizational Learning in Terrorist Groups and Its Implications for Combating Terrorism (Santa Monica, CA: RAND Corporation, 2005), https://www.rand.org/pubs/monographs/MG331.html. ↑
- Michael Kenney, “Beyond the Internet: Mētis, Techne, and the Limitations of Online Artifacts for Islamist Terrorists,” Terrorism and Political Violence 22, no. 2 (2010): 177–97, https://doi.org/10.1080/09546550903554760. ↑
- Don Rassler and Yannick Veilleux-Lepage, “The Paradox of Progress: How ‘Disruptive,’ ‘Dual-Use,’ ‘Democratized,’ and ‘Diffused’ Technologies Shape Terrorist Innovation,” Zeitschrift für Technikfolgenabschätzung in Theorie und Praxis [Journal for technology assessment in theory and practice] 33, no. 2 (2024): 22–28, https://doi.org/10.14512/tatup.33.2.22. ↑
- Marc‑André Argentino, Shiraz Maher, and Charlie Winter, Violent Extremist Innovation: A Cross‑Ideological Analysis (London: International Centre for the Study of Radicalisation and Political Violence, 2021), https://icsr.info/wp-content/uploads/2021/12/ICSR-Report-Violent-Extremist-Innovation-A-Cross%E2%80%91Ideological-Analysis.pdf. ↑
- Rassler and Veilleux-Lepage, “The Paradox of Progress.” ↑
- Charlie Winter, Shiraz Maher, and Aymenn Jawad al‑Tamimi, Understanding Salafi‑Jihadist Attitudes Towards Innovation (London: International Centre for the Study of Radicalisation and Political Violence, 2021), https://icsr.info/wp-content/uploads/2021/01/ICSR-Report-Understanding-Salafi%E2%80%91Jihadist-Attitudes-Towards-Innovation.pdf. ↑
- Charlie Winter, “Suicide Tactics and the Islamic State,” International Centre for Counter-Terrorism, January 10, 2017, https://icct.nl/publication/suicide-tactics-and-islamic-state. ↑
- Ellen Tveteraas, “Under the Hood – Learning and Innovation in the Islamic State’s Suicide Vehicle Industry,” Studies in Conflict & Terrorism 47, no. 12 (2024): 1648–71, https://doi.org/10.1080/1057610X.2022.2043226. ↑
- Yannick Veilleux‑Lepage and Emil Archambault, A Comparative Study of Non-State Violent Drone Use in the Middle East (The Hague: International Centre for Counter-Terrorism, 2022), https://icct.nl/publication/comparative-study-non-state-violent-drone-use-middle-east. ↑
- Official media foundations are linked to the Islamic State’s central media apparatus but also include the media foundations that appear to be the primary propaganda arms of different provinces of the Islamic State—though some of these are not centrally and formally acknowledged. Unofficial foundations and outlets are not linked to the group officially but still openly align with it. Supporter networks are individual accounts of supporters that may be involved in propagating Islamic State content or making Islamic State–aligned content. As Moustafa Ayed has noted, unofficial media foundations often “have a much larger footprint, and are more readily available on platforms and messaging applications than official outlets.” See Moustafa Ayad, “Teenage Terrorists and the Digital Ecosystem of the Islamic State,” CTC Sentinel 18, no. 2 (February 2025): 1–14, https://ctc.westpoint.edu/teenage-terrorists-and-the-digital-ecosystem-of-the-islamic-state. ↑
- Moustafa Ayad, “An ‘Alt‑Jihad’ Is Rising on Social Media,” Wired, December 8, 2021, https://www.wired.com/story/alt-jihad-rising-social-media. ↑
- Pastebin websites are online text-hosting services where users can store and share plaintext or code snippets. They often do not require the creation of an account to share content and have minimal oversight or content moderation practices. They serve as an important pillar of the Islamic State’s digital media ecosystem. See Alessandro Bolpagni and Ali Fisher, “‘Navigating Beyond the Digital Safe Haven’: Mapping the Course of pro-Islamic State Propaganda on Rocket.Chat through a URL Social Network Analysis,” Perspectives on Terrorism 20, no. 1 (2026): 9–28, DOI: 10.19165/AKBV9789. ↑
- Nicolas Stockhammer and Colin P. Clarke, “Learning from Islamic State-Khorasan Province’s Recent Plots,” Lawfare, August 11, 2024, https://www.lawfaremedia.org/article/learning-from-islamic-state-khorasan-province-s-recent-plots. ↑
- Interpretations and practice vary. However, prohibition of image-making is documented in several hadiths. For an in-depth look at pushback against AI-generated videos with human-like newscasters, see Mona Thakkar and Anne Speckhard, Caliphate AI: IS/ISKP Supporters Harness Generative AI for Propaganda Dissemination (International Center for the Study of Violent Extremism, 2024), https://icsve.org/caliphate-ai-is-iskp-supporters-harness-generative-ai-for-propaganda-dissemination. ↑
- While it is not possible to ascertain that generative AI has been used by the Islamic State’s official propaganda outlets, AI text detection tools indicate the use of such applications. Additionally, certain visual components of ISKP’s magazine Voice of Khorasan bear the hallmarks of generative AI. ↑
- For a comprehensive analysis of Voice of Khorasan, see Lucas Webber, “Voice of Khorasan Magazine and the Internationalization of Islamic State’s Anti-Taliban Propaganda,” Terrorism Monitor 20, no. 9 (2022): 1–6, https://jamestown.org/program/voice-of-khorasan-magazine-and-the-internationalization-of-islamic-states-anti-taliban-propaganda. ↑
- Aleks Krotoski, Carly Sygrove, and Caroline Feraday, “Jihadists and AI,” episode of The Documentary Podcast: The Global Jigsaw, BBC World Service, October 29, 2025, https://www.bbc.co.uk/programmes/p0mch2fv. ↑
- Brave Leo AI (product page), accessed April 10, 2026, https://brave.com/leo. ↑
- These insights are derived from OSINT work. For this study, we have limited ourselves to all channels in the Islamic State’s primary Rocket.Chat server but not private chats or chat groups. Language queries were done in English, French, Arabic, Pashto, and Urdu. ↑
- Azeri-Press Agency, “Wiener IS-Anhänger besprach Bombenbauen mit ChatGPT” [Viennese Islamic State supporter discussed bomb making with ChatGPT], Salzburger Nachrichten, August 31, 2025, https://www.sn.at/politik/innenpolitik/wiener-is-anhaenger-besprach-bombenbauen-mit-chatgpt-art-613233. ↑
- Efilism describes an extreme anti-natalist philosophy that, to prevent pain, all life and suffering should be extinguished. This ideology has motivated mass shootings. ↑
- Broekaert and Webber, “AI Use in Terrorist Plots.” ↑
- Byron Kaye, “Crisis Contractor for OpenAI, Anthropic Eyes a Move to Combat Extremism,” Reuters, April 2, 2026, https://www.reuters.com/sustainability/society-equity/crisis-contractor-openai-anthropic-eyes-move-combat-extremism-2026-04-02. ↑
- “Tumbler Ridge Shooter Had Interest in Gore and Guns,” Anti-Defamation League, February 11, 2026, https://www.adl.org/resources/article/tumbler-ridge-shooter-had-interest-gore-and-guns. ↑
- Georgia Wells, “OpenAI Employees Raised Alarms About Canada Shooting Suspect Months Ago,” Wall Street Journal, February 21, 2026, https://www.wsj.com/us-news/law/openai-employees-raised-alarms-about-canada-shooting-suspect-months-ago-b585df62. ↑
- Global Counterterrorism Forum, Berlin Memorandum on Good Practices for Countering Terrorist Use of Unmanned Aerial Systems (Global Counterterrorism Forum, 2019), https://www.thegctf.org/Initiatives/Initiative-to-Operationalize-the-Berlin-Memorandum. ↑
- Jeffrey Lewis, “A Smart Bomb in Every Garage? Driverless Cars and the Future of Terrorist Attacks,” National Consortium for the Study of Terrorism and Responses to Terrorism, September 28, 2015, https://www.start.umd.edu/news/smart-bomb-every-garage-driverless-cars-and-future-terrorist-attacks. In 2018, a Finnish security firm reported it had “concrete evidence” that the Islamic State was considering self-driving cars in place of suicide bombers. See “Self Driving Cars ‘Game Changing’ for FBI…& ISIS,” The Cipher Brief, January 3, 2018, https://www.thecipherbrief.com/self-driving-cars-game-changing-fbi-isis. ↑
- Don Rassler, Muhammad al-`Ubaydi, and Vera Mironova, “The Islamic State’s Drone Documents: Management, Acquisitions, and DIY Tradecraft,” Combating Terrorism Center at West Point, January 31, 2017, https://ctc.westpoint.edu/ctc-perspectives-the-islamic-states-drone-documents-management-acquisitions-and-diy-tradecraft. ↑
- Niccola Milnes and Rida Lyammouri, “Countering JNIM’s Drone Proliferation in the Sahel,” Policy Paper No. 24/25, Policy Center for the New South, July 14, 2025, https://www.policycenter.ma/publications/countering-jnims-drone-proliferation-sahel. ↑
- Niccola Milnes, “JNIM’s Strength Is Low-Tech Grit, and AI Is Now Low-Tech Enough to Fit,” LinkedIn, April 1, 2025, https://www.linkedin.com/pulse/jnims-strength-low-tech-grit-ai-now-enough-fit-niccola-milnes-aomre. ↑
- Don Rassler, “A View from the CT Foxhole: Adam Hadley, Executive Director, Tech Against Terrorism,” CTC Sentinel 18, no. 7 (July 2025): 9–15, https://ctc.westpoint.edu/a-view-from-the-ct-foxhole-adam-hadley-executive-director-tech-against-terrorism. ↑
- Rueben Dass and Abdul Basit, “Nascent Adoption: Emerging Tech Trends by Terrorists in Afghanistan and Pakistan,” Global Network on Extremism and Technology, June 18, 2025, https://gnet-research.org/2025/06/18/nascent-adoption-emerging-tech-trends-by-terrorists-in-afghanistan-and-pakistan. ↑
- Iftikhar Firdous, “ISKP Begins Publishing Pashto News Bulletins Using Artificial Intelligence,” The Khorasan Diary, May 21, 2024, https://www.thekhorasandiary.com/en/2024/05/21/iskp-begins-publishing-pashto-news-bulletins-using-artificial-intelligence. ↑
- See, for example, Almog Simchon, Matthew Edwards, and Stephan Lewandowsky, “The Persuasive Effects of Political Microtargeting in the Age of Generative Artificial Intelligence,” PNAS Nexus 3, no. 2 (2024): pgae035, https://doi.org/10.1093/pnasnexus/pgae035. ↑
- Miles Brundage et al., The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (February 20, 2018), https://doi.org/10.48550/arXiv.1802.07228. ↑
- Daveed Gartenstein-Ross, The Islamic State’s Global Propaganda Strategy (The Hague: International Centre for Counter-Terrorism, 2016), https://icct.nl/publication/islamic-states-global-propaganda-strategy. This strategy is also reminiscent of targeted recruitment of European petty criminals through redemption narratives in the 2010s. Rayat al-Tawheed, a group of British jihadists that later joined the Islamic State, famously used the slogan “Sometimes people with the worst pasts create the best futures.” See Rajan Basra and Peter R. Neumann, “Criminal Pasts, Terrorist Futures: European Jihadists and the New Crime-Terror Nexus,” Perspectives on Terrorism 10, no. 6 (2016): 25–40, https://pt.icct.nl/article/criminal-pasts-terrorist-futures-european-jihadists-and-new-crime-terror-nexus. ↑
- Abdullah Alrhmoun, Charlie Winter, and János Kertész, “Automating Terror: The Role and Impact of Telegram Bots in the Islamic State’s Online Ecosystem,” Terrorism and Political Violence 36, no. 4 (2024): 409–24, https://doi.org/10.1080/09546553.2023.2169141. ↑
- TATP is triacetone triperoxide, also known as “the Mother of Satan.” TATP is a highly volatile, unstable explosive favored by the Islamic State and its supporters because, when properly detonated, it can be highly lethal. ↑
- This refers to operatives who are part of the Islamic State who help coordinate attacks online with supporters. ↑
- Gartenstein-Ross and Blackman, “ISIL’s Virtual Planners.” ↑
- Strategic Studies Department, “Significance and Implications of Ukraine’s Operation Spiderweb,” TRENDS Research & Advisory, June 3, 2025, https://trendsresearch.org/insight/significance-and-implications-of-ukraines-operation-spiderweb. ↑
- Vibe-coding is a slang term popularized in 2025 by Andrej Karpathy that refers to a method of software development in which humans use AI tools to generate, debug, and refine applications through conversational prompts rather than by formally writing code line by line. ↑
- Emil Archambault and Yannick Veilleux-Lepage, “Drone Imagery in Islamic State Propaganda: Flying Like a State,” International Affairs 96, no. 4 (2020): 955–73, https://doi.org/10.1093/ia/iiaa014. ↑
- Gilad Karo, Tom Divon, and Blake Hallinan, “The TikTok Caliphate: How Jihadist Supporters Exploit Algorithmic Recommendations and Evade Content Moderation,” Social Media + Society 11, no. 1 (2026): 1–16, https://doi.org/10.1177/20563051251412167. ↑
- Data poisoning occurs during the training phase of an AI model and consists of including malicious data into the training dataset to nudge the model to learn “wrong” patterns, which would ultimately result in incorrect behavior for certain inputs. Prompt injections consist of manipulative input given to a trained model to elicit unintended behavior, which can include prompt formulations that seek to make the model ignore its instructions. ↑
- Jessica Davis, “The Financial Future of the Islamic State,” CTC Sentinel 17, no. 7 (August 2024): 32–37, https://ctc.westpoint.edu/the-financial-future-of-the-islamic-state. ↑
- While threat assessments in the 2010s and even early 2020s warned that augmented and virtual reality would unleash a host of new capabilities for violent extremists, including Salafi-jihadists, their predictions have not materialized at scale. This means that even as price points for AR/VR products have fallen, few have recognized the operational benefit or been able to leverage these tools for more nefarious ends. ↑
- For in-depth analysis of commands that “jailbreak” proprietary chatbots, see Gabriel Weimann et al., “Generating Terror: The Risks of Generative AI Exploitation,” CTC Sentinel 17, no. 1 (January 2024): 17–24, https://ctc.westpoint.edu/generating-terror-the-risks-of-generative-ai-exploitation. ↑
- See, for example, Dmitrii Volkov, “Badllama 3: Removing Safety Finetuning from Llama 3 in Minutes,” arXiv, July 1, 2024, https://doi.org/10.48550/arXiv.2407.01376. Additionally, various code repository services host step-by-step abliteration guides. ↑
- Local logging remains (operating system can log interactions and errors). ↑
- Brian Fishman, “AI and the New Blueprint of Terrorism,” War on the Rocks, March 9, 2026, https://warontherocks.com/2026/03/ai-and-the-new-blueprint-of-terrorism. ↑
- Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable (New York: Random House, 2007). ↑
- See, for example, Joana Cook, Graig Klein, and Bàrbara Molas, “Terrorist Exploitation of AI: A Concept Note,” International Centre for Counter-Terrorism, 2024, https://icct.nl/sites/default/files/2026-02/Workshop%20-%20Concept%20Note.pdf. ↑
- Gartenstein-Ross, “Terrorists Are Going to Use AI.” ↑