The South Asian digital information landscape has been reshaped by a series of technological advancements, and the May 2025 conflict between India and Pakistan offers a concrete example: the integration of Artificial Intelligence into disinformation campaigns has made them a central component of cognitive warfare. As Joseph Nye observed, “In the Information Age, success is not merely the result of whose army wins, but whose story wins,” and AI has introduced a new set of methods for maximizing the chances of winning that story. The May 2025 conflict marks the first instance in the history of the South Asian strategic dyad in which Generative Artificial Intelligence (GenAI) and algorithmic manipulation were weaponized at scale. AI has amplified traditional disinformation campaigns with unprecedented speed, scale, and accessibility, saturating the information environment and revealing how easily malicious actors can now dominate the psychological battlefield.
The digital information landscape during the May conflict rapidly became saturated with AI-driven disinformation. AI was used to produce leadership deepfakes, visuals manipulated with Generative Adversarial Networks (GANs) were broadcast, AI chatbots validated false claims, and Large Language Models (LLMs) were used for micro-targeting, among other tactics. For instance, at the peak of the military exchanges, a deepfake video of Prime Minister Shehbaz Sharif conceding military failure against India’s Operation Sindoor circulated across major social media platforms. Although the Deepfakes Analysis Unit (DAU) dismissed it as fake, its initial viral velocity raised serious concerns about managing internal perception. Similarly, Grok AI, integrated into the X platform and trained on social media data, validated false claims of an Indian invasion of Pakistan. Similar patterns of AI-enabled disinformation have been observed in the Russia-Ukraine War, the Israel-Hamas War, and the ongoing combined US-Israel confrontation with Iran. These events confirm that AI has become an inescapable feature of modern cognitive warfare.
Several characteristics make AI well-suited to amplifying the generation and dissemination of disinformation. The sheer volume and speed of AI-generated content flood the digital information space with overwhelming quantities of fabricated material during the chaotic hours of a conflict. Accessibility lowers the barrier to entry: anyone with malicious intent can now generate fake content with AI, obviating the need for the large, costly, human-operated bot farms, manual scripting and graphic creation, and static systems of traditional influence operations. When digital networks are persistently flooded with content of every kind, debunking each fabricated item becomes extremely difficult. And as AI models continue to improve, the content they generate has become extremely persuasive, further complicating detection.
AI-generated fake content has profound societal implications. It undermines the role of objective truth and logical analysis in everyday life, a phenomenon RAND researchers call “truth decay.” The World Economic Forum’s Global Risks Report 2024 identified misinformation and disinformation as the top global risk. Perhaps the most vexing dilemma it poses is a secondary, highly damaging psychological vulnerability known as the “liar’s dividend”: growing public awareness of photorealistic deepfakes and highly fluent AI text makes people skeptical even of accurate information. This erodes trust in state institutions and degrades the epistemic foundation of society, as the masses no longer know whom to trust. In times of crisis, trust in state institutions should be strengthened more than ever, since domestic unity is a necessary precondition for effectively countering cross-border aggression. During the May 2025 conflict, the information ecosystem was fragmented on several questions, such as which side had inflicted more damage, or whether images of attacks were hoaxes, highlighting vulnerabilities in AI models that propagandists exploit. In essence, the severe disruptions that unfolded across the information environment during the May 2025 conflict necessitate a Cognitive Defense Strategy for Pakistan to counter the newly introduced mass use of AI in disinformation campaigns.
While the observable effects in the May conflict were largely confined to public manipulation, and decision-makers were shielded from being directly affected, this may not hold in future wars. With the rapid advancement of generative AI and the growing expertise of malicious users, it is becoming easier for generated content to bypass detection and safety guardrails. This prospect is especially critical in a compressed decision-making environment, where believable disinformation could sow consternation and open the door to irreversible catastrophic damage. The mere possibility further underscores the need for a centralized Cognitive Defense Strategy for Pakistan: a digital warfare task force that keeps pace with the transformative nature of AI and dismantles disinformation campaigns, particularly in times of crisis. In the current context, however, the ultimate target of AI-led disinformation campaigns is not military hardware and personnel but the civilian population and its trust in state institutions. Much of the effort should therefore be directed at helping the public distinguish fact from falsehood, strengthening cognitive resilience among the masses of Pakistan.