Nov 23, 2025
A Defensive AI Agent Against Large Language Model (LLM)-Assisted Polymorphic Malware
Ifeoma Ilechukwu, Saahir Vazirani, Guillaume Tabard, Chaitree Baradkar, Albert Calvo, Rijal Saepuloh
The rapid evolution of Large Language Models (LLMs) has introduced a new asymmetric threat: AI-assisted polymorphic malware. As identified by Google’s Threat Intelligence Group, attackers are utilizing automated frameworks like "PromptFlux" to weaponize LLMs, generating hundreds of functional, unique malware variants in minutes. Traditional Antivirus (AV) and Endpoint Detection and Response (EDR) systems fail to detect these attacks because they rely on static signatures of the final binary, remaining blind to the generation process itself. To close this gap, we introduce BlueFlux, a defensive AI agent that shifts detection "left" from the endpoint to the API. Powered by Grok and Model Context Protocol (MCP) tools, BlueFlux monitors LLM API logs to detect both the intent and velocity of code generation. By analyzing suspicious prompts, tracking high-speed mutation sequences, and correlating these behaviors into a dynamic risk score, BlueFlux provides an AI-aware shield capable of identifying and blocking the creation of polymorphic malware before it is ever deployed.
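The abstract describes BlueFlux correlating prompt intent and generation velocity into a dynamic risk score. The paper's actual implementation is not reproduced here; the following is a minimal illustrative sketch, in which the pattern list, window size, threshold, and blend weights are all assumptions chosen for demonstration.

```python
from collections import deque
import re

# Illustrative only: these patterns and parameters are not from the paper.
SUSPICIOUS_PATTERNS = [
    r"obfuscat",
    r"evad(e|ing) (av|antivirus|edr)",
    r"self-modify",
    r"polymorph",
]

class RiskScorer:
    """Blends prompt intent and request velocity into a single risk score in [0, 1]."""

    def __init__(self, window_seconds=60.0, velocity_threshold=10):
        self.window = window_seconds          # sliding window for velocity tracking
        self.threshold = velocity_threshold   # requests/window considered maximal risk
        self.timestamps = deque()

    def intent_score(self, prompt):
        # Fraction of suspicious patterns matched, capped at 1.0.
        hits = sum(1 for p in SUSPICIOUS_PATTERNS if re.search(p, prompt.lower()))
        return min(1.0, hits / 2)

    def velocity_score(self, now):
        # Count requests inside the sliding window; normalize by the threshold.
        self.timestamps.append(now)
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return min(1.0, len(self.timestamps) / self.threshold)

    def score(self, prompt, now):
        # Weighted blend; the 0.6/0.4 weights are illustrative assumptions.
        return 0.6 * self.intent_score(prompt) + 0.4 * self.velocity_score(now)

scorer = RiskScorer()
risk = scorer.score("rewrite this dropper to obfuscate strings and evade AV", now=0.0)
```

In a deployment matching the abstract, `now` would come from API log timestamps and high scores would trigger blocking; the velocity term is what distinguishes a human iterating slowly from a framework like PromptFlux emitting hundreds of variants in minutes.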
Cite this work
@misc{ilechukwu2025blueflux,
  title={(HckPrj) A Defensive AI Agent Against Large Language Model (LLM)-Assisted Polymorphic Malware},
  author={Ilechukwu, Ifeoma and Vazirani, Saahir and Tabard, Guillaume and Baradkar, Chaitree and Calvo, Albert and Saepuloh, Rijal},
  date={2025-11-23},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


