Author: Denis Avetisyan
New research demonstrates how artificial intelligence can automate the critical process of identifying security policy weaknesses after a cyberattack.
This study explores the use of agentic workflows driven by Large Language Models to analyze incident evidence, map findings to the MITRE ATT&CK framework, and pinpoint areas for improved policy compliance.
Despite the critical need for efficient cybersecurity post-incident reviews, current processes remain labor-intensive and reliant on subjective expert analysis. This paper introduces an automated framework, ‘Automated Post-Incident Policy Gap Analysis via Threat-Informed Evidence Mapping using Large Language Models’, which leverages Large Language Models to analyze system evidence and pinpoint deficiencies in security policies. Experimental results demonstrate that this approach can effectively map observed behaviours to the MITRE ATT&CK framework, identify missing controls, and generate actionable remediation recommendations with traceable reasoning. Could LLM-assisted workflows represent a significant step towards more consistent, auditable, and ultimately, more resilient cybersecurity practices?
The Inevitable Failure of Reactive Security
Post-incident reviews have historically relied on painstaking manual effort, demanding significant time from security teams to collect logs, analyze data, and piece together the sequence of events. This process is not only incredibly resource-intensive, but also prone to inconsistencies; the depth of analysis often varies dramatically depending on the individuals involved and the urgency of the situation. Because these reviews frequently lack standardized methodologies and automated tools, critical details can be overlooked, root causes misidentified, and valuable learning opportunities lost. Consequently, organizations struggle to move beyond reactive firefighting and build a truly proactive security posture, repeatedly facing similar incidents due to incomplete or inconsistent post-incident understanding.
Current incident response methodologies frequently fall short when bridging the gap between technical findings and overarching organizational policies. Investigations often meticulously detail what happened – identifying compromised systems and exploited vulnerabilities – but struggle to clearly demonstrate how the incident violated specific security mandates or regulatory requirements. This disconnect complicates remediation efforts, as teams may address the immediate technical issue without fully understanding the policy implications, potentially leaving the organization exposed to further risk and hindering compliance audits. Consequently, proving adherence to standards like PCI DSS or HIPAA becomes significantly more challenging, and the opportunity to proactively strengthen security controls based on policy-driven insights is lost.
The inability to thoroughly learn from security incidents represents a significant vulnerability for organizations striving for robust defenses. Without a systematic approach to analyze failures – beyond simply restoring systems – crucial patterns and underlying weaknesses often remain hidden. This deficiency hinders proactive security enhancements, leaving organizations susceptible to repeated breaches and escalating risks. A reactive cycle of responding to incidents, rather than preventing them, limits the potential for genuine improvement in security posture and ultimately impacts an organization’s resilience against future threats. Consequently, this gap necessitates a shift towards more intelligent and automated post-incident processes that prioritize knowledge capture and preventative action.
Orchestrating Intelligence: The Agentic Workflow
The implemented Agentic Workflow consists of a multi-agent pipeline designed to automate the post-incident review process, reducing manual effort and accelerating resolution times. This pipeline breaks down the review into discrete tasks assigned to individual agents, each specializing in a specific aspect of incident analysis. These agents operate sequentially and collaboratively, passing data and insights to one another. The workflow begins with data ingestion from incident management systems, proceeds through analysis and root cause identification, and culminates in the generation of a post-incident report detailing findings and recommended preventative actions. This automated approach ensures consistency and scalability in the review process, allowing for a more efficient and thorough analysis of incidents.
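As a rough illustration of this pipeline, the sketch below wires four placeholder agents into a sequential graph using LangGraph's StateGraph API (the orchestration framework discussed next). The state fields, node names, and placeholder bodies are hypothetical stand-ins; the paper does not publish its implementation.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

# Shared state handed from agent to agent; field names are illustrative.
class ReviewState(TypedDict):
    raw_events: list    # ingested incident evidence (e.g., parsed logs)
    findings: list      # behaviours correlated and mapped to ATT&CK
    root_causes: list   # prioritized root-cause hypotheses
    report: str         # final post-incident report

# Each node is one specialist agent; the bodies here are placeholders.
def ingest(state: ReviewState) -> dict:
    return {"raw_events": ["<records pulled from the incident system>"]}

def analyze(state: ReviewState) -> dict:
    return {"findings": [f"finding from {e}" for e in state["raw_events"]]}

def root_cause(state: ReviewState) -> dict:
    return {"root_causes": ["<LLM-ranked hypotheses>"]}

def report(state: ReviewState) -> dict:
    return {"report": f"{len(state['findings'])} findings; "
                      f"causes: {state['root_causes']}"}

graph = StateGraph(ReviewState)
for name, fn in [("ingest", ingest), ("analyze", analyze),
                 ("root_cause", root_cause), ("report", report)]:
    graph.add_node(name, fn)

# Strictly sequential hand-off, mirroring the pipeline described above.
graph.add_edge(START, "ingest")
graph.add_edge("ingest", "analyze")
graph.add_edge("analyze", "root_cause")
graph.add_edge("root_cause", "report")
graph.add_edge("report", END)

pipeline = graph.compile()
result = pipeline.invoke({"raw_events": [], "findings": [],
                          "root_causes": [], "report": ""})
```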
The agentic workflow leverages Large Language Models (LLMs), specifically GPT-4o, to automate the analysis of incident data and determine root causes. These LLMs process incident reports, logs, and related documentation to identify patterns and anomalies indicative of underlying issues. LangGraph serves as the orchestration framework, managing the interaction between the LLM and data sources, and enabling a step-by-step reasoning process. This allows the system to move beyond simple keyword matching and perform contextual analysis, ultimately generating a prioritized list of potential root causes for each incident.
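A minimal sketch of what one such LLM-backed node might look like, assuming the standard OpenAI Python client; the prompt wording and free-text output are simplified stand-ins for the system's structured, step-by-step reasoning.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def infer_root_causes(findings: list[str]) -> str:
    """Ask GPT-4o to reason over correlated findings and rank root causes.

    A production agent would demand structured output and explicit evidence
    citations per hypothesis; this sketch returns free text.
    """
    prompt = (
        "You are a post-incident review analyst. Given the findings below, "
        "list the most likely root causes in priority order, and for each "
        "one name the finding(s) that support it.\n\nFindings:\n"
        + "\n".join(f"- {f}" for f in findings)
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # favour reproducible analysis over creative output
    )
    return response.choices[0].message.content
```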
LlamaIndex provides the capability to ingest and semantically index an organization’s policies and regulatory documents. This process transforms unstructured policy text into vector embeddings, allowing the agentic workflow to perform similarity searches and retrieve relevant policies based on the meaning of incident findings. Rather than relying on keyword matches, semantic indexing enables correlation of incident root causes with applicable regulations, even when the phrasing differs. The system utilizes these retrieved policies to assess whether incident responses were compliant and to identify potential gaps in existing procedures, enhancing the thoroughness of post-incident reviews.
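A minimal sketch of that ingestion-and-retrieval step, assuming LlamaIndex's high-level document API; the directory path and query text are illustrative.

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Ingest policy and regulatory documents (the directory path is illustrative).
policy_docs = SimpleDirectoryReader("./security_policies").load_data()

# Embed and index the text so retrieval works on meaning, not keywords.
index = VectorStoreIndex.from_documents(policy_docs)

# Retrieve the policy clauses most relevant to an incident finding, even
# when the policy's wording differs from the finding's.
query_engine = index.as_query_engine(similarity_top_k=3)
answer = query_engine.query(
    "Which policies govern repeated failed authentication attempts "
    "and account lockout thresholds?"
)
print(answer)
for node in answer.source_nodes:  # traceability back to the source clause
    print(node.metadata.get("file_name"), node.score)
```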
Tracing the Inevitable: A Brute-Force Attack Analysis
Analysis of a documented brute-force authentication attack was conducted utilizing Windows Event Logs (EVTX) as the foundational data source. These logs, generated by Windows operating systems, record security events, including login attempts, failures, and account lockouts. The EVTX format was parsed to extract relevant data points such as timestamps, source IP addresses, user accounts targeted, and the outcome of each authentication attempt. This data was then processed to identify patterns indicative of a brute-force attack, specifically a high volume of failed login attempts originating from multiple source IP addresses against a single or multiple target accounts. The use of EVTX logs enabled a detailed reconstruction of the attack timeline and scope without requiring additional sensor deployment or data collection mechanisms.
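The paper does not name its log parser, but the extraction step might look like the sketch below, which assumes the python-evtx package and pulls fields from Event ID 4625 (Windows' failed-logon event) with naive regexes.

```python
import re
import Evtx.Evtx as evtx  # python-evtx; an assumption, not the paper's tooling

FAILED_LOGON = "4625"  # Windows Security event ID for a failed logon

def extract_failed_logons(path: str) -> list[dict]:
    """Pull (timestamp, source IP, target account) from failed-logon events.

    Naive regex extraction over each record's XML; a production parser
    would walk the EventData elements properly.
    """
    attempts = []
    with evtx.Evtx(path) as log:
        for record in log.records():
            xml = record.xml()
            if f">{FAILED_LOGON}</EventID>" not in xml:
                continue
            ip = re.search(r'Name="IpAddress">([^<]+)<', xml)
            user = re.search(r'Name="TargetUserName">([^<]+)<', xml)
            time = re.search(r'SystemTime="([^"]+)"', xml)
            attempts.append({
                "time": time.group(1) if time else None,
                "source_ip": ip.group(1) if ip else None,
                "target_user": user.group(1) if user else None,
            })
    return attempts
```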
The LLM-powered agents analyzed Windows Event Logs to detect a brute-force attack pattern characterized by repeated failed login attempts from a limited set of source IP addresses. This pattern was then correlated with the MITRE ATT&CK Framework, specifically the “Brute Force” technique (T1110) under the Credential Access tactic. Through log analysis and correlation, the agents identified the systems targeted by the attack, detailing the user accounts subjected to the attempts and the specific servers or services experiencing the highest volume of malicious activity. The output included a list of impacted systems, their associated vulnerabilities, and a severity score based on the number of failed login attempts and the criticality of the targeted accounts.
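A simple heuristic over the attempts extracted above could look like the following; the threshold and severity scoring are illustrative, while T1110 is the actual ATT&CK identifier for Brute Force.

```python
from collections import Counter

# Illustrative threshold; a real detector would also window by time.
BRUTE_FORCE_THRESHOLD = 20

def detect_brute_force(attempts: list[dict]) -> list[dict]:
    """Flag source IPs whose failed-logon volume suggests brute forcing,
    tagging each finding with the corresponding ATT&CK technique."""
    per_source = Counter(a["source_ip"] for a in attempts if a["source_ip"])
    findings = []
    for ip, count in per_source.items():
        if count < BRUTE_FORCE_THRESHOLD:
            continue
        targets = {a["target_user"] for a in attempts if a["source_ip"] == ip}
        findings.append({
            "source_ip": ip,
            "failed_attempts": count,
            "targeted_accounts": sorted(t for t in targets if t),
            "attack_technique": "T1110 Brute Force",  # Credential Access
            "severity": "high" if count >= 5 * BRUTE_FORCE_THRESHOLD
                        else "medium",  # illustrative scoring only
        })
    return findings
```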
The implemented workflow identified specific policy violations directly attributable to the brute-force attack. This linkage moved beyond simple threat detection to provide actionable intelligence; identified violations included insufficient account lockout policies, weak password complexity requirements, and a lack of multi-factor authentication on critical systems. By mapping the attack to these concrete policy failures, the workflow facilitated targeted remediation steps, such as strengthening password policies, enabling account lockout thresholds, and implementing MFA. This approach enables organizations to address the root causes of the vulnerability, preventing future attacks leveraging similar tactics and reducing overall risk exposure.
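The violation-to-remediation linkage could be expressed as a small lookup keyed by technique, as in this hypothetical sketch; the control IDs and wording are invented for illustration.

```python
# Hypothetical mapping from an observed technique to the policy checks it
# implicates; the entries mirror the violations described above.
POLICY_CHECKS = {
    "T1110 Brute Force": [
        ("account-lockout", "Lockout threshold configured and enforced",
         "Enable account lockout after repeated failed attempts"),
        ("password-complexity", "Passwords meet complexity requirements",
         "Strengthen the password policy"),
        ("mfa-critical-systems", "MFA required on critical systems",
         "Roll out MFA on all exposed services"),
    ],
}

def recommend_remediation(finding: dict, implemented: set[str]) -> list[str]:
    """List a fix for every implicated control that is not actually live
    (`implemented` holds the control IDs known to be in place)."""
    checks = POLICY_CHECKS.get(finding["attack_technique"], [])
    return [f"{requirement}: {fix}"
            for control_id, requirement, fix in checks
            if control_id not in implemented]
```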
From Findings to Foresight: Policy Alignment and Remediation
A robust workflow now exists to meticulously connect analytical results with the foundational evidence and specific policy directives that underpin them. This Evidence-to-Policy Traceability isn’t simply documentation; it establishes a verifiable chain of reasoning, allowing organizations to demonstrate how conclusions were reached and why they adhere to established governance. By explicitly linking findings back to supporting data and relevant policy clauses, the system fosters accountability and transparency. This capability is critical for audits, compliance reporting, and informed decision-making, providing a clear and defensible rationale for security investments and operational procedures. Ultimately, it transforms data analysis from a technical exercise into a strategically aligned component of risk management and policy enforcement.
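One way to make such a chain concrete is a record type that carries the finding, its evidence pointers, the policy clause, and the model's rationale together; the fields below are illustrative, not the paper's schema.

```python
from dataclasses import dataclass, field

@dataclass
class TraceabilityRecord:
    """One auditable link in the evidence-to-policy chain.

    The point is that every conclusion carries pointers back to both its
    raw evidence and the policy clause it was judged against.
    """
    finding: str                    # e.g. "brute force from 203.0.113.7"
    evidence_refs: list[str] = field(default_factory=list)  # log record IDs
    policy_clause: str = ""         # e.g. "Access Control Policy §4.2"
    reasoning: str = ""             # the model's step-by-step rationale
    verdict: str = "non-compliant"  # compliant | non-compliant | unclear
```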
Organizations often struggle to maintain a consistent security posture due to divergences between documented policies and the actual controls in place; a rigorous policy gap identification process addresses this challenge. By systematically comparing stated requirements with implemented safeguards, vulnerabilities arising from misconfigurations, outdated technologies, or simply overlooked stipulations become apparent. This enables security teams to move beyond generalized risk assessments and prioritize remediation efforts based on concrete discrepancies, focusing resources on the most critical areas of non-compliance. Such a targeted approach not only minimizes exposure to threats but also streamlines audits, demonstrates due diligence, and fosters a more robust and adaptive security framework, ultimately ensuring that security investments deliver maximum value.
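Stripped to its core, gap identification is a comparison between mandated and implemented controls; the control inventory below is invented to match the brute-force example.

```python
def identify_policy_gaps(required: set[str], implemented: set[str]) -> set[str]:
    """Controls the policy mandates but the environment lacks."""
    return required - implemented

# Invented control inventory matching the brute-force case above.
required = {"account-lockout", "password-complexity", "mfa-critical-systems"}
implemented = {"password-complexity"}

print(sorted(identify_policy_gaps(required, implemented)))
# ['account-lockout', 'mfa-critical-systems']
```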
Effective security frameworks rely on adherence to established standards, and this process directly supports alignment with leading guidelines such as NIST SP 800-53, ISO/IEC 27001, and the CIS Critical Security Controls. By mapping implemented security measures against these recognized benchmarks, organizations gain a quantifiable understanding of their compliance status and identify areas needing improvement. This structured approach not only streamlines audit processes but also demonstrably strengthens the overall security posture, reducing risk and fostering a more resilient defense against evolving threats. The result is a proactive, standards-driven security program that enhances trust with stakeholders and minimizes potential vulnerabilities.
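A crosswalk from internal controls to those benchmarks might be kept as a simple mapping, as sketched below; the cited control identifiers are illustrative and should be verified against the current revision of each framework.

```python
# Illustrative crosswalk from internal control IDs to external benchmarks.
# Verify every identifier against the current revision of each framework.
FRAMEWORK_MAP = {
    "account-lockout": {
        "NIST SP 800-53": "AC-7 (Unsuccessful Logon Attempts)",
        "ISO/IEC 27001": "A.9.4.2 (Secure log-on procedures)",
        "CIS Controls": "Control 6 (Access Control Management)",
    },
    "mfa-critical-systems": {
        "NIST SP 800-53": "IA-2 (Identification and Authentication)",
        "ISO/IEC 27001": "A.9.4.2 (Secure log-on procedures)",
        "CIS Controls": "Safeguard 6.5 (Require MFA)",
    },
}

def compliance_view(gaps: set[str]) -> dict[str, list[str]]:
    """Group open gaps by the external standard they count against."""
    view: dict[str, list[str]] = {}
    for gap in sorted(gaps):
        for framework, control in FRAMEWORK_MAP.get(gap, {}).items():
            view.setdefault(framework, []).append(f"{gap} -> {control}")
    return view
```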
The Illusion of Control: Towards Continuous Improvement
Threat modeling, when systematically integrated into software development and system design workflows, shifts security considerations from a late-stage assessment to a foundational practice. This proactive approach involves identifying potential threats and vulnerabilities – the ‘attack vectors’ – before they can be exploited. By analyzing system architectures and data flows, teams can anticipate how malicious actors might attempt to compromise assets, and subsequently design security controls to mitigate those risks. The process isn’t a one-time event; rather, it’s a continuous cycle of identifying, prioritizing, and addressing threats throughout the system lifecycle, ultimately leading to more resilient and secure systems. This methodology encourages a deep understanding of potential weaknesses, enabling developers to build defenses into the core of their applications, rather than attempting to patch vulnerabilities after deployment.
Automated policy gap analysis represents a significant shift in cybersecurity, moving beyond periodic audits to a system of continuous assessment. These systems utilize sophisticated algorithms to compare existing security policies against evolving threat landscapes, industry best practices, and regulatory requirements. By constantly scanning for discrepancies – gaps where controls are missing or inadequate – organizations can proactively address vulnerabilities before they are exploited. This continuous refinement isn’t merely about identifying weaknesses; it facilitates automated remediation suggestions, prioritizing critical issues, and providing clear pathways for strengthening security posture. The result is a dynamic, self-improving system that substantially reduces the risk of successful attacks and fosters a more resilient operational environment, minimizing potential damage and downtime.
Shifting from solely responding to security incidents represents a fundamental change in organizational defense. Traditional cybersecurity often operates in a cycle of detection and remediation, perpetually chasing threats as they materialize. However, embracing a proactive stance prioritizes anticipating and preventing attacks before they can impact systems or data. This involves continuously evaluating security measures, identifying weaknesses, and implementing improvements – a process that isn’t a one-time fix, but an ongoing cycle of assessment and adaptation. By fostering this culture of continuous improvement, organizations build resilience – the capacity to withstand attacks, recover quickly from breaches, and learn from experience – ultimately minimizing disruption and bolstering long-term security.
The pursuit of automated post-incident analysis, as detailed in this work, reveals a familiar pattern. Systems, even those designed for resilience, inevitably accumulate dependencies and vulnerabilities. This echoes a sentiment articulated by Robert Tarjan: “Everything connected will someday fall together.” The paper’s approach, leveraging Large Language Models to map evidence against the MITRE ATT&CK framework, isn’t about preventing eventual failure, but rather about understanding the pathways through which that failure will manifest. It’s a pragmatic acceptance that security isn’t a destination, but an ongoing process of illuminating the inevitable points of collapse within a complex, interconnected system. The agentic workflows merely hasten the discovery of these weak points, offering a fleeting moment of preparedness before the entire structure succumbs to entropy.
What’s Next?
The automation of post-incident analysis, as demonstrated, is not a destination but a shifting of the problem. The system doesn’t solve the gaps in policy; it merely reveals them with greater efficiency. Scalability is just the word used to justify complexity, and each added layer of automation introduces new potential failure modes, new surfaces for adversarial exploitation. The true limitation isn’t the Large Language Model itself, but the brittle nature of the codified knowledge it processes. Policy, by definition, lags behind threat actors; automating its audit simply accelerates the discovery of past inadequacies.
The promise of agentic workflows hints at a future where systems adapt to evolving threats, but this implies a constant renegotiation of trust. Traceable reasoning is a comfort, not a guarantee. Every optimization will someday lose flexibility, and the pursuit of complete coverage will inevitably obscure the most critical vulnerabilities.
The perfect architecture is a myth to keep analysts sane. The next step isn’t better algorithms, but a deeper understanding of the inherent limitations of formalizing security. It’s a move away from treating systems as tools to be built, and toward recognizing them as ecosystems that must be cultivated, nurtured, and accepted as fundamentally incomplete.
Original article: https://arxiv.org/pdf/2601.03287.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/