// Resources
Curated AI tools with IR-specific setup tips. Practitioner-tested, vendor-neutral.
No affiliate links. No sponsorships. Just what actually works.
// 01 — Large Language Models
General-purpose AI that IR, Detection, Threat Hunting, and TI teams have adapted for daily workflows.
Claude. Anthropic's model. Best-in-class for long-context analysis; it ingests an entire week of logs without truncation.
IR Tip
Use Projects to maintain persistent investigation context across sessions. With a 200K token window, you can dump full Defender or Splunk exports and get coherent analysis without chunking.
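Before you dump a large export, a quick character-count estimate tells you whether it will fit. A minimal sketch, assuming the rough 4-characters-per-token heuristic (real tokenizers vary by model, so leave headroom; `fits_in_window` is an illustrative helper name):

```python
def fits_in_window(path: str, window_tokens: int = 200_000,
                   chars_per_token: float = 4.0) -> bool:
    """Rough check: will this log export fit in a model's context window?

    Uses the ~4 chars/token heuristic; real tokenizers vary, so this is
    only a pre-flight estimate, not a guarantee.
    """
    with open(path, encoding="utf-8", errors="replace") as f:
        chars = sum(len(line) for line in f)
    estimated_tokens = chars / chars_per_token
    # Reserve ~10% of the window for your prompt and the model's reply.
    return estimated_tokens < window_tokens * 0.9
```

If the check fails, split by time range or host rather than chunking arbitrarily, so each piece stays a coherent investigation unit.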
ChatGPT. OpenAI's flagship model. Widest ecosystem, best plugin support, and Custom GPTs for specialized IR workflows.
IR Tip
Build a Custom GPT loaded with your team's runbooks and playbooks as knowledge files. Point analysts at it during live incidents so they stop hunting for documentation mid-response.
Perplexity. AI-native search engine with real-time web access. The fastest tool for OSINT during live incidents.
IR Tip
Use it for real-time actor lookups, CVE context, domain history, and paste-site hits during active incidents. Beats manually pivoting across 10 browser tabs, and it cites its sources.
Security Copilot. Microsoft's security-specific AI. Native integration with Sentinel, Defender XDR, Intune, and Entra.
IR Tip
If you're in a Microsoft-heavy shop, this is the most integrated option. Ask it to summarize Sentinel incidents in plain English, generate KQL on the fly, or explain Defender alerts, all without leaving your workflow.
Gemini. Google's AI model. Most valuable for teams on Google Workspace, GCP, or Chronicle SecOps.
IR Tip
Gemini in Google SecOps lets you query Chronicle logs in natural language, with no UDM query syntax to learn. If your org is GCP-native, this removes a significant barrier for L1 analysts during triage.
// 02 — Security-Native AI
Tools designed specifically for security workflows — not general-purpose models adapted to fit.
VirusTotal. Industry-standard for file, URL, IP, and domain analysis. AI-powered behavior summaries on submissions.
IR Tip
The AI behavior summary on file submissions saves 10–15 minutes of sandbox review per sample. Run everything through it before committing to manual analysis — the summary alone often tells you if it's worth the time.
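Lookups are also easy to script against VirusTotal's v3 REST API. A minimal sketch of a file-report request; the `malicious_count` helper and the hash/key values are illustrative, and a real run needs your own API key:

```python
import json
import urllib.request

VT_FILE_URL = "https://www.virustotal.com/api/v3/files/{}"

def build_vt_request(file_hash: str, api_key: str) -> urllib.request.Request:
    """GET request for a VirusTotal v3 file report; auth goes in the x-apikey header."""
    return urllib.request.Request(
        VT_FILE_URL.format(file_hash),
        headers={"x-apikey": api_key},
    )

def fetch_report(file_hash: str, api_key: str) -> dict:
    """Send the request and parse the JSON report."""
    with urllib.request.urlopen(build_vt_request(file_hash, api_key)) as resp:
        return json.loads(resp.read())

def malicious_count(report: dict) -> int:
    """Number of engines that flagged the sample, from last_analysis_stats."""
    return report["data"]["attributes"]["last_analysis_stats"].get("malicious", 0)
```

Batch this over a hash list from your EDR export and you get a quick triage ranking before anyone opens a sandbox.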
Fabric. Daniel Miessler's open-source AI pattern library for security and analysis workflows. CLI-first, composable.
IR Tip
The analyze_malware_input and create_sigma_rules patterns are immediately useful. Pipe log output directly in from the terminal — no GUI, no copy-paste. Best for analysts who live in the command line.
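The same pipe-friendly workflow wraps cleanly in a script. A minimal Python sketch, assuming the fabric CLI is installed and configured; `fabric_argv` and `run_pattern` are illustrative helper names:

```python
import subprocess

def fabric_argv(pattern: str) -> list[str]:
    """Command line for piping stdin through a Fabric pattern."""
    return ["fabric", "--pattern", pattern]

def run_pattern(argv: list[str], text: str) -> str:
    """Feed text to a command's stdin and return its stdout."""
    result = subprocess.run(
        argv, input=text, capture_output=True, text=True, check=True
    )
    return result.stdout

# Shell equivalent: cat auth.log | fabric --pattern create_sigma_rules
```

Usage: `run_pattern(fabric_argv("create_sigma_rules"), log_text)` drops straight into an existing triage script, which is the point of a CLI-first tool.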
// 03 — Local & Private
For logs, malware samples, and forensic artefacts you cannot send to the cloud. No data leaves your network.
Ollama. Run open-source LLMs locally with a single command. Supports Llama 3, Mistral, Gemma, and dozens more.
IR Tip
Non-negotiable for sensitive log analysis and malware samples you can't send to the cloud. Run Llama 3.3 70B for near-GPT-4 quality completely offline. Works on a forensic workstation with no internet access required.
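Ollama also serves a local REST API (default port 11434), which makes offline triage scriptable. A minimal sketch against the /api/generate endpoint; the model name and prompt are placeholders:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local port

def build_payload(prompt: str, model: str = "llama3.3:70b") -> bytes:
    """JSON body for Ollama's /api/generate endpoint, with streaming disabled."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask_local(prompt: str, model: str = "llama3.3:70b") -> str:
    """POST the prompt to the local Ollama server and return the completion text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(prompt, model),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because the endpoint is loopback-only by default, nothing in this path touches the network, which is exactly what you want on an air-gapped forensic box.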
LM Studio. Desktop GUI for running local LLMs. Easy model downloads, built-in chat, and a local API server.
IR Tip
The GUI makes local models accessible for analysts who aren't comfortable with the terminal. Spin up a local API server and point your team's scripts at it — same interface as OpenAI's API, zero external calls, full data sovereignty.
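Because the local server mirrors OpenAI's chat-completions shape, repointing an existing script is mostly a base-URL change. A minimal sketch assuming the default port 1234; the model field is a placeholder, since the server typically answers with whichever model is currently loaded (verify against your version):

```python
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # LM Studio's default local server port

def chat_body(system: str, user: str, model: str = "local-model") -> bytes:
    """OpenAI-style chat.completions request body."""
    return json.dumps({
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    }).encode()

def ask_lm_studio(system: str, user: str) -> str:
    """POST to the local OpenAI-compatible endpoint and return the reply text."""
    req = urllib.request.Request(
        BASE_URL + "/chat/completions",
        data=chat_body(system, user),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Team scripts written against the hosted API keep working; only the base URL (and the absent API key) change, so data sovereignty costs you one configuration line.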