Cutting false positives by 35% without breaking detections
A walk-through of the IOC-validation framework I built for a tier-1 SOC — and the trade-offs you'll want to think about before you copy it.
Field notes from a security consultant with five years in the SOC — incident response, threat hunting, detection engineering, and the slow craft of separating real threats from noise.
I'm a Senior Security Consultant with five years leading SOC and incident response operations across enterprise environments at Deloitte and BDO Canada. My day-to-day lives at the intersection of alert triage, log analysis, threat detection, and incident containment — the work of separating real adversaries from background noise.
I specialize in analyzing technical content and behavioral signals to detect and classify phishing, malware, and exploit attempts using Azure Sentinel, CrowdStrike, Splunk, QRadar, and LogRhythm. I've spent that time getting good at proactive threat hunting, playbook development, and the kind of automation that makes detection sharper instead of louder.
Not a list of every product I've touched — these are the ones I reach for first, the ones I've broken and rebuilt enough to trust under pressure.
A short trace of the places I've spent time, from L1 monitoring at a Canadian bank to leading IR at a Big Four firm.
Led incident response across MDE and CrowdStrike for enterprise clients, driving rapid containment of active threats. Investigated security incidents using Azure Sentinel and KQL, uncovered indicators of compromise, and ran proactive threat hunts in client environments. Built a tool to validate threat-intel feed IOCs that cut false-positive alerts by 35% and meaningfully sharpened detection accuracy.
Conducted cybersecurity incident investigations using Jira and Demisto — analyzed logs and alerts, identified root causes, and reduced average resolution time by streamlining the response workflow. Ran historical and incident-specific searches across ArcSight and Sentinel, automated daily report summaries with Python, and maintained the playbooks the team used in the field.
Monitored the SOC using ArcSight and RSA, escalating events to reduce breach exposure. Gained early hands-on experience with FireEye, Carbon Black, BlueCoat, and Proofpoint, and investigated alerts with Autopsy and Volatility to pinpoint root causes and recommend remediation.
Eight years of hands-on diagnostics, ticketing-system triage, and root-cause analysis. The work that quietly built the troubleshooting reflex I rely on every day in the SOC.
Built an IOC-validation tool that cross-checked threat-intel feeds against context, cutting false-positive alerts by 35% and freeing the SOC to focus on real signal.
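The core idea is simple to sketch: before a feed IOC is allowed to generate an alert, check it against local context — allowlisted infrastructure, internal address space, and how widely the indicator already appears across the fleet. The snippet below is a minimal illustration of that filtering pattern, not the production tool; every name, threshold, and example value in it is hypothetical.

```python
import ipaddress

# Hypothetical context — in practice these would come from a CMDB,
# allowlist service, and fleet telemetry, not hard-coded values.
ALLOWLIST = {"update.example-vendor.com"}
INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8")]
PREVALENCE_CEILING = 500  # seen on this many hosts => shared infra, not a threat

def is_actionable(ioc: str, seen_count: int = 0) -> bool:
    """Return True only if a feed IOC is worth alerting on in this environment.

    seen_count: how many hosts the indicator already appears on; indicators
    present across most of the fleet are almost never real compromises.
    """
    # Known-good infrastructure from the feed is pure alert noise.
    if ioc in ALLOWLIST:
        return False
    try:
        ip = ipaddress.ip_address(ioc)
        # Private or internal addresses in a public threat feed are feed errors.
        if ip.is_private or any(ip in net for net in INTERNAL_NETS):
            return False
    except ValueError:
        pass  # not an IP literal; treat as a domain or hash
    # Prevalence check: drop indicators that are everywhere already.
    if seen_count > PREVALENCE_CEILING:
        return False
    return True
```

Each rejected indicator is one alert a tier-1 analyst never has to triage, which is where the false-positive reduction comes from — the trade-off being that every filter above is also a place a real detection can silently die, so the rejection reasons need to be logged and reviewed.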
Finalist in the Canadian Collegiate Cyber Exercise (2019). Defended an Active Directory environment as part of a blue team in a high-pressure live-fire exercise.
Five years of continuous incident response and threat hunting at Deloitte and BDO — the kind of repetition that turns frameworks into instincts.
Honours Bachelor of Technology, Informatics & Security, from Seneca Polytechnic. Active in Defcon416, OWASP, TASK, and SecTor — still attending.
Short essays on detection, response, and the work of defending real environments.
A walk-through of the IOC-validation framework I built for a tier-1 SOC — and the trade-offs you'll want to think about before you copy it.
One Sentinel query, three joins, and the reason most "suspicious logon" rules fail before they fire.
If your IR playbook only works when you're awake, it doesn't work. Notes on writing for the worst version of yourself.
Why headers and hashes only get you halfway, and what behavioral context closes the gap.