I'm a systems engineer who builds autonomous AI systems.
My background is in operational technology and cybersecurity — domains where AI agents must be precise, auditable, and safe. Those constraints shaped how I think about agent design: every autonomous decision needs governance, every workflow needs structured oversight, and every system needs to degrade gracefully when the model gets it wrong.
I've built multi-agent platforms from scratch: agent harnesses with structured communication architectures, intelligent LLM routing layers that minimize cost while maximizing capability, governance dashboards that monitor agent decisions in real time, and autonomous research agents that operate over long time horizons with minimal human intervention.
I care deeply about making AI systems that are reliable, interpretable, and steerable — not just capable.
Starting May 2026, I'll be pursuing a PhD in AI/ML, focused on advancing the science of agentic AI systems.
The best AI systems are engineered with the same rigor as the physical systems they analyze. I approach agent design the way a systems engineer approaches a complex plant: understand the failure modes first, design for observability, build in human override at every level, and never trust a component you can't evaluate independently.
This isn't just a philosophy — it's how I've built every system in my portfolio.
Interested in working together? I'd love to hear from you.
jhon.arango@IntrepidCyberSecEng.com
(667) 355-4069