Blue Team

CT14 LLMs Are Your New Junior Engineer: Using AI Safely in Security and IT Ops

August 4th, 2026

1:15pm - 2:30pm

Level: Intermediate

Heather Wilde Renze

CTO | Angel Investor | Author

Everyone is rushing to “use AI,” but most teams don’t have the faintest idea how to do it safely. I’ve spent the past few years helping engineering and security groups build AI-driven workflows -- and I’ve seen everything from brilliant automation to “we accidentally gave the model production credentials.”

In this session, we’ll take a grounded look at where large language models actually help in IT and SecOps… and where they create new risks. You’ll learn how to treat an LLM like a junior engineer: helpful, fast, occasionally brilliant -- and absolutely not someone you hand the keys to.

We’ll walk through real examples of using AI for log analysis, policy drafting, user education, threat triage, documentation cleanup, and incident response prep. Then we’ll map the risks: data leakage, prompt injection, hallucinated config changes, and the false sense of confidence that gets people breached.

This session is designed to give you a practical, responsible, repeatable approach for using AI inside a Microsoft-centric environment -- without turning your crown jewels into training data.

You will learn:

  • Safe LLM usage patterns for IT and security operations, including what should and should not be delegated to AI.
  • Real-world workflows where AI reduces workload: log reviews, documentation, user comms, policy generation, and incident prep.
  • How to implement guardrails -- data boundaries, human review loops, and access control -- that keep AI productive without increasing risk.