AI & IAM: Focus on Fundamentals

On the need to address fundamental issues when integrating new technologies.

A recent article, The Future of Cybersecurity Includes Non-Human Employees1, discussed the growing need to manage access granted to the rapidly expanding number of AI agents being deployed in companies. This is a deeply important topic, and particularly timely, as many are facing this challenge today. While I do want to address that topic, I also want to examine how it’s being framed.

Aside from leaning into an inflammatory tone with the use of “Non-Human Employees” in the title of the piece, there’s a deeper issue I see with how this is being framed, and it’s also related to AI. Importantly, this is far from unique to this article; it’s a trend across discussions of AI (and other new & emerging technologies). For the pragmatic security practitioner, a clear understanding of this framing device, and the danger of accepting it unchallenged, is of particular import.

Let’s take a closer look.

Reframing for AI #

“Everything old is new again,” a phrase first uttered by Jonathan Swift, or Winston Churchill, or maybe it was Mark Twain2 – whoever it was, it’s a simple phrase that’s as apt as ever. With each substantial new technology, every existing challenge, every solved problem, every known risk gets reframed and repackaged and relabelled. For marketing teams, this is a fantastic opportunity to stake a claim on new territory, to assert leadership in solving some critical issue, to announce their solutions for these “new” problems.

AI is here, and everything is new again.

Non-human employees are becoming the future of cybersecurity, and enterprises need to prepare accordingly. As organizations scale Artificial Intelligence (AI) and cloud automation, there is exponential growth in Non-Human Identities (NHIs), including bots, AI agents, service accounts and automation scripts. In fact, 51% of respondents in ConductorOne’s 2025 Future of Identity Security Report said the security of NHIs is now just as important as that of human accounts. - The Hacker News

Here, in these first two sentences, my main objections to the framing are already clear.

Many of the challenges and risks aren’t actually new at all; they are the same ones that we in the security industry have been addressing for years. It’s new names for an old problem. They call them “Non-Human Identities” – for the last few decades, these were called service accounts. A new name for an old and well-understood thing.

The danger of this framing is that it seeks to leverage new names, and the association with a new technology, to separate the problem from the well-established and well-understood approaches that have been refined over years of diligent work and careful analysis. This is great when you can sell something that addresses the problem. It’s the only thing this reframing is good for.

In reality, service accounts for AI are effectively the same as service accounts for any other form of automation3. As such, the mitigations for service accounts apply to agentic AI, and should be applied. This includes all of the controls that should always be in place, such as:

  • Least Privilege: As always, accounts should have the tightest possible set of privileges to complete the task, no more.
  • Separation of Concerns: Credentials and other secrets should be used for a single well-defined purpose, and nothing else. They should not be re-used for other processes or systems.
  • Credential Lifetime: Credentials should have the shortest possible lifetime, and be easily rotated.
  • Actionable Logging & Alerts: All automation should have clear logs, and alerts for unusual activity.
  • IP & Location Restrictions: Credentials used for automation should only be permitted from within a known environment; any use outside of this controlled environment should trigger the account being disabled and an immediate alert.
  • Leak Detection: Logs and other automation artefacts should be automatically checked for secrets, authentication tokens, and other sensitive material.
  • Clear Ownership: The automated processes, data, and credentials should have clearly documented owners, with defined responsibilities – especially should issues arise.
  • Human Validation: When taking substantial actions – actions that can’t be reverted, or that otherwise have significant impact – a human should be in the loop to validate that the action is intended and the results are desired.

These apply to agentic AI just as much as they apply to any other service account. In other words: the key is to focus on the fundamentals. Regardless of the underlying technology, meaningful security starts with the fundamentals; only then should attention move on to the unique challenges.
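
To make a few of these controls concrete – least privilege, short credential lifetimes, and separation of concerns – here’s a minimal sketch of issuing a short-lived, tightly scoped credential for a single automated task. It assumes an AWS environment with boto3; the role ARN, bucket, and permitted action are purely illustrative.

```python
import json
import boto3

def issue_agent_credentials(role_arn: str, task: str) -> dict:
    """Issue short-lived, tightly scoped credentials for one automation task."""
    # Session policy: the effective permissions are the intersection of this
    # policy and the role's own policy -- least privilege, per task.
    session_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],                     # only what this task needs
            "Resource": ["arn:aws:s3:::reports-bucket/*"],  # illustrative resource
        }],
    }

    sts = boto3.client("sts")
    response = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName=f"agent-{task}",  # named per task, so logs show clear ownership
        DurationSeconds=900,              # shortest lifetime STS allows (15 minutes)
        Policy=json.dumps(session_policy),
    )
    # AccessKeyId, SecretAccessKey, SessionToken, and an Expiration timestamp
    return response["Credentials"]
```

Pair this with a condition such as aws:SourceIp on the role’s policies and the IP & location restriction is covered as well – the credential is short-lived and unusable outside the controlled environment.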

Non-Deterministic Automation #

Agentic AI and other forms of LLM-based automation are fundamentally similar to other forms of automation, though there is a substantial point that is critical to understand: it’s non-deterministic. If I automate a process using Python, the process is fixed, and the behaviour is deterministic – which is to say, it will always do the same thing4. With AI, it will, by its very nature, do something different each time it’s used5.
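
As a toy illustration of the difference – with random.choices standing in for token sampling, and no actual model involved – the fixed step below always produces the same action, while the sampled step usually does the expected thing but will occasionally do something else entirely:

```python
import random

def deterministic_step(filename: str) -> str:
    # Fixed logic: the same input always produces the same action.
    return f"archive {filename}"

def llm_like_step(filename: str) -> str:
    # Stand-in for token sampling: the action is drawn from a distribution,
    # so identical input can yield a different action on a different run.
    actions = ["archive", "delete", "email", "upload"]
    weights = [0.90, 0.05, 0.03, 0.02]
    return f"{random.choices(actions, weights=weights)[0]} {filename}"
```

Most runs behave; the occasional run does not, and that is exactly the property the rest of this discussion has to account for.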

It is this factor that presents the first meaningful difference between traditional automation and LLM-based automation. Because the automation can’t be assumed to produce a consistent result, and may misbehave in truly unpredictable ways, it introduces some new risks that need to be addressed:

  • AI may attempt to perform actions beyond the intended scope, making the limitation of permissions particularly important. Implementing the principle of least privilege and maintaining strict separation of concerns are critical to avoid unintended activities.
  • AI may send secrets to search engines or other third parties, making it important to restrict traffic and build deeper monitoring & leak detection6 into the process. This results in short credential lifetimes and IP & location restrictions being more important than normal7. It also shows the value of integrating proxies for web traffic, and wrappers & frameworks for code, to provide additional controls directly in the automation.
  • AI is subject to prompt injection, intentional or otherwise, which can cause radical departures from the intended behaviour, and thus any credentials or access the automation has can be abused in ways that would entirely defy expectations. This results in all input being potentially dangerous, and thus putting secrets at risk. This furthers the need to take the most restrictive and cautious approach possible.
  • AI agents can take unexpected paths to achieve results; as such, it’s critical that the permissions granted don’t include the ability to mint new tokens, assign permissions, or assume other roles. It would not be surprising to see an AI agent attempt this to circumvent an access control that resulted in an error message.
  • et cetera. This list could grow, though there is a clear pattern, and it’s that pattern that matters.

There are a number of unique risks that come with giving agentic AI credentials and access to systems, though all of them are addressed by existing high-level controls. Yes, the details of those controls, and the best approach to effectuate them, may differ – but the underlying controls are the same.
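
As one concrete example of that pattern, here’s a minimal sketch of leak detection applied to an agent’s output before it leaves the controlled environment. The patterns and helper names are illustrative only; a real deployment would lean on a dedicated secret-scanning tool with a far larger rule set.

```python
import re

# Illustrative patterns only -- real secret scanners ship far larger rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key IDs
    re.compile(r"(?i)bearer\s+[a-z0-9\-_.=]{20,}"),     # bearer tokens
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private keys
]

def scan_output(text: str) -> list[str]:
    """Return anything in the agent's output that looks like a secret."""
    return [m.group(0) for p in SECRET_PATTERNS for m in p.finditer(text)]

def release_output(text: str) -> str:
    """Withhold agent output that appears to contain credentials."""
    findings = scan_output(text)
    if findings:
        # In practice: alert, disable the credential, and page the owner.
        raise RuntimeError(f"Possible secret leak ({len(findings)} finding(s)); output withheld")
    return text
```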

It’s this change that matters: a secret that was used for a defined intent, bound to a deterministic process, is now unbound from intent, and can be used in response to unexpected error messages, poisoned prompts, unanticipated data, or simply hallucinated steps in the process. Of course, unbinding a secret from intent can also shift the impact to entirely different parts of the system or different data. This is where the practitioner needs to change how they reason about these risks and how they apply the controls to mitigate them.
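
One practical way to re-bind a secret to intent at the point of use is to let the automation request only a small set of pre-declared actions, with a human in the loop for anything irreversible. A minimal sketch – the action names and approval mechanism are purely illustrative:

```python
# Actions this credential is declared for, and whether each needs human sign-off.
# Action names and the approval mechanism are illustrative only.
ALLOWED_ACTIONS = {
    "read_report":    {"reversible": True},
    "publish_report": {"reversible": False},  # irreversible: requires a human
}

def require_approval(action: str) -> bool:
    """Placeholder for a real approval workflow (ticket, chat prompt, etc.)."""
    return input(f"Approve irreversible action '{action}'? [y/N] ").strip().lower() == "y"

def execute(action: str, handler) -> None:
    spec = ALLOWED_ACTIONS.get(action)
    if spec is None:
        # The agent asked for something outside the declared intent: refuse and alert.
        raise PermissionError(f"Action '{action}' is outside the declared intent for this credential")
    if not spec["reversible"] and not require_approval(action):
        raise PermissionError(f"Action '{action}' was not approved by a human")
    handler()
```

Anything the model improvises – a retry against a different system, a request for a new token, an unexpected delete – simply isn’t in the table, and fails closed.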

Fundamentals Always Come First #

If you are facing the question of how to secure access for agentic AI8, focus on fundamentals first. Follow the best practices for any service account, and ensure that those controls are in place, solid, and validated. Once that’s done, then start looking at the unique risks.

If you focus excessively on what’s different about AI-based automation, you risk missing the basic controls that will address most of the unique issues – and the issues that impact your other service accounts.

As is always the case in security, follow the fundamentals.


  1. The Hacker News, January 7, 2026. The article does share insight, though it ends as a sales pitch for a Keeper product. ↩︎

  2. When in doubt, it’s common to attribute most any quote to Mark Twain, as he was an incredible source of fantastic quotes, though his collection of apocryphal quotes is far larger and far more impressive. In this case, the true source of this statement is far from clear, though Jonathan Swift is likely the best fit for the oldest variant of it. Or, you can just go with Stephen King’s “sooner or later, everything old is new again” line from The Colorado Kid. ↩︎

  3. The article uses the term “Non-Human Employees”, though I reject this framing entirely. While I have written exploring the further-future issue of artificial life and intelligence, we aren’t remotely close to this. As noted in the linked article, from what I believe to be a reasonably educated position on the topic, I don’t anticipate seeing this in my lifetime. As such, these are not, by any accurate definition, employees – this is automation. ↩︎

  4. This is a slight oversimplification, though fundamentally accurate, in that the instructions executed will be the same; depending on how the code is written, it may still result in somewhat different outcomes. For the sake of this comparison, executing a fixed and pre-defined set of instructions is sufficient to call this type of automation deterministic. ↩︎

  5. As above, this is a slight oversimplification, though to a lesser extent. LLMs are, by definition, statistical models and produce inconsistent results, even when given identical prompts. While the level of entropy involved can be tuned, reducing the variance in results, the fact remains that the models are statistical, and it’s not possible to ensure consistent results. ↩︎

  6. Leak detection for credentials and API keys is particularly important for all cases, though there is one that is very much unique to LLMs: secrets have been known to find their way into LLM training materials, resulting in others gaining access to API keys that may still be valid. This is just one of the various issues that can occur if prompts are used for training. ↩︎

  7. There is also the matter of PII and other sensitive data that can be leaked, but for the sake of this article, I will maintain a focus on IAM and associated secrets management. Broader questions around AI safety are best addressed separately. ↩︎

  8. The question of if you should use agentic AI is left as an exercise for the reader. ↩︎
