Do AI Agents Have An Identity? Notes from InfoSec Discussions

Lupl attended the CISO NY conference in New York, one of the largest gatherings of senior security leaders. In this article, we share notes from discussions focused on how organizations should think about Agentic AI.
Agentic AI is the focus for many firms in 2025. We heard that the opportunity is big, and so is the risk. Autonomy saves time, but autonomy without limits can drift, overstep, or be hijacked. The sessions kept returning to the same points, especially for organizations handling sensitive data across fragmented systems.
Core Themes
- Agents are identities. Not scripts, not toys. They need scoped access, auditable actions, and revocable credentials. Treating them like users simplifies monitoring, containment, and evidence (a minimal sketch of this follows the list).
- Role-based access control (RBAC) is the foundation. Ambitions are high, but consistent role-based access is rare. Without it, agents over-collect by default. In law firms, where systems overlap, loose defaults can turn routine automation into discovery risks.
- Cautious enablement beats prohibition. Users will experiment. The safer path is sanctioned, scoped use instead of bans that push usage underground (leading to shadow AI). High-profile leaks caused by unfettered use of AI tools underline this point.
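To make "agents are identities" and role-based access concrete, here is a minimal sketch in Python. It assumes a hypothetical in-house registry where each agent is registered under a role with an explicit scope; the names (ROLE_SCOPES, AgentIdentity, authorize) are invented for illustration, not any particular product's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical role definitions: each role maps to the systems and actions
# an agent is allowed to touch. Anything not listed is denied by default.
ROLE_SCOPES = {
    "matter-summarizer": {"dms": {"read"}},
    "intake-assistant": {"pms": {"read", "write"}, "email": {"read"}},
}

@dataclass
class AgentIdentity:
    agent_id: str
    role: str
    revoked: bool = False
    audit_log: list = field(default_factory=list)

    def authorize(self, system: str, action: str) -> bool:
        """Check a requested action against the agent's role scope and record it."""
        allowed = (
            not self.revoked
            and action in ROLE_SCOPES.get(self.role, {}).get(system, set())
        )
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": self.agent_id,
            "system": system,
            "action": action,
            "allowed": allowed,
        })
        return allowed

# Usage: a summarization agent may read the DMS but not send anything by email.
agent = AgentIdentity(agent_id="agent-042", role="matter-summarizer")
assert agent.authorize("dms", "read") is True
assert agent.authorize("email", "send") is False   # denied, and logged as such
agent.revoked = True                                # instant revocation
assert agent.authorize("dms", "read") is False
```

The point is the default-deny posture: the agent only gets what its role grants, every request lands in the same audit trail as a human action, and revocation takes effect immediately.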
Critical Questions for Rolling Out Agents
These questions surfaced across multiple sessions. Clear answers to each are a strong signal that a rollout is on secure footing:
- Scope: What can the agent see, do, and change? Who sets those limits?
- Credentials: Are credentials short-lived and tied to roles, or long-lived keys that linger? (See the sketch after this list.)
- Boundaries: Which systems are out of scope, and how is this enforced?
- Auditability: Are agent actions logged with the same fidelity and in the same place as human actions?
- Kill switch: If things go wrong, what is the shutdown path, and who owns it by name?
- Egress: How do we prevent data exfiltration? Is it policy, content inspection, or wishful thinking?
- Third parties: When agents call external services, what is the failure posture, and how fast can connections be cut?
- Human checkpoints: Where do humans step in for high-consequence actions like critical approvals, sensitive data transfers, or external-facing outputs?
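Several of these questions, particularly credentials, auditability, and the kill switch, come down to mechanics that can be prototyped quickly. The sketch below is illustrative only: it assumes a hypothetical in-memory credential store, and the function names (issue_credential, is_valid, kill_switch) and the on-call address are invented for this example.

```python
import secrets
from datetime import datetime, timedelta, timezone

# Hypothetical credential store: short-lived tokens tied to a role,
# with a named owner for the shutdown path.
CREDENTIALS = {}
KILL_SWITCH_OWNER = "security-oncall@example.com"   # assumed contact for illustration

def issue_credential(agent_id: str, role: str, ttl_minutes: int = 15) -> str:
    """Issue a short-lived, role-scoped token instead of a long-lived key."""
    token = secrets.token_urlsafe(32)
    CREDENTIALS[token] = {
        "agent_id": agent_id,
        "role": role,
        "expires_at": datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
        "revoked": False,
    }
    return token

def is_valid(token: str) -> bool:
    """A credential is usable only if it exists, is unexpired, and is not revoked."""
    cred = CREDENTIALS.get(token)
    return (
        cred is not None
        and not cred["revoked"]
        and datetime.now(timezone.utc) < cred["expires_at"]
    )

def kill_switch(agent_id: str) -> int:
    """Shutdown path: revoke every outstanding credential for one agent."""
    revoked = 0
    for cred in CREDENTIALS.values():
        if cred["agent_id"] == agent_id and not cred["revoked"]:
            cred["revoked"] = True
            revoked += 1
    return revoked

# Usage: tokens expire on their own; the kill switch cuts access immediately.
token = issue_credential("agent-042", "matter-summarizer")
assert is_valid(token)
kill_switch("agent-042")
assert not is_valid(token)
```

Short-lived tokens mean a leaked credential ages out on its own, while the kill switch gives the named owner a single call that ends an agent's access everywhere at once.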
Why This Matters for Law Firm Security
Legal data is privileged, client-sensitive, and scattered across many systems, including DMS, PMS, email, and review platforms. Agents without defined roles may not just move faster; they may move faster in the wrong direction. Over-collection can taint matters. Broad scopes create attractive pivot points for attackers.
The right approach is to treat agents like associates with keys: strict access controls, badged access, activity logs, supervision, and instant revocation when necessary. This delivers the benefits while reducing the chance of a damaging incident.
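One practical guardrail against over-collection is an explicit allow-list of in-scope repositories per matter, with everything else denied and logged. A minimal sketch, assuming hypothetical repository identifiers and helper names (MATTER_SCOPE, in_scope, fetch_for_agent) invented for illustration:

```python
# Hypothetical per-matter scope: only the listed repositories are in bounds.
MATTER_SCOPE = {
    "matter-1234": {"dms://clients/acme", "pms://matters/1234"},
}

def in_scope(matter_id: str, resource: str) -> bool:
    """Default-deny: an agent may touch a resource only if it is explicitly listed."""
    allowed = MATTER_SCOPE.get(matter_id, set())
    return any(resource.startswith(prefix) for prefix in allowed)

def fetch_for_agent(matter_id: str, resource: str) -> str:
    if not in_scope(matter_id, resource):
        # Blocked requests should be logged and surfaced for review,
        # not silently retried against a broader scope.
        raise PermissionError(f"{resource} is out of scope for {matter_id}")
    return f"contents of {resource}"   # placeholder for the real fetch

# Usage: the agent can read the Acme folder but not the firm-wide email archive.
print(fetch_for_agent("matter-1234", "dms://clients/acme/brief.docx"))
try:
    fetch_for_agent("matter-1234", "email://all-firm/archive")
except PermissionError as exc:
    print(exc)
```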
The Path Ahead
Agentic AI is still in its early stages, with models evolving and advances arriving weekly or monthly. It can help organizations do more with less, but the long-term view matters. The win comes when identity and access are set first: clear scopes, real RBAC, short-lived rotating credentials, and a reliable shutdown path. Focus on foundational practices and assign identity in ways that let systems grow with the technology while keeping risk in check. That throughline was consistent across the practitioners at the conference.
If there is one line to take back to your team, it is this: The option is not to wait and see. Treat agents as identities, enforce RBAC, and build systems that adapt as models evolve.