We’ve built AI agents, but a lack of transparency into what they are doing, governance over their changes, and confidence in their results is preventing users from deploying them. In a word: trust.
TL;DR
What if you could already deliver transparency, confidence, and trust in building agents today? How many more deployments could be achieved? How much greater use and ROI could you deliver? The good news is that you can. We are delivering this today, even in these early days of Agentforce deployments, and have achieved consistent results. The reason we can do this is that we have been in the business of delivering transparency, confidence, and trust for almost 30 years. The difference today is that we have extended the paradigm from directing “people, process, and technology” to directing “people + digital labor, process, and technology”. The approach works really well for agents. The desired outcome is still the same: trust.
Trust is everything
Back in 2000, we were told that regulated industries would never trust their data to the cloud. 25 years later, Veeva (pharma) and nCino (banking) are two of Salesforce’s largest industry ISVs, trailblazing the move to the cloud. Back in 2009, I wrote three books to help organizations make the right decisions when moving to the cloud. One was specifically about ISVs building Salesforce apps. Every book had a section covering risk and governance. I see a lot of parallels with AI Agents.
A recent conversation with a senior executive at HSBC was an eye-opener. They are bullish about AI’s ability to reinvent their industry. Just take a look at the AI page on their website. They see AI as a way to provide better and new offerings to their customers.
Regulated industries accept that complying with multiple regulatory regimes is the cost of doing business. When I think about regulated industries, the ones that are top of mind are food & pharma (FDA), financial services (FSA and FCA), and oil & gas and construction (HSE). And every major organization is subject to common regulatory standards, including quality (ISO 9001), InfoSec (ISO 27001, SOC 2), privacy (GDPR/CCPA), and Sarbanes-Oxley (SOX).
Interestingly, I believe that regulated industries are the ones best positioned to exploit AI. They already have the core disciplines to make AI work: well-documented processes, strong data governance and quality, and documented systems.
Process mapping and UPN
Back in the 1990s, most regulated organizations had SOPs and operational processes captured in textual documents that were rarely read. My first job as an intern was going around each department and updating pages in all the ISO 9000 manuals, e.g. pages 7, 57, 163, and 220. Boring, especially when you knew that no one would ever read the manuals!
When PCs became popular in every department, those paper files were replaced by process diagrams. This transformed people’s ability to understand what they needed to do and increased compliance. The process mapping software category was launched, and Nimbus Partners was the market leader in the Gartner Magic Quadrant. The company was founded in 1997 by the Elements.cloud founders. They created the UPN standard and process mapping software that was used by thousands of customers, including 10% of the Fortune 500.
Organizations that adopted UPN process diagrams as a way of documenting and getting adoption for their regulated processes saw clear benefits. Below are just a few examples where process maps not only satisfied their regulatory obligations and avoided fines but also delivered business benefits:
- Investment bank: “If we had had our processes up-to-date when we started the restructure, we could have executed it in half of the time that we did.”
- Food manufacturer: “It is a way of creating a step change in our ways of working as we look for new and innovative ways of staying ahead.”
- Construction: “It provided demonstrable corporate governance, improving business performance and pinpointing shortcomings costing more than $25m”
- Food and drink manufacturer: “End-to-end process thinking to break functional silos is absolutely critical for our company to deliver our business model”
- Retail bank: “Our cost per transaction has reduced from €16 to €3, and we have exceeded our target for automatic processing.”
- Construction engineering: “We’ve identified process improvement savings of $198,000 – per day.”
- Medical equipment manufacturer: “We achieved an 80% reduction in textual documents and better focus of activities with the added bonus of a 50% reduction in time spent training new recruits.”
Building trusted AI Agents
AI agents are taking over tasks performed by humans, so they need to be designed, communicated, and regulated in the same way as humans. But we seem to be repeating the mistakes of the early 1990s. I am seeing many (most?) agents being built in a way that makes it impossible for senior management to have any confidence in the results, even when the agent works. Therefore, they are not willing to sign them off.
The problem is that there is no “design documentation” that can be used to engage stakeholders. Instead, instructions are being written directly into the agent alongside the guardrails and actions. During testing, more instructions keep being added as issues arise. The agent instructions and actions become a huge textual file. This makes it impossible for anyone but the creators to understand what the agent is doing. Using an agent to create the agent makes the problem worse. There is even less visibility into how the agent is operating.
I recently wrote about my experience of using Waymo self-driving cars and how it relates to AI Agents. It is amazing to see them working, but we are comfortable because we know how a car works; we can see the steering wheel moving, and we can look out of the window to check on the route. If the Waymo had no windows and locked the doors, only unlocking them when you got to the destination, you would be far less likely to use it. That is how current agents feel: a black box.
This trust issue may be a short-term (12-18 month) problem until we all have confidence in agents. But at the moment, it is a huge blocker. It is preventing the widespread adoption of agents.
We are seeing only 10% of the agents that have been built actually deployed into production. And those that are live are relatively simple use cases.
Process diagrams are design documentation
We need to go back to 1997 for the answer. If we want to ensure that agents are given clear, explicit instructions and everyone who is involved can understand them, this is best done with a detailed UPN process diagram. It is a critical design document that actually accelerates deployment, rather than slowing it down.
Salesforce has just launched a Trailhead badge – “Agent Planning – Outline the agent’s work” – which advocates process mapping using UPN as the design documentation and it links to the Elements.cloud training videos on how to map the agent JTBD (Job To Be Done).
The initial reaction is: “agents are so quick to build, why waste time drawing a diagram?” But rush straight into building the agent, and you will burn days or weeks trying to get it to perform reliably. During testing, you keep adding more and more instructions until the agent becomes overwhelmed. And then you still need to get senior management to sign off on the agent.
This process-led design approach accelerates the journey from idea to deployment. We can now build reliable, complex agents in just a few days. It seems so obvious, but it works. We’re seeing this out in the field. Salesforce Professional Services presented their story at TDX. They had been struggling for several weeks to deliver an internal FAQ agent. Using a process-led approach, they delivered it in just one day.
There are so many benefits of using a process diagram as the “regulated agent design”:
- ideate: engage business users & agree on the scope of the agent in live workshops
- architect: agree the structure of topics, and then actions vs. instructions vs. guardrails
- build: the diagram can auto-generate unambiguous instructions that are far tighter with no conflicts
- test: the diagram can auto-generate test scenarios and make it easier to debug instructions during testing
- signoff: makes it easier to explain behavior and get permission to deploy
- govern: demonstrate compliance with versioned diagrams
- improve: as you monitor the agent performance, the diagram can highlight areas to improve performance
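As an illustration of the "build" and "test" steps above, here is a minimal sketch of how a structured process model could be turned into agent instructions and regression-test utterances. The data structures and function names are hypothetical, invented for this example; they are not the Elements.cloud or Agentforce API, just a way to show why a structured diagram generates tighter instructions than free-form text.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessStep:
    """One UPN-style activity box: what is done, by which topic, under which constraints."""
    action: str                                          # verb + noun, e.g. "Verify customer identity"
    topic: str                                           # agent topic responsible for the step
    guardrails: list = field(default_factory=list)       # constraints attached to the step
    test_utterances: list = field(default_factory=list)  # example user inputs that exercise the step

def generate_instructions(steps):
    """Flatten the ordered steps into numbered, unambiguous agent instructions."""
    lines = []
    for i, step in enumerate(steps, start=1):
        lines.append(f"{i}. {step.action} (topic: {step.topic}).")
        for rule in step.guardrails:
            lines.append(f"   - Guardrail: {rule}")
    return "\n".join(lines)

def generate_test_plan(steps):
    """Collect the utterances attached to each step into a regression suite."""
    return [(step.action, u) for step in steps for u in step.test_utterances]

steps = [
    ProcessStep("Verify customer identity", "Identity",
                guardrails=["Never reveal account data before verification"],
                test_utterances=["What is my balance?"]),
    ProcessStep("Answer FAQ from knowledge base", "FAQ",
                test_utterances=["How do I reset my password?"]),
]

print(generate_instructions(steps))
print(generate_test_plan(steps))
```

Because the instructions and the test plan are both derived from the same model, a change to one step regenerates both, which is the property that makes sign-off and regression testing tractable.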
Building agents that regulators love
We’ve seen that the fastest way to build agents is to create a UPN process diagram that sets out the detailed processes that they need to follow, the backend workflows that they can use, and the data they can access. This approach means that the agent’s performance is more reliable because you know that the instructions are consistent and complete.
The good news is that AI agents are easy to monitor and track. Every interaction between an AI agent and the customer/user is recorded and can be analyzed. If you’ve taken a process-led approach, you have the exact version of the process, instructions, and actions that were used for each interaction. This supports the regulatory governance framework that is required for AI agents.
If there are regulatory issues and you need to audit a particular interaction, you can see exactly why it played out the way it did. And if changes are needed, they can be made very quickly by changing the process, the data (knowledge, policies, customer data), or the prompts/workflows that the AI agent is using.
When regulators pick up non-conformances, changes can be made to the process diagram. This generates the new agent and the regression test utterances to check that the changes do not impact other aspects of the agent’s skills. The new agent can then be deployed.
The diagram helps you consider the detailed flows, handoffs, and fault paths in your instructions so you create a compliant agent. If there is no process diagram to help you, you will burn up weeks trying to rewrite instructions to get consistent results and get the agent to comply. It is time-consuming, wasteful, and frustrating. You are always at risk of non-compliance. The process-led approach eliminates these issues.
When the changes are deployed, every user conversation will immediately use the new AI agent – consistently – unlike a large human workforce, where everyone needs to be briefed on the new process or knowledge and there are inconsistencies in how individuals interpret and apply the changes. The AI agent, deployed at scale, will instantly use the most recent information across all interactions.
Taking this process-led approach, we can build reliable agents with confidence.
Agents do not change everything
We can use past best practices. Agents are digital labor. There are very strong parallels with human labor. Agents are not magic. They are not black boxes that are impossible to govern. Over the last 30+ years, we’ve seen organizations use a governed, process-led approach to ensure that their human workforce can comply with complex regulations. Why would that be any different for our digital workforce?
How to take a process-led approach
Here is a detailed Solution Guide that steps you through the approach to building agents.