Designing Trusted AI Agents with Context Engineering

6 min read · 13th February 2026

Why is it that your existing employees initially outperform the new rockstar you've just hired? And why is there a period of onboarding before a new hire gets up to speed? Institutional knowledge. The new rockstar knows how to do the job. That's why you hired them. But they need time to understand the company culture, processes, approaches, applications, their team, and the customers and partners.

In the AI world, this institutional knowledge is called "context". AI Agents are the new rockstar employees. You can onboard them in minutes, not months. And the more context you can provide them with, the better they can perform.

So when you hear reports that AI Agents perform better when they have accurate data, think more broadly than customer data. The data that AI needs to do the job effectively also includes the data that describes the institutional knowledge: context.

Understanding context

Let's look at the different types of context, their sources, and whether they are structured or unstructured, which determines how they are presented to the AI Agent.

| Content | Source | Structured / Unstructured |
| --- | --- | --- |
| Company culture | Annual reports, marketing brand guidelines, new employee handbook | Unstructured |
| Business operations | UPN process diagrams | Unstructured |
| App configuration | Metadata & dependencies | Structured |
| Data | CRM, ERP apps | Structured |
| Team | Org chart, job descriptions | Unstructured |

You keep hearing a lot about models having large context windows. Claude has a 1 million token context window, and ChatGPT 5.2 has a 400k token window – although that covers input and output, not just input. Even so, it is not enough to simply dump in everything about the company. To put it in perspective, in a Salesforce Org, 20 Apex classes of relatively high complexity come to over 250k tokens. So we need to be selective and provide the context for the role the AI Agent is delivering: context engineering.

Context engineering

As you can see from the table above, much of this information is unstructured. Your employees are good at interpreting it and filling in the gaps, using their judgment and applying institutional knowledge. AI Agents can now parse unstructured data, but they are not as good at applying judgment when there are conflicts, nuances, ambiguity, or omissions. This is why we get hallucinations. So the context you provide needs to be "complete" and AI-readable. But it also needs to be specific to the role of the AI Agent, so the context window is not overwhelmed.

The way to do this is to consider the end-to-end process that the AI Agent is performing and use that to scope the context. That means parsing the different applications that store the context and pulling just the right level of information, as in the sketch below.
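To make that concrete, here is a minimal, hypothetical sketch of role-scoped context selection under a token budget. The source names, the role-to-source mapping, the budget, and the four-characters-per-token estimate are all assumptions for illustration; this is not an Agentforce or Data360 API.

```python
# Illustrative sketch: select only the context a given agent role needs,
# and trim it to a token budget. All names and numbers are hypothetical.

TOKENS_PER_CHAR = 1 / 4          # rough heuristic: ~4 characters per token
CONTEXT_BUDGET_TOKENS = 120_000  # leave headroom for the prompt and output

# Hypothetical registry: which context sources matter for which agent role,
# ordered by priority (most important first).
ROLE_CONTEXT = {
    "order_management_agent": [
        "upn_process_diagrams/order_to_cash",
        "app_configuration/order_objects_and_flows",
        "crm_data/open_orders",
        "employee_handbook/returns_policy",
    ],
}

def estimate_tokens(text: str) -> int:
    """Crude token estimate; a real system would use the model's tokenizer."""
    return int(len(text) * TOKENS_PER_CHAR)

def build_context(role: str, fetch_source) -> str:
    """Assemble context for one role, highest priority first, within budget.

    `fetch_source` is a caller-supplied function that returns the text of a
    named source (e.g. a parsed process diagram or a metadata summary).
    """
    remaining = CONTEXT_BUDGET_TOKENS
    chunks = []
    for source in ROLE_CONTEXT.get(role, []):
        text = fetch_source(source)
        cost = estimate_tokens(text)
        if cost > remaining:
            continue  # skip sources that would blow the budget
        chunks.append(f"## {source}\n{text}")
        remaining -= cost
    return "\n\n".join(chunks)
```

The point is the shape of the decision: context is chosen per role and per process, in priority order, rather than dumped wholesale into the window.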
Let's look at the table again, but add a column showing where the context is found and the Salesforce application that can provide it to the Agentforce AI Agent.

| Content | Source | Structured / Unstructured | Example sources & AI storage |
| --- | --- | --- | --- |
| Company culture | Annual reports, marketing brand guidelines, new employee handbook | Unstructured | Google Drive > Data360 |
| Business operations / process | UPN process diagrams | Unstructured | Elements / Lucid > Data360 |
| App configuration | Metadata & dependencies | Structured | Elements > Informatica |
| Data | CRM, ERP apps | Structured | Enterprise apps > Data360 |
| Team | Org chart, job descriptions | Unstructured | Google Drive & HR app > Data360 |

This makes sense of the investments that Salesforce has been making in Data360, Informatica, and the smaller tuck-in acquisitions. It is all about providing trusted context at scale.

Context in context

As we've said, providing the correct context to the AI Agent at the right level of detail means parsing these data sources with a clear understanding of the end-to-end process the Agent is trying to perform. That is a combination of the documented business process and the application configuration, which is encoded in the metadata and dependencies. And this is not just about whether one piece of metadata uses another, but why and how. The process maps provide visibility into the manual activities between and within the applications.

The accuracy and completeness of documented process diagrams vary wildly. Front office processes tend to be documented very poorly; back office processes in regulated industries are normally documented very well. To exploit the power of AI Agents, organizations need to streamline and optimize their business processes. This has sparked a process reengineering revolution that mirrors the 1990s. This time around, the level of detail required by AI Agents is higher than what humans need.

An understanding of the app configuration is available through the metadata and dependencies, but it is often obscured by high levels of technical debt, and it requires sophisticated analysis if it is to be complete and trusted. AI Agents are not yet capable of taking all the metadata and making sense of it; there is simply too much data. The practical approach is an agentic workflow of chained, surgical tasks that run the analysis, as sketched at the end of this section. Applications like Elements.cloud do this to create a robust analysis of a Salesforce Org and provide the correct context.

Process engineering is driving changes to the Salesforce Org at a pace never seen before. Setup AI Agents can help accelerate those Org changes, but they introduce risk. So the same metadata analysis that customer-facing AI Agents need to deliver business processes can also support the Setup AI Agents as they make Org changes.

One way to think of Elements is as a decision engine for enterprise application logic. It reconstructs business intent from Salesforce configuration and uses that understanding to automate architectural judgment: change planning, impact analysis, reuse recommendations, tech-debt and security assessment, and guided redesign of larger capabilities.
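As an illustration of what "chained surgical tasks" could look like in principle, here is a minimal sketch of a metadata-analysis pipeline. It is not how Elements.cloud is implemented; every step name and data shape here is a hypothetical assumption.

```python
# Illustrative sketch of an agentic workflow built from small, chained tasks.
# Each step does one narrow job on a slice of metadata instead of asking a
# model to reason over the whole Org at once. All names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class AnalysisState:
    raw_metadata: list[dict]                 # e.g. flows, Apex classes, fields
    dependencies: dict = field(default_factory=dict)
    unused: list[str] = field(default_factory=list)
    summaries: dict = field(default_factory=dict)

def extract_dependencies(state: AnalysisState) -> AnalysisState:
    """Surgical task 1: build a 'who references whom' map from raw metadata."""
    for item in state.raw_metadata:
        state.dependencies[item["name"]] = item.get("references", [])
    return state

def flag_unused(state: AnalysisState) -> AnalysisState:
    """Surgical task 2: flag components nothing references (tech-debt candidates)."""
    referenced = {ref for refs in state.dependencies.values() for ref in refs}
    state.unused = [name for name in state.dependencies if name not in referenced]
    return state

def summarise_for_agent(state: AnalysisState) -> AnalysisState:
    """Surgical task 3: produce a compact, AI-readable summary per component.

    In a real pipeline this step might call an LLM on one component at a time,
    keeping each call small enough to reason about reliably.
    """
    for name, refs in state.dependencies.items():
        state.summaries[name] = f"{name}: depends on {len(refs)} components"
    return state

PIPELINE = [extract_dependencies, flag_unused, summarise_for_agent]

def run_analysis(raw_metadata: list[dict]) -> AnalysisState:
    state = AnalysisState(raw_metadata=raw_metadata)
    for task in PIPELINE:        # chain the narrow tasks in order
        state = task(state)
    return state
```

The value of chaining is that each task is narrow enough to verify on its own, which is what makes the resulting context trustworthy enough to hand to an agent.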
Action Plan

Context Engineering is a new term for AI Agents, but the content already exists in organizations as the institutional knowledge that people "absorb" over time. AI Agents are built to accept a firehose of information, but they require it to be accurate and unambiguous. That has implications for organizations that want to tap into the benefits of AI Agents capable of delivering sophisticated outcomes.

Here is a 3-step action plan:

1. Document the scope of your AI Agents in terms of the end-to-end process and the outcomes.
2. Identify the critical contextual information required to make the AI Agents perform at the highest level, and review its quality.
3. Format the contextual information in the platforms that can curate it for AI Agents.

Final Word

With Context Engineering curating the right level of contextual information for AI Agents based on an end-to-end process, we can see how to make AI Agents trusted enough to deliver broader use cases in the enterprise. At the moment, AI Agents are being used in narrow use cases where the contextual decision making is limited, e.g. Intercom's Fin for support, the Salesforce SDR, and Amazon's product finder, Rufus. Now Salesforce is building the software stack that can provide this trusted context, which will pave the way for AI Agents to deliver powerful end-to-end outcomes that span more than just Salesforce.

Ian Gotts
Founder, Elements.cloud