How metadata dictionaries can help reduce technical debt

Impact of technical debt

Forrester came up with the concept of the “Salesforce@scale dilemma” back in 2017, and I suggest it has not got any better since. It is the idea that the more you use Salesforce, the more the users need; the more you configure it and the more you add, the more difficult and risky it becomes to change, until eventually it has built up and up and up.

“Typically, clients are impressed by Salesforce’s CRM applications, which are more modern and user-pleasing than older applications. And they love Force.com’s high productivity for developers to configure the CRM applications as well as create new applications from scratch. Initial success breeds demands for more and more. As additional departments ask for Salesforce subscriptions, business leaders want to expand initial wins with Salesforce CRM into customer and/or partner engagement, marketing automation and analytics. New custom applications and customisations mushroom. In addition, the complexity of scale crushes Salesforce’s responsiveness. As Salesforce use grows, innovation slows and flexibility evaporates. Why? Every app change risks breaking one of hundreds of data and process customizations, integration links, and third-party add-ons. The result: every change requires long and expensive impact-analysis and regression testing projects – killing the responsiveness that made Salesforce attractive at the start. The Salesforce@scale dilemma is a challenge for clients to overcome, rather than an inevitable outcome of large-scale, strategic Salesforce implementations.
It is a big issue because Salesforce has become a much more strategic supplier in technology strategies to win, serve and retain customers.” – John Rymer, Forrester

Scale of technical debt

Elements.cloud synchronizes about 50,000 orgs and analyzes about 1.3 billion metadata items a month, so there is great data on the scale of technical debt. The Change Intelligence Research Series looks at different aspects: UX, unused metadata, and security. You can download the reports here. The data from the Change Intelligence Research Series reports is sobering:

- 51% of custom objects never get used
- 41% of custom fields on custom objects never get populated
- 43% of custom fields on standard objects never get populated

And that is not counting managed packages. Each of those metadata items was created because of feedback. There have been meetings to argue about what it was going to be called. There have been Slack discussions about where to put it. It has then been built, tested, and deployed. Clearly it hasn’t been documented! But all of that effort is wasted. BTW, 43% is the average across all standard objects, but this rises to 88% when you look at the core objects: Case, Contact, Account, Lead, Event, etc.

Whilst these are scary numbers, the real issue is the cost associated with this technical debt. These fields end up on page layouts. The average number of fields on the Opportunity page layout is 150! If your page layout looks like a CVS receipt, then your end users are going to do one thing: hit save, see which fields are mandatory, and enter whatever they need to get out of the page. This destroys data quality.

We’ve only talked about objects and fields, but there is also related metadata that is created and never used: validation rules, flows, list views, permission sets/profiles, etc. And, whilst I haven’t got the data for this, empirically it feels like after 7 years people are about to throw away their org.
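Field-population statistics like these are the kind of thing a metadata dictionary computes automatically. As a minimal sketch of the underlying calculation, here is a hypothetical example; the record dicts and field names are invented sample data, not output from any real org.

```python
# Illustrative only: estimate how often each field is actually populated,
# given records exported from an org (e.g. via a SOQL query).

def field_population(records, fields):
    """Return {field: fraction of records with a non-empty value}."""
    total = len(records)
    if total == 0:
        return {f: 0.0 for f in fields}
    populated = {f: 0 for f in fields}
    for rec in records:
        for f in fields:
            if rec.get(f) not in (None, ""):
                populated[f] += 1
    return {f: populated[f] / total for f in fields}

# Hypothetical sample: three Opportunity records, two custom fields.
records = [
    {"Name": "Acme renewal", "Region__c": "EMEA", "Legacy_Code__c": None},
    {"Name": "Globex upsell", "Region__c": "AMER", "Legacy_Code__c": ""},
    {"Name": "Initech new",  "Region__c": None,   "Legacy_Code__c": None},
]
rates = field_population(records, ["Region__c", "Legacy_Code__c"])
# Region__c is populated on 2 of 3 records; Legacy_Code__c on none,
# flagging it as a candidate for the "never populated" bucket.
```

Run at org scale (every field on every object, refreshed on a schedule), this simple count is what surfaces the never-populated fields quoted above.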
That doesn’t have to happen, but teams get to the point where the level of technical debt means they consider starting again. This is a massive waste of investment.

What is Salesforce Metadata?

Metadata is the key that drives the customization, functionality, and efficiency of your Salesforce org. It makes it possible for anyone to configure Salesforce, Data Cloud, or Agentforce using a familiar drag-and-drop interface. Understanding the importance of Salesforce metadata is crucial to unlocking the full potential of the platform, ensuring that you maintain best practices and consistently deliver results that align with your organization’s goals.

Metadata refers to “data about data”. In the context of software and data management, it serves as a detailed roadmap, capturing attributes, relationships, configurations, and other specifics. Think of it as the labels in a library’s card catalog: while not the content of the books themselves, these labels provide critical information about what each book is, where it resides, and how it relates to other volumes. Read The Ultimate Guide to Salesforce Metadata to discover more.

Similarly, in Salesforce, metadata encompasses the configurations, layouts, and settings that define how the platform behaves. It is the blueprint that captures how objects relate, which fields are present on a layout, or the criteria for a specific automation. By grasping the importance of metadata, Salesforce Admins are better equipped to understand the platform’s architecture and make informed decisions during customization and management tasks. Metadata can also be linked to other metadata; this blog explores those metadata dependencies.

Calculating the cost

The cost is more than just the development resources to create this metadata. Confusing page layouts slow down end users. That leads to frustration and, worse, to poor data being entered.
Whilst clean data has always been valuable, Agentforce’s ability to drive agents based on data means it is even more important that we have great data governance. Let’s dig into each of these cost items.

Complexity slows changes

The more complex the org, the longer it takes to do the impact analysis to understand the risk of making a change or deleting metadata. This means the analysis is either very time-consuming or potentially incomplete. Some teams decide that impact analysis is too difficult, too time-consuming, or not accurate enough to be worth doing. Instead, they make changes and risk rollbacks when those changes break the org. This slows down the speed at which changes can be rolled out. The knock-on effect is that end users cannot be as effective, because Salesforce fails to keep up with the speed of business change. This can lead to a loss of competitive advantage.

End user cost

Estimate the time that an end user spends staring at a confusing screen of fields, trying to decide which of a myriad of picklist values to select. Multiply that by the number of users and how often they hit that page, and it is a significant cost. It is difficult to estimate the cost of the user frustration, but ultimately users could stop using Salesforce completely and go back to spreadsheets. Or they could stand up their own shadow Salesforce org.

Cost of poor data

Ultimately, the reason for Salesforce is to track the data that runs your operations. Poor data doesn’t just impact operations: that data feeds the dashboards that managers and executives use to make strategic decisions. So the cost of poor data quality is huge. It could be wrong data in the right fields or, where there are duplicate fields, data entered into the wrong fields. Again, we have insights into the scale of the problem. Validity, a data-quality ISV, recently analyzed 246 billion data items and found 30% duplication. The time taken to clean this up is enormous.
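To make the duplication problem concrete, here is a toy sketch of the simplest possible duplicate detection: group records that share a normalized key. The contact records and field choices are hypothetical, and real data-quality tools use far more sophisticated fuzzy matching; this is only a baseline illustration.

```python
# Toy duplicate detection on hypothetical contact records: normalize a
# couple of identifying fields into a key and group records sharing it.
from collections import defaultdict

def duplicate_groups(records):
    """Return lists of records that collapse onto the same normalized key."""
    groups = defaultdict(list)
    for rec in records:
        key = (
            rec.get("Email", "").strip().lower(),
            rec.get("LastName", "").strip().lower(),
        )
        groups[key].append(rec)
    return [g for g in groups.values() if len(g) > 1]

contacts = [
    {"LastName": "Smith", "Email": "j.smith@example.com"},
    {"LastName": "SMITH", "Email": "J.Smith@example.com "},  # same person, messy entry
    {"LastName": "Jones", "Email": "a.jones@example.com"},
]
dupes = duplicate_groups(contacts)
# The two "Smith" records collapse onto one key; one duplicate group found.
```

Even this naive exact-key approach catches casing and whitespace variants; the 30% duplication figure above reflects how much slips through when nothing like it is run at all.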
Unless tech debt is cleaned up and good data governance is put in place, it will simply build up again.

Puts the brakes on exploiting Agentforce

Data is used by Agentforce to power agents. Agents are able to reason and plan actions based on the data they are presented with. Agents don’t have the “gut feel” that a manager has, so poor data is not questioned; it is acted on. The impact on customer satisfaction, brand reputation, and compliance dwarfs the cost of building metadata that is never used. So tech debt is potentially preventing the organization from exploiting agents and AI. This is a strategic competitive advantage issue. Technical debt impacts agents from 3 perspectives:

- Agents need to understand what data they can access: this is defined in the field metadata
- Agents need good quality, unambiguous data: this is data quality and duplication
- Agents need instructions and actions: this is automation metadata

Cause of technical debt

There are a number of reasons for technical debt, and not all of it is because your teams have taken shortcuts or made bad architectural decisions. Three times a year Salesforce releases new functionality that makes some of the changes you have made obsolete. Salesforce also discontinues support for functionality, such as Workflow Rules and Process Builder, and requires you to migrate to Flows; and it advocates different approaches, such as Permission Set Groups. These all drive a need to make changes that are outside your control.

A lack of metadata documentation also creates technical debt. Rarely is the description field filled out for metadata. This makes it risky to reuse that metadata (a field, a flow) because you have no idea of the impact. It is safer to create a duplicate, but that duplicate is technical debt.

Finally, a lack of rigor in the implementation lifecycle means that requested requirements are not questioned enough to understand the true need. The rush to start building means what is developed is not what is needed. This results in poor adoption.
Either the functionality is never used (the 41% of fields) or it requires time-consuming rework to get it right.

3 steps to minimize the impact of technical debt

A metadata dictionary is pivotal to the 3 steps to managing technical debt:

1. Understand tech debt: Understanding the level and impact that technical debt is having on your ability to deliver rapid changes is the first step. A metadata dictionary is the only way to get an overall perspective and then drill down into detail.
2. Cost and prioritize reduction: Any technical debt reduction requires a business case that stakeholders buy into. Tech debt reduction does not have an obvious visible benefit, and it can be seen as delaying the delivery of changes. A metadata dictionary enables you to estimate what it will take to reduce the technical debt in the places where it is having the greatest negative impact.
3. Manage metadata changes: To stop future technical debt you need to put in place a more rigorous implementation approach, where business analysis is seen to be important and documentation is not an afterthought. The metadata dictionary is core to this.

I present at Salesforce and Dreamin’ events around the world, and I always ask the audience who has a metadata dictionary. It is less than 5%. I find this staggering when it is the quickest and easiest way of making a step change in managing tech debt and delivering safer, faster changes. Let’s dig into each of the 3 steps in more detail and show how a metadata dictionary can transform your ability to stay in control of changes.

Understand metadata tech debt

To understand the entire configuration of your org, which may be 100,000s of metadata items, many of which change with every new release, you need to automate the creation and maintenance of a metadata dictionary. Salesforce has created several APIs that allow apps like Elements.cloud to pull the metadata and some related documentation.
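As an illustration of what "pulling the metadata" yields, here is a hypothetical sketch that normalizes a listMetadata-style response into metadata dictionary entries. The response dicts below are simplified sample data shaped loosely like Metadata API results, not a live API call, and the field names chosen are assumptions for the example.

```python
# Illustrative only: turn raw metadata descriptors into sorted dictionary
# rows, flagging items whose description was never filled in.
from datetime import date

def to_dictionary_entries(list_metadata_response):
    """Normalize raw metadata descriptors into dictionary rows."""
    entries = []
    for item in list_metadata_response:
        entries.append({
            "api_name": item["fullName"],
            "type": item["type"],
            "last_modified": item.get("lastModifiedDate"),
            "documented": bool(item.get("description")),  # rarely filled in!
        })
    return sorted(entries, key=lambda e: (e["type"], e["api_name"]))

sample = [
    {"fullName": "Opportunity.Region__c", "type": "CustomField",
     "lastModifiedDate": date(2024, 6, 1), "description": None},
    {"fullName": "Case_Escalation", "type": "Flow",
     "lastModifiedDate": date(2024, 9, 12), "description": "Escalates P1 cases"},
]
entries = to_dictionary_entries(sample)
undocumented = [e["api_name"] for e in entries if not e["documented"]]
```

Listing every item is the easy part; as the next section explains, the hard part is the analysis layered on top.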
To create an actionable metadata dictionary requires a great deal of custom functionality: for example, analyzing field population or picklist value usage. To create the dependency trees (the chain of where metadata is used that is vital for impact analysis) you have to look at every metadata item and evaluate whether it is used in every other metadata item. This is a huge, processing-heavy activity, which is why it is impossible to do manually. There are no APIs that provide this information, so building it has taken 100+ man-years of development effort. It is not trivial to perform this analysis at scale.

As every new metadata type is created, we need to develop new analysis. Data Cloud and Agentforce are examples: Data Cloud adds a number of new metadata types and hundreds of standard DMOs, and Agentforce adds Bot and GenAI metadata types. These need to be added to the metadata dictionary and, importantly, to the dependency analysis, so every metadata type that can be used in an Agentforce metadata type needs to be analyzed. The benefit is that if someone is about to change a field, they can see that it is used by an agent. Not knowing could have serious implications.

Cost and prioritize reduction

Once you have an understanding of the metadata and its dependencies, you can analyze the impact the tech debt has. Tech debt in areas that are never changed or never used has little or no impact. Knowing that a picklist value kicks off 20 Flows, which in turn launch Apex, is critical to assessing the effort to untangle the tech debt. The metadata dictionary analysis enables you to pinpoint where the tech debt is hurting you most, and also the effort involved in resolving it.

As metadata is just data about data, AI has a role to play. AI is fantastic with good quality data, and metadata is 100% accurate: it is the true picture of your org configuration. Elements’ AI Agents sit on top of the metadata dictionary and enable you to ask questions about your org.
This augments the suite of standard analysis, reporting, and dashboards. Armed with this analysis you can build a costed, prioritized justification to allocate resources to reduce tech debt. As we’ve said before, you need to show that this is worth doing, rather than working on new functionality. Without a metadata dictionary with analysis powered by AI, you cannot hope to build and defend that case.

Manage metadata changes

The first 2 points have been about reducing tech debt. But unless you fix the core issues that are contributing to tech debt, you will always be fighting a losing battle. The root cause of tech debt that you can control is not conducting rigorous enough business analysis through the implementation lifecycle. Too quickly you rush to start building without being clear about the business need. Our mantra is “Build the right thing, then build it right”. “Build the right thing” goes back to that analysis. The analysis has several steps:

1. Capture the requirements. This is what the end users think they want. The trick is not taking these at face value; you need to understand the true need.
2. Validate the requirements. Take the requirements and consider the business processes and how they change based on the users’ demands. Understand the impact on the data model. Ask more probing questions to understand what the true need is. Whilst this feels like it is taking time, and you may get pushback from the users, time spent here will save 10x or 100x the time lost to rework if the wrong thing is built. This is the concept of “Shift Left”.
3. Create user stories. From the business processes, you can identify the changes that need to be made. These are documented as user stories, linked to the process diagrams and to the impacted metadata in the metadata dictionary.
4. Assess the impact. Once you understand what needs to be changed, you can assess the impact from 3 perspectives: business, technical, and regulatory.
The metadata dictionary is critical for this phase. It holds the historical documentation of changes and the analysis documentation. Attached to a metadata item are its previous changes and the business processes where it is used; this helps assess the business risk and cost. The dependencies help estimate the technical cost and risk. This can also help the development teams allocate the correct level of resources: lower risk changes can be fast-tracked, while higher risk changes must be allocated to a longer pipeline. Without the metadata dictionary analysis, every change has to be taken through the higher cost, longer route.

Finally, validate the business case for change. Now that the true cost, risk and delivery estimates of the changes are understood, the business can decide if they are a good use of resources. It may change the prioritization or even decide it doesn’t want the changes made. Again, without the metadata dictionary analysis the change will just be delivered.

We have adopted this approach internally at Elements.cloud. Between 1st August and 1st October 2024, the team delivered 155 releases/deployments to Salesforce. There have been no surprises and zero rollbacks due to the org being broken. And yes, we have an Elements.cloud metadata dictionary.

What to look for in a metadata dictionary

Excel spreadsheets are not metadata dictionaries. The problem is that your org is changing so fast, and there is so much metadata, that trying to manage it in a spreadsheet is impossible. We’re looking at 10,000 to 100,000 metadata items. Understanding how to manage the metadata is your first step to understanding how to manage technical debt. Then add in the requirement to provide detailed impact analysis; a place for documentation that is automated, AI-generated, and manual; notifications when things change; and AI Agents so you can have a conversation with your metadata.
Also, you need the confidence that the analysis considers all the metadata and dependencies, so you can make an informed decision. Analysis that only covers a subset of metadata or dependencies, or that runs on out-of-date metadata, will give a false sense of confidence. A metadata dictionary is a sophisticated, AI-powered application.

Considerations

Here are some considerations as you set up a metadata dictionary and the org configuration documentation. BTW, these are all addressed with the Elements.cloud metadata dictionary.

- Which orgs are the metadata dictionaries created from: Production, one of your Sandboxes, your scratch DX orgs?
- Which metadata types will the metadata dictionary track? Is it just the core platform (Sales/Service Cloud and managed packages), or also Data Cloud, Agentforce, Industry Clouds, and the external systems and other apps that are integrated?
- How do you keep the metadata dictionary in sync with the orgs as new metadata items are created, and what happens to the documentation if metadata is deleted? Metadata may be modified by your team, by external consultants, or by upgrades to managed packages.
- How much “documentation” can be pulled from Salesforce using the Metadata API (description, summary, created date, last modified date), and how much can be generated automatically (where-used, risk assessment, field population)?
- How does this scale for huge orgs or multi-org implementations? Are there cost or technical limitations?
- How do you get the documentation to “flow” as metadata is migrated through the Sandboxes to Production, so it is only entered once? The aim is to create documentation while it is fresh in the mind, so as early as possible. If you leave it until later, it will not happen as easily.
- How do you get different teams, each working in their own Sandboxes and scratch DX orgs, to document their work? How do you merge the documentation so that each team can see what everyone else is doing? This is really important when they are working on the same objects concurrently.
- Where are you going to manage and version control the org documentation: requirements, user stories, process maps, notes, specifications, screenshots, etc.?
- How do you report on the org documentation to speed up the impact assessment? How you want to use the org documentation will determine how you structure and store it.
- How do you control access for the people who can update, view, collaborate on, and report on the metadata dictionary and org documentation? Is it just your Admins? What about developers and external consultants?
- Can some of the org documentation also be reused as end-user help? If so, how do you provide easy access from within the Salesforce record pages to the master versions of the documentation?
- How can AI enable you to have a conversation with your org?

Functionality checklist

If you are assessing metadata dictionaries to decide which ISV to select, here is a list of functionality we’ve built because customers have demanded it. There is nothing on the list that we’ve added speculatively. We have a 40-person development team, but we don’t have time to build things that people don’t need. We follow our own mantra: “Build the right thing”. So if you look at this list and say “We don’t need this”, then stop and consider why.

You could also use this list as the specification to build a metadata dictionary internally. Considering our development team size and the last 5 years’ development effort, it is not a trivial task. Plus, our development team is constantly working to make sure the metadata dictionary tracks the new metadata types that Salesforce is releasing. Building is a huge effort. Ongoing maintenance is not insignificant.
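To give a feel for the "where-used" dependency analysis at the heart of this effort, here is a toy impact-analysis sketch. It takes edges of the form "X references Y" (roughly the shape reported by the Tooling API's MetadataComponentDependency object) and walks them in reverse to find everything transitively affected by changing one item. All the metadata names here are hypothetical, and a production implementation must first mine these edges from the metadata itself, which is the genuinely hard part.

```python
# Toy "where-used" impact analysis over hypothetical dependency edges.
from collections import defaultdict, deque

def impacted_by(dependencies, target):
    """All metadata items that directly or indirectly reference `target`."""
    used_by = defaultdict(set)  # item -> items that reference it
    for referencer, referenced in dependencies:
        used_by[referenced].add(referencer)
    seen, queue = set(), deque([target])
    while queue:
        item = queue.popleft()
        for parent in used_by[item]:
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

deps = [
    ("Flow:Escalate_Case", "Field:Case.Priority__c"),
    ("Apex:CaseRouter",    "Flow:Escalate_Case"),
    ("Layout:Case_Layout", "Field:Case.Priority__c"),
]
blast_radius = impacted_by(deps, "Field:Case.Priority__c")
# Changing the field ripples through the Flow, the Apex class that
# launches it, and the page layout that displays it.
```

The traversal itself is cheap; the expense the article describes comes from extracting and maintaining accurate edges across every metadata type in an org of 100,000s of items.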
Here is a list of the core functionality you should expect to find in a metadata dictionary:

Multi-platform dictionaries
- Core platform metadata, inc. Industries, Data Cloud, Agentforce…
- 3rd party apps: ServiceNow, Netsuite, Mulesoft…
- Production and Sandboxes
- Kept up to date: daily sync
- Change log
- Proposed metadata, e.g. add and document metadata that is not yet built
- Stakeholder ownership
- Export and bulk import

Documentation
- Metadata API documentation, e.g. Apex test coverage
- Automated documentation, e.g. field population
- AI-generated, e.g. “describe this Flow”
- Manually attach/add external documentation, e.g. URL link, image, notes
- Custom metadata for metadata
- Metadata documentation as end-user help

Analysis and AI
- Field population
- Dependencies within Salesforce
- Dependencies to 3rd party apps, e.g. non-core
- Tech debt
- Compliance, e.g. GDPR
- Performance, e.g. API version
- Security, e.g. Profiles/Permissions/Permission Sets
- Free text search
- Export to CSV
- Metadata AI Agent

Access/views
- Table
- Custom views of tables
- Dependency trees
- Reporting
- Dashboards
- Access inside Salesforce Setup, DevOps tooling, ticketing apps

Change notifications
- Change log
- Notification of changes
- Collaboration / chat at metadata item level

Integrations
- Event monitoring
- DevOps
- Ticketing, e.g. Jira
- Salesforce Clouds, e.g. Data Cloud, Agentforce, Industry Clouds
- 3rd party apps, e.g. ServiceNow, Netsuite, Mulesoft

Security and access
- Access control
- Manage vs edit vs view
- SAML / SSO

Final word

Implementing a metadata dictionary is without doubt the quickest and easiest way to make a step change in how you manage tech debt and deliver safer, faster changes. And it is very affordable for every org, no matter the size. Connect a metadata dictionary and within an hour you will have insights and intelligence about your org that you never dreamed possible. It is something your team will use daily to understand changes and document their work.
It is pivotal to driving calmer, more effective, and cheaper changes in Salesforce. And it opens the way to implementing Agentforce and, if you need it, Data Cloud. What are you waiting for? The next “hair on fire, what broke the org” rollback?

Ian Gotts, Founder & CEO. Published: 1st November 2024.