Building a business case for removing technical debt
Technical debt has a real cost
Technical debt kills an application’s agility, making it slower and more expensive to change. It can even hit performance and open up security flaws. The true cost of this technical debt is often hidden and not immediately apparent to business users. Tech debt is also often described in very technical terms, making it difficult for the business to understand.
Forrester, the industry analyst, called it the “Salesforce @scale dilemma.”
“The complexity of scale crushes Salesforce’s responsiveness. As Salesforce use grows, innovation slows and flexibility evaporates. Why? Every app change risks breaking one of hundreds of data and process customizations, integration links, and third-party add-ons. The result: every change requires long and expensive impact-analysis and regression testing projects – killing the responsiveness that made Salesforce attractive at the start.”
– Salesforce@scale Dilemma, Forrester, 2017
Pressure from the business to deliver new functionality makes it difficult to allocate resources to tech debt reduction. Therefore, implementation teams need to build a business case that cites the cost of tech debt so that they can justify the effort to remove it. That case needs to use data that the business can understand, relate to, and validate.
Current levels of tech debt
Elements.cloud syncs 50,000 orgs and analyzes 1.3 billion metadata items per month. The Change Intelligence Research Series has produced a number of reports based on analysis of a representative sample of Salesforce implementations. The first report looked at UX complexity, and the second at the wasted effort in developing objects and fields that were never used.
The numbers are staggering.
The first chart on the left shows that 51% of custom objects are created and never used. This number excludes custom objects added by installing managed packages. The chart on the right shows that 41% of all custom fields, created across custom and standard objects, are never populated. This number excludes custom fields from managed packages. It also excludes field types where a null value can be valid: picklists and checkboxes.
Interestingly, this percentage is an average across all standard objects, of which there are hundreds. If you narrow it down to the core standard objects, the percentages are much higher. For example, 71% of custom fields on the Lead object are never populated, 66% on Contact, and 58% on Account.
Having lots of custom fields that are never populated is not necessarily an issue if they are never seen – unless, of course, you are hitting object limits. The real issue is that many of these fields are on page layouts. As you can see from the chart below, the Opportunity object has 150 fields on average, and over 50% of these are never populated. UX experts say that, to reduce cognitive load for users, that number should be 7.
Technical debt has an interest rate
Just like any debt, there is interest that needs to be paid. But not all tech debt is the same: some is more expensive than others, and not all of it is bad. If we look at the different levels of debt across the org in terms of the interest that needs to be paid back, we can categorize it.
Highest level
Think of this like credit card interest: 29%. Tech debt at this level is hurting users’ productivity. There are surprises, and even rollbacks, when making changes in other areas that are unknowingly related to it. It could even affect the performance or security of Salesforce. This tech debt needs to be addressed right now, even if there are no plans to make other changes in that area.
Medium level
This is a bank loan: 5%. It will impact any future changes in that area. If it is architectural tech debt it could prevent future changes such as Data Cloud or AI. It will delay or extend the work, so extra time to remove the tech debt needs to be factored into the work, probably before it starts.
Low level
This is a mortgage locked into sub-1% or a parental loan. The tech debt doesn’t impact the user or the delivery of changes. If it ain’t broke, don’t fix it.
The data to build the business case
There are several places to look for the costs of tech debt so you can build a costed case for the development work to resolve it. I’ve listed them in order from the easiest to calculate and justify.
User productivity
This is the easiest to calculate. It is also the most tangible, so it is easily understood by business users and executive management. The biggest culprit is overly complex screens with too many fields. The cost can be calculated as the additional time to read and complete each screen. If users don’t understand what to put in fields, they enter whatever is easiest in order to save the record, which destroys data quality. Ultimately, this leads to user frustration.
Data correction
Confusing page layouts with huge numbers of fields, or fields with large numbers of picklist values, result in incorrect data being entered. This data then needs to be corrected, either by those users or other teams. This has a cost in terms of time, but there is also the impact of poor data quality, particularly if the data is not corrected instantly.
Poor decision making
Bad decisions are made by executives reading dashboards and reports built on inaccurate or incorrect data. The data may also trigger external systems via integrations. For example, poor sales data fed into the CPQ (Configure Price Quote) application has a direct impact: the quote is wrong, the order is wrong, the delivery is wrong. That is why CPQ implementations often expose the quality of the original sales implementation. Finally, bad data is junk food for AI. AI will make bad recommendations based on bad data, and these may not be picked up by humans.
User support
When users are confused by screens, or by automation that produces results that are difficult to understand, they raise support calls. The analysis to find answers can take far longer when there are high levels of technical debt. The cost here is the analysis time, but also the time the user is kept waiting for the answer, which stops them from completing their task.
Compliance and security
The risk of data security breaches and failed regulatory compliance is very compelling. It almost needs no cost estimate to justify the work to your executive management. What is needed is to tie these issues back to tech debt.
Downtime and roll-backs
If impact assessments of changes are not done with enough rigor, Salesforce could stop working, resulting in downtime and a roll-back. This is difficult to quantify. The cost of any downtime can be calculated, but the risk of it happening is impossible to predict.
Increased delivery costs
Future changes take longer because the impact analysis effort is greater. That analysis ensures the consequences of any change are understood, but it is difficult to estimate how much extra time it takes. A Change Intelligence Platform like Elements.cloud makes the analysis faster, and now AI can accelerate it even more.
Fixing tech debt
Fixing tech debt is not all about deleting fields. Tech debt can be architectural and require restructuring the data model, so there are different levels of tech debt and of the effort to remove it. Here I’ve listed some of the approaches in order from easiest to most challenging. You need to weigh your priorities and the risks of the tech debt. Fixing the security model – Profiles/Permission Sets – is a major project, but it may be a priority due to potential compliance fines.
Metadata documentation
A lack of good metadata documentation is a prime cause of tech debt. If you don’t know WHY metadata was created, you cannot reuse it without risk. Documentation that only says WHAT the metadata does is as good as useless. You need the WHY in the description so you understand whether you can reuse the metadata.
For example: you want to add a field on the Account object that tracks the revenue of the customer, i.e. their turnover. There is already a field called Revenue__c, but is this the turnover of the customer, or the amount of revenue the customer pays us annually or YTD? There is no documentation, and we have no idea what the field triggers or why it is there without a massive amount of analysis. So it is easier to create a new field called Turnover__c. We don’t document it because… And the next time someone looks at it, they are not sure if it is the revenue of the customer or the amount we lost when the customer attrites.
We’ve created an AI Prompt to help you analyze field descriptions, starting with the most important fields, and to estimate the effort the analysis will take. That way you can add documentation for the most important fields on the most important objects. We’ve also created a description standard, MDD (Metadata Description Definition), so you can write descriptions that both humans and AI can read – because Einstein Copilot is now reading your descriptions.
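If you want to see the size of your documentation gap before running the prompt, you can pull field descriptions straight out of the org. Below is a minimal sketch in Python using the REST query endpoint and the FieldDefinition object; the instance URL, access token, API version, and the choice of the Account object are placeholders you would replace, and if your org does not expose FieldDefinition through the standard query endpoint, the same SOQL can be run through the Tooling API query endpoint instead.

```python
import requests

# Assumptions: placeholder org URL, access token, and API version - swap in your own
# (e.g. a token from an OAuth flow or the Salesforce CLI). Account is just an example object.
INSTANCE_URL = "https://yourorg.my.salesforce.com"
ACCESS_TOKEN = "REPLACE_WITH_ACCESS_TOKEN"
API_VERSION = "v60.0"

# FieldDefinition queries must be filtered to a single object.
soql = (
    "SELECT QualifiedApiName, Label, DataType, Description "
    "FROM FieldDefinition "
    "WHERE EntityDefinition.QualifiedApiName = 'Account'"
)

resp = requests.get(
    f"{INSTANCE_URL}/services/data/{API_VERSION}/query",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"q": soql},
)
resp.raise_for_status()

# Flag every field with an empty description - the documentation gap.
undocumented = [
    f["QualifiedApiName"]
    for f in resp.json()["records"]
    if not f.get("Description")
]
print(f"{len(undocumented)} fields on Account have no description:")
for name in undocumented:
    print(" -", name)
```

Running this object by object gives you a simple count of undocumented fields to quote in the business case.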
Also, think about changing your sign-off process so that metadata cannot be released to production without good documentation.
Help Text
This is a quick win. Creating good, concise, helpful help text significantly reduces poor data entry and user frustration. If it is done in concert with the next item, “Hiding fields”, the UX can be dramatically improved.
We’ve created an AI Prompt to help you analyze help text, starting with the most important fields for an object. The important fields are those that trigger automation, are picklists, have validation, or are used in reports and dashboards. We’ve also given you some ideas for improving the help where the 500-character help text is not enough.
Hiding fields
If deleting fields is too scary, and you are not at object field limits, you can simply remove fields from page layouts to simplify the UX. If the fields are still required, think about using Lightning Dynamic Forms so that fields are displayed based on record types or the values of other fields.
This requires an analysis of the fields. We’ve created an AI Prompt that analyzes every field and ranks them by complexity. It also estimates the analysis time, so you can estimate the overall effort. It groups the fields by their % data population.
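If you want a quick, manual version of the population check for a single field, a pair of record counts is often enough. This is a rough sketch, not the Elements.cloud analysis: Annual_Revenue_Band__c is a hypothetical field name, the credentials are placeholders, and COUNT() queries on very large objects can hit query timeouts.

```python
import requests

# Placeholders - swap in your own org URL, token, and API version.
INSTANCE_URL = "https://yourorg.my.salesforce.com"
ACCESS_TOKEN = "REPLACE_WITH_ACCESS_TOKEN"
API_VERSION = "v60.0"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def count(soql: str) -> int:
    """Run a SELECT COUNT() query and return the number of matching records."""
    resp = requests.get(
        f"{INSTANCE_URL}/services/data/{API_VERSION}/query",
        headers=HEADERS,
        params={"q": soql},
    )
    resp.raise_for_status()
    return resp.json()["totalSize"]

# Hypothetical custom field - replace with the field you are analyzing.
obj, field = "Opportunity", "Annual_Revenue_Band__c"

total = count(f"SELECT COUNT() FROM {obj}")
populated = count(f"SELECT COUNT() FROM {obj} WHERE {field} != null")

pct = (populated / total * 100) if total else 0.0
print(f"{field} on {obj}: {populated}/{total} records populated ({pct:.1f}%)")
```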
Deleting fields
If you are at object field limits, you will need to delete fields. But you cannot delete a field just because it has no data – we thought of 8 reasons why a field may be empty yet shouldn’t be deleted. So you need to do some analysis before deleting fields.
The AI Prompt in the previous section (“Hiding fields”) can help you, as it lists the fields with zero data and no dependencies – the best candidates for deletion. We wrote an article for SalesforceBen which lists the 8 reasons you shouldn’t delete an empty field.
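If you want to double-check dependencies yourself before deleting anything, the Tooling API’s MetadataComponentDependency object (still in beta at the time of writing) lists components that reference a field. The sketch below assumes a hypothetical custom field on Opportunity and placeholder credentials; an empty result is a good sign but not proof, because not every kind of usage is captured, so treat it as input to the analysis rather than a green light.

```python
import requests

# Placeholders - swap in your own org URL, token, and API version.
INSTANCE_URL = "https://yourorg.my.salesforce.com"
ACCESS_TOKEN = "REPLACE_WITH_ACCESS_TOKEN"
API_VERSION = "v60.0"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def tooling_query(soql: str) -> list:
    """Run a SOQL query against the Tooling API and return the records."""
    resp = requests.get(
        f"{INSTANCE_URL}/services/data/{API_VERSION}/tooling/query",
        headers=HEADERS,
        params={"q": soql},
    )
    resp.raise_for_status()
    return resp.json()["records"]

# 1. Look up the Id of the custom field (hypothetical field on Opportunity).
fields = tooling_query(
    "SELECT Id, DeveloperName FROM CustomField "
    "WHERE TableEnumOrId = 'Opportunity' AND DeveloperName = 'Annual_Revenue_Band'"
)
if not fields:
    raise SystemExit("Field not found - check the object and developer name.")
field_id = fields[0]["Id"]

# 2. List components that reference the field (flows, Apex, layouts, ...).
deps = tooling_query(
    "SELECT MetadataComponentName, MetadataComponentType "
    "FROM MetadataComponentDependency "
    f"WHERE RefMetadataComponentId = '{field_id}'"
)
if not deps:
    print("No recorded dependencies - a stronger candidate for deletion.")
for d in deps:
    print(f"{d['MetadataComponentType']}: {d['MetadataComponentName']}")
```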
Deleting managed packages
Managed packages do not cause huge tech debt problems, as they are often self-contained. But some do add their custom fields to page layouts. The other issue is that they add custom fields with names similar to your existing standard or custom fields. That means when you are selecting fields – e.g. in Flow, Einstein Copilot, or Reports – you are given a list of fields that look the same, often with no API name to help you determine which is which. This adds development time because you need to analyze the fields, and there is a risk you select the wrong field.
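You can get a feel for how confusing the field pickers will be by listing field labels on an object and flagging duplicates; managed-package fields carry their namespace prefix in the API name, which makes the clashes easy to spot. This is a rough sketch along the same lines as the earlier ones: the credentials are placeholders and Account is just an example object.

```python
from collections import defaultdict
import requests

# Placeholders - swap in your own org URL, token, and API version.
INSTANCE_URL = "https://yourorg.my.salesforce.com"
ACCESS_TOKEN = "REPLACE_WITH_ACCESS_TOKEN"
API_VERSION = "v60.0"

soql = (
    "SELECT QualifiedApiName, Label FROM FieldDefinition "
    "WHERE EntityDefinition.QualifiedApiName = 'Account'"
)
resp = requests.get(
    f"{INSTANCE_URL}/services/data/{API_VERSION}/query",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"q": soql},
)
resp.raise_for_status()

# Group API names by label; two or more entries means users will see
# identical-looking choices in Flow, report builders, and so on.
by_label = defaultdict(list)
for f in resp.json()["records"]:
    by_label[f["Label"]].append(f["QualifiedApiName"])

for label, api_names in sorted(by_label.items()):
    if len(api_names) > 1:
        print(f"'{label}' x{len(api_names)}: {', '.join(api_names)}")
```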
We’ve created a checklist and AI prompts for removing Managed Packages so you can remove them without risk.
Permissions & Profiles migration
This is not straightforward. But it is very likely that your current Profiles/Permission Sets are over-permissioning most of your users. Giving every user read/write/delete/export on the core data in Salesforce is high risk. It not only opens you up to accidental security breaches but also to the risk that a “bad actor” could wreak havoc. It could also compromise your regulatory compliance, leading to fines and reputational damage.
This needs to be tackled as a project with very clear objectives. It can be done in phases, so that users see no disruption. The restructure removes permissions that they never needed. The end result is that it is far easier and simpler to allocate the correct permissions to new users, saving admin time. The entire system is also more secure, helping your CISO or Head of Compliance sleep at night.
We’ve developed some functionality, working with the Salesforce team, that helps with the detailed analysis of your current profiles and permissions so that you understand how and where to consolidate. We call this suite of capabilities Permissions Explorer.
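As a first, rough read on over-permissioning, the standard ObjectPermissions object shows which profiles and permission sets grant destructive access to an object. The sketch below lists everything granting Delete or Modify All on Account; the credentials are placeholders, and a real consolidation exercise also needs to consider permission set group assignments and field-level security, which is where tooling like Permissions Explorer comes in.

```python
import requests

# Placeholders - swap in your own org URL, token, and API version; Account is an example.
INSTANCE_URL = "https://yourorg.my.salesforce.com"
ACCESS_TOKEN = "REPLACE_WITH_ACCESS_TOKEN"
API_VERSION = "v60.0"

soql = (
    "SELECT Parent.Name, Parent.IsOwnedByProfile, Parent.Profile.Name "
    "FROM ObjectPermissions "
    "WHERE SobjectType = 'Account' "
    "AND (PermissionsDelete = true OR PermissionsModifyAllRecords = true)"
)
resp = requests.get(
    f"{INSTANCE_URL}/services/data/{API_VERSION}/query",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"q": soql},
)
resp.raise_for_status()

# Every row is a profile or permission set granting Delete or Modify All on Account.
for rec in resp.json()["records"]:
    parent = rec["Parent"]
    if parent["IsOwnedByProfile"]:
        print(f"Profile:        {parent['Profile']['Name']}")
    else:
        print(f"Permission set: {parent['Name']}")
```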
Process Builder/Workflow to Flow migration
Your Process Builder and Workflow Rule automations (PBWs) may be functioning perfectly well. But as you make changes to them, that is the opportunity to migrate them to Flow. This is probably done on a project-by-project basis where the PBWs are part of the project scope, rather than creating a project to migrate all PBWs. When you migrate them, do not simply create a Flow that mirrors the PBW. Ensure you have the time to reevaluate the underlying business process and refactor the automation into one or more Flows. Or maybe the PBW is not needed at all and you can simply delete it.
Rearchitecting data model
This is a major project, and the benefits and costs are very specific to the problem you are solving. Showing the user cost of architectural tech debt is often difficult. Therefore it is hard to build a business case, and these projects are delayed again and again until it becomes impossible to maintain the system. Or it raises the question, “Should we throw away the current org and start again with a clean org?” This is often because the analysis of the issues is too difficult to do manually. With a metadata dictionary and automated impact analysis, supported by AI, this task becomes far easier.
Example calculation
Let’s just do a worked example of the user cost of complex screens.
Pick a page layout that has a high level of traffic by a large number of users. For Sales Cloud, this is probably Opportunity. For Service Cloud, this is Case.
Build up the calculation like this (a worked sketch in code follows the list):
- Estimate the time taken to add a new record. Better still, time some users adding new records. Multiply that time by the number of records added per year and by the users’ loaded hourly cost. That gives you a total annual cost of data entry.
- Look at the data quality of the new records added, and estimate the time to correct that data. Multiply that time by the loaded hourly cost of whoever corrects it. That gives you a total annual cost to fix the data.
- Use AI to do the Field Analysis and Help Text analysis to understand what can be simplified or removed.
- Mock up a new screen – not full development and deployment – and then repeat the task of adding a new record to understand the new total cost.
- You now have the original costs (entering and fixing) and the new costs (entering), so you have the potential savings.
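Here is the same arithmetic as a small script, with illustrative numbers that you would replace with your own timings, record volumes, and loaded costs.

```python
# Worked example with illustrative numbers - replace them with your own measurements.

HOURLY_LOADED_COST = 60.0        # fully loaded cost per user hour

# Current state
minutes_to_add_record = 6.0      # timed with real users on the current layout
records_per_year = 50_000        # records created on this page per year
pct_records_needing_fix = 0.20   # share of new records whose data must be corrected
minutes_to_fix_record = 10.0     # correction time per bad record

# After mocking up the simplified screen
minutes_to_add_record_new = 3.5  # timed against the mock-up

def annual_cost(minutes_per_record: float, records: float) -> float:
    """Convert a per-record time into an annual cost at the loaded hourly rate."""
    return minutes_per_record / 60.0 * records * HOURLY_LOADED_COST

entry_cost = annual_cost(minutes_to_add_record, records_per_year)
fix_cost = annual_cost(minutes_to_fix_record, records_per_year * pct_records_needing_fix)
new_entry_cost = annual_cost(minutes_to_add_record_new, records_per_year)

print(f"Current data entry cost: {entry_cost:,.0f}")
print(f"Current correction cost: {fix_cost:,.0f}")
print(f"Projected entry cost:    {new_entry_cost:,.0f}")
print(f"Potential annual saving: {entry_cost + fix_cost - new_entry_cost:,.0f}")
```

The difference between the original costs (entering and fixing) and the new cost (entering) is the annual saving to put in the business case.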
I can’t put a cost on that
There is a larger issue that often overshadows any detailed cost-benefit analysis and all the other metrics: security and regulatory compliance. If you are concerned about security breaches and failing regulatory compliance audits due to your current security model (Profiles, Permission Sets, and Permission Set Groups), then there is a clear mandate that doesn’t require a detailed ROI.
The other future issue is that, without reducing technical debt, it will prove very difficult, if not impossible, to implement either Data Cloud or AI effectively. Both can provide a great incentive to launch a targeted tech debt clean-up exercise. And the opportunity cost of not leveraging AI is huge.
Ian Gotts, Founder & CEO
13 minute read
Published: 23rd June 2024