Handing Off Salesforce Projects from Business to DevOps



The goal of every change project is adoption. The change could be minor: updating a discrete business process based on feedback, addressing poor adoption, or eliminating waste. But the change could also be a strategic, top-down digital transformation: the application of AI, the merging of two orgs, or the implementation of a new Salesforce cloud like CPQ. It starts with a clear definition of the changes that the business needs to make to people, process, and technology.

Rarely is any change simply a process or people change. Normally the transformation requires changes to the supporting applications, with Salesforce at the heart.

The challenge has often been clearly explaining what changes need to be made in the applications. Rushing to start building without spending sufficient time on analysis and planning leads to poor adoption and rework. In the worst cases, the changes are never used or the project is scrapped before it is even delivered.

It is estimated that the total cost of this lack of planning is over $1 trillion across all IT projects worldwide. That is the wasted cost of rework and failed projects. And it doesn't include the opportunity costs: the inability of the business to operate more efficiently and the lost competitive advantage – which was the reason the changes were initiated in the first place.

The critical handover document from the business teams to the development teams is the user story. This has a well-established format so huge time savings are possible using GPT to generate the first draft.  The user story is developed by the business teams in a Change Intelligence Platform but is then synchronized to the DevOps tools or ticketing systems for the development teams to work from. 

The business and development teams need to work collaboratively as every change moves around the implementation lifecycle. In the diagram, purple represents the business users and blue the developers.


Importance of the handover

If the development teams are to understand exactly what changes are required, those changes need to be communicated unambiguously. Describing multiple dependent changes to complex systems is difficult. Background and context are critical. Words are open to misinterpretation. But images alone are not enough.

Get this wrong – even slightly wrong – and the dev team builds the wrong thing. Any effort is completely wasted or, at best, results in costly, time-consuming rework. It also erodes the trust and confidence of the business in IT's ability to deliver changes at the pace needed to stay competitive.

It doesn't matter how efficient or good the development teams are; they cannot develop the right solution if they are going in the wrong direction. The development metrics, as we will see later, measure the performance of the build, not the overall outcome: adoption. DevOps cannot positively impact user adoption, speed to value, or ROI – that is set at the point of the handover. They can only make it worse.

The handover document is a user story. Too often it is incomplete, inaccurate, or ambiguous.

Responsibilities of the business team

The responsibility of the business-facing teams is to understand the true requirements and develop user stories that are passed to development. This requires rigorous analysis. It does not have to be massively time-consuming, but any effort here can save 10x or even 100x effort downstream – see “Shift Left”.

The user story, often called the work item, is the ultimate deliverable. It is the definition of the system changes that need to be made. It is granular and detailed, so it is impossible to write user stories until you really understand the true requirements. Capturing requirements as a list of disconnected needs does not get to the heart of the overall business change, and it does not guarantee the accuracy or completeness of the requirements.

The most effective approach is visual requirements capture using business process maps in UPN (Universal Process Notation), a format designed to engage business users. It drives out the detailed requirements. It also provides documentation and training materials.

Unless you apply a rigorous, detailed approach to requirements capture, there is a very high likelihood that you’ve not understood them in enough detail. Then there is absolutely no chance of writing the correct user stories. Failure is almost guaranteed.

The next issue is ambiguity. You may have the correct requirements and have written good user stories that capture the needed changes. But if the development team does not have a view of the context and the impact (technical, business, regulatory), they cannot plan and prioritize the release or allocate the right level of technical resources.

Finally, you cannot manage business users' expectations of the delivery timescale. Without a clear understanding of the scope of the change, you need to “pad” the estimates until every change ends up being estimated as a standard “3 months”.

Too often this analysis phase is rushed because of time pressures. As we’ve said, mistakes made here are amplified in future phases. GPT can now support the business team in the analysis, so the first draft of some of the deliverables can be created automatically, increasing productivity by up to 100x.

There are a couple of techniques that can help ensure the requirements are really understood and validated by the business. Visual requirements capture – business process maps – is a very fast way to engage business users. Maps are far less ambiguous than a textual document, and they drive out missed requirements. Combined with the ERD, they ensure that the solution is implementable. They are also understood by every stakeholder: business users (at every level), business analysts, architects, admins, developers, and compliance teams.

Elements' embedded GPT can draw a process map from a textual prompt. This could be the transcript of an interview, the description of a process, or a procedure document. GPT combines this with the knowledge of processes in its LLM (Large Language Model) and draws a process map in the UPN format. This is a huge productivity boost, and it also helps when teams stall looking at a blank screen.

User Stories – Good to Great

We've said the user story is a pivotal document. The more supporting and contextual information that can be associated with the user story – without overwhelming the developers – the better chance they have of building the right thing.

The user story can be developed by the business users inside Elements, where they have access to analysis insights and documentation: process maps, business requirements, notes, discussions, and metadata. These attachments mustn't be lost in the handover when the user story is loaded into the ticketing systems used by the development teams.

The challenge is that the ticketing systems developers often use – like Jira – cannot hold all this contextual information. Elements syncs with Jira and has a Chrome plugin that provides an overlay so all the information is visible in the Jira ticket. This means the developers don't need to switch between Jira and Elements, and they get the full picture.

The user story has a standard format:

As a… I want to… so that I can…

But ideally, there is other supporting information:

  • Status: A user story moves through a lifecycle from planned through to implemented, or deleted/archived because it will not be implemented
  • Acceptance criteria: These help the developers create their tests
  • Business process maps: Which business process is impacted or improved
  • Business requirements: Which requirements drove the need for the user story
  • Related user stories: Rarely is a user story created in isolation
  • Roles impacted: Which groups of people in the organization are going to be affected
  • Impact across 3 dimensions:
      • Business: what are the risks and the impact of the change on operations
      • Regulatory: do the changes put regulatory compliance at risk
      • Technical: what is the full extent of the system changes, including the technical dependencies
  • Suggested metadata: Which metadata is going to be added or updated
  • Release: Which release is this planned for
  • Conflicts: If the same metadata appears on different user stories in the same release, or in different releases, this can be automatically flagged
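One way to see why this supporting information matters is to treat the user story as a structured record rather than free text. The sketch below (in Python, with illustrative field names – this is not Elements' actual data model) shows a user story carrying the attachments listed above:

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    PLANNED = "planned"
    IN_PROGRESS = "in progress"
    IMPLEMENTED = "implemented"
    ARCHIVED = "archived"          # will not be implemented

@dataclass
class UserStory:
    role: str                      # "As a..."
    goal: str                      # "I want to..."
    benefit: str                   # "So that I can..."
    status: Status = Status.PLANNED
    acceptance_criteria: list = field(default_factory=list)
    process_maps: list = field(default_factory=list)    # linked UPN diagram ids
    requirements: list = field(default_factory=list)    # driving requirement ids
    related_stories: list = field(default_factory=list)
    roles_impacted: list = field(default_factory=list)
    impact: dict = field(default_factory=dict)          # keys: business, regulatory, technical
    metadata: set = field(default_factory=set)          # Salesforce metadata API names
    release: str = ""

    def summary(self) -> str:
        return f"As a {self.role}, I want to {self.goal}, so that I can {self.benefit}."

# Hypothetical example story
story = UserStory(
    role="sales manager",
    goal="see open quotes on the account page",
    benefit="review deals without leaving the record",
    acceptance_criteria=["Quotes related list shows status and amount"],
    metadata={"Quote.Status", "Account_Quote_Panel"},
)
print(story.summary())
```

Holding metadata and release as structured fields, rather than prose, is what makes the conflict-checking described above possible.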

The business and regulatory impact assessment is made easier if the operational processes are mapped and the process maps are attached to the user story. The technical impact assessment is massively improved by having access to the org Metadata Explorer capabilities inside Elements: Custom Metadata Views, Dependency Trees, Dependency Grids, and Metadata Search.

ElementsGPT can write really good user stories by looking at the activity boxes on a UPN process map. It takes the activity box description, the input and output text, and the resources with their RASCI (Responsible, Accountable, Supportive, Consulted, Informed) assignments.

ElementsGPT can write detailed descriptions in the correct format and detailed acceptance criteria.  Writing user stories is boring, repetitive, and time-consuming. For any change, there is often a large number of user stories. GPT not only reduces the time taken but also improves the accuracy. 

But it doesn't stop there. As Elements also has access to the thousands of metadata items in the org, ElementsGPT can now assess each user story and suggest existing metadata that could be used to support it, or what new metadata needs to be created. That metadata can then be attached to the user story. This is also what drives the conflict-checking.

Whilst these are only recommendations, there is a massive time saving, particularly if the team is new to the org. It can help reduce technical debt, because it suggests existing metadata that may not have been known about. The more accurate the names and descriptions of the metadata, the better the recommendations. 
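To illustrate the conflict-checking idea: once each user story carries the metadata it touches and its target release, flagging collisions is a grouping exercise. A minimal sketch (plain dicts with illustrative field names – not Elements' actual algorithm):

```python
from collections import defaultdict

def find_conflicts(stories):
    """Flag metadata items touched by more than one user story
    in the same release."""
    touched = defaultdict(list)            # (release, metadata item) -> story ids
    for story in stories:
        for item in story["metadata"]:
            touched[(story["release"], item)].append(story["id"])
    return [
        {"release": release, "item": item, "stories": sorted(ids)}
        for (release, item), ids in touched.items()
        if len(ids) > 1
    ]

# Hypothetical stories: two in release R1 both change Quote.Status
stories = [
    {"id": "US-101", "release": "R1", "metadata": {"Quote.Status", "QuoteTrigger"}},
    {"id": "US-102", "release": "R1", "metadata": {"Quote.Status"}},
    {"id": "US-103", "release": "R2", "metadata": {"QuoteTrigger"}},
]
print(find_conflicts(stories))
```

The same grouping, keyed on the metadata item alone, would surface conflicts across different releases as well.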

DevOps – Delivering results through collaboration

Once the user story has been synced to the ticketing system the developers use, it can be planned. This means looking at the level of technical complexity and risk, which determines the teams that will be allocated the work, but also the level of testing and which pipeline it will be put through. A simple, low-risk set of changes can go through a more streamlined pipeline with fewer steps. More complex changes will need to go through the full pipeline of stages, which could include UAT and integration testing.

It may be that the way the business analysts envisioned the delivery will not work in practice, so some discussion is needed. Having the same user story in both Elements and the ticketing system makes that collaboration easier to handle. There is less chance of a change being made that is not reflected back in the documentation, such as process maps, acceptance criteria, or metadata descriptions.

The user story will move through stages in its lifecycle and have additional information added. This all adds to the overall documentation that makes it faster to do future impact analysis of the metadata updated by the user story. 

Wherever possible we should be entering documentation once, or auto-generating it, and then making it visible everywhere in context. For example, it is really valuable to see that a metadata item has been changed by 5 user stories, and that there is one user story in progress, and one planned which will change it. This happens automatically, when a user story is associated with a metadata item. It could be because ElementsGPT recommended it, or it was linked when navigating around the dependency tree.

Changes may be made through code (pro-code, i.e. Apex), declaratively (low-code), or a combination. The same approaches and disciplines apply, no matter what delivery method is used. The development teams use tools to drive changes effectively. Often these are integrated, or a single suite covers all of the needs:

  • “Ticketing” for tracking user stories
  • Source control
  • Release management
  • Testing
  • Feedback and bug tracking

All these tools enable rapid, continuous delivery, making changes consistent, reliable, and agile. Often all of these suffer as the size of the development organization grows, but it doesn't have to be that way. What is important is that the development team can collaborate and work together, and also that they are not disconnected from the business users who are identifying the requirements.

DevOps Metrics

DevOps Research and Assessment (DORA) developed four metrics that are universally adopted by development teams. The four DORA metrics are:

  • Deployment frequency: How often a software team pushes changes to production
  • Change lead time: The time it takes to get committed code to run in production
  • Change failure rate: The share of incidents, rollbacks, and failures out of all deployments
  • Time to restore service: The time it takes to restore service in production after an incident
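As a rough illustration of how these four metrics fall out of deployment records (the field names below are illustrative; real DevOps platforms compute them from pipeline and incident data):

```python
from datetime import datetime, timedelta

def dora_metrics(deployments, window_days=30):
    """Each deployment is a dict with 'committed_at' and 'deployed_at'
    datetimes, a 'failed' flag, and 'restored_at' for failures.
    Field names are illustrative, not from any specific tool."""
    n = len(deployments)
    lead = sum((d["deployed_at"] - d["committed_at"] for d in deployments),
               timedelta()) / n
    failures = [d for d in deployments if d["failed"]]
    restore = (sum((d["restored_at"] - d["deployed_at"] for d in failures),
                   timedelta()) / len(failures)) if failures else timedelta()
    return {
        "deployment_frequency": n / window_days,   # deployments per day
        "change_lead_time": lead,
        "change_failure_rate": len(failures) / n,
        "time_to_restore": restore,
    }

# Two hypothetical deployments in a 30-day window, one of which failed
deploys = [
    {"committed_at": datetime(2024, 1, 1), "deployed_at": datetime(2024, 1, 2),
     "failed": False},
    {"committed_at": datetime(2024, 1, 3), "deployed_at": datetime(2024, 1, 5),
     "failed": True, "restored_at": datetime(2024, 1, 5, 6)},
]
m = dora_metrics(deploys)
```

Note that every input comes from the build-and-deploy pipeline: nothing in the calculation says whether the delivered change was the right one.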

These metrics are specific to the development team and do not directly measure the overall success of the program. But they do focus the development teams: they drive them to deliver the user story as quickly and safely as possible. As we've said, if the user stories are not correct, there is nothing the developers can do to improve them. Delivering on the metrics doesn't guarantee success (time to value); it ensures the efficiency of the build process. If the teams are told the wrong thing, or misunderstand what to build, they simply build the wrong thing faster.

Time to Value Metrics

Unlike the DevOps metrics, the overall metrics for time to value are not well established, nor are they consistent from project to project. This makes it impossible to benchmark projects. There is a high-level ROI calculation that many projects perform, but we need more granular metrics that can be applied to every project, no matter the size or scope. The challenge with standardizing on a set of metrics is that there are no existing baselines: there is no agreement on how to measure or what to aim for.

The first two metrics cover the business analysis phase: requirements capture and user story creation. The measure is speed; these are quantitative. The requirements and user stories still need to meet the qualitative standard. These are the measurements you should be able to get out of the Change Intelligence Platform, just as the DevOps metrics come from the DevOps platform. They can be applied to any project, no matter the scale, the complexity of the implementation, or the prior levels of technical debt.

– time to capture and get agreement on requirements 

– time from signed-off requirement to complete user stories

The next metrics are about the quality of the business analysis. They measure the overall outcome – the success of the project – and, together with the first two metrics, make up “Time to Value”:

– % of user stories that get implemented

– % of user stories that need rework – “Noisy Waste”

– % of user stories that are delivered but not used – “Quiet Waste”
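Assuming each user story's outcome is tracked (implemented or not, reworked or not, adopted or not), these percentages are straightforward to compute. A sketch with illustrative flags, not a standardized definition:

```python
def time_to_value_metrics(stories):
    """stories: list of dicts with boolean 'implemented', 'reworked',
    and 'used' flags (illustrative field names)."""
    total = len(stories)
    implemented = [s for s in stories if s["implemented"]]
    quiet = [s for s in implemented if not s["used"]]
    return {
        "implemented_pct": 100 * len(implemented) / total,
        "noisy_waste_pct": 100 * sum(s["reworked"] for s in stories) / total,
        # quiet waste is measured against what was actually delivered
        "quiet_waste_pct": 100 * len(quiet) / len(implemented) if implemented else 0.0,
    }

# Hypothetical outcome data for four user stories
stories = [
    {"implemented": True,  "reworked": False, "used": True},
    {"implemented": True,  "reworked": True,  "used": False},
    {"implemented": False, "reworked": False, "used": False},
    {"implemented": True,  "reworked": False, "used": True},
]
m = time_to_value_metrics(stories)
```

The hard part is not the arithmetic but agreeing on how "reworked" and "used" are defined and collected, which is exactly the baselining problem described above.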

There are also measures of org complexity and technical debt. These are only relevant within the context of an implementation. High levels of complexity are not necessarily bad: a complex organization will have a complex implementation, but if it is still agile then that is ok. A simple implementation that is impossible to change is bad. Technical debt in areas that are not changed is, again, acceptable. But high levels of technical debt in parts of the org that are changed regularly, killing agility, is clearly not.

Final Word 

The focus of any change project – whether tactical or strategic – should be adoption. This requires tight collaboration between the business users – who understand what they need – and the development team – who are able to deliver it.

The critical handover document is the user story. Too often, there is not enough rigorous business analysis to ensure that the user story is accurate and complete. Effort spent at this phase saves 10x or 100x in rework downstream.

GPT can now provide massive productivity gains in the business analysis phase: auto-generating process maps, writing user stories, and recommending solutions. This not only reduces the effort but increases the accuracy of the work. But it relies on strong business analysis work.

The established DevOps metrics drive the performance of the development teams, but if the user stories are not correct then the overall objectives are missed. New metrics are emerging that support the overall delivery lifecycle and help align both the business users and the development teams.
