Optimize your Salesforce Flow Architecture for better performance and scalability

Salesforce flows drive automation across the Salesforce platform; many business-critical operations, as well as productivity-enhancing processes, are powered by flows. But what happens when these critical processes slow down your operations? As organizations grow and processes evolve, it is important to optimize your flow architecture to prevent problems such as slow performance. In this blog, we'll walk through actionable steps for reviewing and optimizing your Salesforce flow architecture and show how Elements can support you in the process.

When and why to review Flow Architecture?

Well-architected flows streamline operations, enhance productivity, and maintain data integrity. However, outdated or overly complex flows risk slowing performance, causing failures, and complicating maintenance. Given the critical role flows play in automation and business continuity, reviewing and updating their architecture regularly is not just a best practice; it is a strategic necessity. Aim to review at least three times a year, aligned with Salesforce's major releases. After just one year, a flow could be three versions behind, missing critical bug fixes, performance improvements, and new flow capabilities.

Beyond regular reviews, you should also consider flow architecture improvements when:
- You have never reviewed your flows at scale.
- Flows are frequently added or modified in your Org.
- You face frequent flow failures.

Before diving into the review process, make sure you have the necessary tools and knowledge at your disposal. The following prerequisites will set you up for success.

Prerequisites

- Salesforce Metadata Management license
- A Salesforce Org synced into Metadata Dictionary (Production or a Sandbox, provided the Sandbox was refreshed relatively recently)

Metafields definitions

During the flow architecture review, you will need to assess the insights provided by Elements and decide on the required actions. To record those decisions, create the following two Metafields for your flows:

Complexity review (picklist)
- Values: No Action Required, Simplify, Break into Subflows, Rebuild as Orchestrator
- Purpose: Assess whether the flow's complexity is justified and, if not, what action needs to be taken.

Flow Optimization Review (picklist)
- Values: Overlapping logic, Overlapping triggers, Needs asynchronous logic
- Purpose: Determine whether multiple flows need to be merged, consolidated, or optimized for performance by introducing asynchronous paths.

8 Steps to improve Salesforce Flow Architecture using Elements

Step 1: Scan the current Flow health with Analytics 360

Before auditing individual flows, it is a good idea to review the aggregate health and quality posture of your flow architecture with Analytics 360. Open the 'Automation overview' dashboard to analyze the flows in your org, and evaluate the key areas below to understand your flow architecture better and identify areas for improvement.

Flows by type

Analytics 360 provides a breakdown of all flows in your core Org (excluding managed packages) by flow type. This breakdown includes classifications like 'record-triggered', 'screen flow', 'platform-event-triggered', 'auto-launched', 'schedule-triggered', 'no-trigger', 'orchestrator', and others. The analysis shows how your org relies on different flow types and reveals patterns in automation. For instance, a high proportion of record-triggered flows indicates heavy reliance on automation for data changes, while low usage of orchestrations could suggest missed opportunities to simplify complex processes.
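If you want to sanity-check this breakdown outside of Elements, the raw data is queryable. Below is a minimal sketch, assuming you already have an OAuth access token and that your API version exposes the FlowDefinitionView object (available in recent versions); the org URL and token are placeholders.

```python
import requests
from collections import Counter

INSTANCE_URL = "https://yourorg.my.salesforce.com"  # placeholder org URL
ACCESS_TOKEN = "REPLACE_WITH_SESSION_TOKEN"         # placeholder token

def query_all(soql: str) -> list[dict]:
    """Run a SOQL query via the REST API, following pagination."""
    url = f"{INSTANCE_URL}/services/data/v61.0/query/"
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    records, params = [], {"q": soql}
    while url:
        resp = requests.get(url, headers=headers, params=params)
        resp.raise_for_status()
        body = resp.json()
        records.extend(body["records"])
        url = INSTANCE_URL + body["nextRecordsUrl"] if not body["done"] else None
        params = None  # nextRecordsUrl already encodes the query
    return records

flows = query_all(
    "SELECT ApiName, ProcessType, TriggerType "
    "FROM FlowDefinitionView WHERE IsActive = true"
)
# Count active flows by (process type, trigger type) to mirror the
# 'Flows by type' breakdown from Analytics 360.
breakdown = Counter((f["ProcessType"], f["TriggerType"]) for f in flows)
for (process_type, trigger_type), count in breakdown.most_common():
    print(f"{process_type} / {trigger_type or 'no trigger'}: {count}")
```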
Flows by fault coverage

Flows by fault coverage is a custom metric calculated by Elements. It scores each flow based on the percentage of possible fault paths that have actually been created. If a flow has no DML or action elements requiring a fault path, the score is set to 'not applicable'. Fault paths are critical for error handling, especially in database transaction operations or system integrations, where failures without fault paths can result in cascading errors or data corruption. For example, flows lacking fault paths on email alerts are at risk of unhandled failure. Documenting every possible fault path is a pattern recommended by Salesforce's Well-Architected framework.
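Elements calculates this score for you, but to make the metric concrete, here is a rough approximation you can run against a flow's Metadata API XML: count the elements that support fault connectors and check how many actually declare one. This is a hedged sketch, not Elements' actual formula, and the file path is hypothetical.

```python
import xml.etree.ElementTree as ET

NS = {"f": "http://soap.sforce.com/2006/04/metadata"}
# Flow element types that support fault connectors in Flow metadata XML.
FAULTABLE = ["recordCreates", "recordUpdates", "recordDeletes",
             "recordLookups", "actionCalls"]

def fault_coverage(flow_xml_path: str) -> float | None:
    """Percent of fault-capable elements with a fault path; None if N/A."""
    root = ET.parse(flow_xml_path).getroot()
    total = covered = 0
    for tag in FAULTABLE:
        for element in root.findall(f"f:{tag}", NS):
            total += 1
            if element.find("f:faultConnector", NS) is not None:
                covered += 1
    if total == 0:
        return None  # no DML or action elements: 'not applicable'
    return round(100 * covered / total, 1)

# Hypothetical flow file retrieved with the Metadata API:
print(fault_coverage("flows/Order_Fulfillment.flow-meta.xml"))
```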
Flows by complexity

High-complexity flows are like a tangled web of electrical wiring: difficult to trace, prone to short circuits, and expensive to fix. Simplifying these flows is like rewiring with a clear diagram, ensuring smoother operations and fewer outages. The complexity breakdown of your Org's flows reveals how many could be broken down into smaller, composable subflows.

Flows by API version

Every Salesforce release introduces a new API version, with new features, bug fixes, and various improvements. But Salesforce does not automatically update the API version of flows, because automatic updates could disrupt custom business logic and cause errors in existing functionality. By keeping upgrades manual, Salesforce ensures admins and developers have time to test their custom code and flows in a sandbox before going live. It is therefore your responsibility to keep your flows on the latest API version so they benefit from the newest bug fixes, enhancements, and features. Over time, flows that lag behind exhibit inconsistent behaviour, varied performance, and, ultimately, errors. In the technical debt dashboard in Analytics 360, you can check the API version breakdown for all your flows.

Step 2: Create a custom view for Flows

Within the Metadata Dictionary, create a custom view of metadata for flows with the following attributes:
- Name
- API name
- Sub-type (e.g. record-triggered, screen, auto-launched, orchestration)
- Complexity score level (the numeric value of the complexity)
- Total complexity (the complexity category, such as 'High')
- API version
- Fault path coverage
- Immediate run (whether a flow has before-save logic)
- Asynchronous run (whether a flow has after-save logic)

This view provides a detailed snapshot of your flows, helping you identify which ones need immediate attention. It also lets you spot compounded issues, such as a highly complex flow with no fault path coverage that runs on an outdated API version.

Step 3: Identify overly complex flows that can be simplified

'Complexity' is neither good nor bad; it all depends on the business and technical context. You may have a flow running a unique, business-critical price calculation that is innovative in nature and a source of competitive advantage for your business, and in such a case 'complexity' is expected. However, Salesforce's Well-Architected framework identifies the following patterns for flows:

Flow patterns
- Flows are organized in a hierarchical structure consisting of a main flow and supporting subflows.
- Each flow serves a single, specific business process.
- Complex sequences of related data operations are built with Orchestrator (instead of invoking multiple subflows within a monolithic flow).
- Subflows are used for the sections of processes that need to be reused across the business.

When those design patterns are not met, you can expect to see complex or highly complex flows. Here are the proposed steps to identify flows that could be simplified.

Step 3.1: Find complex flows with subflows that can be turned into a Flow Orchestrator

Salesforce Flow Orchestrator provides a clear, modular way to manage complex, multi-step processes by breaking them into distinct stages, which makes it easier to track progress and manage each step. It improves fault handling, allowing more granular error management and retries, and it supports asynchronous processing for long-running operations, reducing the risk of performance bottlenecks. To identify complex, monolithic flows that could be improved by transforming them into an orchestration:
- Filter your custom view of metadata to show only 'no trigger' flows (these can be invoked as subflows).
- Bulk-select the listed flows and open the dependency grid using the context menu. It shows your no-trigger flows and which other flows use them, with each relationship on a single row.
- Using the 'Dependent API name' column, identify any flow that appears multiple times, meaning it calls multiple subflows.
- Using the 'Complexity review' Metafield you created, classify each such flow as 'Rebuild as Orchestrator'.

Step 3.2: Identify flows that are complex due to reused, duplicated logic

Elements scores flow complexity based on the number of flow elements, with a numeric score assigned to each element type. In other words, the more blocks a flow has, and the more loops, decisions, and subflows it uses, the higher its complexity. Salesforce's Well-Architected framework advocates composability in automation design; logic repeated across different flows is considered an anti-pattern and contributes to flow complexity. Here is how you can use Elements to quickly identify flows that could be using duplicated logic:
- Create a new custom view of metadata, this time listing:
  - Metadata type: Standard Object and Custom Object
  - Columns: Label, API name
- Bulk-select the listed objects (100 at a time) and open the dependency grid using the context menu. It shows all automations and report types using those objects.
- In the dependency explorer grid, find the column titled 'Dependent type' (third from the right) and set its filter to 'contains': 'flow'. This narrows the dependent metadata to flows only.
- Review the values in the 'Write', 'Read', and 'Relationship description' columns. They tell you whether the flow takes data from the object (e.g. a record lookup), writes data into the object (e.g. create, update, or delete a record), and in which elements the object is referenced. Use that information to look for patterns, as in the sketch below.
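One practical way to look for those patterns is to export the dependency grid to CSV and analyze it with pandas. The sketch below assumes column headers named 'Dependent type', 'Dependent API name', 'Write', and 'Label'; your export's exact headers may differ, so adjust accordingly.

```python
import pandas as pd

# Hypothetical export of the Elements dependency grid.
df = pd.read_csv("object_flow_dependencies.csv")

# Keep only flow dependencies (mirrors the "contains: 'flow'" filter).
flows = df[df["Dependent type"].str.contains("flow", case=False, na=False)]

# Objects written to by many different flows are prime candidates for
# overlapping, duplicated logic.
writers = (
    flows[flows["Write"].notna()]
    .groupby("Label")["Dependent API name"]
    .nunique()
    .sort_values(ascending=False)
)
print("Objects written by more than one flow:")
print(writers[writers > 1])

# Flows that touch many objects are candidates for 'Break into Subflows'.
busy_flows = flows.groupby("Dependent API name")["Label"].nunique()
print("\nFlows referencing three or more objects:")
print(busy_flows[busy_flows >= 3].sort_values(ascending=False))
```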
We recommend downloading the listed dependencies and uploading them to ChatGPT or another generative AI that can interpret CSV files. Explain the columns and their meanings, and ask it to identify potential patterns. The CSV contains metadata names rather than sensitive business or client data, so there shouldn't be any concerns about sharing the file with an AI model.

For flows that have been identified as having duplicate logic, document the Metafields:
- Complexity review: Break into Subflows
- Flow Optimization Review: Overlapping logic

Step 3.3: Review flow definitions

At this point, you have identified the flows that can be rebuilt as orchestrations or broken down into subflows to avoid duplicated logic across multiple flows. What remains is to audit the other flows whose complexity is classified as High or Extremely High. This requires manual inspection of the flow logic and the elements used, and identification of opportunities for simplification.

Sort your custom view of metadata by complexity score level, from highest to lowest; your most complex flows will now appear at the top of the list.

Tip: prioritize flows with API versions in the 40s or low 50s. Every Salesforce release (each new API version) introduces new flow features, so flows that have not been updated since they were built years ago miss many of the newer elements that help with complex logic and batch processing. They are the most likely candidates for simplification.

Go through the listed flows one by one. Open each Salesforce flow by clicking the blue cloud icon in the right panel, then open the most recent flow version to analyze its structure. For flows that you have identified as having unnecessary complexity, document the Complexity review as either:
- 'Simplify': the logic needs to be rebuilt using modern elements, or
- 'Break into Subflows': the flow performs multiple business processes.

Step 4: Optimize record-triggered flows per object

The number of record-triggered flows on your objects should be part of your business architecture strategy, and it should be an intentional, consistent design principle. You can use Well-Architected and community blogs to help you develop your design principles for record-triggered flows. Here is how you can use Elements to understand your record-triggered flow architecture and check that it meets your design standards:
- Create a new custom view of metadata, this time listing:
  - Metadata type: Standard Object and Custom Object
  - Columns: Label, API name
- Bulk-select the listed objects (100 at a time) and open the dependency grid using the context menu. It shows all automations and report types using those objects.
- In the dependency explorer grid, apply the following filters:
  - 'Dependent type' (third from the right): set to 'contains': 'flow'.
  - 'Trigger action' (fifth from the right): set to 'is not empty'.

You now have a list of all record-triggered flows across the selected objects. Look for:
- Multiple flows triggered on the same object.
- Trigger action (the record operation that triggers the flow).
- Trigger type (whether the flow runs before or after save).

If you find flows that trigger on the same object in the same way, open them in Salesforce and check whether they have specific entry conditions. The sketch below automates this first pass over an exported grid.
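As in step 3.2, the column names here ('Trigger action', 'Trigger type', 'Label', 'Dependent API name') are assumptions about the export format; adjust them to your actual CSV.

```python
import pandas as pd

# Hypothetical dependency-grid export; rows without a trigger action are
# not record-triggered flows, so drop them first.
df = pd.read_csv("record_triggered_flows.csv")
rtf = df[df["Trigger action"].notna()]

# More than one flow per (object, trigger action, trigger type) group
# means overlapping triggers worth reviewing for entry conditions.
groups = (
    rtf.groupby(["Label", "Trigger action", "Trigger type"])["Dependent API name"]
    .agg(list)
)
for (obj, action, trigger_type), flow_names in groups.items():
    if len(flow_names) > 1:
        print(f"{obj} [{action} / {trigger_type}]: {', '.join(flow_names)}")
```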
For flows that have been identified as having overlapping triggers, document the Metafield:
- Flow Optimization Review: Overlapping triggers

Step 5: Identify flows that should be using asynchronous logic

Asynchronous processes are requests that are not executed in real time but separately, later: asynchronous operations are put in a queue and executed one at a time. Salesforce recommends that flows involving external system callouts or long-running processes use asynchronous paths to avoid timeouts and transaction limits. Synchronous operations are generally recommended when the user needs to receive the outcome in real time, improving the user experience. Elements can help you identify candidates among your flows that could use asynchronous logic:
- Sort your custom view of metadata by complexity score level, from highest to lowest; your most complex flows will now appear at the top of the list.
- Filter the view to show only flows where 'Asynchronous run' has the value 'No'.
- Review each complex flow one by one. Look at its description and any documentation left in the Elements right panel. Finally, open the flow in Salesforce and investigate its logic to understand whether the flow's purpose is to provide the user with an outcome in real time.

For flows that do not need to provide immediate results, are complex, and involve long-running processes, categorize them using the Metafield:
- Flow Optimization Review: Needs asynchronous logic

However, even well-structured asynchronous flows can falter if error handling is overlooked. To ensure seamless operation, the next step focuses on identifying and addressing suboptimal error handling in flows.

Step 6: Identify suboptimal error handling in flows

Salesforce's Well-Architected framework specifies that all flows should consistently use fault paths, but screen flows are singled out as especially needing them, so that users receive educational error messages that help them troubleshoot issues themselves when possible. Elements can help you identify flows that lack fault paths by letting you act on the fault path coverage score:
- Sort your custom view of metadata by complexity score level, from highest to lowest, so the most complex flows appear at the top.
- Filter the view to show only flows where:
  - 'Sub-type' is screen flow.
  - 'Fault coverage' is less than 90 (the score runs from 0 to 100 and represents a percentage).

Most of your flows likely lack fault paths, as Salesforce introduced this feature only recently. To address this, you can create backlog stories to extend flows with appropriate fault paths; for details, see step 8.

Step 7: Prioritize Flow Optimization Using a Matrix

You have reviewed and identified all the optimization actions for your flows. However, chances are that you do not have the time to improve all of them at once. So how do you prioritize the enhancements that will most improve scalability and performance? Many flows will have compounded issues that make them particularly risky or inefficient: for example, a flow that is highly complex, uses an outdated API version, has no fault paths, does not use asynchronous logic, and coordinates multi-step logic with many subflows. Flows with high business criticality and multiple technical issues should be addressed first to maximize the impact on business outcomes. The scoring sketch below is one way to make that triage repeatable.
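To make the matrix concrete, here is a small scoring sketch. The weights, the 1-to-5 business criticality input, and the field names are illustrative assumptions, not an Elements feature; tune them to your own design standards.

```python
from dataclasses import dataclass

@dataclass
class FlowAssessment:
    name: str
    business_criticality: int  # 1 (low) to 5 (high), assessed by you
    complexity_score: int      # Elements' numeric complexity value
    fault_coverage: float      # 0-100, % of possible fault paths created
    api_versions_behind: int   # releases since the flow's API version
    needs_async: bool          # from your 'Flow Optimization Review'

def priority(flow: FlowAssessment) -> float:
    """Weighted technical-risk score scaled by business criticality.
    The weights below are illustrative; adjust them to your org."""
    technical_risk = (
        0.4 * min(flow.complexity_score / 100, 1.0)      # complexity
        + 0.3 * (1 - flow.fault_coverage / 100)          # missing fault paths
        + 0.2 * min(flow.api_versions_behind / 9, 1.0)   # ~3 releases a year
        + 0.1 * flow.needs_async                         # async gap
    )
    return round(technical_risk * flow.business_criticality, 2)

backlog = [
    FlowAssessment("Order_Fulfillment", 5, 180, 20.0, 6, True),
    FlowAssessment("Lead_Router", 3, 60, 90.0, 1, False),
]
for flow in sorted(backlog, key=priority, reverse=True):
    print(flow.name, priority(flow))
```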
Step 8: Take Action

After the review is finished, you will have a list of flows classified by Complexity review and Flow Optimization Review. You can apply filters to show all the flows that need the same action, for instance:
- Filter 'Complexity review' to 'Rebuild as Orchestrator' to see all the flows you identified for migration to Flow Orchestrator.
- Filter 'Complexity review' to 'Break into Subflows' and 'Flow Optimization Review' to 'Overlapping logic' to see all the flows that need subflows because they repeat similar logic.
- Filter 'Flow Optimization Review' to 'Overlapping triggers' to see all the record-triggered flows that need to be consolidated because their triggers overlap.
- Filter 'Flow Optimization Review' to 'Needs asynchronous logic' to see all the complex flows that need to be rewritten to run asynchronously.
- Filter 'Complexity review' to 'Simplify' to see all the complex flows that need to be refactored using new logic and standard components.

Custom views of metadata come equipped with many single and bulk operations. You can raise user stories and document tasks to break down complex flows, remove hard-coded values, raise API versions, and improve error handling through fault paths, then pick those stories up from your backlog and deliver them when there is capacity. Chances are, however, that many flows will face a unique combination of problems, and when you are optimizing a single flow it is best to address all of its issues together. Make sure the acceptance criteria on stories raised against flows reflect the specific issues found, for instance:
- Break the flow down into simpler, composable units.
- Introduce a flow orchestration to handle complex database operations.
- Ensure all elements have fault paths to manage errors.
- Update the flow to the latest API version (e.g. API version 61) to use new Flow Builder capabilities.

Conclusion

Salesforce flows power many key business functions. When well-architected, they improve efficiency and enhance customization; when outdated, they can disrupt business operations with broken processes and slow performance. That is why staying on top of your Salesforce flows is essential to keeping operations running smoothly and avoiding failures. With Elements, you gain the insights and tools to understand your Salesforce flow architecture with precision. By following the steps in this blog, you can optimize your flows and improve their performance and maintainability. For more information on how Elements can help, get in touch now.
Xavery Lisinski, VP Product · Published 3rd January 2025