    Identify and mitigate performance risks due to high record counts in Salesforce

    9 min read

    17th January 2025


    Why monitor record counts on Objects?

    Salesforce Objects (aka SObjects) with a large number of records can significantly degrade your Org’s performance due to data skew, sub-optimal configuration, and default Salesforce technical limits. Keeping your data volumes under control is also one of Salesforce’s Well-Architected principles. Key performance issues include:

    • Slow Queries: Large datasets take longer to process and retrieve results.
    • Delayed Response on Save: Creating or updating records takes longer than expected or times out, potentially causing data loss or failed business process automation.
    • Search Delays: Indexing can be slower, degrading the user experience.
    • Inefficient Reporting: Complex reports with large data volumes experience long processing times.
    • Data Skew: Uneven data distribution can cause extended locking issues during mass updating of records and record sharing recalculations.
    • Storage Problems: Excessive record volumes consume your included data storage allowance, which may require additional licensing to resolve.

    To prevent these issues, regular monitoring and proactive management of objects are recommended.

    By the end of this guide, you’ll understand how to monitor and optimize record counts to prevent these issues.

    When to investigate record counts on Objects?

    Regular monitoring of object record counts and their ownership profile is critical to preventing the performance challenges outlined earlier, such as slow queries and data skew. Proactively identifying risks keeps your Salesforce Org running smoothly and avoids disruptions for business users. Here’s when monitoring becomes essential:

    • Regular Monitoring: Continuously review objects exceeding the key thresholds of 10,000 records (check record ownership), 50,000 records (check for queries without selective filters), and 2,000,000 records (check record indexing performance), and track growth trends periodically (see the sketch after this list).
    • Threshold Alerts: Set up alerts in Salesforce for objects nearing critical limits (e.g. 50,000 records) to take proactive action.
    • Major Business Changes: Monitor objects after system upgrades, new processes, or campaigns that increase record creation.
    • High-Volume Periods: Before peak events like sales promotions, seasonal demand, or end-of-quarter reporting, ensure the critical objects are optimized.
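
    To make regular monitoring concrete, here is a minimal sketch in anonymous Apex that reads approximate record counts from the REST Record Count resource, which does not consume query rows. The Named Credential name My_Org is an assumption for illustration; substitute one that authenticates back to your own Org.

```apex
// Sketch: flag objects above a review threshold via the REST Record Count resource.
// Assumption: a Named Credential 'My_Org' (hypothetical) points back at this Org.
HttpRequest req = new HttpRequest();
req.setEndpoint('callout:My_Org/services/data/v62.0/limits/recordCount?sObjects=Account,Case,Task');
req.setMethod('GET');
HttpResponse res = new Http().send(req);

// Response shape: {"sObjects":[{"count":12345,"name":"Account"}, ...]}
Map<String, Object> body = (Map<String, Object>) JSON.deserializeUntyped(res.getBody());
for (Object entry : (List<Object>) body.get('sObjects')) {
    Map<String, Object> rec = (Map<String, Object>) entry;
    Integer total = (Integer) rec.get('count');
    if (total > 10000) {
        System.debug(rec.get('name') + ': ~' + total + ' records - review ownership and query selectivity');
    }
}
```

    The counts returned by this resource are approximate (they are refreshed periodically), which is sufficient for trend monitoring; the same callout can run from a scheduled job to power the threshold alerts described above.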

    Object classification

    To help you manage your data retention and storage policies effectively, you can leverage Elements.cloud’s Custom Metadata Views, Data Tables, and MetaFields, which allow you to classify objects and forecast growth. The essential categorizations for each of your objects in Salesforce are:

    • Storage Duration: Classify how long the data needs to be stored (e.g. short-term, medium-term, or permanent).
    • Retention Reason: Specify why data is stored (e.g., compliance standards or business needs).
    • Archiving Strategy: Define whether to migrate, archive, summarize, delete, or retain data once it is no longer being actively used.
    • Data Archiving Condition: Describe the conditions used to decide which records to archive, summarize, migrate, or delete.
    • Data Storage Risk: Track the current classification of an object based on its record count and data storage utilization (e.g. Healthy, Needs Monitoring, or Needs Remediation).
    • Remediation Plan: Assign actions such as indexing key fields through skinny tables and external IDs, migrating data to Big Objects or Data Cloud, removing unused data, or implementing asynchronous or batch processing.

    Document your data archiving policies

    Your Org has plenty of standard and custom objects, but most of the time there is no specific policy covering how much data you plan to store, for how long, and what to do with data that is no longer needed.

    Ensure your Org has a clear strategy for data storage and retention.

    • For New Objects: When designing new objects, collaborate with stakeholders to define:
      • How long data will be stored.
      • Why the data is needed.
      • The archiving strategy for unused records.
      • A business owner for the data who can make decisions on retention.
    • For Existing Objects: Review current storage practices with stakeholders and document:
      • Storage duration, retention reasons, and archiving conditions.
      • Examples like storing business-critical product risks for five years or archiving them three months after they are no longer required.

    Review data policies

    Work with department heads or legal teams to evaluate and refine data storage policies:

    • Ensure all objects have clear retention guidelines and archiving strategies.
    • Include practical use cases to illustrate policies.
    • Understand the regulatory requirements, including data privacy rules and freedom of information requests.

    By maintaining well-documented archiving strategies, your Org can avoid unnecessary data accumulation and improve system performance.

    Example: It is common for case records to accumulate quickly over time. For a business that sells its products or services as annual subscriptions, with deals of up to 5 years, you might specify that all case records must be stored for up to 5 years, or that they can be deleted within 3 months after the account churns and does not renew its contract.
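
    Expressed as code, the retention branch of that policy becomes a straightforward SOQL filter. A minimal sketch using only standard Case fields; the churn-based branch would additionally need the related Account’s contract status, which is Org-specific:

```apex
// Sketch: the 5-year retention branch of the example policy as an archiving condition.
// LIMIT 200 keeps this a sample page; remove it (or batch) for a full sweep.
List<Case> pastRetention = [
    SELECT Id, CaseNumber, ClosedDate
    FROM Case
    WHERE IsClosed = true
    AND ClosedDate < LAST_N_YEARS:5
    LIMIT 200
];
System.debug(pastRetention.size() + ' closed cases are past the 5-year retention window');
```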

    Identify risky objects based on record counts

    Once you’ve set up monitoring and tools, the next step is identifying risky objects and taking actionable steps to optimize them. Let’s break down this process into clear steps.

    Step 1: Quick scan in Analytics 360

    Watch this quick video from Brooke, Partner Success Manager and Salesforce expert, on how to quickly identify objects with risky record counts in your Org.

    Step 2: Review and classify risky objects

    Step 2.1: Create custom views of metadata

    Start by creating a custom view of metadata in Elements for custom and standard objects. Include name, API name, metadata type, description, and record count as columns. Also add the six categorizations discussed earlier using MetaFields:

    • Storage duration 
    • Retention reason
    • Archiving strategy 
    • Data archiving condition
    • Data storage risk 
    • Remediation plan

    Step 2.2: Classify objects exceeding 10,000 records

    Flag objects with over 10,000 records as ‘Needs Remediation’ and investigate potential risks. Common issues and remediation actions are outlined below:

    • Ownership Skew: Records owned by a single user slow down access changes (e.g., recalculations, bulk updates). This may be the result of automations, a previous data migration, or a mass upload.
      • Remediation plan: Redistribute ownership to balance workloads and represent actual data owners rather than systems.

    • Lookup Skew: Parent records linked to 10,000+ child records experience database locking during updates.
      • Remediation plan: Optimize parent-child relationships and review the data model design.

    • Account Data Skew: Large accounts with high volumes of related records (e.g., cases, contacts) result in slower processing and locking errors.
      • Remediation plan: Split large accounts into an account hierarchy, or restructure relationships.

    How to Investigate and Resolve:

    1. Use analytics resources such as custom Salesforce reports or metadata exports to identify objects with potential risks (an example ownership-skew query is sketched after this list).
    2. Export detailed insights, such as metadata views, as a CSV file to filter and visually highlight risky objects.
    3. Implement the remediation plans to address the identified issues.
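
    Ownership skew, for instance, can be surfaced with a single aggregate query. A sketch for Account, assuming the object is still small enough to query synchronously; aggregate queries count processed rows against Apex limits, so run the same SOQL through the REST or Bulk API on very large objects:

```apex
// Sketch: find owners holding more than 10,000 Account records (possible ownership skew).
for (AggregateResult ar : [
        SELECT OwnerId, COUNT(Id) total
        FROM Account
        GROUP BY OwnerId
        HAVING COUNT(Id) > 10000]) {
    System.debug('Owner ' + ar.get('OwnerId') + ' holds ' + ar.get('total') + ' accounts');
}
```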

    Step 2.3: Classify objects exceeding 50,000 and 2 million records

    Objects with over 50,000 records can exceed query limits and cause serious performance issues unless accessed correctly. Flag these objects as ‘Needs remediation’ and use Elements’ metadata views or exports to identify them visually. Then review the queries used in automations to ensure they follow best practice: selective queries that filter on indexed fields.

    Even when your queries are selective, performance can suffer above 2 million records, at which point a large data volume strategy should already be in place.
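
    For illustration, the difference between a non-selective and a selective query on a large Case table might look like the sketch below; the fields are standard, but exact selectivity thresholds depend on your Org:

```apex
// Non-selective: a negative filter on a non-indexed field forces a full table scan
// and can be rejected on a large-data-volume object:
//   SELECT Id FROM Case WHERE Status != 'Closed'

// Selective sketch: filter on an indexed field (Id, Name, lookup fields,
// CreatedDate, and external ID fields are indexed by default).
List<Case> recentCases = [
    SELECT Id FROM Case
    WHERE CreatedDate = LAST_N_DAYS:30
];
```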

    You can export the list of objects as a CSV file and filter it in a spreadsheet.

    Step 2.3.1: Identify objects with large volumes of archivable records

    Earlier, you documented data storage policies for each object in your Org. Use that documentation to identify whether any large-data-volume objects contain records that could be archived.

    Filter your custom view of metadata to show only objects whose ‘Archiving strategy’ is not equal to ‘Keep on platform’. This will show all objects where the archiving strategy is to delete records, archive them off-platform, or store them in a Big Object.

    Step 2.3.2: Identify actions to optimize performance on large-data-volume objects

    For objects with more than 50,000 records, which exceed common report limits, find any automations tied to them, such as flows, Apex classes, or Apex triggers. If there are no automations tied to those objects, you are not facing any immediate challenges, but you should still consider taking action to prevent future ones.

    Where such dependencies do exist, review them one by one to identify any potential issues.

    Issues to look for and how to remedy them

    • SOQL Query Limits in Apex Classes: SOQL queries are limited to 50,000 records in Apex code. Exceeding this limit can cause failures or prevent a query from returning all relevant data.
      • Remediation plan: Optimize queries that are not selective and consider batch processing (see the Batch Apex sketch after this list).

    • API Call Limits: Large datasets can strain API call limits, particularly during integrations or bulk operations.
      • Mitigation: Use the Bulk API to extract data from the object in integrations, or apply batch processing for internal automations.

    • Apex Trigger Execution Limits: Salesforce triggers process up to 200 records at a time. Large datasets may hit governor limits during bulk updates or complex transactions unless the Apex triggers are “bulkified”.
      • Mitigation: Review and optimize triggers to prevent these issues. (Note: code quality factors into this significantly; sub-optimal code can force you to throttle the batch size back from 200.)

    • Indexing and Selectivity: Non-selective queries can lead to full table scans, significantly impacting performance or being rejected by governor limits.
      • Mitigation: Index key fields and optimize queries.
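
    To ground the batch processing remediation, here is a minimal Batch Apex sketch; the class name and the Ready_For_Archive__c checkbox are hypothetical, and the query reuses the retention condition from the earlier policy example:

```apex
// Sketch: walk a large object in governor-safe chunks instead of one 50,000+ row query.
public class CaseArchiveFlagBatch implements Database.Batchable<SObject> {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        // A QueryLocator can iterate over up to 50 million records.
        return Database.getQueryLocator(
            'SELECT Id FROM Case WHERE IsClosed = true AND ClosedDate < LAST_N_YEARS:5');
    }
    public void execute(Database.BatchableContext bc, List<Case> scope) {
        // Each execute() call gets fresh governor limits; default scope is 200 records.
        for (Case c : scope) {
            c.Ready_For_Archive__c = true; // hypothetical custom checkbox field
        }
        update scope;
    }
    public void finish(Database.BatchableContext bc) {}
}
// Launch with an explicit batch size:
// Database.executeBatch(new CaseArchiveFlagBatch(), 200);
```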

    Step 3: Audit sources of data creation

    • Should this object really have that much data?
    • Should the number of records be accumulating that quickly?

    These are the sorts of questions that are not often asked, but they are critical to monitoring and protecting the performance of your Org.

    Here is what you need to do:

    • Assess whether records are being created intentionally as part of a controlled process, and identify the systems or processes contributing to record creation.
    • Identify anomalies in automated processes related to this object, and determine if automation should be adjusted to better manage record creation and lifecycle.

    Determine the list of automations tied to an object, and identify the Apex classes or flows responsible for creating new data.

    Review those automations and decide whether they are still needed, or whether they are legacy automations supporting a long-forgotten or deprecated business process.
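
    Elements surfaces these dependencies in its dependency grid. If you want to cross-check against the raw platform data, the Tooling API’s MetadataComponentDependency object (currently beta) exposes the same relationships. A sketch, reusing the hypothetical My_Org Named Credential from earlier and a hypothetical custom object Invoice__c:

```apex
// Sketch: list components that reference a given object via the Tooling API.
String soql = 'SELECT MetadataComponentName, MetadataComponentType '
            + 'FROM MetadataComponentDependency '
            + 'WHERE RefMetadataComponentType = \'CustomObject\' '
            + 'AND RefMetadataComponentName = \'Invoice__c\'';
HttpRequest req = new HttpRequest();
req.setEndpoint('callout:My_Org/services/data/v62.0/tooling/query/?q='
    + EncodingUtil.urlEncode(soql, 'UTF-8'));
req.setMethod('GET');
System.debug(new Http().send(req).getBody());
```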

    Step 4: Identify and resolve unusual data patterns

    Investigate spikes or drops in record creation to identify anomalies. These patterns often reflect business changes (e.g., marketing campaigns or customer behavior shifts) or outdated processes. Align these insights with your business strategy to detect and address suspicious activity.

    For the selected object, right-click or use the context menu and select ‘Open adoption insights’. This opens a dashboard for the object with record population data for the selected time period. Use the ‘Record count over time’ chart to identify any suspicious data patterns.

    Take action

    After the review is finished, you will end up with a list of objects classified by data storage risk as ‘Needs remediation’.

    Custom views of metadata come equipped with many single and bulk operations. You can raise user stories and document tasks to index key fields, optimize queries, or any other identified remediation actions.

    We suggest filtering your object list by remediation plan (e.g. remediation plan is ‘Archive old data’) and then bulk-creating stories for all objects with the same remediation plan. You will end up with multiple stories, each linked to a different object and specifying the needed remediation action.

    You can then pick up those stories from your backlog and deliver them when there is capacity.

    Summary

    Managing record counts in your Salesforce Org is not just about improving query performance; it is also about addressing critical risks that can impact your overall business operations. By implementing proactive risk management strategies, such as optimizing data storage, leveraging batch processing, and performing regular performance tuning, you can mitigate performance bottlenecks and ensure efficient resource usage. With these practices in place, your Org will be well-prepared to scale effectively and deliver lasting value.

    Xavery Lisinski, CEO & CPO