Using AI to analyze field complexity

Ian Gotts, Founder & CEO · Published: 31st May 2024 · 8 minute read

If you need to delete some fields in an object because you've hit limits, where do you start? Which are the easiest targets? "Easiest" means no data and no dependencies. But even if you had a list of fields with no data and no dependencies, you couldn't just delete them. There are at least 5 reasons why an empty field is valuable and deleting it will cause a sea of pain. For any field you are considering changing, you need to understand how and where it might be used. And analyzing nearly 500 fields is hard work. Why 500? Because if you weren't at the max, you wouldn't be doing this! (Sometimes it is 800 – not including managed packages.)

Or you may need to estimate the time it is going to take to make changes, so you need an understanding of the complexity of the fields in an object. Too often you hear "It can't take that long. It's only XXX that I want you to change." You need analysis to support the estimates you've made.

For both scenarios, you need to evaluate all fields and rank them in a useful way. But there is too much data to do the analysis by hand: you need to look at each field, find each of its dependencies, and then calculate its complexity.

But what if ElementsGPT Copilot could do this in 5 minutes or less? It can. And then you can have a conversation with the results – or export them as a CSV and do your own analysis and create charts.

Approach

This is the approach:

Find the fields and dependencies
Connect the data – fields, dependencies, scoring
Calculate the complexity and the time

We want to look at an object. For every custom field, look at the % population and the dependencies for that field. For every dependency, look at the difficulty of removing the field based on the dependency metadata type. Create an overall score by adding up all the dependency scores – the count of dependencies of each type, multiplied by that metadata type's removal difficulty weighting. Do the same for the analysis time. Then list the fields in order of score, grouped into bands based on the field % population.

The weighting and time table, which you see in the prompt below, has 2 columns:

Weight is used to create a complexity/effort score. If the dependency type is not in the table, it is a link to a metadata dictionary outside of Salesforce, i.e. an external integration, which makes it complex: 3.

Time is in minutes and is how long it takes to analyze each dependency. If the dependency type is not in the table, it is an external integration for the same reason, so it gets 30 mins.

You can fine-tune these scoring settings or change them completely. Scoring could be 1-10 to give you more granularity. You could ignore reports by making them 0. The time could be changed to the time taken to update rather than analyze. You can change the prompt. This is intended to be the starting point – the art of the possible. This prompt should be the basis for you to experiment with the analysis that you want to perform on your org. At the end of this article are the data attributes (columns) that you can work with and use in your prompts for metadata and dependencies. We will create another article about permissions, which uses a different sub-set of metadata.
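If you want to reproduce or sanity-check the Copilot's calculation offline, here is a minimal Python sketch of the scoring logic described above. It assumes you have exported the METADATA.csv and DEPENDENCIES.csv files whose columns are listed at the end of this article, and that the dependent type labels in your export match the table in the prompt below; the weights, times and bands simply mirror that prompt and can be changed in the same way.

import csv
from collections import defaultdict

# (Weight, Time in minutes) per dependent type, mirroring the prompt table below.
# Adjust the labels if your export uses slightly different wording.
WEIGHTS = {
    "Apex Classes": (3, 10), "Apex Triggers": (3, 10), "Apex Pages": (3, 10),
    "Approval Processes": (2, 7.5), "Assignment Rules": (2, 2),
    "Auto-Response Rules": (2, 2), "Dashboards": (1, 2),
    "Email Templates": (1, 1), "Entitlement Processes": (2, 5),
    "Fields inc Formula Fields": (2, 2), "Field Sets": (1, 2),
    "Field Updates": (1, 1), "Flows": (3, 10), "Global Actions": (1, 1),
    "Lightning Pages": (2, 2), "List Views": (1, 1), "Lookup Filters": (1, 1),
    "Matching Rules": (1, 2), "Page Layouts": (1, 1),
    "Process Builder Workflows": (3, 10), "Reports": (0.1, 0.1),
    "Restriction Rules": (2, 2), "Sharing Rules": (2, 2),
    "Validation Rules": (2, 2), "Visualforce Pages": (1, 2),
    "Workflow Field Updates": (1, 1), "Workflow Rules": (1, 1),
}
DEFAULT = (3, 30)  # unknown type = external integration

def band(pct):
    """Assign a band from % population, approximating the bands in the prompt."""
    try:
        p = float(str(pct).rstrip("%"))
    except ValueError:
        return 7  # % population is text, so it gets its own band
    if p == 0:
        return 6
    if p < 10:
        return 5
    if p < 25:
        return 4
    if p < 50:
        return 3
    if p < 75:
        return 2
    return 1

# Sum weight, time and dependency count per field across DEPENDENCIES.csv.
scores, minutes, counts = defaultdict(float), defaultdict(float), defaultdict(int)
with open("DEPENDENCIES.csv", newline="") as f:
    for row in csv.DictReader(f):
        weight, t = WEIGHTS.get(row["Dependent type"], DEFAULT)
        field = row["Source API name"]
        scores[field] += weight
        minutes[field] += t
        counts[field] += 1

# One output row per field in METADATA.csv.
with open("METADATA.csv", newline="") as f:
    for row in csv.DictReader(f):
        api = row["API Name"]
        print(api, scores[api], f"{minutes[api]:g} min",
              f"Band {band(row['% population'])}", counts[api], sep=", ")

This is only a sketch of the calculation, not how Elements implements it; its value is that you can see (and tweak) every weighting decision in one place.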
3 simple steps

1. Open Elements Metadata Dictionary for your Org

It has already analyzed the metadata and dependencies. Select "Ask Org Copilot" from the button in the top bar, then select the Metadata and Dependencies tab inside the Copilot.

2. Copy the prompt

Use the prompt below (all the italicized text) and change the XXX to the object you want to analyze.

Calculate the complexity and time for each field in the XXX object.

Use the following weights for dependency metadata type to calculate scores for each field in the file. For any dependency not listed: 3.
Use the time values for dependency metadata type to calculate time for each field in the file. For any dependency not listed: 30.

Dependent Type               Weight   Time
Apex Classes                 3        10
Apex Triggers                3        10
Apex Pages                   3        10
Approval Processes           2        7.5
Assignment Rules             2        2
Auto-Response Rules          2        2
Dashboards                   1        2
Email Templates              1        1
Entitlement Processes        2        5
Fields inc Formula Fields    2        2
Field Sets                   1        2
Field Updates                1        1
Flows                        3        10
Global Actions               1        1
Lightning Pages              2        2
List Views                   1        1
Lookup Filters               1        1
Matching Rules               1        2
Page Layouts                 1        1
Process Builder Workflows    3        10
Reports                      0.1      0.1
Restriction Rules            2        2
Sharing Rules                2        2
Validation Rules             2        2
Visualforce Pages            1        2
Workflow Field Updates       1        1
Workflow Rules               1        1

Follow these steps for the object:

Calculate the score for each field in the Metadata CSV file by multiplying the occurrences of each dependency type for the field in the Dependencies CSV file (using API name in the Metadata CSV file and Source API name in the Dependencies CSV file to match) by that dependent type's weight in the provided list, and summing the results.

Sum the number of dependencies in the Dependencies CSV file for each field in the Metadata CSV file, using API name in the Metadata CSV file and Source API name in the Dependencies CSV file to match.

Calculate the time for each field in the Metadata CSV file by multiplying the occurrences of each dependency type for the field in the Dependencies CSV file (using API name in the Metadata CSV file and Source API name in the Dependencies CSV file to match) by that dependent type's time in the provided list, and summing the results. The total time is in minutes, so convert to hours and minutes.

Assign a band to each field based on the field % population in the Metadata CSV file. Band 1: 100%-74.99%, Band 2: 75%-49.99%, Band 3: 50%-24.99%, Band 4: 25%-9.99%, Band 5: 10%-0.99%, Band 6: 0%. If % population has text then assign it to Band 7.

If API Name ends with __c then it is custom. If not, it is standard.

Provide the output in a table with a row for every field. The columns are: Name, API Name, Calculated Score, Calculated Analysis, Calculated Band, % Population, Custom / Standard, Record count, Number of Dependencies, Created Date, Description, Help text.

Please follow this structure to calculate and order the fields, and provide the output for all the fields in a CSV.

3. Open CSV

For understanding field complexity – sort by Band (A-Z), Calculated Score (Z-A), Overall: % (Z-A). This puts the most complex at the top. You should include standard fields.

For deciding which fields to delete first – sort by Band (Z-A), Score (A-Z), Overall: % (A-Z). This puts the least used and least complex at the top. You may want to filter standard fields out of your list because you cannot delete them. But remember you can remove them from page layouts or hide them using Dynamic Forms. A programmatic version of this sorting is sketched below.
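If you prefer to apply those two sort orders outside a spreadsheet, here is a small pandas sketch. It assumes the Copilot output from step 2 was saved as field_complexity.csv (a name chosen for illustration) with the column headers requested in the prompt, and it sorts on % Population rather than the Elements "Overall: %" column.

import pandas as pd

df = pd.read_csv("field_complexity.csv")

# % Population may come back as text like "42.5%"; make it numeric for sorting.
df["% Population"] = pd.to_numeric(
    df["% Population"].astype(str).str.rstrip("%"), errors="coerce"
)

# Most complex fields first: Band ascending, then score and population descending.
complexity_view = df.sort_values(
    by=["Calculated Band", "Calculated Score", "% Population"],
    ascending=[True, False, False],
)

# Easiest deletion candidates first: least used, least complex custom fields on top.
delete_first = (
    df[df["Custom / Standard"] == "Custom"]   # standard fields cannot be deleted
    .sort_values(
        by=["Calculated Band", "Calculated Score", "% Population"],
        ascending=[False, True, True],
    )
)

print(complexity_view.head(10).to_string(index=False))
print(delete_first.head(10).to_string(index=False))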
4. Display as a bar chart (Optional)

If you want a graphical representation, ask Elements to produce a bar chart – or use ChatGPT to create it from the CSV:

Can you create a bar chart displaying the scores for all bands with the highest score at the top. Band 1: black, Band 2: blue, Band 3: green, Band 4: orange, Band 5: yellow, Band 6: red, Band 7: grey.

Digging deeper

Now you have an idea about what is possible, you can create your own prompts. They can be simple time-saving hacks so you don't need to run a report, e.g. "How many record types does Account have, and how many records are there for each?" Or they can be more complex, with joins of data, like the field complexity analysis in this article.

Here are the columns of data available to the ElementsGPT Copilot in the METADATA.csv and DEPENDENCIES.csv files.

METADATA.csv

Name
API Name
Type
Last modified date
Created date
Last modified by
Test coverage
Tags
Complexity score level
Total complexity
Active
API version
Subtype
Data sensitivity
Encryption
Compliance group
% population
Help text
Fault coverage
Asynchronous run
Trigger type
Record count
Immediate run
Last modified record date
Last created record date
Count of Approval processes on object
Count of Compact layouts on object
Count of Buttons, Links, and Actions on object
Count of Duplicate rules on object
Count of Custom fields on object
Count of Field sets on object
Count of Email alerts on object
Count of Page layouts on object
Count of List views on object
Count of Record types on object
Count of Process builder workflows on object
Count of Validation rules on object
Count of Sharing rules on object
Count of Workflow rules on object
Count of Workflow field updates on object
Count of Support processes on object
Count of Assignment rules on object
Count of Escalation rules on object
Last report run date
Description
Parent object

DEPENDENCIES.csv

Source type
Source API name
Source label
Source parent object
Write
Read
Trigger type
Trigger action
Relationship description
Dependent type
Dependent API name
Dependent label
Dependent parent object
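To give a flavour of what "joins of data" can look like against these columns, here is a small pandas sketch that joins the two files and shows, for each field on an object, how many dependencies it has of each dependent type. The file names and column headers are the ones listed above; "Opportunity" is only a placeholder, and the Parent object filter can be dropped if your export is already scoped to a single object.

import pandas as pd

meta = pd.read_csv("METADATA.csv")
deps = pd.read_csv("DEPENDENCIES.csv")

# Fields belonging to the object of interest, joined to their dependencies.
fields = meta[meta["Parent object"] == "Opportunity"]
joined = fields.merge(
    deps, left_on="API Name", right_on="Source API name", how="left"
)

# One row per field, one column per dependent type, values = dependency counts.
breakdown = joined.pivot_table(
    index="API Name", columns="Dependent type",
    values="Dependent API name", aggfunc="count", fill_value=0,
)
print(breakdown.sort_index())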