For a decade, Salesforce.com has been both a business application and a development platform. Now, thanks to the breadth of its APIs and available adapters, it’s increasingly being used as an integration hub for the cloud.
Sounds pretty strategic, doesn’t it? Too often, though, Salesforce.com (SFDC) has been managed as “just an app” and all the development work has been thought of as “just adding a feature,” without doing a top-down analysis of the consequences for the rest of IT. When my firm is called in to review larger SFDC instances, we often see the proverbial hallway closet, crammed full of stuff that is going to fall out if you open the door more than a crack.
Here are some quick metrics we use when first evaluating an SFDC system configuration for its sustainability and manageability. No one of these by itself is an explosion waiting to happen, but each contributes to technical debt and, eventually, to budget and data-corruption surprises. So here, in no particular order, are the rules of thumb we look out for in an SFDC instance, and why they matter:
- At least 10 percent of an object’s custom fields should be formulas or roll-ups. Too often, code or child objects are used instead of clever formulas. Code means cost and maintenance. Spurious objects mean UI and reporting ugliness, which means even more code.
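As a sketch of the point above, a derived value such as “days since the record was created” belongs in a declarative formula field rather than in code that copies data around (the field name here is hypothetical):

```
Days_Open__c  (Formula, Number)

TODAY() - DATEVALUE(CreatedDate)
```

The formula recalculates automatically on every view and report, with no trigger, no test class, and nothing for the next administrator to debug.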
- No object should have more than a handful of fields with constraints such as ‘required’ or ‘unique value.’ See the arguments against validation rules.
- No UI page (aka “page layout”) should have more than five required fields. While the fields that are made required on the page work well and don’t have the ugly failure modes of DML constraints, they do annoy users. If you have to have several required fields, at least put them “above the fold” in the page layout so the user doesn’t have to scroll a lot during initial data entry.
- No object should have more than 10 record types unless there is an incredibly strong reason for it, and the number of user profiles should be less than 5 percent of the total user count. These two rules of thumb seem arbitrary by themselves, but the issue is manageability. The most pressing problem is the number of check boxes and selectors that must be maintained for user profiles, which suffers from what is amusingly called a combinatorial explosion. If you have 20 objects with 100 fields each, maintaining 10 profiles involves 40,000 check boxes just for read and write privileges, and then there are all the record-type picklist subsets and page layouts. We’ve worked on SFDC systems that had over 500,000 items for a system administrator to maintain.
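The back-of-envelope arithmetic above can be sketched in anonymous Apex (the counts are the illustrative numbers from this article, not a real org):

```apex
// Rough count of field-permission checkboxes an admin must maintain.
Integer objectCount    = 20;   // custom and standard objects in play
Integer fieldsPerObj   = 100;  // fields on each object
Integer profileCount   = 10;   // user profiles
Integer privsPerField  = 2;    // read + edit checkboxes per field, per profile

Integer checkboxes = objectCount * fieldsPerObj * profileCount * privsPerField;
System.debug(checkboxes); // 40000, before record types, picklist subsets and page layouts
```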
- The system should have less than 10 percent unused objects and less than 25 percent “unused” fields. While it’s perfectly OK to have historical objects that are no longer active (but do hold data) and fields that are explicitly marked as deprecated, having hundreds of “mystery fields” just makes things harder to understand and manage. Each irrelevant field or object is contributing to the combinatorial explosion from the prior bullet.
- The number of reports should be less than the number of users. Don’t laugh. We’ve seen systems that had ten times that number. It doesn’t take much of a leap to conclude that 75 percent of those reports were unmaintained and produced questionable results, but they were still there to mislead the user if they were run.
- For any one object, there shouldn’t be a mix of code, workflows, and flows. While each of these technologies works fine individually in the appropriate use cases, with any level of sophisticated processing they do not play well together. Mixing these technologies on a single object makes the transactional flow harder to understand and causes lots of flaky behavior and annoying failure modes. Troubleshooting some of the failures will take you longer than just converting everything to code (which, while generally not a good thing, is in this specific case the only real solution).
- Apex test methods should be at least 75 percent of the size of the code under test. If your code is the least bit interesting, it will have branches and calculations. Setting up the data conditions to cover all the branches and do any level of testing for results/outcomes takes lots of lines. If the test code is cursory, that means the project that developed the functional code did not budget enough time for it — never a good sign for quality, reliability and maintainability.
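To make the point concrete, here is what one branch-covering test might look like for a hypothetical DiscountCalculator class (the class, method and discount threshold are all illustrative, not from any real org):

```apex
@isTest
private class DiscountCalculatorTest {
    // One test method per branch. A real suite needs one of these for every
    // branch and edge case, which is why thorough test code rivals the size
    // of the code it exercises.
    @isTest
    static void highValueDealsGetVolumeDiscount() {
        Opportunity opp = new Opportunity(
            Name      = 'Big deal',
            StageName = 'Prospecting',
            CloseDate = Date.today().addDays(30),
            Amount    = 500000);
        insert opp;

        Test.startTest();
        Decimal discount = DiscountCalculator.compute(opp); // hypothetical method
        Test.stopTest();

        System.assertEquals(0.15, discount,
            'Deals over the volume threshold should get the volume discount');
    }
}
```

Notice how much of the method is data setup rather than assertion; multiply that by every branch in the class and the 75 percent rule of thumb stops looking generous.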
- For any one object, there shouldn’t be more than one trigger, and it should be small (ideally, just one line). You can get away with a couple of simple triggers on a given object, but you can’t control the order of execution among an object’s triggers. The only way to guarantee that any complex processing is done right every time is to have one trigger per object that is just a skeleton that calls methods in supporting classes. Plus, testing logic is easier when it is encapsulated in a class rather than loitering in a trigger.
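A minimal sketch of that skeleton pattern, with illustrative object and class names:

```apex
// The ONLY trigger on Opportunity: a one-line skeleton that delegates everything.
trigger OpportunityTrigger on Opportunity (
        before insert, before update, after insert, after update) {
    OpportunityTriggerHandler.handle(Trigger.operationType, Trigger.new, Trigger.oldMap);
}
```

```apex
// Handler class: all the logic lives here, in one deterministic sequence,
// where it can be unit-tested directly instead of loitering in the trigger.
public class OpportunityTriggerHandler {
    public static void handle(System.TriggerOperation op,
                              List<Opportunity> newRecords,
                              Map<Id, Opportunity> oldMap) {
        switch on op {
            when BEFORE_INSERT { applyDefaults(newRecords); }
            when AFTER_UPDATE  { syncRelatedRecords(newRecords, oldMap); }
            when else          { /* ignore events we don't handle */ }
        }
    }

    private static void applyDefaults(List<Opportunity> newRecords) { /* ... */ }
    private static void syncRelatedRecords(List<Opportunity> newRecords,
                                           Map<Id, Opportunity> oldMap) { /* ... */ }
}
```

Because the handler controls the order in which its methods run, the sequencing problem that plagues multiple triggers on one object simply disappears.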
- Classes can be as big as you want, but individual methods should fit in a single screen. Anything bigger than 15 lines is probably a mixture of methods and should be refactored for simplicity and comprehensibility. I’m not going to explain why: it’s an IQ test.
And then there’s the data
In this article, we looked at the system metadata and configuration. Next time, we’ll look at data characteristics that are warning signs of problems in SFDC systems.