Technical debt comes from many software practices, but most of them are pretty obvious and the result of intentional choices. In the cloud, though, there are some insidious and barely visible new sources that contribute to software entropy.

Even the cloud needs spring cleaning. And the cleanup isn't the refactoring and data quality work that you already know about. Those "big animals" are hard to miss; now we need to pay attention to the mites and dust-bunnies that we notice only when they become overwhelming.

These new micro-debts appear quickly and continuously because cloud systems are easy to configure. Add a new field, edit a formula, modify a report, change a picklist value: each of these takes just seconds. But those little changes can have big repercussions. Aside from the fact that there's no comprehensive way to inventory all the changes, the knock-on effects of new or changed items are hard to assess. So undoing a change (either to retire something that's obsolete or to correct a new symptom) is far more work than making the change in the first place. Thanks to compound-interest effects, the collection starts to become as obnoxious as an anthill.

Traditionalists will, of course, laugh and point to the discipline of configuration control. And I couldn't agree more: Cloud systems need configuration control more than traditional systems do, precisely because they are easier to use and typically administered in a completely decentralized way. So, as I wrote four years ago, use the new modes of documenting change that cloud systems enable. But if recording a change in the config control system takes longer than the change itself, you don't have to be a behavioral economist to know people won't do it. It's human nature: everybody knows that flossing your teeth will pay off over the decades, but since it takes two minutes a day…nobody flosses enough to make a difference.

Good news for Salesforce.com users

There are a couple of pieces of good news here, at least for Salesforce.com users. The system keeps an administrative change log automatically. It's no substitute for really documenting what you're doing, but it's a start. Also, the Force.com plug-in for the Eclipse IDE lets you snapshot almost the entire system configuration as a set (a large set, sometimes) of XML files. Take a monthly snapshot and run your favorite XML diff tools, and you can see what changed. Of course, you won't know why or by whom…but you've got some clues.

There are also snapshotting and diffing tools available from DreamFactory, Panaya, and others. But if you haven't been doing any of the regular cleanup tasks, you may find that the tools blow up. In SFDC, the APIs let you interrogate a maximum of 10,000 objects…and in a recent system audit, a client had over 20,000 of them. Whoops.
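To make the monthly-diff idea concrete, here's a minimal sketch in Python, using only the standard library. It assumes you've exported two snapshots into directories (the names here are hypothetical) and reports which metadata files were added, removed, or changed:

```python
# A minimal sketch of the monthly snapshot diff, using only the
# Python standard library. Directory names are hypothetical.
import difflib
import filecmp
from pathlib import Path

def diff_snapshots(old_dir, new_dir):
    """Report added, removed, and modified XML files between two snapshots."""
    old_files = {p.relative_to(old_dir) for p in Path(old_dir).rglob("*.xml")}
    new_files = {p.relative_to(new_dir) for p in Path(new_dir).rglob("*.xml")}

    for path in sorted(new_files - old_files):
        print(f"ADDED    {path}")
    for path in sorted(old_files - new_files):
        print(f"REMOVED  {path}")
    for path in sorted(old_files & new_files):
        old, new = Path(old_dir) / path, Path(new_dir) / path
        if not filecmp.cmp(old, new, shallow=False):
            print(f"MODIFIED {path}")
            diff = difflib.unified_diff(
                old.read_text().splitlines(),
                new.read_text().splitlines(), lineterm="")
            for line in list(diff)[:20]:  # show only the first lines of each diff
                print("    " + line)

diff_snapshots("snapshot_old", "snapshot_new")
```

A real XML-aware diff tool will handle attribute ordering better; the point is simply that a scheduled, mechanical comparison gives you an inventory of change you'd otherwise never have.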
Taking action to reduce technical debt

But tools don't solve the problem here. Best practices do…like these:

- Set expectations on change. Even if you could make a change in 20 seconds, make it clear that changes go in only once a day (once a week if you can get away with it).
- Set expectations on system administration workload. You'll never pay off the technical debt all at once, and it's reasonable to devote 15 percent of administrative time to gradually draining the swamp.
- Back up the system administrator's change log once a quarter (or whatever your cloud system's logging horizon is), and keep that small file forever. (See the first sketch after this list.)
- Use a full sandbox and refresh it once a month. The only way to really see the impact of a potential change is in a sandbox full of data…and even then, there will be things that show up only in production. Also use a dev sandbox (one of the freebies) and refresh it once a day.
- Use your system snapshot tool on your production system, and update that snapshot once a day. Use the snapshot to conduct a where-used analysis in support of designing proposed changes.
- Before making a change in production, test it in one of the sandboxes. If the change is in an area where the configuration hasn't changed much in the last 30 days, do the test in the full sandbox. If there's been a lot of configuration change, do the test in the dev sandbox.
- Once you've put the change in the sandbox, push the "run all tests" button (or whatever mechanism your cloud system has for running unit and system tests) and see if any new test errors crop up. Sometimes just fixing a typo in a picklist can cause dozens of test errors.
- Once a month, tally all the views, reports, and dashboards in the system. These tend to multiply quickly (particularly if most users are allowed to create their own reports) and can have a deadly impact on system credibility. Aggressively prune reports (by hiding them, not deleting them); anything that hasn't been run in 12 months is not likely to be missed. (See the second sketch after this list.)
- Once a month, analyze whatever items cause a combinatorial explosion in system management…and seriously scrutinize new sources of complexity. In Salesforce, the key items are profiles, custom objects, and record types. It is not at all uncommon to have more than 100,000 Booleans defining the security system, so you want to closely monitor anything that can geometrically increase that number.
- Have a policy that every time you add something to the system, at least one item must be deprecated, and at least one previously deprecated item must be deleted (typically, the deprecation "purgatory" should last at least a month). Deprecation should consist of a special character (such as ▼ or ♖) at the start of the item label, to clearly signal "there's something weird here" in any report or page layout that uses that field. Deprecation should also include prepending "zzz" to the field's "developer name," to force deprecated fields to the bottom of alpha-sorted lists. Deprecation and deletion must be handled as changes in their own right: logged, and fully tested in the sandbox before pushing to production. (See the third sketch after this list.)
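On the change-log backup: in Salesforce, the setup change log is queryable as the SetupAuditTrail object, which reaches back only a limited window, hence the quarterly archive. A minimal sketch, assuming the simple-salesforce Python library; the credentials are placeholders:

```python
# Sketch of the quarterly change-log backup. Assumes the
# simple-salesforce library; credentials are placeholders.
import csv
from simple_salesforce import Salesforce

sf = Salesforce(username="admin@example.com", password="...",
                security_token="...")

FIELDS = ["Id", "Action", "Section", "Display", "CreatedDate", "CreatedById"]

# SetupAuditTrail is the queryable form of the administrative change log.
rows = sf.query_all(
    "SELECT " + ", ".join(FIELDS) + " FROM SetupAuditTrail ORDER BY CreatedDate"
)["records"]

with open("setup_audit_trail_backup.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS, extrasaction="ignore")
    writer.writeheader()  # extrasaction drops the 'attributes' metadata key
    writer.writerows(rows)
```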
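For the monthly report tally, the Report object carries a LastRunDate field, so the pruning candidates fall out of a single query. A sketch reusing the sf connection above; the field names and date filter are assumptions to verify against your own org:

```python
# Sketch of the monthly stale-report tally, reusing the `sf`
# connection above. Field names are assumptions to verify.
stale = sf.query_all(
    "SELECT Id, Name, FolderName, LastRunDate FROM Report "
    "WHERE LastRunDate < LAST_N_DAYS:365 OR LastRunDate = NULL"
)["records"]

print(f"{len(stale)} reports not run in the past 12 months")
for r in stale:
    print(f"  {r['FolderName']} / {r['Name']}  (last run: {r['LastRunDate']})")
```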
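And the deprecation conventions are easy to audit mechanically once you have an XML snapshot. This sketch scans one exported object file for zzz-prefixed developer names or marked labels, assuming the standard Force.com metadata layout; the file path is hypothetical:

```python
# Sketch: audit one exported object file for deprecated fields,
# assuming the Force.com metadata XML layout and the conventions above.
import xml.etree.ElementTree as ET

NS = {"sf": "http://soap.sforce.com/2006/04/metadata"}
MARKERS = ("▼", "♖")  # the label-prefix convention suggested above

def deprecated_fields(object_xml_path):
    root = ET.parse(object_xml_path).getroot()
    flagged = []
    for field in root.findall("sf:fields", NS):
        name = field.findtext("sf:fullName", default="", namespaces=NS)
        label = field.findtext("sf:label", default="", namespaces=NS)
        if name.startswith("zzz") or label.startswith(MARKERS):
            flagged.append(f"{name} ({label})")
    return flagged

# Hypothetical path from a snapshot export.
for f in deprecated_fields("snapshot_new/objects/Account.object"):
    print("deprecated:", f)
```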
Bottom line

There's no easy answer here, and nobody likes to pay taxes. But technical debt is real, and it builds inexorably. So go authorize some spring cleaning before things get out of hand.