Data-driven applications like CRM and ERP typically have a mechanism to validate data entries, in order to prevent values that are out of bounds (such as negative inventories, customer counts in the billions, or pointers to nowhere). A big system may have hundreds of these rules scattered across the UI landscape. While validation rules aren't as universally enforced as true database constraints, they still irritate users because they prevent the record from being saved. A careless user may just move on, thinking their data has been saved when in reality it's been lost and has to be re-entered later.
Worse, validation rules can cause errors in application code, prevent integrations from inserting data properly, and throw countless errors in test code. So every time somebody asks to create another validation rule (which involves no coding), all the coders have to be called in to test their modules and deploy updates before that new "nothing" rule can go live.
Annoying, yes. And so somebody will say, "We can have a work-around that skips validation rules when the data entry is coming from code rather than from people." The idea is floated that the next time a user is in a record, they can amend it so it conforms to the new validation rules. Sounds logical enough, particularly when it comes from an executive.
Fly into the danger zone
So let's go for it! With each new validation rule, we'll put in logic that checks a field indicating the last time the record was updated by code. If that time is essentially "now," we skip the validation rule. If not, this is a human entry and the rule is enforced.
Unfortunately, we've just volunteered for the Silent Killer.
The first thing to think about is that only newer validation rules will be skipped (because nobody will go back and fix 
every single rule across the system), so data quality is strictly enforced in some areas and not in others. And of course, none of this is documented or done in a thorough way, so data conditions that should be impossible will start to creep in unnoticed.
The second thing to think about is that some bits of your software will know to update that "last time this record was updated by code" field, but others won't. You can also bet that even within modules that do update that field, not every data manipulation will update it (simply because laziness knows no bounds). So some of your components will be able to skip the validations, and others won't. Processing will continue all right, but outcomes and data conditions will get flaky in more and more subtle ways.
[Related: Are you over-testing your software?]
At first, nobody will notice much. Typically, the only thing that becomes obvious is that older, unmodified records will drift further and further out of data compliance. So when a user edits one of those old records, there may be a half-dozen validation rule violations to fix before the data can be saved, even if the user's changes were innocent. And sometimes those things-to-be-fixed are in records the user isn't working on (and may not even have access to). Those always generate AYFKM reactions (Google it).
The next thing people might notice is that reports comparing historical records to current ones (such as "this quarter's pipeline vs. last year's") start to yield results that are superficially OK but misleading. Roll-ups and quantities will be there, but they may not "foot." Data segmentation and "buckets" may look silly, and the data semantics will get blurred for sure. Only careful scrutiny by a diligent data analyst will catch it, but management 
decisions based on historical comparisons will increasingly be lost in the fog of war.
The result, of course, is that confidence in the system insidiously erodes. Nobody blames the users or the sloppy thinking. They just blame the app, and wonder why they can't really trust the data.
Bad data is never a good thing, but the real danger here comes from code-processing errors. Because of erratic, inconsistent enforcement of validation rules, data will be processed in ways never contemplated by the developers. Logic will be applied inconsistently. Code branches that are selected based on ranges of calculation results won't be followed. All too often, this happens without the code throwing an obvious error, which means that illogical or impossible outcomes may go unnoticed for a long time. But those outcomes are being propagated across the data nonetheless. The more validation rules you have in the system, the worse the scope of these issues.
Eventually, some module will start throwing errors that prevent part of a business process from completing. Now that it's way too late, you bring your coders in to investigate the "problem in their damned code." But if it's been months since they last looked at it, they'll face three levels of issues:

Learning curves to remember how their own code works
Learning curves about the validation rules that have been added and the resulting data pollution
The inability to deploy any fix until they have troubleshot and repaired their code, their test code, and the data pollution.

And guess what: these learning curves and delays aren't additive, they're multiplicative. Throw in some extra stress from a screaming boss, and you create a nice little vortex. 
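To make the failure mode concrete, here is a minimal sketch (in Python, with every field name, rule, and function hypothetical) of the "skip validation when the record was just updated by code" check described earlier, and of how a module that forgets to stamp the field gets validated as if a human made the edit:

```python
from datetime import datetime, timedelta, timezone

# How recent a code-update stamp counts as "essentially now" (an assumption).
CODE_UPDATE_WINDOW = timedelta(seconds=5)

def should_enforce(record: dict) -> bool:
    """Enforce the rule only for human edits: skip it if the record was
    stamped as code-updated within the last few seconds."""
    stamp = record.get("last_code_update")
    if stamp and datetime.now(timezone.utc) - stamp < CODE_UPDATE_WINDOW:
        return False  # looks like a code-driven save, so skip validation
    return True

def save(record: dict, stamps_field: bool = False) -> list[str]:
    """Save the record; return any validation violations found."""
    if stamps_field:
        # Only *some* modules remember to stamp the field.
        record["last_code_update"] = datetime.now(timezone.utc)
    violations = []
    # One hypothetical validation rule: inventory must be non-negative.
    if should_enforce(record) and record.get("inventory", 0) < 0:
        violations.append("inventory must be non-negative")
    return violations

# A well-behaved module stamps the field and slips bad data through:
skipped = save({"inventory": -3}, stamps_field=True)    # no violations
# A lazy module forgets the stamp, so the very same data is blocked:
blocked = save({"inventory": -3}, stamps_field=False)   # rule enforced
```

The two calls at the end are the whole problem in miniature: identical data, opposite outcomes, depending only on whether a given code path remembered to update the stamp.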
The bug fixes from yesterday become the #1 cause of new bugs in the system.
[Related: 4 warning signs that your team is not agile]
This all goes double for integration code that may propagate the bad data into other systems that expect things to be well-behaved.
This is about as far from agile as it gets.
How to avoid a heart attack
Instead, how about reducing the use of validation rules in the first place? Every time there's an addition or modification to a validation rule, see if you can remove it and replace it with "nagware." Nagware lets the user, and the code, always save the record, but it notifies the user that the data is out of spec every time they look at that record. In addition, the nagware sends an email every couple of days to the owner of any record that's out of spec. They can choose to delay the data update, but in our experience they won't do so for long. In practice, this relaxed validation approach works fairly safely. It's not iron-clad, of course, but it contains the scope of any data pollution. This mess all started because users didn't want the handcuffs of unconditionally enforced validation rules, and we give them the equivalent of a mild shock-collar to give them the illusion of freedom.
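The nagware idea above can be sketched in a few lines of Python. This is an illustration under assumptions, not a prescription: the rule list, field names, and in-memory store are all hypothetical, and a real system would persist warnings and schedule the nag job (say, every couple of days) rather than call it inline:

```python
# Hypothetical rule set: each rule returns a warning string or None.
RULES = [
    lambda r: "inventory must be non-negative" if r.get("inventory", 0) < 0 else None,
    lambda r: "customer count looks implausible" if r.get("customers", 0) > 1_000_000_000 else None,
]

def save(record: dict, store: dict) -> list[str]:
    """Always persist the record; attach warnings instead of blocking the save."""
    warnings = [w for rule in RULES if (w := rule(record))]
    record["out_of_spec"] = warnings   # surfaced every time the record is viewed
    store[record["id"]] = record
    return warnings

def nag_owners(store: dict, send_email) -> int:
    """Periodic job: email the owner of each out-of-spec record.
    Returns the number of nags sent."""
    sent = 0
    for rec in store.values():
        if rec.get("out_of_spec"):
            send_email(rec["owner"],
                       f"Record {rec['id']} is out of spec: {rec['out_of_spec']}")
            sent += 1
    return sent

# The save never fails, but the owner keeps getting nagged until it's fixed:
db = {}
save({"id": 1, "owner": "pat@example.com", "inventory": -3}, db)
nag_owners(db, send_email=lambda to, msg: None)  # sends one nag
```

The design point is that `save` has exactly one behavior for humans and code alike, so none of the skip-the-rule machinery, and none of its silent inconsistencies, is ever needed.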