Enterprise data scientists are frustrated by the Sisyphean struggle to get the technology assets they require to build data models. But that’s hardly the only hurdle: Because these projects slow-cook in silos, data science teams often duplicate one another’s efforts. It’s a maddening combination of requisitioning hell and redundancy.

No stranger to such challenges, defense contractor Lockheed Martin installed a software platform to make the development of machine learning (ML) and artificial intelligence (AI) models more efficient. The platform centralizes the assets required to build data models, reducing the costs of the company’s ML and AI projects by $20 million a year, says Matt Seaman, Lockheed Martin’s chief data and analytics officer of enterprise operations.

The self-service capabilities are critical to the company’s approach to democratizing access to data, Seaman says. “We’re reducing the barriers to start and run new projects that will help us make better and faster decisions with data.”

Adoption of self-service technology is soaring, representing the next phase of a consumerization phenomenon that put mobile computers and applications into the hands of millions of workers more than a decade ago.
But perhaps nowhere is the interest greater than in data science, where the potential of advanced analytics to surface business insights has been constrained by the same clunky provisioning processes that have long held companies back from reaching their potential.

Clearing the provisioning hurdle

Lockheed Martin is tackling the problem with the help of Domino Data Lab, whose collaborative data science platform helps the company’s 300-plus data scientists build data models more efficiently and lays a foundation for future data scientists joining the company, Seaman says.

Matt Seaman, director and chief data and analytics officer of enterprise operations, Lockheed Martin

Before landing on Domino Data Lab, Lockheed Martin’s data scientists spent an inordinate amount of time identifying the computing resources they needed and requesting them from IT. These staffers waited for IT to build, install, and configure the integrated development environment (IDE) and other programming tools on a server, which they then logged into every time they needed to access their projects and resources. And because many data scientists work on multiple projects, often requiring multiple systems, servers, and IDEs, the result was a constant cycle of infrastructure blocking and tackling.

Data scientists who spent time procuring infrastructure or doing software engineering spent less time building data models. The work also suffered because Lockheed Martin couldn’t identify the pain points of data scientists trying to do their jobs, let alone track project status, Seaman says.

“We didn’t have a lot of visibility into who were the players trying to drive innovation,” let alone who needed to be enabled, Seaman says.
“It’s about taking data out of its silo and getting it into the hands of people in a more efficient way.”

The data modeling ‘domino’ falls

Domino Data Lab consolidates these capabilities into a browser-based graphical user interface (GUI) where users access development resources through a menu of templates for software, machine learning libraries, and infrastructure. They can pick programming languages (Python, R, SAS, etc.) and on-demand compute resources (CPU, GPU, or Spark clusters) to build their models. Staffers can opt to work on private cloud or public cloud systems, avoiding resource lock-in.

In keeping with the company’s DevSecOps strategy, programming packages and their dependencies are distributed automatically, while tracking and audit capabilities for code, data, and tools provide guardrails that ensure visibility and compliance. Because data scientists can now access the tools and infrastructure they need on their own, 90% of the engineers previously assigned to supporting these workflows now support other business projects, Seaman says.

Data science teams are building new ML models to simulate the design of new aeronautics products. Other models give the company greater visibility into the capacity of production lines in factory operations, including tracking the flow of materials from assembly through fabrication and detecting defects and maintenance issues. Still other projects focus on deep learning models that mitigate supply chain risk.

Seaman says the software has cut the time to begin building data models and launching them into production from weeks to minutes, while yielding a tenfold increase in productivity thanks to greater access to resources. Data science leaders gain more visibility into projects, improving collaboration and knowledge sharing, while IT teams can manage and govern infrastructure usage and costs.
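To make the self-service idea concrete, the workflow described above can be pictured as a catalog of pre-approved environment templates that a data scientist selects from directly, rather than filing a ticket with IT. The sketch below is a deliberately simplified toy model in Python; all names and templates are hypothetical illustrations, not Domino Data Lab’s actual API or configuration format.

```python
# Toy sketch of a self-service "menu of templates" for data science
# environments, in the spirit of the platform described above.
# All template names, fields, and packages here are hypothetical.

from dataclasses import dataclass


@dataclass(frozen=True)
class EnvironmentTemplate:
    language: str        # e.g. "Python", "R", "SAS"
    compute: str         # e.g. "CPU", "GPU", "Spark"
    packages: tuple      # pinned dependencies, distributed automatically


# The "menu" a data scientist picks from instead of requisitioning servers.
CATALOG = {
    "py-gpu-deep-learning": EnvironmentTemplate("Python", "GPU", ("torch",)),
    "r-cpu-statistics":     EnvironmentTemplate("R", "CPU", ("tidyverse",)),
    "py-spark-etl":         EnvironmentTemplate("Python", "Spark", ("pyspark",)),
}


def provision(template_name: str) -> EnvironmentTemplate:
    """Self-service lookup: a template resolves in seconds, not weeks."""
    try:
        return CATALOG[template_name]
    except KeyError:
        raise ValueError(f"No such template: {template_name!r}")


env = provision("py-gpu-deep-learning")
print(env.language, env.compute)  # Python GPU
```

Because every project draws from the same governed catalog, the guardrail and audit benefits the article describes follow naturally: IT curates the templates once, and usage becomes visible and trackable by design.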
“The platform brings order to the chaos,” Seaman says.

Data science know-how still needed

The ease of use might suggest Domino Data Lab is a low-code analytics product aimed at business users, but that is far from the case. While many data science modeling tools let staff with limited technical skills click and drag compute assets around with a cursor, Domino Data Lab requires some coding knowledge, a detail about which the company remains unapologetic.

Joshua Poduska, chief data scientist, Domino Data Lab

“Organizations that are doing meaningful data science work are going to rely on code-first data science to do that work,” says Joshua Poduska, Domino Data Lab’s chief data scientist. Domino Data Lab shouldn’t be the first stop for so-called citizen data scientists looking to learn the ropes of the discipline.

To wit, Lockheed Martin’s Domino Data Lab users are practicing ML and AI engineers, and Seaman expects adoption of the platform to grow.

Seaman acknowledges that Domino Data Lab isn’t for everyone, noting that it’s critical to have a data strategy that supports everything from low-level data modeling tools to more sophisticated algorithms and deep neural networks. Even so, he says, “there will always be a place for advanced innovation that comes from code-based solutions.”

Industry market watchers tend to agree. Global spending on AI technologies will grow from $50.1 billion in 2020 to more than $110 billion in 2024 as more enterprises look to cultivate business insights at scale, according to IDC research.