Martha Heller: What is the transformation currently underway at Perdue Farms?

Mark Booth: We have a growth strategy to improve our business, and to support that, we're driving a transformation in technology and business processes. We've been replacing our old systems, some of which are more than 20 years old, and that work has gone very well. But the more challenging work is making our processes as efficient as possible so that we capture the right data on our way to becoming a more data-driven business. If your processes aren't efficient, you'll capture the wrong data and wind up with the wrong insights.

Beyond business process change, what other challenges do you face in getting value out of data?

Making sure we adhere to the new processes is a challenge in a 103-year-old poultry and agribusiness company. We need everyone in the company to understand that the data will set us free. With the right insights, we'll better understand what's good for the business and what our consumers want.

This is important because all our businesses have seen significant change over the last several years. On the food side, we've come out with innovative products, so we're not just selling chicken anymore. We need the data to understand what additional new products we should produce.

On the agribusiness side, we source, purchase, and process agricultural commodities, offering a diverse portfolio of products including grains, soybean meal, blended feed ingredients, and top-quality oils for the food industry that add value to the commodities our customers desire. The data can also help us enrich our commodity products.

How are you populating your data lake?

We've decided to take a practical approach, led by Kyle Benning, who runs our data function.
CIOs have learned that it's a big risk to build a data lake and hope "they will come," because they might not, and data infrastructure is a big expense. To avoid that, we built our data platform business case by business case.

With our involvement, our business partners create business cases, whether in foods or agribusiness, and these are approved by our business unit governance team, which makes sure business cases are aligned with our corporate goals. Then our analytics team, an IT group, makes sure we build the data lake in the right sequence. We call this team Information Management because we want our business associates, not IT, to be the analysts.

Once a business case has been approved by both the business unit and Information Management teams, we turn the case into a project and put it into production, moving only the data for that business case into the data lake. We have a metrics web page that shows how much data is in the consumer zone, where it can be accessed, and how much is in the raw zone, a repository where data is massaged before it moves into the consumer zone.

How do you educate the business case owners about how to work with the data?

Benning's team is a center of excellence. He has player-coaches who train the business case teams on data preparation and visualization tools until they become self-sufficient in creating their own end-user dashboards and algorithms. Once the business case is approved, the business case team understands the tools, and IT moves the data into the lake; then our business partners can work with the data on their own. Once they have a solution that gives them the required insights, IT moves it to the consumer zone, where it is governed and cataloged.

By training the business and moving the data case by case, we started to see a groundswell. By starting small and proving out the value of the data, people get excited, and more departments have become engaged.
We have some departments that are now looking to hire analytics-savvy business resources.

What is the risk of this case-by-case approach?

When you move in data specific to a business case, you move in only the required data, which is not necessarily all the data, so there can be gaps. As we move from one business case to the next, we need to look back to make sure we close those data gaps. We moved some financial performance data into the data lake, for example, but that wasn't 100% of the financial data that will be required in the end. We're now working on a project to get the rest of the financial data into the data lake.

I think of it as a snowplow going down the road, making sure the path is clear. As we move from one business into another to work on a new business case, we have to look back and move in the remaining data, even if it isn't required by a specific business case. As we do this, we know the environment has become self-funding through the business cases.

What's a real example of an early business case?

One would be our chicken deboning machines. We wanted to know if the line was optimized. If we run it faster, will we have enough people at the end of the line to keep up with the machine? If we run it too slowly, will we have too many people? We got some real savings from moving operational data about the deboning machines into the data lake.

What's the benefit of populating your data lake this way?

The big benefit is senior management's acceptance of the value of the data. They can see dashboards that populate automatically, and they realize they're no longer using paper or waiting for reports. And because it's their business case, they see this data on their terms.
Their experience with the data then gets more people excited and involved.

Describe some other challenges of working with operational data as opposed to enterprise data.

Working with financial data isn't as complex as working with operational data. For the most part, ERPs are built to be configured for standard processes; we understand that data and how to put it in a data lake. But operational data can be more challenging because we mix financial data with operational data. For the first time, we need to think about how much sensor data should go into the data lake, how it should be structured, what the standards are for operational technology data, and whether the data that shows our production error rates should stay at the plant or move into the data lake. Figuring all of that out is difficult work, and we're not through it yet, because we're sitting on so much OT data.

What advice do you have for CIOs building a new data capability?

Start small, define your ecosystem, simplify your tech stack, and involve people who have credibility in the business. Yes, you need the right technology to become a data-driven business, but that's just table stakes. The real drivers are your business partners and the business cases they create. In IT, we sometimes make data too complicated because we start talking bits and bytes, ones and zeros, cloud and platforms, but that's not the point. Use your governance structures to pick a business case that's very important to the business, nail it, and then go get the next two or three. It will mushroom from there.