All across the country, companies seem to be collapsing limited-purpose data warehouses into a single integrated repository. In every industry, companies are moving up this path with a strikingly similar stride. From my view, here's why so much of this is happening at once: to reduce the time-to-answer, reduce the cost-per-answer, achieve limitless scalability, integrate internal and external data, and deliver data as a service. Not bad stuff. We're entering an age of the Modern Enterprise Data Warehouse (EDW) in which the scope of analytics is wider and IT is delivering at enterprise scale, albeit by leveraging non-IT horsepower.
This modern era overlays decades of data management "growing up" onto fit-for-purpose scalable technologies and more tightly couples enterprise information with data democratization, advanced analytics, and BI dashboards. Looking under the hood, these modern data warehouses share some common traits:
Data Lakes are replacing ETL hubs and landing areas because enterprise-wide, Hadoop-based data lakes support better, cheaper, and faster schema-less landing and pre-processing of data at an atomic level. This ecosystem's near-limitless scalability makes it ideal for aligning structured internal data with the semi-structured and unstructured Big Data coming from IoT and digital sources. So that the lake does not become a swamp, organizations are implementing data-factory-style frameworks to deliver the security, manageability, reliability, and cost-performance expected of these enterprise-class systems. Further, an emergent class of data-smart power users is fishing directly from the data lake, making an IT-enabled end-run around the EDW. This is good for everyone, the hands-on power users as well as the EDW users, but most importantly it's good for the business.
Self-Service Analytics are being deployed in lieu of IT-supplied dashboards.
Today's data visualization and discovery tools empower businesses to roll their own. Their storyboarding capabilities, rapid-response development environments, and in-memory processing shorten both the time-to-answer and the cost-to-answer.
Data Governance processes are part of the foundation. Having become data smart, companies now realize that the policies, processes, and standards around data are just as important as the tools. For example, since "customer churn" can be measured a dozen ways, a single version of the truth requires coming to terms on the meaning of metrics, data definitions, and system sources of truth.
Pooled Compute Infrastructure is becoming the new normal, while dedicated boxes are starting to seem more than a little antiquated. Pooled compute resources take a variety of forms, such as grid computing clusters, high-performance specialized MPP platforms (e.g., Teradata), virtualized private clouds, and public clouds. The advantage of pooled platforms is that enterprises reduce equipment costs by managing utilization more effectively, and new projects can be supported without sub-projects to add compute infrastructure.
Data Ecosystems Leveraging "Best Fit" Data Platforms are the rule of thumb. To affordably scale with emerging data volumes and varieties, modern EDWs employ multiple fit-to-purpose database management systems (DBMSs) rather than a single-repository, one-size-fits-all strategy. For enterprise-class demands that must support hundreds of online users and ETL processes, row-oriented databases, particularly MPP databases that scale linearly, continue to be the preferred solution. Hadoop solutions are being used for landing and staging large data volumes to achieve cost and scalability advantages. Column-store databases are a tool of choice for many applications that require ultra-fast response with minimal compute resources, such as BI dashboards and many "write once/read many" workloads.
NoSQL tools are being used for big data applications that require flexible schemas and constant-time retrieval.
Enterprise IT and "Shadow IT" are on the same team. As EDW costs and project backlogs have grown over the years, so too has the workforce of data professionals working in the business units (i.e., outside IT), unencumbered by the data modelers and ETL job streams that support hundreds or even thousands of users. Closer to the business is where we find many of today's data Navy SEALs. In the past, these groups lived off in their own data ecosystems; in fact, it was often one "data pond" for every SAS developer. In the modern EDW, however, these data Navy SEALs have access to sandboxes that are provisioned from the data lakes and made accessible through self-service BI tools as well as statistical modeling tools.
It's an exciting time to be working in our business. Data supply and demand are sky-rocketing just as our tools and techniques are coming of age. Interestingly, big data is not a new term. It re-emerges every 10 years or so, when escalating data volumes make conventional technical solutions obsolete. With this new paradigm for the Modern Enterprise Data Warehouse, I won't be surprised to see the term big data go back into remission in the next year or two.
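As a postscript, the governance point above about "customer churn" is easy to make concrete. Here is a minimal sketch, with all numbers invented for illustration, showing how two common churn definitions produce different answers from the same customer counts, which is exactly why teams must come to terms on one definition:

```python
# Hypothetical illustration: the same quarter of customer data yields
# different "churn" numbers under two common definitions.
# All figures are made up for this sketch.

customers_at_start = 1000   # active customers on day 1 of the quarter
customers_lost = 50         # cancellations during the quarter
customers_gained = 200      # new sign-ups during the quarter
customers_at_end = customers_at_start + customers_gained - customers_lost

# Definition 1: losses as a share of the starting customer base.
churn_vs_start = customers_lost / customers_at_start

# Definition 2: losses as a share of the average base over the period.
churn_vs_avg = customers_lost / ((customers_at_start + customers_at_end) / 2)

print(f"churn vs. start:   {churn_vs_start:.2%}")   # 5.00%
print(f"churn vs. average: {churn_vs_avg:.2%}")     # 4.65%
```

Same data, two defensible answers; without an agreed definition, two dashboards will report two different "truths."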