Along with social, mobile and cloud, analytics and associated data technologies have earned a place as one of the core disruptors of the digital age. 2015 saw big data initiatives moving from test to production and a strong push to leverage new data technologies to power business intelligence. As 2016 gets underway, five insiders share their predictions for what 2016 holds in store for the data and analytics space.

Scott Gnau, CTO of Hadoop distribution vendor Hortonworks, predicts the following trends will dominate data and analytics in 2016:

Internet of Anything. In 2016, businesses will look at deriving value from all data, Gnau says. "It's not just the Internet of Things but rather the Internet of Anything that can provide insights," he says. "Getting value from data extends beyond devices, sensors and machines and includes all data, including that produced by server logs, geolocation and data from the Internet."

Data at the jagged edge. Businesses must look beyond the edge of their data centers all the way out to the jagged edge of data, Gnau says. He notes that data flows now originate outside the data center, from many devices, sensors and servers — for example, an oil rig in the ocean or a satellite in space. There is a huge opportunity to manage the security perimeter as well as to provide complete data provenance across the ecosystem. Gnau says IoAT creates a new paradigm that requires new thinking and new data management systems, and these solutions will mature and permeate the enterprise in 2016.

Data in motion platform. The industry will see the evolution of data in motion platforms in 2016. "There is a need for a higher-level platform to handle the many device protocols and bring all of the data flows into Hadoop," Gnau says. "The platform needs to facilitate communications in multiple protocol languages. 
The combination of data in motion and data at rest is a big opportunity for the year."

Big data made easy. There is a market need to simplify big data technologies, and opportunities for this exist at all levels: technical, consumption and so on. Gnau says that in 2016 there will be significant progress toward simplification. "It doesn't matter who you are — cluster operator, security administrator, data analyst — everyone wants Hadoop and related big data technologies to be straightforward," he says. "Things like a single integrated developer experience or a reduced number of settings or profiles will start to appear across the board."

Hadoop for mission-critical workloads. In 2016, Hadoop will be used to deliver more mission-critical workloads — beyond the "web scale" companies, Gnau predicts. "While companies like Yahoo, Spotify and TrueCar all have built businesses which significantly leverage Hadoop, we will see Hadoop used by more traditional enterprises to extract valuable insights from the vast quantity of data under management and deliver net new mission-critical analytic applications which simply weren't possible without Hadoop," he says.

Deepak Kumar, founder and CTO of IT systems management solutions provider Adaptiva, predicts:

This is the year data gets limits. "Data usage will become more regulated, as providers won't be able to keep up with the data demand and businesses won't be able to keep up with the rising cost," Kumar says. "As a result, companies will begin to leverage technologies that monitor this data."

Systems management will get smart about big data analytics. 
"Integration of big data analytics solutions will continue to fall short, leaving perishable business insights undiscovered in disconnected silos of data \u2014 systems management will step in to help," he says.\n\n\nBadri Raghavan, chief data scientist at energy analytics specialist FirstFuel Software, says 2016 will see:\n\n\nThe democratization of data. Raghavan says that thanks to solutions like Amazon Mechanical Turk, businesses and individuals will be able to much more easily collect data from around the world that to which they previously did not have access. "Not only will data be easier to find, but the emergence of user-friendly tools will enable people without extensive data knowledge to parse the information so as to secure meaningful value," he says.\nIncreased concerns around data privacy. Europe recently set strict regulations around data, which means that organizations will need to be strategic about how they're tackling data security issues. "Rather than considering data privacy an afterthought item, people will need to be proactive in explaining exactly how they will use the data and ensuring compliance with local and global laws," he says.\nNew applications for data insights. In 2016, Raghavan says organizations and individuals will tap data and analytics to deliver personalized and engaging experiences across industries including energy, sports, social good and music. "For instance, people will be able to use data to change a song based on their personal preferences (e.g., lots of drum)," he says.\n\n\nDan Kogan, director of product marketing at business intelligence and analytics company Tableau Software predicts a slew of trends in the big data space for 2016, including:\n\n\nThe NoSQL takeover Kogan says 2016 will see the shift to NoSQL databases as a leading piece of the Enterprise IT landscape as the benefits of schema-less database concepts become more pronounced. 
"Nothing shows the picture more starkly than looking at Gartner's Magic Quadrant for Operational Database Management Systems, which in the past was dominated by Oracle, IBM, Microsoft and SAP," he says. "In contrast, in the most recent Magic Quadrant, we see the NoSQL companies, including MongoDB, DataStax, Redis Labs, MarkLogic and Amazon Web Services (with DynamoDB), outnumbering the traditional database vendors in Gartner's Leaders quadrant of the report.\nApache Spark lights up big data. Apache Spark has moved from a being a component of the Hadoop ecosystem to the big data platform of choice for a number of enterprises. "Spark provides dramatically increased data processing speed compared to Hadoop and is now the largest big data open source project, according to Spark originator and Databricks co-founder, Matei Zaharia," Kogan says. "We see more and more compelling enterprise use cases around Spark, such as at Goldman Sachs, where Spark has become the "lingua franca" of big data analytics.\nBig data grows up: Hadoop adds to enterprise standards. Kogan says the enterprise capabilities of Hadoop will mature in 2016. "As further evidence to the growing trend of Hadoop becoming a core part of the enterprise IT landscape, we'll see investment grow in the components surrounding enterprise systems such as security," he says. "Apache Sentry project provides a system for enforcing fine-grained, role-based authorization to data and metadata stored on a Hadoop cluster. These are the types of capabilities that customers expect from their enterprise-grade RDBMS platforms and are now coming to the forefront of the emerging big data technologies, thus eliminating one more barrier to enterprise adoption."\nBig data gets fast: Options expand to add speed to Hadoop. Kogan says 2016 will see Hadoop gain the sort of performance that has traditionally been associated with data warehouses. 
"With Hadoop gaining more traction in the enterprise, we see a growing demand from end users for the same fast data exploration capabilities they've come to expect from traditional data warehouses," he says. "To meet that end user demand, we see growing adoption of technologies such as Cloudera Impala, AtScale, Actian Vector and Jethro Data that enable the business user's old friend, the OLAP cube, for Hadoop \u2014 further blurring the lines behind the "traditional" BI concepts and the world of 'big data'."\nThe number of options for "preparing" end users to discover all forms of data grows. Self-service data preparation tools are exploding in popularity. Kogan says this is in part due to the shift toward business-user-generated data discovery tools such as Tableau that reduce time to analyze data. "Business users now want to also be able to reduce the time and complexity of preparing data for analysis, something that is especially important in the world of big data when dealing with a variety of data types and formats," he says. "We've seen a host of innovation in this space from companies focused on end user data preparation for big data such as Alteryx, Trifacta, Paxata and Lavastorm while even seeing long established ETL leaders such as Informatica with their Rev product make heavy investments here."\nMPP Data Warehouse growth is heating up\u2026in the cloud! Kogan says the "death" of the data warehouse has been overhyped for some time now, but it's no secret that growth in this segment of the market has been slowing. "But we now see a major shift in the application of this technology to the cloud where Amazon led the way with an on-demand cloud data warehouse in Redshift," he says. 
"Redshift was AWS's fastest-growing service, but it now has competition from Google with BigQuery, offerings from long-time data warehouse power players such as Microsoft (with Azure SQL Data Warehouse) and Teradata, along with new start-ups such as Snowflake, winner of Strata + Hadoop World 2015 Startup Showcase, also gaining adoption in this space. Analysts cite 90 percent of companies who have adopted Hadoop will also keep their data warehouses, and with these new cloud offerings, those customers can dynamically scale up or down the amount of storage and compute resources in the data warehouse relative to the larger amounts of information stored in their Hadoop data lake."\nThe buzzwords converge: IoT, cloud and big data come together. The technology is still in its early days, but the data from devices in the Internet of Things will become one of the "killer apps" for the cloud and a driver of a petabyte-scale data explosion, Kogan says. "For this reason, we see leading cloud and data companies such as Google, Amazon Web Services and Microsoft bringing Internet of Things services to life where the data can move seamlessly to their cloud based analytics engines," he says.\n\n\nDan Graham, general manager of enterprise systems at data warehousing and big data analytics specialist Teradata, predicts that in 2016:\n\n\nOrganizations will hit reset on Hadoop. Graham believes 2016 will see enterprises use lessons learned from past deployments to rearchitect their approaches. "As Hadoop and related open source technologies move beyond knowledge gathering and the hype abates, enterprises will hit the reset button on (not abandon) their Hadoop deployments to address lessons learned \u2014 particularly around governance, data integration, security and reliability," he says.\nAlgorithms will enter the boardroom. "Algorithms heat up in the data ingest and preparation processes for house holding and profiling," he says. 
"As a result, CEOs and Investors will start talking deep analytics as core business goals."\nData lakes will finally discover a few killer apps. Data lakes will be the most common repository for staging raw IoT data, driven by volume and costs, Graham says. "The size of IoT M2M data will over run in-memory capacity by orders of magnitude, driving implementers to data lake technologies for low cost storage," he says.\nIoT data captured at the data center will erode in value faster than transaction data. "Lacking monetary data fields, most sensor data will become low value in hours, days or weeks as it is replaced by fresh collections of the same sensor data," Graham says. "Architectures and systems will be forced to compensate for this rapid decline to cope with retention and processing costs."