Pinpointing potential network failures and performance issues has long been a matter of educated guesswork, but an emerging generation of predictive analytics tools promises to bring greater accuracy to network reliability forecasts, allowing staff to address and remedy specific issues before they begin affecting network operations.

Predictive analytics is a game-changer, giving CIOs the ability to look into the future. "There is a growing need for networks to adapt to dynamic application demands as well as dynamically address special events, seasonality and so on," says Diomedes Kastanis, head of technology and innovation for Ericsson. "Although we have a lot of automation systems and rules to manage and operate networks, it still is not enough to cope with the intensely changing environment and proactively adapt to changing demands."

The shock of the new

Predictive analytics incorporating processes such as machine learning (ML) and artificial intelligence (AI) is a relatively new concept to many CIOs. "It takes time to prove a newer technology to the enterprise market and it is still early in this space," says Brian Soldato, a senior director at cybersecurity research firm NSS Labs. "Most of the adoption is occurring among security platforms and endpoint technologies that have predictive analytics as a feature."

Predictive analytics has improved significantly over the past few years, thanks to advances in AI and related fields. "Forecasts based on time series data, like network logs, are increasingly accurate and therefore more useful," says Chris Nicholson, CEO of Skymind, an AI developer supporting the open source deep learning framework Deeplearning4j. "The level of accuracy depends on the quality of the data set," he notes. "On some problems, deep learning can achieve a double-digit increase in accuracy."

Kastanis says Ericsson is in the process of investigating predictive analytics optimization on its networks.
"We are currently experimenting with cutting-edge technologies like deep learning, decision theory and semantic reasoning with different technology partners who are best of breed in different components of AI," he notes.

Gianluca Noya, digital network deployment and analytics lead at Accenture, says it's now possible to predict future network behaviors, such as demand and service experience, with more than 95 percent accuracy, requiring a history of only five times the span of the prediction. "In other words, to predict data for the next month, you need five months of historic data," he says.

Advancements in compute power and distributed storage have opened the way to the unfettered use of packet-level network data, yet most network operators have failed to take full advantage of this potentially powerful resource, Noya says. "We see many early trials with investment in technology enablers, but frequently these initiatives are thwarted by the lack of a holistic approach taking into consideration the transformative effect of adopting data-driven operating models," he says.

Anticipating capacity requirements

Getting a solid handle on future network capacity needs is a relatively simple predictive analytics problem, says Steven Toy, senior director of information technology at SAS. "Figure out the metric you're interested in measuring against, capture the network capacity data, then compare," he says. Say that an organization, for example, wants to upgrade its circuits when they're reaching 75 percent of capacity. "Gather data for several months and forecast where you'll be in three to four months — roughly the time it takes to provision new circuits," Toy says.
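Toy's threshold rule can be sketched with a simple linear trend fit. This is a minimal, hypothetical sketch: the utilization figures, the 75 percent threshold and the provisioning lead time are illustrative, not real measurements.

```python
# Minimal sketch of a capacity-threshold forecast (hypothetical data).
import numpy as np

# Monthly circuit utilization samples, in percent of capacity (illustrative)
utilization = np.array([52.0, 55.5, 58.0, 61.5, 64.0, 67.5])  # last 6 months
months = np.arange(len(utilization))

# Fit a simple linear trend: utilization ~ slope * month + intercept
slope, intercept = np.polyfit(months, utilization, 1)

THRESHOLD = 75.0  # upgrade circuits when the forecast reaches 75% of capacity
# Solve for how many months until the trend line crosses the threshold
months_until_threshold = (THRESHOLD - utilization[-1]) / slope

print(f"Growth rate: {slope:.1f} points/month")
print(f"Projected to reach {THRESHOLD:.0f}% in ~{months_until_threshold:.1f} months")
if months_until_threshold <= 4:  # assumed 3-4 month provisioning lead time
    print("Start the procurement process now.")
```

A linear fit is the simplest possible model; in practice a seasonal time series method would replace `np.polyfit`, but the decision logic (forecast, compare to threshold, act within the lead time) is the same.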
"When your analysis shows you'll be at 75 percent in three to four months, start the procurement process."

Predictive algorithms can be applied against traffic, service, device and user behavior, essentially extending standard network statistical planning activities to cover many more dimensions of network and technology performance, Noya says. "Presently, the capacity planning approach relies on reference performance KPIs that are certified by technology providers and supported by the engineering design," he notes. "Application of AI/machine learning algorithms enables improvement of this approach with a continuous learning process to improve performance beyond what would be possible using static KPIs."

Ensuring performance and quality

"In the case of network performance and quality issues, forecasting algorithms help manage multiple dimensions of the analysis and decide which events have the most impact on results," Noya says. Deep learning can be a particularly useful tool for network performance/quality optimization. "When you have a dataset that includes records of events you want to predict, you can train a deep neural network on that data," Nicholson says. If the deep net is properly trained, it can accurately predict when those events are likely to occur. "When you can predict capacity problems accurately (for example), you can act pre-emptively to rebalance the load on your network and provision the network with more capacity," he explains.

According to Toy, network performance and quality problems are similar to manufacturing problems. "The more data you have about the manufacturing process, and the more information you have about problems coming into repair centers, the more easily you can predict failure," he notes. It's pretty much the same for network problems, he says.
"Predict failures and performance problems at the edge and in the core of your network by forecasting error rates, predict the failure of components based on logs, and take action before the problem is noticeable."

Predictive analytics can also examine trends in data traffic patterns based on usage type and provide an early warning whenever it discovers possible issues. "For example, low-priority real-time traffic that uses UDP (User Datagram Protocol) will start seeing performance issues before higher-priority traffic is impacted," says Atif Mir, CIO network infrastructure advisory leader at professional services firm KPMG. "A good predictive analytics tool can predict the impact and, if empowered, can make changes to avoid such impact," he explains.

Proactive security

Most networks are secured through firewalls supporting intrusion detection and packet analysis. Yet attackers are becoming smarter and craftier. "Today's bad actors are much more sophisticated and sometimes organized and sponsored like an enterprise," Mir says. "Proactively defending the network requires a much different approach, and predictive analytics is one of them."

Predictive analytics enables security analytics platforms to recognize anomalous behavior from systems, devices and/or users. "This fills a much-needed gap," Soldato says. "With NGFW (Next-Generation Firewall) and endpoint technology, predictive analytics ... proactively identifies potential outside or even zero-day threats by recognizing what a file should or should not do in terms of the way it behaves when it is downloaded and executed or even simply saved."

Insider threat risk mitigation and rapid detection of security breaches are more important than ever, and predictive analytics can provide clues that escape human observers. "Predictive analytics, along with NetFlow or sFlow data, can help weigh the risk of devices on your network (including end users) and predict which are at highest risk," Toy says.
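The kind of flow-based risk weighting Toy describes can be sketched by baselining each device's traffic against its own history and ranking today's readings by how far they deviate. The device names and byte counts below are hypothetical, and a z-score is just one of many possible anomaly measures.

```python
# Minimal sketch of flow-based device risk scoring (hypothetical data).
import statistics

# Hypothetical daily outbound byte counts per device, e.g. from NetFlow/sFlow
history = {
    "printer-03": [1.2e6, 1.1e6, 1.3e6, 1.2e6, 1.4e6],
    "laptop-17":  [4.0e8, 3.8e8, 4.2e8, 3.9e8, 4.1e8],
    "camera-09":  [2.0e6, 2.1e6, 1.9e6, 2.0e6, 2.2e6],
}
today = {"printer-03": 1.3e6, "laptop-17": 4.0e8, "camera-09": 9.5e7}

def risk_score(device: str) -> float:
    """Z-score of today's traffic against the device's own baseline."""
    mean = statistics.mean(history[device])
    stdev = statistics.stdev(history[device])
    return (today[device] - mean) / stdev

# Rank devices by anomaly score; a large z-score flags elevated risk
# (here the camera is suddenly moving ~50x its normal traffic)
for device in sorted(today, key=risk_score, reverse=True):
    print(f"{device}: z = {risk_score(device):+.1f}")
```

A real deployment would model many more features per device (ports, destinations, time of day) and feed them to a trained model rather than a single statistic, but the principle is the same: deviation from a learned baseline is the risk signal.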
The cost of a network breach is typically several million dollars, Toy notes. "The more quickly you can detect and correct the breach, the less cost and impact there will be to your company's reputation."

Controlling costs

Comparing network pricing structures becomes complicated when multiple technical alternatives are available. "Software-defined networks (SDNs), when coupled with predictive analytics, can help simplify forecasting and adjust network costs," Mir says.

"Analytics platforms that have implemented predictive analytics can help forecast network costs because they have the ability to ingest and process large amounts of network data," Soldato says. Predictive analytics is a proactive forecasting technology, giving enterprises visibility into what network usage, performance and quality will look like months and even a few years into the future. "In turn, this helps the enterprise prepare and forecast for network upgrades, new devices and personnel," he notes.

As a prerequisite for network cost prediction, it is necessary to build network cost allocation foundations that enable the attribution of costs, both capital expenditures (CapEx) and operating expenditures (OpEx), to specific technical services or end-customer products, Noya notes. "This is a difficult process for converged operators where network elements support multiple products and services, but ultimately necessary to accurately understand the total cost of ownership for products and services," he says. The network inventory should also be matched with the procurement catalog to create a continuum between network designs, network capacity and expansion costs. "Network predictive capacity analytics are employed to understand forward-looking costs," Noya says.

Getting started

The first step in deploying predictive analytics for any form of network optimization is collecting and organizing clearly defined historical evidence of past problems.
"You must know what constitutes normal functioning to identify what is abnormal," says John Crupi, vice president and engineering system architect for Greenwave Systems, an internet of things (IoT) software developer. "A spike in performance issues may be normal depending on the nature of the network, or it may be an early indicator of serious issues to come," he explains.

CIOs also need to set a predictive analytics strategy and roadmap that ties to business needs. "Within the scope of this strategy, pick a manageable goal that could be used as a proof of concept," Mir says. "The next step is to figure out all the factors that contribute to its variability and get access to any and all data/logs available for those variables."

"The best way to get started with predictive analytics is not to start with predictive analytics, but rather to start understanding and identifying patterns of behavior across systems," Crupi says. "These patterns set the foundation for applying predictive analytics."

Once a predictive analytics platform has been deployed, feed the machine learning models a significant amount of training data, Kastanis advises. Then rely on human experts to validate the initial predictions and execute changes in networks with expert approval until the accuracy of the ML models rises consistently above baseline expectations. "Until there are solid case studies to prove the accuracy of ML models, operators will be skeptical about taking the risk of letting ML models make network changes for autonomous network management," Kastanis says.

A worthwhile effort

Predictive analytics is not a solution; it's a tool derived from strategy, Crupi says. "It is just one part of an overall analytics arsenal," he notes.

"Many organizations want to jump into predictive analytics and immediately start training models to predict failure," Crupi says. But that's really not a good idea.
"Training a predictive model takes a tremendous amount of data and requires data scientists with access to historical context," he explains. "It's best to start with basic analytics and visualizations so you can start to 'see' what is happening."

Reflecting on his own experience, Kastanis says that the benefits predictive analytics provides are worth all of the time and effort needed to structure and deploy the technology. "It is an amazing idea that will significantly stabilize network performance and optimize OpEx for network management, thereby making network management much more effective," he says.