Over the past few years, computer vision applications have become ubiquitous. From phones that recognize the faces of their users, to cars that drive themselves, to satellites that track ship movements, the value of computer vision has never been clearer.

But hardware shortages and labor disruptions in the pandemic’s wake are challenging companies’ ability to make good on the promise of computer vision, even as the pandemic itself has accelerated the potential of its use cases.

Following is a look at how companies across a range of industries are deploying computer vision to improve and optimize key business processes, from retail fulfillment to healthcare diagnostics.

What is computer vision?

Computer vision is a field of artificial intelligence focused on processing images and videos to extract meaningful information. Examples of computer vision in action include optical character recognition, image recognition, pattern recognition, facial recognition, and object detection and classification.

Industries that make heavy use of computer vision include manufacturing, healthcare, automotive, agriculture, and logistics and supply chain. In enterprises, top drivers for deploying computer vision include automation, process improvement and productivity, and regulatory compliance and safety.

“The market is growing so fast that it’s hard to keep tabs on it,” says IDC analyst Matt Arcaro, adding that the pandemic has accelerated computer vision adoption — for example, for monitoring occupancy to help ensure social distancing, or to keep track of how many people were using public transit.

“Because there are plenty of CCTV cameras in place, it’s an elegant upgrade” to incorporate computer vision, Arcaro says.
“And, in many cases, due to government mandates or organizational choices, the investment dollars have been there.”

According to IDC, the total worldwide market for computer vision technologies will grow to $2.1 billion this year, from $760 million in 2020, with a compound annual growth rate of 57% expected through 2025, for a total market value of $7.2 billion.

Most of this market is currently on-prem, but IDC expects public cloud deployments to account for 48% of computer vision spending by 2025.

Scaling and expediting retail fulfillment and delivery

The retail industry has seen dramatic disruption during the pandemic, with customers moving more of their shopping online and increasingly switching to home delivery.

Walmart, for example, reported that the number of shoppers getting their orders delivered increased six-fold compared to before the pandemic. To meet the challenge, the multinational chain of hypermarkets increased pickup and delivery capacity by 20% last year and plans to increase it another 35% this year.

To make this happen, Walmart is investing in several categories of technology equipped with computer vision, including drones and autonomous vehicles. The company last July announced plans to roll out robots from Symbotic in 25 of its 42 regional distribution centers. The robots use computer vision, among other AI technologies, to move freight around warehouses.

Meanwhile, American supermarket chain Kroger has been investing in micro-fulfillment centers — small-scale, heavily automated distribution warehouses located close to where customers live. The goal is to deliver groceries to customers in as little as 30 minutes, according to the company.
Since last summer, Kroger has opened facilities in Florida, Alabama, Texas, California, Ohio, and Georgia, and has plans to open 17 more facilities, including both hubs and spokes, over the next 24 months.

At a hub site, more than 1,000 bots “whizz around giant 3D grids, orchestrated by proprietary air-traffic control systems,” according to the company. Instead of moving around entire pallets of products, as happens in a regional distribution center, here the robots fetch individual items. Computer vision is used to sort and pack items so that, for example, heavy items are at the bottom and bags are evenly weighted.

On-demand retail company Fabric, which specializes in micro-fulfillment centers for use by retailers that can’t build their own, uses automation extensively in its facilities, says co-founder Ori Avraham. “We use computer vision as a key capability of our robotic solution,” he says. “For example, robots’ accurate navigation on the floor is based on vision-based analysis of floor stickers. This process happens in real time as part of the robot navigation.”

The robotic picking arms use computer vision as well, he says. “For that, we use a segmentation and classification algorithm to allow us to pick and place items. Both of these capabilities are crucial to our ability to operate our micro-fulfillment centers successfully.”

Last month, Fabric opened a new micro-fulfillment center in Dallas, adding to its existing operations in New York; Washington, DC; and Tel Aviv.
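As a rough illustration of how a segmentation output can drive picking (a hypothetical sketch, not Fabric’s actual algorithm): given a grid of per-pixel instance labels, a picker can compute each instance’s area and centroid, then target the largest instance.

```python
# Hypothetical sketch: turning a segmentation mask into a pick target.
# `labels` is a 2D grid of instance ids (0 = background). We compute
# each instance's pixel area and centroid, then pick the largest one.

def pick_target(labels):
    stats = {}  # instance id -> (area, sum of rows, sum of cols)
    for r, row in enumerate(labels):
        for c, inst in enumerate(row):
            if inst == 0:
                continue  # skip background pixels
            area, sr, sc = stats.get(inst, (0, 0, 0))
            stats[inst] = (area + 1, sr + r, sc + c)
    if not stats:
        return None  # nothing segmented, nothing to pick
    inst, (area, sr, sc) = max(stats.items(), key=lambda kv: kv[1][0])
    return inst, (sr / area, sc / area)  # instance id and its centroid

# Toy mask: instance 1 (6 pixels) and instance 2 (3 pixels).
mask = [
    [0, 1, 1, 0, 2],
    [0, 1, 1, 0, 2],
    [0, 1, 1, 0, 2],
]
target = pick_target(mask)
```

A production system would work from a learned segmentation model’s masks and fold in reachability, occlusion, and grasp planning; this shows only the mask-to-target step.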
It has partnerships with Walmart, Instacart, and FreshDirect and plans to double its network of micro-fulfillment centers by the end of the year.

Streamlining and improving manufacturing processes

Manufacturing is another industry being revolutionized by computer vision, which is used extensively on production lines to inspect products, automate processes, and optimize productivity.

Mike Griffin, chief data scientist at Insight, a Tempe, Ariz.-based technology consulting firm, has worked with several manufacturing clients on computer vision projects. One engagement involved developing a system in which a handheld device could be used to photograph a bin of products and automatically count the products in the bin.

“[The client] wanted to be able to hire people with disabilities to do counting,” Griffin says. “It sounds like an easy [system to develop], but the challenge there is the vision application has to do more than interpret what it can see; it also has to interpret what it can’t see.”

Products might be stacked on top of each other, hiding those at the bottom from view. So the computer vision system had to take a two-dimensional image and translate it into a three-dimensional model. “We needed to be at least 80% accurate on our inventory, including boxes wrapped in clear plastic with a lot of glare on them,” says Griffin.

To train the system, employees walked around with cell phones and took videos. Then an intern manually labeled 500 images taken from those videos, containing 30,000 boxes. So few images were required because computer vision is a relatively mature area of artificial intelligence, with many pre-trained models available. To create a new model for a custom data set, such as images of boxes, transfer learning is used.

“We’ll take a model that’s been trained on millions of images of cats and dogs and cars and whatnot,” Griffin says.
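The pattern Griffin outlines, keeping a pretrained feature extractor frozen and retraining only a small new classifier on the custom images, can be sketched in miniature. Everything below is a hypothetical stand-in: short random vectors play the role of images, a fixed random projection plays the role of the pretrained backbone, and the “custom dataset” is synthetic.

```python
import math
import random

# Toy sketch of transfer learning: a frozen "pretrained backbone" is
# reused as-is, and only a small new classification head is trained on
# a handful of labeled examples. All names and data are stand-ins.

FEAT_DIM = 8  # size of the backbone's feature vector


def pretrained_backbone(image):
    """Stand-in for a network trained on millions of generic images.

    Maps a raw input to a feature vector; it is never updated during
    fine-tuning (here, a fixed pseudo-random linear projection)."""
    rng = random.Random(42)  # fixed seed: the "weights" never change
    weights = [[rng.uniform(-1, 1) for _ in range(len(image))]
               for _ in range(FEAT_DIM)]
    return [sum(w * x for w, x in zip(row, image)) for row in weights]


def train_head(examples, labels, epochs=200, lr=0.1):
    """Train only the new linear head (logistic regression) on the
    small custom dataset; the backbone stays frozen throughout."""
    feats = [pretrained_backbone(x) for x in examples]
    w, b = [0.0] * FEAT_DIM, 0.0
    for _ in range(epochs):
        for f, y in zip(feats, labels):
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            z = max(-30.0, min(30.0, z))     # clamp for numeric safety
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            g = p - y                        # log-loss gradient w.r.t. z
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b


def predict(w, b, image):
    f = pretrained_backbone(image)
    return 1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else 0


# Tiny synthetic "custom dataset" (the 500-image analogue): class 0
# inputs skew negative, class 1 inputs skew positive.
random.seed(0)
data = ([[random.uniform(-1, 0) for _ in range(4)] for _ in range(20)]
        + [[random.uniform(0, 1) for _ in range(4)] for _ in range(20)])
labels = [0] * 20 + [1] * 20
w, b = train_head(data, labels)
accuracy = sum(predict(w, b, x) == y for x, y in zip(data, labels)) / 40
```

With a real framework, the backbone would be an off-the-shelf network with its weights frozen and the head a new final layer retrained on the few hundred labeled custom images; the mechanics are the same.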
“So a lot of the hard work has already been done. And then we can add our 500 images of boxes or 1,000 images of tires to that model and retrain it with that additional set of images.”

Transfer learning allows for faster model training with smaller data sets than would otherwise be possible. “You can also create synthetic data,” Griffin adds. “For example, a construction company wanted to identify hazards and they only had a couple of hundred training images. We created additional images, putting those orange hazard cones in, say, a field, or a parking lot, to augment their set of images to boost that training.”

Another innovative use of image processing in manufacturing is to translate testing data into images and then apply machine learning to the generated images.

“Test failures can be near each other but it’s not obvious that they’re related to each other until you translate that data into images,” Griffin says. “They’re near each other in the test space, as opposed to being near each other in physical space.”

Improving healthcare diagnostics

In healthcare, computer vision is used extensively in diagnostics, such as in AI-powered image and video interpretation. It is also used to monitor patients for safety, and to improve healthcare operations, says Gartner analyst Tuong Nguyen.

“The potential for computer vision is enormous,” he says. “It’s basically helping machines make sense of the world. The applications are infinite — really, anything you need to see.
The entire world.”

According to the fourth annual Optum survey on AI in healthcare, released at the end of 2021, 98% of healthcare organizations either already have an AI strategy or are planning to implement one, and 99% of healthcare leaders believe AI can be trusted for use in healthcare.

Medical image interpretation was one of the top three areas cited by survey respondents where AI can be used to improve patient outcomes. The other two areas, virtual patient care and medical diagnosis, are also ripe for computer vision.

Take, for example, idiopathic pulmonary fibrosis, a deadly lung disease that affects hundreds of thousands of people worldwide. The disease has no known cause or cure and is very difficult to diagnose. In the US alone, about 40,000 people die from the disease every year.

According to PwC, it typically takes more than two years for idiopathic pulmonary fibrosis to be diagnosed; by then, the average life expectancy of those finally diagnosed is just three to five years.

The Open Source Imaging Consortium Data Repository, supported by PwC and Microsoft, is building a platform to share anonymized imaging data to help with diagnosing the disease. By the end of this year, the organization expects to have 15,000 scans in its database.

With AI and machine learning, doctors can diagnose the disease faster and more accurately, giving them more time to treat patients.

And, in the future, the same platform can also be used for other rare diseases.

Other industries being disrupted by computer vision

In the automotive sector, computer vision is used to assist drivers and to monitor drivers to ensure they are paying attention to the road. It’s also key to enabling self-driving cars, a major growth engine for the use of computer vision in the automotive industry, says IDC’s Arcaro.

But there is another key market for autonomous driving, and computer vision in general, says Arcaro: agriculture.
“John Deere is doing something really critical there,” he says, noting that computer vision is also being used in agriculture to sort products, to monitor plant and animal health, and to monitor and manage agricultural assets.

In cybersecurity, image analytics can be used to read signatures or spot phishing websites that are designed to look similar to real websites — but different enough to evade other detection methods.

In the hospitality industry, computer vision helps track where guests go while onboard cruise ships in order to improve their experience.

In the financial services industry, image processing captures data from documents to improve the efficiency of business processes.

“[Computer vision] spans almost every industry,” says Dinesh Batra, vice president of data and artificial intelligence at Capgemini Invent. “It has been a hugely successful tool for enterprises in recent years — and its prominence will only continue to accelerate.”

Visibly bright future

And yet, despite the abundance of use cases already deployed, computer vision has significant room for growth.

“It’s still early days,” says Gartner’s Nguyen. “I expect to see more vendors show up in this space addressing different elements of the value chain. There is still a lot of opportunity to come as the technology gets better, more affordable, and more accessible. We’ll start seeing it used anywhere and everywhere.”

It’s not all smooth sailing, however. According to Gartner, obstacles to adoption include hardware shortages and lack of processing capabilities. In some applications, there are still issues with accuracy.
Computer vision systems also need to be integrated into production lines as well as back-end systems, both of which can be a challenge.

So while COVID-19 has increased the demand and potential for computer vision in business, the hardware shortages and labor disruptions that followed in the pandemic’s wake have made it difficult for many enterprises thus far to capitalize on the promise of the technology.

But as those issues abate, companies will assuredly be primed to give the technology a close look.