Even with the best planning, companies should be ready to adjust projects when faced with unexpected challenges or as parameters change due to business needs. The same holds true for AI model development, where initial results may lead teams down a new path that requires rethinking their AI infrastructure.
“AI is never one and done,” said Tony Paikeday, senior director of AI systems at NVIDIA. “Models drift over time because the data that fuels them changes over time. If you trained a model on data from last month or last year, the data you feed it next month or next year is going to look wildly different.”
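The drift Paikeday describes can be monitored by comparing the distribution of incoming data against the data a model was trained on. The following is a minimal illustrative sketch, not BEN Group's or NVIDIA's actual method: it bins a baseline sample and a current sample on the same scale and sums the differences in bin frequencies, so a larger score signals that the data has shifted.

```python
import random

def drift_score(baseline, current, num_bins=10):
    """Rough drift check: bin both samples on the baseline's range and
    total the absolute differences in bin frequencies. A score near 0
    means the distributions match; larger scores mean drift."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / num_bins or 1.0

    def freqs(sample):
        counts = [0] * num_bins
        for x in sample:
            i = min(max(int((x - lo) / width), 0), num_bins - 1)
            counts[i] += 1
        return [c / len(sample) for c in counts]

    return sum(abs(a - b) for a, b in zip(freqs(baseline), freqs(current)))

random.seed(0)
# Hypothetical feature values: last year's training data vs. a shifted
# distribution arriving this year (mean and spread have changed).
last_year = [random.gauss(0.0, 1.0) for _ in range(5000)]
this_year = [random.gauss(0.8, 1.3) for _ in range(5000)]

print(drift_score(last_year, last_year[:2500]))  # low: same distribution
print(drift_score(last_year, this_year))         # higher: data has drifted
```

In practice, teams set a threshold on a statistic like this and trigger retraining when it is exceeded; production systems typically use more robust tests, but the principle is the same.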
Also, as AI models grow and expand, infrastructure needs may change, especially when enterprises weigh the costs of data transfer, storage and other issues surrounding data gravity.
Analyzing media content at scale
BEN Group, the world’s largest product placement and influencer marketing company, saw that consumers were spending less time on traditional media and shifting to streaming and social media channels, while also using ad blockers and skipping ads. These new challenges for advertisers led the company to develop AI models that find creators and influencers who provide more authentic, natural and non-disruptive brand-to-customer interactions.
Initially, BEN Group’s data science teams used cloud computing to develop and experiment with their AI models. The applications were trained on terabytes of unstructured data, including images, text, video and audio, to identify the right influencers for a specific brand. However, with 50 million self-identified creators to evaluate, the volume of data to process became too great.
“As the size of our datasets started to scale from megabytes to terabytes due to the adoption of deep learning for video understanding and focusing on AI research, we realized that cloud computing would become infeasible to handle the number of experiments necessary to develop new neural architectures and algorithms,” said Schubert Carvalho, Director of AI Research at BEN Group. “We needed a dedicated resource to provide maximum performance for running multiple experiments and storing vast amounts of data locally.”
The company chose NVIDIA DGX A100, an on-premises solution that allows BEN Group to analyze terabytes of video content in a few hours instead of days or weeks. For example, the team can analyze 100,000 Instagram posts per week, derive insights and even create customized algorithms for clients. Overall, content processing became four times faster than on the legacy GPU-powered servers the company previously used.
Now, the AI models can predict the entire sales funnel — from impressions and views to clicks, and even conversions from content creators across Hollywood, music, and social media influencers — to find the best avenues for partnership.
The models can also identify fraudulent influencers and bot activity, helping to recover part of the estimated 15% of advertiser spend, about $1.3 billion annually, that is lost to fraud.
The infrastructure switch has paid dividends for BEN Group. In certain cases, customer acquisition costs decreased by 32%, and some clients saw a conversion rate increase of 39%. The AI technology predicted eight of the top 10 shows in the fall of 2020, with similar accuracy for new streaming shows targeted for product placement.
“The DGX A100 delivered powerful performance to our AI research team,” Carvalho said. “We fostered the development of new AI solutions that were not possible with our previous AI infrastructure.”
Click here to discover all the benefits of an AI infrastructure without all of the heavy lifting, with NVIDIA DGX Systems, powered by NVIDIA A100 Tensor Core GPUs and AMD EPYC CPUs.
To learn more about BEN Group’s AI success, read their case study.
About Keith Shaw:
Keith is a freelance digital journalist who has written about technology topics for more than 20 years.