Still, while ideas abound, putting them into practice remains a challenge. How can marketers realistically incorporate generative AI into their daily workflows at this stage? In the latest episode of fifty-five's Data Break podcast, I chatted with Hugo Loriot, Head of Data and Technology Integration at The Brandtech Group, to discuss how GenAI is shaping the future of business.
Below is a preview of our discussion on how Generative AI is transforming business as we know it and how brands can better prepare for what comes next.
The first GenAI use cases to be developed centered on content production, as using an LLM to optimize text for SEO or write emails felt like the most logical starting point. Yet while content generation remains the only production-ready application for marketers, some are already shifting their attention to strategy, media, and analytics.
Another notable shift is where brands choose to focus their attention internally. To stay ahead of the curve in a fast-changing environment, they are pulling in new stakeholders early to evaluate new technologies and determine where to integrate GenAI and in which order of priority. This requires close cross-department collaboration, particularly with legal and privacy teams, since any AI application under consideration must comply with emerging regulations such as the European AI Act.
Before investing in new GenAI tools, businesses must work on their data ownership, transparency, and governance. To do so, brands must take a top-down approach to responsible AI-powered business transformation, with major stakeholders defining corporate policies to clarify how to safely use AI for a competitive advantage.
A strong AI governance framework must address three key points:
A prompt containing company information must not be allowed to feed the model's training or be retrievable by a third party. Brands therefore need to agree on specific terms and conditions with the model provider or restrict which interactions are permitted.
Whether a prompt is used to generate an image, text, video, or anything else, the content created is not necessarily “owned” by the prompt writer. That status depends not only on the model provider but also on local legislation. Today, in the US, it is still unclear whether or not an AI-generated image is owned by the person who prompted the model.
The ambiguity mentioned above is part of why human oversight is vital, not only to provide guardrails for the model but also to edit the output with a more personal touch, thus reinforcing ownership.
Beyond quality inputs and a strong data foundation, brands should also pay special attention to their shift from third-party cookies to first-party data, as the former are being gradually phased out. With a more mindful approach to data collection and governance, far from the "Wild West" days of the early internet, transitioning to GenAI-augmented processes will be much more top-down, organized, and manageable.
The same mindfulness is recommended when choosing which use cases to prioritize. Our clients have seen great success by focusing on "safer" internal use cases dedicated to quality optimization, time-saving, and efficiency. By concentrating on these safer use cases, businesses can build the right level of governance, safety, control, human involvement, and testing to pave the way for future, riskier GenAI applications.
In the near future, a convergence between traditional AI (prediction-oriented) and generative AI (creation-oriented) seems highly likely, opening up countless possibilities for brands. Additionally, techniques such as Retrieval-Augmented Generation (RAG) can augment GenAI models by fetching data from external sources, including market research, to allow for deeper dives into specific topics - a potential game-changer for companies with limited access to first-party data, which is the case with many traditional industries such as CPG.
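To make the RAG idea more concrete, here is a minimal, illustrative sketch of the retrieve-then-generate loop: relevant market-research snippets are ranked against a query, and the best matches are prepended to the prompt sent to a GenAI model. The `embed` and `generate` callables are hypothetical placeholders for whichever embedding and text-generation services a brand already uses; this is a sketch of the pattern, not a production implementation.

```python
from typing import Callable, List
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Standard cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def retrieve(query: str, documents: List[str],
             embed: Callable[[str], np.ndarray], k: int = 3) -> List[str]:
    # Rank candidate documents by semantic similarity to the query, keep the top k.
    query_vec = embed(query)
    ranked = sorted(documents,
                    key=lambda doc: cosine_similarity(embed(doc), query_vec),
                    reverse=True)
    return ranked[:k]


def answer_with_context(query: str, documents: List[str],
                        embed: Callable[[str], np.ndarray],
                        generate: Callable[[str], str]) -> str:
    # Fetch supporting snippets, then ask the model to answer using only that context.
    context = "\n\n".join(retrieve(query, documents, embed))
    prompt = (
        "Using only the context below, answer the question.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )
    return generate(prompt)
```

In practice, the document store would hold the external sources mentioned above (market research, category reports, and so on), which is precisely what lets brands with little first-party data still ground the model's output in relevant material.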
Listen to the full episode for additional insights.