Generative artificial intelligence has burst into marketing and advertising with astonishing speed. It is no longer a future promise; it is a tool of the present. More than half of marketers actively use it for key tasks such as creative content production and audience segmentation. Nearly all plan to expand its use over the next year, establishing AI as a central component of digital strategy.
However, this race for innovation hides a concerning reality. As adoption accelerates, safety measures are failing to keep pace, and a dangerous gap is opening between AI’s potential and the risks of its rushed implementation. These problems are not theoretical; they are already happening, with real consequences.
This article delves into the most striking findings of a recent study by IAB and Aymara, “AI Adoption Is Surging in Advertising, but is the Industry Prepared for Responsible AI?”. From those findings, we draw five counterintuitive truths that every industry professional should know.

The surprise: 7 out of 10 specialists have already experienced an AI “accident”.
AI risks are not a distant threat; they are an operational reality. An alarming 70% of marketers have already experienced at least one incident related to the use of AI in their advertising campaigns.
These “accidents” are not minor. They range from biased or inappropriate content to off-brand material, and specialists also face more complex threats such as misinformation, loss of creative control, and malicious instructions or jailbreaks. The consequences have been tangible and costly: 40% of those affected had to pause or remove ads, and more than a third suffered brand damage or had to manage PR crises. Only 6% considered the impact minimal.
The great paradox: almost everyone feels prepared, yet most are failing.
Despite this alarming reality, the industry suffers from overconfidence bordering on denial. Nearly 90% of respondents claim they feel prepared to detect AI-generated problems before launching a campaign.
This huge disconnect suggests a dangerously false sense of security. Confidence rests on traditional workflows such as human review and brand-integrity checklists, but these basic measures are insufficient. The high rate of reported problems shows that current methods cannot manage the complexity and scale of the risks generative AI introduces, while more advanced practices such as red-team testing and automated evaluation tools remain uncommon.
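To make “automated evaluation tools” concrete, here is a minimal sketch of what a pre-launch check on AI-generated ad copy might look like. It is purely illustrative: the rules, labels, and function name are hypothetical assumptions, not anything prescribed by the IAB and Aymara report, and a real tool would rely on far richer signals than keyword rules.

```python
# Hypothetical sketch of an automated pre-launch check for AI-generated
# ad copy. Every rule and name here is illustrative, not from the report.
import re

# Toy brand-safety rules: each label maps to a pattern that flags it.
RULES = {
    "unsubstantiated claim": re.compile(r"\b(guaranteed|cures?|100% effective)\b", re.IGNORECASE),
    "off-brand tone": re.compile(r"\b(cheap|knock-off)\b", re.IGNORECASE),
}

def prelaunch_check(copy: str) -> list[str]:
    """Return the labels of all rules the copy triggers (empty list = clean)."""
    return [label for label, pattern in RULES.items() if pattern.search(copy)]

if __name__ == "__main__":
    draft = "Our cheap new serum is guaranteed to cure dull skin."
    for flag in prelaunch_check(draft):
        print(f"FLAG: {flag}")  # flagged drafts would be routed to human review
```

Production-grade evaluation tools go well beyond this, layering model-based classifiers and red-team prompt suites on top, but the principle is the same: every draft is screened automatically before launch instead of relying on manual review alone.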
Money isn’t following the problem: investment in AI safety is stagnant.
Despite the obvious risks and the incidents already experienced, investment in safety is not growing at the pace it should. Fewer than 35% of specialists plan to increase their brand-integrity oversight budgets in the next 12 months.
This gap between risk and investment is critical. Companies are scaling their use of AI exponentially, but without dedicating the necessary resources to build strong, reliable safety systems. This forces every leader to ask: are we treating AI safety as an avoidable cost, or as a fundamental investment in brand survival?
The risky game of “whose job is it?”: AI leadership is ambiguous.
One of the biggest obstacles to responsible AI is the lack of clear leadership. While many point to executive leadership, an AI committee, or the marketing and legal teams as those responsible, the landscape is dangerously fragmented: in 14% of organizations, respondents admitted that “no one” owns this function, while others simply were not sure.
This leadership vacuum is a ticking time bomb. Without clear ownership and a defined chain of command, safety protocols are not implemented, risks go unnoticed, and critical decisions are postponed. When an incident occurs, the lack of a clear responsible party only worsens the consequences.
A ray of hope: most want external help to navigate the chaos.
Despite the challenges, there is an encouraging finding: the industry recognizes the magnitude of the problem and is willing to seek help. Only 6% believe that current safety measures are adequate, and more than 90% of respondents said they would consider a third-party solution to assess risks such as hallucinations, bias, or off-brand content.
The value of this external help is clear to specialists, who want both operational safety and reassurance. One respondent said it would provide “peace of mind,” while another noted that it would “reduce risk for our brand and business.”
This openness is an extremely positive sign. It shows that industry leaders understand that governing and controlling AI is a complex discipline requiring specialized tools and expertise, and that they are ready to collaborate with experts to strengthen their operations.

The research paints a clear picture: the advertising industry is building the airplane mid-flight. The enthusiasm for the technology’s potential is undeniable, but the dangers of uncontrolled implementation are equally real — and already causing damage.
The good news is that marketing professionals are asking for help. They want better standards, more robust tools, and expert support to use AI responsibly. As we integrate AI more deeply into our work, how can we ensure that our ambition does not outpace our caution?
The path toward safe and effective AI doesn’t have to be complicated, but it does require immediate commitment. As the report concludes:
“With a few practical steps, responsible AI is not only possible — it can become the norm.”
The data in this article comes from the IAB and Aymara report titled “AI Adoption Is Surging in Advertising, but is the Industry Prepared for Responsible AI?”
For those who would like to explore practical applications of AI in more depth, IAB also offers the resource “AI in Advertising Use Case Map,” a complete guide to current and emerging use cases in the industry.
