Fraudsters using GenAI to rapidly create websites full of low-quality content are creating a negative stir in the industry: survey
A firm that provides digital ad verification and media quality checks has released findings from a 2024 survey of 1,000 advertising professionals worldwide on generative AI trends.
First, 54% of respondents believed that generative AI (GenAI) was significantly harming media quality due to abuse by unscrupulous players.
Second, respondents cited a network of over 200 web properties consisting mostly of AI-generated, ad-supported content. These sites often mimic legitimate publishers, posing threats to ad spend and campaign performance. Dozens of them use terms such as “sports” or “sport” in their URLs, capitalizing on the perception of sports content as “safer” or “more suitable” than traditional breaking news. However, respondents felt that simply being sports-focused does not inherently justify ad spend.
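The "sports-in-the-URL" pattern described above lends itself to a simple string heuristic. The sketch below is purely illustrative: the domain names are invented examples, and real verification vendors would combine many more signals than a keyword match.

```python
import re

# Heuristic inspired by the survey's observation that dozens of low-quality,
# AI-generated sites use "sport"/"sports" in their URLs to appear brand-safe.
# Matches the keyword only as a whole token, not as a substring.
SPORT_PATTERN = re.compile(r"sports?", re.IGNORECASE)

def flags_sport_keyword(domain: str) -> bool:
    """Return True if the domain contains 'sport' or 'sports' as a token."""
    # Split on dots, hyphens, and underscores so that
    # 'daily-sports-news.example' yields the token 'sports'.
    tokens = re.split(r"[.\-_]", domain)
    return any(SPORT_PATTERN.fullmatch(token) for token in tokens)

# Hypothetical domains for demonstration only.
candidates = [
    "daily-sports-news.example",   # flagged: 'sports' is a standalone token
    "sportupdatehub.example",      # not flagged: keyword buried in one token
    "quality-journalism.example",  # not flagged
]
flagged = [d for d in candidates if flags_sport_keyword(d)]
```

A keyword flag like this would at most mark a domain for closer review; on its own it would also catch legitimate sports publishers, which is exactly why the survey's respondents stressed that sports-themed URLs are not a reliable proxy for quality.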
Third, respondents described the user experience on these unethical sites as poor, negatively impacting campaign performance for any human visitors. These sites often worsened the problem by overcrowding their pages with ads, creating a frustrating and cluttered reading experience.
Fourth, respondents noted that, while AI-generated sites (or sites containing purely AI-generated content) may rely entirely on AI to produce believable articles at scale, they also regularly scrape and plagiarize content from legitimate publishers.
Fifth, respondents felt the offending websites erode trust in programmatic media buying, diverting ad budgets from quality publishers to low-quality or fraudulent inventory. Beyond this direct financial loss, ads placed on these sites can damage brand reputation through association with untrustworthy, poorly made content.
This trend, disclosed in the survey commissioned by DoubleVerify, highlights the need for tools that help brands avoid unsuitable or harmful placements, based on both content and overall presentation. Relying on static exclusion lists to block known low-quality sites has proven ineffective because new domains are created faster than the lists can be maintained. A more durable approach would be to deploy AI and machine learning to identify such sites dynamically and update the exclusion list automatically.
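The dynamic exclusion-list idea can be sketched in miniature. Everything below is an assumption for illustration: the signals (ad density, content duplication, domain age), the weights, and the threshold are invented stand-ins for the far richer models a real verification vendor would use.

```python
from dataclasses import dataclass

@dataclass
class SiteSignals:
    """Hypothetical quality signals gathered per site."""
    domain: str
    ad_density: float        # fraction of page area occupied by ads (0-1)
    duplicate_ratio: float   # fraction of content matching other publishers (0-1)
    domain_age_days: int     # newly registered domains are riskier

def risk_score(s: SiteSignals) -> float:
    """Combine signals into a 0-1 risk score (weights are illustrative)."""
    newness = 1.0 if s.domain_age_days < 30 else 0.0
    return 0.4 * s.ad_density + 0.4 * s.duplicate_ratio + 0.2 * newness

def update_exclusion_list(sites, exclusion_list, threshold=0.6):
    """Append any site scoring at or above the threshold to the list."""
    for s in sites:
        if risk_score(s) >= threshold and s.domain not in exclusion_list:
            exclusion_list.append(s.domain)
    return exclusion_list

# Invented example sites: one new, ad-heavy scraper and one established publisher.
sites = [
    SiteSignals("fresh-sport-pages.example", ad_density=0.8,
                duplicate_ratio=0.9, domain_age_days=10),
    SiteSignals("established-news.example", ad_density=0.2,
                duplicate_ratio=0.1, domain_age_days=4000),
]
excluded = update_exclusion_list(sites, [])
```

Because the scoring runs continuously over freshly crawled domains, the exclusion list grows as new ad-farm sites appear, rather than waiting for a human curator to discover each one.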