[This article was originally published in Dutch on Emerce.]
Viewability is a hot topic in the online marketing industry. Advertisers who know if and how long their creatives are in-view are able to spend their media budgets more effectively. Specialized viewability vendors provide the required data, but how reliable are they? On behalf of ING and Mindshare, Bannerconnect put viewability to the test.
For this unique study, we selected the ten most used viewability vendors in the Dutch online media landscape: AOL / Adtech, Sizmek, Active View, Comscore, Integral Ad Science, Alenty, Meetrics, MOAT, DMA and Double Verify. These vendors play an important role in assessing the effectiveness of online campaigns by supplying viewability data: the extent to which online ads are actually visible to consumers.
To increase the accountability of online marketing, the IAB launched a viewability standard in 2014. According to this standard an online display ad only counts as ‘visible’ if fifty percent of the ad has been visible for at least one second. Viewability can thus be an important factor in the calculation of the fee advertisers pay to publishers for displaying their ads.
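The standard described above reduces to a simple two-part check. A minimal sketch in Python; the function name and parameters are illustrative choices of ours, not any vendor's actual API:

```python
def is_viewable(visible_fraction, continuous_seconds):
    """Check an impression against the IAB display standard:
    at least 50% of the ad's pixels must be in view for at
    least one continuous second."""
    return visible_fraction >= 0.5 and continuous_seconds >= 1.0

# A banner fully in view for only half a second does not count
# as viewable under the standard, even though (as the study found)
# most vendors reported such impressions as in-view.
print(is_viewable(1.0, 0.5))  # → False
print(is_viewable(0.5, 1.0))  # → True
```

Note that both conditions must hold simultaneously: a banner that is 100 percent visible for 0.9 seconds fails the standard just as a banner that is 40 percent visible for ten seconds does.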
An important condition is that advertisers can obtain objective data in which the visibility of their ads is defined unambiguously. In addition to the data provided by publishers, advertisers often use independent viewability vendors. The problem, however, is that the data provided by these vendors can vary widely. This makes it difficult for advertisers to choose the vendor that best suits their objectives.
To assist ING in this decision we developed an objective test and approached ten of the most common viewability vendors in the Dutch online media landscape. Eventually, eight parties agreed to have the accuracy and consistency of their technology tested. Both direct and programmatic purchased impressions were part of the test.
To conduct an objective test, we loaded the technology of these eight vendors into a purpose-built website. “Thirteen different test scenarios were developed for the first part of the test,” says Patrick Boux de Casson, Quality & Control Executive. “We repeated these scenarios continuously on the test site. For example, an imaginary visitor scrolled from top to bottom at different speeds on pages where a banner was not displayed, or where a banner was fully displayed. Because we knew in advance what the outcome of each scenario should be, we were able to assess how accurate each vendor's reports were.”
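Because the ground truth of every scripted scenario was known in advance, a vendor's accuracy can be scored as the share of scenarios where its in-view verdict matches the expected outcome. A sketch of that comparison, with hypothetical scenario names and verdicts (the study's actual thirteen scenarios and per-vendor results were not published):

```python
def vendor_accuracy(expected, reported):
    """Fraction of scripted scenarios where the vendor's in-view
    verdict matches the known ground truth."""
    matches = sum(reported.get(name) == truth
                  for name, truth in expected.items())
    return matches / len(expected)

# Hypothetical ground truth for three example scenarios:
# True means the banner should be reported as in-view.
expected = {"slow_scroll_banner_visible": True,
            "fast_scroll_banner_hidden": False,
            "static_page_banner_half_visible": True}

# A vendor that wrongly counts the hidden banner as in-view:
reported = {"slow_scroll_banner_visible": True,
            "fast_scroll_banner_hidden": True,
            "static_page_banner_half_visible": True}

print(round(vendor_accuracy(expected, reported), 2))  # → 0.67
```

Running every vendor's reports through the same set of scenarios yields directly comparable accuracy scores, which is what makes the spread between vendors quantifiable.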
The second part consisted of a live test, in which the viewability technologies were used in a direct campaign and a programmatic campaign on the biggest publishers in the Netherlands: Sanoma, RTL, Persgroep, Telegraaf and Marktplaats. The exact same traffic was made available for programmatic deals and direct buying. The aim was to determine whether the viewability vendors would report in the same way across these two buying methods. This is of great interest to advertisers who want to make agreements with publishers about viewability.
By comparing the results from the controlled environment of the lab test to those of the live test, we obtained a clear understanding of the accuracy and consistency of the reports supplied by the viewability vendors. This produced some remarkable findings. Almost all tested vendors in the lab test reported a banner that was visible for a mere 0.5 seconds as an in-view banner, which does not conform to the IAB standard.
The performance of one of the tested vendors was so inadequate that the researchers no longer included it in the second part of the test. The vendors only took part under the strict condition that no vendor-specific test results would be mentioned in publications.
Other results of the test were also surprising. Although the conditions the different vendors reported on were exactly the same, their reports showed a very large spread. For example, the difference between the most and least accurate measurements in the lab test was 22 percent, and in the live test it even increased to 39 percent.
Since we don’t have access to the backend technology of these vendors, we don’t know the exact reasons for this. In general, however, we can conclude that the technology of some vendors is not accurate enough to measure the scenarios we created correctly.
Of course, it is important to select a viewability vendor that reports accurately, to make sure your campaign runs in positions where your impressions are actually in-view. The choice of vendor therefore also determines the success of your campaigns.
Based on both tests, MOAT, Sizmek and IAS turned out to be the vendors that best matched ING's objectives. In their advice, the researchers only took the viewability metric into account and left out other factors such as service and price. Advertisers who want to know which parties best suit their objectives will therefore need to approach a party with the know-how to objectively measure the performance of those vendors.
For ING, the results of the test were valuable for several reasons. “ING has a strong focus on brand advertising,” says Arjan Grootveld, who advises ING as an independent programmatic consultant in this area. “Obviously, we want to know exactly whether the reported reach is actually realized. With verified viewability data we can also take steps to ensure that our budget is optimally spent. By shifting budgets, or by making stricter agreements with publishers where viewability is disappointing, we can get the most out of our budget.”
Moreover, according to Grootveld the report offers greater insight into the dynamics and rules within this complex field. “Although there are all kinds of well-defined guidelines, it has become clear that not all parties are sticking to them. The fact that almost all vendors report an impression of half a second as ‘in-view’ is very remarkable to say the least.”
Grootveld is pleased with the improved transparency resulting from such investigations. “We learnt more about viewability and how it is measured, and can take advantage of this knowledge at a later stage,” he says. “In view of the rapid developments in this industry I can therefore imagine that we will do more of these tests in the future, and also take other related matters into account, such as mobile and video. As an advertiser, you cannot have enough of these valuable insights.”