Why Your CTV Results Look Better in the Dashboard Than in the Business

Nearly half of all TV viewing now happens through streaming, and advertisers are rushing to spend where the eyeballs are. The category is on track to hit $42 billion in ad spend by 2027, growing faster than any other marketing channel except social media.

But the infrastructure for spending that money well hasn't kept up. Truthset estimates roughly $7.4 billion of 2026 CTV ad spend will be wasted due to inaccurate identity data. And that’s not even accounting for attribution inflation, technology fees, or inventory quality gaps.

The industry has attracted middlemen who make the buying process significantly more complicated and expensive than it needs to be, and results harder to pin down.

This is not a “you” problem. It’s a structural gap in how most CTV campaigns are bought and measured. This guide is for marketers ready to close it.

 

Platform numbers can overstate performance.

Platform self-reported attribution is the first place to look when dashboard results and business reality diverge.

CTV platforms are notorious for grading their own homework. And they’re mostly giving themselves high marks. They count conversions from people who already intended to buy, use overly wide view-through attribution windows, and fail to include a control group.

The result is attribution inflation. In one audit, we saw in-platform attribution come in more than four times higher than what a third-party incrementality test showed on the same campaign.

 

What drives Connected TV over-attribution?

  1. No true control group. Without a group of comparable households who were not exposed to your ad, it’s difficult to separate the lift your campaign caused from conversions that were already going to happen. It's a classic case of confusing correlation and causation.

  2. View-through windows that are too long. If someone sees your ad on Monday and converts three weeks later, did the ad drive that conversion? Probably not. But many platforms will count it if the attribution window allows. The longer the window, the more inflated the numbers.

  3. Counting people you were already reaching. Retargeting audiences and CRM lists are common CTV targeting approaches. They can work well. But when your measurement is based on whether a person who saw your ad converted, and that person was already being reached across email, paid social, and display, you are not measuring TV's contribution. You are measuring the effect of your entire marketing mix with TV included.

  4. Inaccurate IP address matching. IP-to-email accuracy sits at 16%. That means the household-level targeting and conversion matching your platform is reporting may not reflect what actually happened. When the signals powering CTV buying are this unreliable, brands unknowingly serve ads to the wrong households and measurement tools record phantom outcomes.

None of this means your CTV campaigns are not working. It means the attribution model your platform is using cannot tell you whether they are. The solution requires multiple measurement approaches working together. For example, pair IP-based attribution with incrementality testing to confirm that both models point in the same direction.
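The control-group argument above can be sketched with a few lines of arithmetic. This is a minimal illustration with hypothetical conversion counts (not from any real campaign): platform attribution credits every exposed converter to the ad, while an incrementality test credits only conversions above the holdout baseline.

```python
# Hypothetical numbers for illustration only -- not from any real campaign.
def incremental_lift(exposed_conversions, exposed_size,
                     control_conversions, control_size):
    """Compare conversion rates between exposed and holdout households."""
    exposed_rate = exposed_conversions / exposed_size
    control_rate = control_conversions / control_size
    lift = (exposed_rate - control_rate) / control_rate
    return exposed_rate, control_rate, lift

exposed_rate, control_rate, lift = incremental_lift(
    exposed_conversions=500, exposed_size=100_000,   # 0.5% of exposed converted
    control_conversions=400, control_size=100_000,   # 0.4% would have converted anyway
)

platform_credited = 500          # platform counts every exposed converter
truly_incremental = 500 - 400    # only conversions above the holdout baseline
print(f"Platform credits {platform_credited} conversions; "
      f"incrementality test credits ~{truly_incremental} ({lift:.0%} lift)")
```

With these made-up numbers, the platform's count is five times the incremental one, which is how a dashboard ROAS can look several multiples better than a lift test on the same campaign.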

 

Your CTV fee stack costs more than you think.

40-60% of your programmatic CTV spend can be eaten by the technology stack before a single ad runs.

When you buy through a third-party DSP, your dollars pass through a supply-side platform (SSP), an ad exchange, a DSP license, data transfer fees, and ad serving costs. Each layer takes a cut. That cut is embedded in the CPM you are quoted, which is why it is hard to see. You’re quoted a $20 CPM for premium streaming inventory. But you may actually be paying $10 in fees stacked on top of $10 in media.
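The $20-quoted, $10-working math above can be made concrete. This sketch uses hypothetical fee rates (actual rates vary by vendor and contract) to show how individually modest layers compound into half the CPM:

```python
# Hypothetical fee rates for illustration -- actual rates vary by vendor.
quoted_cpm = 20.00  # what you're quoted per 1,000 impressions

fees = {
    "DSP license": 0.15,      # each rate is a share of the quoted CPM
    "SSP / exchange": 0.12,
    "data transfer": 0.10,
    "ad serving": 0.05,
    "other tech fees": 0.08,
}

fee_take = sum(fees.values())                  # 0.50 -> half the quoted CPM
working_media_cpm = quoted_cpm * (1 - fee_take)

print(f"Fees consume {fee_take:.0%} of the quoted CPM")
print(f"Working media: ${working_media_cpm:.2f} of ${quoted_cpm:.2f}")
```

No single line item looks alarming, which is exactly why the stack survives scrutiny: the question to ask is the all-in take rate, not any one fee.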

Most marketers know that programmatic buying has a fee problem. Fewer realize the scale of it in CTV specifically, where the infrastructure is newer and the fee structures are less scrutinized than in display or paid social.

 

Are you actually buying CTV, or something else?

When you contract for CTV, you expect your ads to run on television screens, inside long-form, professionally produced content, in the non-skippable ad breaks that look and feel like the TV advertising you have always known. That's not always the case.

According to eMarketer, 39% of advertisers express concern about the lack of transparency on where their CTV ads run. Part of the problem is that many platforms bundle CTV with OTT (over-the-top) video. This includes any streaming content delivered over the internet, including on phones, tablets, and desktops. Technically, a six-second pre-roll on mobile qualifies as OTT. So does a skippable video unit running inside a gaming app.

The experience gap between a 30-second, non-skippable spot inside a full-length TV show on a living room screen and a skippable pre-roll on mobile is enormous. The attention levels, context, and brand signaling all differ significantly. A viewer leaned back on a couch watching a drama on Peacock is not in the same mindset as someone half-watching a video on their phone while playing a game. But mixing CTV and OTT inventory is a convenient way to deflate CPM numbers while inflating reported reach.

The right question to ask any CTV vendor is what percentage of delivered impressions ran on a TV screen, inside long-form content, and in a non-skippable placement. If that number is not readily available, or if the answer conflates CTV and OTT delivery, you don’t have a clear picture of what you bought. The most defensible buys are inside Hollywood-style, long-form content: full-length episodes, movies, and professionally produced series.

This isn’t to say that OTT can’t be valuable. But any partner should be transparent about the inventory they’re buying with your dollars.

 

Most CTV campaigns concentrate spend instead of building reach.

Reach quality is another major variable that most CTV campaigns neglect. There are two ways it breaks down.

Concentration on a small number of publishers. The instinct to focus spend on major premium publishers is understandable. The challenge is that when you concentrate budget on two or three platforms, you hammer the same households repeatedly rather than building reach across the full streaming universe. Consumers do not choose their streaming service based on perceived platform prestige. They choose based on what they want to watch. Your audience is spread across dozens of apps, and concentrating spend on a few of them is not a quality decision. It’s a reach-limiting one. Marketing Architects’ response data shows performance is remarkably consistent across publishers. What drives results is reaching the right person in the right environment, not which logo is in the top corner of the screen.

No frequency control across channels. Fragmented reach and cross-provider planning remain a top barrier to CTV scale. If you run linear TV and Connected TV through separate vendors, there is no mechanism to cap how often a given household sees your ad across both. Overexposure means paying to reach the same person multiple times when that spend could be building reach with someone new. Unified frequency management requires buying multiple forms of TV through the same infrastructure.
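The cross-channel frequency problem described above reduces to a simple mechanism: one impression counter per household, shared by every channel. This is a minimal sketch (class name, cap value, and household IDs are invented for illustration), assuming households can be resolved to a common ID across linear and CTV:

```python
from collections import defaultdict

# Minimal sketch -- assumes linear and CTV exposures resolve to one household ID.
class UnifiedFrequencyCap:
    def __init__(self, cap_per_household):
        self.cap = cap_per_household
        self.counts = defaultdict(int)  # household -> impressions across ALL channels

    def should_serve(self, household_id):
        return self.counts[household_id] < self.cap

    def record(self, household_id):
        self.counts[household_id] += 1

cap = UnifiedFrequencyCap(cap_per_household=3)
for channel in ["linear", "ctv", "ctv", "linear"]:
    if cap.should_serve("hh_42"):
        cap.record("hh_42")

print(cap.counts["hh_42"])  # the fourth exposure is suppressed
```

Run through separate vendors, each channel would keep its own counter and the same household could absorb the full cap twice, which is why unified capping requires buying both forms of TV through the same infrastructure.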

 

How to pressure-test your CTV campaigns for provable impact.

These questions will tell you whether your campaigns are built to deliver defensible business results.

  1. Does your attribution have a control group? If the answer is no, your ROAS figure is not a CTV number. It is a marketing mix number with CTV included. Any meaningful incrementality claim requires a group of comparable people who did not see the ad.

  2. What percentage of your CPM is going to media versus fees? Ask your vendor to break down the all-in tech fee rate.

  3. What percentage of delivered impressions ran on a TV screen? If your vendor bundles CTV and OTT in reporting, ask for the split. If the answer is unavailable or inconsistent, you may be paying for an audience mix you did not intend to buy.

  4. Where is your identity data coming from, and how is it validated? IP matching can be deeply flawed, and your partners should have action plans in place to address those issues for clearer targeting and measurement.

  5. Are linear and CTV managed through the same infrastructure? If not, you have no visibility into cross-channel frequency or unduplicated reach. You could be spending budget reaching the same households on two channels simultaneously, with no way to identify it in reporting.

 

A different way to buy Connected TV.

The problems described in this guide are features of how most CTV buying infrastructure was built. Third-party DSPs were designed to clear inventory efficiently, not to hold out for the best possible placement for a specific brand's goals. Platform attribution was designed to demonstrate platform value, not to prove true incrementality.

Solving them requires a different approach. At Marketing Architects, we built Annika Streaming, our media-buying AI for Connected TV, from the ground up to address each of these points directly.

Annika® is not a white-labeled platform. Because we built our own DSP and have direct publisher relationships across the full streaming universe, the fee stack that consumes standard programmatic budgets simply does not exist.

The buying approach is different, too. Think of it like a batter waiting for the right pitch. Billions of bid opportunities come across the plate. A standard DSP swings at everything just to get on base. Annika lets the bad pitches go. She lets the mediocre ones go, too. She is evaluating every opportunity against two criteria simultaneously: efficient CPMs and media that meets quality standards. That means brand-safe environments, long-form TV-screen placements, and audiences with demonstrated intent to act. She only commits when the pitch is a home run across all dimensions. That selectivity is what makes efficient CPMs possible without sacrificing placement quality.

Marketing Architects’ CTV measurement strategy also layers incrementality testing alongside traditional IP-based tracking. We share methodology with clients and actively encourage third-party validation. We would rather have a smaller, defensible number than a large one that falls apart under scrutiny.

Brands that have shifted their CTV campaigns to Annika have seen performance improve by at least 20%, typically by eliminating unnecessary fees, adding cross-channel frequency control, and replacing inflated reporting with results you can act on.

 

The gap between your dashboard report and your CFO's spreadsheet has a dollar amount.

Most CTV advertisers come to us with the same problem: strong platform numbers that do not hold up when someone starts asking harder questions. A platform reports 4x ROAS. A lift test shows 1.2x. Finance sees break-even. The budget gets scrutinized. The channel takes the blame.

CTV, bought and measured correctly, outperforms Meta and Google on incremental ROAS. The problem is the infrastructure most brands use to buy and measure the channel. And it’s costing advertisers more than they realize.

The brands getting defensible results from CTV are the ones who can answer the questions in this guide with specific numbers, and who have a measurement approach that holds up outside the dashboard.

If you are ready to close the gap between what your dashboard shows and what your business gains, let's talk.

The Marketing Architects Team

Curated by our leaders, creatives, analysts, designers, media buyers and more at Marketing Architects.