Friday, June 20, 2025

[Netflix Tech Blog] Inside Netflix's Ads Event Engine: A Deep Dive (Part 2)

In Part 1, we explored how Netflix moved from a bloated token model to a more elegant, scalable architecture using a centralized Metadata Registry Service (MRS). That shift allowed Netflix to scale its ad event tracking system across vendors, while keeping client payloads small and observability high.

But Netflix wasn’t done yet.

Netflix built its own ad tech platform, the Ad Decision Server:

Netflix made a strategic bet to build an in-house advertising technology platform, including its own Ad Decision Server (ADS). This was about more than just cost or speed; it was about control, flexibility, and future-proofing. The effort began with a clear inventory of business and technical use cases:

  • In-house frequency capping
  • Pricing and billing for impression events
  • Robust campaign reporting and analytics
  • Scalable, multi-vendor tracking via event handler
But this Ad Event Processing Pipeline needed a significant overhaul to:
  • Match or exceed the capabilities of existing third-party solutions
  • Rapidly support new ad formats like Pause/Display ads
  • Enable advanced features like frequency capping and billing events
  • Consolidate and centralize telemetry pipelines across use cases


Road to a versatile and unified pipeline:

Netflix recognized that future ad types like Pause/Display ads would introduce their own telemetry variants. In other words, upstream telemetry pipelines might vary, but downstream use cases remain largely consistent.

So what is upstream and what is downstream in this Netflix event processing context:

Upstream (Producers of telemetry / Events): This refers to the systems and sources that generate or send ad-related telemetry data into the event processing pipeline.

Example of upstream in this context:

  • Devices (Phones, TVs, Browsers)
  • Different Ad Types (e.g., Video Ads, Display Ads, Pause Ads)
  • Ad Media Players
  • Ad Servers
These upstream systems send events like:
  • Ad Started
  • Ad Paused
  • Ad Clicked
  • Token for ad metadata
These systems may:
  • Use different formats
  • Have different logging behavior
  • Be customized per ad type or region

Downstream (Consumers of processed data): This refers to the systems that consume the processed ad events for various business use cases.

Example of downstream in this context:

  • Frequency capping service
  • Billing and pricing engine
  • Reporting dashboards for advertisers
  • Analytical systems (like Druid)
  • Ad session generators
  • External vendor tracking
These downstream consumers:
  • Expect standardized, enriched, reliable event data
  • Rely on consistent schema
  • Need latency and correctness guarantees

So "upstream telemetry pipelines might vary, but downstream use‑cases remain largely consistent" means:

  • Even if ad events come from diverse sources, with different formats and behaviors (upstream variation),
  • The core processing needs (like billing, metrics, sessionization, and reporting) remain the same (downstream consistency).
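
To make that concrete, here is a minimal Python sketch (every field name is invented for illustration; this is not Netflix's schema) of how two differently shaped upstream payloads, say one from a video ad player and one from a Pause ad surface, could be normalized into a single downstream-friendly record:

```python
from datetime import datetime, timezone

# Two hypothetical upstream payloads with different shapes; all field names
# here are invented for illustration, not Netflix's actual schema.
video_ad_event = {
    "evt": "ad_started",
    "adId": "ad789",
    "deviceType": "tv",
    "ts": "2025-06-20T12:15:30+00:00",
}

pause_ad_event = {
    "event_name": "pause_ad_rendered",
    "creative": {"id": "ad456"},
    "client": "web",
    "timestamp_ms": 1750421730000,
}

def normalize(raw: dict) -> dict:
    """Map either upstream shape onto one downstream-friendly record."""
    if "evt" in raw:  # video-player-style payload
        return {
            "event_type": raw["evt"],
            "ad_id": raw["adId"],
            "device": raw["deviceType"],
            "occurred_at": raw["ts"],
        }
    # pause/display-style payload
    return {
        "event_type": raw["event_name"],
        "ad_id": raw["creative"]["id"],
        "device": raw["client"],
        "occurred_at": datetime.fromtimestamp(
            raw["timestamp_ms"] / 1000, tz=timezone.utc
        ).isoformat(),
    }

print(normalize(video_ad_event))
print(normalize(pause_ad_event))
```

However varied the inputs, the downstream consumers only ever see the normalized shape.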

Based on that, they outlined a vision:

  • Centralized ad event collection service that:
    • Decrypts tokens
    • Enriches metadata (via the Registry)
    • Hashes identifiers
    • Publishes a unified data contract, agnostic of ad type or server
  • All consumers (for reporting, capping, billing) read from this central service, ensuring clean separation and consistency. A rough sketch of this idea follows below.
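
Here is a rough Python sketch of what that central collection step might look like. The contract fields, the decrypt_token stub, and the metadata_registry_lookup stub are all hypothetical; they only illustrate the decrypt, enrich, hash, and publish sequence described above:

```python
from dataclasses import dataclass, field
import hashlib

def decrypt_token(token: str) -> str:
    # Placeholder: a real service would decrypt and verify the opaque token.
    return token

def metadata_registry_lookup(ref: str) -> dict:
    # Placeholder for a call to the Metadata Registry Service.
    return {
        "ad_id": "ad789",
        "campaign_id": "cmp42",
        "viewer_id": "user123",
        "tracking_urls": ["https://vendor.example/pixel"],
    }

@dataclass
class UnifiedAdEvent:
    """Hypothetical unified contract; fields are illustrative only."""
    event_type: str            # e.g. "impression", "firstQuartile", "click"
    ad_id: str
    campaign_id: str
    hashed_viewer_id: str      # identifiers are hashed before publishing
    occurred_at: str
    tracking_urls: list[str] = field(default_factory=list)

def publish_unified_event(token: str, event_type: str, occurred_at: str) -> UnifiedAdEvent:
    metadata_ref = decrypt_token(token)                 # 1. decrypt the token
    metadata = metadata_registry_lookup(metadata_ref)   # 2. enrich via the registry
    hashed_id = hashlib.sha256(                         # 3. hash identifiers
        metadata["viewer_id"].encode()
    ).hexdigest()
    return UnifiedAdEvent(                              # 4. emit the unified contract
        event_type=event_type,
        ad_id=metadata["ad_id"],
        campaign_id=metadata["campaign_id"],
        hashed_viewer_id=hashed_id,
        occurred_at=occurred_at,
        tracking_urls=metadata.get("tracking_urls", []),
    )

print(publish_unified_event("opaque-token", "impression", "2025-06-20T12:15:30Z"))
```

The important property is that the output type is agnostic of ad type or server: consumers never need to know which upstream pipeline produced the event.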


Building Ad sessions & Downstream Pipelines:

A key architectural decision was "Sessionization happens downstream, not in the ingestion flow," but what does that mean? Let's break it down:

Ingestion flow (Upstream): This is the initial (real-time) path where raw ad telemetry events are ingested into the system, like:

  • Ad started playing
  • User clicked
  • 25% of video watched

These events are lightweight, arrive in real time, and are not yet grouped into a meaningful session. Think of them as raw, granular events flying into Kafka or a central service.

Sessionization (Downstream Processing): This is a batch or stream aggregation process where the system:

  • Looks at multiple ad events from the same ad playback session
  • Groups them together
  • Creates a meaningful structure called an "Ad Session"

An Ad session might include:

  • When did the ad start and end
  • What was the user's interaction (viewed / clicked / skipped)
  • Was it a complete or a partial view
  • What was pricing and vendor context
So "Sessionization happens downstream, not in the ingestion flow" means:
  • The ingestion system just collects and stores raw events quickly and reliably.
  • The heavier work of aggregating these into a complete session (sessionization) happens later, in the downstream pipeline where there is:
    • More time
    • Access to more context
    • Better processing capabilities
Why was it required?
  • Separation of concerns: Ingestion stays fast and simple; sessionization logic is centralized and easier to iterate on.
  • Extensibility: You can add new logic to sessionization without touching ingestion.
  • Scalability: Real time pipelines stay light; heavy lifting is done downstream.
  • Consistency: One place to define and compute what an "ad session" means.
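
As a toy illustration of the idea (not Netflix's actual sessionizer, and with invented fields), a downstream job could group raw events by a playback session key and derive session-level facts roughly like this. In production this would run as a stream or batch job on something like Flink or Spark:

```python
from collections import defaultdict

# Hypothetical raw events; the fields and the session key are illustrative only.
raw_events = [
    {"session_id": "s1", "ad_id": "ad789", "event_type": "ad_started",   "ts": "12:15:30"},
    {"session_id": "s1", "ad_id": "ad789", "event_type": "midpoint",     "ts": "12:15:45"},
    {"session_id": "s1", "ad_id": "ad789", "event_type": "ad_completed", "ts": "12:16:00"},
    {"session_id": "s2", "ad_id": "ad456", "event_type": "ad_started",   "ts": "12:20:00"},
]

def sessionize(events):
    """Group raw telemetry by playback session and derive session-level facts."""
    by_session = defaultdict(list)
    for e in events:
        by_session[e["session_id"]].append(e)

    sessions = []
    for session_id, evts in by_session.items():
        evts.sort(key=lambda e: e["ts"])
        sessions.append({
            "session_id": session_id,
            "ad_id": evts[0]["ad_id"],
            "started_at": evts[0]["ts"],
            "ended_at": evts[-1]["ts"],
            "completed": any(e["event_type"] == "ad_completed" for e in evts),
        })
    return sessions

for s in sessionize(raw_events):
    print(s)
```

Because this logic lives downstream, adding a new session-level fact (say, skip rate) never touches the ingestion path.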

They also built out a suite of downstream systems:
  • Frequency Capping
  • Ads Metrics 
  • Ads Sessionizer
  • Ads Event Handler 
  • Billing and revenue jobs
  • Reporting & Metrics APIs


Final Architecture of Ad Event Processing:


[Figure: Ad Event Processing Pipeline]


Here is the breakdown of the flow:

  1. Netflix Player -> Netflix Edge:
    • The client (TV, browser, mobile app) plays an ad and emits telemetry (e.g., impression, quartile events).
    • These events are forwarded to Netflix’s Edge infrastructure, ensuring low-latency ingestion globally.
  2. Telemetry Infrastructure:
    • Once events hit the edge, they are processed by a Telemetry Infrastructure service.
    • This layer is responsible for:
      • Validating events
      • Attaching metadata (e.g., device info, playback context)
      • Interfacing with the Metadata Registry Service (MRS) to enrich events with ad context based on token references.
  3. Kafka:
    • Events are published to a Kafka topic.
    • This decouples ingestion from downstream processing and allows multiple consumers to operate in parallel.
  4. Ads Event Publisher:
    • This is the central orchestration layer.
    • It performs:
      • Token decryption
      • Metadata enrichment (macros, tracking URLs)
      • Identifier hashing
      • Emits a unified and extensible data contract for downstream use
    • Also writes select events to the data warehouse to support Ads Finance and Billing workflows.
  5. Kafka -> Downstream Consumers:
    • Once enriched, the events are republished to Kafka for the following consumers:
      • Frequency Capping:
        • Records ad views in a Key-Value abstraction layer.
        • Helps enforce logic like “don’t show the same ad more than 3 times per day per user.”
      • Ads Metrics Processor:
        • Processes delivery metrics (e.g., impressions, completion rates).
        • Writes real-time data to Druid for interactive analytics.
      • Ads Sessionizer:
        • Groups related telemetry events into ad sessions (e.g., full playback window).
        • Feeds both data warehouse and offline vendor integrations.
      • Ads Event Handler:
        • Responsible for firing external tracking pixels and HTTP callbacks to third-party vendors (like Nielsen, DoubleVerify).
        • Ensures vendors get event pings per their integration needs.
  6. Analytics & Offline Exchange:
    • The sessionized and aggregated data from the warehouse is used for:
      • Offline measurement
      • Revenue analytics
      • Advertiser reporting
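
To illustrate the decoupling in steps 3 and 5, here is a minimal sketch using the open-source kafka-python client (not Netflix's internal tooling). The topic and group names are invented and it assumes a broker at localhost:9092; the key point is that each downstream system reads the same enriched topic through its own consumer group, so they all receive every event independently and in parallel:

```python
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "ads.enriched.events",                # hypothetical topic from the Ads Event Publisher
    group_id="frequency-capping",         # other consumers: "ads-metrics", "ads-sessionizer", ...
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for record in consumer:
    event = record.value
    # Each consumer group applies its own logic: capping, metrics, sessionization, etc.
    print(event["event_type"], event.get("ad_id"))
```

Running the same loop with a different group_id is all it takes to stand up a new downstream consumer without touching the publisher.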

Benefits of design:

  • Separation of concerns: Ingestion (Telemetry Infra) ≠ Enrichment (Publisher) ≠ Consumption (Consumers)
  • Scalability: Kafka enables multi-consumer, partitioned scaling
  • Extensibility: Central Ads Event Publisher emits a unified schema
  • Observability: Real-time metrics via Druid; sessionized logs via the warehouse
  • Partner-friendly: Vendor hooks via the Event Handler + offline exchange

By centralizing sessionization and metadata logic, Netflix ensures that:
  • Client-side logic remains lightweight and stable
  • Backend systems remain flexible to change and easy to audit

A few clarifications:

1. What is the Key-Value (KV) abstraction in Frequency Capping?

In Netflix’s Ad Event Processing Pipeline, frequency capping is the mechanism that ensures a user doesn’t see the same ad (or type of ad) more than a specified number of times, e.g., “Don’t show Ad X to user Y more than 3 times a day.”

To implement this efficiently, Netflix uses a Key-Value abstraction, which is essentially a highly scalable key-value store optimized for fast reads and writes.

How it works:
  • Key:  A unique identifier for the "audience + ad + context"
    • Example: user_id:ad_id:date
  • Value: A counter or timestamp that tracks views or interactions
    • Example: {count: 2, last_shown: 2025-06-20T12:15:30}
Let's understand it by example:

  • Key: user123:ad789:2025-06-20 → Value: { count: 2, last_shown: 12:15 }
  • Key: user123:ad456:2025-06-20 → Value: { count: 1, last_shown: 10:04 }

When a new ad event comes in:
  • Netflix looks up the key
  • Checks the count
  • If under the cap (e.g., 3), they allow it and increment the counter
  • If at or above cap, the ad is skipped or replaced
Why "Abstraction"?

Netflix may use multiple underlying KV stores (like Redis, Cassandra, or custom services), but abstracts them behind a unified interface so:
  • Ad engineers don’t worry about implementation
  • Multiple ad systems can share the same cap logic
  • It can scale across regions and millions of users
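
Here is a toy sketch of that idea, assuming a hypothetical KeyValueStore interface and an in-memory backing store (Netflix's real KV abstraction is far more sophisticated):

```python
from __future__ import annotations
from abc import ABC, abstractmethod
from datetime import date

class KeyValueStore(ABC):
    """Minimal interface that could be backed by Redis, Cassandra, or memory."""
    @abstractmethod
    def get(self, key: str) -> dict | None: ...

    @abstractmethod
    def put(self, key: str, value: dict) -> None: ...

class InMemoryStore(KeyValueStore):
    def __init__(self):
        self._data: dict[str, dict] = {}

    def get(self, key: str) -> dict | None:
        return self._data.get(key)

    def put(self, key: str, value: dict) -> None:
        self._data[key] = value

def should_show_ad(store: KeyValueStore, user_id: str, ad_id: str, cap: int = 3) -> bool:
    """Look up the per-day counter; allow and increment only if under the cap."""
    key = f"{user_id}:{ad_id}:{date.today().isoformat()}"
    record = store.get(key) or {"count": 0}
    if record["count"] >= cap:
        return False              # at or above the cap: skip or replace the ad
    record["count"] += 1
    store.put(key, record)
    return True

store = InMemoryStore()
print([should_show_ad(store, "user123", "ad789") for _ in range(4)])
# -> [True, True, True, False]
```

Swapping InMemoryStore for a Redis- or Cassandra-backed implementation would not change the capping logic, which is exactly the point of the abstraction.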
Benefits:

  • Speed: Real-time lookups for every ad play
  • Scalability: Handles millions of users per day
  • Flexibility: Supports different capping rules
  • Multi-tenancy: Can cap by user, device, campaign, etc.


2. What is Druid and why is it used here?

Apache Druid is a real-time analytics database designed for low-latency, high-concurrency, OLAP-style queries on large volumes of streaming or batch data.
It’s built for interactive dashboards, slice-and-dice analytics, and fast aggregations over time-series or event data.
Think of it as a high-performance hybrid of a data warehouse and a time-series database.

Netflix uses Druid in the Ads Metrics Processor and Analytics layer. Here’s what it powers:
  • Streaming Ad Delivery Metrics
    • As ads are played, Netflix captures events like:
      • Impressions
      • Quartile completions (25%, 50%, etc.)
      • Clicks, errors, skips
    • These events are streamed and aggregated in near-real-time via Flink or similar streaming processors
    • Druid ingests these metrics for interactive querying.
  • Advertiser Reporting and Internal Dashboards
    • Ad operations teams and partners want to know:
      • "How many impressions did Ad X get in Region Y yesterday?"
      • "What's the completion rate for Campaign Z?"
    • These queries are run against Druid and return in milliseconds, even over billions of records
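
For a sense of what such a query might look like, here is a sketch that posts a Druid SQL query to a broker's /druid/v2/sql endpoint. The datasource name, column names, and broker address are all hypothetical:

```python
import json
import urllib.request

# Hypothetical dashboard query: impressions per ad and region over the last day.
query = """
SELECT ad_id, region, COUNT(*) AS impressions
FROM ads_delivery_metrics
WHERE "__time" >= CURRENT_TIMESTAMP - INTERVAL '1' DAY
  AND event_type = 'impression'
GROUP BY ad_id, region
ORDER BY impressions DESC
LIMIT 20
"""

request = urllib.request.Request(
    "http://druid-broker:8082/druid/v2/sql",   # typical broker SQL endpoint
    data=json.dumps({"query": query}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    for row in json.loads(response.read()):
        print(row)
```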
Why does Netflix use Druid (instead of just a data warehouse like Redshift or BigQuery)?
 
  • Real-time ingestion: Metrics appear in dashboards within seconds of the event
  • Sub-second queries: Advertiser reports and internal dashboards need speed
  • High concurrency: Supports thousands of parallel queries from internal tools
  • Flexible aggregation: Group by ad_id, region, time, device_type, etc. instantly
  • Schema-on-read & roll-ups: Druid efficiently stores pre-aggregated data for common metrics


Conclusion:

As Netflix shifted toward building its own advertising technology platform, the Ads Event Processing Pipeline evolved into a thoughtfully architected, modular telemetry platform capable of handling diverse ad formats, scaling globally, and meeting the demands of both real-time analytics and long-term reporting.

This transformation was not just about infrastructure, but about platformization:
  • A centralized event publisher that abstracts complexity.
  • A unified schema that serves a wide range of downstream consumers.
  • A clean separation of concerns that allows rapid innovation in ad types without touching backend logic.
  • Observability and control, made possible through systems like Druid and the Metadata Registry Service.
With this evolution, Netflix ensured:
  • Scalability: Handling billions of ad events across regions and devices
  • Agility: Enabling quick integration of new ad formats like Pause/Display Ads
  • Partner alignment: Supporting vendor integrations with minimal friction
  • Business confidence: Powering frequency capping, billing, and reporting with precision
Here is the original Netflix blog
