Swift Throttler: Fast ML to Reduce Bid Waste in oRTB
In large-scale oRTB systems, bid waste—bids rejected for non-price reasons—is a hidden tax on both compute and opportunity. Every bid that is constructed, scored, and returned in a response, only to be rejected due to a publisher restriction or creative mismatch, consumes hardware and bandwidth without any chance of winning the auction.
Formally, we can define bid waste over a given time window T as:

bid waste(T) = (bids rejected for non-price reasons in T) / (total bids submitted in T)
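This window metric can be computed directly from feedback logs. A minimal sketch, where the field name `rejection_reason` and the rejection codes are illustrative assumptions rather than a specific log schema:

```python
# Sketch: computing bid waste over a window of feedback-log records.
# Field names and rejection codes are illustrative assumptions.
from typing import Iterable, Mapping

NON_PRICE_REJECTIONS = {"creative_disapproved", "size_mismatch", "policy_violation"}

def bid_waste(records: Iterable[Mapping]) -> float:
    """Fraction of bids in the window rejected for non-price reasons."""
    total = 0
    wasted = 0
    for rec in records:
        total += 1
        if rec.get("rejection_reason") in NON_PRICE_REJECTIONS:
            wasted += 1
    return wasted / total if total else 0.0
```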
While industry benchmarks typically fall in the 5–10% range, some DSPs see roughly 20% bid waste on certain SSPs, such as Google AdX. The prevailing mitigation strategy relied on rule-based filters owned by publishers and exchanges—coarse heuristics that operate outside the DSP’s control plane.
At the time, we argued for a different approach:
Given the volume and richness of rejection feedback data, we can train a compact model that predicts non-price rejections before bids are fully decorated, and filter those candidates out before ranking and response.
As a result, we designed and built Swift Throttler to do exactly this.
Swift Throttler is a lightweight prediction and filtering layer that sits upstream of the ranking stack. It filters out candidates that are highly likely to be rejected for non-price reasons—whether creative-related (disapproved, size mismatch) or publisher-preference-related—before they enter the auction. This early pruning:
- Reduces bid waste,
- Increases win rate among surviving bids, and
- Improves compute efficiency for both the DSP and the exchange.
Problem Definition: Non-Price Rejections as a Learning Signal
In typical RTB feedback logs, each bid response can be categorized as:
- Price-related outcomes
  - Win (clearing at or below the bid price)
  - Loss (outbid on price)
- Non-price rejections
  - Creative disapproved
  - Size or format mismatch
  - Policy violations
  - Other publisher-specific rejection rules
Historically, these non-price rejection codes were treated primarily as diagnostic information. For Swift Throttler, we reframed them as supervised labels:
- Positive class: bids that would be rejected for non-price reasons.
- Negative class: bids that are eligible from a non-price standpoint (win or lose on price is fine).
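The relabeling itself is a small deterministic mapping from feedback codes to classes. A sketch, assuming a hypothetical set of code values (real rejection taxonomies vary by exchange):

```python
# Sketch: deriving supervised labels from SSP feedback codes.
# The code strings below are hypothetical; real taxonomies vary by exchange.
NON_PRICE_CODES = {"CREATIVE_DISAPPROVED", "SIZE_MISMATCH", "POLICY_VIOLATION"}

def label(feedback_code: str) -> int:
    """1 = non-price rejection (positive class); 0 = price-eligible (win or price-loss)."""
    return 1 if feedback_code in NON_PRICE_CODES else 0
```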
The objective is to learn a function:

p = f(x) ∈ [0, 1],

where x encodes bid, creative, SSP, and context features. At serving time, if f(x) exceeds a threshold τ, we drop the bid before we invest further compute in ranking and before sending it to the exchange.
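The serving-time decision reduces to a single comparison. A minimal sketch, where `predict_proba` stands in for whichever compact model is deployed and `tau` for the configured threshold:

```python
# Sketch of the serving-time gate: drop a candidate when the predicted
# non-price rejection probability exceeds the threshold tau.
# `predict_proba` is a placeholder for the deployed model's scoring function.
from typing import Callable, Mapping

def should_drop(features: Mapping, predict_proba: Callable[[Mapping], float],
                tau: float = 0.9) -> bool:
    return predict_proba(features) > tau
```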
Design Goals
We designed Swift Throttler around four concrete goals:
- Latency-aware
  - Sub-10ms inference budget, end-to-end, in the critical path of a real-time bidding pipeline.
- Compute-efficient
  - Significant reduction in QPS, CPU/memory usage, and network traffic for downstream components.
- Highly explainable
  - Simple model architecture with interpretable features and clear attribution for filtering decisions.
- Low operational overhead
  - Easy to deploy, monitor, and retrain; no specialized hardware required.
These constraints effectively excluded heavyweight neural architectures and pushed us toward simple, robust models and careful feature engineering.
System Architecture: Where Swift Throttler Sits in the RTB Stack
At a high level, a typical oRTB request flows through the following pipeline:
1. Request ingestion and validation
2. Candidate matching
3. Bid formulation
4. Ranking
5. External auction
Swift Throttler is inserted between steps 2 and 3:
1. Request arrives and basic eligibility checks run.
2. A set of candidate line items / campaigns is assembled.
3. Swift Throttler evaluates each candidate:
   - Extracts a compact feature vector.
   - Runs it through the model to estimate non-price rejection probability.
   - Drops candidates whose predicted probability exceeds a configurable threshold.
4. Only surviving candidates proceed to ranking.
5. Bids are constructed and sent to the SSP.
6. Auction feedback (including rejection codes) is logged and later used to retrain and recalibrate Swift Throttler.
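The filtering step in this flow can be sketched as a single pruning pass over the candidate set. The helpers `extract_features` and `predict_proba` are hypothetical stand-ins for the real feature extractor and model:

```python
# Sketch: Swift Throttler's pruning pass, run after candidate matching and
# before ranking. `extract_features` and `predict_proba` are hypothetical
# placeholders for the real feature extractor and deployed model.
from typing import Callable, List

def throttle(candidates: List, predict_proba: Callable,
             extract_features: Callable, tau: float = 0.9) -> List:
    """Return only candidates unlikely to be rejected for non-price reasons."""
    survivors = []
    for cand in candidates:
        x = extract_features(cand)
        p = predict_proba(x)
        if p <= tau:  # keep; only high-risk candidates are dropped
            survivors.append(cand)
    return survivors
```

Everything downstream (ranking, bid construction, response) then operates on the smaller surviving set, which is where the compute savings come from.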
This placement is crucial:
- Early enough to avoid downstream compute on doomed candidates.
- Late enough that rich features (creative, targeting, SSP context) are available for a high-quality prediction.
Modeling Approach: Simple, Interpretable, and Fast
Given the latency and explainability constraints, we deliberately chose pragmatism over architectural complexity.
Features
We engineered features around dimensions known to influence non-price rejections:
- Creative attributes
- Inventory and SSP context
- Targeting and policy
- Historical feedback signals
These features are intentionally semantically aligned with the underlying mechanisms that lead to non-price rejections, which makes model outputs easier to reason about and debug.
Model Choice
Rather than introduce a deep model, we opted for a simple, highly explainable predictor (e.g., a regularized linear model or a shallow tree-based model, depending on deployment constraints). This class of models offers:
- Sub-10ms inference on commodity hardware.
- Low memory footprint.
- Straightforward techniques for:
  - Feature importance analysis,
  - Threshold tuning, and
  - Per-feature contribution inspection.
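For a linear model, per-feature contribution inspection is especially direct: each feature contributes w_i · x_i to the logit, so every filtering decision can be decomposed exactly. A sketch with hypothetical feature names, assuming a regularized logistic model:

```python
# Sketch: exact per-feature attribution for a logistic model.
# Each feature's contribution to the logit is w_i * x_i, so a filtering
# decision decomposes exactly. Feature names here are hypothetical.
import math

def contributions(weights: dict, features: dict, bias: float = 0.0):
    """Return (predicted probability, per-feature logit contributions)."""
    contrib = {k: weights.get(k, 0.0) * v for k, v in features.items()}
    logit = bias + sum(contrib.values())
    prob = 1.0 / (1.0 + math.exp(-logit))
    return prob, contrib
```

Sorting `contrib` by magnitude gives a direct answer to "why was this bid filtered?", which is what the explainability goal demands.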
This proved more effective and operationally sustainable than an over-engineered neural solution that would have added latency and infrastructure complexity for marginal predictive gains.
Training, Evaluation, and Threshold Selection
Offline Training
We trained Swift Throttler using historical bid logs, with labels derived from SSP feedback:
- Positive labels: bids with explicit non-price rejection codes.
- Negative labels: bids that were not rejected for non-price reasons (wins and price-losses).
We evaluated candidate models on:
- AUC / ROC curves for the non-price rejection classification task.
- Expected bid waste reduction under different thresholds τ, by simulating dropping predicted positives.
- False positive impact: the probability of dropping a bid that would have been eligible and potentially profitable.
The offline evaluation produced a Pareto frontier between:
- Bid waste reduction (good) and
- Potential opportunity loss (bad).
We then selected operating points that:
- Maximized bid waste reduction subject to strict constraints on delivery and performance impact, and
- Varied thresholds by SSP or traffic segment when needed.
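The threshold sweep behind this Pareto frontier can be simulated directly on held-out labeled examples. A sketch, where each example pairs a predicted probability with its true label (1 = non-price rejection):

```python
# Sketch: simulating the waste-reduction / opportunity-loss trade-off offline.
# Each example is (predicted_prob, label) with label 1 = non-price rejection.
# Dropping all bids with p > tau removes true waste (good) but also drops
# some eligible bids (opportunity loss, bad).
def sweep(examples, taus):
    total_waste = sum(1 for _, y in examples if y == 1)
    total_eligible = sum(1 for _, y in examples if y == 0)
    results = []
    for tau in taus:
        dropped_waste = sum(1 for p, y in examples if p > tau and y == 1)
        dropped_eligible = sum(1 for p, y in examples if p > tau and y == 0)
        results.append({
            "tau": tau,
            "waste_reduction": dropped_waste / total_waste if total_waste else 0.0,
            "opportunity_loss": dropped_eligible / total_eligible if total_eligible else 0.0,
        })
    return results
```

Plotting `waste_reduction` against `opportunity_loss` across the sweep traces the frontier; per-SSP operating points are just separate sweeps over segmented examples.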
Online Experiments
We rolled out Swift Throttler via controlled experiments (A/B tests):
- Treatment: Swift Throttler enabled; bids above threshold dropped pre-ranking.
- Control: Existing pipeline without Swift Throttler.
Key metrics we tracked:
- Bid waste rate (non-price rejections / total bids).
- Overall QPS and CPU utilization in downstream systems.
- Win rate and clearing price distributions on remaining bids.
- Campaign-level delivery, CPA, and ROAS.
The results validated the offline predictions: Swift Throttler significantly reduced bid waste while preserving or improving business performance.
Lessons Learned
A few key takeaways from the Swift Throttler project:
- Feedback is a feature, not just a log: SSP rejection codes, when properly structured, are a powerful supervised signal for learning better pre-auction filters.
- Simple models can be strategically optimal: Under tight latency and cost budgets, a small, well-engineered model with good features can outperform more complex architectures in real-world impact.
- Place models where they change economics: Inserting Swift Throttler before ranking and response is what unlocked both compute and opportunity gains.
- Make it explainable and tunable: Trust and adoption grew because stakeholders could understand why bids were being filtered and could tune thresholds per SSP or use case.
Swift Throttler turned bid waste from an accepted cost of doing business into a first-class optimization objective, using a technical yet pragmatic approach that balances modeling, systems engineering, and economic impact.