Google has spent the past year steadily increasing its emphasis on value-based bidding — and Max Conversion Value (MCV) has become one of the most heavily pushed recommendations across accounts. If you’re managing paid search, you’ve likely seen it everywhere: rep suggestions, UI prompts, “upgrade your bidding strategy” nudges, all telling you the same thing:
Move to value. Let the algorithm optimize toward what matters most.
It’s a compelling pitch.
But also an incomplete one.
I wondered whether MCV would deliver the promised performance improvements, or whether it was just another example of Google asking us to trust a system that wasn’t built with our clients’ realities in mind.
I took Google’s advice at face value and conducted a 30-day experiment with MCV across three very different accounts that, on paper, should have benefited from value-based optimization. These weren’t random tests. Each account represented a specific scenario where MCV should have a fair shot:
- Health services — complex funnel, weighted values, sparse high-value events
- High-consideration B2C — single conversion type, moderate volume, clear user intent
- Travel & Leisure — dynamic revenue (the ideal MCV scenario)
If MCV was going to deliver the promised improvements, these were the environments for it. What actually happened told a different story.
A Quick Refresher on Max Conversions vs. Max Conversion Value
Before diving into the tests, here’s a brief overview of the two bid strategies:
Max Conversions (MC)
- Optimizes for the highest number of conversions.
- Works best when all conversions are roughly equal in value, or when volume itself is the priority.
- More predictable — especially in lead-gen, where conversion quality varies but quantity is steady.
Max Conversion Value (MCV)
- Optimizes for higher-value conversions, not more conversions.
- Built originally for e-commerce, where order value is clean, consistent, and measurable.
- In lead gen, “value” typically has to be assigned.
- Success depends on signal quality, frequency, and consistency.
Why Google wants advertisers using MCV
Google’s automation roadmap leans toward machine learning models that perform best when fed detailed value gradients. The platform’s thesis is simple: “If you tell us what matters, we’ll find more of it.”
That premise is logical, but it assumes the advertiser can provide value signals that are frequent, accurate, and statistically meaningful. With this context, I wanted to see whether MCV could actually improve performance in practice, so I chose three accounts that could expose the strategy’s strengths and limitations. Together, they gave me a picture of how MCV behaves in environments outside the typical e-commerce accounts it was built for.
Results
Account #1 – Health Services
Can MCV prioritize the highest-value action in a multi-step healthcare funnel?
Three reasons made this account the ideal environment to test MCV:
- A clear hierarchy of conversion types
Patients don’t jump straight to booking appointments. They browse providers, compare services, call for information, and eventually convert. Each action reflects a different stage in the journey and carries a different level of business value.
- An established internal understanding of value
The account already operated with a strong sense of which actions correlated most with downstream appointments, making value assignment meaningful rather than speculative.
- A layered conversion path
This is exactly the type of environment where value-based bidding should shine — prioritizing higher-value indicators while still maintaining funnel volume.
To reflect actual business impact, I assigned values to each event:
- $500 → Appointment action
- $25 → Find-a-Doctor interaction
- $10 → Phone call
This created a clear hierarchy for Google to learn from: pursue more high-value appointment behavior and fewer low-value exploratory actions.
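To make the difference between the two objectives concrete, here’s a rough sketch (the assigned values are from the test setup above; the daily counts are hypothetical, not account data) of how each strategy would rank the same two outcomes:

```python
# Assigned values from the test setup above.
VALUES = {"appointment": 500, "find_a_doctor": 25, "phone_call": 10}

def total_conversions(counts):
    """Max Conversions' objective: the raw count of conversions."""
    return sum(counts.values())

def total_value(counts):
    """Max Conversion Value's objective: the value-weighted sum."""
    return sum(VALUES[event] * n for event, n in counts.items())

# Two hypothetical days (illustrative numbers only).
volume_day = {"phone_call": 12, "find_a_doctor": 3, "appointment": 0}
value_day = {"phone_call": 4, "find_a_doctor": 1, "appointment": 1}

print(total_conversions(volume_day), total_conversions(value_day))  # 15 6
print(total_value(volume_day), total_value(value_day))              # 195 565
```

Max Conversions prefers the volume-heavy day; MCV should prefer the value-heavy one. That is exactly the behavior the value hierarchy was meant to encourage.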
What I expected
I expected the model to wobble early — healthcare doesn’t produce high-value conversions quickly — but stabilize over time and begin weighing toward stronger intent actions. Even a modest lift in appointment actions would’ve suggested MCV understood the funnel hierarchy.
What actually happened
Instead, over 30 days, MCV reduced performance across almost every conversion type, including mid-funnel and high-value actions. The strategy didn’t lean into value at all — it constricted delivery and weakened the entire funnel.
KPI Summary
| Metric | Control (MC) | MCV | Δ Change | % Change |
| --- | --- | --- | --- | --- |
| Clicks | 665 | 534 | -131 | -20% |
| Impressions | 3,173 | 2,191 | -982 | -31% |
| Cost | $528 | $542 | +$14 | +3% |
| Conversions (Total) | 222 | 139 | -83 | -37% |
| Conversion Value | $6,724 | $4,852 | -$1,872 | -28% |
| Cost per Conversion | $2.38 | $3.90 | +$1.52 | +64% |
| Value per Cost | 12.7 | 9.0 | -3.8 | -30% |
| Conversion Rate | 33.4% | 26.0% | -7.4 pp | -22% |
Google marked most of these differences as statistically significant, indicating the drop wasn’t random.
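The derived rows in the table (cost per conversion, value per cost, conversion rate) follow directly from the raw figures; a quick recomputation shows the arithmetic:

```python
# Raw Control (MC) and MCV figures from the KPI table above.
mc = {"clicks": 665, "cost": 528.0, "conversions": 222, "value": 6724.0}
mcv = {"clicks": 534, "cost": 542.0, "conversions": 139, "value": 4852.0}

def derived(m):
    """Recompute the table's derived metrics from the raw figures."""
    return {
        "cost_per_conv": round(m["cost"] / m["conversions"], 2),
        "value_per_cost": round(m["value"] / m["cost"], 2),
        "conv_rate": round(m["conversions"] / m["clicks"], 2),
    }

print(derived(mc))   # {'cost_per_conv': 2.38, 'value_per_cost': 12.73, 'conv_rate': 0.33}
print(derived(mcv))  # {'cost_per_conv': 3.9, 'value_per_cost': 8.95, 'conv_rate': 0.26}
```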
Conversion Breakdown
The three primary actions that reflect the entire funnel:
| Conversion Event | Assigned Value | MC | MCV |
| --- | --- | --- | --- |
| Phone Calls | $10 | 193 | 130 |
| Find-a-Doctor | $25 | 22 | 2 |
| Appointments | $500 | 8 | 7 |
| TOTAL | — | 222 | 139 |
Why MCV struggled here
1. High-value events were too sparse for the model to learn from
Appointment actions simply didn’t happen often enough to train a value-based system. With too little data, MCV lost confidence — and when that happens, the model pulls back aggressively.
2. Mid-funnel intent collapsed
“Find-a-Doctor” events fell by 90%, a major issue, as these actions typically lead to appointment bookings. MCV didn’t just chase the wrong signals — it eliminated the signals that feed the funnel.
3. MCV shrank delivery instead of reallocating it
A value-based model should redistribute spending toward higher-value actions. Here, it simply reduced volume, impressions, and conversions across the board in an effort to refine targeting.
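The sparsity problem in point 1 is visible in the breakdown table itself: under the assigned values, the eight control-side appointments carry most of the modeled value despite being a tiny share of conversions. A quick check (note the recomputed total, $6,480, won’t exactly match the reported conversion value, but the concentration pattern is what matters):

```python
# (assigned value, control-side count) per event, from the breakdown table.
events = {"phone_call": (10, 193), "find_a_doctor": (25, 22), "appointment": (500, 8)}

total_count = sum(n for _, n in events.values())
total_value = sum(v * n for v, n in events.values())
appt_value, appt_count = events["appointment"]

value_share = appt_value * appt_count / total_value  # $4,000 of $6,480
count_share = appt_count / total_count

print(round(value_share, 2), round(count_share, 2))  # 0.62 0.04
```

Roughly 62% of the modeled value rides on about 4% of conversions, which is exactly the kind of sparse signal a value-based model struggles to learn from.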
Takeaway
When the highest-value action is infrequent, MCV struggles to learn, shrinks delivery, and ultimately underperforms.
Account #2 – High-Consideration B2C
Could MCV improve efficiency in a simple, high-intent funnel?
Three reasons made this a meaningful second test:
- A single, high-value conversion type
No micro-conversions. No ambiguous signals. Just one primary lead that carries substantial downstream worth.
- High-intent users
People searching for trade-in or resale services often know exactly what they need. This should make value-based learning easier.
- A simpler decision environment
With fewer signals for the model to interpret, MCV had a cleaner path to optimize — theoretically reducing noise and improving precision.
Since this test only measured one conversion event, each lead was assigned a $100 value, creating a straightforward target for MCV to chase.
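It’s worth noting what a flat $100 value means mathematically: the value objective becomes a scaled copy of the count objective, so any delivery plan that wins on conversions also wins on value. A minimal sketch with hypothetical plan counts:

```python
# With a flat $100 per lead, "value" is just conversions times a constant.
LEAD_VALUE = 100.0

def conversion_value(n_leads):
    return LEAD_VALUE * n_leads

# Hypothetical lead counts for two competing delivery plans.
plans = {"plan_a": 21, "plan_b": 18}
best_by_count = max(plans, key=plans.get)
best_by_value = max(plans, key=lambda p: conversion_value(plans[p]))

print(best_by_count, best_by_value)  # plan_a plan_a
```

With no value variation, MCV has nothing extra to optimize toward; at best it can tie Max Conversions, and in this test it didn’t even do that.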
What I expected
With a cleaner funnel, I expected MCV to either match Max Conversions or slightly outperform it by finding pockets of higher-intent users within a niche audience. Even modest improvements in CPL or efficiency would have signaled value.
What actually happened
MCV again returned fewer conversions, lower value, and higher cost per lead — despite having only one conversion type to optimize toward, and with ample conversion volume.
KPI Summary
| Metric | Control (MC) | MCV | Δ Change | % Change |
| --- | --- | --- | --- | --- |
| Clicks | 314 | 272 | -42 | -13% |
| Impressions | 2,433 | 2,394 | -39 | -2% |
| Cost | $300 | $299 | -$0.6 | -0.2% |
| Conversions (Total) | 21 | 18 | -3 | -14% |
| Conversion Value | $2,100 | $1,800 | -$300 | -14% |
| Cost per Conversion | $14.29 | $16.61 | +$2.33 | +16% |
| CPC | $0.96 | $1.10 | +$0.14 | +15% |
| CTR | 12.9% | 11.4% | -1.5 pp | -12% |
| Conversion Rate | 6.69% | 6.62% | -0.07 pp | -1.1% |
Google marked this experiment as inconclusive, but the directional trend consistently favored Max Conversions.
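The “inconclusive” call is consistent with a back-of-the-envelope significance check on the conversion rates above. This is my own two-proportion z-test on the table’s click and conversion counts, not Google’s internal experiment methodology:

```python
import math

def two_proportion_p(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a pooled two-proportion z-test."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_a / n_a - conv_b / n_b) / se
    return math.erfc(abs(z) / math.sqrt(2))

# Control (MC): 21 conversions / 314 clicks; MCV: 18 / 272.
p = two_proportion_p(21, 314, 18, 272)
print(round(p, 2))  # far above 0.05: the CVR gap alone is noise
```

The conversion-rate gap is nowhere near significant at this volume, which is why the directional trend, rather than any single metric, is the more honest read here.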
Why MCV struggled here
1. No value variation for the model to learn from
With every lead assigned the same $100 value, MCV had no way to distinguish “better” users from average ones. Without value gradients, the model couldn’t prioritize higher-quality traffic — it simply saw every conversion as equal.
2. The algorithm became overly restrictive
When MCV doesn’t detect enough value-based differences, it tightens delivery instead of exploring. That’s exactly what happened here: impression volume dipped, reach narrowed, and conversion opportunities dropped.
3. MCV favored stability over exploration
Rather than testing into broader auction opportunities, the model stuck to a smaller, “safer” query set. But those weren’t necessarily the queries that generated leads.
Takeaway
Value-based bidding underperformed not because of complexity or lack of data, but because the model had no value variation to learn from. With only one conversion type, MCV became overly restrictive and reduced reach — while Max Conversions maintained steady delivery and captured more total leads.
This test showed that even in simple, high-intent funnels, value-based bidding only works if the conversion data contains real value differentiation. Without that variation, the strategy defaults to caution — and volume suffers.
Account #3 – Travel & Leisure
Could MCV succeed in a real revenue environment built for value-based bidding?
Three reasons made this the strongest test case for value-based bidding:
- A true revenue-generating conversion event
The account didn’t rely on assigned values. Instead, it fed real booking revenue directly to Google, giving MCV precise, bottom-line feedback.
- High-intent branded traffic
Brand searches convert reliably. With strong historical performance and clear signals, this environment should give MCV everything it needs to optimize.
- Consistent conversion volume
Unlike healthcare or high-consideration B2C, this vertical produces frequent, stable bookings — theoretically giving the model enough data to learn faster and more accurately.
This setup created an almost ideal scenario for MCV: real value inputs, strong intent, and steady traffic.
What I expected
In this environment, I expected MCV to either outperform MC or at least match it — potentially finding higher-value booking patterns, driving a higher CVR, or improving ROAS by prioritizing more profitable users.
What actually happened
Despite having everything in its favor — clean revenue data, strong volume, and high intent — MCV still returned fewer bookings, lower revenue, and higher cost per booking than Max Conversions.
The bid strategy broadened delivery and lost efficiency across the full funnel.
KPI Summary
| Metric | Control (MC) | MCV | Δ Change | % Change |
| --- | --- | --- | --- | --- |
| Clicks | 1,001 | 1,064 | +63 | +6% |
| Impressions | 3,867 | 4,488 | +621 | +16% |
| Cost | $1,117 | $1,130 | +$13 | +1% |
| Bookings (Conversions) | 99 | 86 | -13 | -13% |
| Revenue | $25,750 | $22,563 | -$3,187 | -12% |
| Cost per Booking | $11.28 | $13.14 | +$1.85 | +16% |
| CPC | $1.12 | $1.06 | -$0.06 | -5% |
| CTR | 25.9% | 23.7% | -2.2 pp | -8% |
| Conversion Rate | 9.9% | 8.1% | -1.8 pp | -18% |
Google marked the test as inconclusive, but the direction was consistent and clear: Max Conversions delivered better performance across almost every meaningful metric.
Why MCV struggled here
1. The model broadened traffic instead of refining it
MCV pulled in more impressions but at lower quality, stepping outside the highest-intent branded queries. CTR and conversion rate both dropped, signaling a move away from the audience most likely to book.
2. It overprioritized exploratory reach
MCV often tries to identify potential high-value users by widening the net. But in branded search, broader isn’t better—it dilutes the pool. MC stayed tightly focused on proven demand, while MCV drifted away from it.
3. Even real revenue data didn’t provide enough signal differentiation
MCV still couldn’t identify clear patterns in “higher-value” booking behavior. Without strong, consistent variation in booking values, the algorithm didn’t develop enough confidence to optimize aggressively.
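One simple way to see point 3 in the numbers is to divide revenue by bookings from the KPI table. If MCV were genuinely finding higher-value bookings, its average booking value should be meaningfully higher than the control’s:

```python
# Average revenue per booking, from the KPI table above.
mc_avg = 25750 / 99    # Control (MC)
mcv_avg = 22563 / 86   # MCV

lift = mcv_avg / mc_avg - 1
print(round(mc_avg, 2), round(mcv_avg, 2), f"{lift:.1%}")  # 260.1 262.36 0.9%
```

A sub-1% lift in average booking value, set against a 13% drop in bookings, isn’t value optimization; it’s just less volume.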
Conclusions
Running MCV across three very different lead-gen accounts taught me something that aligns closely with what many industry experts already suspect:
Value-based bidding can work, but only when the underlying data environment is truly ready for it.
In all three tests, the outcome was similar. MCV didn’t fail because the concept is flawed, but because lead generation doesn’t always produce the signal quality or differentiation required for value-based automation to make consistently smarter decisions than Max Conversions.
Where e-commerce has abundant revenue signals, wide value ranges, and dense conversion volume, lead gen accounts face:
- Sparse high-value events
- Uniform assigned values
- Multi-step funnels where early- and mid-stage actions matter just as much as the final ones
These conditions make it difficult for MCV to learn, and when Google’s models can’t learn, they either widen too broadly or restrict too aggressively. Both patterns appeared in these tests.
Meanwhile, Max Conversions delivered what it’s historically known for:
Steady delivery, stable reach, and predictable cost efficiency — even in complex or uneven funnel environments.
What We Recommend
1. Only use MCV when you have real value variation.
If every lead shares the same assigned value—or if your values aren’t tied to actual business impact—MCV likely won’t learn effectively, in which case Max Conversions is the safer choice.
2. Make sure your conversion tracking is rock solid.
Value-based bidding magnifies tracking gaps. Clean, consistent, deduped events are a must before testing MCV.
3. Give MCV longer test windows than standard experiments.
Unlike MC or tCPA tests, MCV needs more time and data to stabilize. Plan for 6–8 weeks minimum.
4. Protect mid-funnel volume.
If MCV starts collapsing exploratory traffic, pull back. Mid-funnel signals matter more in lead gen than most advertisers realize.
5. Default to Max Conversions unless your data model is mature.
Industry-wide best practice: MC remains the more reliable choice in most lead-gen accounts until richer value signals become available.
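To make recommendation #1 concrete, here’s one rough readiness heuristic (my own rule of thumb, not an official Google threshold): before handing values to MCV, check whether your recorded conversion values actually vary.

```python
from statistics import mean, stdev

def value_variation(values):
    """Coefficient of variation (stdev / mean) of recorded conversion values."""
    return stdev(values) / mean(values) if len(values) > 1 else 0.0

flat_leads = [100.0] * 20                              # uniform assigned value
tiered_leads = [10.0] * 12 + [25.0] * 5 + [500.0] * 3  # tiered, like Account #1

print(round(value_variation(flat_leads), 2))    # 0.0 -> nothing for MCV to learn
print(round(value_variation(tiered_leads), 2))  # ~2.0 -> real variation exists
```

A coefficient of variation near zero means the “value” signal is just a scaled conversion count, and Max Conversions will do the same job more predictably.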
Next Steps (And Where MCV Might Finally Shine)
Looking at the patterns across these tests, the next place I might look to evaluate MCV is a vertical like home services, where:
- Lead types are clearer
- Volume is higher
- Intent is consistent
- Downstream values are easier to tier
With a 60–90 day test window, more value variation, and bigger sample sizes, MCV may have enough signal strength to operate the way it was intended, so I plan to run that next.
Final Thoughts
The promise of value-based bidding is compelling — and in the right environment, it really can work.
But lead gen isn’t ecommerce, and most lead-gen accounts still lack the depth and consistency of value signals that MCV needs to outperform simpler strategies.
Until those foundations improve:
MCV is best treated as an advanced option — not an automatic upgrade.
Max Conversions remains the most reliable bidding strategy for the majority of lead-gen advertisers today.
If and when the data becomes richer, MCV may become the future. But based on these tests, that future isn’t quite here yet.
If you want to see how we can dial in your bidding strategy, or if you want to learn more about MCV, contact us today or shoot us a message on LinkedIn.
