Question: NavBoost uses click data weighted by query frequency and user satisfaction signals, creating theoretical velocity advantages for mid-position high-volume rankings. Simultaneously, AI Overviews and SERP features have fragmented CTR curves, making position #3 below an AI Overview fundamentally different from position #3 on a clean SERP. How would you build a unified model that accounts for both click signal accumulation mechanics and query-specific CTR fragmentation, and how would this model change content deployment prioritization across your keyword portfolio?
Two Systems Interacting
NavBoost rewards pages that accumulate positive click signals: clicks, dwell time, low pogo-sticking. The signal compounds over time. A page ranking #4 for a query with 100,000 monthly searches accumulates more absolute click data than a page ranking #1 for a query with 1,000.
Simultaneously, SERP features fragment click distribution. AI Overviews, featured snippets, knowledge panels, and shopping carousels intercept clicks before organic results. Position #3 on a clean SERP might get 8% CTR. Position #3 below an AI Overview might get 2%.
These systems interact. A page in a fragmented SERP accumulates fewer clicks despite high query volume. A page on a clean SERP accumulates more clicks despite lower query volume. Optimizing for one system without modeling the other produces suboptimal deployment decisions.
The Accumulation Mechanic
NavBoost’s training data includes:
- Click-through from SERP to page
- Time on page before return to SERP
- Whether user clicked another result after returning (pogo-sticking)
- Whether user reformulated query (dissatisfaction signal)
Positive signals: long dwell, no pogo-stick, no reformulation.
Negative signals: quick return, clicking competitors, query refinement.
The accumulation hypothesis: these signals compound. A page accumulating positive clicks over 6 months builds a “trust reservoir” that’s hard for new competitors to displace. This explains why established pages resist displacement even when new content is objectively better.
If true, there’s a velocity advantage to ranking mid-position for high-volume queries versus top-position for low-volume queries. You accumulate more absolute positive signals, building a larger trust reservoir faster.
The counter-argument: NavBoost might weight by rate, not absolute volume. Click satisfaction rate matters, not total clicks. A page with a 95% satisfaction rate on 1,000 clicks might score higher than one with an 85% rate on 10,000 clicks.
Observable test: do pages with longer ranking history in high-volume queries show more ranking stability than pages with equivalent history in low-volume queries? If absolute accumulation matters, high-volume history should produce more stability.
CTR Fragmentation Reality
Average CTR by position (traditional model):
- Position 1: ~28%
- Position 2: ~15%
- Position 3: ~11%
- Position 4: ~8%
These averages are increasingly fictional. Actual CTR depends on SERP composition:
Clean SERP (rare): Traditional curve roughly applies.
Featured snippet present: Position 0 takes ~8-12%. Position 1 drops to ~20%. Total organic CTR compressed.
AI Overview present: Overview satisfies ~40-60% of informational queries without click. Remaining CTR distributed among fewer seekers. Position 1 might drop to ~15%.
Shopping carousel present: Transactional clicks go to carousel. Organic positions below carousel see dramatic CTR reduction for purchase-intent queries.
Knowledge panel present: Entity queries satisfied in-SERP. Organic results see minimal CTR regardless of position.
A unified model needs query-level CTR curves, not position-level averages.
Building Query-Level CTR Models
Step 1: SERP feature classification
For each target keyword, document which SERP features appear:
- AI Overview (yes/no, length)
- Featured snippet (yes/no, format)
- Knowledge panel (yes/no, completeness)
- Shopping results (yes/no, count)
- Local pack (yes/no)
- Video carousel (yes/no)
- Image pack (yes/no)
Step 2: CTR curve estimation by feature combination
Group keywords by SERP feature profile. Estimate CTR curves for each group using:
- Google Search Console data for keywords you rank for
- Industry CTR studies segmented by SERP type
- Click-through experiments via paid search (bid on organic keywords, measure CTR by position)
Example curves:
Clean SERP: P1=28%, P2=15%, P3=11%, P4=8%
AI Overview + Featured Snippet: P1=12%, P2=8%, P3=5%, P4=3%
Shopping Carousel (transactional): P1=10%, P2=6%, P3=4%, P4=2%
These are estimates. Build from your own data where possible.
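The example curves above can be kept as a simple lookup table. A minimal sketch; the profile names are hypothetical labels and the CTR values are the illustrative estimates from this section, not measured constants:

```python
# Feature-adjusted CTR curves keyed by SERP feature profile.
# Values are the illustrative estimates above; replace with your own GSC data.
CTR_CURVES = {
    "clean": {1: 0.28, 2: 0.15, 3: 0.11, 4: 0.08},
    "ai_overview_snippet": {1: 0.12, 2: 0.08, 3: 0.05, 4: 0.03},
    "shopping_carousel": {1: 0.10, 2: 0.06, 3: 0.04, 4: 0.02},
}

def position_ctr(profile: str, position: int) -> float:
    """Look up estimated CTR for a SERP feature profile and organic position."""
    curve = CTR_CURVES.get(profile)
    if curve is None:
        raise KeyError(f"No CTR curve for profile: {profile}")
    # Positions beyond the tracked range fall back to the lowest known CTR
    # as a rough floor (an assumption, not part of the framework above).
    return curve.get(position, min(curve.values()))
```

Grouping keywords under a small set of named profiles keeps the model auditable: when a curve looks wrong, you re-estimate one group rather than the whole portfolio.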
Step 3: Click potential calculation
For each keyword:
Click Potential = Monthly Search Volume × Position CTR (from feature-adjusted curve)
Keyword A: 50,000 volume, AI Overview present, targeting P2
Click Potential = 50,000 × 0.08 = 4,000 monthly clicks
Keyword B: 10,000 volume, clean SERP, targeting P1
Click Potential = 10,000 × 0.28 = 2,800 monthly clicks
Keyword A has 5x the volume of Keyword B yet delivers only ~1.4x the clicks: SERP fragmentation erodes most of its volume advantage.
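The Step 3 arithmetic for the two example keywords can be sketched directly (volumes and CTRs are the illustrative figures above):

```python
def click_potential(monthly_volume: int, position_ctr: float) -> float:
    """Click Potential = Monthly Search Volume x feature-adjusted position CTR."""
    return monthly_volume * position_ctr

# Keyword A: 50,000 volume, AI Overview present, targeting P2 (CTR ~8%)
a = click_potential(50_000, 0.08)   # ~4,000 monthly clicks
# Keyword B: 10,000 volume, clean SERP, targeting P1 (CTR ~28%)
b = click_potential(10_000, 0.28)   # ~2,800 monthly clicks
```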
Step 4: Accumulation weighting
If accumulation velocity matters, weight click potential by ranking timeline:
Keywords where you can rank within 3 months → full weight
Keywords requiring 6-12 months → 50% weight (delayed accumulation)
Keywords requiring 12+ months → 25% weight
This discounts keywords where you’ll accumulate signals slowly.
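The Step 4 tiers translate into a small discount function. Note the text leaves the 3-6 month range unspecified; this sketch folds it into the 50% tier, which is an assumption:

```python
def accumulation_weight(months_to_rank: float) -> float:
    """Discount factor for delayed click-signal accumulation (Step 4 tiers)."""
    if months_to_rank <= 3:
        return 1.0    # rankable within 3 months: full weight
    if months_to_rank <= 12:
        return 0.5    # up to 12 months: delayed accumulation (3-6 month
                      # range is unspecified in the framework; folded in here)
    return 0.25       # 12+ months: heavily discounted
```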
Deployment Prioritization Framework
Rank keywords by: Click Potential × Accumulation Weight × Conversion Value
High priority: High click potential, fast ranking timeline, high conversion value. Deploy resources immediately.
Medium priority: High click potential but slow timeline, or fast timeline but lower click potential. Secondary resource allocation.
Low priority: Low click potential regardless of other factors. These include:
- High-volume keywords with severe SERP fragmentation (AI Overview + featured snippet + PAA = minimal organic CTR)
- Low-volume keywords even on clean SERPs
- Keywords where you can’t rank within 12 months
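Putting the ranking formula together, a scoring pass over a portfolio might look like the following sketch. The field names and example records are hypothetical; conversion value here is an arbitrary per-click figure:

```python
def priority_score(clicks: float, weight: float, value: float) -> float:
    """Click Potential x Accumulation Weight x Conversion Value."""
    return clicks * weight * value

# Hypothetical portfolio records: click potential, accumulation weight,
# and per-click conversion value for each keyword.
portfolio = [
    {"kw": "keyword A", "clicks": 4000, "weight": 1.0, "value": 2.0},
    {"kw": "keyword B", "clicks": 2800, "weight": 0.5, "value": 5.0},
]
ranked = sorted(
    portfolio,
    key=lambda k: priority_score(k["clicks"], k["weight"], k["value"]),
    reverse=True,
)
```

Here keyword A scores 8,000 against keyword B's 7,000: a slow ranking timeline halves B's otherwise higher per-click value.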
Strategic implication: Volume-based keyword prioritization is obsolete. A 100,000 volume keyword with AI Overview might deliver fewer clicks than a 15,000 volume keyword on a clean SERP. Model SERP-adjusted click potential, not raw volume.
The Fragmentation Trend Problem
SERP fragmentation is increasing. AI Overviews expanding. Featured snippets proliferating. Zero-click searches growing.
A keyword with clean SERP today might have AI Overview in 6 months. Your CTR model becomes stale.
Monitoring approach:
- Re-audit SERP features quarterly for priority keywords
- Track Google’s feature rollout announcements
- Monitor your GSC CTR trends; declining CTR at stable position indicates feature intrusion
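The "declining CTR at stable position" signal can be flagged mechanically. A sketch assuming you export chronological (average position, CTR) pairs per keyword from GSC; the drift and drop thresholds are arbitrary starting points, not calibrated values:

```python
def feature_intrusion_suspected(history, max_pos_drift=0.5, ctr_drop=0.25):
    """Flag keywords whose position is stable but whose CTR has fallen,
    suggesting a SERP feature is intercepting clicks.

    history: chronological list of (avg_position, ctr) tuples.
    """
    if len(history) < 2:
        return False
    (first_pos, first_ctr), (last_pos, last_ctr) = history[0], history[-1]
    position_stable = abs(last_pos - first_pos) <= max_pos_drift
    ctr_declined = first_ctr > 0 and (first_ctr - last_ctr) / first_ctr >= ctr_drop
    return position_stable and ctr_declined
```

Flagged keywords are candidates for the quarterly SERP feature re-audit: the CTR drop tells you something changed, the re-audit tells you what.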
Hedging strategy:
- Diversify across SERP feature profiles
- Don’t over-concentrate on currently-clean SERPs that may fragment
- Build branded query volume (less susceptible to feature fragmentation)
Second-Order Effects
The featured snippet trap: Winning featured snippets might reduce your click accumulation. You answer the query in-SERP, user doesn’t click, you accumulate no positive signal. Position 0 could be worse for NavBoost than position 1.
Test: compare click-through and NavBoost signal accumulation for pages with featured snippets versus pages ranking #1 without snippets. If snippet ownership correlates with lower dwell time accumulation, the feature has hidden costs.
AI Overview citation paradox: Being cited in AI Overview might increase brand awareness without click accumulation. Users see your brand, don’t click, don’t accumulate signals. Brand awareness rises, NavBoost score doesn’t.
This creates divergent strategies: brand building (optimize for AI Overview citation) versus ranking building (optimize for click-generating positions). These may conflict.
Competitive signal dynamics: Your click accumulation is relative. If a competitor accumulates faster, your relative position weakens. A keyword with a stable position but rising competitor click velocity requires attention even without a position drop.
Monitor: competitor ranking stability on your target keywords. Increasing stability suggests their click accumulation accelerates. Decreasing stability (more SERP volatility) suggests opportunity.
Model Limitations
This model assumes:
- NavBoost weights absolute click volume (unconfirmed)
- CTR curves are stable within SERP feature categories (they vary by query type)
- Historical data predicts future SERP composition (Google changes features)
The model provides better prioritization than volume-only or position-only approaches. It doesn’t provide precision. Use it for directional decisions, not exact forecasting.
Falsification Criteria
Model fails if:
- Pages with high click accumulation show no ranking stability advantage
- SERP feature presence doesn’t correlate with CTR variation in your GSC data
- Prioritizing by click potential produces worse outcomes than prioritizing by volume
Test against your own data. If the model’s predictions don’t match your ranking outcomes, adjust weights or abandon the framework.