The 7 App Store Connect metrics worth watching after Apple's March 2026 update are: conversion rate (peer benchmarked), Day 35 download-to-paid conversion, Day 35 proceeds per download, Day 1 retention, Day 7 retention, crash rate, and cohort breakdown by download source. Apple shipped over 100 new metrics on March 25, 2026, but a solo indie dev only needs these seven to know whether the listing converts, whether the install sticks, and whether paid traffic pays back [1].
TL;DR:
- Apple's March 25, 2026 release added 100+ metrics, peer group benchmarks at the 25th, 50th, and 75th percentiles, and cohort analysis by download date and source [1] [2].
- For indie devs, conversion rate vs peer p50 is the single most actionable number. It tells you whether your screenshots are pulling weight.
- Day 35 download-to-paid conversion and proceeds per download are the two new monetization benchmarks. Use them only if your app is paid or subscription-based.
- Cohort by download source separates organic, Search Ads, and Custom Product Page traffic. Don't average them.
- The dashboard still can't show creative-level revenue, paywall drop-off, or web-to-store attribution [4]. Know the limits before chasing the data.
This guide is for the indie dev who opened the new dashboard, saw 100+ metrics, and closed it. We narrow the surface to seven numbers, set rules for what each one means, and tell you what action to take when each crosses the line.
Table of Contents
- What did Apple change in App Store Connect Analytics on March 25, 2026?
- Which 7 metrics should indie devs actually watch?
- How do peer group benchmarks compare your app to competitors?
- What can App Store Connect Analytics still not measure?
- How should indie devs use the new metrics each week?
- Make analytics a 10-minute weekly check, not a project
What did Apple change in App Store Connect Analytics on March 25, 2026?
Apple shipped what it called the biggest update to App Store Connect Analytics since the platform launched [3]. The release included over 100 new metrics, two new monetization benchmarks (Day 35 download-to-paid conversion and Day 35 proceeds per download), peer group comparison at the 25th, 50th, and 75th percentile, cohort analysis by attributes like download date and download source, up to seven filters per metric widget, and two new subscription reports exportable through the Analytics Reports API [1].
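If you'd rather pull those reports programmatically than read them in the dashboard, they travel through the Analytics Reports API's request-then-download cycle. Below is a minimal Swift sketch of the listing step, assuming a pre-signed App Store Connect API token and a placeholder app ID (the ES256 JWT signing and JSON decoding are left out):

```swift
import Foundation

/// Minimal sketch: list the analytics report requests for an app via
/// the App Store Connect API. `token` is a pre-signed ES256 JWT
/// (signing omitted here) and `appID` is your app's numeric Apple ID;
/// both are placeholders, not values from this article.
func listAnalyticsReportRequests(appID: String, token: String) async throws -> Data {
    let url = URL(string: "https://api.appstoreconnect.apple.com/v1/apps/\(appID)/analyticsReportRequests")!
    var request = URLRequest(url: url)
    request.setValue("Bearer \(token)", forHTTPHeaderField: "Authorization")
    let (data, response) = try await URLSession.shared.data(for: request)
    guard (response as? HTTPURLResponse)?.statusCode == 200 else {
        throw URLError(.badServerResponse)
    }
    // Each report request links onward to reports, report instances,
    // and the downloadable segments that hold the actual rows.
    return data
}
```

From that response you follow the links to report instances and download the segment files; per [1], the two new subscription reports ride the same pipeline.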
The framing matters. Frederik Riedel of OneSec told 9to5Mac that the update is "incredibly empowering, especially for indie developers" because solo creators no longer need a data team or an MBA to act on App Store data [3]. That's the opportunity. The risk is the opposite: 100 metrics is a paralysis surface, not a clarity surface. The dashboard rewards selectivity.
Three structural choices Apple made are worth understanding before you open the dashboard:
- Peer benchmarks use differential privacy. Apple adds calibrated noise to each benchmark before publishing it, so you cannot reverse-engineer a competitor's exact number. The percentile ranges are directional, not surgical [2].
- Cohort data is aggregated. You can see how a download-date cohort retains, but not what an individual user did. That's a privacy guarantee, not a tooling gap [1].
- The benchmark window is Day 35. Apple settled on 35 days as the post-install window for the new monetization benchmarks. That's roughly the iOS keyword stabilization window plus a buffer, which is intentional. Don't read benchmark data sooner than that.
Which 7 metrics should indie devs actually watch?
The dashboard surfaces 100+ metrics. You don't need them. The seven below cover the indie dev use cases (does the listing convert, does the install stick, does paid traffic pay back) and map directly to the new peer group benchmark widgets [2].
| # | Metric | What it tells you | Healthy range vs peer p50 | Trigger to act |
|---|---|---|---|---|
| 1 | Conversion Rate | Listing → install efficiency. The clearest screenshot signal | At or above p50 | Below p25 for 2+ weeks: refresh frame 1 and 2 |
| 2 | Day 35 Download to Paid Conversion | Free users who become paying users in 35 days | At or above p50 | Below p25: paywall or pre-paywall framing is off |
| 3 | Day 35 Proceeds per Download | Average revenue per install at Day 35 | Apply only if the model is paid or subscription | Drop more than 15% month over month: pricing or trial offer changed unintentionally |
| 4 | Day 1 Retention | Users who return on day 2 | At or above p50 | Below p25: onboarding broken, or screenshots oversold the app |
| 5 | Day 7 Retention | Users who return in week 2 | At or above p50 | Below p25: not a screenshot fix; product habit is the problem |
| 6 | Crash Rate | Quality signal Apple's algorithm reads | Below p50 (lower is better) | Above p75: stop new traffic until fixed |
| 7 | Cohort by Download Source | Separates organic, Search Ads, web, CPP traffic | Read each source independently | Any source converting at less than half the organic rate: shift spend away from it |
A few rules for reading the table.
Conversion rate is the metric that connects to screenshots. When your conversion sits below your peer group's median for two weeks running, the most likely root cause is the listing surface. That's the trigger to audit your screenshot set, not a hunch. The ASO audit tool walks the listing with the peer benchmark in mind and surfaces the highest-impact frames to refresh, and the App Store screenshot mistakes guide covers the patterns that most often drag conversion below p25.
Day 35 download-to-paid conversion is the new monetization anchor. Before March 2026, a paid or subscription indie dev couldn't tell whether the install-to-purchase rate was good. Now Apple publishes the peer p25, p50, and p75. Apply the metric only if you actually monetize on download or trial start. Free apps with ads should ignore the row.
Cohort by download source is the most underused new feature. A blended conversion rate hides everything. Search Ads traffic, paid social traffic via CPP, web-driven traffic, and pure organic all behave differently. Average them and you'll misdiagnose your screenshots. Split them and the right action becomes obvious.
The other four metrics (Day 1 retention, Day 7 retention, crash rate, cohort breakdown) are read-along context. You don't change them by editing screenshots. You watch them so you don't blame the listing for a product problem.
How do peer group benchmarks compare your app to competitors?
Peer group benchmarks are the lever the March 2026 release actually moves. Until now, "my screenshots convert fine" was a defensible claim because there was no comparison. With peer benchmarks, the claim is testable.
Apple builds peer groups by App Store category, business model, and download volume tier [2]. Your app is bucketed into a group of similar apps with similar scale, and the dashboard shows you where you sit in that group's distribution. Each metric widget displays the 25th, 50th, and 75th percentile values, which translates to: bottom quartile, median, top quartile [2].
Apple uses differential privacy on every benchmark. The exact mechanism: aggregate the peer group's metric values, add calibrated noise, and only publish if the group has a minimum number of apps that week [2]. The output is directional, not exact. A reading of "your conversion rate is at p35" means you're probably below the median and above the bottom quartile, not that you sit at exactly the 35th percentile. Treat the percentile bands as zones, not point estimates.
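If the mechanism feels abstract, here's a toy sketch of the aggregate-noise-suppress pattern described above. The Laplace noise, its scale, and the minimum group size are illustrative stand-ins, not Apple's actual parameters:

```swift
import Foundation

/// Toy differential-privacy-style benchmark: aggregate the peer group's
/// metric, add Laplace noise, and suppress small groups entirely.
/// The noise scale and minimum group size are illustrative, not Apple's.
func noisyBenchmark(_ values: [Double], scale: Double = 0.5, minGroup: Int = 5) -> Double? {
    guard values.count >= minGroup else { return nil } // small peer groups publish nothing
    let mean = values.reduce(0, +) / Double(values.count)
    // Sample Laplace(0, scale) by inverse transform; clamp to avoid log(0).
    let u = Double.random(in: -0.5..<0.5)
    let t = max(1 - 2 * abs(u), 1e-12)
    let noise = -scale * (u < 0 ? -1.0 : 1.0) * log(t)
    return mean + noise
}

// Two runs over the same peer group give slightly different answers,
// which is why a reading like "p35" is a zone, not a coordinate.
let peerConversionRates = [2.1, 3.4, 2.8, 4.0, 3.1, 2.6] // percent
if let benchmark = noisyBenchmark(peerConversionRates) {
    print("noisy peer mean: \(benchmark)")
} else {
    print("suppressed: peer group too small")
}
```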
A few practical reading rules (a short sketch encoding them follows the list):
- At or above p50 means the listing is doing its job. Optimization at this level is small wins. Don't redesign.
- Between p25 and p50 means there's room. Worth running a Product Page Optimization test. The A/B testing guide for PPO screenshots covers test design and the 90-day test cap.
- Below p25 means the listing is not competitive in your category. Frame 1 and 2 are usually the cause. Refresh both before doing anything else.
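Restated as a tiny decision function, with nothing in it beyond the three rules above:

```swift
/// The three reading zones above, as a decision function.
/// Percentile inputs are whatever the benchmark widget reports.
enum BenchmarkZone: String {
    case healthy = "At or above p50: small wins only, don't redesign"
    case room = "Between p25 and p50: worth a PPO test"
    case notCompetitive = "Below p25: refresh frames 1 and 2 first"
}

func zone(value: Double, p25: Double, p50: Double) -> BenchmarkZone {
    if value >= p50 { return .healthy }
    if value >= p25 { return .room }
    return .notCompetitive
}

// Example: a 2.4% conversion rate against a peer p25 of 2.6% and p50 of 3.3%.
print(zone(value: 2.4, p25: 2.6, p50: 3.3).rawValue)
```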
A subtle trap: peer groups are by primary category. If your app is cross-category (a productivity app that also fits Lifestyle), Apple buckets it by your declared primary. Switching primary category to chase a friendlier benchmark is a bad idea. The App Store ranking factors guide covers why category churn hurts more than it helps.
What can App Store Connect Analytics still not measure?
This is the part nobody at Apple will say out loud. Even with 100+ new metrics, the dashboard still can't tell you what most indie devs actually want to know [4].
The four blind spots, in plain language:
- Creative-level revenue attribution. You can see that a Search Ads cohort converts at X%, but you cannot connect a specific ad creative or screenshot variant to a specific revenue outcome. The cohort data tells you which source brought the user, not which message brought them.
- Pre-purchase funnel. Onboarding screens, paywall exposure, trial start, trial drop-off, the path from app open to first transaction. None of it shows up in App Store Connect. The dashboard sees the install and the receipt, not the steps in between [4].
- Cross-channel touchpoints. Email, web landing pages, SMS, retargeting. If a user clicked a web ad, visited your marketing site, then installed three days later, the dashboard shows the install as organic. The web step is invisible.
- Individual attribution after ATT. App Tracking Transparency caps the individual-level data Apple will expose, and the new analytics doesn't break that wall. Cohort data stays aggregated [4].
For most indie devs, this is fine. You're not running a $50K Google Ads campaign that needs creative-level ROAS. You're running a listing that either converts or doesn't, and an app that either retains or doesn't, and a paywall that either pre-sells or doesn't. The peer benchmarks tell you which of those three is broken without the funnel data.
If you're building a subscription business and you actually need paywall analytics, the gap is real. Most subscription indies pair App Store Connect with a third-party SDK (Adapty, RevenueCat, Superwall) for the funnel layer. Apple's update doesn't change that calculus; it just makes the listing-level half of the picture clearer.
How should indie devs use the new metrics each week?
A 10-minute weekly check beats a deep dive once a month. The weekly check fits between feature work; the monthly deep dive doesn't.
The 10-minute routine:
- Open the new Analytics dashboard. Pin the seven metrics from the table above as your default view. Pinning saves the dashboard between sessions and is the single biggest time saver in the new UI.
- Read conversion rate vs peer p50 first. This is your screenshot signal. If it's below p50 for two consecutive weeks, that's the trigger to audit. The trigger checklist for refreshing screenshots covers what counts and what doesn't.
- Check Day 1 retention. If retention dropped while conversion rose, your screenshots got better at attracting installs your product can't keep. That's a signal to align the listing more honestly with the product, not to chase higher conversion.
- Split conversion by download source if you run paid acquisition. Cohort by source shows whether Search Ads or CPP traffic converts at the same rate as organic. If a paid source converts at half the organic rate, the source itself is the problem (audience match), not the listing.
- Glance at crash rate. If it spikes above your peer p75, stop everything else and ship a bug fix. Crash rate above peer is a ranking penalty Apple's algorithm reads, and no listing optimization will outrun it. The App Store ranking factors guide covers how quality signals feed search ranking.
- For paid or subscription apps, glance at Day 35 download-to-paid conversion and proceeds per download. Compare to last month's reading, not last week's. The Day 35 window means weekly numbers are noisy.
- Close the dashboard. Going deeper than this without a hypothesis is how the dashboard eats your morning.
What you're aiming for is a dashboard that produces one of three signals: keep shipping product, refresh the listing, or fix a bug. Anything else is data tourism.
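If it helps to see that decision rule written down, here's a sketch of the routine as one function. The trigger thresholds restate the metrics table, and the three return values are the only outputs a weekly check should produce:

```swift
/// One week's readings, each already compared to the peer group.
struct WeeklyReading {
    var weeksConversionBelowP50: Int // consecutive weeks below peer p50
    var conversionBelowP25: Bool
    var day1RetentionBelowP25: Bool
    var crashRateAboveP75: Bool
}

enum WeeklySignal: String {
    case fixBug = "Crash rate above peer p75: stop new traffic, ship a fix"
    case refreshListing = "Conversion lagging peers: audit frames 1 and 2"
    case keepShipping = "Back to product work"
}

func weeklySignal(_ r: WeeklyReading) -> WeeklySignal {
    if r.crashRateAboveP75 { return .fixBug }           // quality outranks everything else
    if r.day1RetentionBelowP25 { return .keepShipping } // product problem, not a listing fix
    if r.conversionBelowP25 || r.weeksConversionBelowP50 >= 2 {
        return .refreshListing                          // the two-week trigger from step 2
    }
    return .keepShipping
}

// Example: two straight weeks below peer p50, everything else healthy.
print(weeklySignal(WeeklyReading(
    weeksConversionBelowP50: 2,
    conversionBelowP25: false,
    day1RetentionBelowP25: false,
    crashRateAboveP75: false
)).rawValue)
```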
A specific note for subscription apps: the subscription app screenshots guide covers how to read the new Day 35 download-to-paid benchmark in the context of your paywall sequence. The number itself is a single value, but the diagnosis depends on which screenshot frame is doing the pre-paywall work.
A pragmatic add-on: if you're shipping App Preview videos as part of your listing, the new conversion benchmark also reflects video performance. The App Preview video specs reference covers per-device length and resolution requirements so the video isn't the silent reason your conversion sits below p50.
Make analytics a 10-minute weekly check, not a project
The March 2026 update is genuinely a step forward. Peer group benchmarks turn "my conversion is fine" into a falsifiable claim, and cohort by source breaks the average that hid every paid-vs-organic mismatch. For solo indie devs, the upside isn't 100 new metrics. The upside is that the seven metrics that actually matter now ship with comparison values built in [2] [3].
The trap is treating analytics like a feature you have to master. You don't. You need a 10-minute Monday morning ritual: read seven numbers, decide whether to refresh the listing, fix a bug, or keep shipping. That's it. The dashboard tells you which of those three to do, and the rest of the week is product work.
When the conversion benchmark says it's time to refresh, the screenshot work itself shouldn't take more than the rest of that morning. Try AppScreenshotStudio today for free and put the saved hours back into the part the dashboard can't measure: the product the screenshots are selling.
References
- [1] New In-App Purchase and subscription data now available in Analytics, Apple Developer News — developer.apple.com
- [2] Peer group benchmarks, App Store Connect Analytics Help — developer.apple.com
- [3] Apple announces major update to Analytics in App Store Connect — 9to5mac.com
- [4] App Store Connect Analytics Guide: New Metrics & Remaining Limits — blog.funnelfox.com