Reconciling Copilot usage metrics across dashboards, APIs, and reports

Understand how Copilot usage metrics differ between dashboards, APIs, and exported reports.

Who can use this feature?

Enterprise owners and billing managers

Note

GitHub Copilot usage metrics are currently in public preview with data protection and subject to change.

The Copilot usage metrics dashboard, APIs, and export files all use the same underlying telemetry data, but they aggregate and present it differently. Understanding these differences helps you reconcile numbers across sources and trust your analysis when preparing internal reports.

Prerequisite

Copilot usage metrics depend on telemetry from users’ IDEs. If a developer has disabled telemetry in their IDE, their Copilot activity will not appear in the dashboard, API reports, or exported data.

If you notice missing users or unexpectedly low adoption numbers, verify IDE telemetry settings before troubleshooting other causes.

Metric alignment

The dashboard and APIs use shared definitions for key metrics:

| Concept | Dashboard metric | API or export field | Notes |
|---|---|---|---|
| Active users | Daily/weekly/total active users | `user_initiated_interaction_count > 0` | A user is considered active if they interacted with Copilot in their IDE on that day. |
| Acceptance rate | Code completion acceptance rate | `code_acceptance_activity_count` ÷ `code_generation_activity_count` | Both sources calculate acceptance rate the same way, though rounding may differ. |
| Agent adoption | Agent adoption chart | `totals_by_feature` where feature = `agent` | Reflects users who interacted with Copilot agent mode. |
| Language usage | Language usage charts | `totals_by_language_feature` or `totals_by_language_model` | The dashboard visualizes these aggregated fields. |

For complete field descriptions, see Data available in Copilot usage metrics.

Discrepancies between reports

Small differences between dashboard data, API reports, and exports are expected. These variations are usually caused by differences in time windows, scope, or data freshness.

Time windows

Each data source aggregates data differently.

| Source | Time window | Aggregation method |
|---|---|---|
| Dashboard | 28-day rolling window | Metrics are aggregated continuously over the past 28 days to smooth fluctuations. |
| APIs | Daily | Each record represents a single day per user, enabling daily trend analysis. |
| NDJSON exports | Daily | Mirrors API output for BI tools and long-term reporting. |

Aligning your reporting period with the dashboard’s 28-day window ensures consistent comparisons.
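To reproduce a dashboard-style figure from daily API records, roll the records up into the same 28-day window. This is a sketch under assumed field names (`user`, `day`, `user_initiated_interaction_count`); adapt it to the actual report schema.

```python
# Sketch: rolling daily API records up into a 28-day window so counts
# line up with the dashboard. Field names are assumptions.
from datetime import date, timedelta

def active_users_28d(records: list[dict], as_of: date) -> int:
    """Count distinct users active in the 28 days ending at `as_of`."""
    start = as_of - timedelta(days=27)  # inclusive 28-day window
    return len({
        r["user"]
        for r in records
        if start <= date.fromisoformat(r["day"]) <= as_of
        and r.get("user_initiated_interaction_count", 0) > 0
    })

records = [
    {"user": "alice", "day": "2024-06-01", "user_initiated_interaction_count": 5},
    {"user": "bob",   "day": "2024-06-10", "user_initiated_interaction_count": 0},
    {"user": "alice", "day": "2024-06-20", "user_initiated_interaction_count": 3},
    {"user": "carol", "day": "2024-04-01", "user_initiated_interaction_count": 9},
]
print(active_users_28d(records, date(2024, 6, 20)))  # 1 — only alice is active in the window
```

Summing daily active-user counts over 28 days would double-count users who are active on multiple days, which is why the deduplicating set is needed to match a rolling-window figure.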

Delayed telemetry

Because IDE telemetry is processed asynchronously, data for recent days may appear incomplete or missing. Data typically finalizes within three full UTC days. Apparent drops in recent daily metrics often resolve once telemetry is fully processed.
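One practical way to avoid chasing these apparent drops is to exclude the most recent three UTC days from trend calculations until their telemetry has finalized. A minimal sketch, assuming each record carries an ISO-format `day` field:

```python
# Sketch: dropping the most recent three UTC days, whose telemetry may
# still be incomplete, before computing daily trends. The "day" field
# name is an assumption about the per-day record shape.
from datetime import datetime, timedelta, timezone

def finalized_only(records: list[dict]) -> list[dict]:
    """Keep only days old enough for telemetry to have finalized."""
    cutoff = datetime.now(timezone.utc).date() - timedelta(days=3)
    return [r for r in records if datetime.fromisoformat(r["day"]).date() < cutoff]
```

Applying this filter consistently to both API data and exports keeps the two sources comparable even while recent days are still settling.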

Export timing

NDJSON files reflect data available at the time of export. If a file is downloaded before new telemetry is processed, the data may lag behind the dashboard or API. Re-exporting the file after the three-day window provides the most accurate view.
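A simple freshness check can flag exports worth regenerating: if the newest day in the file still falls inside the three-day finalization window, a later re-export may contain more data. This sketch assumes each NDJSON line is a JSON object with a `day` field.

```python
# Sketch: deciding whether an NDJSON export should be regenerated because
# its newest days were exported before telemetry finalized. The "day"
# field name is an assumption about the export schema.
import json
from datetime import date, timedelta

def needs_reexport(ndjson_text: str, today: date) -> bool:
    """True if the newest day in the file may still receive telemetry."""
    days = [date.fromisoformat(json.loads(line)["day"])
            for line in ndjson_text.splitlines() if line.strip()]
    if not days:
        return True  # empty file: nothing processed yet, export again later
    return max(days) > today - timedelta(days=3)
```

Note that NDJSON is one JSON object per line, so the file can be scanned line by line without loading it whole into memory.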

Unknown values

The value Unknown appears in some API or export breakdowns when telemetry from the IDE client lacks sufficient detail to categorize the activity. This is expected behavior and does not indicate missing data.

| Breakdown | Explanation |
|---|---|
| Language | Shown as `Unknown` when the IDE cannot identify the programming language of the active file. |
| Feature | Appears when an older client sends a generic event without specifying a chat mode (for example, `chat_panel_unknown_mode`). |
| Model | Appears when the event lacks information identifying the model used. Some internal models (for example, `gpt-4o-mini`) may appear alongside `Unknown` when used for non-user-facing operations such as summarization or intent detection. |

Unknown values are excluded from dashboard visualizations but appear in API and NDJSON data for completeness. The amount of Unknown data decreases as users upgrade to newer IDE and extension versions that send richer telemetry.
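When summarizing API or NDJSON data yourself, you can mirror the dashboard by filtering `Unknown` out of visual breakdowns while keeping it in raw totals. A sketch under assumed field names (`language`, `code_generation_activity_count`):

```python
# Sketch: mirroring the dashboard's treatment of Unknown values when
# summarizing a language breakdown from API/NDJSON records. Field names
# are assumptions based on the breakdowns described above.
from collections import Counter

def language_totals(records: list[dict], include_unknown: bool = False) -> Counter:
    """Sum activity per language; drop Unknown to match the dashboard."""
    totals = Counter()
    for r in records:
        lang = r.get("language", "Unknown")
        if lang == "Unknown" and not include_unknown:
            continue
        totals[lang] += r.get("code_generation_activity_count", 0)
    return totals

records = [
    {"language": "Python", "code_generation_activity_count": 30},
    {"language": "Unknown", "code_generation_activity_count": 7},
    {"language": "Go", "code_generation_activity_count": 12},
]
print(language_totals(records))                        # Counter({'Python': 30, 'Go': 12})
print(language_totals(records, include_unknown=True))  # adds 'Unknown': 7
```

Keeping the `include_unknown=True` view alongside the filtered one makes it easy to see how much activity the dashboard is omitting, and to confirm that shrinking `Unknown` totals track client upgrades.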