
Sending metrics to GraphOS

Learn how to report operation and field usage metrics to GraphOS


Studio offers visualizations of metrics like request rate, latency, and more to help you analyze your graph's performance. Studio also lets you analyze how clients are using individual fields in your GraphQL requests. To analyze metrics and field usage in Studio, you first need to report them to GraphOS.

  • Reporting metrics from Apollo Server or a monograph requires an Enterprise or legacy Team plan.
  • Connecting a self-hosted router to GraphOS requires an Enterprise plan.

If your organization doesn't currently have an Enterprise plan, you can test it out by signing up for a free Enterprise trial.

Reporting operation metrics

How you report metrics to GraphOS depends on whether you're using the Apollo Router or Apollo Server, or a third-party GraphQL server.

From the Apollo Router or Apollo Server

Both the Apollo Router and Apollo Server use the same mechanism to enable metrics reporting to GraphOS:

  1. Obtain a graph API key from Studio.

  2. Obtain the graph ref for the graph and variant you want to report metrics for. You can find your variant's graph ref at the top of the variant's README page in Studio. It has the format graph-id@variant-name (such as my-graph@staging).

  3. Use the obtained values to set the following environment variables in your environment before starting up your router/server:

    export APOLLO_KEY=<YOUR_GRAPH_API_KEY>
    export APOLLO_GRAPH_REF=<YOUR_GRAPH_REF>

NOTE

Consult your production environment's documentation to learn how to set its environment variables.

Now, when your router or server starts up, it automatically begins reporting metrics to GraphOS.
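For example, with those variables set, a minimal Apollo Server setup needs no reporting-specific code at all. This is a sketch assuming Apollo Server 4's @apollo/server package and the standalone server; adjust it for your server version and transport:

import { ApolloServer } from '@apollo/server';
import { startStandaloneServer } from '@apollo/server/standalone';

const typeDefs = `#graphql
  type Query {
    hello: String
  }
`;
const resolvers = { Query: { hello: () => 'world' } };

// Because APOLLO_KEY and APOLLO_GRAPH_REF are set in the environment,
// Apollo Server enables its usage reporting automatically at startup.
const server = new ApolloServer({ typeDefs, resolvers });
const { url } = await startStandaloneServer(server, { listen: { port: 4000 } });
console.log(`Server ready at ${url}`);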

From a third-party server (advanced)

You can set up a reporting agent in your GraphQL server to push metrics to GraphOS. The agent is responsible for:

  • Translating operation details into the correct reporting format
  • Implementing a default signature function to identify each executed operation
  • Emitting batches of traces and metrics to the reporting endpoint
  • Optionally defining plugins to enable advanced reporting features

Apollo Server defines its agent for performing these tasks in the usage reporting plugin.

NOTE

If you're interested in collaborating with Apollo on creating a dedicated integration for your GraphQL server, please contact us at support@apollographql.com.

Reporting format

The reporting endpoint accepts batches of traces and metrics that are encoded in protocol buffer format. Each trace corresponds to the execution of a single GraphQL operation, including a breakdown of the timing and error information for each field that's resolved as part of the operation. The schema for this protocol buffer is defined as the Report message in the protobuf schema.

The protobuf schema document describes how to create a report whose tracesPerQuery objects consist solely of a list of detailed execution traces in the trace array. GraphOS now allows your server to describe usage as a mix of detailed execution traces and pre-aggregated metrics (released in Apollo Server 2.24), which leads to much more efficient reports. This document doesn't describe how to generate these metrics, nor does it describe how to report the number of requests for a particular client shown in the Clients & Operations table on the Insights page.
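As a rough illustration of the format, the following sketch encodes a Report containing a single trace using the protobufjs library and a locally vendored copy of the protobuf schema. The file path, the type lookup, and all field values here are assumptions for illustration; consult the protobuf schema itself for the authoritative message structure.

import protobuf from 'protobufjs';

// Load a vendored copy of Apollo's reports.proto (path is hypothetical).
const root = await protobuf.load('./reports.proto');
// Adjust this lookup if the schema declares a package name.
const Report = root.lookupType('Report');

const nowSeconds = Math.floor(Date.now() / 1000);
const payload = {
  header: { graphRef: 'my-graph@staging', agentVersion: 'example-agent@0.0.1' },
  tracesPerQuery: {
    // Map key: the operation's signature (see "Operation signing" below).
    '# AuthorForPost\nquery AuthorForPost($id:String!){post(id:$id){author}}': {
      trace: [
        {
          startTime: { seconds: nowSeconds },
          endTime: { seconds: nowSeconds },
          durationNs: 1500000,
        },
      ],
    },
  },
};

// verify() returns an error string if the payload doesn't match the schema.
const invalid = Report.verify(payload);
if (invalid) throw new Error(invalid);
const encodedReport = Report.encode(Report.fromObject(payload)).finish(); // Uint8Array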

NOTE

We strongly encourage developers to contact Apollo support at support@apollographql.com to discuss their use case before building their own reporting agent using this module.

As a starting point, we recommend implementing an extension to the GraphQL execution that creates a report with a single trace, as defined in the Trace message of the protobuf schema. Then, you can batch multiple traces into a single report. We recommend sending batches approximately every 20 seconds and limiting each batch to a reasonable size (~4MB).
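For instance, a batching layer might look like the following sketch, where encodeReport() and sendReport() are hypothetical stand-ins for the encoding and sending steps described on this page:

// Accumulate traces per operation signature, then flush on a timer or
// when the batch grows too large.
function createTraceBatcher({ encodeReport, sendReport, apiKey }) {
  const FLUSH_INTERVAL_MS = 20_000;        // flush roughly every 20 seconds
  const MAX_BATCH_BYTES = 4 * 1024 * 1024; // keep batches around 4MB
  const pendingTraces = new Map();         // signature -> Trace[]
  let approximateBatchSize = 0;

  async function flush() {
    if (pendingTraces.size === 0) return;
    const batch = new Map(pendingTraces);
    pendingTraces.clear();
    approximateBatchSize = 0;
    await sendReport(encodeReport(batch), apiKey);
  }

  function recordTrace(signature, trace, traceSizeBytes) {
    const traces = pendingTraces.get(signature) ?? [];
    traces.push(trace);
    pendingTraces.set(signature, traces);
    approximateBatchSize += traceSizeBytes;
    if (approximateBatchSize >= MAX_BATCH_BYTES) flush();
  }

  const timer = setInterval(flush, FLUSH_INTERVAL_MS);
  return { recordTrace, flush, stop: () => clearInterval(timer) };
}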

Many server runtimes already support emitting tracing information as a GraphQL extension. Such extensions are available for Node, Ruby, Scala, Java, Elixir, and .NET. If you're working on adding metrics reporting functionality for one of these languages, reading through that tracing instrumentation is a good place to start. For other languages, we recommend consulting the Apollo Server usage reporting plugin.

Operation signing

For Studio to correctly group GraphQL queries, your reporting agent should define a function to generate an operation signature for each distinct operation. This can be challenging because two structurally different operations can be functionally equivalent. For instance, all of the following queries request the same information:

query AuthorForPost($foo: String!) {
  post(id: $foo) {
    author
  }
}

query AuthorForPost($bar: String!) {
  post(id: $bar) {
    author
  }
}

query AuthorForPost {
  post(id: "my-post-id") {
    author
  }
}

query AuthorForPost {
  post(id: "my-post-id") {
    writer: author
  }
}

It's important to decide how to group such queries when tracking metrics. The TypeScript reference implementation does the following to every query before generating its signature to better group functionally equivalent operations:

  • Drop unused fragments and/or operations
  • Hide string literals
  • Ignore aliases
  • Sort the tree deterministically
  • Ignore differences in whitespace

We recommend using the same default signature method for consistency across different server runtimes.
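To illustrate a few of these normalizations, here's a simplified sketch using graphql-js. It is not the reference algorithm, which also drops unused definitions and sorts the document deterministically:

import { parse, print, stripIgnoredCharacters, visit } from 'graphql';

function approximateSignature(query) {
  const normalized = visit(parse(query), {
    // Ignore aliases: `writer: author` groups with `author`.
    Field(node) {
      return { ...node, alias: undefined };
    },
    // Hide string literals: `id: "my-post-id"` becomes `id: ""`.
    StringValue(node) {
      return { ...node, value: '' };
    },
  });
  // Printing and stripping ignored characters removes whitespace differences.
  return stripIgnoredCharacters(print(normalized));
}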

Sending metrics to the reporting endpoint

After your GraphQL server prepares a batch of traces, it should send them to the Studio reporting endpoint at the following URL:

https://usage-reporting.api.apollographql.com/api/ingress/traces

Each batch should be sent as an HTTP POST request. The body of the request can be one of the following:

  • A binary serialization of a Report message
  • A gzipped binary serialization of a Report message

To authenticate with Studio, each request must include either:

  • An X-Api-Key header with a valid API key for your graph
  • An authtoken cookie with a valid API key for your graph

Only graph-level API keys (starting with the prefix service:) are supported.

The request can also optionally include a Content-Type header with value application/protobuf, but this is not required.
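For example, a minimal sender might look like the following sketch (not the reference implementation). It assumes Node 18+ for the global fetch, and the Content-Encoding header is an assumption that matches the gzipped body; you can also send the uncompressed binary serialization instead:

import { gzipSync } from 'zlib';

// encodedReport is the binary serialization of a Report message (Uint8Array).
async function sendReport(encodedReport, apiKey) {
  const response = await fetch(
    'https://usage-reporting.api.apollographql.com/api/ingress/traces',
    {
      method: 'POST',
      headers: {
        'X-Api-Key': apiKey,                    // graph-level API key (service:...)
        'Content-Type': 'application/protobuf', // optional
        'Content-Encoding': 'gzip',             // assumption: body below is gzipped
      },
      body: gzipSync(encodedReport),
    },
  );
  if (!response.ok) {
    throw new Error(`Usage report rejected: ${response.status} ${await response.text()}`);
  }
  return response;
}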

⚠️ CAUTION

The reporting endpoint rejects reports that are older than 50 minutes. If you see an error like Rejecting report from service {your service} with skewed timestamp, ensure your traces are current and that your timestamp calculations are accurate.

For a reference implementation, see the sendReport() function in the TypeScript reference agent.

Tuning reporting behavior

We recommend retrying with backoff when you encounter 5xx responses or network errors from the reporting endpoint. Additionally, implement a shutdown hook to ensure you push all pending reports before your server shuts down.
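A simple version of both, reusing the sendReport() sketch above, might look like this (the delays, attempt count, and signal handling are illustrative choices):

// Retry with exponential backoff on failures. In a real agent, treat 4xx
// responses as permanent failures instead of retrying them.
async function sendReportWithRetries(encodedReport, apiKey, maxAttempts = 5) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await sendReport(encodedReport, apiKey);
    } catch (error) {
      if (attempt === maxAttempts) throw error;
      const delayMs = 500 * 2 ** (attempt - 1); // 500ms, 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Shutdown hook: flush any pending traces before the process exits.
// `batcher` is the hypothetical batching object from the earlier sketch.
process.once('SIGTERM', async () => {
  await batcher.flush();
  process.exit(0);
});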

Implementing additional reporting features

The reference TypeScript implementation includes several features that you might want to include in your implementation. All of these features are implemented in the usage reporting plugin itself and are documented in the plugin's API reference.

For example, you can restrict which information is sent to GraphOS, particularly to avoid reporting personal data. Because personal data most commonly appears in GraphQL variables and request headers, the TypeScript agent offers the sendVariableValues and sendHeaders options.
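For example, following the same pattern as the configuration snippets later on this page, you might configure the plugin like this (the option values shown are illustrative):

new ApolloServer({
  plugins: [
    ApolloServerPluginUsageReporting({
      // Never send GraphQL variable values to GraphOS.
      sendVariableValues: { none: true },
      // Only send an explicit allowlist of request headers.
      sendHeaders: { onlyNames: ['apollographql-client-name'] },
    }),
  ],
  // ...
});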

Reporting field usage metrics

Your GraphQL router or server can report one or both of the following field usage metrics:

  • Requests: How many times an operation that requests a particular field has been observed
  • Executions: How many times the resolver for a particular field has been executed

How you report these metrics to GraphOS depends on whether you're using the Apollo Router or Apollo Server.

From the Apollo Router

If you have a cloud or self-hosted supergraph, you only need to configure your router to send operation metrics to GraphOS, and field usage will be automatically reported. Subgraphs should not send any metrics to GraphOS directly. Instead, they can include trace data in their responses to the router. The router then includes that data in its own reports to GraphOS.

From Apollo Server

Apollo Server automatically reports usage metrics as long as you follow these prerequisites:

  • You must first configure your server to send operation metrics to GraphOS.

  • To report requests:

    • Your GraphQL server must run Apollo Server 3.6 or later.
    • If you have a federated graph, your gateway must run Apollo Server 3.6 or later, but there are no requirements for your subgraphs.
  • To report executions:

    • Your GraphQL server can run any recent version of Apollo Server 2.x or 3.x.
    • If you have a federated graph, your subgraphs must support federated tracing. For compatible libraries, see the FEDERATED TRACING entry for each library in this table.

NOTE

If some of your subgraphs support federated tracing and others don't, only executions in compatible subgraphs are reported to Apollo.

Disabling execution metrics

In Apollo Server 3.6 and later, you can turn off field-level instrumentation for some or all operations by providing the fieldLevelInstrumentation option to ApolloServerPluginUsageReporting.

Turning off field-level instrumentation for a particular request has the following effects:

  • The request does not contribute to the "executions" statistic on the Insights page in Studio.
  • The request does not contribute to field-level execution timing hints that can be displayed in the GraphOS Studio Explorer and VS Code.
  • The request does not produce a trace that can be viewed in the Traces section of the Insights page in Studio.

These requests still contribute to most features of Studio, such as schema checks, the Insights page, and the "Request" metrics on the Insights page.

To turn off field-level instrumentation for all requests, pass () => false as the fieldLevelInstrumentation option:

new ApolloServer({
  plugins: [
    ApolloServerPluginUsageReporting({
      fieldLevelInstrumentation: () => false,
    }),
  ],
  // ...
});

If you do this, execution metrics do not appear on the Insights page.

Fractional sampling

You can enable field-level instrumentation for a fixed fraction of all requests by passing a number between 0 and 1 as the fieldLevelInstrumentation option:

new ApolloServer({
  plugins: [
    ApolloServerPluginUsageReporting({
      fieldLevelInstrumentation: 0.01,
    }),
  ],
  // ...
});

If you do so, Apollo Server randomly chooses to enable field-level instrumentation for each request according to the given probability.

⚠️ CAUTION

Make sure to pass a number (like 0.01), not a function that always returns the same number (like () => 0.01), which has a different effect.

In this case, whenever field-level instrumentation is enabled for a particular request, Apollo Server reports it to Studio with a weight based on the given probability. The "executions" statistic on the Insights page (along with execution timing hints) is scaled by this weight.

For example, if you pass 0.01, your server enables field-level instrumentation for approximately 1% of requests, and every observed field execution is counted as 100 executions on the Insights page. (The actual observed execution count is available in a tooltip in the table.)

Custom sampling

You can decide whether to enable field-level instrumentation (and what the weight should be) on a per-operation basis by passing a function as the value of fieldLevelInstrumentation.

For example, you might want to enable field-level instrumentation more often for rare operations and less often for common operations. For details, see the usage reporting plugin docs.
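For instance, based on the behavior described above (the function decides whether to instrument a request and what weight to report), you might pass something like the following sketch. HighTrafficQuery is a hypothetical operation name, and the sampling rate and weight are arbitrary:

new ApolloServer({
  plugins: [
    ApolloServerPluginUsageReporting({
      fieldLevelInstrumentation: async (requestContext) => {
        // Instrument 5% of a known high-traffic operation and scale its
        // stats by a weight of 20; instrument everything else fully.
        if (requestContext.operationName === 'HighTrafficQuery') {
          return Math.random() < 0.05 ? 20 : false;
        }
        return true; // weight 1
      },
    }),
  ],
  // ...
});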

Performance considerations

Calculating execution metrics can affect performance for large queries or high-traffic graphs. This is especially true for federated graphs, because a subgraph includes each operation's full trace data in its response to the gateway.
