Introduction
When building applications with distributed tracing, it’s common to rely on platforms like DataDog for observability. During development, though, that dependency can really slow you down: managing API keys, hitting rate limits, and paying to send test traces all create unnecessary friction. Testing tracing code shouldn’t feel this heavy.
Luckily, there’s a better way. In this post, we’ll show how to use the OpenTelemetry Collector with Docker Compose to spin up a lightweight local environment and a minimal Go app to publish traces that we can visualize. This setup lets you test and validate your tracing logic without depending on external services like DataDog—fast, cheap, and easy.
Quick Refresher
A trace represents the full path of a request as it moves through your system—from the initial entry point to all the services and components it touches along the way. It gives you a high-level view of how that request was handled, helping you understand performance bottlenecks, failures, and dependencies across services.
Within a trace, each individual operation is captured as a span. A span records the details of a specific unit of work, such as a function execution, a database query, or an external API call. Spans include timing information, metadata, and parent-child relationships to show how work is structured within the trace. Together, traces and spans give you a clear, structured view of what your application is doing under the hood.
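To make that structure concrete, here’s a minimal sketch using the OpenTelemetry Go API. The span names are made up for illustration, and without a configured exporter the snippet records nothing; it simply shows a root span with one child span attached via the context.
// span_refresher.go (illustrative only)
package main

import (
	"context"
	"fmt"

	"go.opentelemetry.io/otel"
)

func main() {
	// With no TracerProvider configured, this returns a no-op tracer;
	// the point is the shape of the API, not the export path.
	tr := otel.Tracer("refresher")

	// Root span: the entry point of the request.
	ctx, parent := tr.Start(context.Background(), "handle-request")
	defer parent.End()

	// Child span: inherits its parent from ctx, forming the trace tree.
	_, child := tr.Start(ctx, "query-database")
	child.End()

	fmt.Println("one trace, two spans: handle-request -> query-database")
}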
Why Test Locally with OTel?
Using a cloud provider’s API during development can introduce several pain points:
- You’re tied to API keys, which might need to be shared or rotated.
- There’s the risk of hitting rate limits or racking up costs with too many test traces.
- Every trace has to travel over the network, which slows down your feedback loop.
OpenTelemetry (OTel) offers a more flexible solution. As an open standard, it gives you full control over how traces are collected, processed, and exported. By running the OTel Collector locally, you can simulate trace pipelines and debug your instrumentation—all without leaving your development machine.
Setting Up Your Environment
Before we dive into the code, let’s get your local environment ready. All you need is Docker, the Go runtime, and a basic understanding of how OpenTelemetry works.
You can check out the docs for downloading and installing these tools:
- Docker Installation: https://www.docker.com/get-started/
- Go: https://go.dev/doc/install
We’ll use a Docker Compose file to run the OTel Collector. This setup will include a custom configuration, allowing it to receive traces in a DataDog-friendly format. This is perfect for testing without hitting external APIs. Once it’s up and running, we’ll write a small Go app to generate some traces and send them to the local collector.
You can start by creating a new project directory:
mkdir <NAME_OF_YOUR_PROJECT>
cd <NAME_OF_YOUR_PROJECT>
Local OTel Configuration
Within your new project directory, create the Docker Compose file for the OTel Collector shown below.
OTel Docker Compose File
# otel-compose.yaml
services:
  otel-collector:
    image: grafana/otel-lgtm:latest
    ports:
      - "8126:8126"
      - "4317:4317"
      - "4318:4318"
      - "3005:3000" # Optional port for health check or UI if needed
    configs:
      - source: otelcol-config.yaml
        target: /otel-lgtm/otelcol-config.yaml

configs:
  otelcol-config.yaml:
    content: |
      receivers:
        otlp:
          protocols:
            grpc:
              endpoint: 0.0.0.0:4317
            http:
              endpoint: 0.0.0.0:4318
        prometheus/collector:
          config:
            scrape_configs:
              - job_name: 'opentelemetry-collector'
                static_configs:
                  - targets: ['localhost:8888']
        datadog:
          endpoint: 0.0.0.0:8126
      processors:
        batch:
      exporters:
        otlphttp/metrics:
          endpoint: http://localhost:9090/api/v1/otlp
          tls:
            insecure: true
        otlphttp/traces:
          endpoint: http://localhost:4418
          tls:
            insecure: true
        otlphttp/logs:
          endpoint: http://localhost:3100/otlp
          tls:
            insecure: true
        debug/metrics:
          verbosity: detailed
        debug/traces:
          verbosity: detailed
        debug/logs:
          verbosity: detailed
      service:
        pipelines:
          traces:
            receivers: [otlp, datadog]
            processors: [batch]
            exporters: [otlphttp/traces]
            #exporters: [otlphttp/traces,debug/traces]
          metrics:
            receivers: [otlp, prometheus/collector, datadog]
            processors: [batch]
            exporters: [otlphttp/metrics]
            #exporters: [otlphttp/metrics,debug/metrics]
          logs:
            receivers: [otlp]
            processors: [batch]
            exporters: [otlphttp/logs]
            #exporters: [otlphttp/logs,debug/logs]
OTel Config File Breakdown
Let’s do a brief breakdown of the Docker Compose file above. Within the services block, we have the otel-collector service, which uses the grafana/otel-lgtm image and exposes common telemetry ports:
- 8126 – Datadog traces
- 4317 – OTLP gRPC
- 4318 – OTLP HTTP
- 3005 – optional, for the Grafana UI or health checks
It also mounts the custom collector configuration from the configs section. See below.
services:
  otel-collector:
    image: grafana/otel-lgtm:latest
    ports:
      - "8126:8126"
      - "4317:4317"
      - "4318:4318"
      - "3005:3000" # Optional port for health check or UI if needed
    configs:
      - source: otelcol-config.yaml
        target: /otel-lgtm/otelcol-config.yaml
The configs section defines how telemetry is received, processed, and exported.
The receivers sub-section accepts incoming telemetry data:
- otlp: Traces, metrics, and logs via OTLP (gRPC & HTTP)
- prometheus/collector: Scrapes metrics from localhost:8888
- datadog: Accepts the Datadog trace format on port 8126
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  prometheus/collector:
    config:
      scrape_configs:
        - job_name: 'opentelemetry-collector'
          static_configs:
            - targets: ['localhost:8888']
  datadog:
    endpoint: 0.0.0.0:8126
The processors sub-section defines how telemetry is processed in flight; here, the batch processor groups telemetry data into batches before it is sent. An empty batch entry means the processor runs with its default configuration.
processors:
  batch:
The exporters sub-section sends telemetry to storage backends:
- otlphttp/metrics: Pushes metrics to Prometheus-compatible endpoint
- otlphttp/traces: Sends traces to a trace backend
- otlphttp/logs: Sends logs to a log backend (like Loki)
- debug/*: Optional detailed logging for dev/debugging
exporters:
  otlphttp/metrics:
    endpoint: http://localhost:9090/api/v1/otlp
    tls:
      insecure: true
  otlphttp/traces:
    endpoint: http://localhost:4418
    tls:
      insecure: true
  otlphttp/logs:
    endpoint: http://localhost:3100/otlp
    tls:
      insecure: true
  debug/metrics:
    verbosity: detailed
  debug/traces:
    verbosity: detailed
  debug/logs:
    verbosity: detailed
The service sub-section ties it all together with pipelines:
- traces: Collects from OTLP & Datadog → batches → exports
- metrics: Collects from OTLP, Prometheus, & Datadog → batches → exports
- logs: Collects from OTLP → batches → exports
service:
  pipelines:
    traces:
      receivers: [otlp, datadog]
      processors: [batch]
      exporters: [otlphttp/traces]
      #exporters: [otlphttp/traces,debug/traces]
    metrics:
      receivers: [otlp, prometheus/collector, datadog]
      processors: [batch]
      exporters: [otlphttp/metrics]
      #exporters: [otlphttp/metrics,debug/metrics]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/logs]
      #exporters: [otlphttp/logs,debug/logs]
Running The OTel Collector
You can now run the following command to spin up the container:
docker compose -f otel-compose.yaml up
You should see the collector’s startup logs in your terminal.
Navigate to your browser and check out the Grafana Dashboard.
http://localhost:3005/
You should land on the Grafana home dashboard. Clicking the hamburger menu in the top-left corner of the screen gives you a list of options. For our use case, we’ll focus on the Explore section, but more on that later.
Now we can move onto setting up a simple Go project that will send traces to our OTel collector.
Publish Traces Locally
We’ll use a minimal Go program with the DataDog APM SDK to generate traces. These traces will be collected by the OTel Collector, via the DataDog receiver, as configured in our Docker Compose file.
We’ll initialize Go modules, creating the go.mod file (the go.sum file will be generated once dependencies are added).
go mod init <NAME_OF_YOUR_GO_PROJECT>
Next, you can use the main.go file below for the scratch app. We’ll explain the code after the block, just like we did with the OTel configuration.
Minimal Go Application
// main.go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"

	"gopkg.in/DataDog/dd-trace-go.v1/ddtrace/tracer"
)

func main() {
	// Start Datadog tracer with custom agent address pointing to OTel
	tracer.Start(
		tracer.WithAgentAddr("localhost:8126"),
		tracer.WithServiceName("demo-service"),
	)
	defer tracer.Stop()

	// Create a root span
	rootSpan := tracer.StartSpan("main.operation")
	rootCtx := tracer.ContextWithSpan(context.Background(), rootSpan)

	// Simulate a delayed task
	simulateTask(rootCtx)

	// Simulate a traced HTTP call
	tracedHTTPCall(rootCtx)

	rootSpan.Finish()
	fmt.Println("Done. Check your OTel collector output.")
}

func simulateTask(ctx context.Context) {
	span, _ := tracer.StartSpanFromContext(ctx, "simulate.task")
	defer span.Finish()

	fmt.Println("Sleeping for 1 second...")
	time.Sleep(1 * time.Second)
}

func tracedHTTPCall(ctx context.Context) {
	span, _ := tracer.StartSpanFromContext(ctx, "http.request")
	defer span.Finish()

	req, _ := http.NewRequestWithContext(ctx, "GET", "https://httpbin.org/get", nil)
	client := http.Client{}

	res, err := client.Do(req)
	if err != nil {
		span.SetTag("error", err)
		return
	}
	defer res.Body.Close()

	body, _ := io.ReadAll(res.Body)
	fmt.Printf("HTTP GET response: %s\n", string(body))
}
What Is This Go Code Doing?
This Go program uses the DataDog APM SDK to generate and send tracing data to the OpenTelemetry Collector. Here’s what it does, step by step:
- Starts a tracer with localhost:8126 as the agent address; this is where the OTel Collector is listening for DataDog-formatted traces.
- Creates a root span named main.operation to represent the main task.
- Exposes a func simulateTask() that creates a child span simulate.task from the root context and uses time.Sleep to mimic a slow or blocking operation.
- Exposes a func tracedHTTPCall() that creates a child span http.request and shows how to trace outgoing API calls.
- Closes all spans and sends the trace data to the OTel Collector, which routes it based on your pipeline config.
This setup is ideal for testing observability pipelines locally with minimal setup.
In Go, spans are closely tied to the context.Context type. When you create a span, you typically attach it to a context, allowing you to pass tracing information down through function calls. This makes it easy to maintain parent-child relationships between spans and ensures trace data flows naturally through your application’s execution path.
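To see how that plays out with the dd-trace-go API used above, here’s a minimal standalone sketch (separate from main.go; the span and function names are illustrative): passing the context down creates a child span, while passing a fresh context starts a brand-new root span.
// context_sketch.go (illustrative only)
package main

import (
	"context"

	"gopkg.in/DataDog/dd-trace-go.v1/ddtrace/tracer"
)

func main() {
	// Point the tracer at the local OTel Collector, just like main.go does.
	tracer.Start(tracer.WithAgentAddr("localhost:8126"))
	defer tracer.Stop()

	parent := tracer.StartSpan("parent.operation")
	defer parent.Finish()

	// Attach the span to a context so callees can pick it up.
	ctx := tracer.ContextWithSpan(context.Background(), parent)

	doWork(ctx)                  // becomes a child of parent.operation
	doWork(context.Background()) // no span in this context, so it becomes a new root span
}

func doWork(ctx context.Context) {
	// StartSpanFromContext uses the span stored in ctx (if any) as the parent.
	span, _ := tracer.StartSpanFromContext(ctx, "child.operation")
	defer span.Finish()
}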
Let’s Run it
Run go mod tidy to pull in the dd-trace-go dependency, and then we can run the program with confidence!
go mod tidy
go run main.go
Check The Tracing
Navigate to the Home > Explore > Tempo page of the Grafana dashboard UI (URL provided below).
http://localhost:3005/explore?schemaVersion=1&panes=%7B%22wne%22:%7B%22datasource%22:%22tempo%22,%22queries%22:%5B%7B%22refId%22:%22A%22,%22datasource%22:%7B%22type%22:%22tempo%22,%22uid%22:%22tempo%22%7D,%22queryType%22:%22traceqlSearch%22,%22limit%22:20,%22tableType%22:%22traces%22,%22filters%22:%5B%7B%22id%22:%2243bfb49c%22,%22operator%22:%22%3D%22,%22scope%22:%22span%22%7D%5D%7D%5D,%22range%22:%7B%22from%22:%22now-15m%22,%22to%22:%22now%22%7D%7D%7D&orgId=1
Note: Tempo gathers and displays trace data within Grafana. For more details, explore the Tempo documentation.
Here is an example of what you should see (the trace IDs will be a bit different for you).
You should see a trace containing three spans: the root span plus two child spans, simulate.task and http.request.
Clicking a span opens the TraceQL explorer, showing how that specific span connects with others.
We can see the following trace connection structure:
- main.operation (root span)
  - simulate.task (child span)
  - http.request (child span)
Debugging Your Traces
When working with OTel locally, you might run into a few common problems. These can include misconfigured receivers or exporters, or even missing spans because of broken trace context propagation.
These bugs are often hard to find, especially when you don’t have good visibility into the OTel Collector’s activity.
For OTel configuration issues, tools like otelbin.io can help you debug your setup and ensure everything is working correctly.
When it comes to broken tracing, spans may show up as unlinked or missing. This often happens if:
- context.Context isn’t passed between functions.
- A service starts a new trace instead of continuing one.
- Trace headers aren’t forwarded in HTTP requests (see the sketch below).
The Grafana Tempo dashboard should help spot these issues by clearly showing disconnected or orphaned spans, making it easier to debug and fix broken trace propagation.
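As a sketch of that last point, here’s one way you might forward trace headers on an outgoing request with dd-trace-go. The callDownstream function and its url parameter are hypothetical additions rather than part of the example app; the snippet reuses the context, net/http, and tracer imports from main.go and assumes the tracer has already been started.
// callDownstream forwards the current trace to another service by injecting
// the span context into the outgoing request headers. Without the Inject call,
// the downstream service would start a brand-new trace instead of continuing this one.
func callDownstream(ctx context.Context, url string) error {
	span, ctx := tracer.StartSpanFromContext(ctx, "downstream.request")
	defer span.Finish()

	req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
	if err != nil {
		return err
	}

	// Propagate the trace: write the span context into the HTTP headers.
	if err := tracer.Inject(span.Context(), tracer.HTTPHeadersCarrier(req.Header)); err != nil {
		return err
	}

	res, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	return res.Body.Close()
}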
Conclusion
In this blog post, we covered:
- Traces and spans: A high-level overview.
- Challenges: The difficulties of testing telemetry data locally.
- An alternative solution: A local testing approach involving:
- Setting up a local OTel configuration for telemetry.
- Publishing traces with a minimal Go application.
- Visualizing traces using the Grafana Tempo Dashboard.
- Debugging your traces.
While this example is simple, you can apply these fundamentals to your own projects. You could integrate the otel-compose.yaml file directly into your project and run it using the steps we’ve outlined. Tracing is a core component of distributed systems, making microservice management much easier.
We hope this helps demystify the world of tracing and empowers you to troubleshoot your traces more quickly!