Datadog Traces
The Observo AI Datadog Traces destination enables seamless transmission of trace data to Datadog for distributed tracing and performance monitoring. It supports Gzip compression, secure API key authentication, and customizable tags for efficient application performance analysis.
Purpose
The Observo AI Datadog Traces destination enables users to send trace data to Datadog for distributed tracing, performance monitoring, and application performance analysis. This destination integrates seamlessly with Datadog’s platform, allowing organizations to leverage Datadog’s tracing capabilities to visualize application performance and troubleshoot issues.
Prerequisites
Before configuring the Datadog Traces destination in Observo AI, ensure the following requirements are met to facilitate seamless data ingestion:
Datadog Account:
Create a Datadog account if one does not already exist. This account serves as the hub for your trace data.
Ensure the account is active and configured to accept trace data.
Note the Datadog site (such as us1, us3, or eu1) that corresponds to your account's region.
API Key:
Generate a Datadog API key in the Datadog platform to authenticate trace data ingestion (Generate an API Key in Datadog).
Navigate to Organization Settings > API Keys in the Datadog UI, create a new key, and securely store its value.
Ensure the API key has permissions to send traces to Datadog.
Network Access:
Verify that the Observo AI instance can communicate with the Datadog trace intake endpoint (such as trace.agent.datadoghq.com for US1).
Check for firewall rules or network policies that may block outbound HTTPS traffic to Datadog’s trace intake endpoint on port 443.
Datadog Tags (Optional):
Prepare any tags you want to apply to traces for filtering and organization in Datadog (such as env:production or service:webapp).
Tags can be configured in Observo AI to align with Datadog’s tagging conventions for traces.
Datadog Account: Hub for trace data. Must be active and configured for traces.
API Key: Authenticates trace data ingestion. Securely store the API key.
Network Access: Enables communication with Datadog. Ensure HTTPS connectivity to the Datadog trace endpoint.
Datadog Tags: Organizes traces in Datadog. Optional, but recommended for filtering.
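The network-access check can be scripted. The sketch below is a Python illustration: it derives the trace intake hostname from a site value (the trace.agent.&lt;site&gt; pattern matches the US1 example above; confirm the exact endpoint for your site in Datadog's documentation) and opens a TLS connection on port 443.

```python
import socket
import ssl

# Hypothetical helper: derive the trace intake hostname from a Datadog site.
# The "trace.agent.<site>" pattern matches the US1 endpoint named above;
# verify the exact hostname for your site in Datadog's documentation.
def trace_intake_host(site: str = "datadoghq.com") -> str:
    return f"trace.agent.{site}"

def check_connectivity(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Open a TLS connection to the endpoint; True means outbound HTTPS works."""
    try:
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except OSError:
        return False

print(trace_intake_host("us3.datadoghq.com"))  # trace.agent.us3.datadoghq.com
# check_connectivity(trace_intake_host())  # run this from the Observo AI host
```

Run the connectivity check from the machine hosting the Observo AI instance; a False result usually points to a firewall rule or proxy requirement.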
Integration
The Integration section outlines default configurations for the Datadog Traces destination. To tailor the setup to your environment, consult the Configuration Parameters section in the Datadog documentation for advanced options.
To configure Datadog Traces as a destination in Observo AI, follow these steps:
Log in to Observo AI:
Navigate to the Destinations tab.
Click the Add Destinations button and select Create New.
Choose Datadog Traces from the list of available destinations to begin configuration.
General Settings:
Name: Add a unique identifier, such as datadog-traces-1.
Description (Optional): Provide a description for the destination.
Site (Optional): The Datadog site to send observability data to. Default: datadoghq.com. Examples: us3.datadoghq.com, datadoghq.eu
Request Configuration:
Request Concurrency: Configuration for outbound request concurrency. Default: Adaptive concurrency.
Options:
Adaptive concurrency: Adjusts parallelism based on system load.
A fixed concurrency of 1: Processes one task at a time.
Request Rate Limit Duration Secs: The time window used for the rate_limit_num option. Default: 1.
Request Rate Limit Num: The maximum number of requests allowed within the rate_limit_duration_secs time window.
Request Retry Attempts: The maximum number of retries to make for failed requests. The default represents an infinite number of retries. Default: Unlimited.
Request Retry Initial Backoff Secs: The amount of time to wait, in seconds, before attempting the first retry for a failed request. After the first retry has failed, the Fibonacci sequence is used to select future backoffs. Default: 1.
Request Retry Max Duration Secs: The maximum amount of time to wait between retries. Default: 3600.
Request Timeout Secs: The time a request waits before being aborted. It is recommended not to lower this value below the service's internal timeout, as this could create orphaned requests and duplicate data downstream. Default: 60.
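To see how the retry settings interact, here is a small illustrative sketch (not the destination's actual implementation) of a Fibonacci backoff schedule seeded by the initial backoff and capped by the maximum duration:

```python
def fibonacci_backoffs(initial_secs: float, max_duration_secs: float,
                       attempts: int) -> list:
    """Illustrative schedule: the first retry waits `initial_secs`, and the
    Fibonacci sequence scales later waits, capped at `max_duration_secs`."""
    waits, a, b = [], 1, 1
    for _ in range(attempts):
        waits.append(min(a * initial_secs, max_duration_secs))
        a, b = b, a + b
    return waits

# With the defaults above (initial backoff 1s, max duration 3600s),
# six retries wait:
print(fibonacci_backoffs(1, 3600, 6))  # [1, 1, 2, 3, 5, 8]
```

The cap from Request Retry Max Duration Secs keeps later waits bounded no matter how far the sequence grows.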
TLS Configuration (Optional):
TLS Enabled (False): Whether or not to require TLS for incoming or outgoing connections. When enabled and used for incoming connections, an identity certificate is also required. See tls.crt_file for more information.
TLS CA: The CA certificate, provided either as an absolute path to a PEM file or as an inline string in PEM format. Example: /etc/certs/ca.crt
TLS CRT: The certificate, provided either as an absolute path to a PEM file or as an inline string in PEM format. Example: /etc/certs/tls.crt
TLS Key: Absolute path to a private key file used to identify this server, or the key as an inline string in PEM format. The key must be in DER or PEM (PKCS#8) format. Example: /etc/certs/tls.key
TLS Key Pass: Passphrase used to unlock the encrypted key file. This has no effect unless key_file is set.
Examples: ${KEY_PASS_ENV_VAR}, PassWord1
TLS Verify Hostname (False): Enables hostname verification. The hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension. Only relevant for outgoing connections. Do NOT set this to false unless you understand the risks.
TLS Verify Certificate (False): Enables certificate verification. Certificates must be valid in terms of not being expired, and being issued by a trusted issuer. This verification operates in a hierarchical manner, checking validity of the certificate, the issuer of that certificate and so on until reaching a root certificate. Relevant for both incoming and outgoing connections. Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
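For illustration, the TLS options above map roughly onto Python's ssl module as follows (a sketch: the helper name is a placeholder, and certificate loading is skipped when no files are given, so the snippet runs without any cert files present):

```python
import ssl

def make_tls_context(ca_file=None, crt_file=None, key_file=None, key_pass=None,
                     verify_hostname=True, verify_cert=True) -> ssl.SSLContext:
    """Illustrative mapping of the TLS fields above onto an SSL context."""
    ctx = ssl.create_default_context(cafile=ca_file)    # TLS CA
    if crt_file:
        # TLS CRT / TLS Key / TLS Key Pass: identity certificate and key
        ctx.load_cert_chain(certfile=crt_file, keyfile=key_file,
                            password=key_pass)
    if not verify_hostname or not verify_cert:
        ctx.check_hostname = False                      # TLS Verify Hostname
    if not verify_cert:
        ctx.verify_mode = ssl.CERT_NONE                 # TLS Verify Certificate
    return ctx

ctx = make_tls_context()
print(ctx.check_hostname, ctx.verify_mode)  # secure defaults: both checks on
```

Note that hostname verification must be disabled before certificate verification can be turned off; as the documentation above warns, neither should be disabled in production.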
Batching Requirements:
Batch Max Bytes (Increment as needed): The maximum size of a batch that will be processed by a sink. This is based on the uncompressed size of the batched events, before they are serialized / compressed.
Batch Max Events (Increment as needed): The maximum number of events in a batch before it is flushed.
Batch Timeout Seconds (Increment as needed): The maximum age of a batch before it is flushed. Default: 1
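The three batching triggers can be pictured with a small sketch (illustrative only; the class name and defaults are assumptions based on the fields above, not the product's implementation):

```python
import time

class Batcher:
    """Flush when uncompressed size, event count, or batch age hits a limit."""
    def __init__(self, max_bytes=2_097_152, max_events=500, timeout_secs=1.0):
        self.max_bytes, self.max_events = max_bytes, max_events
        self.timeout_secs = timeout_secs
        self.events, self.size, self.started = [], 0, None

    def add(self, event: bytes):
        """Add an event; return the flushed batch if any limit was reached."""
        if self.started is None:
            self.started = time.monotonic()
        self.events.append(event)
        self.size += len(event)  # uncompressed size, before serialization
        if (self.size >= self.max_bytes                              # Batch Max Bytes
                or len(self.events) >= self.max_events               # Batch Max Events
                or time.monotonic() - self.started >= self.timeout_secs):  # timeout
            return self.flush()
        return None

    def flush(self):
        batch, self.events, self.size, self.started = self.events, [], 0, None
        return batch

b = Batcher(max_events=2)
assert b.add(b"span-1") is None  # below all limits, buffered
print(b.add(b"span-2"))          # hits Batch Max Events, batch is flushed
```

Whichever limit is reached first wins, which is why a low Batch Timeout Seconds keeps delivery timely even under light traffic.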
Buffering Configuration (Optional):
Buffer Type: Specifies the buffering mechanism for event delivery.
Options:
Memory: High-performance, in-memory buffering.
Max Events: The maximum number of events allowed in the buffer. Default: 500.
When Full: Event handling behavior when the buffer is full. Default: Block.
- Block: Wait for free space in the buffer. This applies backpressure up the topology, signalling that sources should slow the acceptance/consumption of events. No data is lost, but data piles up at the edge.
- Drop Newest: Drop the event instead of waiting for free space in the buffer. The event is intentionally dropped. This mode is typically used when performance is the highest priority and it is preferable to temporarily lose events rather than slow the acceptance/consumption of events.
Disk: Lower-performance, less costly, on-disk buffering.
Max Bytes Size: The maximum number of bytes allowed in the buffer. Must be at least 268435488.
When Full: Event handling behavior when the buffer is full, with the same Block and Drop Newest options as above. Default: Block.
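A minimal sketch of the two when-full behaviors (the class name and return values are illustrative, not the product's API):

```python
from collections import deque

class EventBuffer:
    """Bounded buffer: 'block' signals backpressure; 'drop_newest' discards."""
    def __init__(self, max_events: int = 500, when_full: str = "block"):
        self.queue = deque()
        self.max_events, self.when_full = max_events, when_full

    def push(self, event) -> str:
        if len(self.queue) < self.max_events:
            self.queue.append(event)
            return "accepted"
        if self.when_full == "drop_newest":
            return "dropped"   # event intentionally lost, throughput preserved
        return "blocked"       # caller should slow down and retry (backpressure)

buf = EventBuffer(max_events=2, when_full="drop_newest")
print([buf.push(e) for e in ("a", "b", "c")])  # ['accepted', 'accepted', 'dropped']
```

With "block", no events are lost but the source side slows down; with "drop_newest", throughput is preserved at the cost of the overflowing events.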
Advanced Settings (Optional):
Endpoint (Optional): The endpoint to send observability data to. The endpoint must contain an HTTP scheme, and may specify a hostname or IP address and port. If set, overrides the site option.
Example: http://example.com:12345
Compression: Enable compression to reduce data transfer size. Default: No Compression
Options:
No compression: Data is stored or transmitted in its original uncompressed form.
Gzip compression: DEFLATE-based compression with headers, suitable for file storage.
Zlib compression: Lightweight DEFLATE wrapper, commonly used in programming libraries.
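To illustrate the trade-off, this sketch compresses a sample JSON payload with both codecs using Python's standard library (the payload contents are made up):

```python
import gzip
import json
import zlib

# A repetitive JSON payload, standing in for a batch of serialized spans.
payload = json.dumps([{"service": "webapp", "name": "http.request"}] * 100).encode()

raw = payload                      # No compression: sent as-is
gzipped = gzip.compress(payload)   # Gzip: DEFLATE with gzip headers
zlibbed = zlib.compress(payload)   # Zlib: lightweight DEFLATE wrapper

print(len(raw), len(gzipped), len(zlibbed))  # compressed forms are far smaller
```

Trace batches tend to be highly repetitive, so either codec typically cuts transfer size substantially; Gzip adds a small header overhead relative to Zlib.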
Enable Proxy (False): Defines whether to use a proxy to connect to Datadog. If set to true, the proxy settings must be configured. If enabled:
Proxy HTTP Endpoint: Specify the HTTP proxy endpoint. Example: http://proxy.example.com:8080
Proxy HTTPS Endpoint: Specify the HTTPS proxy endpoint. Example: https://proxy.example.com:8080
Proxy Bypass List (Add as needed): Hosts to avoid connecting through the proxy. Example: *.internal.example.com
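How a bypass list typically interacts with the proxy endpoints can be sketched as follows (the endpoints, patterns, and matching rules here are illustrative; the product's exact matching behavior may differ):

```python
from fnmatch import fnmatch
from urllib.parse import urlparse

# Illustrative proxy settings; real values come from your configuration.
PROXIES = {"http": "http://proxy.example.com:8080",
           "https": "https://proxy.example.com:8080"}
BYPASS = ["*.internal.example.com", "localhost"]

def proxy_for(url: str):
    """Return the proxy endpoint for a URL, or None if the host is bypassed."""
    parts = urlparse(url)
    host = parts.hostname or ""
    if any(fnmatch(host, pattern) for pattern in BYPASS):
        return None  # connect directly, skipping the proxy
    return PROXIES.get(parts.scheme)

print(proxy_for("https://trace.agent.datadoghq.com"))    # routed via proxy
print(proxy_for("https://api.internal.example.com/v1"))  # None (bypassed)
```

Requests to Datadog's intake endpoint flow through the configured proxy, while internal hosts matching the bypass patterns connect directly.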
Save and Test Configuration:
Save the configuration settings in Observo AI.
Send sample trace data and verify that it appears in the Datadog APM > Traces Explorer under the specified service and tags.
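One way to generate a sample trace to look for in the Traces Explorer is to send a span through a locally running Datadog Agent's trace API. This is a sketch under assumptions: it assumes an Agent listening on localhost:8126 and uses the v0.3 JSON traces endpoint; Observo AI itself delivers traces through the destination configured above.

```python
import json
import random
import time
import urllib.request

def make_span(service: str, name: str) -> dict:
    """Build a minimal span in the shape the Agent's trace API accepts."""
    return {
        "trace_id": random.getrandbits(63),
        "span_id": random.getrandbits(63),
        "name": name,
        "resource": name,
        "service": service,
        "start": time.time_ns(),   # start time in nanoseconds
        "duration": 5_000_000,     # 5 ms, in nanoseconds
    }

def send_trace(span: dict, agent: str = "http://localhost:8126") -> int:
    """PUT a single-span trace to the Agent; returns the HTTP status code."""
    body = json.dumps([[span]]).encode()  # a list of traces, each a list of spans
    req = urllib.request.Request(f"{agent}/v0.3/traces", data=body,
                                 headers={"Content-Type": "application/json"},
                                 method="PUT")
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status

span = make_span("webapp", "http.request")
# send_trace(span)  # uncomment on a host with a local Datadog Agent running
```

After sending, look for the service name and tags in APM > Traces to confirm end-to-end delivery.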
Example Scenarios
TechTrend Innovations, a fictitious technology company specializing in cloud-based SaaS applications, uses Observo AI to manage telemetry data from its microservices architecture. To enhance application performance monitoring and troubleshoot latency issues, TechTrend integrates Observo AI with Datadog to send trace data for distributed tracing. This integration enables visualization of request flows across services, identification of bottlenecks, and optimization of application performance, ensuring a seamless user experience.
Standard Datadog Traces Destination Setup
Here is a standard Datadog Traces destination configuration example. Only the required sections and their associated field updates are shown below:
General Settings
Name: techtrend-datadog-traces - Unique identifier for the Datadog Traces destination.
Description: Send trace data for microservices monitoring - Optional description of the destination's purpose.
Site: us3.datadoghq.com - Datadog site corresponding to the US3 region for trace ingestion.
Default Api Key: ef8d5de700e7989468166c40fc8a0ccd - Default Datadog API key for authenticating HTTP requests (securely stored).
Request Configuration
Request Concurrency: Adaptive concurrency - Adjusts parallelism based on system load for optimal performance.
Request Rate Limit Duration Secs: 1 - 1-second time window for rate limiting requests.
Request Rate Limit Num: 200 - Maximum of 200 requests allowed within the 1-second window.
Request Retry Attempts: 5 - Retries failed requests up to 5 times to ensure reliable delivery.
Request Retry Initial Backoff Secs: 1 - Waits 1 second before the first retry, using Fibonacci backoff for subsequent retries.
Request Retry Max Duration Secs: 1800 - Maximum 30-minute wait between retries to prevent excessive delays.
Request Timeout Secs: 60 - 60-second timeout for HTTP requests to avoid orphaned requests.
TLS Configuration
TLS Enabled: True - Requires TLS for secure outgoing connections to Datadog.
TLS CA: -----BEGIN CERTIFICATE-----... - Inline PEM-formatted CA certificate for verifying Datadog's server.
TLS CRT: -----BEGIN CERTIFICATE-----... - Certificate in PEM format for secure connections.
TLS Key: -----BEGIN PRIVATE KEY-----... - Private key in PEM format for secure connections (securely stored).
TLS Key Pass: SecurePass2025 - Passphrase to unlock the encrypted key file.
TLS Verify Hostname: True - Verifies that the hostname in Datadog's certificate matches us3.datadoghq.com.
TLS Verify Certificate: True - Ensures certificates are valid and issued by a trusted authority.
Batching Configuration
Batch Max Bytes: 2097152 - Maximum batch size of 2 MB (uncompressed) to balance throughput and efficiency.
Batch Max Events: 500 - Maximum of 500 events per batch before flushing to Datadog.
Batch Timeout Seconds: 1 - Flushes batches after 1 second to ensure timely delivery.
Buffering Configuration
Buffer Type: Memory - Uses high-performance in-memory buffering for trace delivery.
Max Events: 1000 - Limits the buffer to 1000 events to manage memory usage.
When Full: Block - Applies backpressure when the buffer is full, preventing data loss.
Advanced Settings
Endpoint: https://trace.agent.datadoghq.com - Datadog trace intake endpoint for sending observability data.
Compression: Gzip compression - Uses Gzip to reduce data transfer size, optimizing bandwidth usage.
Enable Proxy: True - Enables proxy usage for secure and controlled network communication.
Proxy HTTP Endpoint: http://proxy.techtrend.com:8080 - HTTP proxy endpoint for routing requests through the corporate network.
Proxy HTTPS Endpoint: https://proxy.techtrend.com:8080 - HTTPS proxy endpoint for secure request routing.
Proxy Bypass List: *.internal.techtrend.com - Bypasses the proxy for internal TechTrend domains to optimize performance.
Troubleshooting
If issues arise with the Datadog Traces destination, use the following steps to diagnose and resolve them:
Verify Configuration Settings:
Ensure all fields, such as Datadog Site, API Key, and Service, are correctly entered and match the Datadog account configuration.
Confirm that the Datadog Site matches your account's region (such as us1 or eu1).
Check Authentication:
Verify that the API key is valid and has not been revoked or expired.
Regenerate the API key in Datadog if necessary and update the Observo AI configuration.
Monitor Logs and Traces:
Check Observo AI logs for errors or warnings related to trace data transmission to Datadog.
In the Datadog platform, navigate to APM > Traces to confirm that traces are arriving with the expected service and tags.
Validate Network Connectivity:
Ensure that the Observo AI instance can reach the Datadog trace intake endpoint (such as trace.agent.datadoghq.com).
Check for firewall rules or network policies blocking HTTPS traffic on port 443.
Test Data Flow:
Send sample trace data through Observo AI and monitor its arrival in Datadog’s APM Traces Explorer.
Use the Analytics tab in the targeted Observo AI pipeline to monitor data volume and ensure expected throughput.
Check Quotas and Limits:
Verify that the Datadog account is not hitting trace ingestion limits or quotas (Datadog APM Quotas and Limits).
Adjust batching settings (such as Batch Max Bytes or Batch Max Events) if backpressure or slow data transfer occurs.
Traces not appearing in Datadog: Incorrect API key or Datadog site. Verify the API key and site in the configuration.
Authentication errors: Expired or invalid API key. Regenerate the API key and update the configuration.
Connection failures: Network or firewall issues. Check network policies and HTTPS connectivity.
Slow trace transfer: Backpressure or rate limiting. Adjust batching settings or check Datadog quotas.
Resources
For additional guidance and detailed information, refer to the following resources: