Datadog Logs
The Observo AI Datadog Logs destination enables seamless transmission of log data to Datadog for centralized observability, monitoring, and analytics, supporting customizable timestamp formats, Gzip compression, and secure API key authentication.
Purpose
The Observo AI Datadog Logs destination enables users to send telemetry data (Logs) to Datadog for centralized observability, monitoring, and analytics. This destination integrates seamlessly with Datadog’s platform, allowing organizations to leverage Datadog’s powerful visualization and alerting capabilities for their telemetry data.
Prerequisites
Before configuring the Datadog Logs destination in Observo AI, ensure the following requirements are met to facilitate seamless data ingestion:
Datadog Account:
Create a Datadog account if one does not already exist. This account serves as the hub for your telemetry data.
Ensure the account is active and configured to accept log data.
Note the Datadog site (such as us1, us3, or eu1) that corresponds to your account’s region.
API Key:
Generate a Datadog API key in the Datadog platform to authenticate data ingestion (Generate an API Key in Datadog).
Navigate to Organization Settings > API Keys in the Datadog UI, create a new key, and securely store its value.
Ensure the API key has permissions to send logs to Datadog.
Network Access:
Verify that the Observo AI instance can communicate with the Datadog endpoint (such as http-intake.logs.datadoghq.com for US1).
Check for firewall rules or network policies that may block outbound HTTPS traffic to Datadog’s log intake endpoint.
Datadog Tags (Optional):
Prepare any tags you want to apply to logs for filtering and organization in Datadog (such as env:production or service:webapp).
Tags can be configured in Observo AI to align with Datadog’s tagging conventions.
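Datadog expects tags as a comma-separated string of key:value pairs. The helper below is a minimal sketch of that convention; the function name and dict input are illustrative and not part of Observo AI.

```python
# Sketch: build a Datadog-style comma-separated key:value tag string from a
# dict of tags. Illustrative only; Observo AI handles this internally.

def format_ddtags(tags: dict[str, str]) -> str:
    """Join tags into Datadog's comma-separated key:value format."""
    return ",".join(f"{key}:{value}" for key, value in sorted(tags.items()))

print(format_ddtags({"env": "production", "service": "webapp"}))
# env:production,service:webapp
```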
Datadog Account: Hub for telemetry data. Must be active and configured for logs.
API Key: Authenticates data ingestion. Securely store the API key.
Network Access: Enables communication with Datadog. Ensure HTTPS connectivity to the Datadog endpoint.
Datadog Tags: Organizes logs in Datadog. Optional, but recommended for filtering.
Integration
The Integration section outlines the configurations for the Datadog Logs destination. To configure Datadog Logs as a destination in Observo AI, follow these steps:
Log in to Observo AI:
Navigate to the Destinations tab.
Click the Add Destinations button and select Create New.
Choose Datadog Logs from the list of available destinations to begin configuration.
General Settings:
Name: Add a unique identifier such as datadog-logs-1.
Description (Optional): Provide a description for the destination.
Site (Optional): The Datadog site to send observability data to. Default: datadoghq.com
Examples: us3.datadoghq.com, datadoghq.eu
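The Site value determines which regional intake hostname receives your logs. The sketch below assumes the common http-intake.logs.&lt;site&gt; pattern for Datadog log ingestion; Observo AI resolves this internally, so the helper is purely illustrative.

```python
# Sketch: mapping the Site setting to a log intake endpoint, assuming the
# "http-intake.logs.<site>" hostname pattern. Illustrative only.

def intake_endpoint(site: str = "datadoghq.com") -> str:
    return f"https://http-intake.logs.{site}"

print(intake_endpoint())               # https://http-intake.logs.datadoghq.com
print(intake_endpoint("datadoghq.eu")) # https://http-intake.logs.datadoghq.eu
```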
Encoding:
Encoding Timestamp Format: Specify the format for timestamp fields. Default: Empty
Options:
RFC 3339 timestamp: Represent the timestamp as an RFC 3339 timestamp.
Unix timestamp: Represent the timestamp as a Unix timestamp.
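To make the two options concrete, here is the same instant rendered in each format using Python's standard library; the sample date is arbitrary.

```python
# Sketch: one instant rendered as an RFC 3339 timestamp and as a Unix timestamp.
from datetime import datetime, timezone

event_time = datetime(2024, 5, 1, 12, 30, 0, tzinfo=timezone.utc)

rfc3339 = event_time.isoformat().replace("+00:00", "Z")  # RFC 3339 form
unix = int(event_time.timestamp())                        # Unix form (seconds)

print(rfc3339)  # 2024-05-01T12:30:00Z
print(unix)     # 1714566600
```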
Request Configuration (Optional):
Request Concurrency: Configuration for outbound request concurrency. Default: Adaptive concurrency
Options: A fixed concurrency of 1; Adaptive concurrency
Request Rate Limit Duration Secs: The time window used for the rate_limit_num option. Default: 1
Request Rate Limit Num: The maximum number of requests allowed within the rate_limit_duration_secs time window.
Request Retry Attempts: The maximum number of retries to make for failed requests. The default represents an infinite number of retries.
Request Retry Initial Backoff Secs: The amount of time, in seconds, to wait before attempting the first retry for a failed request. After the first retry has failed, the Fibonacci sequence is used to select future backoffs. Default: 1
Request Retry Max Duration Secs: The maximum amount of time to wait between retries. Default: 3600
Request Timeout Secs: The time a request waits before being aborted. It is recommended that this value is not lowered below the service’s internal timeout, as this could create orphaned requests, and duplicate data downstream. Default: 60
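The retry settings above combine into a predictable wait schedule: the first retry waits Request Retry Initial Backoff Secs, later retries follow the Fibonacci sequence, and every wait is capped at Request Retry Max Duration Secs. The function below models that documented behavior; it is a sketch, not Observo AI's actual implementation.

```python
# Sketch of the documented retry schedule: Fibonacci backoff starting from the
# initial backoff, capped at the maximum retry duration. Illustrative only.

def backoff_schedule(initial=1, max_duration=3600, attempts=10):
    waits, a, b = [], initial, initial
    for _ in range(attempts):
        waits.append(min(a, max_duration))  # cap each wait at max_duration
        a, b = b, a + b                     # advance the Fibonacci sequence
    return waits

print(backoff_schedule(attempts=8))  # [1, 1, 2, 3, 5, 8, 13, 21]
```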
TLS Configurations (Optional):
TLS Enabled (False): Whether or not to require TLS for incoming or outgoing connections. When enabled and used for incoming connections, an identity certificate is also required. See tls.crt_file for more information.
TLS CA File: Absolute path to a CA certificate file, or the CA certificate provided as an inline string in PEM format.
Example: /etc/certs/ca.crt
TLS Crt File: Absolute path to a certificate file, or the certificate provided as an inline string in PEM format.
Example: /etc/certs/tls.crt
TLS Key File: Absolute path to a private key file used to identify this server. The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Example: /etc/certs/tls.key
TLS Key Pass: Passphrase used to unlock the encrypted key file. This has no effect unless key_file is set.
Examples: ${KEY_PASS_ENV_VAR}, PassWord1
TLS Verify Hostname (False): Enables hostname verification. Hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension. Only relevant for outgoing connections. NOT recommended to set this to false unless you understand the risks.
TLS Verify Certificate (False): Enables certificate verification. Certificates must be valid in terms of not being expired, and being issued by a trusted issuer. This verification operates in a hierarchical manner, checking validity of the certificate, the issuer of that certificate and so on until reaching a root certificate. Relevant for both incoming and outgoing connections. Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Batching Configuration (Optional):
Batch Max Bytes: The maximum size of a batch that will be processed by a sink. This is based on the uncompressed size of the batched events, before they are serialized / compressed.
Batch Max Events: The maximum size of a batch before it is flushed.
Batch Timeout Secs: The maximum age of a batch before it is flushed. Default: 1
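A batch is flushed when any one of the three limits above is reached first: uncompressed size, event count, or age. The class below is a minimal sketch of that flush logic under the same field names; it is illustrative, not Observo AI's actual batcher.

```python
# Sketch of the batch-flush conditions: flush when the batch reaches
# Batch Max Bytes (uncompressed), Batch Max Events, or Batch Timeout Secs,
# whichever comes first. Defaults here are illustrative.
import time

class Batcher:
    def __init__(self, max_bytes=1_000_000, max_events=100, timeout_secs=1.0):
        self.max_bytes, self.max_events, self.timeout_secs = max_bytes, max_events, timeout_secs
        self.events, self.size, self.started = [], 0, time.monotonic()

    def add(self, event: bytes) -> bool:
        """Append an event; return True if the batch should now be flushed."""
        self.events.append(event)
        self.size += len(event)
        return (self.size >= self.max_bytes
                or len(self.events) >= self.max_events
                or time.monotonic() - self.started >= self.timeout_secs)

b = Batcher(max_events=3)
print([b.add(b"log line") for _ in range(3)])  # [False, False, True]
```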
Acknowledgement (False):
Acknowledgements Enabled (False): Whether or not end-to-end acknowledgements are enabled. When enabled, any connected source that supports end-to-end acknowledgements will wait for events to be acknowledged by this destination before acknowledging them at the source.
Buffering Configuration (Optional):
Buffer Type: Specifies the buffering mechanism for event delivery.
Options:
Memory: High-performance, in-memory buffering.
Max Events: The maximum number of events allowed in the buffer. Default: 500
When Full: Event handling behavior when the buffer is full. Default: Block
- Block: Wait for free space in the buffer. This applies backpressure up the topology, signalling that sources should slow the acceptance/consumption of events. No data is lost, but data piles up at the edge.
- Drop Newest: Drop the event instead of waiting for free space in the buffer. The event is intentionally dropped. This mode is typically used when performance is the highest priority and it is preferable to temporarily lose events rather than slow the acceptance/consumption of events.
Disk: Lower-performance, less costly, on-disk buffering.
Max Bytes Size: The maximum buffer size in bytes. Must be at least 268435488.
When Full: Event handling behavior when the buffer is full. Default: Block, with the same Block and Drop Newest behaviors described above.
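The two When Full behaviors differ only in what happens to an event that arrives at a full buffer. The sketch below models Block as reporting backpressure rather than actually suspending the caller; it is illustrative, not Observo AI's buffer implementation.

```python
# Sketch of the "When Full" behaviors for a bounded buffer with a Max Events
# capacity. Block is modeled as a backpressure signal; illustrative only.
from collections import deque

def offer(buffer: deque, event, max_events: int, when_full: str = "block"):
    """Return what happened to the event: 'buffered', 'backpressure', or 'dropped'."""
    if len(buffer) < max_events:
        buffer.append(event)
        return "buffered"
    if when_full == "drop_newest":
        return "dropped"       # event intentionally lost, no slowdown upstream
    return "backpressure"      # signal sources upstream to slow down

buf = deque()
print([offer(buf, i, max_events=2) for i in range(3)])
# ['buffered', 'buffered', 'backpressure']
```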
Advanced Settings (Optional):
Endpoint (Optional): The endpoint to send observability data to. The endpoint must contain an HTTP scheme, and may specify a hostname or IP address and port. If set, overrides the site option.
Examples: http://127.0.0.1:8080, http://example.com:12345
Compression (Optional): Compression configuration. All compression algorithms use the default compression level unless otherwise specified. Default: No compression
Options: Gzip compression, Zlib compression, No compression
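Enabling Gzip compression shrinks repetitive log payloads considerably before transmission. The snippet below shows the effect on a sample JSON payload using Python's standard library; the payload and sizes are illustrative only.

```python
# Sketch: Gzip-compressing a log payload before transmission, as the
# Gzip compression option does. The sample payload is illustrative.
import gzip, json

payload = json.dumps([{"message": "hello"} for _ in range(100)]).encode()
compressed = gzip.compress(payload)

print(len(compressed) < len(payload))          # True: repetitive JSON compresses well
print(gzip.decompress(compressed) == payload)  # True: round-trips losslessly
```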
Save and Test Configuration:
Save the configuration settings in Observo AI.
Send sample data and verify that it appears in the Datadog Logs Explorer under the specified source and service.
Example Scenarios
TechTrend Innovations, a fictitious technology company, aims to integrate Observo with Datadog Logs to centralize telemetry data for monitoring and analytics. They have an active Datadog account in the us1 region, have generated an API key with log ingestion permissions, and want to tag their logs for easy filtering. The configuration will ensure seamless data transmission to Datadog’s log management platform.
Standard Datadog Logs Destination Setup
Here is a standard Datadog Logs destination configuration example. Only the required sections and their associated field updates are displayed below:
General Settings
Name: datadog-logs-techtrend-1. Unique identifier for the destination.
Description: Centralizes TechTrend Innovations' telemetry data in Datadog Logs for monitoring and analytics. Provides context for the destination's purpose.
Site: datadoghq.com. The Datadog site corresponding to the account’s region (US1).
Default Api Key: a1b2c3d4e5f67890abcdef1234567890. Datadog API key for authenticating HTTP requests to send logs.
Save and Test Configuration
Save settings, send sample data, and verify ingestion in the Datadog Logs Explorer. This saves the configuration, tests data flow, and confirms logs appear in Datadog with the expected tags.
Notes:
Ensure the API key a1b2c3d4e5f67890abcdef1234567890 is valid and has permissions to send logs to Datadog.
Verify HTTPS connectivity (port 443) to the Datadog log intake endpoint (http-intake.logs.datadoghq.com).
Monitor Observo’s logs and Datadog’s Logs Explorer to confirm data arrival and troubleshoot errors.
Optionally, add tags like env:production or service:webapp in Observo to organize logs in Datadog.
This configuration enables TechTrend Innovations to transmit telemetry data from Observo to Datadog Logs for centralized observability and analytics.
Troubleshooting
If issues arise with the Datadog Logs destination, use the following steps to diagnose and resolve them:
Verify Configuration Settings:
Ensure all fields, such as Datadog Site, API Key, Source, and Service, are correctly entered and match the Datadog account configuration.
Confirm that the Datadog Site matches your account’s region (such as us1 or eu1).
Check Authentication:
Verify that the API key is valid and has not been revoked or expired.
Regenerate the API key in Datadog if necessary and update the Observo AI configuration.
Monitor Logs:
Check Observo AI logs for errors or warnings related to data transmission to Datadog.
In the Datadog platform, navigate to Logs > Explorer to confirm that logs are arriving with the expected source and tags.
Validate Network Connectivity:
Ensure that the Observo AI instance can reach the Datadog log intake endpoint such as http-intake.logs.datadoghq.com.
Check for firewall rules or network policies blocking HTTPS traffic on port 443.
Test Data Flow:
Send sample data through Observo AI and monitor its arrival in Datadog’s Logs Explorer.
Use the Analytics tab in the targeted Observo AI pipeline to monitor data volume and ensure expected throughput.
Check Quotas and Limits:
Verify that the Datadog account is not hitting log ingestion limits or quotas (Log management quotas).
Adjust batching settings (such as Batch Max Bytes or Batch Max Events) if backpressure or slow data transfer occurs.
Logs not appearing in Datadog: Incorrect API key or Datadog Site. Verify the API key and site in the configuration.
Authentication errors: Expired or invalid API key. Regenerate the API key and update the configuration.
Connection failures: Network or firewall issues. Check network policies and HTTPS connectivity.
Slow data transfer: Backpressure or rate limiting. Adjust batching settings or check Datadog quotas.