HTTP
The HTTP destination allows users to forward observability data (logs, metrics, traces) to an external HTTP server from within the pipeline. This destination provides flexibility in sending data to custom endpoints for storage or analysis, while supporting a variety of authentication and framing options.
Purpose
The purpose of the Observo AI HTTP Destination is to enable the routing and delivery of telemetry data (such as logs, metrics, or traces) from an Observo AI Site to an HTTP/HTTPS endpoint for further processing, storage, or analysis. It allows organizations to integrate Observo AI’s data observability and routing capabilities with external systems that accept data via HTTP POST or PUT requests, typically in JSON format. Key use cases include:
Integration with External Services: Sending telemetry data to web-based services, such as SIEM platforms (e.g., Panther), monitoring tools (e.g., Datadog), or custom APIs for real-time processing or alerting.
Data Forwarding: Transmitting structured or enriched data to cloud-based or on-premises applications that expose HTTP endpoints, enabling centralized data collection or analysis.
Data Transformation and Enrichment: Utilizing Observo AI’s pipeline capabilities to filter, mask, enrich, or reformat data before forwarding it to the HTTP endpoint, ensuring compatibility with the target system’s requirements.
Flexible Delivery: Supporting scenarios where data needs to be sent to systems without native Kafka or other protocol support, using HTTP as a widely compatible transport mechanism.
The HTTP destination supports Bearer token and Basic authentication, TLS for secure communication, and batching configurations that optimize data transfer, making it suitable for secure, scalable, and efficient data delivery to HTTP-based systems.
Prerequisites
Before configuring an HTTP destination in Observo AI, ensure the following requirements are met:
Observo AI Account: You must have an active Observo AI account with administrative access to the Observo console.
Target HTTP Endpoint: A valid HTTP/HTTPS URL where data will be sent. The endpoint must be accessible and configured to accept POST requests with JSON payloads.
Authentication Details: If the HTTP endpoint requires authentication, prepare the necessary credentials such as Bearer token, API key, or basic authentication credentials.
Network Access: Ensure your Observo Site (data plane) has network connectivity to the target HTTP endpoint, with no firewall rules blocking outbound traffic.
Data Schema: Understand the expected JSON schema or data format required by the destination endpoint to avoid integration issues.
Observo Site Deployment: A functional Observo Site must be deployed in your environment (on-premises or cloud) to handle data routing. Refer to the Observo AI documentation for deployment instructions.
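As a quick pre-flight check, the endpoint can be probed with a small scripted POST before the destination is configured. The sketch below uses only the Python standard library; the URL and token are placeholders, not real values:

```python
import json
import urllib.request

def build_probe_request(url, token=""):
    """Build a one-event POST for smoke-testing the endpoint.
    The URL and token are placeholders, not real credentials."""
    payload = json.dumps({"message": "observo connectivity check"}).encode("utf-8")
    headers = {"Content-Type": "application/json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return urllib.request.Request(url, data=payload, headers=headers, method="POST")

# Send it with: urllib.request.urlopen(build_probe_request(...), timeout=10)
req = build_probe_request("https://example.com/endpoint", token="placeholder-token")
print(req.get_method(), req.full_url)
```

A 2xx response confirms the endpoint accepts JSON POSTs; a 401/403 points at the authentication settings, and a timeout points at network access or firewall rules.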
Observo AI Platform
The Observo AI Site must be installed and available.
Verify support for JSON/text payloads
Network
Connectivity to HTTP/HTTPS ports
Default ports: TCP 80 (HTTP), TCP 443 (HTTPS); check firewall/proxy
Authentication
TLS certificate setup or Basic Auth if enabled
Provide CA, host cert, and key files for HTTPS; configure Basic Auth if needed
Load Balancer
Optional for high-volume HTTP data
Use HAProxy, nginx, or AWS ELB; enable Proxy Protocol if needed
Integration
To configure an HTTP destination in Observo AI for sending telemetry data, follow these steps:
Log in to Observo AI:
Navigate to the Destinations tab.
Click the Add Destinations button and select Create New.
Choose "HTTP" from the list of available destinations.
General Settings:
Name: Add a unique identifier, such as http-dest-1.
Description (Optional): Provide a description for the destination.
URL / URI: Full URL path. Example:
http://example.com:8080/endpoint
Default: datadoghq.com
HTTP Method: Enter POST or PUT. Default: POST
Authentication (Optional):
Auth Strategy: Select Basic or Bearer.
Auth User: Username used to authenticate to the endpoint (Basic strategy)
Auth Password: Password used to authenticate to the endpoint (Basic strategy)
Auth Token: Token used to authenticate to the endpoint (Bearer strategy)
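For reference, the two strategies produce different Authorization headers. A minimal sketch of how such headers are constructed (the credentials shown are placeholders):

```python
import base64

def auth_header(strategy, user="", password="", token=""):
    """Return the Authorization header for the chosen strategy.
    All values here are illustrative placeholders."""
    if strategy == "basic":
        # Basic: base64("user:password"), per RFC 7617
        creds = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
        return {"Authorization": f"Basic {creds}"}
    if strategy == "bearer":
        # Bearer: the token is sent verbatim, per RFC 6750
        return {"Authorization": f"Bearer {token}"}
    return {}

print(auth_header("bearer", token="placeholder-token"))
```

Note that Basic credentials are only encoded, not encrypted, so they should always travel over HTTPS.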
Encoding:
Encoding Codec: The codec to use for encoding events. Default: JSON Encoding
Options and sub-options:
JSON Encoding
Pretty JSON (False): Format JSON with indentation and line breaks for better readability.
Format of event timestamps when encoding (Select): - RFC3339 format (default) - UNIX format
logfmt Encoding
Format of event timestamps when encoding (Select): - RFC3339 format (default) - UNIX format
Apache Avro Encoding
Avro Schema: Specify the Apache Avro schema definition for serializing events. Example: { "type": "record", "name": "log", "fields": [{ "name": "message", "type": "string" }] }
Format of event timestamps when encoding (Select): - RFC3339 format (default) - UNIX format
Newline Delimited JSON Encoding
Format of event timestamps when encoding (Select): - RFC3339 format (default) - UNIX format
No encoding
Format of event timestamps when encoding (Select): - RFC3339 format (default) - UNIX format
Plain text encoding
Format of event timestamps when encoding (Select): - RFC3339 format (default) - UNIX format
Parquet
Include Raw Log (False): Capture the complete log message as an additional field (observo_record) apart from the given schema. Example: in addition to the Parquet schema, a field named "observo_record" will appear in the Parquet file.
Parquet Schema: Enter the Parquet schema for encoding. Example: message root { optional binary stream; optional binary time; optional group kubernetes { optional binary pod_name; optional binary pod_id; optional binary docker_id; optional binary container_hash; optional binary container_image; optional group labels { optional binary pod-template-hash; } } }
Format of event timestamps when encoding (Select): - RFC3339 format (default) - UNIX format
Common Event Format (CEF)
CEF Device Event Class ID: Provide a unique identifier for categorizing the type of event (maximum 1023 characters). Example: login-failure
CEF Device Product: Specify the product name that generated the event (maximum 63 characters). Example: Log Analyzer
CEF Device Vendor: Specify the vendor name that produced the event (maximum 63 characters). Example: Observo
CEF Device Version: Specify the version of the product that generated the event (maximum 31 characters). Example: 1.0.0
CEF Extensions (Add): Define custom key-value pairs for additional event data fields in CEF format.
CEF Name: Provide a human-readable description of the event (maximum 512 characters). Example: cef.name
CEF Severity: Indicate the importance of the event with a value from 0 (lowest) to 10 (highest). Example: 5
CEF Version (Select): Specify which version of the CEF specification to use for formatting. - CEF specification version 0.1 - CEF specification version 1.x
Format of event timestamps when encoding (Select): - RFC3339 format (default) - UNIX format
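To illustrate how these fields map onto the wire format, a CEF:0 line assembled from them looks like the sketch below. This is a simplified rendering for illustration, not the encoder Observo AI uses internally (extension-value escaping is omitted):

```python
def to_cef(vendor, product, version, event_class_id, name, severity, extensions=None):
    """Render one event as a CEF:0 line:
    CEF:0|Vendor|Product|Version|EventClassID|Name|Severity|key=value ..."""
    def esc(field):
        # Backslashes and pipes in header fields must be escaped
        return str(field).replace("\\", "\\\\").replace("|", "\\|")
    header = "|".join(esc(f) for f in
                      ("CEF:0", vendor, product, version, event_class_id, name, severity))
    ext = " ".join(f"{k}={v}" for k, v in (extensions or {}).items())
    return f"{header}|{ext}"

print(to_cef("Observo", "Log Analyzer", "1.0.0", "login-failure",
             "Failed login", 5, {"src": "10.0.0.1"}))
```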
CSV Format
CSV Fields (Add): Specify the field names to include as columns in the CSV output and their order. Examples: - timestamp - host - message
CSV Buffer Capacity (Optional): Set the internal buffer size (in bytes) used when writing CSV data. Example: 8192
CSV Delimiter (Optional): Set the character that separates fields in the CSV output. Example: ,
Enable Double Quote Escapes (True): When enabled, quotes in field data are escaped by doubling them. When disabled, an escape character is used instead.
CSV Escape Character (Optional): Set the character used to escape quotes when double-quote escapes are disabled. Example: \
CSV Quote Character (Optional): Set the character used for quoting fields in the CSV output. Example: "
CSV Quoting Style (Optional): Control when field values should be wrapped in quote characters. Options: - Always quote all fields - Quote only when necessary - Never use quotes - Quote all non-numeric fields
Format of event timestamps when encoding (Select): - RFC3339 format (default) - UNIX format
Protocol Buffers
Protobuf Message Type: Specify the fully qualified message type name for Protobuf serialization. Example: package.Message Protobuf Descriptor File: Specify the path to the compiled protobuf descriptor file (.desc). Example: /path/to/descriptor.desc Format of event timestamps when encoding (Select): - RFC3339 format (default) - UNIX format
Graylog Extended Log Format (GELF)
Format of event timestamps when encoding (Select): - RFC3339 format (default) - UNIX format
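To make the difference between the two JSON variants concrete, here is a small sketch encoding the same two events both ways:

```python
import json

events = [
    {"timestamp": "2024-01-01T00:00:00Z", "message": "service started"},
    {"timestamp": "2024-01-01T00:00:01Z", "message": "listening on :8080"},
]

# JSON Encoding: the whole batch is one JSON array in the request body
json_body = json.dumps(events)

# Newline Delimited JSON: one JSON object per line, a common bulk-ingest format
ndjson_body = "\n".join(json.dumps(e) for e in events)

print(ndjson_body)
```

Which variant to choose depends on the receiver: bulk APIs that stream-parse line by line usually expect NDJSON, while endpoints that parse the whole body at once expect a single JSON array or object.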
Request Configuration:
Request Headers: Custom HTTP request headers to include with each request. (Add key/value pairs as needed)
Request Concurrency: Configuration for outbound request concurrency. Default: Adaptive concurrency
Options:
A fixed concurrency of 1
Adaptive concurrency
Time window for rate limiting in seconds: Default: 1
Max requests allowed in time window: (Empty)
Max retries for failed requests: (Empty)
Min wait time before first retry in seconds: (Empty)
Max wait time between retries in seconds: Default: 3600
Max time in seconds after which a request is considered as failed: Default: 60
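The retry knobs above describe a capped backoff schedule. A toy sketch of how such a schedule could look (the doubling policy is an assumption for illustration; the actual retry policy is internal to the Observo Site):

```python
def retry_waits(min_wait=1.0, max_wait=3600.0, max_retries=5):
    """Illustrative wait schedule built from the knobs above: the first
    retry waits min_wait seconds, each subsequent retry doubles the wait,
    and every wait is capped at max_wait."""
    waits, wait = [], min_wait
    for _ in range(max_retries):
        waits.append(wait)
        wait = min(wait * 2, max_wait)
    return waits

print(retry_waits(min_wait=2, max_wait=30, max_retries=5))
```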
Batching Configuration:
Batch Max Bytes: Set the maximum batch size in bytes. Default: Empty.
Batch Max Events: Set the maximum number of events per batch. Default: Empty.
Batch Timeout Seconds: Set the maximum time to wait before sending a batch. Default: 1
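The three batching limits interact: a batch is flushed as soon as any one of them is reached. A toy model of that interaction, not the Site's actual implementation:

```python
import json
import time

class Batcher:
    """Toy batcher mirroring the three limits above: a batch is flushed
    when it reaches max_bytes or max_events, or when timeout_secs elapse."""
    def __init__(self, max_bytes=1_000_000, max_events=100, timeout_secs=1.0):
        self.max_bytes, self.max_events, self.timeout_secs = max_bytes, max_events, timeout_secs
        self.events, self.size, self.started = [], 0, time.monotonic()

    def add(self, event):
        self.events.append(event)
        self.size += len(json.dumps(event).encode("utf-8"))
        if (self.size >= self.max_bytes
                or len(self.events) >= self.max_events
                or time.monotonic() - self.started >= self.timeout_secs):
            return self.flush()  # limit reached: hand the batch to the sender
        return None              # still accumulating

    def flush(self):
        batch, self.events, self.size = self.events, [], 0
        self.started = time.monotonic()
        return batch

b = Batcher(max_events=2)
b.add({"m": "one"})      # below every limit, buffered
print(b.add({"m": "two"}))  # hits max_events, batch is flushed
```

Larger batches mean fewer requests and better throughput; a shorter timeout bounds how long an event can sit in the buffer before delivery.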
Acknowledgement:
Enable Acknowledgements (False)
Framing:
Framing Method: The framing method. Default: Newline Delimited
Options and sub-options:
Raw Event Data (not delimited)
Single Character Delimited
Delimiter: The ASCII (7-bit) character that delimits byte sequences. Example: ,
Prefixed with Byte Length
Newline Delimited
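The framing method determines how event boundaries are marked in the request body. A sketch of three of the methods (the 4-byte big-endian length prefix is an assumption for illustration; the exact prefix format depends on the receiver):

```python
import struct

payloads = [b'{"m":"one"}', b'{"m":"two"}']

# Newline Delimited (default): events separated by \n
newline_framed = b"\n".join(payloads)

# Prefixed with Byte Length: each event preceded by its length
# (here a 32-bit big-endian integer, a common but not universal choice)
length_prefixed = b"".join(struct.pack(">I", len(p)) + p for p in payloads)

# Single Character Delimited: a single ASCII delimiter, e.g. ','
char_framed = b",".join(payloads)

print(newline_framed)
```

Length-prefixed framing is the safest choice when event payloads may themselves contain the delimiter character.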
TLS Configuration (Optional):
TLS Verify Certificate (False): Enables certificate verification. Certificates must be valid: not expired and issued by a trusted issuer. Verification operates hierarchically, checking the certificate, then its issuer, and so on up to a root certificate. Relevant for both incoming and outgoing connections. Do NOT disable this unless you understand the risks of not verifying certificate validity.
TLS Verify Hostname (False): Enables hostname verification. The hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension. Only relevant for outgoing connections. Disabling this is not recommended unless you understand the risks.
Buffering Configuration (Optional):
Buffer Type: Specifies the buffering mechanism for event delivery. Default: Empty
Options:
Memory
High-performance, in-memory buffering.
Max Events: The maximum number of events allowed in the buffer. Default: 500
When Full: Event handling behavior when the buffer is full. Default: Block
- Block: Wait for free space in the buffer. This applies backpressure up the topology, signalling that sources should slow the acceptance/consumption of events. No data is lost, but data piles up at the edge.
- Drop Newest: Drop the event instead of waiting for free space in the buffer. The event is intentionally dropped. Typically used when performance is the highest priority and temporarily losing events is preferable to slowing the acceptance/consumption of events.
Disk
Lower-performance, less costly, on-disk buffering.
Max Bytes Size: The maximum buffer size in bytes. Must be at least 268435488.
When Full: Event handling behavior when the buffer is full. Default: Block
- Block: Wait for free space in the buffer. This applies backpressure up the topology, signalling that sources should slow the acceptance/consumption of events. No data is lost, but data piles up at the edge.
- Drop Newest: Drop the event instead of waiting for free space in the buffer. The event is intentionally dropped. Typically used when performance is the highest priority and temporarily losing events is preferable to slowing the acceptance/consumption of events.
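The Block and Drop Newest behaviors trade delivery latency against data loss. A toy model of the two policies (real buffering is handled by the Observo Site; this only illustrates the trade-off):

```python
from collections import deque

def enqueue(buffer, event, max_events, when_full="block"):
    """Toy model of the two when-full behaviors described above."""
    if len(buffer) < max_events:
        buffer.append(event)
        return "accepted"
    if when_full == "drop_newest":
        return "dropped"       # event is lost; upstream keeps full speed
    return "backpressure"      # caller must wait; upstream slows down

buf = deque()
results = [enqueue(buf, i, max_events=2) for i in range(3)]
print(results)
```

With Block, no event is lost but sources feel backpressure once the buffer fills; with Drop Newest, throughput is preserved at the cost of discarding the overflow.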
Advanced Settings:
Compression: Enable compression to reduce data transfer size. Default: None
Options:
Gzip
None
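With Gzip enabled, the request body is compressed and the receiver is told so via the Content-Encoding header. A sketch using the Python standard library (the NDJSON content type is an assumption for illustration):

```python
import gzip
import json

events = [{"message": "hello"}, {"message": "world"}]
body = "\n".join(json.dumps(e) for e in events).encode("utf-8")

# Gzip the body; the header tells the receiver how to decompress it
compressed = gzip.compress(body)
headers = {"Content-Type": "application/x-ndjson", "Content-Encoding": "gzip"}

print(len(body), len(compressed))
```

Compression pays off on large, repetitive log batches; for very small batches the gzip overhead can exceed the savings.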
Save and Test Configuration:
Save the configuration settings in Observo AI.
Send sample trace data and verify that it appears in the HTTP-based service.
Example Scenarios
DataSync Solutions, a fictitious company specializing in real-time data processing, wants to integrate Observo with a custom SIEM system that exposes an HTTPS endpoint for log ingestion. The SIEM system requires Bearer token authentication and accepts JSON payloads via POST requests. The configuration will ensure secure and efficient telemetry data delivery to the SIEM for centralized analysis.
Standard HTTP Destination Setup
Here is a standard HTTP Destination configuration example. Only the required sections and their associated field updates are displayed in the table below:
General Settings
Name
http-dest-datasync-1
Unique identifier for the destination.
Description
Routes telemetry data to DataSync Solutions' SIEM system for real-time log analysis.
Provides context for the destination's purpose.
URL / URI
https://siem.datasyncsolutions.com:8443/api/logs
Full URL path of the SIEM system's HTTPS endpoint for log ingestion.
HTTP Method
POST
Specifies the HTTP method for sending data, defaulting to POST as required by the SIEM.
Authentication
Auth Strategy
Bearer
Selects Bearer token authentication as required by the SIEM system.
Auth User
datasync-api-user
Username associated with the API access for the SIEM system.
Auth Token
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkRhdGFTeW5jIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c
Bearer token for authenticating requests to the SIEM endpoint.
Save and Test Configuration:
Save settings, send sample trace data, verify ingestion in the SIEM system.
Saves configuration, tests data flow, and confirms data appears in the SIEM’s log dashboard.
Notes:
Ensure the Bearer token is valid and not expired; regenerate it in the SIEM system if necessary.
Verify HTTPS connectivity (port 8443) to https://siem.datasyncsolutions.com:8443/api/logs and check firewall rules.
Confirm the SIEM system accepts JSON payloads; use Observo’s transform capabilities to align data with the SIEM’s schema if needed.
Monitor Observo logs and the SIEM’s dashboard to verify data ingestion and troubleshoot errors.
This configuration enables DataSync Solutions to securely route telemetry data from Observo to their SIEM system for real-time log analysis.
Troubleshooting
Common issues and solutions when configuring an HTTP destination:
Connection Errors:
Issue: The Observo Site cannot connect to the HTTP endpoint.
Solution: Verify the URL is correct and accessible. Check network connectivity and firewall rules to ensure outbound traffic to the endpoint is allowed. Test the endpoint using tools like curl or Postman.
Authentication Failures:
Issue: The endpoint rejects requests due to invalid credentials.
Solution: Double-check the authentication method and credentials such as Bearer token or API key. Ensure the token is valid and not expired. Contact the endpoint provider for updated credentials if needed.
Data Format Issues:
Issue: The endpoint returns errors due to incorrect data format or schema.
Solution: Confirm the endpoint’s expected JSON schema. Use Observo’s transform capabilities to structure or enrich data to match the required format. Review logs in the Observo console for specific error messages.
Rate Limiting:
Issue: The endpoint returns HTTP 429 (Too Many Requests) errors.
Solution: Check the endpoint’s rate limit policies. Adjust the pipeline’s data throughput in the Observo console or implement throttling to comply with the endpoint’s limits.
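When throttled, well-behaved senders honor the server's Retry-After header if one is returned, and fall back to their own backoff otherwise. An illustrative helper (not part of the Observo console):

```python
def backoff_after_429(headers, attempt, base=1.0, cap=60.0):
    """Pick a wait time after an HTTP 429. Prefer the server's
    Retry-After header when present; otherwise use exponential backoff."""
    retry_after = headers.get("Retry-After")
    if retry_after is not None:
        return float(retry_after)            # server-mandated delay in seconds
    return min(base * (2 ** attempt), cap)   # fallback: 1s, 2s, 4s, ... capped

print(backoff_after_429({"Retry-After": "30"}, attempt=0))
print(backoff_after_429({}, attempt=3))
```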
Pipeline Failures:
Issue: The pipeline fails to deliver data to the HTTP destination.
Solution: Check the pipeline configuration for errors in source or transform settings. Review Observo logs for detailed error messages and ensure the Observo Site is operational.
No Data Forwarded:
Issue: Incorrect IP/port or a network block prevents delivery.
Solution: Verify client configuration and network connectivity.
Invalid Payload Format:
Issue: Unsupported data format.
Solution: Use the appropriate parser for JSON/text payloads.
High CPU Usage:
Issue: A single worker is overloaded.
Solution: Enable HTTP load balancing.
Request Too Large:
Issue: Payload exceeds the size limit.
Solution: Increase Max Request Size in Advanced Settings.
TLS Errors:
Issue: Certificate or TLS version mismatch.
Solution: Check TLS settings and certificate files.
Authentication Errors:
Issue: Incorrect credentials or token.
Solution: Verify Basic Auth or API Token settings.
Resources
For additional guidance and detailed information, refer to the following resources:
Best Practices:
Use HTTPS with TLS for secure data delivery unless the endpoint only supports HTTP.
Configure separate HTTP destinations for HTTP and HTTPS traffic to enhance security and performance.
For high-volume environments, deploy a load balancer to distribute HTTP traffic across worker nodes.