HTTP Server (Push)
The HTTP Server (Push) source employs a push-based mechanism in which monitored applications or services actively transmit their metrics or data to the monitoring platform via HTTP requests.
Purpose
The purpose of the Source HTTP Server (Push) in Observo AI is to ingest log and event data from any system capable of sending data via HTTP, such as custom applications, scripts, or third-party tools. It provides a flexible and lightweight method to collect structured or unstructured data in real time without requiring a specialized agent. This enables seamless integration of diverse data sources into Observo AI's observability pipeline for enrichment, reduction, and routing.
Prerequisites
Before configuring the HTTP Server (Push) Source in Observo AI, ensure the following requirements are met to facilitate seamless data ingestion:
Observo AI Platform Setup:
The Observo AI Site must be installed and available.
Verify that the platform supports HTTP payloads, including JSON, text, or other structured formats commonly sent via HTTP POST or GET requests.
Network and Connectivity:
Ensure Observo AI can receive HTTP/HTTPS traffic on the configured port. The default port is TCP 10080 for HTTP and TCP 10443 for HTTPS. Custom ports within the range 10000-10200 are supported.
Check for firewall rules, proxy settings, or VPC configurations that may affect connectivity to the specified ports.
If using HTTPS, ensure certificates are properly configured, such as CA certificates for secure communication.
Authentication (Optional for HTTPS):
For HTTPS-enabled sources, prepare one of the following authentication methods:
Certificate-Based Authentication: Provide paths to CA certificate, host certificate, and private key files if required.
No Authentication: If HTTPS is disabled, no additional credentials are needed.
Optionally, configure Basic Authentication or API token-based authentication if required by the HTTP clients.
Load Balancer (Optional):
For high-volume HTTP traffic, configure a load balancer such as HAProxy, nginx, or AWS ELB to distribute traffic across Observo AI worker nodes to prevent CPU strain on a single node.
If using a load balancer, consider enabling Proxy Protocol v1 or v2 to preserve the original sender IP address.
| Component | Requirement | Details |
| --- | --- | --- |
| Observo AI Platform | The Observo AI Site must be installed and available | Verify support for JSON/text payloads |
| Network | Connectivity to HTTP/HTTPS ports | Default ports: TCP 10080 (HTTP), TCP 10443 (HTTPS); check firewall/proxy |
| Authentication | TLS certificate setup or Basic Auth if enabled | Provide CA, host cert, and key files for HTTPS; configure Basic Auth if needed |
| Load Balancer | Optional for high-volume HTTP data | Use HAProxy, nginx, or AWS ELB; enable Proxy Protocol if needed |
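As a sketch of the load-balancer option, the following nginx stream-module fragment distributes pushed traffic across two worker nodes with Proxy Protocol enabled to preserve the sender IP; the hostnames and ports are illustrative, not part of any Observo AI default:

```nginx
# Minimal nginx (stream module) sketch: spread pushed HTTP traffic across
# two Observo AI worker nodes. Hostnames and ports are illustrative.
stream {
    upstream observo_workers {
        server worker-1.internal:10010;
        server worker-2.internal:10010;
    }

    server {
        listen 10080;            # port the HTTP clients push to
        proxy_pass observo_workers;
        proxy_protocol on;       # send Proxy Protocol v1 to preserve client IP
    }
}
```

If the workers should see Proxy Protocol v2 instead, an L4 balancer that supports it (e.g., AWS NLB or HAProxy with `send-proxy-v2`) can be used in place of nginx.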
Integration
To configure HTTP Server (Push) as a source in Observo AI, follow these steps to set up and test the data flow:
Log in to Observo AI:
Navigate to the Sources tab.
Click the "Add Source" button and select "Create New".
Choose "HTTP Server (Push)" from the list of available sources to begin configuration.
General Settings:
Name: A unique identifier for the source, such as http-push-source-1.
Description (Optional): Provide a description for the source.
Socket Address: The socket address on which the source listens for connections, in host:port format. The port must fall within the range 10001-10099. Ports already in use: 10001, 10002, 10003, 10004, 10005, 10006, 10007, 10008, 10009, 10050, 10099, 10100.
Example: 0.0.0.0:10010
Authentication Settings (Optional):
Username: Username for basic authentication.
Password: Password for basic authentication.
Advanced Settings:
Encoding: The expected encoding of received data. Note that for JSON encoding, the fields of the JSON objects are output as separate fields.
Select from the dropdown:
JSON
Plaintext
Method: Specifies the action of the HTTP request. Default: HTTP POST method
Select from the dropdown:
HTTP DELETE method
HTTP GET method
HTTP HEAD method
HTTP PATCH method
HTTP POST method
HTTP PUT method
Path: The URL path to which log event requests are sent (Default: /)
Examples: /event/path, /logs
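The note about JSON encoding means each top-level key of a posted JSON object becomes its own event field, rather than the body arriving as one opaque string. A minimal sketch of the distinction (treating the Plaintext case as a single message field is an assumption for illustration):

```python
import json

# A sample payload as a client might POST it to the source.
payload = '{"level": "error", "service": "checkout", "message": "timeout"}'

# With Encoding = JSON, the object's keys become separate event fields:
event_json = json.loads(payload)
print(sorted(event_json))   # ['level', 'message', 'service']

# With Encoding = Plaintext, the body stays a single field
# ("message" as the field name is an assumption for illustration):
event_plain = {"message": payload}
print(list(event_plain))    # ['message']
```

Choosing JSON here is what allows downstream parsers and reduction rules to address individual fields like `level` or `service` directly.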
Parser Config:
Enable Source Log Parser (Default: False):
Toggle Enable Source Log Parser Switch to enable.
Select appropriate Parser from the Source Log Parser dropdown.
Add additional Parsers as needed.
Pattern Extractor:
Refer to Observo AI's Pattern Extractor documentation for details on configuring pattern-based data extraction.
Archival Destination:
Toggle Enable Archival on Source Switch to enable.
Under Archival Destination, select from the list of Archival Destinations (Required).
Save and Test Configuration:
Save the configuration settings.
Send sample HTTP data, such as via curl or a similar HTTP client, and verify ingestion in Observo AI by monitoring the Analytics tab in the target pipeline.
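The send-and-verify step can be sketched end to end in Python. The snippet below stands up a local stub listener in place of the configured Observo AI source (the /logs path and the event fields are illustrative) and pushes one JSON event to it, mirroring what curl would do:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stub listener standing in for the Observo AI HTTP Server (Push) source.
# In a real test, the client would target your configured Socket Address and Path.
received = []

class StubSource(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        received.append(json.loads(body))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), StubSource)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: push one JSON log event, as a custom application or script would.
event = {"level": "info", "message": "order placed", "order_id": 1234}
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/logs",
    data=json.dumps(event).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)            # 200

server.shutdown()
print(received[0]["message"])     # order placed
```

Against a real deployment, replace the stub URL with your configured host, port, and Path, then confirm the event appears in the Analytics tab.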
Example Scenarios
OmniRetail Solutions, a fictitious enterprise in the retail sector, operates a chain of e-commerce and brick-and-mortar stores. To enhance their observability, OmniRetail aims to ingest log and event data from their custom e-commerce application and third-party tools into the Observo AI platform using the HTTP Server (Push) source. This integration will enable real-time monitoring of customer transactions, website performance, and inventory updates. Below is the detailed configuration process for setting up the HTTP Server (Push) source in Observo AI, based on the provided documentation, with all required fields specified.
Standard HTTP Server (Push) Source Setup
Here is a standard HTTP Server (Push) Source configuration example. Only the required sections and their associated field updates are displayed in the table below:
| Section | Setting | Value | Description |
| --- | --- | --- | --- |
| General Settings | Name | ecommerce-http-logs | Unique identifier for the HTTP Server (Push) source. |
| General Settings | Description | Source for ingesting e-commerce transaction and performance logs | Optional description for clarity. |
| General Settings | Socket Address | 0.0.0.0:10010 | Listens for HTTP requests on port 10010 across all interfaces, within the supported range (10001-10099). |
| Authentication Settings | Username | omni_admin | Username for HTTP Basic Authentication. |
| Authentication Settings | Password | ${HTTP_AUTH_PASSWORD} | Password stored securely in Observo AI's secure storage. |
| Advanced Settings | Encoding | JSON | Expects JSON-encoded data, with fields output as separate fields in Observo AI. |
| Advanced Settings | Method | HTTP POST method | Specifies POST as the action for HTTP requests to send log data. |
| Advanced Settings | Path | /ecommerce/logs | URL path for receiving log event POST requests. |
Test Configuration
Save the configuration in the Observo AI interface.
Send sample HTTP POST requests with JSON payloads (e.g., transaction logs) to http://observo.omniretail.com:10010/ecommerce/logs using curl or a similar HTTP client, including Basic Authentication credentials.
Verify data ingestion in the Observo AI Analytics tab, checking for expected transaction and performance metrics.
Monitor Observo AI logs for errors and confirm data throughput matches expected volume.
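The Basic Authentication portion of this test can be sketched as follows; the credentials match the OmniRetail scenario, but the password value is a placeholder (the real secret would be resolved from secure storage), and the request is only constructed, not sent:

```python
import base64
import urllib.request

# Build the Authorization header a client must send when Basic Authentication
# is enabled on the source. The password here is a placeholder, not a secret.
username = "omni_admin"
password = "s3cret"  # in practice, resolved from secure storage

token = base64.b64encode(f"{username}:{password}".encode()).decode()
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Basic {token}",
}

# Attach the header to a request aimed at the configured endpoint.
req = urllib.request.Request(
    "http://observo.omniretail.com:10010/ecommerce/logs",
    data=b'{"event": "checkout", "total": 59.99}',
    headers=headers,
    method="POST",
)
print(req.get_header("Authorization"))  # Basic b21uaV9hZG1pbjpzM2NyZXQ=
```

The equivalent curl invocation simply adds `-u omni_admin:<password>`, which produces the same `Authorization: Basic ...` header.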
Scenario Troubleshooting
No Data Received: Ensure the client sends data to http://observo.omniretail.com:10010/ecommerce/logs and that port 10010 is open. Check firewall rules and network connectivity.
Authentication Errors: Verify that omni_admin and ${HTTP_AUTH_PASSWORD} match the credentials used by the client.
Invalid Payload Format: Confirm that payloads are valid JSON and that the JSON parser is enabled in Parser Config if needed.
High CPU Usage: For high-volume traffic, configure a load balancer (e.g., nginx) to distribute requests across Observo AI worker nodes.
Connection Issues: Test connectivity using curl to ensure the Observo AI endpoint is reachable and port 10010 is not blocked.
This configuration enables OmniRetail Solutions to securely ingest e-commerce logs into Observo AI, supporting real-time monitoring of transactions and website performance.
Troubleshooting
If issues arise with the HTTP Server (Push) Source in Observo AI, use the following steps to diagnose and resolve them:
Verify Configuration Settings:
Ensure fields like Name, Socket Address, Method, and Path match the HTTP client's configuration.
Confirm that the correct ports (such as 10080 for HTTP or 10443 for HTTPS) are open and accessible.
Check Network Connectivity:
Verify that firewall rules, proxy settings, or VPC configurations allow traffic to the specified HTTP/HTTPS ports.
Test connectivity using tools like curl or telnet to ensure the Observo AI instance can receive data on the configured ports.
Validate TLS Configuration:
For HTTPS-enabled sources, ensure CA, certificate, and key files are correctly specified and accessible.
Check for TLS version mismatches, such as clients using TLS 1.2 while Observo AI expects TLS 1.3. Adjust TLS settings or disable "Verify Hostname" if certificate issues occur.
Validate Authentication:
If Basic Authentication or API Token is enabled, ensure the client provides the correct credentials or token.
Check Observo AI logs for authentication failure errors.
Monitor Logs and Data:
Verify data ingestion by monitoring the Analytics tab in the Observo AI pipeline for throughput and event counts.
Check Observo AI logs for errors related to parsing, request size limits, or connection issues.
Common Error Messages:
"No data received": Ensure the HTTP client is sending data to the correct IP/port and that network connectivity is not blocked. Verify the client's protocol (HTTP/HTTPS) matches the source configuration.
"Invalid payload format": Confirm that the HTTP payloads are in a supported format, such as JSON or text. Use the appropriate parser in the Source Log Parser settings to process the data.
"High CPU usage": For high-volume HTTP traffic, enable load balancing to distribute traffic across worker nodes.
"Request too large": Increase the Max Request Size in Advanced Settings if large payloads are being rejected.
| Error | Likely Cause | Resolution |
| --- | --- | --- |
| No data received | Incorrect IP/port or network block | Verify client configuration and network connectivity |
| Invalid payload format | Unsupported data format | Use appropriate parser for JSON/text payloads |
| High CPU usage | Single worker overloaded | Enable HTTP load balancing |
| Request too large | Payload exceeds size limit | Increase Max Request Size in Advanced Settings |
| TLS errors | Certificate or version mismatch | Check TLS settings and certificate files |
| Authentication errors | Incorrect credentials or token | Verify Basic Auth or API Token settings |
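The connectivity checks above can be scripted. This minimal helper (the hostname and port in the usage comment are illustrative) reports whether a TCP connection to the configured port succeeds, much like `curl` or `telnet` would:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, and DNS failures
        return False

# Usage (hostname and port are illustrative):
#   port_open("observo.example.com", 10080)
```

A False result narrows the problem to firewall rules, routing, or the listener not running, before any HTTP-level debugging is needed.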
Resources
For additional guidance and detailed information, refer to the following resources:
Best Practices:
Use HTTPS with TLS for secure data delivery unless the client only supports HTTP.
Configure separate HTTP Server (Push) Sources for HTTP and HTTPS traffic to enhance security and performance.
For high-volume environments, deploy a load balancer to distribute HTTP traffic across worker nodes.