Tanium
Integrate Tanium as a source in the Observo AI platform using the Splunk HEC Source. This allows high-volume endpoint visibility data, operational data, and security events to be forwarded into Observo AI for advanced analytics, threat detection, and operational intelligence.
Purpose
Tanium's platform exports comprehensive endpoint telemetry and security data in near real-time. Observo AI receives and ingests this data through:
Splunk HEC for low-latency ingestion of critical security events
Observo AI processes these logs using AI-driven filters, enrichments, and destination rules—helping enterprises reduce SIEM costs, detect threats faster, and maintain audit visibility.
Prerequisites
Before configuring the Tanium Integration in Observo AI, verify these essential requirements are satisfied for successful data collection:
Observo AI Platform Setup:
The Observo AI platform must be installed and operational, configured to support Splunk HEC Source integrations.
Platform should handle standard log formats including JSON, XML, and key-value pairs, as Tanium generates data in multiple formats.
Ensure proper network connectivity and firewall configurations to allow traffic from Tanium infrastructure.
Tanium Platform Requirements:
An active Tanium deployment with Tanium Core Platform and appropriate modules installed (Threat Response, Comply, Asset, Patch, etc.).
Administrative privileges or delegated permissions to configure data export settings and manage integrations
Tanium Data Export license enabled for your deployment.
Access to Tanium Console with ability to create and manage Saved Questions, Scheduled Actions, and Data Connectors.
Authentication and Security:
For HEC Integration: Generate secure authentication tokens with appropriate access levels for HTTP endpoint communication.
Valid TLS certificates (TLS 1.2+ required) with proper CA chain for secure data transmission.
Network security groups or firewall rules configured to allow Tanium server communication.
Network and Connectivity:
Ensure Observo AI can receive HTTPS traffic on designated ports (typically 8088 or 10088 for HEC).
Configure load balancers or reverse proxies for high availability and security hardening.
Establish proper DNS resolution for all endpoints involved in the integration.
Establish reliable network connectivity between Tanium infrastructure and Observo AI platform.
Observo AI Platform
Must support Splunk HEC
Verify TLS 1.2+ support and compression handling
Tanium Platform
Active deployment with Data Export license and admin access
Core Platform required with appropriate module licenses
Authentication
Secure token-based authentication for HEC
Rotate credentials every 90 days minimum
Network Security
HTTPS endpoints with proper TLS and IP restrictions
Use trusted CA certificates, avoid self-signed in production
Integration
Integration via Splunk HEC
For Tanium log ingestion, Observo AI provides streamlined integration with Tanium through a dedicated HTTP Event Collector (HEC) approach, which is optimal for the following reasons:
Advantages:
Real-time log streaming with minimal latency
Direct authentication through secure tokens
Immediate data availability for analysis and alerting
Simplified architecture with fewer components
Considerations:
Requires stable HTTPS endpoint with high availability
Direct exposure to internet traffic requiring robust security measures
Real-time processing demands higher resource allocation
Integration
Tanium → Observo AI via Splunk HEC
This configuration establishes direct HTTPS-based data streaming from Tanium to Observo AI using the HEC source.
Observo AI Configuration:
Log in to Observo AI:
Navigate to the Sources tab
Click the Add Source button and select Create New
Choose Splunk HEC from the list of available sources to begin configuration
Refer to the Splunk HEC documentation for configuration details
Tanium Configuration:
Access Tanium Console:
Navigate to Modules → Connect
Click Create Connection to begin configuration
Select Splunk as the destination type from the available connection templates
Choose HTTP Event Collector (HEC) as the delivery method
HEC Endpoint Configuration:
Type: HTTPS
URL: <Observo push source endpoint>/services/collector/raw
Retrieve the push source endpoint from the Observo UI for the Splunk HEC source you created in step 1.
Note: Tanium Connect typically uses the /services/collector/raw endpoint for HEC integrations
Authentication Method: Token-based authentication
HEC Token:
<Your Auth Code>
Ensure the auth code matches the configuration in the Observo AI Splunk HEC source
SSL/TLS: Enable secure transmission (HTTPS)
Data Source Configuration:
Source Types: Configure relevant Tanium data feeds:
Reporting Sources: Asset inventory, compliance findings, vulnerability assessments
Sensor Data: System information, running processes, network connections
Module Data: Threat Response alerts, Detect signals, Comply findings
Live Data: Real-time endpoint queries and responses
Output Format: JSON (recommended) or Syslog format
Compression: Enable gzip compression to reduce bandwidth usage
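With JSON selected as the output format, an exported event might look like the sketch below. All field names here are illustrative assumptions, not Tanium's actual schema; the real structure depends on the modules, sensors, and columns you select in the connection.

```json
{
  "timestamp": "2025-01-15T10:32:00Z",
  "host": "endpoint-01.corp.example.com",
  "source": "tanium.threat_response",
  "event_type": "alert",
  "severity": "high",
  "details": {
    "process": "powershell.exe",
    "rule": "Suspicious Encoded Command"
  }
}
```

Whatever shape your deployment produces, confirm that the Observo AI pipeline's parsing rules match it before enabling high-volume feeds.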
Schedule and Delivery Settings:
Delivery Schedule: Configure based on data type and criticality
Real-time for security events and alerts
Periodic (hourly/daily) for inventory and compliance data
Batch Settings: Optimize batch size for network efficiency
Error Handling: Configure retry logic and failure notifications
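Tanium Connect handles batching and compression internally; the sketch below is only a minimal illustration of what the batch-plus-gzip settings above amount to. The batch size of 500 and the event fields are illustrative assumptions, not Tanium defaults.

```python
import gzip
import json

def build_batch(events, max_batch=500):
    """Split events into batches of at most max_batch entries.

    max_batch=500 is an illustrative default; tune it to your network
    capacity and ingestion limits, per the Batch Settings above.
    """
    for i in range(0, len(events), max_batch):
        yield events[i:i + max_batch]

def compress_batch(batch):
    """Serialize a batch as newline-delimited JSON and gzip it,
    mirroring the 'Compression: Enable gzip' setting above."""
    payload = "\n".join(json.dumps(e) for e in batch)
    return gzip.compress(payload.encode("utf-8"))

# 1200 sample events split into batches of 500, 500, and 200
events = [{"id": i, "source": "tanium"} for i in range(1200)]
batches = [compress_batch(b) for b in build_batch(events)]
```

Larger batches reduce per-request overhead but increase retransmission cost on failure, which is the trade-off the Batch Settings item asks you to optimize.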
Test Configuration
Save the configuration in the Observo AI interface
Use curl to test the HEC endpoint with sample Tanium log data
Verify token authentication and TLS connectivity
Monitor Observo AI ingestion logs for successful data reception
Validate log parsing and field extraction in the Analytics tab
Validate the HEC endpoint configuration using Tanium's built-in connection test feature
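The curl test above can also be scripted. The sketch below builds a test request against the HEC endpoint; the endpoint URL and token are placeholders you must replace with the values from your Observo AI Splunk HEC source, and the event fields are illustrative.

```python
import json
import urllib.request

def build_hec_request(endpoint, token, event):
    """Build a POST request for a Splunk HEC-compatible endpoint.

    `endpoint` is the push source endpoint copied from the Observo UI
    (ending in /services/collector/raw); `token` is the auth code
    configured on that source. Both are placeholders here.
    """
    body = json.dumps(event).encode("utf-8")
    headers = {
        "Authorization": f"Splunk {token}",  # standard HEC auth header scheme
        "Content-Type": "application/json",
    }
    return urllib.request.Request(endpoint, data=body, headers=headers,
                                  method="POST")

# Placeholder endpoint and token -- substitute your own values:
req = build_hec_request(
    "https://observo.example.com/services/collector/raw",
    "YOUR-AUTH-CODE",
    {"source": "tanium", "event": "connectivity test"},
)
# urllib.request.urlopen(req)  # expect HTTP 200 when token and TLS are valid
```

The equivalent curl invocation sends the same Authorization: Splunk <token> header with a JSON body; a 401 response at this step points to the token checks in the troubleshooting section below.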
Scenario Troubleshooting
Authentication and Connection Issues:
Authentication Failures: Verify HEC token validity, check token permissions and confirm matching configuration
TLS Handshake Failures: Check certificate validity, CN/SAN matching, and TLS version compatibility
Connection Timeouts: Test network connectivity, review firewall configurations, and validate endpoint accessibility
HTTP Error Codes: Interpret 4xx/5xx errors, check endpoint availability, and verify request format
Data Transmission Issues:
Missing Events: Verify Tanium export schedules, check filtering rules, and confirm data source selections
Volume Overload: Implement rate limiting, batch processing optimization, and capacity scaling
Authentication Expiry: Establish credential rotation procedures and monitoring
Network Security: Regularly review firewall rules and monitor for unauthorized access
Parsing Errors: Validate JSON format structure, check field mappings, and test data types
Module Dependencies: Ensure required Tanium modules are licensed and configured properly
Security Best Practices
Authentication and Access Control
Token Management: Rotate HEC tokens every 90 days minimum, use strong token generation
Access Control: Use dedicated service accounts for Tanium data export with minimal privileges. Restrict token permissions to minimum required for data export functionality
Multi-Factor Authentication: Enable MFA for all administrative access to Tanium export configurations
Audit Logging: Maintain comprehensive logs of all configuration changes and access attempts
Transport Security
TLS Configuration: Enforce TLS 1.3 where possible, maintain TLS 1.2 minimum for legacy compatibility
Certificate Management: Use trusted CA certificates, avoid self-signed certificates in production
Cipher Suites: Restrict to modern, secure cipher suites, disable legacy protocols
Certificate Monitoring: Implement automated certificate expiry monitoring and renewal
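One sketch of the expiry-monitoring step, assuming the certificate date string format returned by Python's `ssl.SSLSocket.getpeercert()` (e.g. 'Jun 10 12:00:00 2026 GMT'); the 30-day alert threshold is an illustrative choice, not a requirement.

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> int:
    """Days remaining before a certificate expires.

    `not_after` uses the notAfter format from ssl getpeercert(),
    e.g. 'Jun 10 12:00:00 2026 GMT'.
    """
    expiry = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expiry = expiry.replace(tzinfo=timezone.utc)
    return (expiry - datetime.now(timezone.utc)).days

# Live check against an endpoint (requires network access; host is a placeholder):
# host = "observo.example.com"
# ctx = ssl.create_default_context()
# with ctx.wrap_socket(socket.socket(), server_hostname=host) as s:
#     s.connect((host, 443))
#     remaining = days_until_expiry(s.getpeercert()["notAfter"])
#     if remaining < 30:  # illustrative alert threshold
#         print(f"renew soon: {remaining} days left")
```

Run a check like this on a schedule and alert well before expiry, so renewal never becomes the TLS handshake failure described in the troubleshooting section.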
Network Security
Firewall Rules: Configure strict ingress rules allowing only necessary traffic
Monitoring and Alerting: Implement comprehensive monitoring for unusual traffic patterns or connection anomalies
Network Segmentation: Isolate Tanium and Observo AI integration traffic using dedicated network segments
Data Protection
Access Logging: Monitor and log all access to logging infrastructure and exported data
Data Classification: Classify exported data based on sensitivity and implement appropriate handling procedures
Privacy Controls: Implement data masking for sensitive information in endpoint telemetry
Troubleshooting
When encountering issues with the Tanium Integration in Observo AI, follow these systematic diagnostic and resolution procedures:
Configuration Validation
Endpoint Verification: Ensure all URLs, ports, and authentication parameters are correctly configured and accessible
Authentication Check: Verify tokens and certificates, validate expiration dates, and confirm permission levels
Format Validation: Ensure log formats match expected JSON structure and field mappings
Network Connectivity: Verify end-to-end connectivity using network diagnostic tools
Common Error Messages and Resolutions
"401 Unauthorized"
Invalid or expired HEC token
Regenerate the token or verify token validity in the Tanium and Observo AI configuration
"403 Forbidden"
Token or user access limitations
Review HEC token permissions, verify Tanium user privileges
"SSL Handshake Failed"
Certificate or TLS version mismatch
Verify certificate validity, check TLS version compatibility
"Connection Timeout"
Network connectivity or firewall issues
Test network path, review firewall rules, check load balancer health
"Export Failure"
Tanium service issues or configuration errors
Check Tanium service status and export configurations
"JSON Parse Error"
Log format mismatch or corruption
Validate sample logs against expected JSON schema
"Rate Limit Exceeded"
Excessive request frequency or volume
Adjust rate limits, implement backoff strategies, optimize batch sizes
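The backoff strategy mentioned above can be sketched as an exponential schedule with optional jitter; all the numeric defaults here (5 retries, 1 s base, 60 s cap) are illustrative choices to tune for your deployment.

```python
import random

def backoff_delays(max_retries=5, base=1.0, cap=60.0, jitter=False):
    """Exponential backoff schedule for 'Rate Limit Exceeded' responses.

    Doubles the delay each retry up to `cap`; full jitter spreads out
    retries from many senders that were throttled at the same moment.
    """
    for attempt in range(max_retries):
        delay = min(cap, base * (2 ** attempt))
        yield random.uniform(0, delay) if jitter else delay

# A sender loop would pause between attempts (send_batch is hypothetical):
# for delay in backoff_delays(jitter=True):
#     if send_batch(events):
#         break
#     time.sleep(delay)
```

Without jitter the schedule is deterministic (1, 2, 4, 8, 16 seconds by default); enable jitter when many Tanium zone servers share one endpoint, so throttled senders do not all retry in lockstep.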
"SSL Verification Failed"
Certificate issues or TLS configuration problems
Validate the certificate chain and validity, check TLS version compatibility, and renew or replace the certificate as needed
Monitoring and Validation
Log Volume Monitoring: Monitor data volume trends and identify processing bottlenecks
Error Rate Analysis: Monitor ingestion error rates and patterns
Latency Measurement: Track end-to-end data delivery times and optimize for performance requirements
Data Quality Checks: Validate field extraction, timestamp parsing, and data completeness
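A minimal sketch of the data-quality step, assuming events arrive as JSON with ISO 8601 timestamps. The required field names are hypothetical; substitute whatever your Tanium connection actually exports.

```python
import json
from datetime import datetime

REQUIRED_FIELDS = {"host", "source", "event"}  # illustrative, not Tanium's schema

def validate_event(raw: str):
    """Basic quality checks: the payload parses as JSON, required
    fields are present, and any timestamp parses as ISO 8601.
    Returns a list of error strings (empty means the event passed)."""
    try:
        event = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    errors = []
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    ts = event.get("timestamp")
    if ts is not None:
        try:
            datetime.fromisoformat(ts)
        except (TypeError, ValueError):
            errors.append(f"bad timestamp: {ts!r}")
    return errors
```

Sampling a few events per batch through a check like this surfaces schema drift or timestamp-format changes before they silently degrade field extraction downstream.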
Performance Optimization
Compression Settings: Optimize compression levels for bandwidth vs. CPU trade-offs
Batch Optimization: Fine-tune batch sizes based on network capacity and processing efficiency
Resource Utilization: Monitor CPU, memory, and network utilization for scaling decisions
Advanced Troubleshooting
Network Trace Analysis: Use packet capture tools to analyze network-level issues
Log Analysis: Enable debug logging for detailed troubleshooting information
Health Check Implementation: Implement comprehensive health checks for all components
Backup Procedures: Establish procedures for failover and data recovery scenarios
Resources
Note: Tanium documentation URLs require authenticated access through your specific Tanium environment. To access current Tanium documentation, log in to your Tanium environment and navigate to Help → Documentation within the Tanium Console.