AWS SQS
AWS SQS (Simple Queue Service) is a fully managed message queuing service that enables decoupling and scaling of microservices, distributed systems, and serverless applications. It provides reliable, scalable message delivery between application components. This document outlines the parameters required for configuring AWS SQS as a destination for event publishing.
Purpose
Observo AI's AWS SQS destination enables real-time streaming of security and observability events to message queues for downstream processing. This destination supports both standard and FIFO queues, allowing teams to choose between maximum throughput or strict message ordering based on their requirements.
Prerequisites
Before configuring the AWS SQS destination in Observo AI, ensure the following requirements are met:
AWS Account and Permissions:
An active AWS account with access to the target SQS queues.
Required IAM permissions for SQS:
sqs:SendMessage
sqs:GetQueueUrl
sqs:GetQueueAttributes
For FIFO queues, ensure proper configuration of message deduplication and message group IDs.
Authentication:
Prepare one of the following authentication methods:
IAM Role-based Authentication: Use IAM roles attached to EC2 instances or ECS tasks.
Access Key Authentication: Provide an AWS access key ID and secret access key.
Assume Role Authentication: Use STS to assume a specific IAM role with appropriate permissions.
Network and Connectivity:
Ensure Observo AI can communicate with AWS SQS endpoints.
If using VPC endpoints for SQS, verify their configuration and routing.
Check for any proxy settings or firewall rules that might affect connectivity.
Prerequisites summary:
Observo AI Platform: Must be installed with SQS support; verify encoding format compatibility.
AWS Account: Active account with SQS access; ensure the queue exists and is accessible.
IAM Permissions: Required for SQS operations; include permissions for FIFO if needed.
Authentication: IAM Role, Access Key, or Assume Role; prepare credentials accordingly.
Network: Connectivity to AWS SQS endpoints; check VPC endpoints and proxies.
A quick pre-flight check that exercises these requirements is sketched below.
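Before configuring the destination, you can confirm that the queue exists, that credentials resolve, and that the required permissions are in place. The following is a minimal sketch, assuming Python with boto3 is available (neither is required by Observo AI); the region and queue name are placeholders.

```python
import json
import boto3

# Placeholder values; replace with your queue name and region.
REGION = "us-east-2"
QUEUE_NAME = "MyQueue"

sqs = boto3.client("sqs", region_name=REGION)

# Requires sqs:GetQueueUrl
queue_url = sqs.get_queue_url(QueueName=QUEUE_NAME)["QueueUrl"]

# Requires sqs:GetQueueAttributes
attrs = sqs.get_queue_attributes(QueueUrl=queue_url, AttributeNames=["All"])["Attributes"]
is_fifo = attrs.get("FifoQueue") == "true"

# Requires sqs:SendMessage; FIFO queues also need a message group ID
# (and a deduplication ID unless content-based deduplication is enabled).
params = {"QueueUrl": queue_url, "MessageBody": json.dumps({"message": "pre-flight check"})}
if is_fifo:
    params["MessageGroupId"] = "preflight"
    params["MessageDeduplicationId"] = "preflight-1"

sqs.send_message(**params)
print(f"Sent test message to {queue_url} (FIFO={is_fifo})")
```

If any of these calls fail with an access error, revisit the IAM permissions listed above before continuing.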
Integration
To configure AWS SQS as a destination in Observo AI, follow these steps:
Log in to Observo AI:
Navigate to the Destinations tab.
Click the Add Destination button and select Create New.
Choose AWS SQS from the list of available destinations to begin configuration.
General Settings:
Name: Provide a unique identifier for the destination, e.g., sqs-security-events.
Description (Optional): Add a brief description of the destination's purpose.
Queue URL: Specify the URL of the Amazon SQS queue to which messages will be sent (a lookup sketch follows these settings).
Example: https://sqs.us-east-2.amazonaws.com/123456789012/MyQueue
Region: Specify the AWS region where the SQS queue is located.
Example: us-east-2
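If you know the queue name and region but not the full URL, a small sketch like the following (Python with boto3, assumed here only for illustration; the queue name is a placeholder) can look it up:

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-2")

# Returns a URL of the form https://sqs.us-east-2.amazonaws.com/123456789012/MyQueue
response = sqs.get_queue_url(QueueName="MyQueue")
print(response["QueueUrl"])
```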
Encoding:
Encoding Codec: Select the codec for encoding events. Default: JSON Encoding.
Options and sub-options:

All codecs share the following sub-options:
Encoding Metric Tag Values (Select): Determines how metric tag values are represented. Tag values can be exposed as single strings (default) or as arrays of strings. Note: When set to single, only the final non-bare tag value appears with the metric. When set to full, all metric tags are shown as individual assignments.
Encoding Timestamp Format (Select): RFC3339 format (default) or UNIX format.

Codec-specific sub-options:

JSON Encoding
Pretty JSON (False): Enable formatted JSON output with indentation for improved readability.

logfmt Encoding
No codec-specific sub-options.

Apache Avro Encoding
Avro Schema: Define the Apache Avro schema for event serialization. Example: { "type": "record", "name": "log", "fields": [{ "name": "message", "type": "string" }] }

Newline Delimited JSON Encoding
No codec-specific sub-options.

No encoding
No codec-specific sub-options.

Plain text encoding
No codec-specific sub-options.

Parquet
Include Raw Log (False): Preserve the original log message as an additional field (observo_record) alongside the defined schema. When enabled, a field named "observo_record" is included in the Parquet output alongside the schema fields.
Parquet Schema: Define the Parquet schema for event encoding. Example: message root { optional binary stream; optional binary time; optional group kubernetes { optional binary pod_name; optional binary pod_id; optional binary docker_id; optional binary container_hash; optional binary container_image; optional group labels { optional binary pod-template-hash; } } }

Common Event Format (CEF)
CEF Device Event Class ID: Define a unique identifier for event type categorization (maximum 1023 characters). Example: login-failure
CEF Device Product: Enter the product name that generated the event (maximum 63 characters). Example: Log Analyzer
CEF Device Vendor: Enter the vendor name that produced the event (maximum 63 characters). Example: Observo
CEF Device Version: Enter the version of the product that generated the event (maximum 31 characters). Example: 1.0.0
CEF Extensions (Add): Create custom key-value pairs for additional event data in CEF format.
CEF Name: Enter a human-readable event description (maximum 512 characters). Example: cef.name
CEF Severity: Set the event importance level from 0 (lowest) to 10 (highest). Example: 5
CEF Version (Select): Choose the CEF specification version for formatting: version 0.x or version 1.x.

CSV Format
CSV Fields (Add): Define the field names and their sequence for CSV columns. Examples: timestamp, host, message
CSV Buffer Capacity (Optional): Configure the internal buffer size in bytes for CSV writing. Example: 8192
CSV Delimiter (Optional): Choose the character that separates CSV fields. Example: ,
Enable Double Quote Escapes (True): When activated, quotes in data are escaped by doubling. When deactivated, an escape character is used.
CSV Escape Character (Optional): Define the character for escaping quotes when double_quote is deactivated.
CSV Quote Character (Optional): Define the character for quoting CSV fields. Example: "
CSV Quoting Style (Optional): Determine when to wrap field values in quotes. Options: always quote all fields, quote only when necessary, never use quotes, or quote all non-numeric fields.

Protocol Buffers
Protobuf Message Type: Enter the fully qualified message type name for Protobuf serialization. Example: package.Message
Protobuf Descriptor File: Enter the path to the compiled protobuf descriptor file (.desc). Example: /path/to/descriptor.desc

Graylog Extended Log Format (GELF)
No codec-specific sub-options.

A small sketch comparing JSON, pretty JSON, and newline-delimited JSON output follows.
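To make the codec choice concrete, here is a purely illustrative sketch (plain Python, not Observo AI code; the event fields are made up) showing how the same events would look under JSON, pretty JSON, and newline-delimited JSON encoding:

```python
import json

events = [
    {"timestamp": "2024-05-01T12:00:00Z", "host": "web-1", "message": "login ok"},
    {"timestamp": "2024-05-01T12:00:01Z", "host": "web-2", "message": "login failed"},
]

# JSON Encoding: each event is a compact JSON object.
print(json.dumps(events[0]))

# Pretty JSON: same content, indented for readability.
print(json.dumps(events[0], indent=2))

# Newline Delimited JSON: one compact JSON object per line.
print("\n".join(json.dumps(e) for e in events))
```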
Request Configuration (Optional):
Request Concurrency: Configure outbound request parallelism. Default: Adaptive concurrency.
Options:
Adaptive concurrency: Dynamically adjusts parallelism based on demand.
A fixed concurrency of 1: Maintains single-threaded processing.
Request Rate Limit Duration Secs: Time window for the rate_limit_num setting. Default: 1.
Request Rate Limit Num: Maximum requests permitted within the rate_limit_duration_secs window.
Request Retry Attempts: Maximum retry attempts for failed requests. Default: Unlimited.
Request Retry Initial Backoff Secs: Initial wait duration before the first retry attempt. Subsequent retries use a Fibonacci sequence for backoff calculation (see the sketch after this list). Default: 1.
Request Retry Max Duration Secs: Maximum interval between retry attempts. Default: 3600.
Request Timeout Secs: Duration before aborting a request. Avoid setting below the service's internal timeout to prevent orphaned requests. Default: 60.
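For intuition about how retry waits grow, here is a minimal sketch (plain Python, an approximation of the described behavior rather than the exact implementation) of Fibonacci backoff starting from Request Retry Initial Backoff Secs and capped at Request Retry Max Duration Secs:

```python
def fibonacci_backoff(initial_secs=1, max_secs=3600, attempts=10):
    """Yield successive wait times following a Fibonacci sequence, capped at max_secs."""
    prev, curr = initial_secs, initial_secs
    for _ in range(attempts):
        yield min(curr, max_secs)
        prev, curr = curr, prev + curr

# With the defaults, the first retries wait roughly:
# 1, 2, 3, 5, 8, 13, 21, 34, 55, 89 seconds ...
print(list(fibonacci_backoff()))
```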
TLS Configuration (Optional):
TLS CA: Enter the CA certificate in PEM format.
TLS CRT: Enter the client certificate in PEM format.
TLS Key: Enter the private key in PEM format.
Verify Certificate: Activates certificate verification. Certificates must be current and issued by a trusted authority. Verification follows a hierarchical chain from the certificate to the root. Applies to both incoming and outgoing connections. Only disable if you fully understand the security implications. Default: Disabled.
Verify Hostname: Activates hostname verification. The hostname used for connection must match the TLS certificate's Common Name or Subject Alternative Name. Applies only to outgoing connections. Only disable if you fully understand the security implications. Default: Disabled.
Acknowledgments:
Acknowledgements Enabled: Activates end-to-end acknowledgements. When enabled, sources will wait for the sink to acknowledge events before acknowledging at the source level. Default: Disabled.
Authentication (Optional):
Auth Access Key Id: Enter the AWS access key ID.
Example: AKIAIOSFODNN7EXAMPLE
Auth Secret Access Key: Enter the AWS secret access key.
Example: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Auth Assume Role: Enter the ARN of an IAM role to assume (see the sketch after this list).
Example: arn:aws:iam::123456789098:role/my_role
Auth Region: Enter the AWS region for STS requests. If not specified, the configured service region is used.
Example: us-west-2
External Id: Enter the external ID for role assumption.
Example: 12345
Auth Load Timeout Secs: Timeout duration for credential loading, in seconds. Applies when using the default credentials chain or assume_role.
Example: 30
Auth Imds Connect Timeout Seconds (Optional): Connection timeout for IMDS.
Auth Imds Max Attempts: Number of retry attempts for IMDS token and metadata retrieval.
Auth Imds Read Timeout Seconds: Read timeout for IMDS.
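For the assume-role option, the sketch below (Python with boto3, assumed only for illustration; the role ARN, external ID, region, and queue URL are placeholders) shows the equivalent STS call and how the temporary credentials would be used against SQS. Running something like this can help confirm the role and external ID before entering them in Observo AI.

```python
import boto3

sts = boto3.client("sts", region_name="us-west-2")

# Assume the role that Observo AI will use (placeholder values below).
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789098:role/my_role",
    RoleSessionName="observo-sqs-test",
    ExternalId="12345",
)["Credentials"]

# Build an SQS client from the temporary credentials and send a test message.
sqs = boto3.client(
    "sqs",
    region_name="us-east-2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
sqs.send_message(
    QueueUrl="https://sqs.us-east-2.amazonaws.com/123456789012/MyQueue",
    MessageBody='{"message": "assume-role test"}',
)
```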
Buffering Configuration (Optional):
Buffer Type: Choose the buffering mechanism for event delivery.
Options:
Memory: High-performance, in-memory buffering.
Max Events: Maximum event count permitted in the buffer. Default: 500
When Full: Buffer overflow handling strategy. Default: Block
- Block: Wait for buffer space to become available. Applies backpressure upstream, causing sources to reduce event acceptance rate. Ensures no data loss but may cause edge buffering.
- Drop Newest: Discard incoming events when the buffer is full. Events are intentionally dropped. Used when performance is prioritized over data completeness and temporary event loss is acceptable.
Disk: Persistent, disk-based buffering.
Max Bytes Size: Maximum buffer size in bytes. Minimum required: 268435488
When Full: Buffer overflow handling strategy. Default: Block, with the same Block and Drop Newest behavior described above.
The Block and Drop Newest strategies are illustrated in the conceptual sketch below.
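The two overflow strategies can be pictured with a small, purely conceptual sketch (plain Python using a bounded queue, not how Observo AI is implemented):

```python
from queue import Queue, Full

buffer = Queue(maxsize=500)  # analogous to Max Events: 500

def enqueue_block(event):
    # Block: wait until space is available, applying backpressure upstream.
    buffer.put(event, block=True)

def enqueue_drop_newest(event):
    # Drop Newest: discard the incoming event when the buffer is full.
    try:
        buffer.put_nowait(event)
    except Full:
        pass  # event intentionally dropped
```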
Advanced Settings (Optional):
Message Deduplication ID: Template value enabling AWS to detect duplicate messages. Should generate a unique string per event. Consult AWS documentation for deduplication mechanics.
Example: {{ transaction_id }}
Message Group ID: Identifier specifying message group membership. Applicable only to FIFO queues (see the FIFO send sketch after these settings).
Examples: observo, observo-%Y-%m-%d
Endpoint: Custom endpoint URL for AWS-compatible services.
Example: http://127.0.0.0:5000/path/to/service
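For FIFO queues, these two settings correspond to the MessageDeduplicationId and MessageGroupId parameters of the SQS SendMessage API. The following is a hedged sketch of an equivalent send (Python with boto3, assumed only for illustration; the queue URL and the transaction_id field are placeholders):

```python
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-2")

event = {"transaction_id": "txn-0001", "message": "user login", "host": "web-1"}

sqs.send_message(
    QueueUrl="https://sqs.us-east-2.amazonaws.com/123456789012/MyQueue.fifo",
    MessageBody=json.dumps(event),
    # Mirrors the "{{ transaction_id }}" template: a unique value per event.
    MessageDeduplicationId=event["transaction_id"],
    # Mirrors the Message Group ID setting, e.g. "observo".
    MessageGroupId="observo",
)
```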
Save and Test Configuration:
Save the configuration settings.
Send sample events to the SQS queue and verify successful message delivery (a verification sketch follows).
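To verify delivery independently of the Observo AI UI, you can poll the queue directly. A minimal sketch (Python with boto3, assumed only for illustration; the queue URL is a placeholder, and this requires the additional sqs:ReceiveMessage permission, which is not needed by the destination itself):

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-2")

# Note: receiving hides messages for the queue's visibility timeout,
# so run this against a test queue or expect a short delay for other consumers.
response = sqs.receive_message(
    QueueUrl="https://sqs.us-east-2.amazonaws.com/123456789012/MyQueue",
    MaxNumberOfMessages=5,
    WaitTimeSeconds=10,  # long polling
)
for msg in response.get("Messages", []):
    print(msg["Body"])
```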
Troubleshooting
If issues arise with the AWS SQS destination in Observo AI, use the following steps to diagnose and resolve them:
Verify Configuration Settings:
Ensure all fields, such as Queue URL and Region, are correctly entered and match the AWS setup.
Confirm that the SQS queue exists and is accessible in the specified region.
For FIFO queues, verify that Message Deduplication ID and Message Group ID are properly configured.
Check Authentication:
Verify that IAM credentials (access key and secret key) are valid and have not expired.
If using assume role authentication, ensure the role ARN is correct and the external ID matches.
Confirm that the IAM user or role has the necessary permissions.
Validate Permissions:
Ensure the credentials have the required permissions:
sqs:SendMessage, sqs:GetQueueUrl, sqs:GetQueueAttributes.
For FIFO queues, verify additional permissions if required.
Network and Connectivity:
Verify firewall rules, VPC endpoint configurations, or proxy settings that may block access to AWS SQS endpoints.
Test connectivity using the AWS CLI with similar configurations to verify SQS access.
Ensure DNS resolution is working correctly for SQS endpoints.
Monitor Message Delivery:
Check the SQS queue metrics in the AWS console to verify message delivery (a programmatic check is sketched after these steps).
Use the Observo AI Analytics tab to monitor event throughput and identify delivery failures.
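The same queue metrics shown in the AWS console can also be pulled programmatically. A sketch (Python with boto3, assumed only for illustration; requires CloudWatch read permissions, and the region and queue name are placeholders) that reads NumberOfMessagesSent for the last hour:

```python
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-2")

now = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/SQS",
    MetricName="NumberOfMessagesSent",
    Dimensions=[{"Name": "QueueName", "Value": "MyQueue"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Sum"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])
```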
Common issues:
Messages not delivered: Incorrect queue URL or region. Verify the queue URL and region.
Authentication errors: Invalid credentials or role. Check the authentication method and permissions.
Connectivity issues: Firewall or proxy blocking access. Test network connectivity and VPC endpoints.
"Access Denied": Insufficient permissions. Verify IAM permissions for SQS.
"Queue does not exist": Incorrect queue URL. Check the queue URL and region settings.
"Message rejected": Invalid message format or size. Verify encoding settings and message size.
Throttling errors: Rate limit exceeded. Adjust rate limits or request a quota increase.
Resources
For additional guidance and detailed information, refer to the following resources:
AWS Documentation:
Amazon SQS Queue Configuration: Guide to configuring SQS queues.
Amazon SQS Permissions: Details on SQS authentication and permissions.
Amazon SQS FIFO Queues: Information on configuring and using FIFO queues.
Best Practices:
Review AWS best practices for message queue design, including batch processing, dead-letter queue configuration, and visibility timeout settings for optimal performance and reliability.