Logz.io

The Observo AI Logz.io Logs destination sends observability events, such as logs, to Logz.io’s cloud-based platform for real-time monitoring, analysis, and visualization. It supports JSON encoding, Gzip compression, and secure authentication via a Log Shipping Token.

Purpose

The Observo AI Logz.io Logs destination enables users to send observability events, such as logs, to Logz.io for real-time monitoring, analysis, and visualization. This integration leverages Logz.io’s cloud-based log management platform to provide insights into system performance and troubleshoot issues efficiently. It ensures seamless data flow from your observability pipeline to Logz.io with optimized encoding and compression.

Prerequisites

Before configuring the Logz.io Logs destination in Observo AI, ensure the following requirements are met to facilitate seamless data ingestion:

  • Logz.io Account:

    • Create a Logz.io account if one does not already exist. This account serves as the hub for your observability data.

    • Ensure the account is active and configured to accept data via the Logz.io listener URL.

    • Note the region of your Logz.io account (such as AWS US East or Azure West Europe) to determine the correct listener URL.

  • Log Shipping Token:

    • Generate a Log Shipping Token in the Logz.io platform to authenticate data ingestion.

    • Navigate to Settings > Manage Tokens in the Logz.io UI, create a new token, and securely store its value.

    • Ensure the token has permissions to send logs to Logz.io.

  • Network Access:

    • Verify that the Observo AI instance can communicate with the Logz.io listener URL (for example, `https://listener.logz.io:8071` for AWS US East).

    • Check for firewall rules or network policies that may block outbound HTTPS traffic to Logz.io’s endpoint on port 8071 (or 8081 for JSON encoding issues).

  • Logz.io Tags (Optional):

    • Prepare tags for organizing logs in Logz.io, such as `env:production` or `service:webapp`.

    • Tags can be configured in Observo AI to align with Logz.io’s tagging conventions.

| Prerequisite | Description | Notes |
| --- | --- | --- |
| Logz.io Account | Hub for observability data | Must be active and configured for logs |
| Log Shipping Token | Authenticates data ingestion | Securely store the Log Shipping Token |
| Network Access | Enables communication with Logz.io | Ensure HTTPS connectivity to the Logz.io listener URL |
| Logz.io Tags | Organizes logs in Logz.io | Optional, but recommended for filtering |

Integration

The Integration section outlines default configurations for the Logz.io Logs destination. To configure Logz.io Logs as a destination in Observo AI, follow these steps:

  1. Log in to Observo AI:

    • Navigate to the Destinations tab.

    • Click the Add Destinations button and select Create New.

    • Choose Logz.io Logs from the list of available destinations to begin configuration.

  2. General Settings:

    • Name: Add a unique identifier, such as logzio-logs-1.

    • Description (Optional): Provide a description, such as “Sends observability logs to Logz.io.”

    • URL / URI: Enter the Logz.io listener URL with the Log Shipping Token and type.

      Use your Logz.io region's listener URL in the format:

      `https://<LISTENER-HOST>:8071?token=<LOG-SHIPPING-TOKEN>&type=observo`

      Replace `<LOG-SHIPPING-TOKEN>` with your token and `<LISTENER-HOST>` with the region-specific host: `listener.logz.io` if your account is hosted on AWS US East, or `listener-nl.logz.io` if hosted on Azure West Europe. The port depends on the protocol: HTTP uses port 8070 and HTTPS uses port 8071. If you get a 400 error when using JSON encoding, try port 8081 instead of 8071. For the available `type` values, see https://docs.logz.io/user-guide/log-shipping/built-in-log-types.html.
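The URL rules above can be sketched as a small helper. This is an illustrative sketch only: the region-to-host map and port rules come from this page, while the function name and structure are hypothetical, not part of Observo AI.

```python
from urllib.parse import urlencode

# Region-specific listener hosts described above (illustrative subset).
REGION_HOSTS = {
    "aws-us-east": "listener.logz.io",
    "azure-west-europe": "listener-nl.logz.io",
}

def build_listener_url(region, token, log_type="observo",
                       use_https=True, json_400_workaround=False):
    """Assemble the listener URL. Hypothetical helper, not Observo AI code."""
    host = REGION_HOSTS[region]
    # HTTPS uses port 8071 and HTTP uses port 8070; port 8081 is the
    # documented workaround for 400 errors with JSON encoding.
    if json_400_workaround:
        port = 8081
    else:
        port = 8071 if use_https else 8070
    scheme = "https" if use_https else "http"
    query = urlencode({"token": token, "type": log_type})
    return f"{scheme}://{host}:{port}?{query}"

print(build_listener_url("aws-us-east", "MY-TOKEN"))
# https://listener.logz.io:8071?token=MY-TOKEN&type=observo
```

Swap in the host and port for your own region and protocol; the token shown is a placeholder.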

  3. Encoding:

    • Encoding Codec: The codec to use for encoding events. Default: JSON Encoding.

      Options and their sub-options:

      • JSON Encoding
        • Pretty JSON (False): Format JSON with indentation and line breaks for better readability.
      • logfmt Encoding: None
      • Apache Avro Encoding
        • Avro Schema: Specify the Apache Avro schema definition for serializing events. Example: { "type": "record", "name": "log", "fields": [{ "name": "message", "type": "string" }] }
      • Newline Delimited JSON Encoding: None
      • No encoding: None
      • Plain text encoding: None
      • Parquet
        • Include Raw Log (False): Capture the complete log message as an additional field (observo_record) alongside the given schema, so the Parquet file contains a field named "observo_record" in addition to the Parquet schema.
        • Parquet Schema: Enter the Parquet schema for encoding. Example: message root { optional binary stream; optional binary time; optional group kubernetes { optional binary pod_name; optional binary pod_id; optional binary docker_id; optional binary container_hash; optional binary container_image; optional group labels { optional binary pod-template-hash; } } }
      • Common Event Format (CEF)
        • CEF Device Event Class ID: Provide a unique identifier for categorizing the type of event (maximum 1023 characters). Example: login-failure
        • CEF Device Product: Specify the product name that generated the event (maximum 63 characters). Example: Log Analyzer
        • CEF Device Vendor: Specify the vendor name that produced the event (maximum 63 characters). Example: Observo
        • CEF Device Version: Specify the version of the product that generated the event (maximum 31 characters). Example: 1.0.0
        • CEF Extensions (Add): Define custom key-value pairs for additional event data fields in CEF format.
        • CEF Name: Provide a human-readable description of the event (maximum 512 characters). Example: cef.name
        • CEF Severity: Indicate the importance of the event with a value from 0 (lowest) to 10 (highest). Example: 5
        • CEF Version (Select): Specify which version of the CEF specification to use for formatting: CEF specification version 0.1, or CEF specification version 1.x.
      • CSV Format
        • CSV Fields (Add): Specify the field names to include as columns in the CSV output, in order. Examples: timestamp, host, message
        • CSV Buffer Capacity (Optional): Set the internal buffer size (in bytes) used when writing CSV data. Example: 8192
        • CSV Delimiter (Optional): Set the character that separates fields in the CSV output. Example: ,
        • Enable Double Quote Escapes (True): When enabled, quotes in field data are escaped by doubling them. When disabled, an escape character is used instead.
        • CSV Escape Character (Optional): Set the character used to escape quotes when double-quote escaping is disabled.
        • CSV Quote Character (Optional): Set the character used for quoting fields in the CSV output. Example: "
        • CSV Quoting Style (Optional): Control when field values should be wrapped in quote characters. Options: Always quote all fields; Quote only when necessary; Never use quotes; Quote all non-numeric fields.
      • Protocol Buffers
        • Protobuf Message Type: Specify the fully qualified message type name for Protobuf serialization. Example: package.Message
        • Protobuf Descriptor File: Specify the path to the compiled Protobuf descriptor file (.desc). Example: /path/to/descriptor.desc
      • Graylog Extended Log Format (GELF): None
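With the default JSON Encoding, the default Newline Delimited framing, and optional Gzip compression, the payload sent to the listener is newline-delimited JSON. A minimal sketch with illustrative field names (not a prescribed schema):

```python
import gzip
import json

# Two example events; the field names here are illustrative only.
events = [
    {"message": "login failed", "level": "error", "service": "webapp"},
    {"message": "db timeout",   "level": "warn",  "service": "webapp"},
]

# Newline-delimited JSON: one serialized event per line.
ndjson = "".join(json.dumps(e) + "\n" for e in events).encode("utf-8")

# Optional Gzip compression, matching the Gzip option in Advanced Settings.
payload = gzip.compress(ndjson)

# Round-trip to confirm the payload decodes back to the same events.
decoded = [json.loads(line)
           for line in gzip.decompress(payload).decode().splitlines()]
assert decoded == events
```

This is what the destination produces on the wire under the default settings; other codecs (Avro, Parquet, CEF, and so on) replace the JSON serialization step.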

  4. Request Configuration (Optional):

    • Request Concurrency: Select the request concurrency. Default: Adaptive concurrency

      | Option | Description |
      | --- | --- |
      | Adaptive concurrency | Concurrency is managed by the Adaptive Request Concurrency feature. |
      | A fixed concurrency of 1 | Only one request can be outstanding at any given time. |

  5. Batching Configuration (Optional):

    • Max Bytes in Batch (Empty): The maximum size of a batch processed by this sink, measured by the uncompressed size of the events before serialization or compression.

    • Batch Timeout Secs (Empty): The maximum age of a batch before it is flushed.
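The two batching settings above can be modeled as a simple flush policy: a batch is flushed when its uncompressed size reaches the byte limit or when it exceeds the timeout. A toy model, not Observo AI internals:

```python
class BatchPolicy:
    """Toy model of Max Bytes in Batch and Batch Timeout Secs
    (illustrative only; not Observo AI internals)."""

    def __init__(self, max_bytes=1_048_576, timeout_secs=1.0):
        self.max_bytes = max_bytes
        self.timeout_secs = timeout_secs
        self.size = 0
        self.started = None

    def add(self, event_bytes, now):
        """Record an event's uncompressed size; True means flush the batch."""
        if self.started is None:
            self.started = now          # age is measured from the first event
        self.size += event_bytes
        return (self.size >= self.max_bytes
                or now - self.started >= self.timeout_secs)

    def reset(self):
        """Start a fresh batch after a flush."""
        self.size, self.started = 0, None

p = BatchPolicy(max_bytes=100, timeout_secs=5.0)
print(p.add(60, now=0.0))   # False: under both limits
print(p.add(60, now=1.0))   # True: 120 bytes reaches the size limit
```

Raising the byte limit or timeout trades latency for larger, more efficient requests; lowering them does the opposite.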

  6. Framing:

    • Framing Method: Select the framing method. Default: Newline Delimited

      | Option | Description |
      | --- | --- |
      | Raw Event data (not delimited) | Event data is not delimited at all |
      | Single Character Delimited | Event data is delimited by a single ASCII (7-bit) character |
      | Prefixed with Byte Length | Event data is prefixed with its length in bytes |
      | Newline Delimited | Event data is delimited by a newline (LF) character |
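The framing options can be illustrated with small encoders. The 4-byte big-endian length prefix and the record-separator delimiter below are assumptions for illustration; check your sink's actual wire format:

```python
import struct

def frame_newline(events):
    """Newline Delimited: one event per line, each terminated by LF."""
    return b"".join(e + b"\n" for e in events)

def frame_length_prefixed(events):
    """Prefixed with Byte Length: each event preceded by its length.
    The 4-byte big-endian prefix is an illustrative assumption."""
    return b"".join(struct.pack(">I", len(e)) + e for e in events)

def frame_char_delimited(events, delim=b"\x1e"):
    """Single Character Delimited: events joined by one ASCII character
    (the record-separator default here is an illustrative choice)."""
    return delim.join(events)

print(frame_newline([b"a", b"bb"]))        # b'a\nbb\n'
print(frame_length_prefixed([b"hi"]))      # b'\x00\x00\x00\x02hi'
```

Newline framing only works when events cannot contain raw newlines (as with JSON encoding); length prefixing is safe for arbitrary binary payloads.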

  7. Buffering Configuration:

    • Buffer Type: Specifies the buffering mechanism for event delivery.

      • Memory: High-performance, in-memory buffering.
        • Max Events: The maximum number of events allowed in the buffer. Default: 500
        • When Full: Event handling behavior when the buffer is full. Default: Block
          • Block: Wait for free space in the buffer. This applies backpressure up the topology, signalling that sources should slow the acceptance/consumption of events. No data is lost, but data piles up at the edge.
          • Drop Newest: Drop the event instead of waiting for free space in the buffer. The event is intentionally dropped. This mode is typically used when performance is the highest priority and it is preferable to temporarily lose events rather than slow the acceptance/consumption of events.
      • Disk: Lower-performance, less costly, on-disk buffering.
        • Max Bytes Size: The maximum number of bytes allowed in the buffer. Must be at least 268435488.
        • When Full: Event handling behavior when the buffer is full. Default: Block (same Block and Drop Newest behavior as the Memory buffer).
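The two When Full behaviors can be modeled with a bounded queue. This is illustrative only, not Observo AI internals:

```python
from collections import deque

def offer(buf, event, max_events, when_full):
    """Toy model of the When Full setting for a bounded buffer."""
    if len(buf) < max_events:
        buf.append(event)
        return "accepted"
    if when_full == "drop-newest":
        return "dropped"    # event intentionally discarded, throughput preserved
    return "blocked"        # caller must wait: backpressure propagates upstream

buf = deque()
for i in range(3):
    print(offer(buf, i, max_events=2, when_full="block"))
# accepted, accepted, blocked
```

Block favors completeness at the cost of upstream slowdown; Drop Newest favors throughput at the cost of losing events under sustained overload.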

  8. Advanced Settings:

    • Compression: Select the compression algorithm Default: None

      | Option | Description |
      | --- | --- |
      | None | Data is stored or transmitted in its original, uncompressed form |
      | Gzip compression | DEFLATE-based compression with headers for file storage |

  9. Save and Test Configuration:

    • Save the configuration settings in Observo AI.

    • Send sample log data and verify it appears in Logz.io’s Log Management interface under the specified tags or type.

Example Scenario

To illustrate the Logz.io Logs destination’s functionality, consider a scenario where you configure Observo AI to send application error logs to Logz.io for monitoring a web service:

In Observo AI, create a pipeline to collect error logs from a web service:

General Settings

| Field | Value | Description |
| --- | --- | --- |
| Name | logzio-webapp-logs | Unique identifier for the destination. |
| Description | Send webapp logs to Logz.io. | Provides context for the destination's purpose. |
| URL / URI | https://listener.logz.io:8071?token=A94A8FE5CCB19BA61C4C08&type=observo | Specifies the listener URL of the Logz.io instance. |

Encoding

| Field | Value | Description |
| --- | --- | --- |
| Encoding Codec | json | Select json for the codec. |

Advanced Settings

| Field | Value | Description |
| --- | --- | --- |
| Compression | GZIP | Use the GZIP compression algorithm. |

Test Configuration:

  • Save settings, route the pipeline’s output to the Logz.io destination, and send sample error log data to verify ingestion in Logz.io.

  • In Logz.io, navigate to the Log Management interface, filter by `service:webapp`, and verify that error logs are displayed.

  • This setup enables real-time monitoring of application errors with compressed data transfer to optimize bandwidth.

Troubleshooting

If issues arise with the Logz.io Logs destination, use the following steps to diagnose and resolve them:

  • Verify Configuration Settings:

    • Ensure the URL / URI, including the Log Shipping Token and region-specific host, is correctly entered and matches the Logz.io account configuration.

    • If using JSON encoding and encountering a 400 error, switch the port to 8081 such as `https://listener.logz.io:8081?token=<LOG-SHIPPING-TOKEN>&type=observo`.

  • Check Authentication:

    • Verify that the Log Shipping Token is valid and has not been revoked or expired.

    • Regenerate the token in Logz.io if necessary and update the Observo AI configuration.

  • Monitor Logs:

    • Check Observo AI logs for errors or warnings related to log data transmission to Logz.io.

    • In the Logz.io platform, navigate to Log Management to confirm that logs are arriving with the expected tags or type.

  • Validate Network Connectivity:

    • Ensure that the Observo AI instance can reach the Logz.io listener URL such as `https://listener.logz.io:8071`.

    • Check for firewall rules or network policies blocking HTTPS traffic on port 8071 (or 8081 for JSON encoding).

  • Test Data Flow:

    • Send sample log data through Observo AI and monitor its arrival in Logz.io’s Log Management interface.

    • Use the Analytics tab in the targeted Observo AI pipeline to monitor data volume and ensure expected throughput.

  • Check Quotas and Limits:

    • Verify that the Logz.io account is not hitting log ingestion limits or quotas (refer to Logz.io documentation).

    • Adjust batching settings such as Max Bytes in Batch, Batch Timeout Secs if backpressure or slow data transfer occurs.
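The checks above can be summarized as a status-code triage for responses from the listener. The mapping of 401/403 to token problems and 429 to rate limiting follows general HTTP conventions, not Logz.io-specific documentation, and the helper name is hypothetical:

```python
def classify_listener_status(code):
    """Map an HTTP status from the listener to a likely next step.
    401/403 and 429 interpretations are general HTTP assumptions."""
    if 200 <= code < 300:
        return "ok: logs accepted"
    if code == 400:
        return "bad request: with JSON encoding, retry on port 8081"
    if code in (401, 403):
        return "auth failure: verify or regenerate the Log Shipping Token"
    if code == 429:
        return "rate limited: check Logz.io ingestion quotas"
    return f"unexpected status {code}: check Observo AI logs"

print(classify_listener_status(400))
# bad request: with JSON encoding, retry on port 8081
```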

| Issue | Possible Cause | Resolution |
| --- | --- | --- |
| Logs not appearing in Logz.io | Incorrect URL / URI or Log Shipping Token | Verify URL, token, and region-specific host |
| Authentication errors | Expired or invalid Log Shipping Token | Regenerate token and update configuration |
| 400 error with JSON encoding | Incorrect port for JSON encoding | Switch to port 8081 in URL |
| Connection failures | Network or firewall issues | Check network policies and HTTPS connectivity |
| Slow log transfer | Backpressure or rate limiting | Adjust batching settings or check Logz.io quotas |

Resources

For additional guidance and detailed information, refer to the following resources:
