Loki

The Observo AI Loki destination sends log data to a Loki instance for scalable, index-free log aggregation and querying. It supports configurable labels, authentication, batching, and optional TLS, and integrates seamlessly with Grafana for real-time visualization and analysis.

Purpose

The Observo AI Loki destination enables users to send log data to Loki for efficient log aggregation, storage, and querying in a scalable, cloud-native environment. This integration allows organizations to leverage Loki’s lightweight, index-free log storage to analyze observability events alongside metrics and traces. It provides flexible label-based log stream creation, ensuring seamless integration with Grafana for visualization and troubleshooting.

Prerequisites

Before configuring the Loki destination in Observo AI, ensure the following requirements are met to facilitate seamless log data ingestion:

  • Loki Instance:

    • Set up a Loki instance, either self-hosted or via a managed service like Grafana Cloud, to serve as the hub for log data.

    • Ensure the Loki instance is active and configured to accept log data via its HTTP endpoint.

    • Note the Loki endpoint URL (e.g., https://logs-prod-us-central1.grafana.net/loki/api/v1/push for Grafana Cloud).

  • Authentication Credentials:

    • Prepare authentication credentials based on the chosen strategy: Basic (username/password) or Authorization Header (Bearer token).

    • For Basic authentication, obtain the username and password from the Loki provider and securely store them.

    • For Bearer authentication, generate a token (e.g., Grafana Cloud API token) and securely store its value.

  • Network Access:

    • Verify that the Observo AI instance can communicate with the Loki endpoint over HTTPS.

    • Check for firewall rules or network policies that may block outbound HTTPS traffic to the Loki endpoint.

  • Loki Labels (Optional):

    • Prepare labels for organizing log streams in Loki (e.g., env=production, app=webapp).

    • Labels can be static or dynamically mapped from log event fields to align with Loki’s stream-based querying.


| Prerequisite | Description | Notes |
| --- | --- | --- |
| Loki Instance | Hub for log data | Must be active and configured for log ingestion |
| Authentication Credentials | Authenticates log data ingestion | Securely store username/password or token |
| Network Access | Enables communication with Loki | Ensure HTTPS connectivity to the Loki endpoint |
| Loki Labels | Organizes log streams in Loki | Required for stream creation; supports static or dynamic values |
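Before configuring the destination, you can confirm the instance is reachable: Loki exposes a readiness probe at /ready on the same host and port as the push endpoint. The sketch below uses Python's standard library; the helper names are illustrative, not part of Observo AI.

```python
import urllib.request
from urllib.error import URLError

def ready_url(endpoint: str) -> str:
    # Loki serves a readiness probe at /ready on the same host/port
    # as the push endpoint.
    return endpoint.rstrip("/") + "/ready"

def check_ready(endpoint: str, timeout: float = 5.0) -> bool:
    # Returns True when the instance answers 200 on /ready, meaning it
    # is up and willing to accept pushes.
    try:
        with urllib.request.urlopen(ready_url(endpoint), timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False
```

A failing check here usually points at a firewall rule or network policy rather than a configuration problem in Observo AI.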

Integration

The Integration section outlines default configurations for the Loki destination. To tailor the setup to your environment, consult the Loki documentation for advanced configuration options. To configure Loki as a destination in Observo AI, follow these steps:

  1. Log in to Observo AI:

    • Navigate to the Destinations tab.

    • Click the Add Destinations button and select Create New.

    • Choose Loki from the list of available destinations to begin configuration.

  2. General Settings:

    • Name: Add a unique identifier, such as loki-logs-1.

    • Description (Optional): Provide a description, e.g., “Sends observability logs to Loki.”

    • Endpoint: The base URL of the Loki instance.

      Example

      http://localhost:3100

      • Note: The URL Path value is appended to the Endpoint value.

        Example

        http://localhost:3100/loki/api/v1/push

    • Labels: The Loki labels to apply to each log stream. Templates can reference fields of the incoming log event.

      • A label key can be one of the following:

        • static_name: A literal key that needs no translation, usually paired with a static value.

          Example: Sets a static label env with the value prod:

          env=prod

        • *: All fields of the mapped log event object are used as labels. The label names match the keys in the corresponding value map.

          Example: Uses all fields in the sub object as labels, e.g., stream=stdout, id=some id:

          *={{ sub }}

        • Ending with *: Similar to the above, but adds a prefix to each key name.

          Example: Prefixes all keys in kubernetes.labels with pod_labels_ (e.g., pod_labels_team=Santiago Wanderers):

          pod_labels_*={{ kubernetes.labels }}

      • A label value can be one of the following:

        • static_value: A literal value that needs no translation.

          Example: Sets a static label env with the value development:

          env=development

        • A template that maps a field from the incoming log event.

          Example: Maps the value of log_event_key:

          app={{ log_event_key }}

      • Combining all of the above: for example, given the following incoming log line and label configuration.

        Incoming Log Line:

        { "key": 1, "sub": { "stream": "stdout", "id": "some id" }, "kubernetes": { "labels": { "team": "Wanderers"} } }

        Set Labels:

        "pod_labels_*" = "{{ kubernetes.labels }}"

        "*" = "{{ sub }}"

        env = "prod"

        The labels produced will be:

        { "stream": "stdout", "id": "some id", "pod_labels_team": "Wanderers", "env": "prod" }
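The label-expansion rules above can be sketched in a few lines of Python. This is an illustration of the template semantics, not Observo AI's actual implementation; the helper names are hypothetical.

```python
import re

def get_path(event: dict, path: str):
    # Resolve a dotted path like "kubernetes.labels" inside a nested event.
    cur = event
    for part in path.split("."):
        cur = cur[part]
    return cur

def resolve_labels(templates: dict, event: dict) -> dict:
    labels = {}
    for key, value in templates.items():
        # A value of the form {{ field.path }} maps a field from the event;
        # anything else is used as a static value.
        m = re.fullmatch(r"\{\{\s*(.+?)\s*\}\}", value)
        mapped = get_path(event, m.group(1)) if m else value
        if key == "*":
            # All keys of the mapped object become label names.
            labels.update({k: str(v) for k, v in mapped.items()})
        elif key.endswith("*"):
            # Same, but each key name gets the given prefix.
            prefix = key[:-1]
            labels.update({prefix + k: str(v) for k, v in mapped.items()})
        else:
            labels[key] = str(mapped)
    return labels
```

Running this resolver over the example event and label set reproduces the labels shown above.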

    • Compression: Select Gzip or No Compression as the compression algorithm. (Default: No Compression)

    • URL Path: The path to use in the URL of the Loki instance.

      Example

      /loki/api/v1/push

    • Out of Order Action: Out-of-order message behavior.

        • Some sources may generate events with timestamps that aren't in chronological order.

        • While the sink sorts events before sending them to Loki, another event may still arrive that is out of order with respect to the latest events already sent to Loki.

        • Prior to Loki 2.4.0, this was not supported and resulted in an error during the push request.

        • If you're using Loki 2.4.0 or newer, Accept is the preferred action; it lets Loki handle any necessary sorting/reordering.

        • If you're using an earlier version, use Drop or Rewrite Timestamp to last seen, depending on which option makes the most sense for your use case.
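The three actions can be summarized in a short sketch. The function and action names below are illustrative only; they mirror the documented behavior, not Observo AI's internal code.

```python
def handle_out_of_order(events, action="accept"):
    # events: list of (timestamp, line) pairs in arrival order.
    # Sketches the three out-of-order actions described above.
    out, last_ts = [], None
    for ts, line in events:
        if last_ts is not None and ts < last_ts:
            if action == "drop":
                continue              # discard the late event
            if action == "rewrite_timestamp":
                ts = last_ts          # rewrite to the last seen timestamp
            # "accept": forward as-is; Loki >= 2.4.0 reorders internally
        out.append((ts, line))
        last_ts = ts if last_ts is None else max(last_ts, ts)
    return out
```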

    • Tenant ID: The tenant ID to specify in requests to Loki.

      • When running Loki locally, a tenant ID is not required.

      Example

      some_tenant_id

      Example

      {{ event_field }}

    • Remove Timestamp: Whether or not to remove the timestamp from the event payload (Default: Enabled).

      • The timestamp will still be sent as event metadata for Loki to use for indexing.
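Putting the endpoint, URL path, labels, and tenant ID together: requests to Loki follow the push API shape, a JSON body of streams where each value is a [nanosecond-timestamp-as-string, log line] pair, with the tenant passed in the X-Scope-OrgID header. The sketch below illustrates that shape; the helper name is hypothetical.

```python
import json

def build_push_request(labels: dict, lines: list, tenant_id: str = None):
    # Loki push API body: one stream per label set; each value is a
    # [timestamp-in-nanoseconds-as-string, log-line] pair.
    body = json.dumps({
        "streams": [{
            "stream": labels,
            "values": [[str(ts_ns), line] for ts_ns, line in lines],
        }]
    })
    headers = {"Content-Type": "application/json"}
    if tenant_id:
        # Multi-tenant Loki reads the tenant from the X-Scope-OrgID header.
        headers["X-Scope-OrgID"] = tenant_id
    return headers, body
```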

  3. Authentication:

    • Auth Strategy: The authentication strategy to use.

      • Basic authentication: The username and password are concatenated with a colon and encoded via base64.

      • Bearer authentication: The bearer token value (OAuth2, JWT, etc) is passed as-is.

    • Auth User: The basic authentication username.

    • Auth Password: The basic authentication password.

    • Auth Token: The bearer authentication token.
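The two strategies produce a standard HTTP Authorization header. A minimal sketch (the function name is illustrative):

```python
import base64

def auth_header(strategy, user=None, password=None, token=None):
    # Basic: "user:password" joined with a colon, then base64-encoded.
    # Bearer: the token value is passed through as-is.
    if strategy == "basic":
        cred = base64.b64encode(f"{user}:{password}".encode()).decode()
        return {"Authorization": f"Basic {cred}"}
    if strategy == "bearer":
        return {"Authorization": f"Bearer {token}"}
    raise ValueError(f"unknown strategy: {strategy}")
```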

  4. Acknowledgement:

    • Whether or not end-to-end acknowledgements are enabled (Default: Disabled).

      • When enabled, any connected source that supports end-to-end acknowledgements will wait for events to be acknowledged by the sink before acknowledging them at the source.

  5. Encoding:

    • Encoding Codec: The codec to use for encoding events. Default: JSON Encoding.

      All codecs share the following common sub-options:

      • Encoding Avro Schema (Optional): The Avro schema. Example: { "type": "record", "name": "log", "fields": [{ "name": "message", "type": "string" }] }

      • Encoding Metric Tag Values (Select): Controls how metric tag values are encoded.

        • Tag values will be exposed as single strings (default)

        • Tags exposed as arrays of strings

        • Note: When set to single, only the last non-bare value of tags will be displayed with the metric. When set to full, all metric tags will be exposed as separate assignments.

      • Encoding Timestamp Format (Select): RFC3339 format or UNIX format.

      Codec-specific sub-options:

      • JSON Encoding

        • Pretty JSON (False): Format JSON with indentation and line breaks for better readability.

      • logfmt Encoding: No codec-specific sub-options.

      • Apache Avro Encoding

        • Avro Schema: Specify the Apache Avro schema definition for serializing events. Example: { "type": "record", "name": "log", "fields": [{ "name": "message", "type": "string" }] }

      • Newline Delimited JSON Encoding: No codec-specific sub-options.

      • No encoding: No codec-specific sub-options.

      • Plain text encoding: No codec-specific sub-options.

      • Parquet

        • Include Raw Log (False): Capture the complete log message as an additional field (observo_record) apart from the given schema. In addition to the Parquet schema, there will be a field named "observo_record" in the Parquet file.

        • Parquet Schema: Enter the Parquet schema for encoding. Example: message root { optional binary stream; optional binary time; optional group kubernetes { optional binary pod_name; optional binary pod_id; optional binary docker_id; optional binary container_hash; optional binary container_image; optional group labels { optional binary pod-template-hash; } } }

      • Common Event Format (CEF)

        • CEF Device Event Class ID: A unique identifier for categorizing the type of event (maximum 1023 characters). Example: login-failure

        • CEF Device Product: The product name that generated the event (maximum 63 characters). Example: Log Analyzer

        • CEF Device Vendor: The vendor name that produced the event (maximum 63 characters). Example: Observo

        • CEF Device Version: The version of the product that generated the event (maximum 31 characters). Example: 1.0.0

        • CEF Extensions (Add): Custom key-value pairs for additional event data fields in CEF format.

        • CEF Name: A human-readable description of the event (maximum 512 characters). Example: cef.name

        • CEF Severity: The importance of the event, from 0 (lowest) to 10 (highest). Example: 5

        • CEF Version (Select): The version of the CEF specification to use for formatting: CEF specification version 0.1 or CEF specification version 1.x

      • CSV Format

        • CSV Fields (Add): The field names to include as columns in the CSV output, in order. Examples: timestamp, host, message

        • CSV Buffer Capacity (Optional): The internal buffer size (in bytes) used when writing CSV data. Example: 8192

        • CSV Delimiter (Optional): The character that separates fields in the CSV output. Example: ,

        • Enable Double Quote Escapes (True): When enabled, quotes in field data are escaped by doubling them. When disabled, an escape character is used instead.

        • CSV Escape Character (Optional): The character used to escape quotes when double-quote escaping is disabled.

        • CSV Quote Character (Optional): The character used for quoting fields in the CSV output. Example: "

        • CSV Quoting Style (Optional): When field values should be wrapped in quote characters: always quote all fields, quote only when necessary, never use quotes, or quote all non-numeric fields.

      • Protocol Buffers

        • Protobuf Message Type: The fully qualified message type name for Protobuf serialization. Example: package.Message

        • Protobuf Descriptor File: The path to the compiled protobuf descriptor file (.desc). Example: /path/to/descriptor.desc

      • Graylog Extended Log Format (GELF): No codec-specific sub-options.

  • Encoding Metric Tag Values: Controls how metric tag values are encoded.

    • When set to single, only the last non-bare value of tags will be displayed with the metric.

    • When set to full, all metric tags will be exposed as separate assignments.

  • Encoding Timestamp Format: Format used for timestamp fields.

    • RFC 3339 timestamp (e.g., 2025-02-04T10:30:00Z)

    • UNIX timestamp (e.g., 1625078400)
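Both formats describe the same instant; a small illustration using Python's standard library:

```python
from datetime import datetime, timezone

ts = datetime(2025, 2, 4, 10, 30, tzinfo=timezone.utc)

rfc3339 = ts.strftime("%Y-%m-%dT%H:%M:%SZ")  # "2025-02-04T10:30:00Z"
unix = int(ts.timestamp())                   # seconds since the epoch
```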

  6. Request Configuration (Optional):

    • Request Concurrency: Set the outbound request concurrency. Default: Adaptive Concurrency.

    • Request Rate Limit Duration Secs: Set the time window for rate limiting.

      Example

      1 second

    • Request Rate Limit Num: Set the maximum number of requests allowed in the rate limit window. (Default: Unlimited)

    • Request Retry Attempts: Set the maximum number of retries for failed requests. (Default: Unlimited)

    • Request Retry Initial Backoff Secs: Set the initial wait time before retrying. (Default: 1)

    • Request Retry Max Duration Secs: Set the maximum wait time between retries. Default: 3600.

      Example

      3600 seconds

    • Request Timeout Secs: Set the request timeout. Default: 60

      Example

      60 seconds
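The retry settings combine into an exponential backoff: the wait doubles after each failed attempt, starting at the initial backoff and capped at the maximum duration. A sketch of that policy (not the exact internal implementation):

```python
def backoff_schedule(attempts: int, initial: float = 1.0,
                     max_duration: float = 3600.0) -> list:
    # Wait times before each retry: initial, 2x, 4x, ... capped at
    # max_duration (Request Retry Max Duration Secs).
    return [min(initial * 2 ** i, max_duration) for i in range(attempts)]
```

With the defaults, the first five retries wait 1, 2, 4, 8, and 16 seconds.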

  7. Batching Configuration:

    • Batch Max Bytes: The maximum size of a batch that will be processed by a sink.

      • This is based on the uncompressed size of the batched events, before they are serialized / compressed.

    • Batch Max Events: The maximum size of a batch before it is flushed.

    • Batch Timeout Secs: The maximum age of a batch before it is flushed.
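A batch is flushed when any of the three limits is hit: total uncompressed bytes, event count, or age. The class below is a sketch of that logic under illustrative names, not Observo AI's implementation.

```python
import time

class Batcher:
    # Sketch of the three flush triggers: max bytes, max events, timeout.
    def __init__(self, max_bytes: int, max_events: int, timeout_secs: float):
        self.max_bytes, self.max_events, self.timeout = max_bytes, max_events, timeout_secs
        self.events, self.size, self.started = [], 0, None

    def add(self, event: bytes):
        if self.started is None:
            self.started = time.monotonic()
        self.events.append(event)
        self.size += len(event)   # uncompressed size, before serialization

    def should_flush(self, now: float = None) -> bool:
        if not self.events:
            return False
        now = time.monotonic() if now is None else now
        return (self.size >= self.max_bytes
                or len(self.events) >= self.max_events
                or now - self.started >= self.timeout)
```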

  8. TLS Configuration (Optional):

    • TLS CA: Provide the CA certificate as an inline string in PEM format, if using a custom certificate authority.

    • TLS CRT: Provide the certificate as a string in PEM format, if applicable.

    • TLS Key: Provide the key as a string in PEM format, if applicable.

    • TLS Verify Certificate: Enable certificate verification (True/False).

      • Default: False (set to True for secure connections).

    • TLS Verify Hostname: Enable hostname verification (True/False).

      • Default: False (set to True for secure connections).
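For illustration, here is how these toggles map onto a TLS context in Python's standard library. This is a sketch of the semantics only; note that hostname verification requires certificate verification to be enabled.

```python
import ssl

def tls_context(ca_pem: str = None,
                verify_certificate: bool = True,
                verify_hostname: bool = True) -> ssl.SSLContext:
    # Illustrative mapping of the TLS settings onto an SSLContext.
    ctx = ssl.create_default_context()
    if ca_pem:
        # The CA certificate is supplied inline as a PEM string.
        ctx.load_verify_locations(cadata=ca_pem)
    if not verify_certificate:
        # Hostname checking requires certificate verification, so it
        # must be turned off before disabling verification.
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    else:
        ctx.check_hostname = verify_hostname
    return ctx
```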

  9. Buffering:

    • Buffer Type: Specifies the buffering mechanism for event delivery. Default: Empty

      Options
      Description

      Memory

      High-performance, in-memory buffering.

      • Max Events: The maximum number of events allowed in the buffer. Default: 500

      • When Full: Event handling behavior when a buffer is full. Default: Block

        • Block: Wait for free space in the buffer. This applies backpressure up the topology, signalling that sources should slow down the acceptance/consumption of events. No data is lost, but data will pile up at the edge.

        • Drop Newest: Drop the event instead of waiting for free space in the buffer. The event is intentionally dropped. This mode is typically used when performance is the highest priority and it is preferable to temporarily lose events rather than slow the acceptance/consumption of events.

      Disk

      Lower-performance, less costly, on-disk buffering.

      • Max Bytes Size: The maximum number of bytes allowed in the buffer. Must be at least 268435488.

      • When Full: Event handling behavior when a buffer is full. Default: Block

        • Block: Wait for free space in the buffer. This applies backpressure up the topology, signalling that sources should slow down the acceptance/consumption of events. No data is lost, but data will pile up at the edge.

        • Drop Newest: Drop the event instead of waiting for free space in the buffer. The event is intentionally dropped. This mode is typically used when performance is the highest priority and it is preferable to temporarily lose events rather than slow the acceptance/consumption of events.
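The two When Full behaviors can be sketched for a bounded memory buffer. The function and error choice below are illustrative only; they demonstrate the semantics, not the actual buffer implementation.

```python
from collections import deque

def offer(buffer: deque, event, max_events: int, when_full: str = "block") -> bool:
    # Sketch of the two When Full behaviors for a memory buffer.
    if len(buffer) < max_events:
        buffer.append(event)
        return True              # accepted
    if when_full == "drop_newest":
        return False             # event intentionally dropped
    # "block": signal backpressure so upstream sources slow down
    raise BlockingIOError("buffer full; apply backpressure upstream")
```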

  10. Advanced Settings:

    • Remove Label Fields: Whether or not to delete fields from the event when they are used as labels. (Default: Disabled)

  11. Save and Test Configuration:

    • Save the configuration settings in Observo AI.

    • Send sample log data and verify that it appears in the Loki instance, accessible via Grafana’s Explore interface under the specified labels.

Example Scenario

To illustrate the Loki destination’s functionality, consider a scenario where you configure Observo AI to send application logs to Loki for analysis:

In Observo AI, create a pipeline with a Source for collecting logs from a Kubernetes cluster. Then add a Loki Destination with the following settings, using your specific Loki values:

General Settings

| Field | Value | Description |
| --- | --- | --- |
| Name | loki-k8s-logs | Unique identifier for the destination. |
| Description | Send k8s log stream to Loki. | Provides context for the destination's purpose. |
| Endpoint | https://logs-prod-us-central1.grafana.net | The base URL of the Loki endpoint (the URL Path is appended to this value). |
| Labels | env=prod, app={{ app }}, pod_labels_*={{ kubernetes.labels }} | Loki labels for the log stream. |
| Compression | GZIP | Use the GZIP compression algorithm. |
| URL Path | /loki/api/v1/push | The path to use in the URL of the Loki instance. |

Encoding

| Field | Value | Description |
| --- | --- | --- |
| Encoding Codec | json | Select json for the codec. |

Authentication

| Field | Value | Description |
| --- | --- | --- |
| Auth Strategy | Basic | Select Basic and enter the Auth User and Auth Password. |

Test Configuration:

  • Save the settings and route the pipeline's output to the Loki destination.

  • Send sample log data (e.g., container stdout logs) through the pipeline.

  • In Grafana, navigate to the Explore interface, select the Loki data source, filter by labels (e.g., env=prod, app=webapp), and verify that the logs are displayed with the expected metadata.

  • This setup enables real-time log querying with efficient data transfer to Loki.

Troubleshooting

If issues arise with the Loki destination, use the following steps to diagnose and resolve them:

  • Verify Configuration Settings:

    • Ensure the Endpoint URL and labels are correctly configured and match the Loki instance setup.

    • Confirm that the labels align with the log event structure for proper stream creation.

  • Check Authentication:

    • For Basic authentication, verify that the username and password are valid and not expired.

    • For Authorization Header, ensure the token is valid and has not been revoked.

    • Regenerate credentials in the Loki provider (e.g., Grafana Cloud) if necessary and update the Observo AI configuration.

  • Monitor Logs:

    • Check Observo AI logs for errors or warnings related to log data transmission to Loki.

    • In Grafana, navigate to the Explore interface with the Loki data source to confirm that logs are arriving with the expected labels.

  • Validate Network Connectivity:

    • Ensure that the Observo AI instance can reach the Loki endpoint (e.g., https://logs-prod-us-central1.grafana.net/loki/api/v1/push) over HTTPS.

    • Check for firewall rules or network policies blocking HTTPS traffic.

  • Test Data Flow:

    • Send sample log data through Observo AI and monitor its arrival in Grafana’s Explore interface.

    • Use the Analytics tab in the targeted Observo AI pipeline to monitor data volume and ensure expected throughput.

  • Check Quotas and Limits:

    • Verify that the Loki instance is not hitting log ingestion limits or quotas (refer to Loki or Grafana Cloud documentation).

    • Adjust batching settings (e.g., Batch Max Bytes, Batch Timeout Secs) if backpressure or slow data transfer occurs.


| Issue | Possible Cause | Resolution |
| --- | --- | --- |
| Logs not appearing in Loki | Incorrect Endpoint URL or labels | Verify the Endpoint URL and label configuration |
| Authentication errors | Expired or invalid username/password or token | Regenerate credentials and update the configuration |
| Connection failures | Network or firewall issues | Check network policies and HTTPS connectivity |
| Slow log transfer | Backpressure or rate limiting | Adjust batching settings or check Loki quotas |

Resources

For additional guidance and detailed information, refer to the following resources:
