Overview

Introduction

Observo's pipeline configuration lets users efficiently manage and route observability data from various sources, through transformations, to target sinks (destinations).

Sources represent the origin points from which observability data is collected.

Examples of Supported Sources:

  • AWS S3

  • Kafka

  • OpenTelemetry

  • Syslog

Sinks are the endpoints that receive data after it has been processed and transformed.

Examples of Supported Sinks:

  • AWS S3

  • Splunk

  • Logz.io

  • Elasticsearch

  • Logstash

Transforms are the operations applied to data as it flows from sources to sinks.

Common Transform Types:

  • Add Fields: Adds metadata or supplementary information to the data stream.

  • Aggregation: Combines multiple data entries into summarized forms.

  • Deduplication: Removes redundant data entries.

  • Filtering: Restricts data flow based on specified criteria, such as regular-expression matches.

  • Encoding: Applies encoding techniques like URL or Base64 to field values.

  • Rename Fields: Standardizes field names across different data streams.

  • Masking: Obscures sensitive data for protection.
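The transform types above can be sketched as simple functions over events. This is an illustrative sketch only, not Observo's implementation: the event shape (a flat dict of field names to values) and all function names are assumptions made for the example.

```python
import base64
import re

def add_fields(event, extra):
    """Add Fields: attach supplementary metadata to an event."""
    return {**event, **extra}

def filter_events(events, field, pattern):
    """Filtering: keep only events whose field matches a regex."""
    rx = re.compile(pattern)
    return [e for e in events if rx.search(str(e.get(field, "")))]

def encode_field(event, field):
    """Encoding: Base64-encode one field's value."""
    value = str(event[field]).encode()
    return {**event, field: base64.b64encode(value).decode()}

def mask_field(event, field, keep=4):
    """Masking: obscure all but the last `keep` characters of a field."""
    value = str(event[field])
    return {**event, field: "*" * max(len(value) - keep, 0) + value[-keep:]}
```

For example, `mask_field({"card": "4111111111111111"}, "card")` would leave only the last four digits visible, which is the kind of protection the Masking transform provides for sensitive data.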

Pipelines

A pipeline is a sequence of processing steps that connects sources, transforms, and sinks.
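The source-to-transforms-to-sink flow can be sketched as a small driver loop. This is a hedged illustration of the concept, not Observo's API: `run_pipeline`, the callable transforms, and the convention that a filtering transform drops an event by returning `None` are all assumptions made for this example.

```python
def run_pipeline(source, transforms, sink):
    """Pull events from a source, apply each transform in order,
    and deliver surviving events to a sink."""
    for event in source:
        for transform in transforms:
            event = transform(event)
            if event is None:  # a filtering transform dropped the event
                break
        else:
            sink(event)

# Usage: filter out one event, uppercase a field, collect into a list sink.
events = [{"msg": "hello"}, {"msg": "drop me"}]
out = []
run_pipeline(
    events,
    [lambda e: None if "drop" in e["msg"] else e,
     lambda e: {**e, "msg": e["msg"].upper()}],
    out.append,
)
# out == [{"msg": "HELLO"}]
```

Ordering matters in a design like this: placing filters early in the transform list avoids spending work on events that will be dropped anyway.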

These configurations ensure that Observo can handle various sophisticated data routing and processing needs, making it a robust solution for large-scale observability data management.
