Azure Blob Storage

Azure Blob Storage is a scalable object storage solution for unstructured data, such as text, binary data, logs, and media files. It is commonly used for data archiving, backup, and analytics. This document outlines the parameters required for configuring Azure Blob Storage as a destination for event storage.

For more details, refer to the Azure Blob Storage Documentation.

Purpose

The Observo AI Azure Blob Storage destination enables users to send telemetry data, including logs, metrics, and traces, to Microsoft Azure Blob Storage for scalable, cost-effective storage and further analysis. This destination integrates seamlessly with Azure's cloud ecosystem, allowing organizations to centralize telemetry data for observability, compliance, and analytics purposes.

Prerequisites

Before configuring the Azure Blob Storage destination in Observo AI, ensure the following requirements are met to facilitate seamless data ingestion:

  • Azure Storage Account:

    • Create an Azure Storage account in the Azure portal if one does not already exist. This account serves as the storage hub for your data (Create a Storage Account).

    • Ensure the storage account is accessible and configured for write operations.

    • Note the storage account name and the container name where data will be stored.

  • Authentication:

    • Register an application in Azure Active Directory (Azure AD) to handle authentication for data ingestion (Register an Application).

    • Navigate to "App registrations" in the Azure portal, create a new registration, and note the Application (client) ID and Directory (tenant) ID.

    • Create a client secret under "Certificates & secrets" and securely store its value.

    • Alternatively, obtain an access key for the storage account (Manage Storage Account Access Keys).

  • Role Assignment:

    • Assign the "Storage Blob Data Contributor" role to the Azure AD application or service principal for the storage account to grant necessary permissions (Assign Azure Roles).

    • Verify the role assignment in the storage account’s "Access control (IAM)" section.

  • Blob Container:

    • Create a blob container within the storage account to store the telemetry data (Create a Container).

    • Ensure the container is accessible and matches the region of your storage account for optimal performance.
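The prerequisite steps above can be sketched with the Azure CLI. This is an illustrative sequence, not an official procedure: the resource group, account, app, and container names are placeholders, and `<appId>` / `<subscription-id>` must be filled in from your own environment.

```shell
# Sketch using the Azure CLI; all names and IDs below are placeholders.
az storage account create \
  --name myobservostorage --resource-group my-rg \
  --location eastus --sku Standard_LRS

# Register an app for authentication and create a client secret.
az ad app create --display-name observo-ingest   # note the returned appId and tenant
az ad app credential reset --id <appId>          # prints the client secret: store it securely

# Grant the app write access to blobs in the storage account.
az role assignment create --assignee <appId> \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/myobservostorage"

# Create the container that will receive telemetry.
az storage container create --name telemetry \
  --account-name myobservostorage --auth-mode login
```

Verify the role assignment afterwards in the storage account's "Access control (IAM)" section, as described above.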

| Prerequisite | Description | Notes |
| --- | --- | --- |
| Azure Storage Account | Storage hub for telemetry data | Must be accessible for write operations |
| Authentication | Handles secure data ingestion | Store the Client ID, Tenant ID, and Client Secret, or an Access Key |
| Role Assignment | Grants permissions to the application | Assign the "Storage Blob Data Contributor" role |
| Blob Container | Storage location for data | Create the container in the storage account |

Integration

The Integration section outlines default configurations. To tailor the setup to your environment, consult the Configuration Parameters section for advanced options.

To configure Azure Blob Storage as a destination in Observo AI, follow these steps:

  1. Log in to Observo AI:

    • Navigate to the Destinations tab.

    • Click the Add Destinations button and select Create New.

    • Choose Azure Blob Storage from the list of available destinations to begin configuration.

  2. General Settings:

    • Name: Add a unique identifier such as azure-blob-storage-1.

    • Description (Optional): Provide a description for the destination.

    • Container Name: Enter the name of the container in your Azure Blob Storage account.

      Examples

      myblob

      myobservostorage

    • Blob Prefix: Enter the Prefix to apply to all blob keys. Useful for partitioning objects. Must end in / to act as a directory path. Default: %Y/%m/%d

      Examples

      date=%F/hour=%H

      year=%Y/month=%m/day=%d

      application_id={{ application_id }}/date=%F

      %Y/%m/%d

      date=%F

    • Blob Append UUID to Timestamp (True): Whether to append a UUID v4 token to the end of the blob key’s timestamp portion. This ensures blob key uniqueness in high-throughput use cases.

      Example

      For blob key `date=2022-07-18/1658176486`, setting this field to `true` would result in an object key that looked like `date=2022-07-18/1658176486-30f6652c-71da-4f9f-800d-a1189c47c547`.

    • Blob Time Format: The timestamp format for the time component of the blob key. By default, blob keys are appended with a timestamp (in epoch seconds) reflecting when the objects are sent to Azure Blob Storage. The resulting blob key is the key prefix followed by the formatted timestamp, e.g. date=2022-07-18/1658176486. Supports strftime specifiers. Default: %s

      Example

      %s

  3. Encoding:

    • Encoding Codec: The codec to use for encoding events. Default: JSON Encoding.

      All codecs share the following sub-options:

      • Fields to exclude from serialization (Add): List of fields excluded from the encoded event. Example: message.payload

      • Fields to include in serialization (Add): List of fields included in the encoded event; all other fields are ignored. "Fields to exclude from serialization" and "Fields to include in serialization" are mutually exclusive; both cannot contain values simultaneously. Example: message.payload

      • Encoding Timestamp Format (Select):

        - RFC 3339 timestamp: Formats timestamps as RFC 3339 strings (default).
        - Unix timestamp: Formats timestamps as Unix epoch values in seconds.
        - Unix timestamp (Float): Formats timestamps as Unix epoch values in floating point.
        - Unix timestamp (Milliseconds): Formats timestamps as Unix epoch values in milliseconds.
        - Unix timestamp (Microseconds): Formats timestamps as Unix epoch values in microseconds.
        - Unix timestamp (Nanoseconds): Formats timestamps as Unix epoch values in nanoseconds.

      Codec-specific sub-options:

      • JSON Encoding

        - Pretty JSON (False): Format JSON with indentation and line breaks for better readability.

      • logfmt Encoding: No additional sub-options.

      • Apache Avro Encoding

        - Avro Schema: The Apache Avro schema definition for serializing events. Example: { "type": "record", "name": "log", "fields": [{ "name": "message", "type": "string" }] }

      • Newline Delimited JSON Encoding: No additional sub-options.

      • No encoding: No additional sub-options.

      • Plain text encoding: No additional sub-options.

      • Parquet

        - Include Raw Log (False): Capture the complete log message in an additional field (observo_record) alongside the given schema; the Parquet file will then contain a field named "observo_record" in addition to the schema fields.
        - Parquet Schema: The Parquet schema used for encoding. Example: message root { optional binary stream; optional binary time; optional group kubernetes { optional binary pod_name; optional binary pod_id; optional binary docker_id; optional binary container_hash; optional binary container_image; optional group labels { optional binary pod-template-hash; } } }

      • Common Event Format (CEF)

        - CEF Device Event Class ID: A unique identifier categorizing the event type (maximum 1023 characters). Example: login-failure
        - CEF Device Product: The product name that generated the event (maximum 63 characters). Example: Log Analyzer
        - CEF Device Vendor: The vendor that produced the event (maximum 63 characters). Example: Observo
        - CEF Device Version: The version of the product that generated the event (maximum 31 characters). Example: 1.0.0
        - CEF Extensions (Add): Custom key-value pairs for additional event data fields in CEF format.
        - CEF Name: A human-readable description of the event (maximum 512 characters). Example: cef.name
        - CEF Severity: The importance of the event, from 0 (lowest) to 10 (highest). Example: 5
        - CEF Version (Select): The CEF specification version to use for formatting, either 0.1 or 1.x.

      • CSV Format

        - CSV Fields (Add): The field names to include as columns in the CSV output, in order. Examples: timestamp, host, message
        - CSV Buffer Capacity (Optional): The internal buffer size (in bytes) used when writing CSV data. Example: 8192
        - CSV Delimiter (Optional): The character that separates fields in the CSV output. Example: ,
        - Enable Double Quote Escapes (True): When enabled, quotes in field data are escaped by doubling them; when disabled, an escape character is used instead.
        - CSV Escape Character (Optional): The character used to escape quotes when double-quote escaping is disabled.
        - CSV Quote Character (Optional): The character used for quoting fields in the CSV output. Example: "
        - CSV Quoting Style (Optional): When to wrap field values in quotes. Options: always quote all fields, quote only when necessary, never use quotes, or quote all non-numeric fields.

      • Protocol Buffers

        - Protobuf Message Type: The fully qualified message type name for Protobuf serialization. Example: package.Message
        - Protobuf Descriptor File: The path to the compiled protobuf descriptor file (.desc). Example: /path/to/descriptor.desc

      • Graylog Extended Log Format (GELF): No additional sub-options.
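The include/exclude field lists and the timestamp format option shared by all codecs can be illustrated with a small Newline Delimited JSON encoder. This is a hypothetical sketch (function and parameter names are our own), not Observo AI's encoder.

```python
import json
from datetime import datetime, timezone

def encode_ndjson(events, only_fields=None, except_fields=None, ts_format="rfc3339"):
    """Serialize events as newline-delimited JSON, honoring include/exclude field lists."""
    if only_fields and except_fields:
        # mirrors the documented rule: the two lists are mutually exclusive
        raise ValueError("include and exclude field lists are mutually exclusive")
    lines = []
    for event in events:
        record = {}
        for field, value in event.items():
            if only_fields is not None and field not in only_fields:
                continue                                   # not in the include list: ignored
            if except_fields is not None and field in except_fields:
                continue                                   # explicitly excluded
            if isinstance(value, datetime):                # Encoding Timestamp Format
                value = value.isoformat() if ts_format == "rfc3339" else value.timestamp()
            record[field] = value
        lines.append(json.dumps(record))
    return "\n".join(lines) + "\n"

events = [{"message": "claim filed",
           "timestamp": datetime(2022, 7, 18, tzinfo=timezone.utc),
           "payload": "raw-bytes"}]
print(encode_ndjson(events, except_fields=["payload"]), end="")
# {"message": "claim filed", "timestamp": "2022-07-18T00:00:00+00:00"}
```

Swapping `except_fields` for `only_fields=["message"]` would instead keep just the `message` field, discarding everything else.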

  4. TLS Configuration (Optional):

    • TLS CA: Provide the CA certificate in PEM format.

    • TLS CRT: Provide the client certificate in PEM format.

    • TLS Key: Provide the private key in PEM format.

    • TLS Key Pass: Passphrase used to unlock the encrypted key file. Has no effect unless TLS Key is set.

      Examples

      ${KEY_PASS_ENV_VAR}

      PassWord1

    • TLS Verify Certificate (False): Enables certificate verification. Certificates must be valid, meaning not expired and issued by a trusted issuer. Verification operates hierarchically, checking the validity of the certificate, then the issuer of that certificate, and so on until reaching a root certificate. Relevant for both incoming and outgoing connections. Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.

    • TLS Verify Hostname: Enables hostname verification. If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension. Only relevant for outgoing connections. Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
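The two verification options above map directly onto standard TLS client settings. As a point of reference, Python's `ssl` module defaults to the safe configuration, and disabling verification requires turning both options off explicitly:

```python
import ssl

# A default client context matches the safe settings: both verifications enabled.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: certificate chain is validated to a root CA
print(ctx.check_hostname)                    # True: hostname must match the CN or a SAN entry

# Disabling verification (NOT recommended) corresponds to turning both options off.
insecure = ssl.create_default_context()
insecure.check_hostname = False              # must be disabled before lowering verify_mode
insecure.verify_mode = ssl.CERT_NONE
```

As the warnings above state, the insecure variant should only ever be used when the risks are understood, e.g. against a throwaway test endpoint.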

  5. Batching Requirements (Default):

    • Batch Max Bytes: The maximum size of a batch processed by a sink, based on the uncompressed size of the batched events before they are serialized or compressed. Default: 500000 (500 KB)

    • Batch Max Events: The maximum size of a batch before it is flushed. Default: 1000

    • Batch Timeout Secs: The maximum age of a batch before it is flushed. Default: 300
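The three batching limits interact as "flush on whichever trips first". The sketch below is a hypothetical illustration of that logic (class and method names are our own, and uncompressed size is approximated by the JSON-serialized length), not the agent's actual implementation:

```python
import json
import time

class Batcher:
    """Flush a batch when uncompressed size, event count, or age exceeds its limit."""

    def __init__(self, max_bytes=500_000, max_events=1000, timeout_secs=300):
        self.max_bytes = max_bytes
        self.max_events = max_events
        self.timeout_secs = timeout_secs
        self.events, self.size, self.started = [], 0, None

    def add(self, event):
        """Add one event; return the flushed batch if any limit was reached, else None."""
        if self.started is None:
            self.started = time.monotonic()
        self.events.append(event)
        self.size += len(json.dumps(event).encode())   # proxy for pre-compression size
        full = (self.size >= self.max_bytes
                or len(self.events) >= self.max_events
                or time.monotonic() - self.started >= self.timeout_secs)
        return self.flush() if full else None

    def flush(self):
        batch, self.events, self.size, self.started = self.events, [], 0, None
        return batch

b = Batcher(max_events=3)
b.add({"n": 1}); b.add({"n": 2})
print(len(b.add({"n": 3})))  # 3: the third event trips Batch Max Events and flushes
```

In practice a timer also fires the timeout flush even when no new event arrives; that detail is omitted here for brevity.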

  6. Buffering Configuration (Optional):

    • Buffer Type: Specifies the buffering mechanism for event delivery.

      • Memory: High-performance, in-memory buffering.

        - Max Events: The maximum number of events allowed in the buffer. Default: 500
        - When Full: Event handling behavior when the buffer is full. Default: Block
          - Block: Wait for free space in the buffer. This applies backpressure up the topology, signalling that sources should slow the acceptance/consumption of events. No data is lost, but data will pile up at the edge.
          - Drop Newest: Drop the event instead of waiting for free space; the event is intentionally dropped. Typically used when performance is the highest priority and it is preferable to temporarily lose events rather than slow the acceptance/consumption of events.

      • Disk: Lower-performance, less costly, on-disk buffering.

        - Max Bytes Size: The maximum number of bytes allowed in the buffer. Must be at least 268435488.
        - When Full: Same options as for the Memory buffer (Block or Drop Newest). Default: Block
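The difference between the two When Full behaviors can be shown with a minimal bounded buffer. This is a hypothetical sketch (names are our own); a real "Block" mode would suspend the producer rather than return, which is only hinted at in a comment here:

```python
from collections import deque

class MemoryBuffer:
    """Bounded in-memory buffer sketch with 'block' vs 'drop_newest' when-full behavior."""

    def __init__(self, max_events=500, when_full="block"):
        self.q = deque()
        self.max_events = max_events
        self.when_full = when_full
        self.dropped = 0

    def push(self, event):
        if len(self.q) >= self.max_events:
            if self.when_full == "drop_newest":
                self.dropped += 1      # event is intentionally lost
                return False
            return False               # "block": caller must wait/retry, applying backpressure
        self.q.append(event)
        return True

    def pop(self):
        return self.q.popleft() if self.q else None

buf = MemoryBuffer(max_events=2, when_full="drop_newest")
print([buf.push(e) for e in ("a", "b", "c")])  # [True, True, False]: "c" is dropped
print(buf.dropped)                             # 1
```

With `when_full="block"` the third push would instead be retried after a `pop()` frees a slot, so no event is lost but the producer slows down.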

  7. Advanced Settings (Optional):

    • Connection String: The Azure Blob Storage account connection string. Either this field or 'Storage Account' must be specified.

    • Storage Account: The Azure Blob Storage account name. Either this field or 'Connection String' must be specified.

    • Compression: Compression algorithm to use for the request body. Default: Gzip compression

      - Gzip compression: DEFLATE compression with headers for file storage.
      - None: Data stored and transmitted in original form.

    • Healthcheck (False): Whether or not to check the health of the sink when Observo Agent starts up.

    • Time Generated Key (Optional): Customizes the log field used as TimeGenerated in Azure. By default, the setting of log_schema.timestamp_key (usually timestamp) is used. Use this field in rare cases where TimeGenerated should point to a specific log field; for example, set it to source_timestamp to have that field's value used as TimeGenerated on the Azure side.

      Example

      time_generated
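The default Gzip compression pays off because telemetry batches are highly repetitive. A quick self-contained check with Python's standard library (illustrative data, not Observo AI output):

```python
import gzip
import json

# A batch of repetitive NDJSON log lines, similar in shape to encoded telemetry.
events = [{"level": "info", "message": f"claim processed #{i}"} for i in range(1000)]
payload = ("\n".join(json.dumps(e) for e in events) + "\n").encode()

compressed = gzip.compress(payload)    # DEFLATE with gzip headers, the sink's default
print(len(compressed) < len(payload))  # True: repetitive telemetry compresses well
assert gzip.decompress(compressed) == payload   # compression is lossless
```

Choosing Compression: None trades this size reduction for slightly lower CPU usage and blobs that are directly readable without decompression.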

  8. Save and Test Configuration:

    • Save the configuration settings in Observo AI.

    • Send sample data and verify that it reaches the specified blob container in Azure Blob Storage.

Example Scenarios

InsureSafe, a fictitious insurance enterprise, manages extensive claims processing, customer interaction logs, and compliance audit trails to ensure regulatory adherence and operational efficiency. To centralize this telemetry data for long-term storage and analysis, InsureSafe sends JSON-formatted logs and metrics to an Azure Blob Storage container named insuresafe-telemetry within the storage account insuresafestorage2025 in the eastus region. Authentication is handled by an Azure AD application holding the "Storage Blob Data Contributor" role. The configuration below sets up the Azure Blob Storage destination in Observo AI using the required fields from the Integration section, giving InsureSafe scalable storage and compliance-ready analytics.

Standard Azure Blob Storage Destination Setup

Here is a standard Azure Blob Storage Destination configuration example. Only the required sections and their associated field updates are displayed in the table below:

General Settings

| Field | Value | Description |
| --- | --- | --- |
| Name | insuresafe-blob-storage | Unique identifier for the Azure Blob Storage destination |
| Description | Store claims and audit logs in Azure Blob Storage for InsureSafe | Optional description for clarity |
| Container Name | insuresafe-telemetry | Name of the Azure Blob Storage container |
| Blob Prefix | year=%Y/month=%m/day=%d/ | Partitions blobs by year, month, and day |
| Blob Append UUID to Timestamp | True | Appends a UUID to the timestamp for unique blob keys |
| Blob Time Format | %s | Timestamps in seconds since the Unix epoch |

Encoding

| Field | Value | Description |
| --- | --- | --- |
| Encoding Codec | JSON Encoding | Encodes events in JSON format |
| Pretty JSON | True | Formats JSON with indentation for readability |
| Encoding Timestamp Format | RFC 3339 timestamp | Formats timestamps as RFC 3339 strings |

TLS Configuration

| Field | Value | Description |
| --- | --- | --- |
| TLS CA | /opt/observo/certs/ca.crt | Path to CA certificate for server verification |
| TLS CRT | /opt/observo/certs/insuresafe.crt | Path to client certificate for authentication |
| TLS Key | /opt/observo/certs/insuresafe.key | Path to private key for authentication |
| TLS Key Pass | InsureSafe2025 | Passphrase to unlock the encrypted key file |
| TLS Verify Certificate | True | Enables certificate verification |
| TLS Verify Hostname | True | Verifies hostname in the TLS certificate |

Batching Configuration

| Field | Value | Description |
| --- | --- | --- |
| Batch Max Bytes | 500000 | Maximum batch size (500 KB) before flushing |
| Batch Max Events | 1000 | Maximum number of events in a batch |
| Batch Timeout Secs | 300 | Maximum age of a batch before flushing |

Buffering Configuration

| Field | Value | Description |
| --- | --- | --- |
| Buffer Type | Memory | Uses high-performance in-memory buffering |
| Max Events | 500 | Maximum number of events in the buffer |
| When Full | Block | Applies backpressure when the buffer is full |

Advanced Settings

| Field | Value | Description |
| --- | --- | --- |
| Connection String | None | Not used because Storage Account is specified |
| Storage Account | insuresafestorage2025 | Azure Blob Storage account name |
| Compression | Gzip compression | Applies Gzip compression to the request body |
| Healthcheck | True | Checks sink health on Observo Agent startup |
| Time Generated Key | timestamp | Uses the default timestamp field for TimeGenerated |

Additional Configuration

  • Save and Test: Save the configuration and send sample claims processing logs to the insuresafe-telemetry container.

  • Verify data presence in the Azure Blob Storage container using the Observo AI Analytics tab and the Azure portal to confirm successful data flow.

Outcome

With this configuration, InsureSafe successfully exports claims processing logs, customer interaction data, and compliance audit trails to Azure Blob Storage via Observo AI, enabling scalable, cost-effective storage for regulatory compliance and advanced analytics, thereby enhancing operational efficiency and data governance in its insurance operations.

Troubleshooting

If issues arise with the Azure Blob Storage destination, use the following steps to diagnose and resolve them:

  • Verify Configuration Settings:

    • Ensure all fields, such as Storage Account Name, Container Name, Client ID, Tenant ID, Client Secret, or Access Key, are correctly entered and match Azure configurations.

  • Check Authentication:

    • Verify that the client secret is valid and has not expired, or confirm the access key is correct.

    • Confirm that the Azure AD application has the "Storage Blob Data Contributor" role assigned for the storage account.

  • Monitor Logs:

    • Check Observo AI logs for errors or warnings related to data transmission.

    • In the Azure portal, navigate to the storage account and inspect the blob container to confirm data arrival.

  • Validate Container Configuration:

    • Ensure the specified container exists and is accessible within the storage account.

  • Network and Connectivity:

    • Check for firewall rules or network policies that may block communication between Observo AI and Azure Blob Storage.

    • Verify that the storage account endpoint is accessible.

  • Test Data Flow:

    • Send sample data and monitor its arrival in the blob container.

    • Use the Analytics tab in the targeted Observo AI pipeline to monitor data volume and ensure expected throughput.

  • Check Quotas and Limits:

    • Review the storage account's scalability targets (request rate and bandwidth) in the Azure portal if throughput stalls despite a correct configuration.

Common issues and resolutions:

| Issue | Possible Cause | Resolution |
| --- | --- | --- |
| Data not appearing in container | Incorrect storage account or container name | Verify account and container names in the configuration |
| Authentication errors | Expired or incorrect client secret or access key | Regenerate the secret or key and update the configuration |
| Connection failures | Network or firewall issues | Check network policies and connectivity |
| Slow data transfer | Backpressure or rate limiting | Adjust batching settings or check Azure quotas |

Resources

For additional guidance and detailed information, refer to the following resources:
