CrowdStrike LogScale Logs

CrowdStrike LogScale is a scalable, cloud-native log management platform that processes massive log and event data volumes for real-time analytics, threat detection, and observability. It uses an index-free architecture and 15x compression to reduce costs and enable sub-second queries, integrating with Observo AI for optimized data pipelines. Observo AI enhances LogScale by filtering, enriching, and routing logs, cutting data volume by up to 80% and streamlining workflows for security and IT use cases.

Purpose

The Observo AI CrowdStrike LogScale Logs destination enables users to send log data to CrowdStrike LogScale for real-time analytics, threat detection, and observability. This integration leverages LogScale’s scalable, index-free architecture and high compression to optimize log management and reduce costs. It streamlines data pipelines by filtering, enriching, and routing logs, enhancing security and IT workflows.

Prerequisites

Before configuring the CrowdStrike LogScale Logs destination in Observo AI, ensure the following requirements are met to facilitate seamless data ingestion:

  • CrowdStrike LogScale Account:

    • Create a CrowdStrike LogScale account if one does not already exist. This account serves as the hub for your log data.

    • Ensure the account is active and configured to accept log data via the LogScale API.

  • Ingest Token:

    • Generate a LogScale Ingest Token in the LogScale platform to authenticate data ingestion.

    • Navigate to the repository settings in the LogScale UI, create a new token, and securely store its value.

    • Ensure the token has permissions to ingest logs into the target repository.

  • Network Access:

    • Verify that the Observo AI instance can communicate with the LogScale endpoint, such as https://cloud.us.logscale.com.

    • Check for firewall rules or network policies that may block outbound HTTPS traffic to LogScale’s endpoint on port 443.

  • LogScale Tags (Optional):

    • Prepare tags for organizing logs in LogScale, such as env:production or service:webapp.

    • Tags can be configured in Observo AI to align with LogScale’s tagging conventions.

| Prerequisite | Description | Notes |
| --- | --- | --- |
| CrowdStrike LogScale Account | Hub for log data | Must be active and configured for logs |
| Ingest Token | Authenticates log data ingestion | Securely store the Ingest Token |
| Network Access | Enables communication with LogScale | Ensure HTTPS connectivity to the LogScale endpoint |
| LogScale Tags | Organizes logs in LogScale | Optional, but recommended for filtering |
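Under the hood, the destination authenticates each ingest request by sending the Ingest Token as a bearer token over HTTPS. The Python sketch below shows what such a request looks like, assuming LogScale's documented structured ingest path (/api/v1/ingest/humio-structured); the endpoint, token value, and helper name are illustrative placeholders, not Observo AI internals:

```python
import json

def build_ingest_request(endpoint, ingest_token, events, tags=None):
    """Build the URL, headers, and body for a LogScale structured-ingest call.

    LogScale's structured ingest API accepts a JSON array of batches,
    each with optional tags and a list of events.
    """
    url = f"{endpoint.rstrip('/')}/api/v1/ingest/humio-structured"
    headers = {
        "Authorization": f"Bearer {ingest_token}",  # the Ingest Token
        "Content-Type": "application/json",
    }
    body = json.dumps([{"tags": tags or {}, "events": events}])
    return url, headers, body

# Placeholder values -- substitute your own endpoint and token.
url, headers, body = build_ingest_request(
    "https://cloud.us.logscale.com",
    "EXAMPLE_TOKEN",
    [{"timestamp": "2025-07-30T15:00:00Z", "attributes": {"message": "hello"}}],
    tags={"env": "production"},
)
```

If the token lacks ingest permission on the target repository, LogScale rejects such a request with an authentication error, which is why the token prerequisites above matter.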

Integration

The Integration section outlines default configurations for the CrowdStrike LogScale Logs destination. To tailor the setup to your environment, consult the Configuration Parameters section in the CrowdStrike LogScale documentation for advanced options.

To configure CrowdStrike LogScale Logs as a destination in Observo AI, follow these steps:

  1. Log in to Observo AI:

    • Navigate to the Destinations tab.

    • Click the Add Destinations button and select Create New.

    • Choose CrowdStrike LogScale Logs from the list of available destinations to begin configuration.

  2. General Settings:

    • Name: Add a unique identifier, such as logscale-logs-1.

    • Description (Optional): Provide a description such as “Sends security logs to CrowdStrike LogScale.”

    • Endpoint: The base URL of the CrowdStrike LogScale instance. The scheme (http or https) must be specified. No path should be included, since the paths defined by the LogScale API are used.

      Examples

      http://127.0.0.1

      https://example.com

    • Token: The LogScale ingestion token.

      Examples

      ${HUMIO_TOKEN}

      A94A8FE5CCB19BA61C4C08

    • Event Type (Optional): Specify the event parser such as json or none. Default: none.

      Examples

      json

      none

      {{ event_type }}

    • Host Key (Optional): Overrides the name of the log field used to retrieve the hostname to send to LogScale. Default: host.

    • Index (Optional): The name of the repository to ingest into. In public-facing APIs, this must (if present) be equal to the repository used to create the ingest token used for authentication. In private cluster setups, LogScale can be configured to allow these to differ. For more information, see the LogScale documentation.

      Examples

      {{ host }}

      custom_index

    • Timestamp Key (Empty): Overrides the name of the log field used as the event timestamp.

      Example

      timestamp

    • Acknowledgements Enabled (False): Toggle to enable end-to-end acknowledgements. When enabled, any source connected to this sink that supports end-to-end acknowledgements will wait for events to be acknowledged by the sink before acknowledging them at the source.
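To make the Host Key and Timestamp Key settings concrete, the sketch below maps a flat log record into a LogScale structured event: the two parameters name which record fields supply the hostname and the event timestamp. The helper name and exact payload layout are illustrative (LogScale's structured ingest format groups events under tags), not Observo AI's implementation:

```python
def to_logscale_event(record, host_key="host", timestamp_key="timestamp"):
    """Map a flat log record into a LogScale structured-ingest batch.

    host_key / timestamp_key mirror the Host Key and Timestamp Key settings.
    """
    record = dict(record)  # avoid mutating the caller's record
    tags = {}
    if host_key in record:
        tags["host"] = record.pop(host_key)
    event = {}
    if timestamp_key in record:
        event["timestamp"] = record.pop(timestamp_key)
    event["attributes"] = record  # remaining fields become attributes
    return {"tags": tags, "events": [event]}

# A record whose hostname and timestamp live in non-default fields:
batch = to_logscale_event(
    {"hostname": "web-01", "event_timestamp": "2025-07-30T15:00:00Z",
     "message": "login ok"},
    host_key="hostname",
    timestamp_key="event_timestamp",
)
```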

  3. Encoding:

    • Encoding Codec: The codec to use for encoding events. Default: JSON Encoding.

      All codecs share the following sub-options:

        • Encoding Avro Schema (Optional): The Avro schema. Example: { "type": "record", "name": "log", "fields": [{ "name": "message", "type": "string" }] }

        • Encoding Metric Tag Values (Select): Controls how metric tag values are encoded. Options: tag values exposed as single strings (default), or tags exposed as arrays of strings. When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.

        • Encoding Timestamp Format (Select): RFC 3339 format or UNIX format.

      Codec-specific sub-options:

      JSON Encoding

        • Pretty JSON (False): Format JSON with indentation and line breaks for better readability.

      logfmt Encoding

        • No additional sub-options.

      Apache Avro Encoding

        • Avro Schema: Specify the Apache Avro schema definition for serializing events. Example: { "type": "record", "name": "log", "fields": [{ "name": "message", "type": "string" }] }

      Newline Delimited JSON Encoding

        • No additional sub-options.

      No encoding

        • No additional sub-options.

      Plain text encoding

        • No additional sub-options.

      Parquet

        • Include Raw Log (False): Capture the complete log message as an additional field (observo_record) apart from the given schema. In addition to the Parquet schema, there will be a field named "observo_record" in the Parquet file.

        • Parquet Schema: Enter the Parquet schema for encoding. Example: message root { optional binary stream; optional binary time; optional group kubernetes { optional binary pod_name; optional binary pod_id; optional binary docker_id; optional binary container_hash; optional binary container_image; optional group labels { optional binary pod-template-hash; } } }

      Common Event Format (CEF)

        • CEF Device Event Class ID: Provide a unique identifier for categorizing the type of event (maximum 1023 characters). Example: login-failure

        • CEF Device Product: Specify the product name that generated the event (maximum 63 characters). Example: Log Analyzer

        • CEF Device Vendor: Specify the vendor name that produced the event (maximum 63 characters). Example: Observo

        • CEF Device Version: Specify the version of the product that generated the event (maximum 31 characters). Example: 1.0.0

        • CEF Extensions (Add): Define custom key-value pairs for additional event data fields in CEF format.

        • CEF Name: Provide a human-readable description of the event (maximum 512 characters). Example: cef.name

        • CEF Severity: Indicate the importance of the event with a value from 0 (lowest) to 10 (highest). Example: 5

        • CEF Version (Select): Specify which version of the CEF specification to use for formatting: CEF specification version 0.1 or 1.x.

      CSV Format

        • CSV Fields (Add): Specify the field names to include as columns in the CSV output and their order. Examples: timestamp, host, message

        • CSV Buffer Capacity (Optional): Set the internal buffer size (in bytes) used when writing CSV data. Example: 8192

        • CSV Delimiter (Optional): Set the character that separates fields in the CSV output. Example: ,

        • Enable Double Quote Escapes (True): When enabled, quotes in field data are escaped by doubling them. When disabled, an escape character is used instead.

        • CSV Escape Character (Optional): Set the character used to escape quotes when double quote escapes are disabled.

        • CSV Quote Character (Optional): Set the character used for quoting fields in the CSV output. Example: "

        • CSV Quoting Style (Optional): Control when field values should be wrapped in quote characters. Options: always quote all fields, quote only when necessary, never use quotes, or quote all non-numeric fields.

      Protocol Buffers

        • Protobuf Message Type: Specify the fully qualified message type name for Protobuf serialization. Example: package.Message

        • Protobuf Descriptor File: Specify the path to the compiled protobuf descriptor file (.desc). Example: /path/to/descriptor.desc

      Graylog Extended Log Format (GELF)

        • No additional sub-options.
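As a concrete illustration of two of the codecs above, this sketch encodes the same event as Newline Delimited JSON and as logfmt. These are simplified encoders written for this example, not Observo AI's implementations:

```python
import json

def encode_ndjson(events):
    # Newline Delimited JSON: one compact JSON object per line.
    return "\n".join(json.dumps(e, separators=(",", ":")) for e in events)

def encode_logfmt(event):
    # logfmt: space-separated key=value pairs; values containing
    # spaces or '=' are quoted, with inner quotes backslash-escaped.
    parts = []
    for key, value in event.items():
        text = str(value)
        if " " in text or "=" in text:
            text = '"' + text.replace('"', '\\"') + '"'
        parts.append(f"{key}={text}")
    return " ".join(parts)

events = [{"level": "info", "message": "user logged in"}]
ndjson = encode_ndjson(events)
logfmt = encode_logfmt(events[0])
```

The same structured record thus serializes to `{"level":"info","message":"user logged in"}` under NDJSON and `level=info message="user logged in"` under logfmt; the choice mainly affects how downstream LogScale parsers and queries see the fields.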

  4. Request Configuration (Optional):

    • Request Concurrency: Configuration for outbound request concurrency. Default: Adaptive concurrency.

      | Option | Description |
      | --- | --- |
      | Adaptive concurrency | Adjusts parallelism based on system load |
      | A fixed concurrency of 1 | Processes one task at a time only |

    • Request Rate Limit Duration Secs: The time window used for the rate_limit_num option. Default: 1.

    • Request Rate Limit Num: The maximum number of requests allowed within the rate_limit_duration_secs time window. Default: Unlimited.

    • Request Retry Attempts: The maximum number of retries to make for failed requests. The default represents an infinite number of retries. Default: Unlimited.

    • Request Retry Initial Backoff Secs: The amount of time to wait, in seconds, before attempting the first retry for a failed request. After the first retry has failed, the Fibonacci sequence is used to select future backoffs. Default: 1.

    • Request Retry Max Duration Secs: The maximum amount of time to wait between retries. Default: 3600.

    • Request Timeout Secs: The time a request waits before being aborted. It is recommended not to lower this value below the service's internal timeout, as this could create orphaned requests and duplicate data downstream. Default: 60.
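The retry policy described above (initial backoff, Fibonacci growth, capped maximum wait) can be sketched as follows; the helper is illustrative, not Observo AI's internal scheduler:

```python
def retry_backoffs(attempts, initial_backoff_secs=1, max_duration_secs=3600):
    """Return the wait (seconds) before each retry: Fibonacci growth
    starting at initial_backoff_secs, capped at max_duration_secs."""
    waits = []
    a, b = initial_backoff_secs, initial_backoff_secs
    for _ in range(attempts):
        waits.append(min(a, max_duration_secs))
        a, b = b, a + b  # advance the Fibonacci sequence
    return waits

# First eight retries with the defaults: 1, 1, 2, 3, 5, 8, 13, 21 seconds.
schedule = retry_backoffs(8)
```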

  5. Batching Configuration:

    • Batch Max Bytes (Empty): The maximum size of a batch that will be processed by a sink. This is based on the uncompressed size of the batched events, before they are serialized or compressed.

    • Batch Max Events (Empty): The maximum number of events in a batch before it is flushed.

    • Batch Timeout Secs: The maximum age of a batch before it is flushed. Default: 1.
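The interaction of the three batching limits can be sketched in Python. This toy Batcher flushes on byte size or event count; a real pipeline would additionally flush on the timeout via a timer. It is illustrative only, not Observo AI's implementation:

```python
import json

class Batcher:
    """Accumulate events; flush when max_bytes or max_events is reached.

    Sizes are based on the serialized event (the real sink measures
    uncompressed event size before encoding/compression). The timeout
    limit would be enforced by a background timer in a real pipeline.
    """
    def __init__(self, max_bytes=1_000_000, max_events=1000, timeout_secs=1):
        self.max_bytes, self.max_events = max_bytes, max_events
        self.timeout_secs = timeout_secs
        self.events, self.bytes = [], 0

    def add(self, event):
        size = len(json.dumps(event).encode())
        flushed = None
        # Flush the current batch first if adding would exceed a limit.
        if self.events and (self.bytes + size > self.max_bytes
                            or len(self.events) >= self.max_events):
            flushed = self.flush()
        self.events.append(event)
        self.bytes += size
        return flushed

    def flush(self):
        batch, self.events, self.bytes = self.events, [], 0
        return batch
```

For example, with `max_events=2`, the third `add()` returns the first two events as a flushed batch while the third starts a new one.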

  6. TLS Configuration (Optional):

    • TLS CA: Provide the CA certificate as an inline string in PEM format, if using a custom certificate authority.

    • TLS CRT: Provide the certificate as a string in PEM format, if applicable.

    • TLS Key: Provide the key as a string in PEM format, if applicable.

    • TLS Verify Certificate (False): Toggle to enable certificate verification.

    • TLS Verify Hostname (False): Toggle to enable hostname verification.

  7. Advanced Settings:

    • Compression: Select the compression algorithm. Default: No compression

      | Option | Description |
      | --- | --- |
      | Gzip compression | Common, efficient file compression format |
      | No compression | Data sent/stored uncompressed |
      | Zlib compression | Lightweight compression using the DEFLATE algorithm |

    • Endpoint Target: Select the LogScale endpoint type. Default: Event endpoint (Metadata sent with event payload)

      Options

      • Event endpoint (Metadata sent with event payload)

      • Raw endpoint (Metadata sent as query parameter)

    • Source (Optional): Specify the log source. This is typically the filename the logs originated from. If unset, the LogScale collector will set it. Default: unset.

      Examples

      {{ file }}

      /var/log/syslog

      UDP:514
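To see why enabling Gzip helps with repetitive log payloads, here is a small round-trip using only the Python standard library; the sample events are synthetic:

```python
import gzip
import json

# A batch of similar events, as an NDJSON payload -- log data is
# typically repetitive, which is what makes it compress well.
events = [{"level": "info", "message": f"event {i}"} for i in range(100)]
payload = "\n".join(json.dumps(e) for e in events).encode()

compressed = gzip.compress(payload)

# Compression is lossless: decompressing restores the exact payload.
assert gzip.decompress(compressed) == payload
print(f"{len(payload)} bytes -> {len(compressed)} bytes")
```

The bandwidth saved per request comes at the cost of a little CPU on the Observo AI side; for most log shapes the trade favors compression.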

  8. Save and Test Configuration:

    • Save the configuration settings in Observo AI.

    • Send sample log data and verify it appears in LogScale’s repository under the specified tags or index.

Example Scenarios

WealthGuard Financial, a financial services enterprise, uses Observo AI to send security and transaction logs to CrowdStrike LogScale’s https://cloud.us.logscale.com endpoint. The configuration uses JSON encoding, TLS security, and tagging for real-time analytics, threat detection, and FINRA Rule 4511 compliance.

Standard CrowdStrike LogScale Logs Destination Setup

Here is a standard CrowdStrike LogScale Logs destination configuration example. Only the required sections and their associated field updates are shown in the tables below:

General Settings

| Field | Value | Description |
| --- | --- | --- |
| Name | wealthguard-logscale | Unique identifier for the LogScale destination. |
| Description | Sends security and transaction logs to CrowdStrike LogScale for real-time analytics | Optional description for clarity. |
| Endpoint | https://cloud.us.logscale.com | Base URL of the CrowdStrike LogScale instance (no path included). |
| Token | B12C9GE6DDA20CB72D5E09 | LogScale Ingest Token for authenticating data ingestion. |
| Event Type | json | Specifies the JSON parser for structured log data. |
| Host Key | hostname | Retrieves the hostname from the hostname field instead of the default host field. |
| Index | wealthguard-security | Repository name for ingesting logs, matching the Ingest Token's repository. |
| Timestamp Key | event_timestamp | Uses the event_timestamp field as the event timestamp. |
| Acknowledgements Enabled | True | Enables end-to-end acknowledgements so logs are confirmed by LogScale before the source acknowledges them. |

Encoding

| Field | Value | Description |
| --- | --- | --- |
| Encoding Codec | JSON Encoding | Uses JSON format for structured, key-value log data. |
| Encoding Avro Schema | { "type": "record", "name": "log", "fields": [{ "name": "message", "type": "string" }, { "name": "transaction_id", "type": "string" }, { "name": "user_id", "type": "string" }] } | Optional Avro schema for additional serialization validation. |
| Encoding Metric Tag Values | Tags exposed as arrays of strings | Exposes all metric tags as arrays for flexible filtering in LogScale. |
| Encoding Timestamp Format | RFC 3339 | Uses ISO 8601-style timestamps (e.g., 2025-07-30T15:00:00Z). |

Request Configuration

| Field | Value | Description |
| --- | --- | --- |
| Request Concurrency | Adaptive concurrency | Adjusts parallelism based on system load for efficient log transmission. |
| Request Rate Limit Duration Secs | 1 | Time window for rate limiting requests. |
| Request Rate Limit Num | 200 | Maximum number of requests allowed within the time window. |
| Request Retry Attempts | 5 | Maximum retries for failed log transmission requests. |
| Request Retry Initial Backoff Secs | 1 | Initial wait time before the first retry (seconds). |
| Request Retry Max Duration Secs | 3600 | Maximum wait time between retries (seconds). |
| Request Timeout Secs | 60 | Timeout before aborting a request to prevent orphaned requests. |

Batching Configuration

| Field | Value | Description |
| --- | --- | --- |
| Batch Max Bytes | 1000000 | Maximum batch size (1 MB uncompressed) before flushing to optimize throughput. |
| Batch Max Events | 1000 | Maximum events in a batch before flushing to balance performance. |
| Batch Timeout Secs | 2 | Maximum batch age before flushing (2 seconds) for timely log delivery. |

TLS Configuration

| Field | Value | Description |
| --- | --- | --- |
| TLS CA | <PEM-encoded CA certificate> | CA certificate in PEM format for secure connections to LogScale. |
| TLS CRT | <PEM-encoded client certificate> | Client certificate in PEM format for authentication. |
| TLS Key | <PEM-encoded private key> | Private key in PEM format for secure communication. |
| TLS Verify Certificate | True | Enables certificate verification to ensure valid, trusted certificates. |
| TLS Verify Hostname | True | Ensures the hostname matches the TLS certificate for outgoing connections. |

Advanced Settings

| Field | Value | Description |
| --- | --- | --- |
| Compression | Gzip compression | Uses Gzip to compress log data for efficient transmission. |
| Endpoint Target | Event endpoint (Metadata sent with event payload) | Sends metadata with the event payload to the LogScale endpoint. |
| Source | /var/log/wealthguard/trading.log | Specifies the log source as the trading platform's log file for traceability. |

Test Configuration

  • Save all settings in Observo AI to apply the configuration.

  • Send sample security and transaction logs to LogScale.

  • Check the wealthguard-security repository in LogScale to confirm logs arrive with tags such as env:production and service:trading, and with the event_timestamp field.

Troubleshooting

If issues arise with the CrowdStrike LogScale Logs destination, use the following steps to diagnose and resolve them:

  • Verify Configuration Settings:

    • Ensure all fields, such as Endpoint, Token, and Event Type, are correctly entered and match the LogScale account configuration.

    • Confirm that the Index (if set) matches the repository associated with the Ingest Token.

  • Check Authentication:

    • Verify that the Ingest Token is valid and has not been revoked or expired.

    • Regenerate the token in LogScale if necessary and update the Observo AI configuration.

  • Monitor Logs:

    • Check Observo AI logs for errors or warnings related to log data transmission to LogScale.

    • In the LogScale platform, navigate to the target repository to confirm that logs are arriving with the expected tags or index.

  • Validate Network Connectivity:

    • Ensure that the Observo AI instance can reach the LogScale endpoint, such as https://cloud.us.logscale.com.

    • Check for firewall rules or network policies blocking HTTPS traffic on port 443.

  • Test Data Flow:

    • Send sample log data through Observo AI and monitor its arrival in LogScale’s repository.

    • Use the Analytics tab in the targeted Observo AI pipeline to monitor data volume and ensure expected throughput.

  • Check Quotas and Limits:

    • Verify that the LogScale account is not hitting ingestion limits or quotas (refer to LogScale documentation).

    • Adjust batching settings such as Batch Max Bytes, Batch Max Events if backpressure or slow data transfer occurs.
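For the connectivity checks above, a quick way to confirm basic reachability from the Observo AI host is to resolve the endpoint's host and port and attempt a TCP connection. The helper below is illustrative; the live probe is left as a comment since it requires network access:

```python
from urllib.parse import urlsplit

def endpoint_host_port(endpoint):
    """Extract the host and port the destination must be able to reach,
    defaulting to 443 for https and 80 for http."""
    parts = urlsplit(endpoint)
    default = 443 if parts.scheme == "https" else 80
    return parts.hostname, parts.port or default

host, port = endpoint_host_port("https://cloud.us.logscale.com")
# Live reachability probe (requires outbound network access):
#   import socket
#   socket.create_connection((host, port), timeout=5).close()
```

If the probe times out or is refused, suspect firewall rules or proxy policies before revisiting the Observo AI configuration.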

| Issue | Possible Cause | Resolution |
| --- | --- | --- |
| Logs not appearing in LogScale | Incorrect Endpoint, Token, or Index | Verify Endpoint, Token, and Index in the configuration |
| Authentication errors | Expired or invalid Ingest Token | Regenerate the token and update the configuration |
| Connection failures | Network or firewall issues | Check network policies and HTTPS connectivity |
| Slow log transfer | Backpressure or rate limiting | Adjust batching settings or check LogScale quotas |

Resources

For additional guidance and detailed information, refer to the following resources:
