diff --git a/solutions/observability/ai/observability-ai-assistant.md b/solutions/observability/ai/observability-ai-assistant.md index e437cdf128..0b2fe34d32 100644 --- a/solutions/observability/ai/observability-ai-assistant.md +++ b/solutions/observability/ai/observability-ai-assistant.md @@ -58,7 +58,7 @@ The [**GenAI settings**](/explore-analyze/ai-features/manage-access-to-ai-assist - Manage which AI connectors are available in your environment. - Enable or disable AI Assistant and other AI-powered features in your environment. -- {applies_to}`stack: ga 9.2` {applies_to}`serverless: unavailable` Specify in which Elastic solutions the `AI Assistant for {{observability}} and Search` and the `AI Assistant for Security` appear. +- {applies_to}`stack: ga 9.2` {applies_to}`serverless: unavailable` Specify in which Elastic solutions the `AI Assistant for Observability and Search` and the `AI Assistant for Security` appear. ## Your data and the AI Assistant [data-information] diff --git a/solutions/observability/streams/management/extract.md b/solutions/observability/streams/management/extract.md index deb1860ae1..4efddad59e 100644 --- a/solutions/observability/streams/management/extract.md +++ b/solutions/observability/streams/management/extract.md @@ -34,25 +34,32 @@ Applied changes aren't retroactive and only affect *future ingested data*. Streams supports the following processors: -- [**Drop**](./extract/drop.md): Drops the document without raising any errors. This is useful to prevent the document from getting indexed based on a condition. -- [**Remove**](./extract/remove.md): Removes existing fields. -- [**Date**](./extract/date.md): Converts date strings into timestamps, with options for timezone, locale, and output formatting. +- [**Append**](./extract/append.md): Adds a value to an existing array field, or creates the field as an array if it doesn't exist. 
+- [**Concat**](./extract/concat.md): Concatenates a mix of field values and literal strings into a single field.
 - [**Convert**](./extract/convert.md): Converts a field in the currently ingested document to a different type, such as converting a string to an integer.
-- [**Replace**](./extract/replace.md): Replaces parts of a string field according to a regular expression pattern with a replacement string.
+- [**Date**](./extract/date.md): Converts date strings into timestamps, with options for timezone, locale, and output formatting.
 - [**Dissect**](./extract/dissect.md): Extracts fields from structured log messages using defined delimiters instead of patterns, making it faster than Grok and ideal for consistently formatted logs.
+- [**Drop**](./extract/drop.md): Drops the document without raising any errors. This is useful for conditionally preventing documents from being indexed.
 - [**Grok**](./extract/grok.md): Extracts fields from unstructured log messages using predefined or custom patterns, supports multiple match attempts in sequence, and can automatically generate patterns with an [LLM connector](/explore-analyze/ai-features/llm-guides/llm-connectors.md).
-- [**Set**](./extract/set.md): Assigns a specific value to a field, creating the field if it doesn’t exist or overwriting its value if it does.
+- [**Join**](./extract/join.md): Concatenates the values of multiple fields with a delimiter.
+- [**Lowercase**](./extract/lowercase.md): Converts a string field to lowercase.
 - [**Math**](./extract/math.md): Evaluates arithmetic or logical expressions.
+- [**Network direction**](./extract/network-direction.md): Determines network traffic direction based on source and destination IP addresses.
+- [**Redact**](./extract/redact.md): Redacts sensitive data in a string field by matching grok patterns.
+- [**Remove**](./extract/remove.md): Removes existing fields, either individually or by prefix.
- [**Rename**](./extract/rename.md): Changes the name of a field, moving its value to a new field name and removing the original. -- [**Append**](./extract/append.md): Adds a value to an existing array field, or creates the field as an array if it doesn’t exist. +- [**Replace**](./extract/replace.md): Replaces parts of a string field according to a regular expression pattern with a replacement string. +- [**Set**](./extract/set.md): Assigns a specific value to a field, creating the field if it doesn't exist or overwriting its value if it does. +- [**Trim**](./extract/trim.md): Removes leading and trailing whitespace from a string field. +- [**Uppercase**](./extract/uppercase.md): Converts a string field to uppercase. ### Processor limitations and inconsistencies [streams-processor-inconsistencies] -Streams exposes a Streamlang configuration, but internally it relies on {{es}} ingest pipeline processors and ES|QL. Streamlang doesn’t always have 1:1 parity with the ingest processors because it needs to support options that work in both ingest pipelines and ES|QL. In most cases, you won’t need to worry about these details, but the underlying design decisions still affect the UI and available configuration options. The following are some limitations and inconsistencies when using Streamlang processors: +Streams exposes a Streamlang configuration, but internally it relies on {{es}} ingest pipeline processors and ES|QL. Streamlang doesn't always have 1:1 parity with the ingest processors because it needs to support options that work in both ingest pipelines and ES|QL. In most cases, you won't need to worry about these details, but the underlying design decisions still affect the UI and available configuration options. The following are some limitations and inconsistencies when using Streamlang processors: -- **Consistently typed fields**: ES|QL requires one consistent type per column, so workflows that produce mixed types across documents won’t transpile. 
+- **Consistently typed fields**: ES|QL requires one consistent type per column, so workflows that produce mixed types across documents won't transpile.
 - **Conversion of types**: ES|QL and ingest pipelines accept different conversion combinations and strictness (especially for strings), so `convert` can behave differently across targets.
-- **Multi-value commands/functions**: Fields can contain one or multiple values. ES|QL and ingest processors don’t always handle these cases the same way. For example, grok in ES|QL handles multiple values automatically, while the grok processor does not
+- **Multi-value commands/functions**: Fields can contain one or multiple values. ES|QL and ingest processors don't always handle these cases the same way. For example, grok in ES|QL handles multiple values automatically, while the grok processor does not.
 - **Conditional execution**: ES|QL's enforced table shape limits conditional casting, parsing, and wildcard field operations that ingest pipelines can do per-document.
 - **Arrays of objects / flattening**: Ingest pipelines preserve nested JSON arrays, while ES|QL flattens to columns, so operations like rename and delete on parent objects can differ or fail.
@@ -152,6 +159,8 @@ stack: ga 9.3+
 - Users who prefer working with code
 - Advanced configurations with complex or deeply nested conditions
 
+Refer to the [Streamlang reference](./streamlang.md) for the complete syntax, including all available processors, condition operators, and examples.
+
 
 ### Preview changes [streams-preview-changes]
 
@@ -273,4 +282,4 @@ You can still add your own processors manually to the `@custom` pipeline if need
 - Streams does not support all processors. More processors will be added in future versions.
 - The data preview simulation might not accurately reflect the changes to the existing data when editing existing processors or re-ordering them. Streams will allow proper simulations using original documents in a future version.
-- Streams can't properly handle arrays. While it supports basic actions like appending or renaming, it can't access individual array elements. For classic streams, the workaround is to use the [manual pipeline configuration](./extract/manual-pipeline-configuration.md) that supports Painless scripting and all ingest processors. \ No newline at end of file +- Streams can't properly handle arrays. While it supports basic actions like appending or renaming, it can't access individual array elements. For classic streams, the workaround is to use the [manual pipeline configuration](./extract/manual-pipeline-configuration.md) that supports Painless scripting and all ingest processors. diff --git a/solutions/observability/streams/management/extract/append.md b/solutions/observability/streams/management/extract/append.md index 90494a21b6..7052c2522f 100644 --- a/solutions/observability/streams/management/extract/append.md +++ b/solutions/observability/streams/management/extract/append.md @@ -13,9 +13,8 @@ products: - id: elastic-stack --- # Append processor [streams-append-processor] -% Need use cases -Use the **Append** processor to add a value to an existing array field, or create the field as an array if it doesn’t exist. +Use the **Append** processor to add a value to an existing array field, or create the field as an array if it doesn't exist. To use an append processor: @@ -24,4 +23,22 @@ To use an append processor: 1. Set **Source Field** to the field you want append values to. 1. Set **Target field** to the values you want to append to the **Source Field**. -This functionality uses the {{es}} [append processor](elasticsearch://reference/enrich-processor/append-processor.md) internally, but you configure it in Streamlang. Streamlang doesn’t always have 1:1 parity with the ingest processor options and behavior. Refer to [Processor limitations and inconsistencies](../extract.md#streams-processor-inconsistencies). 
\ No newline at end of file +This functionality uses the {{es}} [append processor](elasticsearch://reference/enrich-processor/append-processor.md) internally, but you configure it in Streamlang. Streamlang doesn't always have 1:1 parity with the ingest processor options and behavior. Refer to [Processor limitations and inconsistencies](../extract.md#streams-processor-inconsistencies). + +## YAML reference [streams-append-yaml-reference] + +In [YAML mode](../extract.md#streams-editing-yaml-mode), configure the append processor using the following parameters. For the complete Streamlang syntax, refer to the [Streamlang reference](../streamlang.md). + +| Parameter | Type | Required | Description | +| --- | --- | --- | --- | +| `to` | string | Yes | Array field to append to. | +| `value` | array | Yes | Values to append. | +| `allow_duplicates` | boolean | No | When `false`, duplicate values are not appended. | + +```yaml +- action: append + to: attributes.tags + value: + - processed + - reviewed +``` diff --git a/solutions/observability/streams/management/extract/concat.md b/solutions/observability/streams/management/extract/concat.md new file mode 100644 index 0000000000..24657bd487 --- /dev/null +++ b/solutions/observability/streams/management/extract/concat.md @@ -0,0 +1,51 @@ +--- +applies_to: + serverless: ga + stack: ga 9.4+ +products: + - id: observability + - id: elasticsearch + - id: kibana + - id: cloud-serverless + - id: cloud-hosted + - id: cloud-enterprise + - id: cloud-kubernetes + - id: elastic-stack +--- + +# Concat processor [streams-concat-processor] + +The **Concat** processor concatenates a mix of field values and literal strings into a single field. + +To concatenate values: + +1. Select **Create** → **Create processor**. +1. Select **Concat** from the **Processor** menu. +1. Set the items to concatenate. Each item is either a field reference or a literal string value. +1. Set the **Target field** where the concatenated result is stored. 
+
+## YAML reference [streams-concat-yaml-reference]
+
+In [YAML mode](../extract.md#streams-editing-yaml-mode), configure the concat processor using the following parameters. For the complete Streamlang syntax, refer to the [Streamlang reference](../streamlang.md).
+
+| Parameter | Type | Required | Description |
+| --- | --- | --- | --- |
+| `from` | array | Yes | Items to concatenate. Each item is either `{ type: "field", value: "<field_name>" }` or `{ type: "literal", value: "<string>" }`. |
+| `to` | string | Yes | Target field. |
+| `ignore_missing` | boolean | No | When `true`, skip this processor if any referenced field is missing. |
+
+```yaml
+- action: concat
+  from:
+    - type: literal
+      value: "User: "
+    - type: field
+      value: attributes.username
+    - type: literal
+      value: " (ID: "
+    - type: field
+      value: attributes.user_id
+    - type: literal
+      value: ")"
+  to: attributes.user_summary
+```
diff --git a/solutions/observability/streams/management/extract/convert.md b/solutions/observability/streams/management/extract/convert.md
index a003737ae0..820cabe6d8 100644
--- a/solutions/observability/streams/management/extract/convert.md
+++ b/solutions/observability/streams/management/extract/convert.md
@@ -28,4 +28,26 @@ To convert a field to a different data type:
 If you add a **Convert** processor inside a condition group (a **WHERE** block), you must set a **Target field**.
 ::::
 
-This functionality uses the {{es}} [Convert processor](elasticsearch://reference/enrich-processor/convert-processor.md) internally, but you configure it in Streamlang. Streamlang doesn’t always have 1:1 parity with the ingest processor options and behavior. Refer to [Processor limitations and inconsistencies](../extract.md#streams-processor-inconsistencies).
\ No newline at end of file
+This functionality uses the {{es}} [Convert processor](elasticsearch://reference/enrich-processor/convert-processor.md) internally, but you configure it in Streamlang.
Streamlang doesn't always have 1:1 parity with the ingest processor options and behavior. Refer to [Processor limitations and inconsistencies](../extract.md#streams-processor-inconsistencies). + +## YAML reference [streams-convert-yaml-reference] + +In [YAML mode](../extract.md#streams-editing-yaml-mode), configure the convert processor using the following parameters. For the complete Streamlang syntax, refer to the [Streamlang reference](../streamlang.md). + +| Parameter | Type | Required | Description | +| --- | --- | --- | --- | +| `from` | string | Yes | Source field containing the value to convert. | +| `type` | string | Yes | Target data type: `integer`, `long`, `double`, `boolean`, or `string`. | +| `to` | string | No | Target field for the converted value. Defaults to the source field. | +| `ignore_missing` | boolean | No | When `true`, skip this processor if the source field is missing. | + +:::{note} +When using `convert` inside a condition (`where` block), you must set a `to` field that is different from `from`. +::: + +```yaml +- action: convert + from: attributes.status_code + type: integer + to: attributes.status_code_int +``` diff --git a/solutions/observability/streams/management/extract/date.md b/solutions/observability/streams/management/extract/date.md index 4cdb38ee05..993dea59c9 100644 --- a/solutions/observability/streams/management/extract/date.md +++ b/solutions/observability/streams/management/extract/date.md @@ -24,7 +24,7 @@ To extract a timestamp field using the date processor: 1. Set the **Source Field** to the field containing the timestamp. 1. Set the **Format** field to one of the accepted date formats (ISO8602, UNIX, UNIX_MS, or TAI64N) or use a Java time pattern. Refer to the [example formats](#streams-date-examples) for more information. -This functionality uses the {{es}} [Date processor](elasticsearch://reference/enrich-processor/date-processor.md) internally, but you configure it in Streamlang. 
Streamlang doesn’t always have 1:1 parity with the ingest processor options and behavior. Refer to [Processor limitations and inconsistencies](../extract.md#streams-processor-inconsistencies). +This functionality uses the {{es}} [Date processor](elasticsearch://reference/enrich-processor/date-processor.md) internally, but you configure it in Streamlang. Streamlang doesn't always have 1:1 parity with the ingest processor options and behavior. Refer to [Processor limitations and inconsistencies](../extract.md#streams-processor-inconsistencies). ## Example formats [streams-date-examples] @@ -58,4 +58,28 @@ You can set the following optional fields for the date processor in the **Advanc | Target field | The field that will hold the parsed date. Defaults to `@timestamp`. | | Timezone | The timezone to use when parsing the date. Supports template snippets. Defaults to `UTC`. | | Locale | The locale to use when parsing the date, relevant when parsing month names or weekdays. Supports template snippets. Defaults to `ENGLISH`. | -| Output format | The format to use when writing the date to `target_field`. Must be a valid Java time pattern. Defaults to `yyyy-MM-dd'T'HH:mm:ss.SSSXXX`. | \ No newline at end of file +| Output format | The format to use when writing the date to `target_field`. Must be a valid Java time pattern. Defaults to `yyyy-MM-dd'T'HH:mm:ss.SSSXXX`. | + +## YAML reference [streams-date-yaml-reference] + +In [YAML mode](../extract.md#streams-editing-yaml-mode), configure the date processor using the following parameters. For the complete Streamlang syntax, refer to the [Streamlang reference](../streamlang.md). + +| Parameter | Type | Required | Description | +| --- | --- | --- | --- | +| `from` | string | Yes | Source field containing the date string. | +| `formats` | string[] | Yes | Date formats to try, in order (for example, `ISO8601`, `UNIX`, or a Java time pattern). | +| `to` | string | No | Target field for the parsed date. Defaults to `@timestamp`. 
| +| `output_format` | string | No | Format for the output date string. Must be a valid Java time pattern. | +| `timezone` | string | No | Timezone to use when parsing. Defaults to `UTC`. | +| `locale` | string | No | Locale to use when parsing month names or weekdays. | + +```yaml +- action: date + from: attributes.timestamp + formats: + - "yyyy-MM-dd'T'HH:mm:ss.SSSZ" + - "yyyy-MM-dd HH:mm:ss" + to: attributes.parsed_time + output_format: "yyyy-MM-dd" + timezone: "America/New_York" +``` diff --git a/solutions/observability/streams/management/extract/dissect.md b/solutions/observability/streams/management/extract/dissect.md index a82f820aa8..7a3923a831 100644 --- a/solutions/observability/streams/management/extract/dissect.md +++ b/solutions/observability/streams/management/extract/dissect.md @@ -60,4 +60,21 @@ To add a generated dissect pattern: ### How does **Generate patterns** work? [streams-dissect-pattern-generation] :::{include} ../../../../_snippets/streams-suggestions.md -::: \ No newline at end of file +::: + +## YAML reference [streams-dissect-yaml-reference] + +In [YAML mode](../extract.md#streams-editing-yaml-mode), configure the dissect processor using the following parameters. For the complete Streamlang syntax, refer to the [Streamlang reference](../streamlang.md). + +| Parameter | Type | Required | Description | +| --- | --- | --- | --- | +| `from` | string | Yes | Source field to parse. | +| `pattern` | string | Yes | Dissect pattern with `%{field}` placeholders. | +| `append_separator` | string | No | Separator used when concatenating target fields. | +| `ignore_missing` | boolean | No | When `true`, skip this processor if the source field is missing. 
| + +```yaml +- action: dissect + from: body.message + pattern: "%{attributes.timestamp} %{attributes.level} %{attributes.message}" +``` \ No newline at end of file diff --git a/solutions/observability/streams/management/extract/drop.md b/solutions/observability/streams/management/extract/drop.md index 69fff6ee9f..b472cf962d 100644 --- a/solutions/observability/streams/management/extract/drop.md +++ b/solutions/observability/streams/management/extract/drop.md @@ -27,4 +27,21 @@ To configure a condition for dropping documents: The default is the `always` condition. Not setting a specific condition results in every document that matches the drop condition getting dropped from indexing. ::: -This functionality uses the {{es}} [Drop processor](elasticsearch://reference/enrich-processor/drop-processor.md) internally, but you configure it in Streamlang. Streamlang doesn’t always have 1:1 parity with the ingest processor options and behavior. Refer to [Processor limitations and inconsistencies](../extract.md#streams-processor-inconsistencies). \ No newline at end of file +This functionality uses the {{es}} [Drop processor](elasticsearch://reference/enrich-processor/drop-processor.md) internally, but you configure it in Streamlang. Streamlang doesn't always have 1:1 parity with the ingest processor options and behavior. Refer to [Processor limitations and inconsistencies](../extract.md#streams-processor-inconsistencies). + +## YAML reference [streams-drop-yaml-reference] + +In [YAML mode](../extract.md#streams-editing-yaml-mode), configure the drop document processor using the following parameters. For the complete Streamlang syntax, refer to the [Streamlang reference](../streamlang.md). + +The `drop_document` processor has no additional parameters beyond the common options. Use a `where` [condition](../streamlang.md#streams-streamlang-conditions) to specify when documents should be dropped. 
+ +:::{warning} +If no condition is set, the default `always` condition drops every document. +::: + +```yaml +- action: drop_document + where: + field: attributes.path + eq: "/health" +``` diff --git a/solutions/observability/streams/management/extract/grok.md b/solutions/observability/streams/management/extract/grok.md index ecdbfd0db0..69fff9e357 100644 --- a/solutions/observability/streams/management/extract/grok.md +++ b/solutions/observability/streams/management/extract/grok.md @@ -69,4 +69,24 @@ To add a generated grok pattern: ### How does **Generate patterns** work? [streams-grok-pattern-generation] :::{include} ../../../../_snippets/streams-suggestions.md -::: \ No newline at end of file +::: + +## YAML reference [streams-grok-yaml-reference] + +In [YAML mode](../extract.md#streams-editing-yaml-mode), configure the grok processor using the following parameters. For the complete Streamlang syntax, refer to the [Streamlang reference](../streamlang.md). + +| Parameter | Type | Required | Description | +| --- | --- | --- | --- | +| `from` | string | Yes | Source field to parse. | +| `patterns` | string[] | Yes | One or more grok patterns, tried in order. | +| `pattern_definitions` | object | No | Custom pattern definitions as key-value pairs. | +| `ignore_missing` | boolean | No | When `true`, skip this processor if the source field is missing. 
| + +```yaml +- action: grok + from: body.message + patterns: + - "%{IP:attributes.client_ip} %{WORD:attributes.method} %{URIPATHPARAM:attributes.path}" + pattern_definitions: + MY_PATTERN: "%{YEAR}-%{MONTHNUM}-%{MONTHDAY}" +``` \ No newline at end of file diff --git a/solutions/observability/streams/management/extract/join.md b/solutions/observability/streams/management/extract/join.md new file mode 100644 index 0000000000..55a577833c --- /dev/null +++ b/solutions/observability/streams/management/extract/join.md @@ -0,0 +1,46 @@ +--- +applies_to: + serverless: ga + stack: ga 9.4+ +products: + - id: observability + - id: elasticsearch + - id: kibana + - id: cloud-serverless + - id: cloud-hosted + - id: cloud-enterprise + - id: cloud-kubernetes + - id: elastic-stack +--- + +# Join processor [streams-join-processor] + +The **Join** processor concatenates the values of multiple fields into a single field with a delimiter between them. + +To join fields: + +1. Select **Create** → **Create processor**. +1. Select **Join** from the **Processor** menu. +1. Set the **Source Fields** to the fields you want to join. +1. Set the **Delimiter** to the separator placed between values. +1. Set the **Target field** where the joined result is stored. + +## YAML reference [streams-join-yaml-reference] + +In [YAML mode](../extract.md#streams-editing-yaml-mode), configure the join processor using the following parameters. For the complete Streamlang syntax, refer to the [Streamlang reference](../streamlang.md). + +| Parameter | Type | Required | Description | +| --- | --- | --- | --- | +| `from` | string[] | Yes | Source fields to join. | +| `delimiter` | string | Yes | Delimiter placed between values. | +| `to` | string | Yes | Target field. | +| `ignore_missing` | boolean | No | When `true`, skip this processor if any source field is missing. 
| + +```yaml +- action: join + from: + - attributes.first_name + - attributes.last_name + delimiter: " " + to: attributes.full_name +``` diff --git a/solutions/observability/streams/management/extract/lowercase.md b/solutions/observability/streams/management/extract/lowercase.md new file mode 100644 index 0000000000..ef7778c11c --- /dev/null +++ b/solutions/observability/streams/management/extract/lowercase.md @@ -0,0 +1,41 @@ +--- +applies_to: + serverless: ga + stack: ga 9.4+ +products: + - id: observability + - id: elasticsearch + - id: kibana + - id: cloud-serverless + - id: cloud-hosted + - id: cloud-enterprise + - id: cloud-kubernetes + - id: elastic-stack +--- + +# Lowercase processor [streams-lowercase-processor] + +The **Lowercase** processor converts a string field to lowercase. + +To convert a field to lowercase: + +1. Select **Create** → **Create processor**. +1. Select **Lowercase** from the **Processor** menu. +1. Set the **Source Field** to the field you want to convert. +1. (Optional) Set **Target field** to write the result to a different field. + +## YAML reference [streams-lowercase-yaml-reference] + +In [YAML mode](../extract.md#streams-editing-yaml-mode), configure the lowercase processor using the following parameters. For the complete Streamlang syntax, refer to the [Streamlang reference](../streamlang.md). + +| Parameter | Type | Required | Description | +| --- | --- | --- | --- | +| `from` | string | Yes | Source field. | +| `to` | string | No | Target field. Defaults to the source field. | +| `ignore_missing` | boolean | No | When `true`, skip this processor if the source field is missing. 
| + +```yaml +- action: lowercase + from: attributes.method + to: attributes.method_lower +``` diff --git a/solutions/observability/streams/management/extract/math.md b/solutions/observability/streams/management/extract/math.md index d7796013dd..670e940e72 100644 --- a/solutions/observability/streams/management/extract/math.md +++ b/solutions/observability/streams/management/extract/math.md @@ -23,3 +23,19 @@ To calculate a value using an expression and store the result in a target field: 1. Select **Math** from the **Processor** menu. 1. Set the **Target field** where you want to write the expression result. 1. Set your expression in the **Expression** field. You can directly reference fields in your expression (for example, `bytes / duration`). + +## YAML reference [streams-math-yaml-reference] + +In [YAML mode](../extract.md#streams-editing-yaml-mode), configure the math processor using the following parameters. For the complete Streamlang syntax, refer to the [Streamlang reference](../streamlang.md). + +| Parameter | Type | Required | Description | +| --- | --- | --- | --- | +| `expression` | string | Yes | A TinyMath expression. Can reference fields directly (for example, `attributes.price * attributes.quantity`). | +| `to` | string | Yes | Target field for the result. | +| `ignore_missing` | boolean | No | When `true`, skip this processor if any referenced field is missing. 
| + +```yaml +- action: math + expression: "attributes.bytes / attributes.duration" + to: attributes.throughput +``` diff --git a/solutions/observability/streams/management/extract/network-direction.md b/solutions/observability/streams/management/extract/network-direction.md new file mode 100644 index 0000000000..ae7d865910 --- /dev/null +++ b/solutions/observability/streams/management/extract/network-direction.md @@ -0,0 +1,52 @@ +--- +applies_to: + serverless: ga + stack: ga 9.4+ +products: + - id: observability + - id: elasticsearch + - id: kibana + - id: cloud-serverless + - id: cloud-hosted + - id: cloud-enterprise + - id: cloud-kubernetes + - id: elastic-stack +--- + +# Network direction processor [streams-network-direction-processor] + +The **Network direction** processor determines network traffic direction (inbound, outbound, internal, or external) based on source and destination IP addresses. + +To determine network direction: + +1. Select **Create** → **Create processor**. +1. Select **Network direction** from the **Processor** menu. +1. Set the **Source IP** to the field containing the source IP address. +1. Set the **Destination IP** to the field containing the destination IP address. +1. Set the internal networks using either a list of CIDR ranges or a field containing the list. + +## YAML reference [streams-network-direction-yaml-reference] + +In [YAML mode](../extract.md#streams-editing-yaml-mode), configure the network direction processor using the following parameters. For the complete Streamlang syntax, refer to the [Streamlang reference](../streamlang.md). + +Specify exactly one of `internal_networks` or `internal_networks_field`. + +| Parameter | Type | Required | Description | +| --- | --- | --- | --- | +| `source_ip` | string | Yes | Field containing the source IP address. | +| `destination_ip` | string | Yes | Field containing the destination IP address. | +| `target_field` | string | No | Target field for the direction result. 
| +| `internal_networks` | string[] | One of `internal_networks` or `internal_networks_field` | List of internal network CIDR ranges. | +| `internal_networks_field` | string | One of `internal_networks` or `internal_networks_field` | Field containing the list of internal networks. | +| `ignore_missing` | boolean | No | When `true`, skip this processor if a source field is missing. | + +```yaml +- action: network_direction + source_ip: attributes.source.ip + destination_ip: attributes.destination.ip + target_field: attributes.network.direction + internal_networks: + - "10.0.0.0/8" + - "172.16.0.0/12" + - "192.168.0.0/16" +``` diff --git a/solutions/observability/streams/management/extract/redact.md b/solutions/observability/streams/management/extract/redact.md new file mode 100644 index 0000000000..092ef7591d --- /dev/null +++ b/solutions/observability/streams/management/extract/redact.md @@ -0,0 +1,48 @@ +--- +applies_to: + serverless: ga + stack: ga 9.4+ +products: + - id: observability + - id: elasticsearch + - id: kibana + - id: cloud-serverless + - id: cloud-hosted + - id: cloud-enterprise + - id: cloud-kubernetes + - id: elastic-stack +--- + +# Redact processor [streams-redact-processor] + +The **Redact** processor redacts sensitive data in a string field by matching grok patterns and replacing the matched content with a placeholder. + +To redact sensitive information: + +1. Select **Create** → **Create processor**. +1. Select **Redact** from the **Processor** menu. +1. Set the **Source Field** to the field containing text you want to redact. +1. Set the **Patterns** to one or more grok patterns that match sensitive data (for example, IP addresses or email addresses). + +This functionality uses the {{es}} [Redact processor](elasticsearch://reference/enrich-processor/redact-processor.md) internally, but you configure it in Streamlang. Streamlang doesn't always have 1:1 parity with the ingest processor options and behavior. 
Refer to [Processor limitations and inconsistencies](../extract.md#streams-processor-inconsistencies). + +## YAML reference [streams-redact-yaml-reference] + +In [YAML mode](../extract.md#streams-editing-yaml-mode), configure the redact processor using the following parameters. For the complete Streamlang syntax, refer to the [Streamlang reference](../streamlang.md). + +| Parameter | Type | Required | Description | +| --- | --- | --- | --- | +| `from` | string | Yes | Source field to redact. | +| `patterns` | string[] | Yes | Grok patterns that match sensitive data. | +| `pattern_definitions` | object | No | Custom pattern definitions. | +| `prefix` | string | No | Prefix for the redacted placeholder. Defaults to `<`. | +| `suffix` | string | No | Suffix for the redacted placeholder. Defaults to `>`. | +| `ignore_missing` | boolean | No | When `true`, skip this processor if the source field is missing. Defaults to `true`. | + +```yaml +- action: redact + from: body.message + patterns: + - "%{IP:client_ip}" + - "%{EMAILADDRESS:email}" +``` diff --git a/solutions/observability/streams/management/extract/remove.md b/solutions/observability/streams/management/extract/remove.md index 32a19c74c1..f2006b2e24 100644 --- a/solutions/observability/streams/management/extract/remove.md +++ b/solutions/observability/streams/management/extract/remove.md @@ -23,4 +23,37 @@ To remove a field: 1. From the **Processor** menu, select **Remove** to remove a field or **Remove by prefix** to remove a field and all its nested fields. 1. Set the **Source Field** to the field you want to remove. -This functionality uses the {{es}} [Remove processor](elasticsearch://reference/enrich-processor/remove-processor.md) internally, but you configure it in Streamlang. Streamlang doesn’t always have 1:1 parity with the ingest processor options and behavior. Refer to [Processor limitations and inconsistencies](../extract.md#streams-processor-inconsistencies). 
\ No newline at end of file +This functionality uses the {{es}} [Remove processor](elasticsearch://reference/enrich-processor/remove-processor.md) internally, but you configure it in Streamlang. Streamlang doesn't always have 1:1 parity with the ingest processor options and behavior. Refer to [Processor limitations and inconsistencies](../extract.md#streams-processor-inconsistencies). + +## YAML reference [streams-remove-yaml-reference] + +In [YAML mode](../extract.md#streams-editing-yaml-mode), configure the remove processor using the following parameters. For the complete Streamlang syntax, refer to the [Streamlang reference](../streamlang.md). + +### Remove [streams-remove-yaml] + +| Parameter | Type | Required | Description | +| --- | --- | --- | --- | +| `from` | string | Yes | Field to remove. | +| `ignore_missing` | boolean | No | When `true`, skip this processor if the field is missing. | + +```yaml +- action: remove + from: attributes.temp_field +``` + +### Remove by prefix [streams-remove-by-prefix-processor] + +Removes a field and all nested fields matching a prefix. + +:::{note} +The `where` clause is not supported on `remove_by_prefix`. This processor cannot be used inside condition blocks. +::: + +| Parameter | Type | Required | Description | +| --- | --- | --- | --- | +| `from` | string | Yes | Prefix to match. All fields under this prefix are removed. | + +```yaml +- action: remove_by_prefix + from: attributes.debug +``` diff --git a/solutions/observability/streams/management/extract/rename.md b/solutions/observability/streams/management/extract/rename.md index dda051d5ae..910f6e9666 100644 --- a/solutions/observability/streams/management/extract/rename.md +++ b/solutions/observability/streams/management/extract/rename.md @@ -23,4 +23,21 @@ To use a rename processor: 1. Set **Source Field** to the field you want to rename. 1. Set **Target field** to the new name you want to use for the **Source Field**. 
-This functionality uses the {{es}} [Rename processor](elasticsearch://reference/enrich-processor/rename-processor.md) internally, but you configure it in Streamlang. Streamlang doesn’t always have 1:1 parity with the ingest processor options and behavior. Refer to [Processor limitations and inconsistencies](../extract.md#streams-processor-inconsistencies). \ No newline at end of file +This functionality uses the {{es}} [Rename processor](elasticsearch://reference/enrich-processor/rename-processor.md) internally, but you configure it in Streamlang. Streamlang doesn't always have 1:1 parity with the ingest processor options and behavior. Refer to [Processor limitations and inconsistencies](../extract.md#streams-processor-inconsistencies). + +## YAML reference [streams-rename-yaml-reference] + +In [YAML mode](../extract.md#streams-editing-yaml-mode), configure the rename processor using the following parameters. For the complete Streamlang syntax, refer to the [Streamlang reference](../streamlang.md). + +| Parameter | Type | Required | Description | +| --- | --- | --- | --- | +| `from` | string | Yes | Source field to rename. | +| `to` | string | Yes | New field name. | +| `ignore_missing` | boolean | No | When `true`, skip this processor if the source field is missing. | +| `override` | boolean | No | When `true`, allow overwriting an existing target field. | + +```yaml +- action: rename + from: attributes.old_name + to: attributes.new_name +``` diff --git a/solutions/observability/streams/management/extract/replace.md b/solutions/observability/streams/management/extract/replace.md index 53f7fbe29c..b7633074e4 100644 --- a/solutions/observability/streams/management/extract/replace.md +++ b/solutions/observability/streams/management/extract/replace.md @@ -25,4 +25,23 @@ To use the **Replace** processor: 1. Set the **Pattern** to the regular expression or text that you want to replace. 1. 
Set the **Replacement** to the value that will replace the portion of the string matching your pattern. Replacements can be text, an empty value, or a capture group reference. -This functionality uses the {{es}} [Gsub processor](elasticsearch://reference/enrich-processor/gsub-processor.md) internally, but you configure it in Streamlang. Streamlang doesn’t always have 1:1 parity with the ingest processor options and behavior. Refer to [Processor limitations and inconsistencies](../extract.md#streams-processor-inconsistencies). \ No newline at end of file +This functionality uses the {{es}} [Gsub processor](elasticsearch://reference/enrich-processor/gsub-processor.md) internally, but you configure it in Streamlang. Streamlang doesn't always have 1:1 parity with the ingest processor options and behavior. Refer to [Processor limitations and inconsistencies](../extract.md#streams-processor-inconsistencies). + +## YAML reference [streams-replace-yaml-reference] + +In [YAML mode](../extract.md#streams-editing-yaml-mode), configure the replace processor using the following parameters. For the complete Streamlang syntax, refer to the [Streamlang reference](../streamlang.md). + +| Parameter | Type | Required | Description | +| --- | --- | --- | --- | +| `from` | string | Yes | Source field containing the string. | +| `pattern` | string | Yes | Regular expression pattern to match (Java regex). | +| `replacement` | string | Yes | Replacement string. Supports capture group references (for example, `$1`, `$2`). | +| `to` | string | No | Target field for the result. Defaults to the source field. | +| `ignore_missing` | boolean | No | When `true`, skip this processor if the source field is missing. 
| + +```yaml +- action: replace + from: attributes.email + pattern: "(\\w+)@(\\w+\\.\\w+)" + replacement: "***@$2" +``` diff --git a/solutions/observability/streams/management/extract/set.md b/solutions/observability/streams/management/extract/set.md index 7168436daa..17903247ca 100644 --- a/solutions/observability/streams/management/extract/set.md +++ b/solutions/observability/streams/management/extract/set.md @@ -14,7 +14,7 @@ products: --- # Set processor [streams-set-processor] -Use the **Set** processor to assign a specific value to a field, creating the field if it doesn’t exist or overwriting its value if it does. +Use the **Set** processor to assign a specific value to a field, creating the field if it doesn't exist or overwriting its value if it does. To use a set processor: @@ -23,4 +23,27 @@ To use a set processor: 1. Set **Source Field** to the field you want to insert, upsert, or update. 1. Set **Value** to the value you want the source field to be set to. -This functionality uses the {{es}} [Set processor](elasticsearch://reference/enrich-processor/set-processor.md) internally, but you configure it in Streamlang. Streamlang doesn’t always have 1:1 parity with the ingest processor options and behavior. Refer to [Processor limitations and inconsistencies](../extract.md#streams-processor-inconsistencies). \ No newline at end of file +This functionality uses the {{es}} [Set processor](elasticsearch://reference/enrich-processor/set-processor.md) internally, but you configure it in Streamlang. Streamlang doesn't always have 1:1 parity with the ingest processor options and behavior. Refer to [Processor limitations and inconsistencies](../extract.md#streams-processor-inconsistencies). + +## YAML reference [streams-set-yaml-reference] + +In [YAML mode](../extract.md#streams-editing-yaml-mode), configure the set processor using the following parameters. For the complete Streamlang syntax, refer to the [Streamlang reference](../streamlang.md). 
+ +Specify exactly one of `value` or `copy_from`. + +| Parameter | Type | Required | Description | +| --- | --- | --- | --- | +| `to` | string | Yes | Target field. | +| `value` | any | One of `value` or `copy_from` | A literal value to assign. | +| `copy_from` | string | One of `value` or `copy_from` | A source field to copy the value from. | +| `override` | boolean | No | When `false`, the target field is only set if it doesn't already exist. | + +```yaml +- action: set + to: attributes.environment + value: production + +- action: set + to: attributes.backup_message + copy_from: body.message +``` diff --git a/solutions/observability/streams/management/extract/trim.md b/solutions/observability/streams/management/extract/trim.md new file mode 100644 index 0000000000..040c7831c0 --- /dev/null +++ b/solutions/observability/streams/management/extract/trim.md @@ -0,0 +1,40 @@ +--- +applies_to: + serverless: ga + stack: ga 9.4+ +products: + - id: observability + - id: elasticsearch + - id: kibana + - id: cloud-serverless + - id: cloud-hosted + - id: cloud-enterprise + - id: cloud-kubernetes + - id: elastic-stack +--- + +# Trim processor [streams-trim-processor] + +The **Trim** processor removes leading and trailing whitespace from a string field. + +To trim whitespace from a field: + +1. Select **Create** → **Create processor**. +1. Select **Trim** from the **Processor** menu. +1. Set the **Source Field** to the field you want to trim. +1. (Optional) Set **Target field** to write the result to a different field. + +## YAML reference [streams-trim-yaml-reference] + +In [YAML mode](../extract.md#streams-editing-yaml-mode), configure the trim processor using the following parameters. For the complete Streamlang syntax, refer to the [Streamlang reference](../streamlang.md). + +| Parameter | Type | Required | Description | +| --- | --- | --- | --- | +| `from` | string | Yes | Source field. | +| `to` | string | No | Target field. Defaults to the source field. 
| +| `ignore_missing` | boolean | No | When `true`, skip this processor if the source field is missing. | + +```yaml +- action: trim + from: attributes.name +``` diff --git a/solutions/observability/streams/management/extract/uppercase.md b/solutions/observability/streams/management/extract/uppercase.md new file mode 100644 index 0000000000..bcd6bf24a3 --- /dev/null +++ b/solutions/observability/streams/management/extract/uppercase.md @@ -0,0 +1,40 @@ +--- +applies_to: + serverless: ga + stack: ga 9.4+ +products: + - id: observability + - id: elasticsearch + - id: kibana + - id: cloud-serverless + - id: cloud-hosted + - id: cloud-enterprise + - id: cloud-kubernetes + - id: elastic-stack +--- + +# Uppercase processor [streams-uppercase-processor] + +The **Uppercase** processor converts a string field to uppercase. + +To convert a field to uppercase: + +1. Select **Create** → **Create processor**. +1. Select **Uppercase** from the **Processor** menu. +1. Set the **Source Field** to the field you want to convert. +1. (Optional) Set **Target field** to write the result to a different field. + +## YAML reference [streams-uppercase-yaml-reference] + +In [YAML mode](../extract.md#streams-editing-yaml-mode), configure the uppercase processor using the following parameters. For the complete Streamlang syntax, refer to the [Streamlang reference](../streamlang.md). + +| Parameter | Type | Required | Description | +| --- | --- | --- | --- | +| `from` | string | Yes | Source field. | +| `to` | string | No | Target field. Defaults to the source field. | +| `ignore_missing` | boolean | No | When `true`, skip this processor if the source field is missing. 
| + +```yaml +- action: uppercase + from: attributes.level +``` diff --git a/solutions/observability/streams/management/streamlang.md b/solutions/observability/streams/management/streamlang.md new file mode 100644 index 0000000000..f6d4c943fd --- /dev/null +++ b/solutions/observability/streams/management/streamlang.md @@ -0,0 +1,284 @@ +--- +applies_to: + serverless: ga + stack: ga 9.2+ +products: + - id: observability + - id: elasticsearch + - id: kibana + - id: cloud-serverless + - id: cloud-hosted + - id: cloud-enterprise + - id: cloud-kubernetes + - id: elastic-stack +--- +# Streamlang reference [streams-streamlang-reference] + +Streamlang is a JSON DSL for defining stream processing and routing logic. You can write Streamlang directly using the [YAML editing mode](./extract.md#streams-editing-yaml-mode) in the **Processing** tab, or use the [interactive mode](./extract.md#streams-editing-interactive-mode) which generates Streamlang behind the scenes. + +Streamlang provides a consistent processing interface that can be transpiled to multiple execution targets, including {{es}} ingest pipelines and ES|QL. This allows processing to run at ingest time or query time without rewriting rules. + +## Structure [streams-streamlang-structure] + +A Streamlang configuration is a YAML document with a single top-level `steps` array. Each step is either a processor (an `action` block) or a [condition block](#streams-streamlang-condition-blocks) (a `condition` block with nested steps): + +```yaml +steps: + - action: + # processor-specific parameters + - action: + # processor-specific parameters + where: + # optional condition + - condition: + field: + eq: + steps: + - action: + # nested processor +``` + +Steps run in order. Each processor transforms the document, and the result is passed to the next step. + +## Processors [streams-streamlang-processors] + +Processors are the building blocks of a Streamlang configuration. 
Each processor has an `action` field that specifies the type of operation to perform. + +All processors support the following common options: + +| Option | Type | Description | +| --- | --- | --- | +| `description` | string | A human-readable description of the processor. | +| `ignore_failure` | boolean | When `true`, document processing continues even if this processor fails. | +| `where` | [condition](#streams-streamlang-conditions) | A condition that must be met for the processor to run. | + +The following table lists all available processors. Refer to the individual processor pages for YAML parameters and examples. + +| Action | Description | +| --- | --- | +| [`append`](./extract/append.md) | Adds values to an array field, or creates the field as an array if it doesn't exist. | +| [`concat`](./extract/concat.md) | Concatenates a mix of field values and literal strings into a single field. | +| [`convert`](./extract/convert.md) | Converts a field value to a different data type. | +| [`date`](./extract/date.md) | Parses date strings into timestamps. | +| [`dissect`](./extract/dissect.md) | Parses structured text using delimiter-based patterns. | +| [`drop_document`](./extract/drop.md) | Prevents a document from being indexed based on a condition. | +| [`grok`](./extract/grok.md) | Parses unstructured text using predefined or custom patterns. | +| [`join`](./extract/join.md) | Concatenates the values of multiple fields with a delimiter. | +| [`lowercase`](./extract/lowercase.md) | Converts a string field to lowercase. | +| [`math`](./extract/math.md) | Evaluates an arithmetic expression and stores the result. | +| [`network_direction`](./extract/network-direction.md) | Determines network traffic direction based on source and destination IP addresses. | +| [`redact`](./extract/redact.md) | Redacts sensitive data in a string field by matching patterns. | +| [`remove`](./extract/remove.md) | Removes a field from the document. 
| +| [`remove_by_prefix`](./extract/remove.md#streams-remove-by-prefix-processor) | Removes a field and all nested fields matching a prefix. | +| [`rename`](./extract/rename.md) | Moves a field's value to a new field name and removes the original. | +| [`replace`](./extract/replace.md) | Replaces portions of a string field that match a regular expression. | +| [`set`](./extract/set.md) | Assigns a value to a field, creating the field if it doesn't exist. | +| [`trim`](./extract/trim.md) | Removes leading and trailing whitespace from a string field. | +| [`uppercase`](./extract/uppercase.md) | Converts a string field to uppercase. | + +## Conditions [streams-streamlang-conditions] + +Conditions are Boolean expressions used to control when processors run and how data is routed. They appear in `where` clauses on processors, in [condition blocks](#streams-streamlang-condition-blocks), and in stream [partitioning](./partitioning.md). + +### Comparison operators [streams-streamlang-comparison-operators] + +Each comparison condition specifies a `field` and an operator with a value: + +| Operator | Description | Example value | +| --- | --- | --- | +| `eq` | Equals | `"active"`, `200` | +| `neq` | Not equals | `"error"` | +| `lt` | Less than | `100` | +| `lte` | Less than or equal to | `100` | +| `gt` | Greater than | `0` | +| `gte` | Greater than or equal to | `1` | +| `contains` | Field value contains the substring | `"error"` | +| `startsWith` | Field value starts with the string | `"/api"` | +| `endsWith` | Field value ends with the string | `".log"` | +| `includes` | Multivalue field includes the value | `"admin"` | + +```yaml +where: + field: attributes.status + eq: active +``` + +### Range conditions [streams-streamlang-range-conditions] + +Use `range` to match values within a numeric range. 
You can combine any of `gt`, `gte`, `lt`, and `lte`: + +```yaml +where: + field: attributes.status_code + range: + gte: 200 + lt: 300 +``` + +### Existence conditions [streams-streamlang-existence-conditions] + +Use `exists` to check whether a field is present: + +```yaml +# Field must exist +where: + field: attributes.user_id + exists: true + +# Field must not exist +where: + field: attributes.temp + exists: false +``` + +### Logical operators [streams-streamlang-logical-operators] + +Combine conditions using `and`, `or`, and `not`: + +```yaml +# All conditions must be true +where: + and: + - field: attributes.env + eq: production + - field: attributes.level + eq: error + +# At least one condition must be true +where: + or: + - field: attributes.level + eq: error + - field: attributes.level + eq: warn + +# Negate a condition +where: + not: + field: attributes.path + startsWith: "/internal" +``` + +### Special conditions [streams-streamlang-special-conditions] + +| Condition | Description | +| --- | --- | +| `always: {}` | Always evaluates to `true`. | +| `never: {}` | Always evaluates to `false`. | + +## Condition blocks [streams-streamlang-condition-blocks] + +Condition blocks group processors that should only run when a condition is met. 
Use a `condition` step with nested `steps`: + +```yaml +steps: + - condition: + field: attributes.env + eq: production + steps: + - action: set + to: attributes.is_prod + value: true + - action: remove + from: attributes.debug_info +``` + +Condition blocks can be nested for complex logic: + +```yaml +steps: + - condition: + field: attributes.source + eq: webserver + steps: + - action: grok + from: body.message + patterns: + - "%{IP:attributes.client_ip} %{WORD:attributes.method} %{URIPATHPARAM:attributes.path} %{NUMBER:attributes.status}" + - condition: + field: attributes.status + gte: 500 + steps: + - action: set + to: attributes.alert_level + value: critical +``` + +## Field naming [streams-streamlang-field-naming] + +For [wired streams](../wired-streams.md), fields must follow OTel-compatible namespacing. Custom fields must use one of these prefixes: + +- `attributes.*` +- `body.structured.*` +- `resource.attributes.*` +- `scope.attributes.*` + +The following special fields are allowed without a namespace prefix: `@timestamp`, `observed_timestamp`, `trace_id`, `span_id`, `severity_text`, `severity_number`, `event_name`, `body`, and `body.text`. + +System-managed fields like `stream.name` are reserved and cannot be modified by processors. 
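+
+For example, a step that writes custom data must target one of these namespaces. The following minimal sketch (the field names are hypothetical) sets one custom attribute and renames another while staying inside `attributes.*`:
+
+```yaml
+steps:
+  - action: set
+    to: attributes.team
+    value: checkout
+  - action: rename
+    from: attributes.tmp_user
+    to: attributes.user_name
+    ignore_missing: true
+```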
+ +## Examples [streams-streamlang-examples] + +### Parse and enrich web server logs [streams-streamlang-example-webserver] + +```yaml +steps: + - action: grok + from: body.message + patterns: + - "%{IP:attributes.client_ip} - %{DATA:attributes.user} \\[%{HTTPDATE:attributes.timestamp}\\] \"%{WORD:attributes.method} %{URIPATHPARAM:attributes.path} HTTP/%{NUMBER:attributes.http_version}\" %{NUMBER:attributes.status} %{NUMBER:attributes.bytes}" + - action: date + from: attributes.timestamp + formats: + - "dd/MMM/yyyy:HH:mm:ss Z" + - action: convert + from: attributes.status + type: integer + - action: convert + from: attributes.bytes + type: long + - action: remove + from: attributes.timestamp + ignore_missing: true +``` + +### Conditionally tag and drop documents [streams-streamlang-example-conditional] + +```yaml +steps: + - condition: + field: attributes.level + eq: DEBUG + steps: + - action: drop_document + - condition: + and: + - field: attributes.level + eq: ERROR + - field: attributes.service + eq: payments + steps: + - action: append + to: attributes.tags + value: + - critical + - pager + - action: set + to: attributes.priority + value: 1 +``` + +### Redact sensitive data [streams-streamlang-example-redact] + +```yaml +steps: + - action: redact + from: body.message + patterns: + - "%{EMAILADDRESS:email}" + - "%{IP:ip_address}" + - action: replace + from: attributes.auth_header + pattern: "Bearer .+" + replacement: "Bearer [REDACTED]" +``` diff --git a/solutions/toc.yml b/solutions/toc.yml index 63008ad317..53c5d9302f 100644 --- a/solutions/toc.yml +++ b/solutions/toc.yml @@ -458,18 +458,26 @@ toc: - file: observability/streams/management/retention.md - file: observability/streams/management/extract.md children: - - file: observability/streams/management/extract/drop.md - - file: observability/streams/management/extract/remove.md - - file: observability/streams/management/extract/date.md + - file: observability/streams/management/extract/append.md + - file: 
observability/streams/management/extract/concat.md - file: observability/streams/management/extract/convert.md - - file: observability/streams/management/extract/replace.md + - file: observability/streams/management/extract/date.md - file: observability/streams/management/extract/dissect.md + - file: observability/streams/management/extract/drop.md - file: observability/streams/management/extract/grok.md - - file: observability/streams/management/extract/set.md + - file: observability/streams/management/extract/join.md + - file: observability/streams/management/extract/lowercase.md - file: observability/streams/management/extract/math.md + - file: observability/streams/management/extract/network-direction.md + - file: observability/streams/management/extract/redact.md + - file: observability/streams/management/extract/remove.md - file: observability/streams/management/extract/rename.md - - file: observability/streams/management/extract/append.md + - file: observability/streams/management/extract/replace.md + - file: observability/streams/management/extract/set.md + - file: observability/streams/management/extract/trim.md + - file: observability/streams/management/extract/uppercase.md - file: observability/streams/management/extract/manual-pipeline-configuration.md + - file: observability/streams/management/streamlang.md - file: observability/streams/management/partitioning.md - file: observability/streams/management/schema.md - file: observability/streams/management/data-quality.md