Processors process the data between the time it is received and the time it is exported. Processors are optional. By default, no processors are enabled. Processors must be enabled for every data source. Not all processors support all data sources. Depending on the data source, multiple processors might be enabled. Note that the order of processors matters.
The Batch Processor batches traces and metrics to reduce the number of outgoing connections needed to transfer the telemetry information.
# ...
config: |
  processors:
    batch:
      timeout: 5s
      send_batch_max_size: 10000
  service:
    pipelines:
      traces:
        processors: [batch]
      metrics:
        processors: [batch]
# ...
Parameter | Description | Default |
---|---|---|
timeout | Sends the batch after a specific time duration and irrespective of the batch size. | 200ms |
send_batch_size | Sends the batch of telemetry data after the specified number of spans or metrics. | 8192 |
send_batch_max_size | The maximum allowable size of the batch. Must be equal or greater than the send_batch_size. | 0 |
metadata_keys | When activated, a batcher instance is created for each unique set of values found in the client.Metadata. | [] |
metadata_cardinality_limit | When the metadata_keys field is populated, this configuration restricts the number of distinct metadata key-value combinations that are processed over the duration of the process. | 1000 |
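For example, batching separately per tenant can be sketched as follows. The tenant_id metadata key is an illustrative assumption, not a value defined by this documentation, and propagating client metadata typically also requires enabling the include_metadata setting on the receiver:
# ...
config: |
  processors:
    batch:
      timeout: 200ms
      send_batch_size: 8192
      metadata_keys:
        - tenant_id # illustrative metadata key (assumption)
      metadata_cardinality_limit: 1000
# ...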
The Memory Limiter Processor periodically checks the Collector’s memory usage and pauses data processing when the soft memory limit is reached. This processor supports traces, metrics, and logs. The preceding component, which is typically a receiver, is expected to retry sending the same data and may apply backpressure to the incoming data. When memory usage exceeds the hard limit, the Memory Limiter Processor forces garbage collection to run.
# ...
config: |
  processors:
    memory_limiter:
      check_interval: 1s
      limit_mib: 4000
      spike_limit_mib: 800
  service:
    pipelines:
      traces:
        processors: [memory_limiter]
      metrics:
        processors: [memory_limiter]
# ...
Parameter | Description | Default |
---|---|---|
check_interval | Time between memory usage measurements. The optimal value is 1 second. For highly variable memory usage, you can decrease the check_interval value or increase the spike_limit_mib value. | 0s |
limit_mib | The hard limit, which is the maximum amount of memory in MiB allocated on the heap. Typically, the total memory usage of the OpenTelemetry Collector is about 50 MiB greater than this value. | 0 |
spike_limit_mib | Spike limit, which is the maximum expected spike of memory usage in MiB. The optimal value is approximately 20% of limit_mib. The soft limit is calculated by subtracting the spike_limit_mib value from the limit_mib value. | 20% of limit_mib |
limit_percentage | Same as the limit_mib parameter but expressed as a percentage of the total available memory. The limit_mib setting takes precedence over this setting. | 0 |
spike_limit_percentage | Same as the spike_limit_mib parameter but expressed as a percentage of the total available memory. Intended to be used with the limit_percentage setting. | 0 |
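Alternatively, you can express the limits relative to the total available memory. A minimal sketch with illustrative percentage values:
# ...
config: |
  processors:
    memory_limiter:
      check_interval: 1s
      limit_percentage: 75 # illustrative value
      spike_limit_percentage: 15 # illustrative value
# ...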
The Resource Detection Processor identifies host resource details in alignment with OpenTelemetry’s resource semantic conventions. Using the detected information, this processor can add or replace the resource values in telemetry data. This processor supports traces and metrics. You can use this processor with multiple detectors, such as the Docker metadata detector or the OTEL_RESOURCE_ATTRIBUTES environment variable detector.
The Resource Detection Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: otel-collector
rules:
- apiGroups: ["config.openshift.io"]
resources: ["infrastructures", "infrastructures/status"]
verbs: ["get", "watch", "list"]
# ...
# ...
config: |
  processors:
    resourcedetection:
      detectors: [openshift]
      override: true
  service:
    pipelines:
      traces:
        processors: [resourcedetection]
      metrics:
        processors: [resourcedetection]
# ...
# ...
config: |
processors:
resourcedetection/env:
detectors: [env] (1)
timeout: 2s
override: false
# ...
1 | Specifies which detector to use. In this example, the environment detector is specified. |
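The env detector reads resource attributes from the OTEL_RESOURCE_ATTRIBUTES environment variable of the Collector process as comma-separated key=value pairs. A minimal sketch follows; the attribute values, and setting the variable through the spec.env field of the OpenTelemetryCollector resource, are illustrative assumptions:
# ...
spec:
  env:
    - name: OTEL_RESOURCE_ATTRIBUTES
      value: "service.namespace=prod,service.version=1.2.3" # illustrative values (assumption)
# ...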
The Attributes Processor can modify attributes of a span, log, or metric. You can configure this processor to filter and match input data and include or exclude such data for specific actions.
The Attributes Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
This processor operates on a list of actions, executing them in the order specified in the configuration. The following actions are supported:
insert: Inserts a new attribute into the input data when the specified key does not already exist.
update: Updates an attribute in the input data if the key already exists.
upsert: Combines the insert and update actions: inserts a new attribute if the key does not exist yet, or updates the attribute if the key already exists.
delete: Removes an attribute from the input data.
hash: Hashes an existing attribute value as SHA-1.
extract: Extracts values by using a regular expression rule from the input key to the target keys defined in the rule. If a target key already exists, it is overridden, similarly to the Span Processor’s to_attributes setting with the existing attribute as the source.
convert: Converts an existing attribute to a specified type.
# ...
config: |
processors:
attributes/example:
actions:
- key: db.table
action: delete
- key: redacted_span
value: true
action: upsert
- key: copy_key
from_attribute: key_original
action: update
- key: account_id
value: 2245
action: insert
- key: account_password
action: delete
- key: account_email
action: hash
- key: http.status_code
action: convert
converted_type: int
# ...
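You can also restrict the actions to matched input data by using an include or exclude block, and the extract action can split one attribute into several. The following sketch deletes an attribute only for spans from one service and extracts parts of a URL; the service name, attribute keys, and regular expression are illustrative assumptions:
# ...
config: |
  processors:
    attributes/filtered:
      include:
        match_type: strict
        services: ["my-service"] # illustrative service name
      actions:
        - key: db.statement
          action: delete
        - key: http.url
          pattern: ^(?P<http_protocol>.*):\/\/(?P<http_domain>.*)\/(?P<http_path>.*)$
          action: extract
# ...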
The Resource Processor applies changes to the resource attributes. This processor supports traces, metrics, and logs.
The Resource Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
# ...
config: |
  processors:
    resource:
      attributes:
        - key: cloud.availability_zone
          value: "zone-1"
          action: upsert
        - key: k8s.cluster.name
          from_attribute: k8s-cluster
          action: insert
        - key: redundant-attribute
          action: delete
# ...
The attributes field lists the actions that are applied to the resource attributes, such as deleting, inserting, or upserting an attribute.
The Span Processor modifies the span name based on its attributes or extracts the span attributes from the span name. This processor can also change the span status and include or exclude spans. This processor supports traces.
Span renaming requires specifying attributes for the new name by using the from_attributes configuration.
The Span Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
# ...
config: |
  processors:
    span:
      name:
        from_attributes: [<key1>, <key2>, ...] (1)
        separator: <value> (2)
# ...
1 | Defines the keys to form the new span name. |
2 | An optional separator. |
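For example, the following sketch builds the span name from two attributes; the attribute keys and the separator are illustrative assumptions:
# ...
config: |
  processors:
    span/rename:
      name:
        from_attributes: [db.svc, operation] # illustrative attribute keys
        separator: "::"
# ...
With this configuration, a span carrying db.svc="location" and operation="get" is renamed to location::get.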
You can use this processor to extract attributes from the span name.
# ...
config: |
  processors:
    span/to_attributes:
      name:
        to_attributes:
          rules:
            - ^\/api\/v1\/document\/(?P<documentId>.*)\/update$ (1)
# ...
1 | This rule defines how the extraction is executed. You can define more rules. In this example, if the regular expression matches the span name, a documentId attribute is created: for the input span name /api/v1/document/12345678/update , the output span name is /api/v1/document/{documentId}/update , and a new "documentId"="12345678" attribute is added to the span. |
You can also modify the span status.
# ...
config: |
  processors:
    span/set_status:
      status:
        code: Error
        description: "<error_description>"
# ...
The Kubernetes Attributes Processor enables automatic configuration of spans, metrics, and log resource attributes by using Kubernetes metadata. This processor supports traces, metrics, and logs. It automatically identifies Kubernetes resources, extracts metadata from them, and incorporates this metadata as resource attributes into the relevant spans, metrics, and logs. It uses the Kubernetes API to discover all pods operating within a cluster and maintains records of their IP addresses, pod UIDs, and other relevant metadata.
The Kubernetes Attributes Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: otel-collector
rules:
- apiGroups: ['']
resources: ['pods', 'namespaces']
verbs: ['get', 'watch', 'list']
# ...
# ...
config: |
processors:
k8sattributes:
filter:
node_from_env_var: KUBE_NODE_NAME
# ...
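You can control which metadata fields are added by configuring an extract block. A sketch with an illustrative selection of well-known attribute names:
# ...
config: |
  processors:
    k8sattributes:
      auth_type: serviceAccount
      filter:
        node_from_env_var: KUBE_NODE_NAME
      extract:
        metadata: # illustrative selection of attributes
          - k8s.namespace.name
          - k8s.pod.name
          - k8s.node.name
# ...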
The Filter Processor uses the OpenTelemetry Transformation Language (OTTL) to establish criteria for discarding telemetry data. If any of these conditions are satisfied, the telemetry data are discarded. You can combine the conditions by using the logical OR operator. This processor supports traces, metrics, and logs.
The Filter Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
# ...
config: |
processors:
filter/ottl:
error_mode: ignore (1)
traces:
span:
- 'attributes["container.name"] == "app_container_1"' (2)
- 'resource.attributes["host.name"] == "localhost"' (3)
# ...
1 | Defines the error mode. When set to ignore , ignores errors returned by conditions. When set to propagate , returns the error up the pipeline. An error causes the payload to be dropped from the Collector. |
2 | Filters the spans that have the container.name == app_container_1 attribute. |
3 | Filters the spans that have the host.name == localhost resource attribute. |
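Similar OTTL conditions can discard metrics and log records. A sketch with illustrative metric, service, and severity values:
# ...
config: |
  processors:
    filter/ottl:
      error_mode: ignore
      metrics:
        metric:
          - 'name == "http.server.duration" and resource.attributes["service.name"] == "my-service"' # illustrative names
      logs:
        log_record:
          - 'severity_number < SEVERITY_NUMBER_WARN'
# ...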
The Routing Processor routes logs, metrics, or traces to specific exporters. This processor can read a header from an incoming gRPC or plain HTTP request or read a resource attribute, and then direct the trace information to relevant exporters according to the read value.
The Routing Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
# ...
config: |
processors:
routing:
from_attribute: X-Tenant (1)
default_exporters: (2)
- jaeger
table: (3)
- value: acme
exporters: [jaeger/acme]
exporters:
jaeger:
endpoint: localhost:14250
jaeger/acme:
endpoint: localhost:24250
# ...
1 | The HTTP header name for the lookup value when performing the route. |
2 | The default exporters for data whose attribute value is not present in the following table. |
3 | The table that defines which values are to be routed to which exporters. |
Optionally, you can create an attribute_source configuration, which defines where to look for the attribute that you specify in the from_attribute field. The supported values are context for searching the context, including the HTTP headers, and resource for searching the resource attributes.
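For example, to route by a resource attribute instead of an HTTP header, a sketch with an illustrative tenant attribute:
# ...
config: |
  processors:
    routing:
      attribute_source: resource # look up the attribute in the resource attributes
      from_attribute: tenant # illustrative attribute name
      default_exporters: [jaeger]
      table:
        - value: acme
          exporters: [jaeger/acme]
# ...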
The Cumulative-to-Delta Processor converts monotonic, cumulative-sum, and histogram metrics to monotonic delta metrics. You can filter metrics by using the include or exclude fields and specifying strict or regexp metric name matching. This processor does not convert non-monotonic sums and exponential histograms.
The Cumulative-to-Delta Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
# ...
config: |
processors:
cumulativetodelta:
include: (1)
match_type: strict (2)
metrics: (3)
- <metric_1_name>
- <metric_2_name>
exclude: (4)
match_type: regexp
metrics:
- "<regular_expression_for_metric_names>"
# ...
1 | Optional: Configures which metrics to include. When omitted, all metrics, except for those listed in the exclude field, are converted to delta metrics. |
2 | Defines a value provided in the metrics field as a strict exact match or regexp regular expression. |
3 | Lists the metric names, which are exact matches or matches for regular expressions, of the metrics to be converted to delta metrics. If a metric matches both the include and exclude filters, the exclude filter takes precedence. |
4 | Optional: Configures which metrics to exclude. When omitted, no metrics are excluded from conversion to delta metrics. |
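For example, the following sketch converts only a single named metric and leaves all other metrics unchanged; the metric name is an illustrative assumption:
# ...
config: |
  processors:
    cumulativetodelta:
      include:
        match_type: strict
        metrics:
          - system.network.io # illustrative metric name
# ...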
The Group-by-Attributes Processor groups all spans, log records, and metric datapoints that share the same attributes by reassigning them to a Resource that matches those attributes.
The Group-by-Attributes Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
At minimum, configuring this processor involves specifying an array of attribute keys to be used to group spans, log records, or metric datapoints together, as in the following example:
# ...
config: |
  processors:
    groupbyattrs:
      keys: (1)
        - <key1> (2)
        - <key2>
# ...
1 | Specifies attribute keys to group by. |
2 | If a processed span, log record, or metric datapoint contains at least one of the specified attribute keys, it is reassigned to a Resource that shares the same attribute values; and if no such Resource exists, a new one is created. If none of the specified attribute keys is present in the processed span, log record, or metric datapoint, then it remains associated with its current Resource. Multiple instances of the same Resource are consolidated. |
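For example, the following sketch groups telemetry by host; the host.name key is an illustrative choice. Running the processor with an empty keys list is also a common way to compact data, because identical Resource and InstrumentationScope entries are consolidated:
# ...
config: |
  processors:
    groupbyattrs:
      keys:
        - host.name # illustrative attribute key
# ...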
The Transform Processor enables modification of telemetry data according to specified rules written in the OpenTelemetry Transformation Language (OTTL). For each signal type, the processor processes a series of conditions and statements associated with a specific OTTL Context type and then executes them in sequence on incoming telemetry data as specified in the configuration. Each condition and statement can access and modify telemetry data by using various functions, allowing conditions to dictate whether a function is to be executed.
All statements are written in the OTTL.
You can configure multiple context statements for the different signals: traces, metrics, and logs.
The value of the context type specifies which OTTL Context the processor must use when interpreting the associated statements.
The Transform Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
# ...
config: |
processors:
transform:
error_mode: ignore (1)
<trace|metric|log>_statements: (2)
- context: <string> (3)
conditions: (4)
- <string>
- <string>
statements: (5)
- <string>
- <string>
- <string>
- context: <string>
statements:
- <string>
- <string>
- <string>
# ...
1 | Optional: See the following table "Values for the optional error_mode field". |
2 | Indicates a signal to be transformed. |
3 | See the following table "Values for the context field". |
4 | Optional: Conditions for performing a transformation. |
5 | Statements that transform the telemetry data, written in the OTTL. |
# ...
config: |
transform:
error_mode: ignore
trace_statements: (1)
- context: resource
statements:
- keep_keys(attributes, ["service.name", "service.namespace", "cloud.region", "process.command_line"]) (2)
- replace_pattern(attributes["process.command_line"], "password\\=[^\\s]*(\\s?)", "password=***") (3)
- limit(attributes, 100, [])
- truncate_all(attributes, 4096)
- context: span (4)
statements:
- set(status.code, 1) where attributes["http.path"] == "/health"
- set(name, attributes["http.route"])
- replace_match(attributes["http.target"], "/user/*/list/*", "/user/{userId}/list/{listId}")
- limit(attributes, 100, [])
- truncate_all(attributes, 4096)
# ...
1 | Transforms a trace signal. |
2 | Keeps keys on the resources. |
3 | Masks the password value in the process.command_line attribute by replacing the matched string characters with asterisks. |
4 | Performs transformations at the span level. |
Signal Statement | Valid Contexts |
---|---|
trace_statements | resource, scope, span, spanevent |
metric_statements | resource, scope, metric, datapoint |
log_statements | resource, scope, log |
Value | Description |
---|---|
ignore | Ignores and logs errors returned by statements and then continues to the next statement. |
silent | Ignores and doesn’t log errors returned by statements and then continues to the next statement. |
propagate | Returns errors up the pipeline and drops the payload. Implicit default. |