Pipeline

Configuring telemetry pipelines

The Pipeline resource connects targets, tunnelTargetPolicies, subscriptions, outputs, and inputs together. It defines the flow of telemetry data through the cluster.

Basic Configuration

The simplest way to configure a Pipeline is using direct references to targets, subscriptions, and outputs.

apiVersion: operator.gnmic.dev/v1alpha1
kind: Pipeline
metadata:
  name: core-telemetry
spec:
  clusterRef: telemetry-cluster
  enabled: true
  targetRefs:
    - router1
    - router2
  subscriptionRefs:
    - interface-counters
  outputs:
    outputRefs:
      - prometheus-output

A Pipeline can also be configured with label selectors:

apiVersion: operator.gnmic.dev/v1alpha1
kind: Pipeline
metadata:
  name: core-telemetry
spec:
  clusterRef: telemetry-cluster
  enabled: true
  targetSelectors:
    - matchLabels:
        role: core
        env: prod
    - matchLabels:
        role: edge
        env: prod
  subscriptionSelectors:
    - matchLabels:
        type: interface-stats
  outputs:
    outputSelectors:
      - matchLabels:
          type: prometheus
          env: prod

The above example builds a pipeline that includes:

  • Targets with the labels role=core AND env=prod, as well as targets with the labels role=edge AND env=prod
  • Subscriptions with the label type=interface-stats
  • Outputs with the labels type=prometheus AND env=prod

Direct references and label selectors can be combined for the same selection type.

Spec Fields

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| clusterRef | string | Yes | Name of the Cluster to run in |
| enabled | bool | Yes | Whether the pipeline is active |
| targetRefs | []string | No | Direct target references |
| targetSelectors | []LabelSelector | No | Label selectors for targets |
| tunnelTargetPolicyRefs | []string | No | Direct tunnel target policy references |
| tunnelTargetPolicySelectors | []LabelSelector | No | Label selectors for tunnel target policies |
| subscriptionRefs | []string | No | Direct subscription references |
| subscriptionSelectors | []LabelSelector | No | Label selectors for subscriptions |
| outputs.outputRefs | []string | No | Direct output references |
| outputs.outputSelectors | []LabelSelector | No | Label selectors for outputs |
| outputs.processorRefs | []string | No | Direct processor references for outputs (order preserved) |
| outputs.processorSelectors | []LabelSelector | No | Label selectors for output processors (sorted by name) |
| inputs.inputRefs | []string | No | Direct input references |
| inputs.inputSelectors | []LabelSelector | No | Label selectors for inputs |
| inputs.processorRefs | []string | No | Direct processor references for inputs (order preserved) |
| inputs.processorSelectors | []LabelSelector | No | Label selectors for input processors (sorted by name) |

Resource Selection

Direct References

Select specific resources by name:

spec:
  targetRefs:
    - router1
    - router2
    - switch1

Label Selectors

Select resources by labels:

spec:
  targetSelectors:
    - matchLabels:
        role: core
    - matchLabels:
        role: edge

Mixed Selection

Combine refs and selectors (union):

spec:
  # These specific targets
  targetRefs:
    - special-router
  # Plus all targets with this label
  targetSelectors:
    - matchLabels:
        env: production

Enabling/Disabling

Pipelines can be disabled without deletion:

spec:
  enabled: false  # Pipeline is inactive

This removes the pipeline’s contribution to the cluster configuration without deleting the Pipeline resource.

Example: Multi-Output Pipeline

Send data to multiple destinations:

apiVersion: operator.gnmic.dev/v1alpha1
kind: Pipeline
metadata:
  name: multi-output
spec:
  clusterRef: telemetry-cluster
  enabled: true
  targetSelectors:
    - matchLabels:
        tier: critical
  subscriptionRefs:
    - full-telemetry
  outputs:
    outputRefs:
      - prometheus-realtime
      - kafka-archive
      - influxdb-analytics

Example: Input Pipeline

Process data from an external source:

apiVersion: operator.gnmic.dev/v1alpha1
kind: Pipeline
metadata:
  name: kafka-processor
spec:
  clusterRef: telemetry-cluster
  enabled: true
  inputs:
    inputRefs:
      - kafka-telemetry
  outputs:
    outputRefs:
      - prometheus-output

Overlapping Pipelines

Multiple pipelines can share resources. The operator aggregates configuration:

# Pipeline A: Interface metrics to Prometheus
apiVersion: operator.gnmic.dev/v1alpha1
kind: Pipeline
metadata:
  name: interfaces-to-prometheus
spec:
  clusterRef: my-cluster
  enabled: true
  targetSelectors:
    - matchLabels:
        role: core
  subscriptionRefs:
    - interface-counters
  outputs:
    outputRefs:
      - prometheus
---
# Pipeline B: Same targets, BGP metrics to Kafka
apiVersion: operator.gnmic.dev/v1alpha1
kind: Pipeline
metadata:
  name: bgp-to-kafka
spec:
  clusterRef: my-cluster
  enabled: true
  targetSelectors:
    - matchLabels:
        role: core  # Same targets as Pipeline A
  subscriptionRefs:
    - bgp-state
  outputs:
    outputRefs:
      - kafka

Result: Core routers get both subscriptions, each going to different outputs.
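Conceptually, the operator merges both pipelines into a single collector configuration per cluster. A rough sketch of what a role=core router ends up carrying (illustrative only; the operator's actual generated configuration may differ in shape and naming):

```yaml
# Illustrative aggregate, not literal operator output
targets:
  core-router-1:
    subscriptions: [interface-counters, bgp-state]
subscriptions:
  interface-counters:
    outputs: [prometheus]
  bgp-state:
    outputs: [kafka]
```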

Tunnel Target Policies

For gRPC tunnel mode (where devices connect to the collector), use tunnel target policies instead of static targets:

apiVersion: operator.gnmic.dev/v1alpha1
kind: Pipeline
metadata:
  name: tunnel-telemetry
spec:
  clusterRef: tunnel-cluster  # Must have grpcTunnel configured
  enabled: true
  # Tunnel target policies instead of targets
  tunnelTargetPolicyRefs:
    - core-routers
  tunnelTargetPolicySelectors:
    - matchLabels:
        tier: edge
  subscriptionRefs:
    - interface-counters
  outputs:
    outputRefs:
      - prometheus-output

Note: The referenced cluster must have grpcTunnel configured. If not, the pipeline status will show an error.

See TunnelTargetPolicy documentation for details on configuring tunnel target matching.

Adding Processors

Processors transform data before it reaches outputs or after it comes from inputs:

apiVersion: operator.gnmic.dev/v1alpha1
kind: Pipeline
metadata:
  name: processed-telemetry
spec:
  clusterRef: telemetry-cluster
  enabled: true
  targetSelectors:
    - matchLabels:
        env: production
  subscriptionRefs:
    - interface-counters
  outputs:
    outputRefs:
      - prometheus-output
    # Processors applied to output data
    processorRefs:
      - filter-empty-events     # Applied first
      - add-cluster-metadata    # Applied second
    processorSelectors:
      - matchLabels:
          stage: enrichment     # Added after refs, sorted by name

Processor Ordering

Order matters for processors! The final order is:

  1. processorRefs - exact order specified (duplicates allowed)
  2. processorSelectors - sorted by name, deduplicated
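As a worked illustration (processor names here are hypothetical), suppose a pipeline declares:

```yaml
outputs:
  outputRefs:
    - prometheus-output
  processorRefs:
    - drop-empty        # 1st: refs keep their declared order
    - add-metadata      # 2nd
  processorSelectors:
    - matchLabels:
        stage: enrichment   # suppose this matches zulu-enrich and alpha-enrich
```

Under the rules above, the resolved chain would be drop-empty, add-metadata, alpha-enrich, zulu-enrich: the refs come first in their declared order, then the selector matches are appended sorted by name.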

Separate Input/Output Processors

Inputs and outputs have independent processor chains:

spec:
  outputs:
    outputRefs: [prometheus]
    processorRefs:
      - format-for-prometheus
  inputs:
    inputRefs: [kafka-input]
    processorRefs:
      - validate-kafka-format

See the Processor documentation for details on processor types and configuration.

Status

The Pipeline status shows the current state and resolved resource counts:

status:
  status: Active
  targetsCount: 10
  tunnelTargetPoliciesCount: 3
  subscriptionsCount: 3
  inputsCount: 0
  outputsCount: 2
  conditions:
    - type: Ready
      status: "True"
      reason: PipelineReady
      message: "Pipeline has 10 targets, 3 subscriptions, 0 inputs, 2 outputs"
    - type: ResourcesResolved
      status: "True"
      reason: ResourcesResolved
      message: "All referenced resources were successfully resolved"

Status Fields

| Field | Description |
|-------|-------------|
| status | Overall status (Active, Incomplete, Error) |
| targetsCount | Number of resolved static targets |
| tunnelTargetPoliciesCount | Number of resolved tunnel target policies |
| subscriptionsCount | Number of resolved subscriptions |
| inputsCount | Number of resolved inputs |
| outputsCount | Number of resolved outputs |
| conditions | Standard Kubernetes conditions |

Conditions

| Type | Description |
|------|-------------|
| Ready | True when pipeline has required resources |
| ResourcesResolved | True when all referenced resources were found |

Pipeline Readiness

A pipeline is considered ready when it has:

  • A data source: (targets AND subscriptions) OR inputs
  • AND a data destination: outputs

Examples:

  • ✅ Ready: 10 targets, 2 subscriptions, 1 output
  • ✅ Ready: 0 targets, 0 subscriptions, 1 input, 1 output
  • ❌ Incomplete: 10 targets, 0 subscriptions, 0 outputs (missing subscriptions)
  • ❌ Incomplete: 0 targets, 0 inputs, 1 output (missing data source)
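For example, a pipeline that resolves a target but declares no subscriptions or outputs stays incomplete (the status values shown are a sketch based on the fields described above):

```yaml
apiVersion: operator.gnmic.dev/v1alpha1
kind: Pipeline
metadata:
  name: incomplete-example
spec:
  clusterRef: telemetry-cluster
  enabled: true
  targetRefs:
    - router1
# Expected status (sketch): status: Incomplete, because the
# pipeline has a target but no subscriptions and no outputs
```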