
# Inputs and outputs

duckflux follows a single principle for data flow: string by default, schema on demand. Every participant receives and returns a string unless you explicitly define a schema — just like stdin/stdout, the universal interface. This means a workflow with zero input configuration still works, and you only add schema when you need validation or typed access.


All I/O is optional. A minimal workflow with no inputs, no output, and no input mapping on any participant is completely valid:

```yaml
participants:
  greet:
    type: exec
    run: echo "Hello, duckflux!"

flow:
  - greet
```

No schema required. The workflow runs, and the output is whatever greet printed to stdout — a plain string.

Schema is opt-in at every level. When defined, it uses JSON Schema syntax (written in YAML) for validation and typed access.


The top-level inputs block declares what parameters the workflow accepts from its caller.

The simplest form: just declare the names. Every field is treated as a string with no validation:

```yaml
inputs:
  repoUrl:
  branch:
  env:
```

These are accessible in any CEL expression as workflow.inputs.repoUrl, workflow.inputs.branch, workflow.inputs.env.
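As a sketch of how such bare inputs are typically consumed, a participant could forward them through its input mapping (the `clone.sh` script here is hypothetical, not part of the examples above):

```yaml
participants:
  clone:
    type: exec
    run: ./clone.sh   # hypothetical script
    input:
      repo: workflow.inputs.repoUrl     # CEL expression, resolved at execution time
      branch: workflow.inputs.branch
```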

Add JSON Schema properties to each field for type validation, defaults, and documentation:

```yaml
inputs:
  repoUrl:
    type: string
    format: uri
    required: true
    description: "Repository URL to deploy from"
  branch:
    type: string
    default: "main"
  maxRetries:
    type: integer
    minimum: 1
    maximum: 10
    default: 3
  tags:
    type: array
    items:
      type: string
  verbose:
    type: boolean
    default: false
```
| Property | Description |
| --- | --- |
| `type` | JSON Schema type: `string`, `integer`, `number`, `boolean`, `array`, `object` |
| `default` | Value used when the caller does not provide the field |
| `required` | If `true`, the runner rejects the workflow when the field is missing |
| `description` | Human-readable description (used in `duckflux validate` output) |
| `format` | String format hint: `uri`, `date`, `email`, etc. |
| `minimum` / `maximum` | Numeric bounds |
| `items` | Schema for array items |

Inputs can be provided via the CLI in three ways; when the same field is set in more than one place, the highest-priority source wins:

```sh
# Inline flags
duckflux run deploy.flow.yaml --input branch=main --input env=staging

# JSON file
duckflux run deploy.flow.yaml --input-file inputs.json

# Piped JSON via stdin
echo '{"branch": "main", "env": "staging"}' | duckflux run deploy.flow.yaml
```

Resolution priority: `--input` flags > `--input-file` > stdin.


The input field on a participant maps data from the workflow into that participant before it executes. Values are CEL expressions — they can reference workflow.inputs.*, env.*, other step outputs, and any runtime variable.

Pass a single value as the participant’s entire input:

```yaml
participants:
  coder:
    type: exec
    run: ./generate.sh
    input: workflow.inputs.taskDescription
```

The value of workflow.inputs.taskDescription is passed as-is — a string.

Pass multiple named fields:

```yaml
participants:
  coder:
    type: exec
    run: ./generate.sh
    input:
      task: workflow.inputs.taskDescription
      context: reviewer.output.feedback
      repo: workflow.inputs.repoUrl
```

Each key becomes a named input the participant can read. Values are CEL expressions evaluated at execution time.

Inline participants (defined directly in the flow) use the same input field:

```yaml
flow:
  - as: notify
    type: http
    url: https://hooks.example.com/webhook
    method: POST
    input:
      message: coder.output.summary
      branch: workflow.inputs.branch
```

When invoking a reusable participant in the flow, you can override its input mapping for that specific call:

```yaml
participants:
  coder:
    type: exec
    run: ./generate.sh
    input: workflow.inputs.taskDescription

flow:
  - coder:
      input: reviewer.output.feedback # override for this invocation
  - reviewer
```

Each participant produces output accessible as <step>.output in any subsequent CEL expression.

## Default behavior: automatic string and JSON parsing


Without a schema, participant output is a string. The runtime attempts automatic parsing:

  1. If the output is valid JSON → accessible as a map (coder.output.field, coder.output.nested.value)
  2. If not → accessible as a plain string (coder.output)

This means a participant that prints {"approved": true, "score": 9} to stdout makes reviewer.output.approved and reviewer.output.score immediately available — no schema required.
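A minimal sketch of that behavior in a guard (the `review.sh` and `deploy.sh` scripts are hypothetical; `review.sh` is assumed to print the JSON object above):

```yaml
participants:
  reviewer:
    type: exec
    run: ./review.sh   # assumed to print {"approved": true, "score": 9}
  deploy:
    type: exec
    run: ./deploy.sh

flow:
  - reviewer
  - deploy:
      when: reviewer.output.approved && reviewer.output.score > 7
```

No `output` schema is declared anywhere; the field access works purely because the stdout happened to be valid JSON.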

Define an output map on the participant to enable validation. If the step’s output does not match the schema, it is treated as a failure and the onError strategy applies:

```yaml
participants:
  reviewer:
    type: exec
    run: ./review.sh
    output:
      approved:
        type: boolean
        required: true
      score:
        type: integer
        minimum: 0
        maximum: 10
      comments:
        type: string
```

Regardless of whether a schema is defined, step results are accessed the same way in expressions:

```yaml
# in conditions
- if:
    condition: reviewer.output.approved == true

# in guards
- deploy:
    when: reviewer.output.score > 7

# in loop exit conditions
- loop:
    until: reviewer.output.approved == true
    max: 5
    steps:
      - coder
      - reviewer

# in input mappings of later steps
participants:
  notify:
    type: http
    url: https://hooks.example.com/done
    method: POST
    input:
      approved: reviewer.output.approved
      score: reviewer.output.score
```

Beyond .output, each step also exposes execution metadata:

| Variable | Type | Description |
| --- | --- | --- |
| `<step>.status` | string | `success`, `failure`, or `skipped` |
| `<step>.output` | string or map | The step's raw output (auto-parsed if JSON) |
| `<step>.startedAt` | timestamp | When the step started |
| `<step>.finishedAt` | timestamp | When the step finished |
| `<step>.duration` | duration | Execution time |
| `<step>.retries` | int | How many times the step was retried |
| `<step>.error` | string | Error message when `status == "failure"` |
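These metadata fields are usable anywhere a CEL expression is accepted. A sketch, assuming the guard syntax shown earlier and hypothetical `notify-failure` and `deploy` participants:

```yaml
flow:
  - build
  - notify-failure:
      when: build.status == "failure" && build.retries >= 2
  - deploy:
      when: build.status == "success"
```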

The top-level output block defines what the workflow returns to its caller (CLI, API, or parent workflow when used as a sub-workflow).

If output is not defined, the workflow output is the output of the last executed step.
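For example, this sketch (with hypothetical `generate.sh` and `review.sh` scripts) returns whatever `reviewer` printed, because no top-level `output` block is defined:

```yaml
participants:
  coder:
    type: exec
    run: ./generate.sh
  reviewer:
    type: exec
    run: ./review.sh

flow:
  - coder
  - reviewer
# no output block: the workflow returns reviewer's output to the caller
```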

Map the entire workflow output to a single value:

```yaml
output: reviewer.output.summary
```

Map individual fields from step outputs:

```yaml
output:
  approved: reviewer.output.approved
  code: coder.output.code
  summary: reviewer.output.summary
  testResult: tests.status
```

Add a schema block alongside map to validate the workflow’s return value:

```yaml
output:
  schema:
    approved:
      type: boolean
      required: true
    code:
      type: string
    summary:
      type: string
  map:
    approved: reviewer.output.approved
    code: coder.output.code
    summary: reviewer.output.summary
```

When schema is provided, the runtime validates the mapped output before returning it to the caller. Validation failure is treated as a workflow-level error.


The output of each step is implicitly passed as input to the next sequential step — analogous to Unix pipes. The chained value is accessible inside the receiving participant via its input variable.

```yaml
flow:
  - type: exec
    run: echo '{"score": 9}'
  - as: notify
    type: http
    url: https://hooks.example.com/done
    method: POST
    # input.score is the piped output from the previous step
```

When a participant has both a chained input and an explicit input mapping, the runtime merges them:

| Chained type | Explicit type | Result |
| --- | --- | --- |
| map | map | Merged; explicit keys take precedence |
| string | string | Explicit takes precedence |
| incompatible types | | Runtime error |

Control-flow constructs chain their own output:

| Construct | Chain output |
| --- | --- |
| `if/then/else` | Output of the last step in the executed branch |
| `loop` | Output of the last step of the last iteration |
| `parallel` | Array of outputs from all branches, in declaration order |
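As an illustrative sketch of the map-merge rule: if the previous step prints `{"score": 9, "branch": "dev"}` and the next step declares an explicit `branch` key, the explicit value wins while the chained `score` key remains available:

```yaml
flow:
  - type: exec
    run: echo '{"score": 9, "branch": "dev"}'
  - as: notify
    type: http
    url: https://hooks.example.com/done
    method: POST
    input:
      branch: workflow.inputs.branch   # overrides the chained "branch" key
    # input.score is still available from the chained map
```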

duckflux has four distinct I/O boundaries:

| Level | Field | Purpose |
| --- | --- | --- |
| Workflow inputs | `inputs:` (top-level) | Parameters the workflow accepts from the caller |
| Participant inputs | `input:` (on a participant) | Data mapped into a specific step before execution |
| Participant outputs | `output:` (on a participant) | Schema for validating a step's return value |
| Workflow output | `output:` (top-level) | Final result returned to the workflow's caller |

- Nothing defined → string in, string out; no validation
- Bare keys (inputs only) → named strings, no type validation
- Mapping only → data passthrough, no schema validation
- With schema → type/constraint validation via JSON Schema

A deployment workflow that uses all four I/O levels:

```yaml
id: deploy
name: Deployment Pipeline
version: "1"

# Workflow inputs — with schema
inputs:
  branch:
    type: string
    default: "main"
  env:
    type: string
    required: true
    description: "Target environment: staging or production"
  maxRetries:
    type: integer
    default: 3
    minimum: 1
    maximum: 5

participants:
  build:
    type: exec
    run: ./build.sh
    # Participant input — structured mapping
    input:
      branch: workflow.inputs.branch
      env: workflow.inputs.env
    # Participant output — with schema
    output:
      artifact:
        type: string
        required: true
      version:
        type: string
  tests:
    type: exec
    run: npm test
    input: build.output.artifact # single string passthrough
  deploy:
    type: exec
    run: ./deploy.sh
    input:
      artifact: build.output.artifact
      version: build.output.version
      env: workflow.inputs.env
    output:
      url:
        type: string
      deployedAt:
        type: string

flow:
  - build
  - tests
  - deploy:
      when: tests.status == "success"

# Workflow output — structured mapping with schema
output:
  schema:
    url:
      type: string
      required: true
    version:
      type: string
    env:
      type: string
  map:
    url: deploy.output.url
    version: build.output.version
    env: workflow.inputs.env
```