This document describes how to write and debug Windmill scripts, flows, schedules, triggers, and apps. It is tool-agnostic and intended for use with any AI coding assistant.
Combine these rules with the user's global code style preferences (FP patterns, naming, TypeScript conventions, etc.).
OSS-first disclaimer
Prefer solutions that work with Windmill open-source by default.
When a suggestion relies on an explicitly Enterprise-only feature (for example: the Node.js runtime, `//nodejs` / `//npm` modes, SSO-only features, or managed-cloud-only features), label it as Enterprise-only and, when possible, suggest an OSS-compatible alternative.
When troubleshooting or looking for current behavior, use these first:
- Windmill Docs: https://www.windmill.dev/docs/
- GitHub Issues: https://github.com/windmill-labs/windmill/issues
- Community Q&A: https://questions.windmill.dev/
For data-related features (Datatables, DuckLake, Data Studio, persistent storage), see the Windmill Docs linked above.
Prefer answers grounded in these sources over guessing or outdated assumptions.
Scripts, flows, and apps run in isolated Windmill workers with these conventions:
- Scripts must export a `main` function (do not call it yourself).
- Dependencies are installed automatically; never show install instructions.
- Credentials and configuration are stored as resources and passed as parameters.
- The `windmill-client` library provides APIs for interacting with the platform.
- Use `wmill resource-type list --schema` to discover available resource types.
Use TypeScript on Bun unless explicitly asked otherwise. Bun is fast and supports the npm ecosystem.
// Bun script: export a single async main
export async function main(arg1: string, arg2: number): Promise<Result> {
// orchestrate IO, call pure helpers
return { /* ... */ };
}
- `main` arguments define the input UI and JSON schema.
- Type annotations drive the UI form and pre-validation.
- Return value becomes the script's output (available to downstream steps in flows).
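A minimal concrete instance of this pattern (all names are illustrative, not a fixed API):

```typescript
// Illustrative Bun script: typed arguments drive the auto-generated input form.
// `normalize` is a pure helper; `main` orchestrates and returns the step output.
type Result = { greeting: string; length: number };

function normalize(name: string): string {
  return name.trim().toLowerCase();
}

export async function main(name: string, repeat: number = 1): Promise<Result> {
  const greeting = Array(repeat).fill(`hello ${normalize(name)}`).join(", ");
  return { greeting, length: greeting.length };
}
```

Because `main` is just an exported function, pure helpers like `normalize` can be unit-tested locally without any Windmill runtime.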
When credentials are needed, add a parameter typed via the RT namespace:
export async function main(stripe: RT.Stripe, amount: number) {
// stripe.token, stripe.*, etc.
}
Only include resource parameters when actually required.
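As a sketch of working with a typed resource, here is a hypothetical `Postgresql` resource shape (the real schema may differ; verify with `wmill resource-type list --schema`) and a pure helper that derives a DSN from it:

```typescript
// Hypothetical resource shape for illustration; the real RT.Postgresql schema
// may differ. Verify with `wmill resource-type list --schema`.
type Postgresql = {
  host: string;
  port: number;
  user: string;
  password: string;
  dbname: string;
};

// Pure helper: derive a connection string from the resource object.
function toDsn(db: Postgresql): string {
  return `postgres://${db.user}:${db.password}@${db.host}:${db.port}/${db.dbname}`;
}

export async function main(db: Postgresql) {
  // Connect using toDsn(db); return only non-secret info.
  return { host: db.host, dbname: db.dbname };
}
```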
import * as wmill from "windmill-client";
// Resources
wmill.getResource(path?: string, undefinedIfEmpty?: boolean): Promise<any>
wmill.setResource(value: any, path?: string, initializeToTypeIfNotExist?: string): Promise<void>
// State (persistent across executions)
wmill.getState(): Promise<any>
wmill.setState(state: any): Promise<void>
// Variables
wmill.getVariable(path: string): Promise<string>
wmill.setVariable(path: string, value: string, isSecretIfNotExist?: boolean, descriptionIfNotExist?: string): Promise<void>
// Script execution
wmill.runScript(path?: string | null, hash_?: string | null, args?: Record<string, any> | null, verbose?: boolean): Promise<any>
wmill.runScriptAsync(path: string | null, hash_: string | null, args: Record<string, any> | null, scheduledInSeconds?: number | null): Promise<string>
wmill.waitJob(jobId: string, verbose?: boolean): Promise<any>
wmill.getResult(jobId: string): Promise<any>
wmill.getRootJobId(jobId?: string): Promise<string>
// S3 (if configured)
wmill.loadS3File(s3object: S3Object, s3ResourcePath?: string | undefined): Promise<Uint8Array | undefined>
wmill.writeS3File(s3object: S3Object | undefined, fileContent: string | Blob, s3ResourcePath?: string | undefined): Promise<S3Object>
// Flow operations
wmill.setFlowUserState(key: string, value: any, errorIfNotPossible?: boolean): Promise<void>
wmill.getFlowUserState(key: string, errorIfNotPossible?: boolean): Promise<any>
wmill.getResumeUrls(approver?: string): Promise<{approvalPage: string, resume: string, cancel: string}>
type S3Object = { s3: string }; // path within configured bucket
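For example, `getState`/`setState` enable incremental jobs (e.g., polling since a cursor). A hypothetical pure cursor helper that would sit between the `wmill.getState()` and `wmill.setState()` calls:

```typescript
// Hypothetical cursor logic for an incremental script. In Windmill you would use:
//   const prev = await wmill.getState();   // null/undefined on first run
//   const next = nextCursor(prev);
//   await wmill.setState(next);
type Cursor = { lastId: number };

export function nextCursor(prev: Partial<Cursor> | null | undefined): Cursor {
  // Default to 0 when no state has been persisted yet.
  return { lastId: (prev?.lastId ?? 0) + 1 };
}
```

Keeping the cursor logic pure makes it trivially unit-testable outside Windmill.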
Add as comment on line 1:
| Mode | Header | Notes |
|---|---|---|
| Default | (none) | Pre-bundled with Bun bundler |
| No bundling | `//nobundling` | Skip pre-bundling |
| Native (v8) | `//native` | Lightweight, fewer features |
| Node.js | `//nodejs` | Enterprise-only Node.js runtime |
| npm install | `//npm` | Enterprise-only npm install mode |
Prefer OSS-compatible modes unless there is a clear need for Enterprise-only behavior.
| Language | Entrypoint |
|---|---|
| Deno | export async function main(...); use npm: prefix for npm imports |
| Go | package inner; func main(...) (T, error) |
| Bash | No shebang; args via "$1", "$2", etc. |
| Rust | fn main(...) -> anyhow::Result<T>; deps via //! cargo block |
| PostgreSQL | $1::type, $2::type; name with -- $1 name |
| MySQL | ? placeholders; name with -- ? name (type) |
| BigQuery | @name1, @name2; name with -- @name1 (type) |
| GraphQL | Add arguments as query parameters |
Flows are DAGs defined in OpenFlow YAML. Each flow lives in a folder ending with .flow containing a flow.yaml definition.
summary: "Brief one-line description"
description: "Optional detailed description"
value:
modules: [] # Array of workflow steps
# Optional:
failure_module: {}
preprocessor_module: {}
same_worker: false
concurrent_limit: 0
concurrency_key: "string"
concurrency_time_window_s: 0
custom_debounce_key: "key"
debounce_delay_s: 0
skip_expr: "javascript_expression"
cache_ttl: 0
priority: 0
early_return: "javascript_expression"
schema:
type: object
properties: {}
required: []
id: unique_step_id
value:
type: rawscript
content: '!inline inline_script_0.inline_script.ts'
language: bun
input_transforms:
param1:
type: javascript
expr: "flow_input.name"
Inline script files live in the same .flow folder.
id: step_id
value:
type: script
path: "f/folder/script_name" # or "u/user/script_name" or "hub/script_path"
input_transforms:
param_name:
type: javascript
expr: "results.previous_step"
id: step_id
value:
type: flow
path: "f/folder/flow_name"
input_transforms:
param_name:
type: static
value: "fixed_value"
id: loop_step
value:
type: forloopflow
iterator:
type: javascript
expr: "flow_input.items"
skip_failures: false
parallel: true
parallelism: 4
modules:
- id: loop_body
value:
type: rawscript
content: |
export async function main(iter: { value: any; index: number }) {
return iter.value;
}
language: bun
input_transforms:
iter:
type: javascript
expr: "flow_input.iter"
id: while_step
value:
type: whileloopflow
skip_failures: false
parallel: false
parallelism: 1
modules:
- id: condition_check
value:
type: rawscript
content: |
export async function main(state: any) {
// Return true to continue, false to stop
return state.counter < 10;
}
language: bun
input_transforms:
state:
type: javascript
expr: "results.previous_step ?? flow_input.initial_state"
id: branch_step
value:
type: branchone
branches:
- summary: "Condition 1"
expr: "results.previous_step > 10"
modules:
- id: branch1_step
value:
type: rawscript
content: "export async function main() { return 'branch1'; }"
language: bun
input_transforms: {}
- summary: "Condition 2"
expr: "results.previous_step <= 10"
modules:
- id: branch2_step
value:
type: rawscript
content: "export async function main() { return 'branch2'; }"
language: bun
input_transforms: {}
default:
- id: default_step
value:
type: rawscript
content: "export async function main() { return 'default'; }"
language: bun
input_transforms: {}
id: parallel_step
value:
type: branchall
parallel: true
branches:
- summary: "Branch A"
skip_failure: false
modules:
- id: branch_a_step
value:
type: rawscript
content: "export async function main() { return 'A'; }"
language: bun
input_transforms: {}
- summary: "Branch B"
skip_failure: true
modules:
- id: branch_b_step
value:
type: rawscript
content: "export async function main() { return 'B'; }"
language: bun
input_transforms: {}
id: identity_step
value:
type: identity
flow: false # true if representing a sub-flow
- `flow_input.property_name` — access workflow inputs.
- `results.step_id` — access outputs from previous steps.
- `results.step_id.property` — access specific properties.
- `flow_input.iter.value` / `flow_input.iter.index` — loop iteration context.
input_transforms:
database:
type: static
value: "$res:f/folder/my_database"
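To make the evaluation model concrete: a `javascript` expr runs against a context containing `flow_input` and `results`. The local sketch below only illustrates that context shape; Windmill evaluates exprs itself, and this is not its implementation:

```typescript
// Illustration only: mimic the context an input-transform expr sees.
type ExprContext = {
  flow_input: Record<string, unknown>;
  results: Record<string, unknown>;
};

function evalExpr(ctx: ExprContext, expr: string): unknown {
  // Windmill evaluates exprs server-side; Function() here is just a local demonstration.
  return Function("flow_input", "results", `return (${expr});`)(
    ctx.flow_input,
    ctx.results,
  );
}
```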
These can be added to any module alongside id and value.
id: step_id
value: { ... }
# Skip this step conditionally
skip_if:
expr: "results.previous_step.should_skip"
# Stop the flow after this step
stop_after_if:
expr: "results.step_id.should_stop"
skip_if_stopped: true
error_message: "Custom stop message"
# Stop loop after current iteration
stop_after_all_iters_if:
expr: "results.step_id.done"
skip_if_stopped: false
# Delay before execution
sleep:
type: javascript
expr: "flow_input.delay_seconds"
# Continue flow even if this step fails
continue_on_error: true
# Clean up results after use (memory optimization)
delete_after_use: true
# Cache results for specified seconds
cache_ttl: 3600
# Step timeout in seconds
timeout: 300
# Execution priority (higher = sooner)
priority: 10
# Mock the step (for testing)
mock:
enabled: true
return_value: { mocked: true }
retry:
constant:
attempts: 3
seconds: 5
# OR
# exponential:
# attempts: 5
# multiplier: 2
# seconds: 1
# random_factor: 10
suspend:
required_events: 1
timeout: 86400 # seconds
resume_form:
schema:
type: object
properties:
approved:
type: boolean
reason:
type: string
user_auth_required: true
user_groups_required:
type: static
value: ["admin", "approvers"]
self_approval_disabled: false
hide_cancel: false
continue_on_disapprove_timeout: false
value:
failure_module:
id: failure
value:
type: rawscript
content: |
export async function main(error: { message: string; step_id: string; name: string; stack: string }) {
console.log("Flow failed:", error.message);
return error;
}
language: bun
input_transforms: {}
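A failure module receives the error object shown above; a small pure formatter (hypothetical helper) can turn it into an alert message for a notification channel:

```typescript
// Shape matches the error argument in the failure_module example above.
type FlowError = { message: string; step_id: string; name: string; stack?: string };

// Hypothetical helper: format a flow error for a notification channel.
function formatFlowError(e: FlowError): string {
  return `[${e.step_id}] ${e.name}: ${e.message}`;
}
```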
value:
preprocessor_module:
id: preprocessor
value:
type: rawscript
content: |
export async function main() {
console.log("Flow starting...");
return { initialized: true };
}
language: bun
input_transforms: {}
schema:
type: object
properties:
email:
type: string
format: email
description: "User email"
count:
type: integer
minimum: 1
maximum: 100
database:
type: object
format: "resource-postgresql"
required: ["email"]
order: ["email", "count", "database"]
Schedules run scripts or flows on a cron basis.
# schedule.yaml
path: "f/folder/my_script"
schedule: "0 */6 * * *" # every 6 hours
timezone: "America/Denver"
args:
param1: "value"
enabled: true
Common cron patterns:
- `* * * * *` — every minute
- `0 * * * *` — every hour
- `0 0 * * *` — daily at midnight
- `0 0 * * 0` — weekly on Sunday
- `0 0 1 * *` — monthly on the 1st
CLI:
wmill schedule list
wmill schedule push <file>
Triggers allow external systems to start scripts or flows. Each script and flow has a unique webhook URL, and custom triggers can be configured.
- Sync endpoint: waits for completion, returns result.
- Async endpoint: returns job ID immediately.
- Arguments can be passed as JSON body or query params.
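The URL shapes below follow the commonly documented `/api/w/<workspace>/jobs/...` pattern; verify them against your instance's docs before relying on them:

```typescript
// Sketch: build sync/async webhook URLs for a deployed script or flow.
// Assumed URL shape (verify against your Windmill instance):
//   async: <base>/api/w/<workspace>/jobs/run/p/<path>
//   sync:  <base>/api/w/<workspace>/jobs/run_wait_result/p/<path>
function webhookUrl(
  base: string,
  workspace: string,
  path: string,
  mode: "sync" | "async",
): string {
  const kind = mode === "sync" ? "run_wait_result" : "run";
  return `${base}/api/w/${workspace}/jobs/${kind}/p/${path}`;
}
```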
Triggers can also be configured for:
- HTTP routes (custom paths)
- Email-based triggers
- Other supported event sources
See Windmill “Triggers” docs for details.
Windmill Apps are schema-driven UIs (defined in app.yaml) that sit on top of scripts and flows. Use them primarily for internal tools, admin panels, and operational dashboards.
- Each app lives in a folder ending with `.app/`, for example:

  f/my-dashboard.app/
  ├── app.yaml                          # App definition (components, layout, wiring, policy)
  └── inline_script_0.inline_script.ts  # Optional inline scripts

- The app is fully declarative:
  - Components: `id`, `type`, `configuration`, `position`.
  - Runnables: connections to scripts/flows.
  - Bindings: expressions wiring component outputs to runnables and other components.
  - Policy: execution identity and allowed runnables.
- Prefer managing apps in Git via `wmill sync pull`/`wmill sync push` rather than treating them as UI-only artifacts.
- Apps use a component library (tables, forms, charts, buttons, layout, etc.). Each component exposes inputs and outputs.
- Runnables connect components to backend logic, typically by script/flow `path`:

  runnables:
    - id: bg_fetch_users
      type: runnableByPath
      path: "f/data/fetch_users"
      runOnStart: true
      inputs: {}

- Components can bind to runnable results:

  components:
    - id: table_users
      type: tablecomponent
      configuration:
        source:
          type: connected
          connection:
            componentId: bg_fetch_users
            path: result

- Data sources available in apps:
  - `ctx.*` (context): user email, username, workspace, URL query params.
  - `state.*`: client-side state that frontend scripts can read/write.
  - Component outputs (e.g., `table_users.selectedRow`).
Keep business logic in scripts/flows; treat Apps as wiring and presentation.
Apps support small frontend scripts (browser-side JavaScript) to control components and state.
Examples:
// Increment a counter in global state
state.counter = (state.counter ?? 0) + 1;
// Set a component value
setValue('text_input_a', 'Hello');
// Navigate inside the app
goto('/another-page');
setTab('tabs_1', 'Details');
Use frontend scripts for:
- Simple UI logic and navigation.
- Shaping data returned from runnables.
- Managing global app state (filters, flags, view modes).
Avoid putting heavy business logic into frontend scripts.
The policy section of app.yaml controls who the app runs as and what it may execute.
Typical fields:
policy:
on_behalf_of: "u/admin"
execution_mode: viewer
on_behalf_of_email: "admin@example.com"
triggerables_v2:
"bg_fetch_users:rawscript/<hash>":
static_inputs:
database: "$res:f/data/my_pg"
one_of_inputs: {}
allow_user_resources: []
Key points:
- `execution_mode` and `on_behalf_of` determine execution identity (which user the app runs as).
- `triggerables_v2` whitelists which runnables/components can execute which scripts and with which static inputs.
- Inline scripts referenced here include a hash of the inline content; stale hashes can cause runtime failures.
Safe pattern:
- Use the visual App editor once to create or scaffold an app so Windmill generates a correct `policy`.
- Then use `wmill sync pull` and manage `app.yaml` in Git, only modifying `policy` deliberately.
Two main ways to attach logic to an app:
- External scripts (recommended)
  - Logic lives in regular Windmill scripts (`f/.../script_name`) or flows.
  - Runnables reference scripts by `path`.
  - Benefits:
    - Testable and reusable.
    - Simpler policy; no inline-script hash management.
    - App YAML focuses on layout and wiring.
- Inline scripts
  - Logic lives in the `.app` folder as `inline_script_X.inline_script.ts`.
  - Runnables reference inline scripts; `triggerables_v2` contains hashes.
  - More fragile for programmatic generation and harder to share between apps.
Default to external scripts; use inline scripts only when tight coupling to the app is needed.
Because app.yaml is just YAML, apps can be edited or generated without the drag-and-drop editor:
- Recommended workflow:
  - Use the visual editor for initial scaffolding and policy generation.
  - Pull the app into your repo with `wmill sync pull`.
  - Hand-edit or generate:
    - Components (types, positions, configuration).
    - Runnables referencing scripts/flows by `path`.
  - Push with `wmill app push` or `wmill sync push`.
Caveats:
- Be cautious editing `policy` and `triggerables_v2` by hand.
- Prefer external scripts over inline scripts when generating apps.
- The app schema and component library evolve; keep any generators tied to a specific Windmill version.
wmill app list # List apps
wmill app show <path> # Show app details
wmill app push <path> # Push app folder (ends with .app)
wmill sync pull # Pull scripts/flows/apps/config into Git
wmill sync push # Push local changes to workspace
This section covers local development and test-driven loops from code → test → deploy to Staging/Prod.
- Install CLI and check version:

  npm install -g @windmill-labs/windmill-cli
  wmill version

- Create a local repo and bind a workspace:

  mkdir windmill-workspace && cd windmill-workspace
  git init   # or clone existing repo
  wmill workspace add staging <workspace_name_or_id> <base_url>
  wmill workspace switch staging

- Pull existing resources into Git (everything except secret values):

  wmill sync pull
This pulls scripts, flows, apps, schedules, triggers, resources, and variables into your repo in their YAML/file forms. Secrets (actual credential values) stay in Windmill; Git only tracks definitions.
- Create or edit a script:

  wmill script bootstrap f/my_space/example_script bun
  # or create f/my_space/example_script.bun.ts manually

- Write FP-style code plus tests (pure helpers + Bun/Node tests if desired).
- Generate metadata and lock:

  wmill script generate-metadata f/my_space/example_script.bun.ts

- Run and debug in Staging:

  wmill script run f/my_space/example_script

- Sync to Staging:

  wmill sync push
- Create or edit a flow:

  wmill flow bootstrap f/my_space/groundbreaking_flow
  # or create f/my_space/groundbreaking_flow.flow/flow.yaml

- Design the DAG with `rawscript` and `script` steps, wiring inputs/outputs via `flow_input.*` and `results.step_id`.
- Generate locks:

  wmill flow generate-locks --yes

- Run and debug in Staging:

  wmill flow run f/my_space/groundbreaking_flow

- Sync to Staging:

  wmill sync push
- Scaffold in the App editor once (for initial layout and policy).
- Pull locally:

  wmill sync pull

- Edit `app.yaml` and referenced scripts:
  - Prefer runnables pointing to external scripts by `path`.
  - Adjust layout, bindings, frontend scripts.
  - Avoid manual `policy` edits unless you understand the implications.
- Push app changes:

  wmill app push f/my-dashboard.app
  # or wmill sync push

- Test in Staging via the UI.
To run scripts locally with mocked variables/resources:
- Create `mocked-api.json`:

  {
    "variables": { "MY_VAR": "value" },
    "resources": { "f/data/my_pg": { "dsn": "postgres://..." } }
  }

- Point the client at it:

  export WM_MOCKED_API_FILE=./mocked-api.json

Now `getVariable` and `getResource` read from this file for local tests.
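A tiny resolver matching that mocked-api.json shape can also back pure unit tests that never touch windmill-client (hypothetical helper, not part of the Windmill API):

```typescript
// Mirrors the mocked-api.json shape used with WM_MOCKED_API_FILE.
type MockedApi = {
  variables?: Record<string, string>;
  resources?: Record<string, unknown>;
};

// Hypothetical local resolver for unit tests.
function resolveMock(
  mock: MockedApi,
  kind: "variable" | "resource",
  path: string,
): unknown {
  return kind === "variable" ? mock.variables?.[path] : mock.resources?.[path];
}
```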
- Staging workspace is linked to a Git repo (via Git sync or CI). All non-secret state (scripts, flows, apps, schedules, triggers, resource/variable definitions) is tracked in Git; secrets stay only in Windmill.
- After validating in Staging, merge `staging` → `main` (or the prod branch).
- In CI:

  wmill workspace switch prod
  wmill sync push

- Optionally run smoke tests in Prod using `wmill script run` / `wmill flow run`, and verify critical apps.
Keep workspace-specific configuration in resources/variables so the same code works in both Staging and Prod.
wmill init
wmill workspace add
wmill workspace switch <name>
wmill version
wmill upgrade
wmill script generate-metadata
wmill script push <file>
wmill script run <path>
wmill script show <path>
wmill script list
wmill flow generate-locks --yes
wmill flow push <path>
wmill flow run <path>
wmill flow show <path>
wmill flow list
wmill sync pull
wmill sync push
wmill dev
wmill resource list
wmill resource show <path>
wmill resource-type list --schema
wmill variable list
wmill variable show <path>
wmill schedule list
wmill schedule push <file>
wmill trigger list
wmill trigger push <file>
- Keep `main` small; orchestrate IO and call pure helpers.
- Use explicit types for inputs and outputs.
- Avoid `any` except at boundaries.
- Use resources (`RT.*`) only when credentials are needed.
- Do not include package install commands.
- Prefer OSS-compatible runtimes/modes and label Enterprise-only usage.
- Use descriptive, unique step IDs.
- Chain steps via `results.step_id`.
- Add a `failure_module` for critical workflows.
- Use `parallel: true` for independent loop iterations.
- Always define a top-level `schema`.
- Keep inline scripts small.
- Use `retry` for external services.
- Use `suspend` for human-in-the-loop steps.
- Treat Apps as wiring/UI; keep logic in scripts/flows.
- Use external scripts referenced by `path` for most logic.
- Let the visual editor generate initial `policy`, then manage in Git.
- Be explicit about execution identity (`execution_mode`, `on_behalf_of`).
- Use frontend scripts only for UI behavior and light data shaping.
- Check job logs and error messages first.
- Isolate side-effectful code from pure logic.
- Use `wmill script show` / `wmill flow show` / `wmill app show` to inspect deployed artifacts.
- Search docs, issues, and community Q&A for known problems.
- Prefer small, FP-style refactors that clarify data flow.