Frees memory by removing a dataset from the workflow environment. Useful in long-running workflows or when working with large datasets to prevent excessive memory usage.
## When to Use

- You no longer need a dataset after processing.
- Large datasets have been used and can be safely discarded.
- You want to reduce the memory footprint during complex or multi-step workflows.
## How It Works

1. Receives a reference to the workflow interpreter via `_interpreter`.
2. Calls the interpreter's `unsetDataset()` method to remove the specified dataset from memory.
3. Returns empty `data`, `pdv`, and `extras` structures with `outputType` set to `'work'`.
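Conceptually, the step body amounts to a single call into the interpreter followed by an empty result. The sketch below is a minimal illustration, assuming a Python step function and that `unsetDataset()` accepts the dataset name; the function signature and the dictionary used as the result container are illustrative, not the engine's actual API.

```python
# Minimal sketch of the release step's logic (assumed Python step function).
# Only `_interpreter` and `unsetDataset()` come from the description above;
# everything else here is illustrative.

def release(dataset: str, _interpreter) -> dict:
    """Remove `dataset` from the interpreter's in-memory store."""
    # Delegate the actual memory release to the workflow interpreter.
    _interpreter.unsetDataset(dataset)

    # The step produces no data of its own; it only signals completion.
    return {
        "data": [],
        "pdv": [],
        "extras": [],
        "outputType": "work",
    }
```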
## Parameters

**Required**

- `dataset` (string) – The name of the dataset to remove from memory.

**Special**

- `_interpreter` – Automatically passed by the workflow orchestrator; used internally to manage dataset memory.
## Input Requirements

- The dataset to be released must exist in the workflow environment.
- No other data is required; this step does not modify any dataset.
## Output

- `data`: empty array
- `pdv`: empty array
- `extras`: empty array
- `outputType`: `'work'`
## Example Usage

```yaml
steps:
  - release:
      dataset: largeCustomerData
```

**Explanation:** Removes `largeCustomerData` from memory to free resources for subsequent workflow steps.
## Notes & Best Practices

- Only release datasets that are no longer needed; releasing an in-use dataset will break downstream steps.
- Ideal for workflows handling multiple large datasets or iterative processing.
- Can be combined with intermediate result storage if you need to persist essential outputs before releasing memory, as sketched below.
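For example, a workflow might persist a distilled result before discarding the large intermediate it was derived from. The `save` step and its parameters below are placeholders for whatever storage step your engine provides; only `release` is documented on this page.

```yaml
steps:
  # Persist the distilled output you still need.
  # (Step name and parameters are illustrative placeholders.)
  - save:
      dataset: customerSummary
      target: results/customer_summary.parquet

  # Then free the large intermediate dataset that is no longer required.
  - release:
      dataset: largeCustomerData
```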