DAZL Documentation | Data Analytics A-to-Z Processing Language


loadInline

Category: data management


Purpose

Loads data directly from inline definitions into the workflow pipeline. This step is ideal for small datasets, testing, and prototyping, or whenever you want to define data explicitly within the workflow YAML.

When to Use

  • Rapidly test workflow logic without external datasets
  • Provide small sample datasets for demonstrations or prototypes
  • Initialize chart, summary, or frequency steps with inline data
  • Seed workflows with predefined static data

How It Works

  1. Receives an array of records directly via the data parameter in the workflow YAML.
  2. Wraps the provided dataset into the standard pipeline structure (data, pdv, extras).
  3. Returns the dataset as an in-memory object that downstream steps can reference.
  4. Tracks basic metadata such as the number of records loaded.
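
For example, a minimal sketch of this flow (the field names and the tiny alias are illustrative):

steps:
  - loadInline:
      data:
        - {id: 1, label: 'a'}
        - {id: 2, label: 'b'}
      output: tiny

Given the output structure described below, this would yield an in-memory object of the form {"data": [...the two records...], "pdv": {}, "extras": {"record_count": 2}, "outputType": "work"}, which later steps can reference as tiny.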

Parameters

Required

  • data (array) — Array of associative arrays representing dataset records. Each record is a map of field names to values.

Optional

  • output (string) — Alias for referencing the dataset in later workflow steps.
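
A brief sketch showing both parameters together (the field names and the regionSales alias are illustrative):

steps:
  - loadInline:
      data:                  # required: array of records, one map per row
        - {region: 'East', sales: 120}
        - {region: 'West', sales: 95}
      output: regionSales    # optional: alias that later steps use to reference this dataset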

Security Features

  • Only operates on data explicitly provided in the workflow YAML.
  • No external data access, evaluation, or execution is performed.

Input Requirements

  • data must be a valid array of associative arrays.
  • Each record should have consistent fields to ensure proper handling by downstream steps.
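
For example (values are illustrative):

# Consistent: every record defines the same fields
- {age: 22, income: 38000}
- {age: 25, income: 45000}

# Inconsistent: the second record omits 'income', which downstream
# steps may not handle cleanly
- {age: 22, income: 38000}
- {age: 25}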

Output

Data

  • Returns the provided inline dataset as-is.

PDV

  • Preserves any existing metadata (pdv) from the pipeline input.

Extras

  • Includes a record_count indicating the number of rows loaded.

Output Structure

Key          Description
data         Array of dataset records
pdv          Metadata about columns (passed through from input)
extras       Record count and optional diagnostics
outputType   "work" — signals an in-memory dataset

Example Usage

steps:
  - loadInline:
      data:
        # Young Segment
        - {age: 22, income: 38000, spend: 800,  segment: 'Young'}
        - {age: 25, income: 45000, spend: 1200, segment: 'Young'}
        - {age: 29, income: 56000, spend: 1800, segment: 'Young'}
        - {age: 31, income: 60000, spend: 2000, segment: 'Young'}

        # Mid Segment
        - {age: 34, income: 67000, spend: 2100, segment: 'Mid'}
        - {age: 38, income: 74000, spend: 2600, segment: 'Mid'}
        - {age: 41, income: 79000, spend: 2800, segment: 'Mid'}
        - {age: 44, income: 83000, spend: 3000, segment: 'Mid'}

        # Senior Segment
        - {age: 46, income: 87000, spend: 3100, segment: 'Senior'}
        - {age: 50, income: 95000, spend: 3700, segment: 'Senior'}
        - {age: 55, income: 102000, spend: 4200, segment: 'Senior'}
        - {age: 60, income: 110000, spend: 4800, segment: 'Senior'}

      output: testData

  - chart:
      dataset: testData
      type: bubble
      x_axis: income
      y_axis: spend
      z_axis: age
      series: segment

Example Output

{
  "data": [
    {"age":22,"income":38000,"spend":800,"segment":"Young"},
    {"age":25,"income":45000,"spend":1200,"segment":"Young"},
    {"age":29,"income":56000,"spend":1800,"segment":"Young"},
    {"age":31,"income":60000,"spend":2000,"segment":"Young"},
    {"age":34,"income":67000,"spend":2100,"segment":"Mid"},
    {"age":38,"income":74000,"spend":2600,"segment":"Mid"},
    {"age":41,"income":79000,"spend":2800,"segment":"Mid"},
    {"age":44,"income":83000,"spend":3000,"segment":"Mid"},
    {"age":46,"income":87000,"spend":3100,"segment":"Senior"},
    {"age":50,"income":95000,"spend":3700,"segment":"Senior"},
    {"age":55,"income":102000,"spend":4200,"segment":"Senior"},
    {"age":60,"income":110000,"spend":4800,"segment":"Senior"}
  ],
  "pdv": {},
  "extras": {"record_count":12},
  "outputType": "work"
}
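
Note that pdv is empty here because loadInline is the first step in this workflow; any column metadata already present in the pipeline input would be passed through unchanged.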

Related Documentation

  • load step – Load datasets from external sources
  • filter step – Filter records after loading
  • calculate step – Add or modify columns after loading
  • sort step – Arrange records in a specified order
  • keep step – Specify which columns to keep
  • drop step – Specify which columns to remove