Analyzing Experiment Results
This tutorial demonstrates how to use HydraFlow's powerful analysis capabilities to work with your experiment results.
Prerequisites
Before you begin this tutorial, you should:
- Understand the basic structure of a HydraFlow application (from the Basic Application tutorial)
- Be familiar with the concept of job definitions (from the Automated Workflows tutorial)
Project Setup
We'll start by running several experiments that we can analyze, executing the three jobs defined in the Automated Workflows tutorial:
$ hydraflow run job_sequential
$ hydraflow run job_parallel
$ hydraflow run job_submit
[2025-11-30 07:23:42,972][HYDRA] Launching 3 jobs locally
[2025-11-30 07:23:42,972][HYDRA] #0 : width=100 height=100
2025/11/30 07:23:43 INFO mlflow.store.db.utils: Creating initial MLflow database tables...
2025/11/30 07:23:43 INFO mlflow.store.db.utils: Updating database tables
2025/11/30 07:23:43 INFO mlflow.tracking.fluent: Experiment with name 'job_sequential' does not exist. Creating a new experiment.
[2025-11-30 07:23:43,741][HYDRA] #1 : width=100 height=200
[2025-11-30 07:23:43,803][__main__][INFO] - 5f6cb24ed7814629a876e23281cf5d32
[2025-11-30 07:23:43,803][__main__][INFO] - {'width': 100, 'height': 200}
[2025-11-30 07:23:43,807][HYDRA] #2 : width=100 height=300
[2025-11-30 07:23:43,871][__main__][INFO] - 8d85ad7b8aa34dfb8cad4d40c08f7d6e
[2025-11-30 07:23:43,871][__main__][INFO] - {'width': 100, 'height': 300}
[2025-11-30 07:23:45,348][HYDRA] Launching 3 jobs locally
[2025-11-30 07:23:45,348][HYDRA] #0 : width=300 height=100
2025/11/30 07:23:45 INFO mlflow.store.db.utils: Creating initial MLflow database tables...
2025/11/30 07:23:45 INFO mlflow.store.db.utils: Updating database tables
[2025-11-30 07:23:45,803][HYDRA] #1 : width=300 height=200
[2025-11-30 07:23:45,869][__main__][INFO] - 66903036bb144d7c89cb8a69099596a7
[2025-11-30 07:23:45,869][__main__][INFO] - {'width': 300, 'height': 200}
[2025-11-30 07:23:45,873][HYDRA] #2 : width=300 height=300
[2025-11-30 07:23:45,936][__main__][INFO] - d7a8c0b5c4c046cebc5e69ba527bfa8c
[2025-11-30 07:23:45,936][__main__][INFO] - {'width': 300, 'height': 300}
0:00:04 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0:00:00 2/2 100%
[2025-11-30 07:23:48,539][HYDRA] Joblib.Parallel(n_jobs=3,backend=loky,prefer=processes,require=None,verbose=0,timeout=None,pre_dispatch=2*n_jobs,batch_size=auto,temp_folder=None,max_nbytes=None,mmap_mode=r) is launching 3 jobs
[2025-11-30 07:23:48,539][HYDRA] Launching jobs, sweep output dir : multirun/01KB9TAXB5NJ028QP8HCG5RCM8
[2025-11-30 07:23:48,539][HYDRA] #0 : width=200 height=100
[2025-11-30 07:23:48,539][HYDRA] #1 : width=200 height=200
[2025-11-30 07:23:48,539][HYDRA] #2 : width=200 height=300
2025/11/30 07:23:50 INFO mlflow.tracking.fluent: Experiment with name 'job_parallel' does not exist. Creating a new experiment.
[2025-11-30 07:23:52,987][HYDRA] Joblib.Parallel(n_jobs=3,backend=loky,prefer=processes,require=None,verbose=0,timeout=None,pre_dispatch=2*n_jobs,batch_size=auto,temp_folder=None,max_nbytes=None,mmap_mode=r) is launching 3 jobs
[2025-11-30 07:23:52,987][HYDRA] Launching jobs, sweep output dir : multirun/01KB9TAXB5NJ028QP8HCG5RCM9
[2025-11-30 07:23:52,988][HYDRA] #0 : width=400 height=100
[2025-11-30 07:23:52,988][HYDRA] #1 : width=400 height=200
[2025-11-30 07:23:52,988][HYDRA] #2 : width=400 height=300
0:00:08 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0:00:00 2/2 100%
[2025-11-30 07:23:58,481][HYDRA] Launching 2 jobs locally
[2025-11-30 07:23:58,482][HYDRA] #0 : width=250 height=150
2025/11/30 07:23:58 INFO mlflow.tracking.fluent: Experiment with name 'job_submit' does not exist. Creating a new experiment.
[2025-11-30 07:23:58,949][HYDRA] #1 : width=250 height=250
[2025-11-30 07:23:59,011][__main__][INFO] - b20dddee92bd4c65b937b7694d878200
[2025-11-30 07:23:59,011][__main__][INFO] - {'width': 250, 'height': 250}
[2025-11-30 07:24:00,437][HYDRA] Launching 2 jobs locally
[2025-11-30 07:24:00,437][HYDRA] #0 : width=350 height=150
[2025-11-30 07:24:00,895][HYDRA] #1 : width=350 height=250
[2025-11-30 07:24:00,957][__main__][INFO] - 387c41f166a04554b6ffab581a79d59b
[2025-11-30 07:24:00,957][__main__][INFO] - {'width': 350, 'height': 250}
['/home/runner/work/hydraflow/hydraflow/.venv/bin/python', 'example.py', '--multirun', 'width=250', 'height=150,250', 'hydra.job.name=job_submit', 'hydra.sweep.dir=multirun/01KB9TB723M9F2RGCWNT0XS9VK']
['/home/runner/work/hydraflow/hydraflow/.venv/bin/python', 'example.py', '--multirun', 'width=350', 'height=150,250', 'hydra.job.name=job_submit', 'hydra.sweep.dir=multirun/01KB9TB723M9F2RGCWNT0XS9VM']
After running these commands, our project structure looks like this:
./
├── mlruns/
│ ├── 1/
│ │ ├── 5f6cb24ed7814629a876e23281cf5d32/
│ │ ├── 66903036bb144d7c89cb8a69099596a7/
│ │ ├── 8d85ad7b8aa34dfb8cad4d40c08f7d6e/
│ │ ├── c8be9f44915b49eca215940159f2e323/
│ │ ├── cd90c6ac8e7f430db8f9a5432ba9cc31/
│ │ └── d7a8c0b5c4c046cebc5e69ba527bfa8c/
│ ├── 2/
│ │ ├── 05fdf63c0f3b4915b75c8066320d7fee/
│ │ ├── 274363d4d8cb4529a3e652370d30bf1e/
│ │ ├── 814ace97fc0c4a889ee625583474cc85/
│ │ ├── 827be45c07e740f59fd5e054f96fe333/
│ │ ├── adb00f0e513c432bbeeca0a0677dbea4/
│ │ └── ae65848ddd7245dfbe66fe0266301dcc/
│ └── 3/
│ ├── 387c41f166a04554b6ffab581a79d59b/
│ ├── 613f19102a5048b8bcbf2b07424341fc/
│ ├── b20dddee92bd4c65b937b7694d878200/
│ └── cd73fad5d3b846ce9e6be23b5716db64/
├── example.py
├── hydraflow.yaml
├── mlflow.db
└── submit.py
The mlruns directory contains all our experiment data.
Let's explore how to access and analyze this data using HydraFlow's API.
Discovering Runs
Finding Run Directories
HydraFlow provides the iter_run_dirs function to discover runs in your MLflow tracking directory:
>>> import mlflow
>>> from hydraflow import iter_run_dirs
>>> mlflow.set_tracking_uri("sqlite:///mlflow.db")
>>> run_dirs = list(iter_run_dirs())
>>> print(len(run_dirs))
>>> for run_dir in run_dirs[:4]:
... print(run_dir)
16
/home/runner/work/hydraflow/hydraflow/examples/mlruns/3/387c41f166a04554b6ffab581a79d59b
/home/runner/work/hydraflow/hydraflow/examples/mlruns/3/613f19102a5048b8bcbf2b07424341fc
/home/runner/work/hydraflow/hydraflow/examples/mlruns/3/b20dddee92bd4c65b937b7694d878200
/home/runner/work/hydraflow/hydraflow/examples/mlruns/3/cd73fad5d3b846ce9e6be23b5716db64
This function finds all run directories in your MLflow tracking directory, making it easy to collect runs for analysis.
Filtering by Experiment Name
You can filter runs by experiment name to focus on specific experiments:
>>> print(len(list(iter_run_dirs("job_sequential"))))
>>> names = ["job_sequential", "job_parallel"]
>>> print(len(list(iter_run_dirs(names))))
>>> print(len(list(iter_run_dirs("job_*"))))
6
12
16
As shown above, you can:
- Filter by a single experiment name
- Provide a list of experiment names
- Use pattern matching with wildcards
Working with Individual Runs
Loading a Run
The Run class represents a single experiment run in HydraFlow:
>>> from hydraflow import Run
>>> run_dirs = iter_run_dirs()
>>> run_dir = next(run_dirs) # run_dirs is an iterator
>>> run = Run(run_dir)
>>> print(run)
>>> print(type(run))
Run('387c41f166a04554b6ffab581a79d59b')
<class 'hydraflow.core.run.Run'>
You can also use the load class method, which accepts both string paths and Path objects:
>>> run = Run.load(str(run_dir))
>>> print(run)
Run('387c41f166a04554b6ffab581a79d59b')
Accessing Run Information
Each Run instance provides access to run information and configuration:
>>> print(run.info.run_dir)
>>> print(run.info.run_id)
>>> print(run.info.job_name) # Hydra job name = MLflow experiment name
/home/runner/work/hydraflow/hydraflow/examples/mlruns/3/387c41f166a04554b6ffab581a79d59b
387c41f166a04554b6ffab581a79d59b
job_submit
The configuration is available through the cfg attribute:
>>> print(run.cfg)
{'width': 350, 'height': 250}
Type-Safe Configuration Access
For better IDE integration and type checking, you can specify the configuration type:
from dataclasses import dataclass

@dataclass
class Config:
    width: int = 1024
    height: int = 768
>>> run = Run[Config](run_dir)
>>> print(run)
Run('387c41f166a04554b6ffab581a79d59b')
When you use Run[Config], your IDE will recognize run.cfg as
having the specified type, enabling autocompletion and type checking.
Accessing Configuration Values
The get method provides a unified interface to access values from a run:
>>> print(run.get("width"))
>>> print(run.get("height"))
350
250
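With a typed run, you can also read values directly off cfg; get is convenient when the key is a string or the run is untyped. A quick sketch of the equivalent attribute access (assuming cfg behaves like the dataclass above):
>>> print(run.cfg.width, run.cfg.height)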
Adding Custom Implementations
Basic Implementation
You can extend runs with custom implementation classes to add domain-specific functionality:
from pathlib import Path

class Impl:
    root_dir: Path

    def __init__(self, root_dir: Path):
        self.root_dir = root_dir

    def __repr__(self) -> str:
        return f"Impl({self.root_dir.stem!r})"
>>> run = Run[Config, Impl](run_dir, Impl)
>>> print(run)
Run[Impl]('387c41f166a04554b6ffab581a79d59b')
The implementation is lazily initialized when you first access the impl attribute:
>>> print(run.impl)
>>> print(run.impl.root_dir)
Impl('artifacts')
/home/runner/work/hydraflow/hydraflow/examples/mlruns/3/387c41f166a04554b6ffab581a79d59b/artifacts
Configuration-Aware Implementation
Implementations can also access the run's configuration:
from dataclasses import dataclass, field

@dataclass
class Size:
    root_dir: Path = field(repr=False)
    cfg: Config

    @property
    def size(self) -> int:
        return self.cfg.width * self.cfg.height

    def is_large(self) -> bool:
        return self.size > 100000
>>> run = Run[Config, Size].load(run_dir, Size)
>>> print(run)
>>> print(run.impl)
>>> print(run.impl.size)
Run[Size]('387c41f166a04554b6ffab581a79d59b')
Size(cfg={'width': 350, 'height': 250})
87500
This allows you to define custom analysis methods that use both the run's artifacts and its configuration.
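For example, an implementation could validate an artifact written during a run against that run's configuration. The sketch below is hypothetical: it assumes each run logged a size.txt artifact containing a single integer, which the tutorial's example application does not actually produce.
from dataclasses import dataclass, field
from pathlib import Path

@dataclass
class SizeCheck:
    root_dir: Path = field(repr=False)
    cfg: Config

    def logged_size(self) -> int:
        # Read the value logged as an artifact (hypothetical size.txt file).
        return int((self.root_dir / "size.txt").read_text())

    def matches_config(self) -> bool:
        # Compare the logged value against the configured dimensions.
        return self.logged_size() == self.cfg.width * self.cfg.height
You would attach it exactly like Size above: Run[Config, SizeCheck].load(run_dir, SizeCheck).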
Working with Multiple Runs
Creating a Run Collection
The RunCollection class helps you analyze multiple runs:
>>> run_dirs = iter_run_dirs()
>>> rc = Run[Config, Size].load(run_dirs, Size)
>>> print(rc)
RunCollection(Run[Size], n=16)
The load method automatically creates a RunCollection when
given multiple run directories.
Basic Run Collection Operations
You can perform basic operations on a collection:
>>> print(rc.first())
>>> print(rc.last())
Run[Size]('387c41f166a04554b6ffab581a79d59b')
Run[Size]('66903036bb144d7c89cb8a69099596a7')
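A collection can also be treated like a sequence: len(rc) returns the number of runs, and iterating yields individual Run objects (iteration is an assumption about the API, shown here as a sketch):
>>> print(len(rc))
>>> for run in rc:
...     print(run.get("width"), run.get("height"))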
Filtering Runs
The filter method lets you select runs based on various criteria:
>>> print(rc.filter(width=400))
RunCollection(Run[Size], n=3)
You can use lists to filter by multiple values (OR logic):
>>> print(rc.filter(height=[100, 300]))
RunCollection(Run[Size], n=8)
Tuples create range filters (inclusive):
>>> print(rc.filter(height=(100, 300)))
RunCollection(Run[Size], n=16)
You can even use custom filter functions:
>>> print(rc.filter(lambda r: r.impl.is_large()))
RunCollection(Run[Size], n=1)
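Multiple criteria can also be combined in a single call; assuming they are applied together (AND semantics), this keeps only runs matching both conditions:
>>> print(rc.filter(width=300, height=[100, 300]))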
Finding Specific Runs
The get method returns a single run matching your criteria:
>>> run = rc.get(width=250, height=(100, 200))
>>> print(run)
>>> print(run.impl)
Run[Size]('cd73fad5d3b846ce9e6be23b5716db64')
Size(cfg={'width': 250, 'height': 150})
Converting to DataFrames
For data analysis, you can convert runs to a Polars DataFrame:
>>> print(rc.to_frame("width", "height", "size"))
shape: (16, 3)
┌───────┬────────┬───────┐
│ width ┆ height ┆ size │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 │
╞═══════╪════════╪═══════╡
│ 350 ┆ 250 ┆ 87500 │
│ 350 ┆ 150 ┆ 52500 │
│ 250 ┆ 250 ┆ 62500 │
│ 250 ┆ 150 ┆ 37500 │
│ 400 ┆ 100 ┆ 40000 │
│ … ┆ … ┆ … │
│ 300 ┆ 300 ┆ 90000 │
│ 300 ┆ 100 ┆ 30000 │
│ 100 ┆ 200 ┆ 20000 │
│ 100 ┆ 100 ┆ 10000 │
│ 300 ┆ 200 ┆ 60000 │
└───────┴────────┴───────┘
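Because the result is an ordinary Polars DataFrame, the rest of the analysis can use Polars directly. For example, computing the mean size per width (standard Polars API, nothing HydraFlow-specific):
>>> import polars as pl
>>> df = rc.to_frame("width", "height", "size")
>>> print(df.group_by("width").agg(pl.col("size").mean().alias("mean_size")).sort("width"))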
You can add custom columns using callables:
>>> print(rc.to_frame("width", "height", is_large=lambda r: r.impl.is_large()))
shape: (16, 3)
┌───────┬────────┬──────────┐
│ width ┆ height ┆ is_large │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ bool │
╞═══════╪════════╪══════════╡
│ 350 ┆ 250 ┆ false │
│ 350 ┆ 150 ┆ false │
│ 250 ┆ 250 ┆ false │
│ 250 ┆ 150 ┆ false │
│ 400 ┆ 100 ┆ false │
│ … ┆ … ┆ … │
│ 300 ┆ 300 ┆ false │
│ 300 ┆ 100 ┆ false │
│ 100 ┆ 200 ┆ false │
│ 100 ┆ 100 ┆ false │
│ 300 ┆ 200 ┆ false │
└───────┴────────┴──────────┘
Functions can return lists for multiple values:
>>> def to_list(run: Run) -> list[int]:
... return [2 * run.get("width"), 3 * run.get("height")]
>>> print(rc.to_frame("width", from_list=to_list))
shape: (16, 2)
┌───────┬────────────┐
│ width ┆ from_list │
│ --- ┆ --- │
│ i64 ┆ list[i64] │
╞═══════╪════════════╡
│ 350 ┆ [700, 750] │
│ 350 ┆ [700, 450] │
│ 250 ┆ [500, 750] │
│ 250 ┆ [500, 450] │
│ 400 ┆ [800, 300] │
│ … ┆ … │
│ 300 ┆ [600, 900] │
│ 300 ┆ [600, 300] │
│ 100 ┆ [200, 600] │
│ 100 ┆ [200, 300] │
│ 300 ┆ [600, 600] │
└───────┴────────────┘
Or dictionaries for multiple named columns:
>>> def to_dict(run: Run) -> dict[str, int | str]:
... width2 = 2 * run.get("width")
... name = f"h{run.get('height')}"
... return {"width2": width2, "name": name}
>>> print(rc.to_frame("width", from_dict=to_dict))
shape: (16, 2)
┌───────┬──────────────┐
│ width ┆ from_dict │
│ --- ┆ --- │
│ i64 ┆ struct[2] │
╞═══════╪══════════════╡
│ 350 ┆ {700,"h250"} │
│ 350 ┆ {700,"h150"} │
│ 250 ┆ {500,"h250"} │
│ 250 ┆ {500,"h150"} │
│ 400 ┆ {800,"h100"} │
│ … ┆ … │
│ 300 ┆ {600,"h300"} │
│ 300 ┆ {600,"h100"} │
│ 100 ┆ {200,"h200"} │
│ 100 ┆ {200,"h100"} │
│ 300 ┆ {600,"h200"} │
└───────┴──────────────┘
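A struct column like from_dict can be expanded into separate top-level columns with Polars' unnest (again standard Polars, not part of HydraFlow):
>>> df = rc.to_frame("width", from_dict=to_dict)
>>> print(df.unnest("from_dict"))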
Grouping Runs
The group_by method organizes runs by common attributes:
>>> grouped = rc.group_by("width")
>>> for key, group in grouped.items():
... print(key, group)
350 RunCollection(Run[Size], n=2)
250 RunCollection(Run[Size], n=2)
400 RunCollection(Run[Size], n=3)
200 RunCollection(Run[Size], n=3)
100 RunCollection(Run[Size], n=3)
300 RunCollection(Run[Size], n=3)
You can group by multiple keys:
>>> grouped = rc.group_by("width", "height")
>>> for key, group in grouped.items():
... print(key, group)
(350, 250) RunCollection(Run[Size], n=1)
(350, 150) RunCollection(Run[Size], n=1)
(250, 250) RunCollection(Run[Size], n=1)
(250, 150) RunCollection(Run[Size], n=1)
(400, 100) RunCollection(Run[Size], n=1)
(400, 300) RunCollection(Run[Size], n=1)
(200, 200) RunCollection(Run[Size], n=1)
(200, 300) RunCollection(Run[Size], n=1)
(400, 200) RunCollection(Run[Size], n=1)
(200, 100) RunCollection(Run[Size], n=1)
(100, 300) RunCollection(Run[Size], n=1)
(300, 300) RunCollection(Run[Size], n=1)
(300, 100) RunCollection(Run[Size], n=1)
(100, 200) RunCollection(Run[Size], n=1)
(100, 100) RunCollection(Run[Size], n=1)
(300, 200) RunCollection(Run[Size], n=1)
Adding aggregation functions using the agg method transforms the result into a DataFrame:
>>> grouped = rc.group_by("width")
>>> df = grouped.agg(n=lambda runs: len(runs))
>>> print(df)
shape: (6, 2)
┌───────┬─────┐
│ width ┆ n │
│ --- ┆ --- │
│ i64 ┆ i64 │
╞═══════╪═════╡
│ 350 ┆ 2 │
│ 250 ┆ 2 │
│ 400 ┆ 3 │
│ 200 ┆ 3 │
│ 100 ┆ 3 │
│ 300 ┆ 3 │
└───────┴─────┘
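Aggregations are not limited to counting; any function of a group's runs works. A sketch computing the mean size per width, which iterates each group (sequence-style access to RunCollection is assumed, as above):
>>> df = rc.group_by("width").agg(
...     mean_size=lambda runs: sum(r.impl.size for r in runs) / len(runs)
... )
>>> print(df)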
Summary
In this tutorial, you've learned how to:
- Discover experiment runs in your MLflow tracking directory
- Load and access information from individual runs
- Add custom implementation classes for domain-specific analysis
- Filter, group, and analyze collections of runs
- Convert run data to DataFrames for advanced analysis
These capabilities enable you to efficiently analyze your experiments and extract valuable insights from your machine learning workflows.
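As a compact recap, the pieces above compose into a single pipeline, using only the API shown in this tutorial (with Config and Size as defined earlier):
>>> from hydraflow import Run, iter_run_dirs
>>> rc = Run[Config, Size].load(iter_run_dirs("job_*"), Size)
>>> large = rc.filter(lambda r: r.impl.size > 50000)
>>> print(large.to_frame("width", "height", "size"))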
Next Steps
Now that you understand HydraFlow's analysis capabilities, you can:
- Dive deeper into the Run Class and Run Collection documentation
- Explore advanced analysis techniques in the Analyzing Results section
- Apply these analysis techniques to your own machine learning experiments