Analyzing Experiment Results
This tutorial demonstrates how to use HydraFlow's powerful analysis capabilities to work with your experiment results.
Prerequisites
Before you begin this tutorial, you should:
- Understand the basic structure of a HydraFlow application (from the Basic Application tutorial)
- Be familiar with the concept of job definitions (from the Automated Workflows tutorial)
Project Setup
We'll start by running several experiments that we can analyze. We'll execute the three jobs defined in the Automated Workflows tutorial:
$ hydraflow run job_sequential
$ hydraflow run job_parallel
$ hydraflow run job_submit
2025/05/10 02:24:01 INFO mlflow.tracking.fluent: Experiment with name 'job_sequential' does not exist. Creating a new experiment.
[2025-05-10 02:24:03,487][HYDRA] Launching 3 jobs locally
[2025-05-10 02:24:03,487][HYDRA] #0 : width=100 height=100
[2025-05-10 02:24:03,608][__main__][INFO] - 9e24ff9ddcb74841901a8823d28574ca
[2025-05-10 02:24:03,608][__main__][INFO] - {'width': 100, 'height': 100}
[2025-05-10 02:24:03,611][HYDRA] #1 : width=100 height=200
[2025-05-10 02:24:03,690][__main__][INFO] - b18a4a901ebb494191685dff0a6588f8
[2025-05-10 02:24:03,690][__main__][INFO] - {'width': 100, 'height': 200}
[2025-05-10 02:24:03,692][HYDRA] #2 : width=100 height=300
[2025-05-10 02:24:03,772][__main__][INFO] - bb207cff6d244475afd3ef52a28cde72
[2025-05-10 02:24:03,772][__main__][INFO] - {'width': 100, 'height': 300}
[2025-05-10 02:24:06,234][HYDRA] Launching 3 jobs locally
[2025-05-10 02:24:06,234][HYDRA] #0 : width=300 height=100
[2025-05-10 02:24:06,357][__main__][INFO] - 6ae746d45a91481fabdcafd5ef2a869e
[2025-05-10 02:24:06,357][__main__][INFO] - {'width': 300, 'height': 100}
[2025-05-10 02:24:06,359][HYDRA] #1 : width=300 height=200
[2025-05-10 02:24:06,441][__main__][INFO] - 7412673803df46a187bd2efd7c957bd8
[2025-05-10 02:24:06,441][__main__][INFO] - {'width': 300, 'height': 200}
[2025-05-10 02:24:06,443][HYDRA] #2 : width=300 height=300
[2025-05-10 02:24:06,525][__main__][INFO] - 1a3d056670bf425ab905ca7783adfcc6
[2025-05-10 02:24:06,525][__main__][INFO] - {'width': 300, 'height': 300}
0:00:05 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0:00:00 2/2 100%
2025/05/10 02:24:09 INFO mlflow.tracking.fluent: Experiment with name 'job_parallel' does not exist. Creating a new experiment.
[2025-05-10 02:24:11,602][HYDRA] Joblib.Parallel(n_jobs=3,backend=loky,prefer=processes,require=None,verbose=0,timeout=None,pre_dispatch=2*n_jobs,batch_size=auto,temp_folder=None,max_nbytes=None,mmap_mode=r) is launching 3 jobs
[2025-05-10 02:24:11,602][HYDRA] Launching jobs, sweep output dir : multirun/01JTW03MM9CF87GH5CYXP1MDRF
[2025-05-10 02:24:11,602][HYDRA] #0 : width=200 height=100
[2025-05-10 02:24:11,603][HYDRA] #1 : width=200 height=200
[2025-05-10 02:24:11,603][HYDRA] #2 : width=200 height=300
[2025-05-10 02:24:13,756][__main__][INFO] - 39867bd9bd204dd7a92e273ddad44da6
[2025-05-10 02:24:13,756][__main__][INFO] - {'width': 200, 'height': 200}
[2025-05-10 02:24:14,455][__main__][INFO] - 60cfe72363b34a53be75a7502acfc51b
[2025-05-10 02:24:14,455][__main__][INFO] - {'width': 200, 'height': 100}
[2025-05-10 02:24:14,488][__main__][INFO] - 828b97378b2f467bb2936ee1a3f1bb37
[2025-05-10 02:24:14,488][__main__][INFO] - {'width': 200, 'height': 300}
[2025-05-10 02:24:17,483][HYDRA] Joblib.Parallel(n_jobs=3,backend=loky,prefer=processes,require=None,verbose=0,timeout=None,pre_dispatch=2*n_jobs,batch_size=auto,temp_folder=None,max_nbytes=None,mmap_mode=r) is launching 3 jobs
[2025-05-10 02:24:17,483][HYDRA] Launching jobs, sweep output dir : multirun/01JTW03MM9QVVXTNKVTRXVFCNQ
[2025-05-10 02:24:17,483][HYDRA] #0 : width=400 height=100
[2025-05-10 02:24:17,483][HYDRA] #1 : width=400 height=200
[2025-05-10 02:24:17,483][HYDRA] #2 : width=400 height=300
[2025-05-10 02:24:19,656][__main__][INFO] - 39be62f8a5a64684b6fac5d147e7b692
[2025-05-10 02:24:19,656][__main__][INFO] - {'width': 400, 'height': 200}
[2025-05-10 02:24:20,335][__main__][INFO] - 5cbf385ed12f4968a988fb9494a8c625
[2025-05-10 02:24:20,335][__main__][INFO] - {'width': 400, 'height': 100}
[2025-05-10 02:24:20,352][__main__][INFO] - 969657dc02ff45e1929d8ca4460c81ce
[2025-05-10 02:24:20,352][__main__][INFO] - {'width': 400, 'height': 300}
0:00:11 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0:00:00 2/2 100%
2025/05/10 02:24:23 INFO mlflow.tracking.fluent: Experiment with name 'job_submit' does not exist. Creating a new experiment.
[2025-05-10 02:24:25,656][HYDRA] Launching 2 jobs locally
[2025-05-10 02:24:25,656][HYDRA] #0 : width=250 height=150
[2025-05-10 02:24:25,779][__main__][INFO] - 43ad441f66df4ecf9921831ecc7bdcd3
[2025-05-10 02:24:25,779][__main__][INFO] - {'width': 250, 'height': 150}
[2025-05-10 02:24:25,782][HYDRA] #1 : width=250 height=250
[2025-05-10 02:24:25,862][__main__][INFO] - 9a63dce994c3481791f3a345eaf17279
[2025-05-10 02:24:25,862][__main__][INFO] - {'width': 250, 'height': 250}
[2025-05-10 02:24:28,268][HYDRA] Launching 2 jobs locally
[2025-05-10 02:24:28,268][HYDRA] #0 : width=350 height=150
[2025-05-10 02:24:28,392][__main__][INFO] - 691df185081b4575bb0ae6022e1c1b08
[2025-05-10 02:24:28,392][__main__][INFO] - {'width': 350, 'height': 150}
[2025-05-10 02:24:28,395][HYDRA] #1 : width=350 height=250
[2025-05-10 02:24:28,477][__main__][INFO] - e0a5e23efe204991aba9b32e07189604
[2025-05-10 02:24:28,477][__main__][INFO] - {'width': 350, 'height': 250}
['/home/runner/work/hydraflow/hydraflow/.venv/bin/python', 'example.py', '--multirun', 'width=250', 'height=150,250', 'hydra.job.name=job_submit', 'hydra.sweep.dir=multirun/01JTW042D5STW6SYG1KV7CRGH1']
['/home/runner/work/hydraflow/hydraflow/.venv/bin/python', 'example.py', '--multirun', 'width=350', 'height=150,250', 'hydra.job.name=job_submit', 'hydra.sweep.dir=multirun/01JTW042D5MTJS3X4HEF89E9B5']
After running these commands, our project structure looks like this:
.
├── mlruns
│ ├── 0
│ │ └── meta.yaml
│ ├── 458120621626186504
│ │ ├── 1a3d056670bf425ab905ca7783adfcc6
│ │ ├── 6ae746d45a91481fabdcafd5ef2a869e
│ │ ├── 7412673803df46a187bd2efd7c957bd8
│ │ ├── 9e24ff9ddcb74841901a8823d28574ca
│ │ ├── b18a4a901ebb494191685dff0a6588f8
│ │ ├── bb207cff6d244475afd3ef52a28cde72
│ │ └── meta.yaml
│ ├── 597170178905469546
│ │ ├── 43ad441f66df4ecf9921831ecc7bdcd3
│ │ ├── 691df185081b4575bb0ae6022e1c1b08
│ │ ├── 9a63dce994c3481791f3a345eaf17279
│ │ ├── e0a5e23efe204991aba9b32e07189604
│ │ └── meta.yaml
│ └── 982687451377901880
│ ├── 39867bd9bd204dd7a92e273ddad44da6
│ ├── 39be62f8a5a64684b6fac5d147e7b692
│ ├── 5cbf385ed12f4968a988fb9494a8c625
│ ├── 60cfe72363b34a53be75a7502acfc51b
│ ├── 828b97378b2f467bb2936ee1a3f1bb37
│ ├── 969657dc02ff45e1929d8ca4460c81ce
│ └── meta.yaml
├── example.py
├── hydraflow.yaml
└── submit.py
The mlruns directory contains all our experiment data. Let's explore how to access and analyze this data using HydraFlow's API.
Discovering Runs
Finding Run Directories
HydraFlow provides the iter_run_dirs function to discover runs in your MLflow tracking directory:
>>> from hydraflow import iter_run_dirs
>>> run_dirs = list(iter_run_dirs("mlruns"))
>>> print(len(run_dirs))
>>> for run_dir in run_dirs[:4]:
... print(run_dir)
16
mlruns/597170178905469546/9a63dce994c3481791f3a345eaf17279
mlruns/597170178905469546/e0a5e23efe204991aba9b32e07189604
mlruns/597170178905469546/43ad441f66df4ecf9921831ecc7bdcd3
mlruns/597170178905469546/691df185081b4575bb0ae6022e1c1b08
This function finds all run directories in your MLflow tracking directory, making it easy to collect runs for analysis.
Filtering by Experiment Name
You can filter runs by experiment name to focus on specific experiments:
>>> print(len(list(iter_run_dirs("mlruns", "job_sequential"))))
>>> names = ["job_sequential", "job_parallel"]
>>> print(len(list(iter_run_dirs("mlruns", names))))
>>> print(len(list(iter_run_dirs("mlruns", "job_*"))))
6
12
16
As shown above, you can:
- Filter by a single experiment name
- Provide a list of experiment names
- Use pattern matching with wildcards
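The name argument behaves like shell-style globbing. A minimal standalone sketch of that matching logic (the matches helper below is hypothetical, not part of HydraFlow's API):

```python
from fnmatch import fnmatch

def matches(name: str, names) -> bool:
    """Return True if an experiment name matches a single name, a list, or a glob pattern."""
    if isinstance(names, str):
        names = [names]
    # fnmatch supports shell-style wildcards such as "job_*".
    return any(fnmatch(name, pattern) for pattern in names)
```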
Working with Individual Runs
Loading a Run
The Run class represents a single experiment run in HydraFlow:
>>> from hydraflow import Run
>>> run_dirs = iter_run_dirs("mlruns")
>>> run_dir = next(run_dirs) # run_dirs is an iterator
>>> run = Run(run_dir)
>>> print(run)
>>> print(type(run))
Run('9a63dce994c3481791f3a345eaf17279')
<class 'hydraflow.core.run.Run'>
You can also use the load class method, which accepts both string paths and Path objects:
>>> run = Run.load(str(run_dir))
>>> print(run)
Run('9a63dce994c3481791f3a345eaf17279')
Accessing Run Information
Each Run instance provides access to run information and configuration:
>>> print(run.info.run_dir)
>>> print(run.info.run_id)
>>> print(run.info.job_name) # Hydra job name = MLflow experiment name
mlruns/597170178905469546/9a63dce994c3481791f3a345eaf17279
9a63dce994c3481791f3a345eaf17279
job_submit
The configuration is available through the cfg attribute:
>>> print(run.cfg)
{'width': 250, 'height': 250}
Type-Safe Configuration Access
For better IDE integration and type checking, you can specify the configuration type:
from dataclasses import dataclass

@dataclass
class Config:
    width: int = 1024
    height: int = 768
>>> run = Run[Config](run_dir)
>>> print(run)
Run('9a63dce994c3481791f3a345eaf17279')
When you use Run[Config], your IDE will recognize run.cfg as having the specified type, enabling autocompletion and type checking.
Accessing Configuration Values
The get method provides a unified interface to access values from a run:
>>> print(run.get("width"))
>>> print(run.get("height"))
250
250
Adding Custom Implementations
Basic Implementation
You can extend runs with custom implementation classes to add domain-specific functionality:
from pathlib import Path

class Impl:
    root_dir: Path

    def __init__(self, root_dir: Path):
        self.root_dir = root_dir

    def __repr__(self) -> str:
        return f"Impl({self.root_dir.stem!r})"
>>> run = Run[Config, Impl](run_dir, Impl)
>>> print(run)
Run[Impl]('9a63dce994c3481791f3a345eaf17279')
The implementation is lazily initialized when you first access the impl attribute:
>>> print(run.impl)
>>> print(run.impl.root_dir)
Impl('artifacts')
mlruns/597170178905469546/9a63dce994c3481791f3a345eaf17279/artifacts
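Lazy initialization is a general Python pattern you can reproduce with functools.cached_property. This sketch illustrates the idea only; LazyRun is a hypothetical class, not HydraFlow's internals:

```python
from functools import cached_property
from pathlib import Path

class LazyRun:
    def __init__(self, run_dir: Path):
        self.run_dir = run_dir

    @cached_property
    def impl(self) -> Path:
        # The body runs only on first access; the result is then cached
        # on the instance, so repeated access returns the same object.
        return self.run_dir / "artifacts"
```

Deferring construction this way keeps loading large run collections cheap: the implementation object is built only for runs you actually inspect.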
Configuration-Aware Implementation
Implementations can also access the run's configuration:
from dataclasses import dataclass, field

@dataclass
class Size:
    root_dir: Path = field(repr=False)
    cfg: Config

    @property
    def size(self) -> int:
        return self.cfg.width * self.cfg.height

    def is_large(self) -> bool:
        return self.size > 100000
>>> run = Run[Config, Size].load(run_dir, Size)
>>> print(run)
>>> print(run.impl)
>>> print(run.impl.size)
Run[Size]('9a63dce994c3481791f3a345eaf17279')
Size(cfg={'width': 250, 'height': 250})
62500
This allows you to define custom analysis methods that use both the run's artifacts and its configuration.
Working with Multiple Runs
Creating a Run Collection
The RunCollection class helps you analyze multiple runs:
>>> run_dirs = iter_run_dirs("mlruns")
>>> rc = Run[Config, Size].load(run_dirs, Size)
>>> print(rc)
RunCollection(Run[Size], n=16)
The load method automatically creates a RunCollection when given multiple run directories.
Basic Run Collection Operations
You can perform basic operations on a collection:
>>> print(rc.first())
>>> print(rc.last())
Run[Size]('9a63dce994c3481791f3a345eaf17279')
Run[Size]('bb207cff6d244475afd3ef52a28cde72')
Filtering Runs
The filter method lets you select runs based on various criteria:
>>> print(rc.filter(width=400))
RunCollection(Run[Size], n=3)
You can use lists to filter by multiple values (OR logic):
>>> print(rc.filter(height=[100, 300]))
RunCollection(Run[Size], n=8)
Tuples create range filters (inclusive):
>>> print(rc.filter(height=(100, 300)))
RunCollection(Run[Size], n=16)
You can even use custom filter functions:
>>> print(rc.filter(lambda r: r.impl.is_large()))
RunCollection(Run[Size], n=1)
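The three value-based criterion forms (scalar equality, list membership, inclusive tuple range) can be summarized in a small dispatch function. This is a simplified sketch of the semantics described above, not the library's code; matches_criterion is a hypothetical helper:

```python
def matches_criterion(value, criterion) -> bool:
    """Scalar -> equality; list -> membership (OR); 2-tuple -> inclusive range."""
    if isinstance(criterion, tuple) and len(criterion) == 2:
        low, high = criterion
        return low <= value <= high
    if isinstance(criterion, list):
        return value in criterion
    return value == criterion
```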
Finding Specific Runs
The get method returns a single run matching your criteria:
>>> run = rc.get(width=250, height=(100, 200))
>>> print(run)
>>> print(run.impl)
Run[Size]('43ad441f66df4ecf9921831ecc7bdcd3')
Size(cfg={'width': 250, 'height': 150})
Converting to DataFrames
For data analysis, you can convert runs to a Polars DataFrame:
>>> print(rc.to_frame("width", "height", "size"))
shape: (16, 3)
┌───────┬────────┬───────┐
│ width ┆ height ┆ size │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 │
╞═══════╪════════╪═══════╡
│ 250 ┆ 250 ┆ 62500 │
│ 350 ┆ 250 ┆ 87500 │
│ 250 ┆ 150 ┆ 37500 │
│ 350 ┆ 150 ┆ 52500 │
│ 200 ┆ 300 ┆ 60000 │
│ … ┆ … ┆ … │
│ 300 ┆ 100 ┆ 30000 │
│ 100 ┆ 200 ┆ 20000 │
│ 100 ┆ 100 ┆ 10000 │
│ 300 ┆ 300 ┆ 90000 │
│ 100 ┆ 300 ┆ 30000 │
└───────┴────────┴───────┘
You can add custom columns using callables:
>>> print(rc.to_frame("width", "height", is_large=lambda r: r.impl.is_large()))
shape: (16, 3)
┌───────┬────────┬──────────┐
│ width ┆ height ┆ is_large │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ bool │
╞═══════╪════════╪══════════╡
│ 250 ┆ 250 ┆ false │
│ 350 ┆ 250 ┆ false │
│ 250 ┆ 150 ┆ false │
│ 350 ┆ 150 ┆ false │
│ 200 ┆ 300 ┆ false │
│ … ┆ … ┆ … │
│ 300 ┆ 100 ┆ false │
│ 100 ┆ 200 ┆ false │
│ 100 ┆ 100 ┆ false │
│ 300 ┆ 300 ┆ false │
│ 100 ┆ 300 ┆ false │
└───────┴────────┴──────────┘
Functions can return lists for multiple values:
>>> def to_list(run: Run) -> list[int]:
... return [2 * run.get("width"), 3 * run.get("height")]
>>> print(rc.to_frame("width", from_list=to_list))
shape: (16, 2)
┌───────┬────────────┐
│ width ┆ from_list │
│ --- ┆ --- │
│ i64 ┆ list[i64] │
╞═══════╪════════════╡
│ 250 ┆ [500, 750] │
│ 350 ┆ [700, 750] │
│ 250 ┆ [500, 450] │
│ 350 ┆ [700, 450] │
│ 200 ┆ [400, 900] │
│ … ┆ … │
│ 300 ┆ [600, 300] │
│ 100 ┆ [200, 600] │
│ 100 ┆ [200, 300] │
│ 300 ┆ [600, 900] │
│ 100 ┆ [200, 900] │
└───────┴────────────┘
Or dictionaries for multiple named columns:
>>> def to_dict(run: Run) -> dict[str, int | str]:
...     width2 = 2 * run.get("width")
...     name = f"h{run.get('height')}"
...     return {"width2": width2, "name": name}
>>> print(rc.to_frame("width", from_dict=to_dict))
shape: (16, 2)
┌───────┬──────────────┐
│ width ┆ from_dict │
│ --- ┆ --- │
│ i64 ┆ struct[2] │
╞═══════╪══════════════╡
│ 250 ┆ {500,"h250"} │
│ 350 ┆ {700,"h250"} │
│ 250 ┆ {500,"h150"} │
│ 350 ┆ {700,"h150"} │
│ 200 ┆ {400,"h300"} │
│ … ┆ … │
│ 300 ┆ {600,"h100"} │
│ 100 ┆ {200,"h200"} │
│ 100 ┆ {200,"h100"} │
│ 300 ┆ {600,"h300"} │
│ 100 ┆ {200,"h300"} │
└───────┴──────────────┘
Grouping Runs
The group_by method organizes runs by common attributes:
>>> grouped = rc.group_by("width")
>>> for key, group in grouped.items():
... print(key, group)
250 RunCollection(Run[Size], n=2)
350 RunCollection(Run[Size], n=2)
200 RunCollection(Run[Size], n=3)
400 RunCollection(Run[Size], n=3)
300 RunCollection(Run[Size], n=3)
100 RunCollection(Run[Size], n=3)
You can group by multiple keys:
>>> grouped = rc.group_by("width", "height")
>>> for key, group in grouped.items():
... print(key, group)
(250, 250) RunCollection(Run[Size], n=1)
(350, 250) RunCollection(Run[Size], n=1)
(250, 150) RunCollection(Run[Size], n=1)
(350, 150) RunCollection(Run[Size], n=1)
(200, 300) RunCollection(Run[Size], n=1)
(400, 200) RunCollection(Run[Size], n=1)
(200, 100) RunCollection(Run[Size], n=1)
(400, 300) RunCollection(Run[Size], n=1)
(400, 100) RunCollection(Run[Size], n=1)
(200, 200) RunCollection(Run[Size], n=1)
(300, 200) RunCollection(Run[Size], n=1)
(300, 100) RunCollection(Run[Size], n=1)
(100, 200) RunCollection(Run[Size], n=1)
(100, 100) RunCollection(Run[Size], n=1)
(300, 300) RunCollection(Run[Size], n=1)
(100, 300) RunCollection(Run[Size], n=1)
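The key structure above (a scalar for a single key, a tuple for multiple keys) can be modeled with a defaultdict. A standalone sketch of the idea, operating on plain config dicts rather than Run objects:

```python
from collections import defaultdict

def group_records(records: list[dict], *keys: str) -> dict:
    """Group dicts by one key (scalar group key) or several keys (tuple group key)."""
    groups = defaultdict(list)
    for record in records:
        key = record[keys[0]] if len(keys) == 1 else tuple(record[k] for k in keys)
        groups[key].append(record)
    return dict(groups)
```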
Adding aggregation functions using the agg method transforms the result into a DataFrame:
>>> grouped = rc.group_by("width")
>>> df = grouped.agg(n=lambda runs: len(runs))
>>> print(df)
shape: (6, 2)
┌───────┬─────┐
│ width ┆ n │
│ --- ┆ --- │
│ i64 ┆ i64 │
╞═══════╪═════╡
│ 250 ┆ 2 │
│ 350 ┆ 2 │
│ 200 ┆ 3 │
│ 400 ┆ 3 │
│ 300 ┆ 3 │
│ 100 ┆ 3 │
└───────┴─────┘
Summary
In this tutorial, you've learned how to:
- Discover experiment runs in your MLflow tracking directory
- Load and access information from individual runs
- Add custom implementation classes for domain-specific analysis
- Filter, group, and analyze collections of runs
- Convert run data to DataFrames for advanced analysis
These capabilities enable you to efficiently analyze your experiments and extract valuable insights from your machine learning workflows.
Next Steps
Now that you understand HydraFlow's analysis capabilities, you can:
- Dive deeper into the Run Class and Run Collection documentation
- Explore advanced analysis techniques in the Analyzing Results section
- Apply these analysis techniques to your own machine learning experiments