Interface (moirae.interface)#

Interfaces for running common workflows with Moirae, with a particular emphasis on data built with battery-data-toolkit

moirae.interface.row_to_inputs(row: Series, default_temperature: float = 25) Tuple[InputQuantities, OutputQuantities]#

Convert a row from the time series data into distribution objects describing the model inputs and the measured outputs

Parameters:
  • row – Row from the dataset.raw_data dataframe

  • default_temperature – Default temperature for the cells (units: C)

Returns:

  • Distribution describing the inputs

  • Distribution describing the measurements (model outputs)
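As a sketch of the temperature fallback, consider a hypothetical row from dataset.raw_data. The column names below (test_time, current, voltage, temperature) are assumptions based on battery-data-toolkit conventions, not a guaranteed schema:

```python
import pandas as pd

# Hypothetical raw_data row; column names are assumed from
# battery-data-toolkit conventions and may differ in practice.
row = pd.Series({"test_time": 0.0, "current": 1.5, "voltage": 3.7})

# When the row carries no temperature column, the default (units: C)
# is used in its place.
default_temperature = 25.0
temperature = row.get("temperature", default_temperature)
```

The row is then mapped to a pair of distributions: one over the inputs (time, current, temperature) and one over the measurements (voltage).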

moirae.interface.run_online_estimate(dataset: BatteryDataset | str | Path, estimator: OnlineEstimator, pbar: bool = False, output_states: bool = True, hdf5_output: Path | str | Group | HDF5Writer | None = None) Tuple[DataFrame, OnlineEstimator]#

Run an online estimation of battery parameters given a fixed dataset for the battery

Parameters:
  • dataset – Dataset containing the time series of a battery’s performance. Provide either the path to a battdata HDF5 file or BatteryDataset object.

  • estimator – Technique used to estimate the state of health, which is built using a physics model which describes the cell and initial guesses for the battery transient and health states.

  • pbar – Whether to display a progress bar

  • output_states – Whether to return summaries of the per-step states. Can require a large amount of memory for full datasets.

  • hdf5_output – Path to an HDF5 file or group within an already-open file in which to write the estimated parameter values. Writes the mean for each timestep and the full state for the first timestep in each cycle by default. Modify what is written by providing a HDF5Writer.

Returns:

  • Estimates of the parameters at all timesteps from the input dataset

  • Estimator after updating with the data in dataset
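The per-timestep update pattern that run_online_estimate applies can be illustrated with a toy stand-in estimator. This is not moirae's API; the running mean below merely mirrors the shape of the loop (one estimate per timestep, plus the updated estimator returned at the end):

```python
import numpy as np

# Toy stand-in for an online estimator: a running mean of voltage.
# Purely illustrative; moirae's OnlineEstimator is far richer.
class RunningMean:
    def __init__(self):
        self.n, self.mean = 0, 0.0

    def step(self, measurement: float) -> float:
        # Incremental mean update: one measurement at a time
        self.n += 1
        self.mean += (measurement - self.mean) / self.n
        return self.mean

estimator = RunningMean()
voltages = np.array([3.7, 3.6, 3.5, 3.4])
estimates = [estimator.step(v) for v in voltages]
# The pair (estimates, estimator) mirrors the function's
# (DataFrame, OnlineEstimator) return value.
```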

HDF5 Output (moirae.interface.hdf5)#

Tools for writing state estimates to HDF5 files

pydantic model moirae.interface.hdf5.HDF5Writer#

Bases: BaseModel, AbstractContextManager

Write state estimation data to an HDF5 file incrementally

Parameters:
  • hdf5_output – Path to an HDF5 file or group within a file in which to write data

  • storage_key – Name of the group within the file to store all states

  • dataset_options – Options used when initializing storage. See create_dataset(). Default is to use LZF compression.

  • resizable – Whether to use resizable datasets, allowing storage to be shrunk or expanded

  • per_timestep – Which information to store at each timestep:

      – full: All available information about the estimated state

      – mean_cov: The mean and covariance of the estimated state

      – mean_var: The mean and variance (i.e., the diagonal of the covariance matrix) of the estimated state

      – mean: Only the mean

      – none: No information

  • per_cycle – Which information to write at the first step of a cycle. The options are the same as per_timestep.
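For a Gaussian state the storage modes reduce to combinations of the mean and covariance. A sketch of what each mode retains (the two-parameter state and its values are purely illustrative):

```python
import numpy as np

# Illustrative 2-parameter Gaussian state estimate
mean = np.array([0.9, 0.02])           # e.g., a transient and a health value
cov = np.array([[1e-4, 1e-6],
                [1e-6, 1e-8]])

stored = {
    "full": (mean, cov),               # for a Gaussian, same content as mean_cov
    "mean_cov": (mean, cov),           # mean and full covariance matrix
    "mean_var": (mean, np.diag(cov)),  # mean and the covariance diagonal only
    "mean": (mean,),                   # mean only
    "none": (),                        # nothing written
}
```

The mean_var mode trades the cross-correlations for a much smaller footprint, which matters when writing every timestep of a long test.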

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError (pydantic_core.ValidationError) if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Config:
  • arbitrary_types_allowed: bool = True

Fields:
field dataset_options: Dict[str, Any] [Optional]#

Options used when initializing storage. See create_dataset()

field file_options: Dict[str, Any] [Optional]#

Options employed when opening the HDF5 file. See File()

field hdf5_output: Path | str | Group [Required]#

File or already-open HDF5 file in which to store data

field per_cycle: Literal['full', 'mean_cov', 'mean_var', 'mean', 'none'] = 'full'#

What information to store at the first timestep each cycle

field per_timestep: Literal['full', 'mean_cov', 'mean_var', 'mean', 'none'] = 'mean'#

What information to write each timestep

field position: int | None = None#

Index of the next step to be written

field resizable: bool = True#

Whether to use resizable datasets.

field storage_key: str = 'state_estimates'#

Name of the group in which to store the estimates

append_step(time: float, cycle: int, state: MultivariateRandomDistribution, output: MultivariateRandomDistribution)#

Add a state estimate to the dataset

Parameters:
  • time – Test time of timestep

  • cycle – Cycle associated with the timestep

  • state – State to be stored

  • output – Outputs predicted from the estimator
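The incremental-append pattern behind append_step can be reproduced with plain h5py: create a resizable, LZF-compressed dataset, then grow it by one row per step. The group and dataset names below are illustrative, not moirae's actual on-disk layout:

```python
import h5py
import numpy as np

# Plain-h5py sketch of incremental writes to a resizable dataset.
# Names ("state_estimates", "per_timestep_mean") are illustrative only.
with h5py.File("estimates.h5", "w") as f:
    group = f.create_group("state_estimates")
    means = group.create_dataset(
        "per_timestep_mean",
        shape=(0, 2),        # start empty: 0 rows of a 2-parameter mean
        maxshape=(None, 2),  # unlimited rows -> resizable
        compression="lzf",   # the documented default compression
    )
    for step, mean in enumerate(np.random.rand(5, 2)):
        means.resize(step + 1, axis=0)  # grow by one row...
        means[step] = mean              # ...then write the new estimate
```

When resizable is False, the dataset must instead be allocated at its final size up front, which is why prepare() requires expected_steps in that case.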

model_post_init(context: Any, /) None#

This function is meant to behave like a BaseModel method to initialise private attributes.

It takes context as an argument since that’s what pydantic-core passes when calling it.

Parameters:
  • self – The BaseModel instance.

  • context – The context.

prepare(estimator: OnlineEstimator, expected_steps: int | None = None, expected_cycles: int | None = None)#

Create the necessary groups and store metadata about the OnlineEstimator

Additional keyword arguments are passed to create_dataset().

Parameters:
  • estimator – Estimator being used to create estimates

  • expected_steps – Expected number of estimation timesteps. Required if not resizable.

  • expected_cycles – Expected number of cycles.

property is_ready: bool#

Whether the class is ready to write estimates

moirae.interface.hdf5.read_state_estimates(data_path: str | Path | Group, per_timestep: bool = True, dist_type: type[MultivariateRandomDistribution] = MultivariateGaussian) Iterator[tuple[float, MultivariateRandomDistribution, MultivariateRandomDistribution]]#

Read the state estimates from a file into progressive streams

Parameters:
  • data_path – Path to the HDF5 file or the group holding state estimates from an already-open file.

  • per_timestep – Whether to read per-timestep rather than per-cycle estimates

  • dist_type – Distribution type to create if the “full” distribution is stored

Yields:
  • Time associated with estimate

  • Distribution of state estimates

  • Distribution of the output estimates
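Because the function yields estimates one at a time, a consumer can stream them without holding the whole file in memory. A toy generator mimicking the yield signature (the dicts stand in for moirae's MultivariateRandomDistribution objects; values are invented):

```python
# Toy generator with the same yield shape as read_state_estimates:
# (time, state distribution, output distribution).
def fake_state_estimates():
    for t in (0.0, 1.0, 2.0):
        state = {"mean": [0.9 - 0.1 * t]}    # stand-in state distribution
        output = {"mean": [3.7 - 0.05 * t]}  # stand-in output distribution
        yield t, state, output

# Stream the estimates; nothing is materialized until iterated
times = [t for t, state, output in fake_state_estimates()]
```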