primap2.open_dataset

primap2.open_dataset(filename_or_obj: str | Path | IO, group: str | None = None, chunks: int | dict | None = None, cache: bool | None = None, drop_variables: str | Iterable | None = None, backend_kwargs: dict | None = None) → Dataset

Open and decode a dataset from a file or file-like object.
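A minimal usage sketch. The file name is a placeholder for any dataset previously written in the primap2 data format (for example with primap2's to_netcdf functionality):

>>> import primap2
>>> ds = primap2.open_dataset("minimal_ds.nc")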

Parameters:
filename_or_obj : str, Path, file-like, or DataStore

Strings and Path objects are interpreted as a path to a netCDF file or an OpenDAP URL and opened with h5netcdf. Byte-strings or file-like objects are also supported.

group : str, optional

Path to the netCDF4 group in the given file to open.

chunks : int or dict, optional

If chunks is provided, it is used to load the new dataset into dask arrays. chunks={} loads the dataset with dask using a single chunk for all arrays.

cache : bool, optional

If True, cache data loaded from the underlying datastore in memory as NumPy arrays when accessed, to avoid reading from the underlying datastore multiple times. Defaults to True unless you specify the chunks argument to use dask, in which case it defaults to False. Does not change the behavior of coordinates corresponding to dimensions, which always load their data from disk into a pandas.Index.

drop_variables : str or Iterable, optional

A variable or list of variables to exclude when parsing the dataset. This may be useful to drop variables with problems or inconsistent values; see the example following this parameter list.

backend_kwargs : dict, optional

A dictionary of keyword arguments to pass on to the backend. This may be useful when backend options would improve performance or allow user control of dataset processing.
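The following sketch combines several of these options. The file name and the dropped variable name are placeholders, and chunks={} assumes dask is installed:

>>> import primap2
>>> ds = primap2.open_dataset(
...     "large_ds.nc",
...     chunks={},             # load lazily into dask arrays, one chunk per array
...     drop_variables="SF6",  # skip a variable with problematic values
... )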

Returns:
dataset : Dataset

The newly created dataset.

Notes

open_dataset opens the file with read-only access. When you modify values of a Dataset, even one linked to files on disk, only the in-memory copy you are manipulating in xarray is modified: the original file on disk is never touched.
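As an illustration of this behavior (the file name and the variable name "CO2" are placeholders):

>>> import primap2
>>> ds = primap2.open_dataset("minimal_ds.nc")
>>> ds["CO2"] = ds["CO2"] * 2  # changes only the in-memory copy
>>> fresh = primap2.open_dataset("minimal_ds.nc")  # file on disk still holds the original values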