{ "cells": [ { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "# Interchange format\n", "\n", "In PRIMAP2, data is internally handled in\n", "[xarray datasets](https://xarray.pydata.org/en/stable/data-structures.html#dataset)\n", "with defined coordinates and metadata.\n", "On disk this structure is stored as a netcdf file.\n", "Because the netcdf file format was developed for the exchange of multi-dimensional\n", "datasets with a varying number of dimensions for different entities and rich meta data,\n", "we recommend that consumers of datasets published by us use the provided netcdf files.\n", "\n", "However, we recognise that many existing workflows rely on tools that handle tabular\n", "data exclusively and therefore also publish in the **PRIMAP2 Interchange Format** which\n", "is a tabular wide format with additional meta data.\n", "Users of the interchange format have to integrate the given meta data carefully into\n", "their workflows to ensure correct results.\n", "\n", "## Logical format\n", "In the interchange format all dimensions and time points are represented by columns in\n", "a two-dimensional array.\n", "Values of the time columns are data while values of the other\n", "columns are coordinates.\n", "To store metadata, including the information contained in\n", "the `attrs` dict in the PRIMAP2 xarray format, we use an additional dictionary.\n", "See sections *In-memory representation* and *on-disk representation* below for\n", "information on the storage of these structures.\n", "\n", "The requirements for the data, columns, and coordinates follow the requirements\n", "in the standard PRIMAP2 data format.\n", "Dimensions `area` and `source`, which are mandatory in the xarray format, are mandatory\n", "columns in the tabular data in the interchange format.\n", "The `time` dimension is included in the horizontal\n", "dimension of the tabular interchange format. Additionally, we have `unit` and `entity`\n", "as mandatory columns with the restriction that each entity can have only one unit.\n", "\n", "All optional dimensions (see [Data format details](data_format_details.rst)) can be\n", "added as optional columns. Secondary categories are columns with free format names.\n", "They are listed as secondary columns in the metadata dict.\n", "\n", "Column names correspond to the dimension key of the xarray format, i.e. they contain\n", "the terminology in parentheses (e.g. `area (ISO3)`).\n", "\n", "Additional columns are currently not possible, but the option will be added\n", "in a future release ([#25](https://github.com/pik-primap/primap2/issues/25)).\n", "\n", "The metadata dict has an `attrs` entry, which corresponds to the `attrs` dict of the\n", "xarray format (see [Data format details](data_format_details.rst)).\n", "Additionally, the metadata dict contains information on the `dimensions` of the\n", "data for each entity, on the `time_format` of the data columns and (if stored on disk)\n", "on the name of the `data_file`\n", "(see [Interchange format details](interchange_format_details.rst)).\n", "\n", "## Use\n", "The interchange format is intended for use mainly in two settings.\n", "\n", "* To publish data processed using PRIMAP2 in a way that is easy to read by others but\n", "also keeps the internal structure and metadata. The format will be used by future data\n", "publications by the PRIMAP team including PRIMAP-hist.\n", "* To have a common intermediate format for reading data from original sources (mostly\n", "xls or csv files in different formats) to simplify data reading functions and to enable\n", "use of our data reading functionality by other projects. All data is\n", "first read into the interchange format and subsequently converted into the native\n", "PRIMAP2 format. This enables using our data reading routines in other software\n", "packages.\n", "\n", "## In-memory representation\n", "The in-memory representation of the interchange format is using a pandas DataFrame\n", "to store the data, and a dict to store the additional metadata. Pandas DataFrames\n", "have the capability to store the metadata on their `attrs`, however this function\n", "is still experimental and subject to change without notice, so care has to be taken\n", "not to lose the data if processing is done on the DataFrame.\n", "For an example see *Examples* section below.\n", "\n", "## On-disk representation\n", "On disk the dataset is represented by a csv file containing the array, and a yaml file\n", "containing the additional metadata as a dict.\n", "Both files should have the same name except for the\n", "ending.\n", "On disk, the key `data_file` is added to the metadata dict, which contains the\n", "name of the csv file.\n", "Thus, a function reading interchange format data just needs the yaml\n", "file name to read the data.\n", "\n", "## Examples\n", "Here we show a few examples of the interchange format." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "execution": { "iopub.execute_input": "2021-01-12T16:46:26.482526Z", "iopub.status.busy": "2021-01-12T16:46:26.482238Z", "iopub.status.idle": "2021-01-12T16:46:27.442994Z", "shell.execute_reply": "2021-01-12T16:46:27.442425Z", "shell.execute_reply.started": "2021-01-12T16:46:26.482495Z" }, "jupyter": { "outputs_hidden": false }, "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "# import all the used libraries\n", "import primap2 as pm2" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "### Reading csv data\n", "The PRIMAP2 data reading procedures first convert data into the interchange format.\n", "For explanations of the used parameters see the\n", "[Data reading example](data_reading_writing_examples/test_data_wide.ipynb). A more complex dataset is\n", "read in [Data reading PRIMAP-hist](data_reading_writing_examples/PRIMAP-hist.ipynb)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "execution": { "iopub.execute_input": "2021-01-12T16:46:27.443932Z", "iopub.status.busy": "2021-01-12T16:46:27.443739Z", "iopub.status.idle": "2021-01-12T16:46:27.724241Z", "shell.execute_reply": "2021-01-12T16:46:27.723607Z", "shell.execute_reply.started": "2021-01-12T16:46:27.443906Z" }, "jupyter": { "outputs_hidden": false }, "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "file = \"data_reading_writing_examples/test_csv_data_sec_cat.csv\"\n", "coords_cols = {\n", " \"unit\": \"unit\",\n", " \"entity\": \"gas\",\n", " \"area\": \"country\",\n", " \"category\": \"category\",\n", " \"sec_cats__Class\": \"classification\",\n", "}\n", "coords_defaults = {\n", " \"source\": \"TESTcsv2021\",\n", " \"sec_cats__Type\": \"fugitive\",\n", " \"scenario\": \"HISTORY\",\n", "}\n", "coords_terminologies = {\n", " \"area\": \"ISO3\",\n", " \"category\": \"IPCC2006\",\n", " \"sec_cats__Type\": \"type\",\n", " \"sec_cats__Class\": \"class\",\n", " \"scenario\": \"general\",\n", "}\n", "coords_value_mapping = {\n", " \"category\": \"PRIMAP1\",\n", " \"entity\": \"PRIMAP1\",\n", " \"unit\": \"PRIMAP1\",\n", "}\n", "data_if = pm2.pm2io.read_wide_csv_file_if(\n", " file,\n", " coords_cols=coords_cols,\n", " coords_defaults=coords_defaults,\n", " coords_terminologies=coords_terminologies,\n", " coords_value_mapping=coords_value_mapping,\n", ")\n", "data_if.head()" ] }, { "cell_type": "markdown", "source": [ "### Writing interchange format data\n", "Data is written using the `pm2io.write_interchange_format` function which takes a filename\n", "and path (`str` or `pathlib.Path`), an interchange format dataframe (`pandas.DataFrame`)\n", "and optionally an attribute `dict` as inputs. If the filename has an ending, it will be\n", "ignored. The function writes a `yaml` file and a `csv` file." ], "metadata": { "collapsed": false, "pycharm": { "name": "#%% md\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "file_if = \"data_reading_writing_examples/test_csv_data_sec_cat_if\"\n", "pm2.pm2io.write_interchange_format(file_if, data_if)" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "### Reading data from disk\n", "To read interchange format data from disk the function `pm2io.read_interchange_format`\n", "is used. It just takes a filename and path as input (`str` or `pathlib.Path`) and returns\n", "a `pandas.DataFrame` containing the data and metadata. The filename and path has to point\n", "to the `yaml` file. the `csv` file will be read from the filename contained in the `yaml`\n", "file." ] }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "data_if_read = pm2.pm2io.read_interchange_format(file_if)\n", "data_if_read.head()" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "markdown", "source": [ "### Converting to and from standard PRIMAP2 format\n", "Data in the standard, xarray-based PRIMAP2 format can be converted to and from the interchange format with the corresponding functions:" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%% md\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "ds_minimal = pm2.open_dataset(\"minimal_ds.nc\")\n", "\n", "if_minimal = ds_minimal.pr.to_interchange_format()\n", "\n", "if_minimal.head()" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "ds_minimal_re = pm2.pm2io.from_interchange_format(if_minimal)\n", "\n", "ds_minimal_re" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.6" } }, "nbformat": 4, "nbformat_minor": 4 }