diff --git a/tutorials/03_climate_projection/03x01_projections-cmip6.ipynb b/tutorials/03_climate_projection/03x01_projections-cmip6.ipynb index cd7a1d8..48645cd 100644 --- a/tutorials/03_climate_projection/03x01_projections-cmip6.ipynb +++ b/tutorials/03_climate_projection/03x01_projections-cmip6.ipynb @@ -193,7 +193,7 @@ "metadata": {}, "outputs": [], "source": [ - "!pip install cdsapi" + "# !pip install cdsapi" ] }, { diff --git a/tutorials/03_climate_projection/03x01_projections-cmip6_parallel.ipynb b/tutorials/03_climate_projection/03x01_projections-cmip6_parallel.ipynb deleted file mode 100644 index 4986719..0000000 --- a/tutorials/03_climate_projection/03x01_projections-cmip6_parallel.ipynb +++ /dev/null @@ -1,1156 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Plot an Ensemble of CMIP6 Climate Projections - parallel download implementation" ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "\n", - "CDS_PROJECT = \"projections-cmip6_parallel\"" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [] - }, - { - "cell_type": "markdown", - "metadata": { - "tags": [] - }, - "source": [ - "### About" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "This notebook provides a practical introduction to accessing and processing [CMIP6 global climate projections](https://cds.climate.copernicus.eu/datasets/projections-cmip6?tab=overview) data available in the Climate Data Store (CDS) of the Copernicus Climate Change Service (C3S). The workflow shows how to compute and visualize the output of an ensemble of models for the annual global average temperature between 1850 and 2100. 
You will use the `historical` experiment for the temporal period 1850 to 2014 and the three scenarios `SSP1-2.6`, `SSP2-4.5` and `SSP5-8.5` for the period from 2015 to 2100.\n", - "\n", - "For the sake of simplicity, and to facilitate data download, the tutorial will make use of some of the coarser resolution models that have a smaller data size. It is nevertheless only a choice for this exercise and not a recommendation (since ideally all models, including those with the highest resolution, should be used). Many more models are available on the CDS, and when calculating an ensemble of models, it is best practice to use as many as possible for a more reliable output. See [here](https://confluence.ecmwf.int/display/CKB/CMIP6%3A+Global+climate+projections#CMIP6:Globalclimateprojections-Models,gridsandpressurelevels) for a full list of models included in the CDS-CMIP6 dataset.\n", - "\n", - "Learn more [here](https://confluence.ecmwf.int/display/CKB/CMIP6%3A+Global+climate+projections#CMIP6:Globalclimateprojections) about CMIP6 global climate projections and the CMIP6 experiments in the CDS." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "\n", - "The notebook has the following outline:\n", - "\n", - "1. Request data from the CDS programmatically with the CDS API\n", - "2. Unzip the downloaded data files\n", - "3. Load and prepare CMIP6 data for one model and one experiment\n", - "4. Load and prepare CMIP6 data for all models and experiments\n", - "5. Visualize CMIP6 annual global average temperature between 1850 and 2100" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "tags": [] - }, - "source": [ - "### Data" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "This notebook introduces you to [CMIP6 Global climate projections](https://cds.climate.copernicus.eu/datasets/projections-cmip6?tab=overview). 
The datasets used in the notebook have the following specifications:\n", - "\n", - "> **Data**: CMIP6 global climate projections of near-surface air temperature
\n", - "> **Experiments**: Historical, SSP1-2.6, SSP2-4.5, SSP5-8.5
\n", - "> **Models**: 7 models from Germany, France, UK, Japan and Russia
\n", - "> **Temporal range**: Historical: 1850 - 2014. Scenarios: 2015 - 2100
\n", - "> **Spatial coverage**: Global
\n", - "> **Format**: NetCDF, compressed into zip files" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### d4science_copernicus_cds Library\n", - "\n", - "To request data from the Climate Data Store (CDS) programmatically using the CDS API, we will manage our authentication with the `d4science_copernicus_cds` library.\n", - "\n", - "The library prompts us to enter our credentials, which are then securely saved in our workspace. **This request is only made the first time**; afterward, the `get_credentials` function will automatically retrieve the credentials from the environment or workspace, eliminating the need to re-enter them in the Jupyter notebook.\n", - "\n", - "To obtain your API credentials:\n", - "1. Register or log in to the CDS at [https://cds.climate.copernicus.eu](https://cds.climate.copernicus.eu).\n", - "2. Visit [https://cds.climate.copernicus.eu/how-to-api](https://cds.climate.copernicus.eu/how-to-api) and copy the API key provided.\n", - "\n", - "The library will prompt you to enter:\n", - "- **URL**: The URL field is prefilled; simply press Enter to accept the default.\n", - "- **KEY**: Insert the obtained API key when prompted, then confirm saving your credentials by pressing \"y.\"\n", - "\n", - "Once saved, your credentials will be loaded automatically in future sessions, ensuring a seamless experience with the CDS API." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# !pip install git+https://code-repo.d4science.org/D4Science/d4science_copernicus_cds.git" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from d4science_copernicus_cds import cds_get_credentials, cds_datadir" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "URL, KEY = cds_get_credentials()\n", - "print(\"URL\", URL)\n", - "print(\"KEY\", KEY)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The `cds_datadir` function will create a folder in our workspace, under `cds_dataDir`, with the current timestamp and a custom label." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import os\n", - "DATADIR = cds_datadir(CDS_PROJECT)\n", - "print(DATADIR)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "To avoid re-downloading data that is already present, we store it in a local folder:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "CDS_DATA = os.path.join(\"../data/\", CDS_PROJECT)\n", - "os.makedirs(CDS_DATA, exist_ok=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Use `CDS_DATA = DATADIR` if you want to avoid using the shared download folder." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "This tutorial needs additional dependencies:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "!pip install --upgrade xarray zarr dask fsspec\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "---" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Install CDS API via pip" - ] - }, - { - "cell_type": "code", - 
"execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "!pip install cdsapi" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Load libraries" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# General libs for file paths, data extraction, etc\n", - "from glob import glob\n", - "from pathlib import Path\n", - "from os.path import basename\n", - "import zipfile # To extract zipfiles\n", - "import urllib3 \n", - "urllib3.disable_warnings() # Disable warnings for data download via API\n", - "\n", - "# CDS API\n", - "import cdsapi\n", - "\n", - "# Libraries for working with multi-dimensional arrays\n", - "import numpy as np\n", - "import xarray as xr\n", - "import pandas as pd\n", - "\n", - "# Libraries for plotting and visualising data\n", - "import matplotlib.path as mpath\n", - "import matplotlib.pyplot as plt\n", - "import cartopy.crs as ccrs\n", - "from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER\n", - "import cartopy.feature as cfeature\n", - "\n", - "import os" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "
" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "~## Request data from the CDS programmatically with the CDS API~" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### ~Enter your CDS API key~\n", - "\n", - "~We will request data from the Climate Data Store (CDS) programmatically with the help of the CDS API. Let us make use of the option to manually set the CDS API credentials.~\n", - "\n", - "~First, you have to define two variables: `URL` and `KEY` which build together your CDS API key.~\n", - "\n", - "~The string of characters that make up your KEY include your personal User ID and CDS API key. To obtain these, first register or login to the CDS (https://cds.climate.copernicus.eu), then visit https://cds.climate.copernicus.eu/how-to-api and copy the string of characters listed after \"key:\". Replace the `#########` below with this string.~\n", - "\n", - "~URL = 'https://cds.climate.copernicus.eu/api'~\n", - "\n", - "~KEY = 'xxx'~\n", - "\n", - "~Here we specify a data directory in which we will download our data and all output files that we will generate:~\n", - "\n", - "~DATADIR = './'~" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Retrieve Data\n", - "\n", - "The next step is then to request the data with the help of the CDS API. Below, we loop through multiple data requests. These include data for different models and scenarios. It is not possible to specify multiple models in one data request as their spatial resolution varies.\n", - "\n", - "We will download monthly aggregated data. 
These are disseminated as NetCDF files within a zip archive.\n", - "\n", - "In order to loop through the various experiments and models in our data requests, we will specify them as Python 'lists' here:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "experiments = [\n", - " 'historical', \n", - " 'ssp126', \n", - " 'ssp245', \n", - " 'ssp585'\n", - "]" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "models = [\n", - " 'hadgem3_gc31_ll', \n", - " 'inm_cm5_0',\n", - " 'inm_cm4_8',\n", - " 'ipsl_cm6a_lr', \n", - " 'miroc_es2l', \n", - " 'mpi_esm1_2_lr', \n", - " 'ukesm1_0_ll'\n", - "]" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "> **Note:** These are a selection of the lightest models (in terms of data volume), to facilitate download for the sake of this exercise. There are many [more models available on the CDS](https://cds.climate.copernicus.eu/datasets/projections-cmip6?tab=overview)." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Now we can download the data for each model and experiment. We will do this separately for the historical experiment and for the various future scenarios, given that they refer to two different time periods.\n", - "\n", - "Before you run the cells below, the terms and conditions on the use of the data need to have been accepted in the CDS. You can view and accept these conditions by logging into the [CDS](https://cds.climate.copernicus.eu), searching for the dataset, then scrolling to the end of the `Download data` section." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "> **Note:** For more information about data access through the Climate Data Store, please see the CDS user guide [here](https://cds.climate.copernicus.eu/user-guide)." 
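The requests in this notebook pass years and months as zero-padded string lists rather than a date range. As a minimal standalone sketch of that formatting (the range endpoints here are illustrative, not the full 1850-2014 span):

```python
# Build zero-padded year/month string lists for a CDS-style request.
# Endpoints 1850-1852 are illustrative only.
start_year, end_year = 1850, 1852

years = ['%04d' % y for y in range(start_year, end_year + 1)]
months = ['%02d' % m for m in range(1, 13)]

print(years)       # ['1850', '1851', '1852']
print(months[:3])  # ['01', '02', '03']
```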
- ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### UPGRADE: Parallelizing Data Downloads with concurrent.futures\n", - "In this section, we enhance the standard implementation by using concurrent.futures to execute all data downloads in parallel. This approach allows us to efficiently retrieve multiple datasets simultaneously, significantly reducing the overall time required for the downloads.\n", - "\n", - "By leveraging parallel processing, we can handle multiple requests at once, making the data retrieval process faster and more efficient. This is particularly useful when dealing with large datasets or multiple scenarios, as it helps to optimize the workflow." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import concurrent.futures" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "\n", - "# Function to execute a retrieve request\n", - "def retrieve_data(client, name, request, target):\n", - " return (client.retrieve(name, request), target)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### DOWNLOAD DATA FOR HISTORICAL PERIOD\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# DOWNLOAD DATA FOR HISTORICAL PERIOD - original sequential (fixed) implementation\n", - "# c = cdsapi.Client()\n", - "\n", - "# for j in models:\n", - "# c.retrieve(\n", - "# 'projections-cmip6',\n", - "# {\n", - "# 'download_format': 'zip',\n", - "# 'data_format': 'netcdf_legacy',\n", - "# 'temporal_resolution': 'monthly',\n", - "# 'experiment': 'historical',\n", - "# 'level': 'single_levels',\n", - "# 'variable': 'near_surface_air_temperature',\n", - "# 'model': f'{j}',\n", - "# # 'date': '1850-01-01/2014-12-31',\n", - "# \"year\": years,\n", - "# \"month\": months\n", - "# },\n", - "# 
f'{DATADIR}cmip6_monthly_1850-2014_historical_{j}.zip')" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "c = cdsapi.Client()" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# The new API requires arrays of values for years and months instead of date ranges (previously 'date': '1850-01-01/2014-12-31')\n", - "\n", - "start_year = 1850\n", - "end_year = 2014\n", - "\n", - "years = ['%04d' % (x) for x in range(start_year, end_year+1)]\n", - "months = ['%02d' % (x) for x in range(1, 13)]" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "\n", - "# DOWNLOAD DATA FOR HISTORICAL PERIOD - parallelized implementation\n", - "\n", - "requests_historical = []\n", - "targets_historical = []\n", - "\n", - "experiment = 'historical'\n", - "for model in models:\n", - " request = {\n", - " 'name': 'projections-cmip6',\n", - " 'request': {\n", - " 'download_format': 'zip',\n", - " 'data_format': 'netcdf_legacy',\n", - " 'temporal_resolution': 'monthly',\n", - " 'experiment': experiment,\n", - " 'level': 'single_levels',\n", - " 'variable': 'near_surface_air_temperature',\n", - " 'model': model,\n", - " # 'date': '1850-01-01/2014-12-31',\n", - " 'year': years,\n", - " 'month': months\n", - " },\n", - " }\n", - " requests_historical.append(request)\n", - " targets_historical.append(f'{DATADIR}cmip6_monthly_{experiment}_{model}.zip')\n", - "\n", - "results = []\n", - "targets = []\n", - "# Execute the requests in parallel\n", - "with concurrent.futures.ThreadPoolExecutor() as executor:\n", - " \n", - " futures = [executor.submit(retrieve_data, c, requests_historical[i]['name'], requests_historical[i]['request'], targets_historical[i]) for i in range(len(requests_historical))]\n", - " for future in concurrent.futures.as_completed(futures):\n", - " try:\n", - " result, target = future.result()\n", - " results.append(result)\n", - " targets.append(target)\n", - "\n", - " print(f'Data result at: {result} will be downloaded to: {target}') \n", - " except Exception as e:\n", - " print(f'Error retrieving data: {e}')\n", - "\n", - "c.download(results, targets)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### DOWNLOAD DATA FOR FUTURE SCENARIOS\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# DOWNLOAD DATA FOR FUTURE SCENARIOS - old implementation\n", - "\n", - "# c = cdsapi.Client()\n", - "# for i in experiments[1:]:\n", - "# for j in models:\n", - "# c.retrieve(\n", - "# 'projections-cmip6',\n", - "# {\n", - "# 'download_format': 'zip',\n", - "# 'data_format': 'netcdf_legacy',\n", - "# 'temporal_resolution': 'monthly',\n", - "# 'experiment': f'{i}',\n", - "# 'level': 'single_levels',\n", - "# 'variable': 'near_surface_air_temperature',\n", - "# 'model': f'{j}',\n", - "# # 'date': '1850-01-01/2014-12-31',\n", - "# \"year\": years,\n", - "# \"month\": months\n", - "# },\n", - "# f'{DATADIR}cmip6_monthly_2015-2100_{i}_{j}.zip'\n", - "# )" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# The new API requires arrays of values for years and months instead of date ranges (previously 'date': '2015-01-01/2100-12-31')\n", - "\n", - "\n", - "start_year = 2015\n", - "end_year = 2100\n", - "\n", - "years = ['%04d' % (x) for x in range(start_year, end_year+1)]\n", - "months = ['%02d' % (x) for x in range(1, 13)]" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# DOWNLOAD DATA FOR FUTURE SCENARIOS - parallelized implementation\n", - "\n", - "requests_future = []\n", - "targets_future = []\n", - "\n", - "for experiment in experiments[1:]:\n", - " for model in models:\n", - " request = {\n", - " 'name': 'projections-cmip6',\n", - " 'request': {\n", - " 'download_format': 'zip',\n", - " 'data_format': 'netcdf_legacy',\n", - " 'temporal_resolution': 'monthly',\n", - " 'experiment': experiment,\n", - " 'level': 'single_levels',\n", - " 'variable': 'near_surface_air_temperature',\n", - " 'model': model,\n", - " # 'date': '2015-01-01/2100-12-31',\n", - " 'year': years,\n", - " 'month': months\n", - " },\n", - " }\n", - " requests_future.append(request)\n", - " targets_future.append(f'{DATADIR}cmip6_monthly_{experiment}_{model}.zip')\n", - "\n", - "results = []\n", - "targets = []\n", - "# Execute the requests in parallel\n", - "with concurrent.futures.ThreadPoolExecutor() as executor:\n", - " \n", - " futures = [executor.submit(retrieve_data, c, requests_future[i]['name'], requests_future[i]['request'], targets_future[i]) for i in range(len(requests_future))]\n", - " for future in concurrent.futures.as_completed(futures):\n", - " try:\n", - " result, target = future.result()\n", - " results.append(result)\n", - " targets.append(target)\n", - "\n", - " print(f'Data result at: {result} will be downloaded to: {target}') \n", - " except Exception as e:\n", - " print(f'Error retrieving data: {e}')\n", - "\n", - "c.download(results, targets)\n", - "\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Unzip the downloaded data files" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "From the CDS, CMIP6 data are available as `NetCDF` files compressed into `zip` archives. For this reason, before we can load any data, we have to extract the files. Having downloaded the four experiments `historical`, `SSP1-2.6`, `SSP2-4.5` and `SSP5-8.5` as separate zip files, we can use the functions from the `zipfile` Python package to extract their contents. 
For each zip file we first construct a `ZipFile()` object, then we apply the function `extractall()` to extract its contents." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "cmip6_zip_paths = glob(f'{DATADIR}*.zip')\n", - "for j in cmip6_zip_paths:\n", - " with zipfile.ZipFile(j, 'r') as zip_ref:\n", - " zip_ref.extractall(f'{DATADIR}')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Create a list of the extracted files\n", - "\n", - "To facilitate batch processing later in the tutorial, here we create a list of the extracted NetCDF files:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "cmip6_nc = list()\n", - "cmip6_nc_rel = glob(f'{DATADIR}tas*.nc')\n", - "for i in cmip6_nc_rel:\n", - " cmip6_nc.append(os.path.basename(i))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We will briefly inspect this list by printing the first five elements, corresponding to the filenames of a sample of the extracted NetCDF files:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "cmip6_nc[0:5]" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Load and prepare CMIP6 data for one model and one experiment" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Now that we have downloaded and extracted the data, we can prepare it in order to view a time series of the spread of annual global temperature for the model ensemble. These preparation steps include the following:\n", - "\n", - "1. **Spatial aggregation**: to have a single global temperature value for each model/experiment dataset, and for each time step\n", - "2. **Temporal aggregation**: from monthly to yearly\n", - "3. **Conversion of temperature units** from kelvin to degrees Celsius\n", - "4. 
**Addition of data dimensions** in preparation for the merging of datasets from different models and experiments\n", - "\n", - "In this section we apply these steps to a single dataset from one model and one experiment. In the next section we merge data from all models/experiments in preparation for the final processing and plotting of the temperature time series." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Load and inspect data" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We begin by loading the first of the NetCDF files in our list. We will use the Python library [xarray](http://xarray.pydata.org/en/stable/) and its function `open_dataset` to read NetCDF files.\n", - "\n", - "The result is an `xarray.Dataset` object with four dimensions: `bnds`, `lat`, `lon`, `time`, of which the dimension `bnds` is not callable." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "ds = xr.open_dataset(f'{DATADIR}{cmip6_nc[0]}')\n", - "ds" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "By examining the data above, we can see from the temporal range (1850 to 2014) that it is from the `historical` experiment.\n", - "\n", - "We see that the data dimensions have been given labelled coordinates of time, latitude and longitude. We can find out more about the dataset from the `Attributes`; this information includes the model name, a description of the variable (`long_name`), units, etc.\n", - "\n", - "Some of this information we will need later; this includes the experiment and model IDs. 
We will save these into variables:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "exp = ds.attrs['experiment_id']\n", - "mod = ds.attrs['source_id']" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "An `xarray.Dataset()` may contain arrays of multiple variables. We only have one variable in the dataset, which is near-surface air temperature, `tas`. Below we create an `xarray.DataArray()` object, which takes only one variable, but gives us more flexibility in processing." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "da = ds['tas']" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Spatial aggregation" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The next step is to aggregate the temperature values spatially (i.e. average over the latitude and longitude dimensions) and compute the global monthly near-surface temperature.\n", - "\n", - "A very important consideration, however, is that the gridded data cells do not all correspond to the same areas. The size covered by each data point varies as a function of latitude. We need to take this into account when averaging. One way to do this is to use the cosine of the latitude as a proxy for the varying sizes. 
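As a numpy-only sanity check of this idea, on a tiny synthetic grid (all values assumed), cosine weighting pulls the global mean toward the equatorial cells, which cover more area than the near-polar ones:

```python
import numpy as np

# Synthetic grid: three latitude bands x two longitudes (values assumed).
lats = np.array([0.0, 45.0, 89.0])
temps = np.array([[30.0, 30.0],    # equator: warm
                  [10.0, 10.0],    # mid-latitude
                  [-20.0, -20.0]]) # near-polar: cold

weights = np.cos(np.deg2rad(lats))           # area proxy per latitude band
zonal_mean = temps.mean(axis=1)              # average over longitude first
weighted_mean = (zonal_mean * weights).sum() / weights.sum()
naive_mean = temps.mean()                    # treats all cells as equal area
```

The naive mean is about 6.7 degrees, while the area-weighted mean is about 21.3 degrees, because the cold near-polar band receives almost no weight.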
\n", - "\n", - "This can be implemented by first calculating weights as a function of the cosine of the latitude, then applying these weights to the data array with the xarray function `weighted()`:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "weights = np.cos(np.deg2rad(da.lat))\n", - "weights.name = \"weights\"\n", - "da_weighted = da.weighted(weights)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The next step is then to compute the mean across the latitude and longitude dimensions of the weighted data array with the function `mean()`. The result is a DataArray with one dimension (`time`)." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "da_agg = da_weighted.mean(['lat', 'lon'])" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Temporal aggregation" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We now aggregate the monthly global near-surface air temperature values to annual global near-surface air temperature values. This operation can be done in two steps: first, all the values for one specific year have to be grouped with the function `groupby()` and second, we can create the average of each group with the function `mean()`.\n", - "\n", - "The result is a one-dimensional DataArray. Please note that this operation changes the name of the dimension from `time` to `year`." 
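A minimal pandas analogue of this two-step group-then-average pattern, on a synthetic monthly series (values assumed):

```python
import numpy as np
import pandas as pd

# Synthetic monthly series spanning two years (values assumed).
idx = pd.date_range('1850-01-01', periods=24, freq='MS')
monthly = pd.Series(np.arange(24, dtype=float), index=idx)

# Step 1: group by calendar year; step 2: average each group.
# This mirrors xarray's groupby('time.year').mean().
annual = monthly.groupby(monthly.index.year).mean()
```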
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "da_yr = da_agg.groupby('time.year').mean()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Conversion from Kelvin to Celsius" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The metadata of the original data (before it was stripped during the subsequent processing steps) tells us that the near-surface air temperature data values are in units of Kelvin. We will convert them to degrees Celsius by subtracting 273.15 from the data values. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "da_yr = da_yr - 273.15" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Create additional data dimensions (to later combine data from multiple models & experiments)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Finally, we will create additional dimensions for the model and for the experiment. These we will label with the model and experiment name as taken from the metadata of the original data (see above). These will be useful when we repeat the processes above for all models and experiments, and combine them into one array." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "da_yr = da_yr.assign_coords(model=mod)\n", - "da_yr = da_yr.expand_dims('model')\n", - "da_yr = da_yr.assign_coords(experiment=exp)\n", - "da_yr = da_yr.expand_dims('experiment')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Load and prepare CMIP6 data for all models and experiments" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "To repeat the steps above for all models and all experiments, we will collect all of the commands we have used so far into a function, which we can then apply to a batch of files corresponding to the data from all models and experiments." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Function to aggregate in geographical lat lon dimensions\n", - "\n", - "def geog_agg(fn):\n", - " ds = xr.open_dataset(f'{DATADIR}{fn}')\n", - " exp = ds.attrs['experiment_id']\n", - " mod = ds.attrs['source_id']\n", - " da = ds['tas']\n", - " weights = np.cos(np.deg2rad(da.lat))\n", - " weights.name = \"weights\"\n", - " da_weighted = da.weighted(weights)\n", - " da_agg = da_weighted.mean(['lat', 'lon'])\n", - " da_yr = da_agg.groupby('time.year').mean()\n", - " da_yr = da_yr - 273.15\n", - " da_yr = da_yr.assign_coords(model=mod)\n", - " da_yr = da_yr.expand_dims('model')\n", - " da_yr = da_yr.assign_coords(experiment=exp)\n", - " da_yr = da_yr.expand_dims('experiment')\n", - " da_yr.to_netcdf(path=f'{DATADIR}cmip6_agg_{exp}_{mod}_{str(da_yr.year[0].values)}.nc')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Now we can apply this function to all the extracted NetCDF files. The `try` and `except` clauses ensure that all NetCDF files are attempted, even if some fail to be processed. One reason why some may fail is if the data are labelled differently, e.g. 
the model *MCM-UA-1-0* has coordinates labelled as \"*latitude*\" and \"*longitude*\". This differs from the suggested standard, and more commonly applied, labels of \"*lat*\" and \"*lon*\". Any that fail will be recorded in a print statement, and these can be processed separately. See [here](https://confluence.ecmwf.int/display/CKB/CMIP6%3A+Global+climate+projections#CMIP6:Globalclimateprojections-QualitycontroloftheCDS-CMIP6subset) for more details on the quality control of the CMIP6 datasets on the CDS." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "for i in cmip6_nc:\n", - " try:\n", - " geog_agg(i)\n", - " except Exception: print(f'{i} failed')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "In the absence of any print statements, we see that all files were successfully processed. \n", - "\n", - "We will now combine these processed files into one dataset for the final steps to create a visualisation of near-surface air temperature from the model ensemble." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "If all files have the same coordinates, the function `xarray.open_mfdataset` will merge the data according to the same coordinates." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "data_ds = xr.open_mfdataset(f'{DATADIR}cmip6_agg*.nc')\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The dataset created by `xarray.open_mfdataset` is by default in the form of \"lazy Dask arrays\". \n", - "\n", - "Dask divides arrays into many small pieces, called chunks, each of which is presumed to be small enough to fit into memory. As opposed to eager evaluation, operations on Dask arrays are lazy, i.e. operations queue up a series of tasks mapped over blocks, and no computation is performed until you request values to be computed. 
For more details, see https://xarray.pydata.org/en/stable/user-guide/dask.html. \n", - "\n", - "To facilitate further processing we need to convert these Dask arrays into in-memory \"eager\" arrays, which we can do by using the `load()` method: " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "scrolled": true - }, - "outputs": [], - "source": [ - "data_ds.load()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Finally, we create an Xarray DataArray object for the near-surface air temperature variable, 'tas':" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "data = data_ds['tas']" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Visualize the CMIP6 annual global average temperature between 1850 and 2100" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We will now create a plot of the model ensemble of near-surface air temperature for the historical and future periods, according to the three selected scenarios." 
- ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Calculate quantiles for the model ensemble" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Rather than plotting the data from all models, we will instead view the range of values as given by quantiles, including the 10th (near the lower limit), the 50th (mid-range) and the 90th (near the upper limit):" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "data_90 = data.quantile(0.9, dim='model')\n", - "data_10 = data.quantile(0.1, dim='model')\n", - "data_50 = data.quantile(0.5, dim='model')" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "> **Note:** The warning message is due to the presence of NaN (Not a Number) data, given that the historical and scenario datasets each cover only part (historical and future, respectively) of the entire time series. As these two datasets have been merged, NaN values will exist (e.g. there will be no data for the historical experiment for the future period)." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### View time-series" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Finally, we will visualise this data in one time-series plot, using the matplotlib `plot()` function. The dimension `year` will be the x-axis and the near-surface air temperature values in degrees Celsius will be the y-axis. 
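The ensemble-quantile step can be sketched with plain NumPy, where the first axis plays the role of the `model` dimension and the second the `year` dimension (the temperature values below are made up for illustration). `np.nanquantile` skips NaN entries per time step, mirroring how xarray's `quantile` handles the NaNs introduced by merging the historical and scenario periods.

```python
import numpy as np

data = np.array([
    [14.0, 14.5, np.nan],   # model A (no data for the last year)
    [13.8, 14.1, 15.0],     # model B
    [14.2, 14.9, 15.4],     # model C
])

# Quantiles across the model axis (axis=0), ignoring NaNs per year --
# the NumPy analogue of data.quantile(..., dim='model') on the xarray object.
q10 = np.nanquantile(data, 0.1, axis=0)
q50 = np.nanquantile(data, 0.5, axis=0)
q90 = np.nanquantile(data, 0.9, axis=0)
print(q50)  # median across models for each year
```

Because the last year has only two valid models, its quantiles are computed from those two values alone rather than returning NaN.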
\n", - "\n", - "The plotting function below has four main parts:\n", - "* **Initiate the plot**: initiate a matplotlib plot with `plt.subplots()`\n", - "* **Plot the time-series**: plot the data for each experiment, including the historical experiment and three scenarios, with the `plot()` function\n", - "* **Set axes limits, labels, title and legend**: Define the title and axes labels, and add additional items to the plot, such as a legend and gridlines\n", - "* **Save the figure**: Save the figure as a PNG file with the `matplotlib.pyplot.savefig()` function" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "scrolled": true - }, - "outputs": [], - "source": [ - "fig, ax = plt.subplots(1, 1, figsize = (16, 8))\n", - "\n", - "colours = ['black','red','green','blue']\n", - "for i in np.arange(len(experiments)):\n", - " ax.plot(data_50.year, data_50[i,:], color=f'{colours[i]}', \n", - " label=f'{data_50.experiment[i].values} 50th quantile')\n", - " ax.fill_between(data_50.year, data_90[i,:], data_10[i,:], alpha=0.1, color=f'{colours[i]}', \n", - " label=f'{data_50.experiment[i].values} 10th and 90th quantile range')\n", - "\n", - "ax.set_xlim(1850,2100)\n", - "ax.set_title('CMIP6 annual global average temperature (1850 to 2100)')\n", - "ax.set_ylabel('tas (Celsius)')\n", - "ax.set_xlabel('year')\n", - "handles, labels = ax.get_legend_handles_labels()\n", - "ax.legend(handles, labels)\n", - "ax.grid(linestyle='--')\n", - "\n", - "fig.savefig(f'{DATADIR}CMIP6_annual_global_tas.png')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The visualization of the `CMIP6 annual global average temperature (1850 to 2100)` above shows that the global average temperature was more or less stable in the pre-industrial phase, but has increased steadily since the 1990s. It further shows that, depending on the SSP scenario, the course and increase of the global annual temperature differs. 
Under the best-case `SSP1-2.6` scenario, the global annual temperature could stabilize at around 15 degC, while under the worst-case `SSP5-8.5` scenario it could increase to above 20 degC."
" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "

\n", - "

This project is licensed under the Apache License 2.0. | View on GitHub" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "


" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Output files saved in: " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "DATADIR" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.8.5" - } - }, - "nbformat": 4, - "nbformat_minor": 4 -} diff --git a/tutorials/03_climate_projection/03x02_projections-cordex.ipynb b/tutorials/03_climate_projection/03x02_projections-cordex.ipynb index 77428c8..94e94ca 100644 --- a/tutorials/03_climate_projection/03x02_projections-cordex.ipynb +++ b/tutorials/03_climate_projection/03x02_projections-cordex.ipynb @@ -3,15 +3,13 @@ { "cell_type": "markdown", "metadata": {}, - "source": [ - "# Analysis of Projected versus Historical Climatology with CORDEX Data" - ] + "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "### About" + "# Analysis of Projected versus Historical Climatology with CORDEX Data" ] }, { @@ -23,6 +21,13 @@ "CDS_PROJECT = \"03x02_projections-cordex\"" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### About" + ] + }, { "cell_type": "markdown", "metadata": {}, @@ -116,47 +121,16 @@ }, { "cell_type": "code", - "execution_count": 1, + "execution_count": null, "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Collecting git+https://code-repo.d4science.org/D4Science/d4science_copernicus_cds.git\n", - " Cloning https://code-repo.d4science.org/D4Science/d4science_copernicus_cds.git to /private/var/folders/hc/hl1vgl493cqdfj_g7d8pggv40000gn/T/pip-req-build-fxhc4mqq\n", - " Running command git clone --filter=blob:none --quiet 
https://code-repo.d4science.org/D4Science/d4science_copernicus_cds.git /private/var/folders/hc/hl1vgl493cqdfj_g7d8pggv40000gn/T/pip-req-build-fxhc4mqq\n", - " warning: filtering not recognized by server, ignoring\n", - " warning: filtering not recognized by server, ignoring\n", - " Resolved https://code-repo.d4science.org/D4Science/d4science_copernicus_cds.git to commit 22ec9dd7e72830d0057adc7454aa7e864a86a23c\n", - " Preparing metadata (setup.py) ... \u001b[?25ldone\n", - "\u001b[?25hRequirement already satisfied: cdsapi>=0.7.2 in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from d4science-copernicus-cds==1.0.0) (0.7.4)\n", - "Requirement already satisfied: attrs in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from d4science-copernicus-cds==1.0.0) (24.2.0)\n", - "Requirement already satisfied: typing_extensions in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from d4science-copernicus-cds==1.0.0) (4.12.2)\n", - "Requirement already satisfied: cads-api-client>=1.4.7 in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from cdsapi>=0.7.2->d4science-copernicus-cds==1.0.0) (1.5.2)\n", - "Requirement already satisfied: requests>=2.5.0 in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from cdsapi>=0.7.2->d4science-copernicus-cds==1.0.0) (2.32.3)\n", - "Requirement already satisfied: tqdm in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from cdsapi>=0.7.2->d4science-copernicus-cds==1.0.0) (4.67.0)\n", - "Requirement already satisfied: multiurl>=0.3.2 in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from cads-api-client>=1.4.7->cdsapi>=0.7.2->d4science-copernicus-cds==1.0.0) (0.3.2)\n", - "Requirement already satisfied: urllib3<3,>=1.21.1 in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from requests>=2.5.0->cdsapi>=0.7.2->d4science-copernicus-cds==1.0.0) (2.2.3)\n", - "Requirement already 
satisfied: charset-normalizer<4,>=2 in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from requests>=2.5.0->cdsapi>=0.7.2->d4science-copernicus-cds==1.0.0) (3.4.0)\n", - "Requirement already satisfied: idna<4,>=2.5 in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from requests>=2.5.0->cdsapi>=0.7.2->d4science-copernicus-cds==1.0.0) (3.10)\n", - "Requirement already satisfied: certifi>=2017.4.17 in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from requests>=2.5.0->cdsapi>=0.7.2->d4science-copernicus-cds==1.0.0) (2024.8.30)\n", - "Requirement already satisfied: python-dateutil in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from multiurl>=0.3.2->cads-api-client>=1.4.7->cdsapi>=0.7.2->d4science-copernicus-cds==1.0.0) (2.9.0.post0)\n", - "Requirement already satisfied: pytz in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from multiurl>=0.3.2->cads-api-client>=1.4.7->cdsapi>=0.7.2->d4science-copernicus-cds==1.0.0) (2024.2)\n", - "Requirement already satisfied: six>=1.5 in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from python-dateutil->multiurl>=0.3.2->cads-api-client>=1.4.7->cdsapi>=0.7.2->d4science-copernicus-cds==1.0.0) (1.16.0)\n", - "\n", - "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.0.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.3.1\u001b[0m\n", - "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n" - ] - } - ], + "outputs": [], "source": [ - "!pip install git+https://code-repo.d4science.org/D4Science/d4science_copernicus_cds.git" + "# !pip install git+https://code-repo.d4science.org/D4Science/d4science_copernicus_cds.git" ] }, { "cell_type": "code", - "execution_count": 2, + "execution_count": null, 
"metadata": {}, "outputs": [], "source": [ @@ -165,18 +139,9 @@ }, { "cell_type": "code", - "execution_count": 3, + "execution_count": null, "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "URL https://cds.climate.copernicus.eu/api\n", - "KEY db1f2085-6b8b-42e6-b832-625dfaf831a4\n" - ] - } - ], + "outputs": [], "source": [ "URL, KEY = cds_get_credentials()\n", "print(\"URL\", URL)\n", @@ -194,16 +159,7 @@ "cell_type": "code", "execution_count": null, "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "datadir: %s /Users/Alfredo/cds_dataDir/out_2024_11_08_16_32_34_projections-cordex/\n", - "/Users/Alfredo/cds_dataDir/out_2024_11_08_16_32_34_projections-cordex/\n" - ] - } - ], + "outputs": [], "source": [ "import os\n", "\n", @@ -220,7 +176,7 @@ }, { "cell_type": "code", - "execution_count": 6, + "execution_count": null, "metadata": {}, "outputs": [], "source": [ @@ -251,35 +207,11 @@ }, { "cell_type": "code", - "execution_count": 7, + "execution_count": null, "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Requirement already satisfied: cdsapi in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (0.7.4)\n", - "Requirement already satisfied: tqdm in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from cdsapi) (4.67.0)\n", - "Requirement already satisfied: cads-api-client>=1.4.7 in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from cdsapi) (1.5.2)\n", - "Requirement already satisfied: requests>=2.5.0 in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from cdsapi) (2.32.3)\n", - "Requirement already satisfied: typing-extensions in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from cads-api-client>=1.4.7->cdsapi) (4.12.2)\n", - "Requirement already satisfied: attrs in 
/Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from cads-api-client>=1.4.7->cdsapi) (24.2.0)\n", - "Requirement already satisfied: multiurl>=0.3.2 in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from cads-api-client>=1.4.7->cdsapi) (0.3.2)\n", - "Requirement already satisfied: charset-normalizer<4,>=2 in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from requests>=2.5.0->cdsapi) (3.4.0)\n", - "Requirement already satisfied: urllib3<3,>=1.21.1 in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from requests>=2.5.0->cdsapi) (2.2.3)\n", - "Requirement already satisfied: idna<4,>=2.5 in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from requests>=2.5.0->cdsapi) (3.10)\n", - "Requirement already satisfied: certifi>=2017.4.17 in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from requests>=2.5.0->cdsapi) (2024.8.30)\n", - "Requirement already satisfied: python-dateutil in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from multiurl>=0.3.2->cads-api-client>=1.4.7->cdsapi) (2.9.0.post0)\n", - "Requirement already satisfied: pytz in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from multiurl>=0.3.2->cads-api-client>=1.4.7->cdsapi) (2024.2)\n", - "Requirement already satisfied: six>=1.5 in /Users/Alfredo/.pyenv/versions/3.10.12/lib/python3.10/site-packages (from python-dateutil->multiurl>=0.3.2->cads-api-client>=1.4.7->cdsapi) (1.16.0)\n", - "\n", - "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.0.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.3.1\u001b[0m\n", - "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n" - ] - } - ], + "outputs": [], "source": [ - "!pip install cdsapi" + "# 
!pip install cdsapi" ] }, { @@ -291,7 +223,7 @@ }, { "cell_type": "code", - "execution_count": 8, + "execution_count": null, "metadata": {}, "outputs": [], "source": [ @@ -370,63 +302,18 @@ }, { "cell_type": "code", - "execution_count": 9, + "execution_count": null, "metadata": {}, - "outputs": [ - { - "name": "stderr", - "output_type": "stream", - "text": [ - "2024-11-08 16:32:35,536 INFO [2024-09-28T00:00:00] **Welcome to the New Climate Data Store (CDS)!** This new system is in its early days of full operations and still undergoing enhancements and fine tuning. Some disruptions are to be expected. Your \n", - "[feedback](https://jira.ecmwf.int/plugins/servlet/desk/portal/1/create/202) is key to improve the user experience on the new CDS for the benefit of everyone. Thank you.\n", - "2024-11-08 16:32:35,537 WARNING [2024-09-26T00:00:00] Should you have not yet migrated from the old CDS system to the new CDS, please check our [informative page](https://confluence.ecmwf.int/x/uINmFw) for guidance.\n", - "2024-11-08 16:32:35,537 INFO [2024-09-26T00:00:00] Watch our [Forum](https://forum.ecmwf.int/) for Announcements, news and other discussed topics.\n", - "2024-11-08 16:32:35,537 INFO [2024-09-16T00:00:00] Remember that you need to have an ECMWF account to use the new CDS. **Your old CDS credentials will not work in new CDS!**\n", - "2024-11-08 16:32:35,538 WARNING [2024-06-16T00:00:00] CDS API syntax is changed and some keys or parameter names may have also changed. 
To avoid requests failing, please use the \"Show API request code\" tool on the dataset Download Form to check you are using the correct syntax for your API request.\n" - ] - } - ], + "outputs": [], "source": [ "c = cdsapi.Client()" ] }, { "cell_type": "code", - "execution_count": 10, + "execution_count": null, "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Downloading ../data/projections-cordex/1971-2000_cordex_historical_africa.zip\n" - ] - }, - { - "name": "stderr", - "output_type": "stream", - "text": [ - "2024-11-08 16:32:40,312 INFO Request ID is 3636de2e-8ccb-4f8a-a09c-b331eb536911\n", - "2024-11-08 16:32:40,958 INFO status has been updated to accepted\n", - "2024-11-08 16:32:44,906 INFO status has been updated to running\n", - "2024-11-08 16:32:47,222 INFO status has been updated to successful\n" - ] - }, - { - "data": { - "application/vnd.jupyter.widget-view+json": { - "model_id": "09118f68c4864e688a354f1913744d26", - "version_major": 2, - "version_minor": 0 - }, - "text/plain": [ - "4370e553fb6d0700264d5dc2075c0fbd.zip: 0%| | 0.00/908M [00:00