Data augmentation

Functions to augment the user’s dataset with information from official sources.

gingado provides data augmentation functionalities that help users enrich datasets that have a time series dimension with additional information from official sources. This can be done either on a stand-alone basis, with the user incorporating the new data on top of the original dataset, or as part of a scikit-learn Pipeline that also includes other steps, such as data transformation and model estimation.

Data augmentation with SDMX

The Statistical Data and Metadata eXchange (SDMX) is an ISO standard comprising:

  • technical standards

  • statistical guidelines, including cross-domain concepts and codelists

  • an IT architecture and tools

SDMX is sponsored by the Bank for International Settlements, European Central Bank, Eurostat, International Monetary Fund, Organisation for Economic Co-operation and Development, United Nations, and World Bank Group.

More information about SDMX is available on its webpage.

gingado uses SDMX to augment user datasets through the transformer AugmentSDMX.

For example, the code below is a simple illustration of AugmentSDMX augmentation under two scenarios: without a variance threshold (ie, including all data, even series that are constant over time) and with a relatively high variance threshold (such that no data is actually added).

In both cases, the objects use the default dataflow: the daily series of monetary policy rates set by central banks.

These AugmentSDMX objects are used to augment a data frame with simulated data for illustrative purposes. In real life, this data would be the user’s original data.

import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

periods = 15
idx = pd.date_range(freq='d', start='2020-01-01', periods=periods)
orig_data = pd.DataFrame({'orig_col': rng.standard_normal(periods)}, index=idx)
orig_data.head()
orig_col
2020-01-01 0.304717
2020-01-02 -1.039984
2020-01-03 0.750451
2020-01-04 0.940565
2020-01-05 -1.951035
from gingado.augmentation import AugmentSDMX
aug_NoVarThresh = AugmentSDMX(variance_threshold=None)
aug_data = aug_NoVarThresh.fit_transform(orig_data)
aug_data
Querying data from BIS's dataflow 'WS_CBPOL_D' - Policy rates daily...
orig_col BIS__WS_CBPOL_D_D__CH BIS__WS_CBPOL_D_D__CL BIS__WS_CBPOL_D_D__CN BIS__WS_CBPOL_D_D__CO BIS__WS_CBPOL_D_D__CZ BIS__WS_CBPOL_D_D__DK BIS__WS_CBPOL_D_D__GB BIS__WS_CBPOL_D_D__HK BIS__WS_CBPOL_D_D__HU ... BIS__WS_CBPOL_D_D__RO BIS__WS_CBPOL_D_D__RS BIS__WS_CBPOL_D_D__RU BIS__WS_CBPOL_D_D__SA BIS__WS_CBPOL_D_D__SE BIS__WS_CBPOL_D_D__TH BIS__WS_CBPOL_D_D__TR BIS__WS_CBPOL_D_D__US BIS__WS_CBPOL_D_D__XM BIS__WS_CBPOL_D_D__ZA
2020-01-01 0.304717 -0.75 NaN 4.15 4.25 NaN NaN NaN NaN NaN ... NaN 2.25 NaN 2.25 NaN NaN NaN 1.625 0.0 NaN
2020-01-02 -1.039984 -0.75 1.75 4.15 4.25 2.0 -0.75 0.75 2.73 0.9 ... NaN 2.25 NaN 2.25 -0.25 1.25 12.0 1.625 0.0 6.5
2020-01-03 0.750451 -0.75 1.75 4.15 4.25 2.0 -0.75 0.75 2.68 0.9 ... 2.5 2.25 NaN 2.25 -0.25 1.25 12.0 1.625 0.0 6.5
2020-01-04 0.940565 -0.75 1.75 4.15 4.25 2.0 -0.75 0.75 2.68 0.9 ... 2.5 2.25 NaN 2.25 -0.25 1.25 12.0 1.625 0.0 6.5
2020-01-05 -1.951035 -0.75 1.75 4.15 4.25 2.0 -0.75 0.75 2.68 0.9 ... 2.5 2.25 NaN 2.25 -0.25 1.25 12.0 1.625 0.0 6.5
2020-01-06 -1.302180 -0.75 1.75 4.15 4.25 2.0 -0.75 0.75 2.55 0.9 ... 2.5 2.25 6.25 2.25 -0.25 1.25 12.0 1.625 0.0 6.5
2020-01-07 0.127840 -0.75 1.75 4.15 4.25 2.0 -0.75 0.75 2.41 0.9 ... 2.5 2.25 6.25 2.25 -0.25 1.25 12.0 1.625 0.0 6.5
2020-01-08 -0.316243 -0.75 1.75 4.15 4.25 2.0 -0.75 0.75 2.28 0.9 ... 2.5 2.25 6.25 2.25 0.00 1.25 12.0 1.625 0.0 6.5
2020-01-09 -0.016801 -0.75 1.75 4.15 4.25 2.0 -0.75 0.75 2.00 0.9 ... 2.5 2.25 6.25 2.25 0.00 1.25 12.0 1.625 0.0 6.5
2020-01-10 -0.853044 -0.75 1.75 4.15 4.25 2.0 -0.75 0.75 2.00 0.9 ... 2.5 2.25 6.25 2.25 0.00 1.25 12.0 1.625 0.0 6.5
2020-01-11 0.879398 -0.75 1.75 4.15 4.25 2.0 -0.75 0.75 2.00 0.9 ... 2.5 2.25 6.25 2.25 0.00 1.25 12.0 1.625 0.0 6.5
2020-01-12 0.777792 -0.75 1.75 4.15 4.25 2.0 -0.75 0.75 2.00 0.9 ... 2.5 2.25 6.25 2.25 0.00 1.25 12.0 1.625 0.0 6.5
2020-01-13 0.066031 -0.75 1.75 4.15 4.25 2.0 -0.75 0.75 2.00 0.9 ... 2.5 2.25 6.25 2.25 0.00 1.25 12.0 1.625 0.0 6.5
2020-01-14 1.127241 -0.75 1.75 4.15 4.25 2.0 -0.75 0.75 2.00 0.9 ... 2.5 2.25 6.25 2.25 0.00 1.25 12.0 1.625 0.0 6.5
2020-01-15 0.467509 -0.75 1.75 4.15 4.25 2.0 -0.75 0.75 2.00 0.9 ... 2.5 2.25 6.25 2.25 0.00 1.25 12.0 1.625 0.0 6.5

15 rows × 39 columns

aug_StrictVarThresh = AugmentSDMX(variance_threshold=10)
aug_data = aug_StrictVarThresh.fit_transform(orig_data)
aug_data
Querying data from BIS's dataflow 'WS_CBPOL_D' - Policy rates daily...
No columns added to original data because no feature in x meets the variance threshold 10.00000
orig_col
2020-01-01 0.304717
2020-01-02 -1.039984
2020-01-03 0.750451
2020-01-04 0.940565
2020-01-05 -1.951035
2020-01-06 -1.302180
2020-01-07 0.127840
2020-01-08 -0.316243
2020-01-09 -0.016801
2020-01-10 -0.853044
2020-01-11 0.879398
2020-01-12 0.777792
2020-01-13 0.066031
2020-01-14 1.127241
2020-01-15 0.467509

AugmentSDMX

AugmentSDMX (sources: 'dict' = {'BIS': 'WS_CBPOL_D'}, variance_threshold: 'float | None' = None, propagate_last_known_value: 'bool' = True, fillna: 'float | int' = 0, verbose: 'bool' = True)

A transformer that augments a dataset using SDMX data.

Attributes:
    sources (dict): A dictionary with sources as keys and dataflows as values.
    variance_threshold (float | None): If set, variables whose variance over time falls below this threshold are removed. Otherwise, all variables are kept.
    propagate_last_known_value (bool): Whether to propagate the last known non-NA value to following dates.
    fillna (float | int): Value to use to fill missing data.
    verbose (bool): Whether to inform the user about the process progress.
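
For reference, a minimal sketch of how these parameters can be combined. The values below are only illustrative; the source and dataflow are the defaults from the signature above.

from gingado.augmentation import AugmentSDMX

aug = AugmentSDMX(
    sources={'BIS': 'WS_CBPOL_D'},    # source(s) and dataflow(s) to query
    variance_threshold=0.1,           # drop series whose variance over time is below 0.1
    propagate_last_known_value=True,  # carry the last known non-NA value forward
    fillna=0,                         # fill any remaining missing values with 0
    verbose=True,                     # report progress while querying
)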

fit

fit (self, X: 'pd.Series | pd.DataFrame', y: 'None' = None)

Fits the instance of AugmentSDMX to `X`, learning its time series frequency.

Args:
    X (pd.Series | pd.DataFrame): Data having an index of `datetime` type.
    y (None): `y` is kept as an argument for API consistency only.

Returns:
    AugmentSDMX: A fitted version of the same AugmentSDMX instance.

transform

transform (self, X: 'pd.Series | pd.DataFrame', y: 'None' = None, training: 'bool' = False) -> 'np.ndarray'

Transforms input dataset `X` by adding the requested data using SDMX.

Args:
    X (pd.Series | pd.DataFrame): Data having an index of `datetime` type.
    y (None): `y` is kept as an argument for API consistency only.
    training (bool): `True` if `transform` is called during training, `False` (default) if called during testing.

Returns:
    np.ndarray: `X` augmented with data from SDMX with the same number of samples but more columns.
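
The training flag distinguishes calls made while a model is being trained from calls made at prediction time. A minimal sketch of the two call patterns, assuming X_train and X_test are datetime-indexed data frames:

aug = AugmentSDMX().fit(X_train)
# while training, flag the call accordingly
augmented_train = aug.transform(X_train, training=True)
# at prediction time, the default training=False applies
augmented_test = aug.transform(X_test)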

fit_transform

fit_transform (self, X: 'pd.Series | pd.DataFrame', y: 'None' = None) -> 'np.ndarray'

Fit to data, then transform it.

Args:
    X (pd.Series | pd.DataFrame): Data having an index of `datetime` type.
    y (None): `y` is kept as an argument for API consistency only.

Returns:
    np.ndarray: `X` augmented with data from SDMX with the same number of samples but more columns.

Compatibility with scikit-learn

As mentioned above, gingado’s transformers are built to be compatible with scikit-learn. The code below demonstrates this compatibility.

First, we create the example dataset. In this case, it comprises the daily foreign exchange rates of selected currencies against the Euro. The Brazilian Real (BRL) is chosen as the dependent variable for this example.

from gingado.utils import load_SDMX_data, Lag
from sklearn.model_selection import TimeSeriesSplit
X = load_SDMX_data(
    sources={'ECB': 'EXR'}, 
    keys={'FREQ': 'D', 'CURRENCY': ['EUR', 'AUD', 'BRL', 'CAD', 'CHF', 'GBP', 'JPY', 'SGD', 'USD']},
    params={"startPeriod": 2003}
    )
# drop rows with empty values
X.dropna(inplace=True)
# adjust column names in this simple example for ease of understanding:
# remove parts related to source and dataflow names
X.columns = X.columns.str.replace("ECB__EXR_D__", "").str.replace("__EUR__SP00__A", "")
# add one-day lags of every column, keeping the contemporaneous values for now
X = Lag(lags=1, jump=0, keep_contemporaneous_X=True).fit_transform(X)
# the contemporaneous BRL series is the dependent variable
y = X.pop('BRL')
# retain only the lagged variables in the X variable
X = X[X.columns[X.columns.str.contains('_lag_')]]
Querying data from ECB's dataflow 'EXR' - Exchange Rates...
X_train, X_test = X.iloc[:-1], X.tail(1)
y_train, y_test = y.iloc[:-1], y.tail(1)

X_train.shape, y_train.shape, X_test.shape, y_test.shape
((5417, 8), (5417,), (1, 8), (1,))

Next, the data augmentation object provided by gingado adds more data. In this case, for brevity, only one dataflow from each source is listed. To add more SDMX sources, simply include more keys in the dictionary. And to request data from all dataflows of a given source (provided the keys and parameters, such as frequency and dates, match), set the corresponding value to 'all', as in {'ECB': ['CISS'], 'BIS': 'all'}.

test_src = {'ECB': ['CISS'], 'BIS': ['WS_CBPOL_D']}

X_train__fit_transform = AugmentSDMX(sources=test_src).fit_transform(X=X_train)
X_train__fit_then_transform = AugmentSDMX(sources=test_src).fit(X=X_train).transform(X=X_train, training=True)

assert X_train__fit_transform.shape == X_train__fit_then_transform.shape
Querying data from ECB's dataflow 'CISS' - Composite Indicator of Systemic Stress...
Querying data from BIS's dataflow 'WS_CBPOL_D' - Policy rates daily...
Querying data from ECB's dataflow 'CISS' - Composite Indicator of Systemic Stress...
Querying data from BIS's dataflow 'WS_CBPOL_D' - Policy rates daily...

This is the dataset now after this particular augmentation:

print(f"No of columns: {len(X_train__fit_transform.columns)} {X_train__fit_transform.columns}")
X_train__fit_transform
No of columns: 69 Index(['AUD_lag_1', 'BRL_lag_1', 'CAD_lag_1', 'CHF_lag_1', 'GBP_lag_1',
       'JPY_lag_1', 'SGD_lag_1', 'USD_lag_1',
       'ECB__CISS_D__AT__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__BE__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__CN__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__DE__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__ES__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__FI__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__FR__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__GB__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__IE__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__IT__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__NL__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__PT__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__U2__Z0Z__4F__EC__SS_BM__CON',
       'ECB__CISS_D__U2__Z0Z__4F__EC__SS_CI__IDX',
       'ECB__CISS_D__U2__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__U2__Z0Z__4F__EC__SS_CO__CON',
       'ECB__CISS_D__U2__Z0Z__4F__EC__SS_EM__CON',
       'ECB__CISS_D__U2__Z0Z__4F__EC__SS_FI__CON',
       'ECB__CISS_D__U2__Z0Z__4F__EC__SS_FX__CON',
       'ECB__CISS_D__U2__Z0Z__4F__EC__SS_MM__CON',
       'ECB__CISS_D__US__Z0Z__4F__EC__SS_CI__IDX',
       'ECB__CISS_D__US__Z0Z__4F__EC__SS_CIN__IDX', 'BIS__WS_CBPOL_D_D__CH',
       'BIS__WS_CBPOL_D_D__CL', 'BIS__WS_CBPOL_D_D__CN',
       'BIS__WS_CBPOL_D_D__CO', 'BIS__WS_CBPOL_D_D__CZ',
       'BIS__WS_CBPOL_D_D__DK', 'BIS__WS_CBPOL_D_D__GB',
       'BIS__WS_CBPOL_D_D__HK', 'BIS__WS_CBPOL_D_D__HR',
       'BIS__WS_CBPOL_D_D__HU', 'BIS__WS_CBPOL_D_D__ID',
       'BIS__WS_CBPOL_D_D__IL', 'BIS__WS_CBPOL_D_D__IN',
       'BIS__WS_CBPOL_D_D__IS', 'BIS__WS_CBPOL_D_D__JP',
       'BIS__WS_CBPOL_D_D__AR', 'BIS__WS_CBPOL_D_D__KR',
       'BIS__WS_CBPOL_D_D__MA', 'BIS__WS_CBPOL_D_D__MK',
       'BIS__WS_CBPOL_D_D__MX', 'BIS__WS_CBPOL_D_D__BR',
       'BIS__WS_CBPOL_D_D__MY', 'BIS__WS_CBPOL_D_D__NO',
       'BIS__WS_CBPOL_D_D__NZ', 'BIS__WS_CBPOL_D_D__PE',
       'BIS__WS_CBPOL_D_D__PH', 'BIS__WS_CBPOL_D_D__CA',
       'BIS__WS_CBPOL_D_D__PL', 'BIS__WS_CBPOL_D_D__AU',
       'BIS__WS_CBPOL_D_D__RO', 'BIS__WS_CBPOL_D_D__RS',
       'BIS__WS_CBPOL_D_D__RU', 'BIS__WS_CBPOL_D_D__SA',
       'BIS__WS_CBPOL_D_D__SE', 'BIS__WS_CBPOL_D_D__TH',
       'BIS__WS_CBPOL_D_D__TR', 'BIS__WS_CBPOL_D_D__US',
       'BIS__WS_CBPOL_D_D__XM', 'BIS__WS_CBPOL_D_D__ZA'],
      dtype='object')
AUD_lag_1 BRL_lag_1 CAD_lag_1 CHF_lag_1 GBP_lag_1 JPY_lag_1 SGD_lag_1 USD_lag_1 ECB__CISS_D__AT__Z0Z__4F__EC__SS_CIN__IDX ECB__CISS_D__BE__Z0Z__4F__EC__SS_CIN__IDX ... BIS__WS_CBPOL_D_D__RO BIS__WS_CBPOL_D_D__RS BIS__WS_CBPOL_D_D__RU BIS__WS_CBPOL_D_D__SA BIS__WS_CBPOL_D_D__SE BIS__WS_CBPOL_D_D__TH BIS__WS_CBPOL_D_D__TR BIS__WS_CBPOL_D_D__US BIS__WS_CBPOL_D_D__XM BIS__WS_CBPOL_D_D__ZA
TIME_PERIOD
2003-01-03 1.8554 3.6770 1.6422 1.4528 0.65200 124.40 1.8188 1.0446 0.021899 0.043292 ... NaN 9.5 NaN NaN 3.75 1.75 44.0 1.250 2.75 13.50
2003-01-06 1.8440 3.6112 1.6264 1.4555 0.65000 124.56 1.8132 1.0392 0.020801 0.039924 ... 19.75 9.5 NaN 2.0 3.75 1.75 44.0 1.250 2.75 13.50
2003-01-07 1.8281 3.5145 1.6383 1.4563 0.64950 124.40 1.8210 1.0488 0.019738 0.038084 ... 19.75 9.5 NaN 2.0 3.75 1.75 44.0 1.250 2.75 13.50
2003-01-08 1.8160 3.5139 1.6257 1.4565 0.64960 124.82 1.8155 1.0425 0.019947 0.040338 ... 19.75 9.5 21.0 2.0 3.75 1.75 44.0 1.250 2.75 13.50
2003-01-09 1.8132 3.4405 1.6231 1.4586 0.64950 124.90 1.8102 1.0377 0.017026 0.040535 ... 19.75 9.5 21.0 2.0 3.75 1.75 44.0 1.250 2.75 13.50
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
2024-02-19 1.6517 5.3551 1.4518 0.9491 0.85605 161.88 1.4500 1.0768 0.101870 0.089907 ... 7.00 6.5 16.0 6.0 4.00 2.50 45.0 5.375 4.50 8.25
2024-02-20 1.6479 5.3433 1.4522 0.9492 0.85448 161.59 1.4503 1.0776 0.102998 0.087479 ... 7.00 6.5 16.0 6.0 4.00 2.50 45.0 5.375 4.50 8.25
2024-02-21 1.6457 5.3521 1.4562 0.9526 0.85660 162.18 1.4525 1.0802 0.107851 0.088532 ... 7.00 6.5 16.0 6.0 4.00 2.50 45.0 5.375 4.50 8.25
2024-02-22 1.6486 5.3253 1.4618 0.9510 0.85619 162.12 1.4524 1.0809 0.103062 0.086758 ... 7.00 6.5 16.0 6.0 4.00 2.50 45.0 5.375 4.50 8.25
2024-02-23 1.6515 5.3499 1.4618 0.9535 0.85625 163.12 1.4552 1.0844 0.097974 0.089931 ... 7.00 6.5 16.0 6.0 4.00 2.50 45.0 5.375 4.50 8.25

5417 rows × 69 columns

Pipeline

AugmentSDMX can also be part of a Pipeline object, which minimises operational errors during modelling and avoids using testing data during training:

from sklearn.pipeline import Pipeline
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV
pipeline = Pipeline([
    ('augmentation', AugmentSDMX(sources={'BIS': 'WS_CBPOL_D'})),
    ('imp', IterativeImputer(max_iter=10)),
    ('forest', RandomForestRegressor())
], verbose=True)
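
Before turning to parameter tuning, a minimal sketch of using this pipeline on its own, with the X_train, y_train and X_test objects created above:

# fit the full pipeline (augmentation, imputation, model) on the training data
pipeline.fit(X_train, y_train)
# predict the held-out observation with the fitted pipeline
y_pred = pipeline.predict(X_test)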

Tuning the data augmentation to enhance model performance

And because AugmentSDMX can be included in a Pipeline, it can also be fine-tuned with parameter search techniques (such as grid search), further helping users make the most of the available data to enhance their models' performance.

Tip

Users can cache the data augmentation step to avoid repeating potentially lengthy data downloads. See the memory argument in the sklearn.pipeline.Pipeline documentation.
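
For example, a minimal sketch of a cached version of the pipeline above; the cache directory name './sdmx_cache' is only illustrative:

cached_pipeline = Pipeline([
    ('augmentation', AugmentSDMX(sources={'BIS': 'WS_CBPOL_D'})),
    ('imp', IterativeImputer(max_iter=10)),
    ('forest', RandomForestRegressor())
], memory='./sdmx_cache', verbose=True)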

grid = GridSearchCV(
    estimator=pipeline,
    param_grid={
        'augmentation': ['passthrough', AugmentSDMX(sources={'ECB': 'CISS'})]
    },
    verbose=2,
    cv=TimeSeriesSplit(n_splits=2)
    )

y_pred_grid = grid.fit(X_train, y_train).predict(X_test)
Fitting 2 folds for each of 2 candidates, totalling 4 fits
[Pipeline] ...... (step 1 of 3) Processing augmentation, total=   0.0s
[Pipeline] ............... (step 2 of 3) Processing imp, total=   0.1s
[Pipeline] ............ (step 3 of 3) Processing forest, total=   1.8s
[CV] END ...........................augmentation=passthrough; total time=   1.8s
[Pipeline] ...... (step 1 of 3) Processing augmentation, total=   0.0s
[Pipeline] ............... (step 2 of 3) Processing imp, total=   0.0s
[Pipeline] ............ (step 3 of 3) Processing forest, total=   3.7s
[CV] END ...........................augmentation=passthrough; total time=   3.7s
Querying data from ECB's dataflow 'CISS' - Composite Indicator of Systemic Stress...
[Pipeline] ...... (step 1 of 3) Processing augmentation, total=  27.9s
[Pipeline] ............... (step 2 of 3) Processing imp, total=   0.5s
[Pipeline] ............ (step 3 of 3) Processing forest, total=   4.7s
Querying data from ECB's dataflow 'CISS' - Composite Indicator of Systemic Stress...
[CV] END ..augmentation=AugmentSDMX(sources={'ECB': 'CISS'}); total time=  48.3s
Querying data from ECB's dataflow 'CISS' - Composite Indicator of Systemic Stress...
[Pipeline] ...... (step 1 of 3) Processing augmentation, total=  32.1s
[Pipeline] ............... (step 2 of 3) Processing imp, total=   0.6s
[Pipeline] ............ (step 3 of 3) Processing forest, total=  10.2s
Querying data from ECB's dataflow 'CISS' - Composite Indicator of Systemic Stress...
[CV] END ..augmentation=AugmentSDMX(sources={'ECB': 'CISS'}); total time= 1.2min
[Pipeline] ...... (step 1 of 3) Processing augmentation, total=   0.0s
[Pipeline] ............... (step 2 of 3) Processing imp, total=   0.0s
[Pipeline] ............ (step 3 of 3) Processing forest, total=   5.0s
grid.best_params_
{'augmentation': 'passthrough'}
print(f"In this particular case, the best model was achieved by {'not ' if grid.best_params_['augmentation'] == 'passthrough' else ''}using the data augmentation.")
In this particular case, the best model was achieved by not using the data augmentation.
print(f"The last value in the training dataset was {y_train.tail(1).to_numpy()}. The predicted value was {y_pred_grid}, and the actual value was {y_test.to_numpy()}.")
The last value in the training dataset was [5.3831]. The predicted value was [5.36024], and the actual value was [5.4111].
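
To compare the two candidates in more detail, the cross-validation results can also be inspected directly. A minimal sketch using pandas:

# mean out-of-sample score of each augmentation candidate
pd.DataFrame(grid.cv_results_)[['param_augmentation', 'mean_test_score']]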

Sources of data

gingado deliberately lists only reliable data sources, with a focus on official providers. This is meant to give users confidence that their dataset will be complemented with data from trustworthy sources. Unfortunately, it is not possible at this stage to include all official sources, given the substantial manual and maintenance work involved. gingado leverages the Statistical Data and Metadata eXchange (SDMX), an initiative of official data sources that establishes common data and metadata formats, to download data that is relevant (and hopefully also useful) to users.

The function list_SDMX_sources returns a list of codes corresponding to the data sources available to provide gingado users with data through SDMX.

from gingado.utils import list_SDMX_sources
list_SDMX_sources()
['ABS',
 'ABS_XML',
 'BBK',
 'BIS',
 'CD2030',
 'ECB',
 'EC_COMP',
 'EC_EMPL',
 'EC_GROW',
 'ESTAT',
 'ILO',
 'IMF',
 'INEGI',
 'INSEE',
 'ISTAT',
 'LSD',
 'NB',
 'NBB',
 'OECD',
 'SGR',
 'SPC',
 'STAT_EE',
 'UNICEF',
 'UNSD',
 'WB',
 'WB_WDI']

You can also see which dataflows are available. The code below returns a pandas Series whose index contains the codes of each SDMX source and dataflow, and whose values are the names of the respective dataflows.

from gingado.utils import list_all_dataflows
dflows = list_all_dataflows()
dflows
ABS_XML  ABORIGINAL_POP_PROJ                 Projected population, Aboriginal and Torres St...
         ABORIGINAL_POP_PROJ_REMOTE          Projected population, Aboriginal and Torres St...
         ABS_ABORIGINAL_POPPROJ_INDREGION    Projected population, Aboriginal and Torres St...
         ABS_ACLD_LFSTATUS                   Australian Census Longitudinal Dataset (ACLD):...
         ABS_ACLD_TENURE                     Australian Census Longitudinal Dataset (ACLD):...
                                                                   ...                        
UNSD     DF_UNData_UNFCC                                                       SDMX_GHG_UNDATA
WB       DF_WITS_Tariff_TRAINS                                WITS - UNCTAD TRAINS Tariff Data
         DF_WITS_TradeStats_Development                             WITS TradeStats Devlopment
         DF_WITS_TradeStats_Tariff                                      WITS TradeStats Tariff
         DF_WITS_TradeStats_Trade                                        WITS TradeStats Trade
Name: dataflow, Length: 11082, dtype: object

For example, the dataflows from the World Bank are:

dflows['WB']
DF_WITS_Tariff_TRAINS             WITS - UNCTAD TRAINS Tariff Data
DF_WITS_TradeStats_Development          WITS TradeStats Devlopment
DF_WITS_TradeStats_Tariff                   WITS TradeStats Tariff
DF_WITS_TradeStats_Trade                     WITS TradeStats Trade
Name: dataflow, dtype: object
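
Because list_all_dataflows returns a pandas Series indexed by source and dataflow codes, standard pandas operations can be used to search it. A minimal sketch, looking for dataflows whose name mentions exchange rates:

# case-insensitive keyword search over the dataflow names
dflows[dflows.str.contains('exchange rate', case=False)]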