Data augmentation

Functions to augment the user’s dataset with information from official sources.

gingado provides data augmentation functionalities that can help users augment their datasets with a time series dimension. This can be done either on a stand-alone basis, with the user incorporating new data on top of the original dataset, or as part of a scikit-learn Pipeline that also includes other steps such as data transformation and model estimation.

Data augmentation with SDMX

The Statistical Data and Metadata eXchange (SDMX) is an ISO standard comprising:

  • technical standards

  • statistical guidelines, including cross-domain concepts and codelists

  • an IT architecture and tools

SDMX is sponsored by the Bank for International Settlements, European Central Bank, Eurostat, International Monetary Fund, Organisation for Economic Co-operation and Development, United Nations, and World Bank Group.

More information about SDMX is available on its webpage.

gingado uses SDMX to augment user datasets through the transformer AugmentSDMX.

For example, the code below is a simple illustration of AugmentSDMX augmentation under two scenarios: without a variance threshold (ie, including all data regardless of whether they are constant) and with a relatively high variance threshold (such that no data is actually added).

In both cases, the object uses the default dataflow, which is the daily series of monetary policy rates set by central banks.

These AugmentSDMX objects are used to augment a data frame containing simulated data, for illustrative purposes. In real life, this would be the user's original data.

import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

periods = 15
idx = pd.date_range(freq='d', start='2020-01-01', periods=periods)
orig_data = pd.DataFrame({'orig_col': rng.standard_normal(periods)}, index=idx)
orig_data.head()
orig_col
2020-01-01 0.304717
2020-01-02 -1.039984
2020-01-03 0.750451
2020-01-04 0.940565
2020-01-05 -1.951035
from gingado.augmentation import AugmentSDMX
aug_NoVarThresh = AugmentSDMX(variance_threshold=None)
aug_data = aug_NoVarThresh.fit_transform(orig_data)
aug_data
Querying data from BIS's dataflow 'WS_CBPOL' - Policy rate...
orig_col BIS__WS_CBPOL_D__CZ BIS__WS_CBPOL_D__DK BIS__WS_CBPOL_D__GB BIS__WS_CBPOL_D__HK BIS__WS_CBPOL_D__HU BIS__WS_CBPOL_D__ID BIS__WS_CBPOL_D__IL BIS__WS_CBPOL_D__IN BIS__WS_CBPOL_D__IS ... BIS__WS_CBPOL_D__TR BIS__WS_CBPOL_D__US BIS__WS_CBPOL_D__XM BIS__WS_CBPOL_D__ZA BIS__WS_CBPOL_D__AU BIS__WS_CBPOL_D__AR BIS__WS_CBPOL_D__CH BIS__WS_CBPOL_D__CL BIS__WS_CBPOL_D__CN BIS__WS_CBPOL_D__CO
2020-01-01 0.304717 NaN NaN NaN NaN NaN NaN 0.25 5.15 3.0 ... NaN 1.625 0.0 NaN NaN 55.0 -0.75 NaN 4.15 4.25
2020-01-02 -1.039984 2.0 -0.75 0.75 2.73 0.9 5.0 0.25 5.15 3.0 ... 12.0 1.625 0.0 6.5 0.75 55.0 -0.75 1.75 4.15 4.25
2020-01-03 0.750451 2.0 -0.75 0.75 2.68 0.9 5.0 0.25 5.15 3.0 ... 12.0 1.625 0.0 6.5 0.75 55.0 -0.75 1.75 4.15 4.25
2020-01-04 0.940565 2.0 -0.75 0.75 2.68 0.9 5.0 0.25 5.15 3.0 ... 12.0 1.625 0.0 6.5 0.75 55.0 -0.75 1.75 4.15 4.25
2020-01-05 -1.951035 2.0 -0.75 0.75 2.68 0.9 5.0 0.25 5.15 3.0 ... 12.0 1.625 0.0 6.5 0.75 55.0 -0.75 1.75 4.15 4.25
2020-01-06 -1.302180 2.0 -0.75 0.75 2.55 0.9 5.0 0.25 5.15 3.0 ... 12.0 1.625 0.0 6.5 0.75 55.0 -0.75 1.75 4.15 4.25
2020-01-07 0.127840 2.0 -0.75 0.75 2.41 0.9 5.0 0.25 5.15 3.0 ... 12.0 1.625 0.0 6.5 0.75 55.0 -0.75 1.75 4.15 4.25
2020-01-08 -0.316243 2.0 -0.75 0.75 2.28 0.9 5.0 0.25 5.15 3.0 ... 12.0 1.625 0.0 6.5 0.75 55.0 -0.75 1.75 4.15 4.25
2020-01-09 -0.016801 2.0 -0.75 0.75 2.00 0.9 5.0 0.25 5.15 3.0 ... 12.0 1.625 0.0 6.5 0.75 55.0 -0.75 1.75 4.15 4.25
2020-01-10 -0.853044 2.0 -0.75 0.75 2.00 0.9 5.0 0.25 5.15 3.0 ... 12.0 1.625 0.0 6.5 0.75 52.0 -0.75 1.75 4.15 4.25
2020-01-11 0.879398 2.0 -0.75 0.75 2.00 0.9 5.0 0.25 5.15 3.0 ... 12.0 1.625 0.0 6.5 0.75 52.0 -0.75 1.75 4.15 4.25
2020-01-12 0.777792 2.0 -0.75 0.75 2.00 0.9 5.0 0.25 5.15 3.0 ... 12.0 1.625 0.0 6.5 0.75 52.0 -0.75 1.75 4.15 4.25
2020-01-13 0.066031 2.0 -0.75 0.75 2.00 0.9 5.0 0.25 5.15 3.0 ... 12.0 1.625 0.0 6.5 0.75 52.0 -0.75 1.75 4.15 4.25
2020-01-14 1.127241 2.0 -0.75 0.75 2.00 0.9 5.0 0.25 5.15 3.0 ... 12.0 1.625 0.0 6.5 0.75 52.0 -0.75 1.75 4.15 4.25
2020-01-15 0.467509 2.0 -0.75 0.75 2.00 0.9 5.0 0.25 5.15 3.0 ... 12.0 1.625 0.0 6.5 0.75 52.0 -0.75 1.75 4.15 4.25

15 rows × 39 columns

aug_StrictVarThresh = AugmentSDMX(variance_threshold=10)
aug_data = aug_StrictVarThresh.fit_transform(orig_data)
aug_data
Querying data from BIS's dataflow 'WS_CBPOL' - Policy rate...
No columns added to original data because no feature in x meets the variance threshold 10.00000
orig_col
2020-01-01 0.304717
2020-01-02 -1.039984
2020-01-03 0.750451
2020-01-04 0.940565
2020-01-05 -1.951035
2020-01-06 -1.302180
2020-01-07 0.127840
2020-01-08 -0.316243
2020-01-09 -0.016801
2020-01-10 -0.853044
2020-01-11 0.879398
2020-01-12 0.777792
2020-01-13 0.066031
2020-01-14 1.127241
2020-01-15 0.467509

AugmentSDMX

AugmentSDMX (sources: 'dict' = {'BIS': 'WS_CBPOL_D'}, variance_threshold: 'float | None' = None, propagate_last_known_value: 'bool' = True, fillna: 'float | int' = 0, verbose: 'bool' = True)

A transformer that augments a dataset using SDMX data.

Attributes:
    sources (dict): A dictionary with sources as keys and dataflows as values.
    variance_threshold (float | None): If set, variables whose variance over time falls below this threshold are removed; otherwise, all variables are kept.
    propagate_last_known_value (bool): Whether to propagate the last known non-NA value to following dates.
    fillna (float | int): Value to use to fill missing data.
    verbose (bool): Whether to inform the user about the process progress.
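
For instance, here is a minimal sketch of constructing the transformer with non-default arguments (the values below are illustrative only, not recommendations):

aug = AugmentSDMX(
    sources={'BIS': 'WS_CBPOL_D'},    # mapping of source -> dataflow(s)
    variance_threshold=0.1,           # drop series with variance below 0.1
    propagate_last_known_value=True,  # carry the last non-NA value forward
    fillna=0,                         # fill any remaining gaps with 0
    verbose=False                     # silence progress messages
)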

fit

fit (self, X: 'pd.Series | pd.DataFrame', y: 'None' = None)

Fits the instance of AugmentSDMX to `X`, learning its time series frequency.

Args:
    X (pd.Series | pd.DataFrame): Data having an index of `datetime` type.
    y (None): `y` is kept as an argument for API consistency only.

Returns:
    AugmentSDMX: A fitted version of the same AugmentSDMX instance.

transform

transform (self, X: 'pd.Series | pd.DataFrame', y: 'None' = None, training: 'bool' = False) -> 'np.ndarray'

Transforms input dataset `X` by adding the requested data using SDMX.

Args:
    X (pd.Series | pd.DataFrame): Data having an index of `datetime` type.
    y (None): `y` is kept as an argument for API consistency only.
    training (bool): `True` if `transform` is called during training, `False` (default) if called during testing.

Returns:
    np.ndarray: `X` augmented with data from SDMX with the same number of samples but more columns.
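
A minimal sketch of calling fit and transform separately, assuming train and test are DataFrames with a datetime index; note that transform defaults to training=False, so it should only be set to True on training data:

aug = AugmentSDMX()
aug.fit(train)                                   # learns the time series frequency
train_aug = aug.transform(train, training=True)  # transform during training
test_aug = aug.transform(test)                   # training=False by default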

fit_transform

fit_transform (self, X: 'pd.Series | pd.DataFrame', y: 'None' = None) -> 'np.ndarray'

Fit to data, then transform it.

Args:
    X (pd.Series | pd.DataFrame): Data having an index of `datetime` type.
    y (None): `y` is kept as an argument for API consistency only.

Returns:
    np.ndarray: `X` augmented with data from SDMX with the same number of samples but more columns.

Compatibility with scikit-learn

As mentioned above, gingado’s transformers are built to be compatible with scikit-learn. The code below demonstrates this compatibility.

First, we create the example dataset. In this case, it comprises the daily foreign exchange rates of selected currencies against the Euro. The Brazilian Real (BRL) is chosen as the dependent variable for this example.

from gingado.utils import load_SDMX_data, Lag
from sklearn.model_selection import TimeSeriesSplit
X = load_SDMX_data(
    sources={'ECB': 'EXR'}, 
    keys={'FREQ': 'D', 'CURRENCY': ['EUR', 'AUD', 'BRL', 'CAD', 'CHF', 'GBP', 'JPY', 'SGD', 'USD']},
    params={"startPeriod": 2003}
    )
# drop rows with empty values
X.dropna(inplace=True)
# adjust column names in this simple example for ease of understanding:
# remove parts related to source and dataflow names
X.columns = X.columns.str.replace("ECB__EXR_D__", "").str.replace("__EUR__SP00__A", "")
X = Lag(lags=1, jump=0, keep_contemporaneous_X=True).fit_transform(X)
y = X.pop('BRL')
# retain only the lagged variables in the X variable
X = X[X.columns[X.columns.str.contains('_lag_')]]
Querying data from ECB's dataflow 'EXR' - Exchange Rates...
X_train, X_test = X.iloc[:-1], X.tail(1)
y_train, y_test = y.iloc[:-1], y.tail(1)

X_train.shape, y_train.shape, X_test.shape, y_test.shape
((5583, 8), (5583,), (1, 8), (1,))
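
As an aside, the Lag transformer used above creates lagged copies of each column, named with a _lag_ suffix, and keep_contemporaneous_X=True also retains the original columns. A toy sketch (illustrative; the exact handling of the initial rows, which have no lagged values, depends on the implementation):

toy = pd.DataFrame(
    {'USD': [1.0, 1.1, 1.2]},
    index=pd.date_range(start='2020-01-01', periods=3, freq='d')
)
# expected to add a column 'USD_lag_1' holding the values shifted one period
Lag(lags=1, jump=0, keep_contemporaneous_X=True).fit_transform(toy)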

Next, the data augmentation object provided by gingado adds more data. In this case, for brevity, only one dataflow from each source is listed. If users want to add more SDMX sources, they can simply add more keys to the dictionary. And if users want data from all dataflows of a given source (provided the keys and parameters, such as frequency and dates, match), the value should be set to 'all', as in {'ECB': ['CISS'], 'BIS': 'all'}; see the sketch below.
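
For instance, a hedged sketch of requesting all matching dataflows from one source (the object is constructed here but not fitted):

# one named ECB dataflow, plus every BIS dataflow whose keys and parameters match
aug_all = AugmentSDMX(sources={'ECB': ['CISS'], 'BIS': 'all'})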

test_src = {'ECB': ['CISS'], 'BIS': ['WS_CBPOL_D']}

X_train__fit_transform = AugmentSDMX(sources=test_src).fit_transform(X=X_train)
X_train__fit_then_transform = AugmentSDMX(sources=test_src).fit(X=X_train).transform(X=X_train, training=True)

assert X_train__fit_transform.shape == X_train__fit_then_transform.shape
Querying data from ECB's dataflow 'CISS' - Composite Indicator of Systemic Stress...
Querying data from ECB's dataflow 'CISS' - Composite Indicator of Systemic Stress...

This is the dataset now after this particular augmentation:

print(f"No of columns: {len(X_train__fit_transform.columns)} {X_train__fit_transform.columns}")
X_train__fit_transform
No of columns: 30 Index(['AUD_lag_1', 'BRL_lag_1', 'CAD_lag_1', 'CHF_lag_1', 'GBP_lag_1',
       'JPY_lag_1', 'SGD_lag_1', 'USD_lag_1',
       'ECB__CISS_D__AT__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__BE__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__CN__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__DE__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__ES__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__FI__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__FR__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__GB__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__IE__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__IT__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__NL__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__PT__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__U2__Z0Z__4F__EC__SS_BM__CON',
       'ECB__CISS_D__U2__Z0Z__4F__EC__SS_CI__IDX',
       'ECB__CISS_D__U2__Z0Z__4F__EC__SS_CIN__IDX',
       'ECB__CISS_D__U2__Z0Z__4F__EC__SS_CO__CON',
       'ECB__CISS_D__U2__Z0Z__4F__EC__SS_EM__CON',
       'ECB__CISS_D__U2__Z0Z__4F__EC__SS_FI__CON',
       'ECB__CISS_D__U2__Z0Z__4F__EC__SS_FX__CON',
       'ECB__CISS_D__U2__Z0Z__4F__EC__SS_MM__CON',
       'ECB__CISS_D__US__Z0Z__4F__EC__SS_CI__IDX',
       'ECB__CISS_D__US__Z0Z__4F__EC__SS_CIN__IDX'],
      dtype='object')
AUD_lag_1 BRL_lag_1 CAD_lag_1 CHF_lag_1 GBP_lag_1 JPY_lag_1 SGD_lag_1 USD_lag_1 ECB__CISS_D__AT__Z0Z__4F__EC__SS_CIN__IDX ECB__CISS_D__BE__Z0Z__4F__EC__SS_CIN__IDX ... ECB__CISS_D__U2__Z0Z__4F__EC__SS_BM__CON ECB__CISS_D__U2__Z0Z__4F__EC__SS_CI__IDX ECB__CISS_D__U2__Z0Z__4F__EC__SS_CIN__IDX ECB__CISS_D__U2__Z0Z__4F__EC__SS_CO__CON ECB__CISS_D__U2__Z0Z__4F__EC__SS_EM__CON ECB__CISS_D__U2__Z0Z__4F__EC__SS_FI__CON ECB__CISS_D__U2__Z0Z__4F__EC__SS_FX__CON ECB__CISS_D__U2__Z0Z__4F__EC__SS_MM__CON ECB__CISS_D__US__Z0Z__4F__EC__SS_CI__IDX ECB__CISS_D__US__Z0Z__4F__EC__SS_CIN__IDX
TIME_PERIOD
2003-01-03 1.8554 3.6770 1.6422 1.4528 0.65200 124.40 1.8188 1.0446 0.021899 0.043292 ... 0.032967 0.191425 0.087478 -0.243312 0.150230 0.139279 0.036880 0.075381 0.245832 0.134591
2003-01-06 1.8440 3.6112 1.6264 1.4555 0.65000 124.56 1.8132 1.0392 0.020801 0.039924 ... 0.032967 0.191425 0.091020 -0.243312 0.150230 0.139279 0.036880 0.075381 0.245832 0.148526
2003-01-07 1.8281 3.5145 1.6383 1.4563 0.64950 124.40 1.8210 1.0488 0.019738 0.038084 ... 0.032967 0.191425 0.093478 -0.243312 0.150230 0.139279 0.036880 0.075381 0.245832 0.156745
2003-01-08 1.8160 3.5139 1.6257 1.4565 0.64960 124.82 1.8155 1.0425 0.019947 0.040338 ... 0.032967 0.191425 0.097876 -0.243312 0.150230 0.139279 0.036880 0.075381 0.245832 0.165487
2003-01-09 1.8132 3.4405 1.6231 1.4586 0.64950 124.90 1.8102 1.0377 0.017026 0.040535 ... 0.032967 0.191425 0.100672 -0.243312 0.150230 0.139279 0.036880 0.075381 0.245832 0.184370
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
2024-10-11 1.6276 6.1061 1.5031 0.9393 0.83686 162.85 1.4300 1.0932 0.016586 0.021274 ... 0.020541 0.041259 0.050987 -0.098359 0.031749 0.060016 0.006549 0.020761 0.050777 0.005564
2024-10-14 1.6233 6.0886 1.5063 0.9378 0.83705 162.94 1.4283 1.0938 0.015057 0.020007 ... 0.020541 0.041259 0.045562 -0.098359 0.031749 0.060016 0.006549 0.020761 0.050777 0.004102
2024-10-15 1.6248 6.1443 1.5047 0.9409 0.83665 163.39 1.4277 1.0915 0.014872 0.018210 ... 0.020541 0.041259 0.049828 -0.098359 0.031749 0.060016 0.006549 0.020761 0.050777 0.005212
2024-10-16 1.6236 6.0949 1.5063 0.9401 0.83355 162.85 1.4271 1.0903 0.014872 0.018210 ... 0.020541 0.041259 0.049828 -0.098359 0.031749 0.060016 0.006549 0.020761 0.050777 0.005212
2024-10-17 1.6291 6.1432 1.5013 0.9397 0.83605 162.57 1.4264 1.0897 0.014872 0.018210 ... 0.020541 0.041259 0.049828 -0.098359 0.031749 0.060016 0.006549 0.020761 0.050777 0.005212

5583 rows × 30 columns

Pipeline

AugmentSDMX can also be part of a Pipeline object, which minimises operational errors during modelling and avoids using testing data during training:

from sklearn.pipeline import Pipeline
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV
pipeline = Pipeline([
    ('augmentation', AugmentSDMX(sources={'BIS': 'WS_CBPOL_D'})),
    ('imp', IterativeImputer(max_iter=10)),
    ('forest', RandomForestRegressor())
], verbose=True)
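
Once defined, the pipeline can be used like any other scikit-learn estimator, as in this brief sketch (fitting it directly, outside the parameter search shown below):

pipeline.fit(X_train, y_train)     # augments, imputes, then fits the forest
y_pred = pipeline.predict(X_test)  # the same augmentation is applied to test data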

Tuning the data augmentation to enhance model performance

And since AugmentSDMX can be included in a Pipeline, it can also be fine-tuned by parameter search techniques (such as grid search), further helping users make the best use of the available data to enhance the performance of their models.

Tip

Users can cache the data augmentation step to avoid repeating potentially lengthy data downloads. See the memory argument in the sklearn.pipeline.Pipeline documentation.
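
For example, a sketch of such caching (the temporary directory is illustrative; any suitable path works):

from tempfile import mkdtemp
cached_pipeline = Pipeline([
    ('augmentation', AugmentSDMX(sources={'BIS': 'WS_CBPOL_D'})),
    ('imp', IterativeImputer(max_iter=10)),
    ('forest', RandomForestRegressor())
], memory=mkdtemp())  # fitted transformers are cached in this directory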

grid = GridSearchCV(
    estimator=pipeline,
    param_grid={
        'augmentation': ['passthrough', AugmentSDMX(sources={'ECB': 'CISS'})]
    },
    verbose=2,
    cv=TimeSeriesSplit(n_splits=2)
    )

y_pred_grid = grid.fit(X_train, y_train).predict(X_test)
Fitting 2 folds for each of 2 candidates, totalling 4 fits
[Pipeline] ...... (step 1 of 3) Processing augmentation, total=   0.0s
[Pipeline] ............... (step 2 of 3) Processing imp, total=   0.0s
[Pipeline] ............ (step 3 of 3) Processing forest, total=   1.6s
[CV] END ...........................augmentation=passthrough; total time=   1.7s
[Pipeline] ...... (step 1 of 3) Processing augmentation, total=   0.0s
[Pipeline] ............... (step 2 of 3) Processing imp, total=   0.0s
[Pipeline] ............ (step 3 of 3) Processing forest, total=   3.4s
[CV] END ...........................augmentation=passthrough; total time=   3.5s
Querying data from ECB's dataflow 'CISS' - Composite Indicator of Systemic Stress...
[Pipeline] ...... (step 1 of 3) Processing augmentation, total=   8.4s
[Pipeline] ............... (step 2 of 3) Processing imp, total=   0.4s
[Pipeline] ............ (step 3 of 3) Processing forest, total=   5.1s
Querying data from ECB's dataflow 'CISS' - Composite Indicator of Systemic Stress...
[CV] END ..augmentation=AugmentSDMX(sources={'ECB': 'CISS'}); total time=  21.3s
Querying data from ECB's dataflow 'CISS' - Composite Indicator of Systemic Stress...
[Pipeline] ...... (step 1 of 3) Processing augmentation, total=  15.0s
[Pipeline] ............... (step 2 of 3) Processing imp, total=   0.6s
[Pipeline] ............ (step 3 of 3) Processing forest, total=  11.1s
Querying data from ECB's dataflow 'CISS' - Composite Indicator of Systemic Stress...
[CV] END ..augmentation=AugmentSDMX(sources={'ECB': 'CISS'}); total time=  40.7s
[Pipeline] ...... (step 1 of 3) Processing augmentation, total=   0.0s
[Pipeline] ............... (step 2 of 3) Processing imp, total=   0.0s
[Pipeline] ............ (step 3 of 3) Processing forest, total=   5.3s
grid.best_params_
{'augmentation': 'passthrough'}
print(f"In this particular case, the best model was achieved by {'not ' if grid.best_params_['augmentation'] == 'passthrough' else ''}using the data augmentation.")
In this particular case, the best model was achieved by not using the data augmentation.
print(f"The last value in the training dataset was {y_train.tail(1).to_numpy()}. The predicted value was {y_pred_grid}, and the actual value was {y_test.to_numpy()}.")
The last value in the training dataset was [6.1749]. The predicted value was [6.178231], and the actual value was [6.1328].

Sources of data

gingado deliberately seeks to list only reliable data sources, with a focus on official ones. This is meant to give users confidence that their datasets will be complemented by trustworthy data. Unfortunately, it is not possible at this stage to include all official sources, given the substantial manual and maintenance work involved. gingado leverages the Statistical Data and Metadata eXchange (SDMX), an initiative of official data sources that establishes common data and metadata formats, to download data that is relevant (and hopefully also useful) to users.

The function list_SDMX_sources returns a list of codes corresponding to the data sources that can provide gingado users with data through SDMX.

from gingado.utils import list_SDMX_sources
list_SDMX_sources()
['ABS',
 'ABS_JSON',
 'BBK',
 'BIS',
 'COMP',
 'ECB',
 'EMPL',
 'ESTAT',
 'ESTAT3',
 'ESTAT_COMEXT',
 'GROW',
 'ILO',
 'IMF',
 'INEGI',
 'INSEE',
 'ISTAT',
 'LSD',
 'NB',
 'NBB',
 'OECD',
 'OECD_JSON',
 'SGR',
 'SPC',
 'STAT_EE',
 'UNESCO',
 'UNICEF',
 'UNSD',
 'WB',
 'WB_WDI']

You can also see which dataflows are available. The code below returns a pandas Series whose index pairs each SDMX source code with a dataflow code, and whose values are the corresponding dataflow names.

from gingado.utils import list_all_dataflows
dflows = list_all_dataflows()
dflows
ABS   ABORIGINAL_POP_PROJ                 Projected population, Aboriginal and Torres St...
      ABORIGINAL_POP_PROJ_REMOTE          Projected population, Aboriginal and Torres St...
      ABS_ABORIGINAL_POPPROJ_INDREGION    Projected population, Aboriginal and Torres St...
      ABS_ACLD_LFSTATUS                   Australian Census Longitudinal Dataset (ACLD):...
      ABS_ACLD_TENURE                     Australian Census Longitudinal Dataset (ACLD):...
                                                                ...                        
UNSD  DF_UNData_UNFCC                                                       SDMX_GHG_UNDATA
WB    DF_WITS_Tariff_TRAINS                                WITS - UNCTAD TRAINS Tariff Data
      DF_WITS_TradeStats_Development                             WITS TradeStats Devlopment
      DF_WITS_TradeStats_Tariff                                      WITS TradeStats Tariff
      DF_WITS_TradeStats_Trade                                        WITS TradeStats Trade
Name: dataflow, Length: 24650, dtype: object

For example, the dataflows from the World Bank are:

dflows['WB']
DF_WITS_Tariff_TRAINS             WITS - UNCTAD TRAINS Tariff Data
DF_WITS_TradeStats_Development          WITS TradeStats Devlopment
DF_WITS_TradeStats_Tariff                   WITS TradeStats Tariff
DF_WITS_TradeStats_Trade                     WITS TradeStats Trade
Name: dataflow, dtype: object
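
Because list_all_dataflows returns a pandas Series, the usual pandas tools apply. For instance, a sketch of a keyword search across dataflow names (the search term is illustrative):

# keep only dataflows whose name mentions exchange rates
dflows[dflows.str.contains('exchange rate', case=False, na=False)]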