The Temporal Causal Discovery Framework (TCDF) is a deep learning framework, implemented in PyTorch, that discovers causal relationships in observational time series data and learns the corresponding causal graph structure. TCDF uses attention-based convolutional neural networks combined with a causal validation step. By interpreting the internal parameters of the convolutional networks, TCDF can also discover the time delay between a cause and the occurrence of its effect. Our framework learns temporal causal graphs, which can include confounders and instantaneous effects. This broadly applicable framework can be used to gain novel insights into the causal dependencies of a complex system, which is important for reliable prediction, knowledge discovery, and data-driven decision making.
Corresponding Paper: "Causal Discovery with Attention-Based Convolutional Neural Networks". Please cite this paper when using TCDF.
- Predicts one time series based on other time series and its own historical values using CNNs
- Discovers causal relationships between time series
- Discovers time delay between cause and effect
- Plots temporal causal graph
- Plots predicted time series
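To give a flavour of how causes and delays can be read off a trained network, here is a hedged, purely illustrative sketch (this is not TCDF's actual code; the function names, the softmax-based thresholding rule, and the kernel interpretation are simplified assumptions): an input series with a high attention score is treated as a potential cause, and the position of the dominant weight in a 1-D causal convolution kernel suggests the delay.

```python
import numpy as np

def potential_causes(attention, threshold=0.5):
    # Illustrative assumption: softmax-normalise the attention scores and
    # keep input series whose score clears a fraction of the uniform level.
    scores = np.exp(attention) / np.exp(attention).sum()
    return [i for i, s in enumerate(scores) if s >= threshold / len(scores)]

def estimated_delay(kernel_weights):
    # Illustrative assumption: with kernel taps ordered oldest -> newest,
    # the tap with the largest absolute weight hints at the time delay
    # between cause and effect (0 = most recent time step).
    k = np.asarray(kernel_weights)
    return len(k) - 1 - int(np.argmax(np.abs(k)))

print(potential_causes(np.array([2.0, 0.1, 1.5])))  # [0, 2]
print(estimated_delay([0.1, 0.9, 0.2]))             # 1
```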
- Python >= 3.5
- PyTorch (tested with PyTorch 0.4.1)
- Optional: CUDA (tested with CUDA 9.2)
- numpy
- pandas
- matplotlib
- pylab
- networkx
- Python standard library modules: random, heapq, copy, os, sys, argparse
Required: Dataset(s) containing multiple continuous time series (such as stock markets).
File format: CSV file (comma-separated) with a header row and one column per time series.
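A minimal input file matching this format (the column names and values below are invented for illustration) could look like the string in this sketch, which loads it with pandas, one of the listed dependencies:

```python
import io
import pandas as pd

# Hypothetical dataset: header row, then one column per continuous time series.
csv_text = """stockA,stockB,stockC
0.012,-0.004,0.007
-0.003,0.010,-0.001
0.008,0.002,0.005
"""

df = pd.read_csv(io.StringIO(csv_text))
print(df.shape)  # (3, 3): three time steps, three time series
```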
The folder 'data' contains two benchmarks:
- Financial benchmark with stock returns, taken from S. Kleinberg (Finance CPT) and preprocessed
- Neuroscientific fMRI benchmark with brain networks, taken from Smith et al. and preprocessed
Furthermore, there is one small demonstration dataset (a subset of a financial dataset).
Run `runTCDF.py --data yourdataset.csv` to run TCDF on your own dataset. TCDF will discover the causal relationships between the time series in the dataset, including their time delays. If a ground truth is available, the results of TCDF can be compared with it for evaluation as follows: `runTCDF.py --ground_truth yourdataset.csv=yourgroundtruth.csv`. Use `--help` to see all argument options.
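The temporal causal graph that TCDF plots can be thought of as a directed graph whose edges are labelled with the discovered delays. The following hedged sketch (series names and delays are invented; this is not TCDF's own plotting code) builds such a graph with networkx, one of the listed dependencies:

```python
import networkx as nx

# Hypothetical discovered relationships: (cause, effect, delay in time steps).
# A delay of 0 corresponds to an instantaneous effect.
discovered = [("stockA", "stockB", 2), ("stockC", "stockB", 0), ("stockA", "stockC", 1)]

G = nx.DiGraph()
for cause, effect, delay in discovered:
    G.add_edge(cause, effect, delay=delay)

# Each directed edge carries its delay as an edge attribute.
for cause, effect, data in G.edges(data=True):
    print(f"{cause} -> {effect} (delay={data['delay']})")
```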
To evaluate the predictions made by TCDF, run `evaluate_predictions_TCDF`. Use `--help` to see all argument options.
Check out the Jupyter Notebook `TCDF Demo` to see an example.