The Rescaled Auto-Density (RAD) is a noise-robust metric for inferring the distance to criticality (DTC). It aims to perform well in settings where the noise level varies between time series.
Original Paper
B. Harris, L. Gollo, and B.D. Fulcher. "Tracking the distance to criticality in systems with unknown noise", Physical Review X 14, 031021 (2024).
What is RAD?
In the original paper, we use a data-driven approach to develop theory for inferring the DTC in the variable-noise setting. This theory relies on two key time-series properties: (i) the distribution of values; and (ii) the scale of fast fluctuations. Combining these two properties, such as by curve-fitting the distribution and solving the Fokker–Planck equation, partials out the noise strength to give a noise-robust estimate of the shape of the potential function (and hence the control parameter). RAD implements these key algorithmic steps in a simplified way, efficiently estimating the DTC from data using elementary time-series operations. In RAD, the two algorithmic elements are captured by standard deviations above and below the median value (measuring the distribution) and the standard deviation of lag-1 differences (measuring fast fluctuations). Our RAD feature is given by:
where x is the input time series, Δ denotes the lag-1 difference operator, σ denotes the standard deviation, and the above- and below-median spreads are defined relative to m, the median value of x.
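To make the two algorithmic ingredients concrete, here is a minimal Python sketch: it computes the standard deviations of values above and below the median, together with the standard deviation of lag-tau differences. The way the ingredients are combined below (rescaling the difference of reciprocal spreads by the fast-fluctuation scale) is an illustrative reading of the description above, not the authoritative definition; consult the published CR_RAD implementations for the exact formula.

```python
import numpy as np

def rad(x, tau=1, center=False):
    """Sketch of the RAD ingredients: above/below-median spreads and
    the scale of fast (lag-tau) fluctuations."""
    x = np.asarray(x, dtype=float)
    if center:
        # optional centering step (cf. the doAbs option)
        x = np.abs(x - np.median(x))
    med = np.median(x)
    sigma_below = np.std(x[x <= med])        # spread below the median
    sigma_above = np.std(x[x > med])         # spread above the median
    sigma_diff = np.std(x[tau:] - x[:-tau])  # scale of fast fluctuations
    # Illustrative combination (an assumption): rescale the difference of
    # reciprocal spreads by the fast-fluctuation scale, so that an overall
    # noise amplitude cancels between the two ingredients
    return float(sigma_diff * (1.0 / sigma_above - 1.0 / sigma_below))
```

The key design point is that both the spread terms and the fast-fluctuation term scale with the noise amplitude, so combining them cancels the unknown noise level.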
Running RAD in Matlab, Julia, and Python
Code implementing RAD in Matlab, Julia, and Python is collected in this repository.
RAD is also available through existing toolboxes in both Matlab and Julia:
Matlab: in hctsa (as of v1.08), as the CR_RAD() master function and the CR_RAD_1, CR_RAD_2, and CR_RAD_tau operations
Julia: in the TimeseriesFeatures.jl package
Usage Guide
RAD is best suited to univariate, regularly-sampled time-series data. The sampling period should be constant between time series, and if the time series are excessively smooth or over-sampled then the tau parameter should be set to a value greater than 1 (a suitable tau can often be inferred by inspecting the autocorrelation function of a time series).
RAD also includes an optional centering step (doAbs), which should be enabled if the time series do not represent radial values (e.g., enable doAbs if the distribution is approximately symmetric or 'two-sided').
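As a concrete illustration of choosing tau from the autocorrelation function, the sketch below picks the first lag at which the ACF decays below 1/e. Both the 1/e threshold and the simple biased ACF estimator are illustrative assumptions, not part of RAD itself (the first zero-crossing of the ACF is another common choice).

```python
import numpy as np

def first_acf_crossing(x, threshold=1 / np.e):
    """Return the first lag at which the sample autocorrelation
    function drops below `threshold`."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.dot(x, x) / len(x)
    for lag in range(1, len(x)):
        acf = np.dot(x[:-lag], x[lag:]) / (len(x) * var)
        if acf < threshold:
            return lag
    return len(x) - 1

# A heavily over-sampled sinusoid: the ACF decays slowly, suggesting
# that tau should be set well above 1
x = np.sin(np.linspace(0, 4 * np.pi, 2000))
tau = first_acf_crossing(x)  # a lag much greater than 1
```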
theft is a software package for R that facilitates user-friendly access to a consistent interface for the extraction of time-series features. The package provides a single point of access to >1200 time-series features from a range of existing R and Python packages.
pyspi
Python library for computing hundreds of Statistics of Pairwise Interactions (SPIs)
Overview
Link to substantial documentation
TSDR: time-series dimension reduction
Time-series dimension reduction (TSDR) methods are a class of algorithms for the dimension reduction of multivariate data that exploit temporal structure. In contrast to general dimension reduction (GDR) methods, the outputs of which are invariant to temporal permutation, TSDR methods are sensitive to temporal structure.
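This distinction can be demonstrated directly: the covariance matrix (the input to a GDR method such as PCA) is unchanged by shuffling the time order, whereas a lag-based statistic is not. A minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
# A multivariate time series with strong temporal structure:
# a 3-dimensional random walk of length 500, shape (T, n)
X = np.cumsum(rng.normal(size=(500, 3)), axis=0)
Xs = X[rng.permutation(len(X))]  # temporally permuted copy

# GDR view: the covariance matrix is identical for the original and
# permuted series, so PCA cannot distinguish them
assert np.allclose(np.cov(X.T), np.cov(Xs.T))

# TSDR view: a lag-1 autocovariance is destroyed by the permutation
def lag1_autocov(Y):
    Z = Y - Y.mean(axis=0)
    return float(np.mean(np.sum(Z[:-1] * Z[1:], axis=1)))

assert lag1_autocov(X) > abs(lag1_autocov(Xs))
```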
The figure above visually summarises the categories of TSDR methods that we defined in Owens & Fulcher (2025). The first five categories (a-e) consist of methods that extract time-series components based on different aspects of temporal structure, i.e., slowness, autocorrelation, predictability, determinism, and non-stationarity. The final two categories (f-g) consist of methods that share a common methodology, i.e., diffusion-based and latent variable methods.
The tsdr package provides Python implementations of TSDR methods from each of these seven categories. The methods are written in a simple, readable style to help time-series analysts better understand a subset of the TSDR methods discussed in the Owens & Fulcher (2025) review paper.
If you use this software, please read and cite this open-access article:
A feature-based information-theoretic method for detecting interpretable, long-timescale pairwise interactions from time series.
Original Paper
Nguyen et al., "A feature-based information-theoretic approach for detecting interpretable, long-timescale pairwise interactions from time series", Physical Review Research (2025). Link.
This figure illustrates our method for detecting cases in which a target process, Y, is influenced by a statistical property of a recent time window of a source process, X. We contrast it with conventional MI, estimated directly from the signal space of the variables, which we denote MI_s. (a) MI_s is computed from the observed time-series values of processes X and Y. (b) MI_f iterates through time-series segments of length l of process X and reduces each window to a single real-valued summary statistic, z_t. MI is then computed between the feature variable Z_t and the target variable Y_{t+1}.
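A minimal sketch of the windowed-feature MI idea, with illustrative assumptions: the window feature here is the standard deviation (the published method considers a library of candidate features), and MI is estimated with a simple 2D-histogram plug-in estimator rather than the estimator used in the paper.

```python
import numpy as np

def mi_feature(x, y, window=10, bins=8):
    """Estimate MI between a windowed summary feature of x and the
    next value of y, using a plug-in 2D-histogram estimator."""
    T = len(x) - window
    # z_t: reduce each length-`window` segment of x to one number
    z = np.array([np.std(x[t:t + window]) for t in range(T)])
    ynext = y[window:window + T]  # target values following each window
    pxy, _, _ = np.histogram2d(z, ynext, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Synthetic example: y tracks the rolling standard deviation of x,
# a feature-mediated (not value-mediated) dependence
rng = np.random.default_rng(2)
x = rng.normal(size=2000) * (1 + np.sin(np.linspace(0, 20, 2000)) ** 2)
y = np.array([np.std(x[max(0, t - 10):t + 1]) for t in range(2000)])
y = y + 0.1 * rng.normal(size=2000)

mi_real = mi_feature(x, y)  # substantially exceeds a shuffled baseline
```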
Public code repository (R)
R code is available, which includes code for implementing our method and for reproducing all results in our paper.
For full documentation on using catch22 across multiple programming languages, click the link below:
Time-series software developed in the Dynamics and Neural Systems Group
This web resource provides an overview of the time-series analysis software that our research group has developed.
We are the Dynamics and Neural Systems Group (led by A/Prof Ben Fulcher) and part of Complex Systems research in the School of Physics at the University of Sydney.
We are detectives interested in finding patterns in time series measured from complex, time-varying systems. As part of this process, we often develop algorithms and analysis techniques that could be useful to researchers or general analysts interested in applying them to their own data.
Our group has developed three main types of software:
Software for implementing highly comparative time-series analysis, in which a large library of scientific methods are implemented and compared.
Feature subsets are efficiently coded (and high-performing) subsets of time-series features derived from the full set of features in hctsa.
New types of specific time-series analysis methods developed in the group.
Highly Comparative Toolkits
Software for implementing highly comparative time-series analysis, in which a large library of scientific methods are implemented and compared.
Feature Subsets
Efficiently coded (and high-performing) subsets of time-series features derived from the full set of features in hctsa.
Time-Series Analysis Methods
Specific time-series analysis methods we've developed.
hctsa
(Matlab)
>7000 univariate time-series analysis features and an associated analytic pipeline.
pyspi
(Python)
>140 statistics of pairwise interactions between pairs of time series.
theft
(R)
An implementation of feature-based time-series analysis using open-source feature sets.
py-hctsa
(Python)
A native Python implementation of hctsa with over 4000 univariate time-series analysis features.
(Matlab & Python)
>130 multi-neuron spike train measures of synchrony, oscillations, phase relationships, and spiking intensity and variability
catch22
(C, Matlab, R, Python, Julia)
A reduced set of 22 high-performing (and minimally redundant) time-series features.
catchaMouse16
(C, Matlab, Python)
A reduced set of 16 high-performing (and minimally redundant) features for analyzing fMRI time series.
Rescaled Auto-Density (RAD)
(Matlab, Python, Julia)
An efficient method to infer the distance to a critical point from noisy time series.
Feature-based mutual information (MIf)
(Python)
A method to infer long-timescale, feature-mediated interactions between processes.
MPSTime
A quantum-inspired method for estimation of the time-series joint probability distribution using matrix-product states (MPS).
catchaMouse16
CAnonical Time-series CHaracteristics for Mouse fMRI
The catchaMouse16 feature set provides a useful set of features to summarize the dynamics of fMRI time-series data, with implementations for Python, MATLAB, and C.
Details are in the following (open) journal publication:
A specific subset of 16 time-series features, drawn from the hctsa time-series feature library, designed to distinguish changes in functional magnetic resonance imaging (fMRI) time series taken from mice undergoing experimental manipulations of excitatory and inhibitory neural activity in their cortical circuits.
It is a collection of features generated by applying a general feature-selection pipeline to mouse fMRI time-series data. This represents a high-performing but minimally redundant, data-driven subset of the full library of hctsa features that best discriminates biologically relevant manipulations (using the DREADD technique) from non-invasive fMRI time series.
Reading more about the background to catchaMouse16
For information on the full set of over 7000 time-series features from which catchaMouse16 was derived, see the following (open) publications:
B.D. Fulcher and N.S. Jones. "hctsa: A Computational Framework for Automated Time-Series Phenotyping Using Massive Feature Extraction". Cell Systems 5, 527 (2017).
B.D. Fulcher, M.A. Little, and N.S. Jones. "Highly comparative time-series analysis: the empirical structure of time series and their methods". J. Roy. Soc. Interface 10, 83 (2013).
The pipeline used to create the catchaMouse16 feature set is an adaptation of the general pipeline from: C.H. Lubba, S.S. Sethi, P. Knaute, S.R. Schultz, B.D. Fulcher, and N.S. Jones. "catch22: CAnonical Time-series CHaracteristics". Data Mining and Knowledge Discovery (2019).
Installation
For C, MATLAB, and Python, the catchaMouse16 repository contains source code for building native binaries that can be called from these languages. You will need to download the source code, install the build dependencies (unix only), and then follow the instructions below to build the binaries for your language.
For Julia users, these binaries have been precompiled for all platforms. You can use them by installing the CatchaMouse16 package.
Compile by executing the makefile inside the catchaMouse16/C directory by running make.
Compute all the time-series features for some time-series data contained in <infile>.
If an <outfile> is not provided, then the output is sent to stdout.
Access the efficient Python implementation of the feature set by running the following code:
You can then test the features with:
Feature Descriptions
Note: all catchaMouse16 features are statistical properties of the z-scored time series: they aim to focus on properties of the time-ordering of the data and are insensitive to the raw values in the time series.
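The z-scoring step can be illustrated directly: any feature computed on the z-scored series is unchanged by a linear rescaling of the raw values. A small sketch, using lag-1 autocorrelation as the example feature:

```python
import numpy as np

def zscore(x):
    """Subtract the mean and divide by the standard deviation."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def lag1_ac(x):
    """Lag-1 autocorrelation of the z-scored series."""
    z = zscore(x)
    return float(np.mean(z[:-1] * z[1:]))

rng = np.random.default_rng(3)
x = np.cumsum(rng.normal(size=1000))
# Linear rescaling of the raw values leaves the feature unchanged
assert np.isclose(lag1_ac(x), lag1_ac(5.0 * x + 3.0))
```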
The table below follows the same naming scheme as the paper, pairing each feature's hctsa name with the interpretable name used throughout the paper, along with corresponding descriptions.
Interpretable name
Feature name
Description
MPSTime
A quantum-inspired method for estimating the time-series joint distribution using matrix-product states (MPS).
Original Paper
J.B. Moore, H.P. Stackhouse, B.D. Fulcher, and S. Mahmoodian. "Using matrix-product states for time-series machine learning".
using CatchaMouse16
X = randn(1000, 10) # an Array{Float64, 2} with 10 time series
f = catchaMouse16[:AC_nl_035](X) # a 1×10 Matrix{Float64} (for a single feature)
F = catchaMouse16(X) # a 16×10 FeatureMatrix{Float64} (for 16 features)
Public code repository (Julia)
Julia code is available in this GitHub repository, which includes code for implementing our method and for reproducing all results in our paper:
MPSTime, a framework for time-series machine learning with MPSs. (a) Encoding: each real-valued time-series amplitude x_t is encoded in a d-dimensional vector φ_t by projecting its value onto a truncated orthonormal basis with d basis functions. An entire time series (of length T samples) is then encoded as a set of T φ_t vectors, which we represent as a product state embedded in a d^T-dimensional Hilbert space. (b) MPS training: using observed time series from a dataset, a generally entangled MPS (depicted here using Penrose graphical notation) with maximum bond dimension χ_max is trained with a DMRG-inspired sweeping optimization algorithm to approximate the joint distribution of the training data. Two copies of the trained MPS (one conjugate-transposed, denoted by the dagger †) with open physical indices encode the learned distribution, allowing us to sample from and do inference with complex, high-dimensional time-series distributions. In this work, we introduce MPS-based learning algorithms, which we collectively refer to as MPSTime, for two important time-series ML problems: (c) imputation (inferring unmeasured values of a time series) and (d) classification (inferring a time-series class). (c) Generative time-series modeling: we use conditional sampling to perform imputation of missing datapoints. Known points of a single time-series instance (black lines) project the MPS into a subspace, which is then used to find the unknown datapoints (red line). The same method can be used to tackle some forecasting problems if the missing points are future values. (d) MPS for classification: multiple labeled classes of time series are used to train MPSs. Taking the overlap of unlabeled time-series data (encoded as a product state) with each MPS determines its class.
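As a toy illustration of the encoding step (a), the sketch below maps each (rescaled) amplitude to a d = 2 unit vector using the trigonometric feature map common in MPS-based learning; this particular basis is an illustrative assumption, not necessarily the basis used by MPSTime.

```python
import numpy as np

def encode(x):
    """Map each amplitude (rescaled to [0, 1]) to a d = 2 unit vector
    via a trigonometric feature map (an illustrative basis choice)."""
    x = np.asarray(x, dtype=float)
    u = (x - x.min()) / (x.max() - x.min())  # rescale to [0, 1]
    # cos^2 + sin^2 = 1, so every phi_t has unit norm
    return np.stack([np.cos(np.pi * u / 2), np.sin(np.pi * u / 2)], axis=1)

x = np.sin(np.linspace(0, 10, 50))
phi = encode(x)  # shape (T, d) = (50, 2): one d-dim vector per sample
assert np.allclose(np.sum(phi ** 2, axis=1), 1.0)
# The corresponding product state lives in a d**T = 2**50-dim Hilbert space
```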