Time series data is central to a wide range of domains, including finance and meteorology. Effective feature engineering is critical when preparing time series data for machine learning models.
In this article, we delve into advanced techniques for time series feature engineering, such as Fourier transform, wavelet transformation, derivatives, and autocorrelation.
These techniques assist in uncovering hidden structures and trends, capturing both time and frequency information, and measuring the linear relationship between data points at varying lags.
Techniques
Fourier Transform
Fourier transform is a mathematical technique that decomposes a time series signal into its frequency components. It's based on the principle that any signal can be broken down into a series of sinusoidal waves with varying amplitudes, frequencies, and phases.
By capturing the periodic patterns in the data, the Fourier transform can help identify hidden structures and trends that might be useful for prediction.
The Fourier transform can be categorized into two types: the continuous Fourier transform (CFT) for continuous signals and the discrete Fourier transform (DFT) for discrete signals.
The Fast Fourier Transform (FFT) is an efficient algorithm for computing the DFT of a sequence.
The power spectral density, obtained from the squared magnitudes of the Fourier transform, can be used as a feature in machine learning models to improve their performance. Here is a simple usage example of the FFT:
import numpy as np
import matplotlib.pyplot as plt
from scipy.fft import fft, ifft
# Synthetic example signal: a sinusoid plus uniform noise
t = np.arange(200)
time_series = np.sin(2 * np.pi * t / 20) + 0.5 * np.random.random(200)
# Perform Fast Fourier Transform (FFT)
fft_values = fft(time_series)
# Get the magnitude and frequencies
fft_magnitude = np.abs(fft_values)
frequencies = np.fft.fftfreq(len(time_series))
# Plot the frequency spectrum
plt.plot(frequencies, fft_magnitude)
plt.xlabel('Frequency')
plt.ylabel('Magnitude')
plt.title('Frequency Spectrum')
plt.show()
# Filter out the low magnitude frequencies
threshold = 7
fft_values_filtered = fft_values.copy()
fft_values_filtered[fft_magnitude < threshold] = 0
# Perform the inverse Fast Fourier Transform (IFFT)
filtered_time_series = ifft(fft_values_filtered)
# Plot the original and filtered time series
plt.plot(time_series, label='Original')
plt.plot(filtered_time_series.real, label='Filtered')
plt.xlabel('Time')
plt.ylabel('Value')
plt.title('Original vs. Filtered Time Series')
plt.legend()
plt.show()
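To turn the spectrum into model inputs, the power spectral density mentioned above can be summarized into a fixed-length feature vector. A minimal sketch continuing from the snippet above (keeping the k strongest components is an illustrative choice, not a prescribed recipe):
# Power spectral density: squared FFT magnitudes over the positive frequencies
n = len(time_series)
psd = (fft_magnitude[1:n // 2] ** 2) / n
# Keep the k strongest components as features: their frequencies and powers
k = 5
top_bins = np.argsort(psd)[-k:]
psd_features = np.concatenate([frequencies[1:n // 2][top_bins], psd[top_bins]])
print(psd_features)  # a fixed-length feature vector for a downstream model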
Wavelet Transformation
Wavelet transformation is a mathematical technique used to decompose a signal into different frequency components. It is particularly useful in time series analysis because it allows us to capture both time and frequency information in the data.
Unlike the Fourier decomposition, which uses complex exponential (sine and cosine) basis functions that extend over all time, a wavelet decomposition uses a time-localized oscillatory function as the analyzing or mother wavelet.
The most common wavelet transformation is the Continuous Wavelet Transform (CWT).
The CWT of a signal $x(t)$ at scale $a > 0$ and translation $b$ is defined as:

$$W(a, b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} x(t)\, \psi^{*}\!\left(\frac{t - b}{a}\right) dt$$

where $\psi$ is the mother wavelet and $\psi^{*}$ denotes its complex conjugate.
The wavelet function can be chosen based on the type of data and the desired features. A popular choice is the Morlet wavelet, a product of a complex exponential and a Gaussian envelope:

$$\psi(t) = \pi^{-1/4}\, e^{i\omega_0 t}\, e^{-t^2/2}$$

where $\omega_0$ is the central frequency of the wavelet.
Below is a SciPy sketch that computes the CWT of a synthetic two-tone signal using the Morlet wavelet. Note that scipy.signal.cwt is deprecated in recent SciPy releases (PyWavelets' pywt.cwt is the usual replacement), so this assumes an older SciPy version:
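import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
# Synthetic signal with two frequency components
t = np.linspace(0, 1, 500)
data = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 25 * t)
# Compute the CWT with the Morlet wavelet over a range of scales
widths = np.arange(1, 31)
cwt_matrix = signal.cwt(data, signal.morlet2, widths)
# Scalogram: |coefficients| as a function of time and scale
plt.imshow(np.abs(cwt_matrix), extent=[0, 1, widths[-1], widths[0]],
           aspect='auto', cmap='viridis')
plt.xlabel('Time')
plt.ylabel('Scale')
plt.title('CWT Scalogram (Morlet wavelet)')
plt.show()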
Derivatives
Derivatives describe the rate of change in time series data: the first derivative represents velocity, while the second derivative represents acceleration. By incorporating these features, we can better capture the dynamics of the time series.
The first-order derivative represents the instantaneous rate of change of a variable with respect to time. It helps identify the direction and magnitude of the trend in the data and can serve as a marker for detecting changes in stock or market behavior regimes.
The second-order derivative represents the rate of change of the first-order derivative. It can help identify acceleration or deceleration in the trend and detect points of inflection.
Seasonal derivatives and cross-derivatives are also worth mentioning: the former capture seasonal variations in the data, while the latter capture how pairs of variables influence each other (a seasonal-differencing sketch follows the pandas example below).
To compute the derivatives numerically, we can use the finite difference method. For a series sampled at unit intervals, the first derivative is approximated as:

$$y'_t \approx \Delta y_t = y_t - y_{t-1}$$

and the second derivative as the difference of differences, $\Delta^2 y_t = \Delta y_t - \Delta y_{t-1}$.
This is the simplest way to compute the first derivative with pandas and use it as a feature:
import numpy as np
import pandas as pd
# Example data: a random walk with a daily index (illustrative)
dates = pd.date_range('2023-01-01', periods=200, freq='D')
time_series = pd.Series(np.cumsum(np.random.randn(200)), index=dates)
# Calculate the first-order derivative
time_series_diff = time_series.diff().dropna()
# Combine the lagged value and the first-order derivative as features
features = pd.concat([time_series.shift(1), time_series_diff], axis=1).dropna()
features.columns = ['y_lag1', 'first_derivative']
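A seasonal derivative is computed the same way, just differenced at the seasonal lag. A minimal sketch continuing from the snippet above (the weekly period of 7 is an assumed example):
# Seasonal derivative: difference at the seasonal lag (7 days for weekly data)
features['seasonal_derivative'] = time_series.diff(7)
features = features.dropna()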
Here's an example of using the first-order derivative as an indicator of regime changes: when the derivative changes its sign, it signifies a change in the regime. A minimal sketch continuing from the snippet above (the smoothing window of 5 is an assumed choice to keep noise from triggering spurious sign flips):
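# Smooth the derivative, then flag the points where its sign flips
smoothed = time_series_diff.rolling(window=5).mean()
sign = np.sign(smoothed)
regime_change = sign.ne(sign.shift(1)) & sign.notna() & sign.shift(1).notna()
print(time_series_diff.index[regime_change])  # timestamps of detected regime changes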
Autocorrelation and Partial Autocorrelation
Autocorrelation, also known as serial correlation, measures the linear relationship between time series data points at different lags; it is the autocovariance function normalized by the series variance.
Partial autocorrelation, on the other hand, measures the correlation between two data points at a given lag after removing the effect of the data points at the intervening (smaller) lags.
Code example for autocorrelation and partial autocorrelation:
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.tsa.stattools import acf, pacf
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
# Example input time series (illustrative random walk)
data = np.cumsum(np.random.randn(200))
# Compute autocorrelation and partial autocorrelation up to 10 lags
lags = 10
autocorr = acf(data, nlags=lags)
partial_autocorr = pacf(data, nlags=lags)
# Plot both functions on a shared figure
f, ax = plt.subplots(nrows=2, ncols=1, figsize=(8, 6))
plot_acf(data, lags=lags, ax=ax[0])
plot_pacf(data, lags=lags, ax=ax[1], method='ols')
plt.tight_layout()
plt.show()
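Beyond the plots, the autocorr and partial_autocorr arrays themselves can be appended to a feature matrix as fixed-length descriptors of a series; the plots are also the standard tool for choosing AR and MA orders in ARIMA-style models.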
Conclusion
Advanced feature engineering techniques for time series data can significantly improve the performance of machine learning models.
Fourier transform, wavelet transformation, derivatives, and autocorrelation each contribute unique insights into the underlying structure and trends of temporal data.
By leveraging these techniques, analysts can create more accurate and efficient forecasting models, ultimately leading to better decision-making in various domains.