02: Classical Statistical Methods

"All models are wrong, but some are useful." - George Box

Welcome to the world of classical time series forecasting! This notebook covers the foundational statistical methods that have been used successfully for decades.

🎯 Learning Objectives

By the end of this notebook, you'll be able to:

  • Apply moving averages and exponential smoothing

  • Understand and implement ARIMA models

  • Handle seasonal data with SARIMA

  • Evaluate and compare forecasting methods

  • Choose the right method for your data

📊 Moving Averages

Moving averages are simple but effective smoothing techniques that reduce noise and highlight trends.

Types of Moving Averages:

  1. Simple Moving Average (SMA): Equal weights to all observations \(SMA_t = \frac{y_{t-n+1} + \dots + y_t}{n}\)

  2. Weighted Moving Average (WMA): Different weights for different observations \(WMA_t = \frac{w_1 y_{t-n+1} + \dots + w_n y_t}{w_1 + \dots + w_n}\)

  3. Exponential Moving Average (EMA): Exponentially decreasing weights \(EMA_t = \alpha y_t + (1-\alpha)\, EMA_{t-1}\)

Applications:

  • Trend identification

  • Noise reduction

  • Forecasting (naive extrapolation)
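
The three averages above map directly onto pandas operations; a minimal sketch on a synthetic series (the series and window choices are illustrative):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
series = pd.Series(np.cumsum(rng.normal(0, 1, 100)))  # synthetic random walk

window = 5
sma = series.rolling(window).mean()                    # SMA: equal weights
weights = np.arange(1, window + 1)                     # WMA: linearly increasing weights
wma = series.rolling(window).apply(lambda x: np.dot(x, weights) / weights.sum(), raw=True)
ema = series.ewm(alpha=0.3, adjust=False).mean()       # EMA: recursive exponential weights

print(sma.iloc[-1], wma.iloc[-1], ema.iloc[-1])
```

Note that `adjust=False` makes `.ewm` follow the recursion \(EMA_t = \alpha y_t + (1-\alpha) EMA_{t-1}\) exactly.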

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')

from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Plotting style
sns.set_palette("husl")
plt.rcParams['figure.figsize'] = [12, 8]
plt.rcParams['font.size'] = 12

# Set random seed
np.random.seed(42)

print("Classical Time Series Libraries Loaded!")
print("Statsmodels loaded for ARIMA and exponential smoothing")
def generate_forecasting_data(n_train=200, n_test=50):
    """Generate synthetic time series for forecasting examples"""
    
    np.random.seed(42)
    t = np.arange(n_train + n_test)
    
    # Trend + seasonality + noise
    trend = 0.02 * t
    seasonal = 3 * np.sin(2 * np.pi * t / 12)  # Monthly seasonality
    noise = np.random.normal(0, 0.5, len(t))
    
    # Add some autocorrelation (AR(1) process)
    ar_component = np.zeros(len(t))
    phi = 0.7  # Autoregressive coefficient
    for i in range(1, len(t)):
        ar_component[i] = phi * ar_component[i-1] + noise[i]
    
    y = 10 + trend + seasonal + ar_component
    
    # Create DataFrame
    dates = pd.date_range('2020-01-01', periods=len(y), freq='M')
    df = pd.DataFrame({'value': y}, index=dates)
    
    # Split into train/test
    train = df.iloc[:n_train]
    test = df.iloc[n_train:]
    
    return train, test, df

# Generate data
train_data, test_data, full_data = generate_forecasting_data()

print(f"Training data: {len(train_data)} observations")
print(f"Test data: {len(test_data)} observations")
print(f"Total: {len(full_data)} observations")

# Plot the data
plt.figure(figsize=(12, 6))
plt.plot(train_data.index, train_data['value'], 'b-', label='Training Data', linewidth=2)
plt.plot(test_data.index, test_data['value'], 'r-', label='Test Data', linewidth=2)
plt.axvline(x=train_data.index[-1], color='k', linestyle='--', alpha=0.7, label='Train/Test Split')
plt.title('Time Series Forecasting Data')
plt.xlabel('Date')
plt.ylabel('Value')
plt.legend()
plt.grid(True, alpha=0.3)
plt.show()
def moving_average_forecast(train, test, window_sizes=[3, 6, 12]):
    """Implement different moving average forecasting methods"""
    
    forecasts = {}
    
    for window in window_sizes:
        # Simple Moving Average
        sma_forecast = train['value'].rolling(window=window).mean().iloc[-1]
        sma_forecasts = [sma_forecast] * len(test)  # Constant forecast
        
        # Rolling forecast (update as we go)
        rolling_forecasts = []
        current_window = list(train['value'].values[-window:])
        
        for i in range(len(test)):
            # Forecast next value
            forecast = np.mean(current_window)
            rolling_forecasts.append(forecast)
            
            # Update window with the actual value (one-step-ahead rolling forecast)
            current_window.append(test['value'].iloc[i])
            current_window.pop(0)  # Remove oldest
        
        forecasts[f'SMA_{window}'] = sma_forecasts
        forecasts[f'Rolling_SMA_{window}'] = rolling_forecasts
    
    # Exponential Moving Average
    alpha = 0.3  # Smoothing parameter
    ema_values = [train['value'].iloc[0]]  # Initialize with first value
    
    for val in train['value'].iloc[1:]:
        ema_new = alpha * val + (1 - alpha) * ema_values[-1]
        ema_values.append(ema_new)
    
    # Forecast with EMA
    ema_forecast = ema_values[-1]
    ema_forecasts = [ema_forecast] * len(test)
    forecasts['EMA'] = ema_forecasts
    
    return forecasts

# Generate forecasts
ma_forecasts = moving_average_forecast(train_data, test_data)

# Plot results
plt.figure(figsize=(15, 8))

# Plot actual data
plt.plot(full_data.index, full_data['value'], 'k-', linewidth=2, label='Actual', alpha=0.8)
plt.axvline(x=train_data.index[-1], color='k', linestyle='--', alpha=0.7)

# Plot forecasts
colors = ['blue', 'red', 'green', 'orange', 'purple', 'brown']
for i, (method, forecast) in enumerate(ma_forecasts.items()):
    plt.plot(test_data.index, forecast, color=colors[i % len(colors)], 
             linestyle='--', linewidth=2, label=method, alpha=0.8)

plt.title('Moving Average Forecasting Methods')
plt.xlabel('Date')
plt.ylabel('Value')
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')
plt.grid(True, alpha=0.3)
plt.tight_layout()
plt.show()

# Calculate accuracy metrics
print("\nForecasting Accuracy:")
print("-" * 30)

actual_values = test_data['value'].values
for method, forecast in ma_forecasts.items():
    mae = mean_absolute_error(actual_values, forecast)
    rmse = np.sqrt(mean_squared_error(actual_values, forecast))
    print(f"{method:15} | MAE: {mae:.4f} | RMSE: {rmse:.4f}")

📈 Exponential Smoothing

Exponential smoothing methods give more weight to recent observations while exponentially decreasing the weights of older observations.

Single Exponential Smoothing (SES):

\[\hat{y}_{t+1} = \alpha y_t + (1-\alpha) \hat{y}_t\]

Double Exponential Smoothing (Holt's Method):

  • Level: \(\ell_t = \alpha y_t + (1-\alpha)(\ell_{t-1} + b_{t-1})\)

  • Trend: \(b_t = \beta (\ell_t - \ell_{t-1}) + (1-\beta) b_{t-1}\)

  • Forecast: \(\hat{y}_{t+h} = \ell_t + h b_t\)
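
The three Holt recursions above can be implemented in a few lines; a hand-rolled sketch with fixed (not optimized) α and β and a naive initialization:

```python
import numpy as np

def holt_forecast(y, alpha=0.3, beta=0.1, horizon=5):
    """Double exponential smoothing: update level and trend, then extrapolate."""
    level, trend = y[0], y[1] - y[0]  # naive initialization
    for t in range(1, len(y)):
        prev_level = level
        level = alpha * y[t] + (1 - alpha) * (level + trend)      # level update
        trend = beta * (level - prev_level) + (1 - beta) * trend  # trend update
    return [level + h * trend for h in range(1, horizon + 1)]     # y_hat at t+h

y = np.array([10.0, 11.0, 12.1, 12.9, 14.2, 15.0])
print(holt_forecast(y))  # forecasts continue the estimated linear trend
```

In practice one would let `ExponentialSmoothing(...).fit(optimized=True)` estimate α and β, as the next cell does.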

Triple Exponential Smoothing (Holt-Winters):

  • Adds seasonal component

  • Additive: For constant seasonal variation

  • Multiplicative: For increasing/decreasing seasonal variation

Parameter Selection:

  • α (alpha): Smoothing parameter for the level (0-1)

  • β (beta): Smoothing parameter for the trend (0-1)

  • γ (gamma): Smoothing parameter for seasonality (0-1)

def exponential_smoothing_forecast(train, test, seasonal_periods=12):
    """Implement different exponential smoothing methods"""
    
    forecasts = {}
    
    # Single Exponential Smoothing
    try:
        ses_model = ExponentialSmoothing(train['value'], trend=None, seasonal=None)
        ses_fit = ses_model.fit(optimized=True)
        ses_forecast = ses_fit.forecast(len(test))
        forecasts['SES'] = ses_forecast.values
        print(f"SES - Alpha: {ses_fit.params['smoothing_level']:.4f}")
    except Exception as e:
        print(f"SES fitting failed: {e}")
    
    # Double Exponential Smoothing (Holt's)
    try:
        holt_model = ExponentialSmoothing(train['value'], trend='add', seasonal=None)
        holt_fit = holt_model.fit(optimized=True)
        holt_forecast = holt_fit.forecast(len(test))
        forecasts['Holt'] = holt_forecast.values
        print(f"Holt - Alpha: {holt_fit.params['smoothing_level']:.4f}, Beta: {holt_fit.params['smoothing_trend']:.4f}")
    except Exception as e:
        print(f"Holt fitting failed: {e}")
    
    # Triple Exponential Smoothing (Holt-Winters)
    try:
        # Additive seasonality
        hw_add_model = ExponentialSmoothing(train['value'], trend='add', seasonal='add', 
                                          seasonal_periods=seasonal_periods)
        hw_add_fit = hw_add_model.fit(optimized=True)
        hw_add_forecast = hw_add_fit.forecast(len(test))
        forecasts['Holt-Winters_Add'] = hw_add_forecast.values
        
        # Multiplicative seasonality
        hw_mult_model = ExponentialSmoothing(train['value'], trend='add', seasonal='mul', 
                                           seasonal_periods=seasonal_periods)
        hw_mult_fit = hw_mult_model.fit(optimized=True)
        hw_mult_forecast = hw_mult_fit.forecast(len(test))
        forecasts['Holt-Winters_Mult'] = hw_mult_forecast.values
        
        print(f"HW Add - Alpha: {hw_add_fit.params['smoothing_level']:.4f}, "
              f"Beta: {hw_add_fit.params['smoothing_trend']:.4f}, "
              f"Gamma: {hw_add_fit.params['smoothing_seasonal']:.4f}")
    except Exception as e:
        print(f"Holt-Winters fitting failed: {e}")
    
    return forecasts

# Generate exponential smoothing forecasts
es_forecasts = exponential_smoothing_forecast(train_data, test_data)

# Plot results
plt.figure(figsize=(15, 8))

# Plot actual data
plt.plot(full_data.index, full_data['value'], 'k-', linewidth=2, label='Actual', alpha=0.8)
plt.axvline(x=train_data.index[-1], color='k', linestyle='--', alpha=0.7)

# Plot forecasts
colors = ['red', 'blue', 'green', 'orange', 'purple']
for i, (method, forecast) in enumerate(es_forecasts.items()):
    plt.plot(test_data.index, forecast, color=colors[i % len(colors)], 
             linestyle='--', linewidth=2, label=method, alpha=0.8)

plt.title('Exponential Smoothing Forecasting Methods')
plt.xlabel('Date')
plt.ylabel('Value')
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')
plt.grid(True, alpha=0.3)
plt.tight_layout()
plt.show()

# Calculate accuracy
print("\nExponential Smoothing Accuracy:")
print("-" * 35)

actual_values = test_data['value'].values
for method, forecast in es_forecasts.items():
    mae = mean_absolute_error(actual_values, forecast)
    rmse = np.sqrt(mean_squared_error(actual_values, forecast))
    print(f"{method:18} | MAE: {mae:.4f} | RMSE: {rmse:.4f}")

🔮 ARIMA Models

ARIMA (AutoRegressive Integrated Moving Average) is one of the most widely used time series forecasting methods.

Components:

  1. AR (AutoRegressive): Regression on past values \(y_t = c + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \dots + \phi_p y_{t-p} + \epsilon_t\)

  2. I (Integrated): Differencing to achieve stationarity

    • d = number of differences needed

  3. MA (Moving Average): Regression on past forecast errors \(y_t = c + \epsilon_t + \theta_1 \epsilon_{t-1} + \theta_2 \epsilon_{t-2} + \dots + \theta_q \epsilon_{t-q}\)
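
To build intuition for the AR and MA components, one can simulate each in isolation with statsmodels' `ArmaProcess` (the coefficients 0.7 and 0.5 are arbitrary illustrations):

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

np.random.seed(0)
# ArmaProcess takes lag-polynomial coefficients: [1, -phi] for AR, [1, theta] for MA
ar1 = ArmaProcess(ar=[1, -0.7], ma=[1])   # AR(1): y_t = 0.7 y_{t-1} + e_t
ma1 = ArmaProcess(ar=[1], ma=[1, 0.5])    # MA(1): y_t = e_t + 0.5 e_{t-1}

y_ar = ar1.generate_sample(nsample=500)
y_ma = ma1.generate_sample(nsample=500)
print(ar1.isstationary, ma1.isinvertible)  # True True
```

The AR(1) sample shows slowly decaying autocorrelation, while the MA(1) sample's ACF cuts off after lag 1 — the pattern exploited in the order-selection guidelines below.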

ARIMA(p,d,q) Model:

  • p: AR order (suggested by the PACF)

  • d: Differencing order (to achieve stationarity)

  • q: MA order (suggested by the ACF)

Model Selection:

  • ACF/PACF analysis

  • Information criteria (AIC, BIC)

  • Cross-validation

  • Residual diagnostics

def arima_forecast(train, test, p=1, d=1, q=1):
    """Fit ARIMA model and generate forecasts"""
    
    try:
        # Fit ARIMA model
        model = ARIMA(train['value'], order=(p, d, q))
        model_fit = model.fit()
        
        # Generate forecasts
        forecast = model_fit.forecast(steps=len(test))
        
        # Get confidence intervals
        forecast_obj = model_fit.get_forecast(steps=len(test))
        conf_int = forecast_obj.conf_int()
        
        print(f"ARIMA({p},{d},{q}) fitted successfully")
        print(f"AIC: {model_fit.aic:.2f}, BIC: {model_fit.bic:.2f}")
        
        # Print model summary
        print("\nModel Summary:")
        print(model_fit.summary())
        
        return forecast.values, conf_int.values
        
    except Exception as e:
        print(f"ARIMA({p},{d},{q}) fitting failed: {e}")
        return None, None

# Test different ARIMA configurations
arima_configs = [
    (1, 0, 0),  # AR(1)
    (0, 0, 1),  # MA(1)
    (1, 0, 1),  # ARMA(1,1)
    (1, 1, 1),  # ARIMA(1,1,1)
    (2, 1, 2),  # ARIMA(2,1,2)
]

arima_results = {}

for p, d, q in arima_configs:
    print(f"\n=== ARIMA({p},{d},{q}) ===")
    forecast, conf_int = arima_forecast(train_data, test_data, p, d, q)
    if forecast is not None:
        arima_results[f'ARIMA_{p}_{d}_{q}'] = {
            'forecast': forecast,
            'conf_int': conf_int
        }

# Plot ARIMA results
plt.figure(figsize=(15, 8))

# Plot actual data
plt.plot(full_data.index, full_data['value'], 'k-', linewidth=2, label='Actual', alpha=0.8)
plt.axvline(x=train_data.index[-1], color='k', linestyle='--', alpha=0.7)

# Plot ARIMA forecasts
colors = ['red', 'blue', 'green', 'orange', 'purple']
for i, (method, results) in enumerate(arima_results.items()):
    forecast = results['forecast']
    conf_int = results['conf_int']
    
    plt.plot(test_data.index, forecast, color=colors[i % len(colors)], 
             linestyle='--', linewidth=2, label=method, alpha=0.8)
    
    # Plot confidence intervals
    plt.fill_between(test_data.index, conf_int[:, 0], conf_int[:, 1], 
                     color=colors[i % len(colors)], alpha=0.2)

plt.title('ARIMA Model Forecasts with Confidence Intervals')
plt.xlabel('Date')
plt.ylabel('Value')
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')
plt.grid(True, alpha=0.3)
plt.tight_layout()
plt.show()

# Calculate accuracy for ARIMA models
print("\nARIMA Model Accuracy:")
print("-" * 25)

actual_values = test_data['value'].values
for method, results in arima_results.items():
    forecast = results['forecast']
    mae = mean_absolute_error(actual_values, forecast)
    rmse = np.sqrt(mean_squared_error(actual_values, forecast))
    print(f"{method:12} | MAE: {mae:.4f} | RMSE: {rmse:.4f}")

📅 Seasonal ARIMA (SARIMA)

SARIMA extends ARIMA to handle seasonal patterns explicitly.

SARIMA(p,d,q)(P,D,Q,s) Model:

  • p,d,q: Non-seasonal ARIMA parameters

  • P,D,Q: Seasonal ARIMA parameters

  • s: Seasonal period (e.g., 12 for monthly, 7 for daily)

Seasonal Components:

  • Seasonal AR: \(\Phi(B^s) = 1 - \Phi_1 B^s - \dots - \Phi_P B^{sP}\)

  • Seasonal I: \((1 - B^s)^D\)

  • Seasonal MA: \(\Theta(B^s) = 1 + \Theta_1 B^s + \dots + \Theta_Q B^{sQ}\)

When to Use SARIMA:

  • Clear seasonal patterns in ACF/PACF

  • Seasonal differencing needed for stationarity

  • Domain knowledge suggests seasonal effects
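
Seasonal differencing, the \((1 - B^s)^D\) term, is just `diff(s)` in pandas; on data with a fixed 12-period cycle a single seasonal difference removes the pattern entirely (synthetic sketch):

```python
import numpy as np
import pandas as pd

t = np.arange(120)
noise = np.random.default_rng(1).normal(0, 0.3, 120)
y = pd.Series(10 + 3 * np.sin(2 * np.pi * t / 12) + noise)  # strong monthly cycle

seasonal_diff = y.diff(12).dropna()  # (1 - B^12) y_t = y_t - y_{t-12}

# The deterministic cycle cancels exactly; only differenced noise remains
print(f"std before: {y.std():.2f}, after: {seasonal_diff.std():.2f}")
```

A large drop in variance after `diff(s)` is a quick practical signal that seasonal differencing (D=1) is worthwhile.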

def sarima_forecast(train, test, order=(1,1,1), seasonal_order=(1,1,1,12)):
    """Fit SARIMA model and generate forecasts"""
    
    try:
        # Fit SARIMA model
        model = SARIMAX(train['value'], order=order, seasonal_order=seasonal_order)
        model_fit = model.fit(disp=False)
        
        # Generate forecasts
        forecast = model_fit.forecast(steps=len(test))
        
        # Get confidence intervals
        forecast_obj = model_fit.get_forecast(steps=len(test))
        conf_int = forecast_obj.conf_int()
        
        print(f"SARIMA{order}{seasonal_order} fitted successfully")
        print(f"AIC: {model_fit.aic:.2f}, BIC: {model_fit.bic:.2f}")
        
        # Print key parameters
        print("\nKey Parameters:")
        params = model_fit.params
        for name, value in params.items():
            print(f"  {name}: {value:.4f}")
        
        return forecast.values, conf_int.values
        
    except Exception as e:
        print(f"SARIMA fitting failed: {e}")
        return None, None

# Test SARIMA models
sarima_configs = [
    ((1,1,1), (0,0,0,12)),  # No seasonal component
    ((1,1,1), (1,0,0,12)),  # Seasonal AR
    ((1,1,1), (0,0,1,12)),  # Seasonal MA
    ((1,1,1), (1,1,1,12)),  # Full seasonal
]

sarima_results = {}

for order, seasonal_order in sarima_configs:
    print(f"\n=== SARIMA{order}{seasonal_order} ===")
    forecast, conf_int = sarima_forecast(train_data, test_data, order, seasonal_order)
    if forecast is not None:
        key = f'SARIMA_{order[0]}{order[1]}{order[2]}_{seasonal_order[0]}{seasonal_order[1]}{seasonal_order[2]}{seasonal_order[3]}'
        sarima_results[key] = {
            'forecast': forecast,
            'conf_int': conf_int
        }

# Plot SARIMA results
plt.figure(figsize=(15, 8))

# Plot actual data
plt.plot(full_data.index, full_data['value'], 'k-', linewidth=2, label='Actual', alpha=0.8)
plt.axvline(x=train_data.index[-1], color='k', linestyle='--', alpha=0.7)

# Plot SARIMA forecasts
colors = ['red', 'blue', 'green', 'orange']
for i, (method, results) in enumerate(sarima_results.items()):
    forecast = results['forecast']
    conf_int = results['conf_int']
    
    plt.plot(test_data.index, forecast, color=colors[i % len(colors)], 
             linestyle='--', linewidth=2, label=method, alpha=0.8)
    
    # Plot confidence intervals
    plt.fill_between(test_data.index, conf_int[:, 0], conf_int[:, 1], 
                     color=colors[i % len(colors)], alpha=0.2)

plt.title('SARIMA Model Forecasts')
plt.xlabel('Date')
plt.ylabel('Value')
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')
plt.grid(True, alpha=0.3)
plt.tight_layout()
plt.show()

# Compare all methods
print("\n=== COMPREHENSIVE MODEL COMPARISON ===")
print("Method                  | MAE     | RMSE")
print("-" * 45)

actual_values = test_data['value'].values
all_methods = {**ma_forecasts, **es_forecasts, **arima_results, **sarima_results}

for method, results in all_methods.items():
    if isinstance(results, dict) and 'forecast' in results:
        forecast = results['forecast']
    else:
        forecast = results
    
    mae = mean_absolute_error(actual_values, forecast)
    rmse = np.sqrt(mean_squared_error(actual_values, forecast))
    print(f"{method:22} | {mae:.4f} | {rmse:.4f}")

# Find best method
best_method = min(all_methods.items(), 
                  key=lambda x: mean_absolute_error(actual_values, 
                                                   x[1]['forecast'] if isinstance(x[1], dict) and 'forecast' in x[1] else x[1]))
print(f"\n๐Ÿ† Best performing method: {best_method[0]}")

๐Ÿ” Model Diagnosticsยถ

Proper model validation is crucial for reliable forecasts.

Residual Analysis:

  • Mean should be zero: No systematic bias

  • No autocorrelation: White noise residuals

  • Constant variance: Homoscedasticity

  • Normal distribution: For confidence intervals

Diagnostic Plots:

  • Residuals vs Time: Check for patterns

  • ACF of residuals: Should show no significant correlations

  • Q-Q plot: Normality check

  • Residuals vs Fitted: Heteroscedasticity check

from statsmodels.graphics.tsaplots import plot_acf
from scipy import stats
import statsmodels.api as sm

def model_diagnostics(model_fit, residuals, model_name="Model"):
    """Perform comprehensive model diagnostics"""
    
    fig, axes = plt.subplots(2, 2, figsize=(12, 10))
    
    # 1. Residuals over time
    axes[0,0].plot(residuals.index, residuals.values, 'b-', alpha=0.7)
    axes[0,0].axhline(y=0, color='r', linestyle='--', alpha=0.7)
    axes[0,0].set_title(f'{model_name}: Residuals over Time')
    axes[0,0].grid(True, alpha=0.3)
    
    # 2. ACF of residuals
    plot_acf(residuals.values, lags=20, ax=axes[0,1], alpha=0.05)
    axes[0,1].set_title(f'{model_name}: ACF of Residuals')
    
    # 3. Q-Q plot for normality
    sm.qqplot(residuals.values, line='45', ax=axes[1,0])
    axes[1,0].set_title(f'{model_name}: Q-Q Plot')
    
    # 4. Histogram of residuals
    axes[1,1].hist(residuals.values, bins=20, alpha=0.7, edgecolor='black')
    axes[1,1].axvline(x=residuals.mean(), color='r', linestyle='--', label=f'Mean: {residuals.mean():.4f}')
    axes[1,1].set_title(f'{model_name}: Residual Distribution')
    axes[1,1].legend()
    axes[1,1].grid(True, alpha=0.3)
    
    plt.tight_layout()
    plt.show()
    
    # Statistical tests
    print(f"\n=== {model_name} Diagnostics ===")
    
    # Basic statistics
    print(f"Residuals - Mean: {residuals.mean():.4f}, Std: {residuals.std():.4f}")
    print(f"Residuals - Min: {residuals.min():.4f}, Max: {residuals.max():.4f}")
    
    # Normality test (Shapiro-Wilk)
    stat, p_value = stats.shapiro(residuals.values)
    print(f"Shapiro-Wilk normality test: statistic={stat:.4f}, p-value={p_value:.4f}")
    if p_value > 0.05:
        print("✅ Residuals appear normally distributed")
    else:
        print("❌ Residuals do not appear normally distributed")
    
    # Ljung-Box test for autocorrelation
    from statsmodels.stats.diagnostic import acorr_ljungbox
    lb_test = acorr_ljungbox(residuals.values, lags=[10], return_df=True)
    lb_pvalue = lb_test['lb_pvalue'].iloc[0]
    print(f"Ljung-Box test (lag=10): p-value={lb_pvalue:.4f}")
    if lb_pvalue > 0.05:
        print("✅ No significant autocorrelation in residuals")
    else:
        print("❌ Significant autocorrelation in residuals")

# Fit a model and check diagnostics
model = ARIMA(train_data['value'], order=(1,1,1))
model_fit = model.fit()

# Get residuals
residuals = model_fit.resid.iloc[1:]  # Skip the first residual (initialization artifact)

# Run diagnostics
model_diagnostics(model_fit, residuals, "ARIMA(1,1,1)")

# Forecast and evaluate
forecast = model_fit.forecast(steps=len(test_data))
mae = mean_absolute_error(test_data['value'], forecast)
print(f"\nTest set MAE: {mae:.4f}")

🎯 Key Takeaways

  1. Moving averages are simple but effective for trend identification

  2. Exponential smoothing adapts quickly to changes in level and trend

  3. ARIMA models capture complex temporal dependencies

  4. SARIMA extends ARIMA to handle seasonal patterns explicitly

  5. Model diagnostics are crucial for validating forecast quality

  6. No single method works best - always compare multiple approaches

๐Ÿ” Method Selection Guideยถ

Use Moving Averages When:

  • Simple trend following is sufficient

  • Computational resources are limited

  • Interpretability is important

Use Exponential Smoothing When:

  • Recent data is more relevant

  • Trend or seasonality is present

  • Automatic parameter optimization is desired

Use ARIMA/SARIMA When:

  • Complex temporal dependencies exist

  • Stationarity can be achieved

  • Probabilistic forecasts are needed

  • ACF/PACF show clear patterns

💡 Pro Tips

  1. Always check stationarity before fitting ARIMA

  2. Use AIC/BIC for model selection, not just accuracy

  3. Validate on holdout data - don't trust in-sample fit alone

  4. Check residuals - they should be white noise

  5. Consider ensemble methods - combine multiple models

  6. Domain knowledge matters - incorporate business constraints
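
Tip 5 can be as simple as averaging the forecast arrays from several models; a sketch with made-up numbers (in practice the weights would come from validation-set accuracy):

```python
import numpy as np

# Hypothetical forecasts from three models over the same 3-step horizon
forecasts = {
    'sma':   np.array([10.1, 10.2, 10.3]),
    'holt':  np.array([10.4, 10.6, 10.8]),
    'arima': np.array([10.2, 10.3, 10.5]),
}

ensemble = np.mean(list(forecasts.values()), axis=0)  # equal-weight average
weights = {'sma': 0.2, 'holt': 0.3, 'arima': 0.5}     # e.g. inverse-MAE weights
weighted = sum(w * forecasts[m] for m, w in weights.items())

print(ensemble, weighted)
```

Even the equal-weight average often beats each individual model, because the models' errors are only partially correlated.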

🚀 Next Steps

Now that you understand classical methods, you're ready for:

  • Facebook Prophet - Automated forecasting with interpretable parameters

  • Deep Learning approaches - LSTM, Transformer-based forecasting

  • Advanced techniques - Bayesian forecasting, ensemble methods

  • Real-world applications - Production deployment and monitoring

📚 Further Reading

  • "Forecasting: Principles and Practice" by Hyndman & Athanasopoulos

  • "Time Series Analysis and Its Applications" by Shumway & Stoffer

  • "Practical Time Series Forecasting with R" by Shmueli & Lichtendahl

Ready to explore automated forecasting with Prophet? Let's continue! 📊🔮