
ANN TimeSeries Forecasting



Postby MarcDHall » Tue Dec 06, 2016 4:46 pm


I am looking into using an ANN as a means to forecast a time series. I'm using an activation network with resilient backpropagation. Right now my training data is algorithmically generated (e.g. sin + sin + ramp + noise). I'm using a sliding window, so n inputs and 1 output. The training data is segmented into chunks of size n+1, randomised, and fed to the learner. All fairly straightforward.
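To make the setup concrete, here is a minimal sketch in Python of the sliding-window preparation described above. The function names and constants are mine (the original uses AForge.NET in C#); it only illustrates the "n inputs, 1 output" chunking and shuffling.

```python
import math
import random

def make_series(n):
    # Synthetic series of the style described: sin + sin + ramp + noise.
    return [math.sin(0.1 * t) + math.sin(0.37 * t) + 0.002 * t
            + random.uniform(-0.05, 0.05) for t in range(n)]

def make_windows(series, window):
    # Slide a window of `window` inputs + 1 target over the series,
    # then shuffle the (input, target) pairs before training.
    pairs = [(series[i:i + window], series[i + window])
             for i in range(len(series) - window)]
    random.shuffle(pairs)
    inputs = [p[0] for p in pairs]
    targets = [[p[1]] for p in pairs]   # one output per sample
    return inputs, targets

series = make_series(720)
X, y = make_windows(series, window=3)
```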

It works well for many styles of training data, but only with small windows: even a window of 2 works, though noisier data may need a window of 3.

Oddly, though, once the window size reaches about 18 the NN no longer finds a solution. The error bounces around within a bounded min-max range but never drops.

In general, the error graphs either:

1) swoop down quickly;
2) show a period of noise bounded between two values, then a sudden fast swoop down, sometimes with a second short, shallow noisy region before a final drop to below 0.001;
3) or (with larger windows) never leave the first noisy region.

It could be that they would eventually converge given more time, but if so, the time required leaps up massively. I am passing in 720 samples per epoch and I time out at 4000 epochs, checking at each epoch for an average error below 0.001 over some range of recent errors.
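The stopping rule just described (epoch budget plus a moving-average error threshold) can be sketched as follows. This is illustrative Python, not the AForge API: `teacher_run_epoch` stands in for something like AForge's `ResilientBackpropagationLearning.RunEpoch`, and the averaging window of 10 is an assumed value.

```python
from collections import deque

def train(teacher_run_epoch, inputs, targets,
          max_epochs=4000, target_error=0.001, avg_window=10):
    # Run epochs until the moving average of the per-epoch error
    # drops below the threshold, or the epoch budget runs out.
    recent = deque(maxlen=avg_window)
    for epoch in range(max_epochs):
        error = teacher_run_epoch(inputs, targets)
        recent.append(error)
        if len(recent) == avg_window and sum(recent) / avg_window < target_error:
            return epoch, True   # converged
    return max_epochs, False     # timed out
```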

Is there any theoretical or technical reason for this?
Posts: 1
Joined: Tue Dec 06, 2016 4:32 pm
