AForge.NET


Scaling massive network inputs

The forum is to discuss topics from different artificial intelligence areas, like neural networks, genetic algorithms, machine learning, etc.

Scaling massive network inputs

Postby ovanwijk » Mon Jul 04, 2011 11:23 pm

First off: I need to solve a certain problem that I cannot completely explain (I'm not allowed to), but I will try to define it as well as possible.

The main problem is that the network needs to become BIG: possibly 1,000 inputs, and just 1-5 outputs.
The source of the inputs goes through heavy preprocessing, after which a matching input node (a classification of the inputs) is selected. However, the set of active inputs may differ every time (e.g. inputs 1-10 are used one time, inputs 20, 1, 2, 4, 5, 45, 23 the next), and this network needs to be evaluated about every minute!

As I said, the number of active inputs may range from massive down to only about 50, and the network still needs to give a decent approximation. How should I do it? Train a new network every time based on the current inputs and then use it to calculate the approximation, or create one massive network (say 2,000 inputs) and re-train whenever a 'new' input variable is added? Consider that I currently have about 5,000 training samples (there will be MUCH more later).

And what influence does all this have on performance? Can this run on a normal notebook/desktop, or will something more powerful be required?

Any thoughts are welcome! Greets
ovanwijk
 
Posts: 4
Joined: Mon Jul 04, 2011 10:38 pm

Re: Scaling massive network inputs

Postby andrew.kirillov » Tue Jul 05, 2011 8:45 am

If you are really sure your network needs to grow over time, then I would suggest starting from here. There are lots of things to read; for example, this one.

However, I would think in detail about the task you are trying to solve, to understand whether the network really needs to grow. You did not mention anything about the task, so it is hardly possible to comment on this.

ovanwijk wrote: Consider that I currently have about 5,000 training samples (there will be MUCH more later).

Increasing the number of training samples has no relation to the number of inputs (just in case there is confusion about it). A new training sample must not be represented by a new input.
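To illustrate the distinction, a minimal sketch (plain Python, nothing AForge-specific): the training set is a matrix with one row per sample and one column per network input, and adding samples grows only the rows.

```python
# Training data as a matrix: one row per sample, one column per network input.
samples = [
    [0.8, 0.5, 0.9],  # sample 1
    [0.3, 0.3, 0.7],  # sample 2
]
num_inputs = len(samples[0])  # 3 -- fixed by the problem representation

# Adding a new training sample adds a row...
samples.append([0.1, 0.9, 0.0])

# ...but the number of inputs (columns) is unchanged.
assert len(samples) == 3
assert len(samples[0]) == num_inputs == 3
```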
With best regards,
Andrew


Interested in supporting AForge.NET Framework?
andrew.kirillov
Site Admin, AForge.NET Developer
 
Posts: 3443
Joined: Fri Jan 23, 2009 9:12 am
Location: UK

Re: Scaling massive network inputs

Postby ovanwijk » Tue Jul 05, 2011 10:07 am

Thank you for your reply,

The reason the network needs to grow is that the source information comes from statistical methods applied to natural-language texts.
Naturally this input is not constant, except that 'keywords' (inputs) 1 to 50 exist while 51-2000+ do not, so parts of the network will simply not be used. Over time, more words with 'meaning' will be added to the network, and some that appear useless might be deleted (probably not, but still).

The point of mentioning the training samples is that all the training samples together determine the total set of input nodes, so it is not defined by me.

To give an example that comes close to what I want:
Training sample 1: Food: 0.8, Apple: 0.5, Healthy: 0.9
Training sample 2: Food: 0.3, Fruit: 0.3, Banana: 0.7, Calories: 0.9, Healthy: 0.3
Training sample 3: Gazoline: 0.1, Oil: 0.9, Food: 0.002, Healthy: 0.001

Each training sample will have 30-100 of those inputs that can all differ.
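One common way to feed such variable keyword sets into a fixed-size network is a fixed vocabulary that maps every known keyword to a permanent input index, with absent keywords left at 0.0. A minimal sketch (Python for illustration, not AForge's C# API; the vocabulary below just reuses the keywords from the samples above):

```python
# Every keyword ever seen gets a permanent input index.
vocab = {"Food": 0, "Apple": 1, "Healthy": 2, "Fruit": 3,
         "Banana": 4, "Calories": 5, "Gazoline": 6, "Oil": 7}

def encode(sample, vocab, size):
    """Turn a {keyword: value} dict into a fixed-length input vector;
    keywords absent from the sample stay at 0.0."""
    vec = [0.0] * size
    for word, value in sample.items():
        vec[vocab[word]] = value
    return vec

x1 = encode({"Food": 0.8, "Apple": 0.5, "Healthy": 0.9}, vocab, 2000)
# x1 has 2000 entries; only indices 0, 1 and 2 are non-zero.
```

When a genuinely new keyword appears, it simply gets the next free index; as long as the network is created with spare inputs (2,000 in the sketch), no re-architecting is needed, only re-training.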

PS. Is it possible to have negative weights in these networks?

Greets
ovanwijk
 
Posts: 4
Joined: Mon Jul 04, 2011 10:38 pm

Re: Scaling massive network inputs

Postby andrew.kirillov » Tue Jul 05, 2011 11:29 am

ovanwijk wrote: PS. Is it possible to have negative weights in these networks?

Weights are something found by the network's learning algorithm. They can be any numbers: positive, negative, zero ...
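As a toy illustration (plain Python gradient descent on a single sigmoid neuron, nothing AForge-specific): learning logical NOT forces the weight negative, because the input has to inhibit the output.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(1)
w = random.uniform(-0.5, 0.5)  # weight, random start
b = 0.0                        # bias

# Learn logical NOT: input 0.0 -> output 1.0, input 1.0 -> output 0.0.
for _ in range(10000):
    for x, target in [(0.0, 1.0), (1.0, 0.0)]:
        y = sigmoid(w * x + b)
        grad = (y - target) * y * (1.0 - y)  # squared-error delta rule
        w -= grad * x
        b -= grad

# The learned weight ends up negative; the bias alone produces the "1".
```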
With best regards,
Andrew


Interested in supporting AForge.NET Framework?
andrew.kirillov
Site Admin, AForge.NET Developer
 
Posts: 3443
Joined: Fri Jan 23, 2009 9:12 am
Location: UK

Re: Scaling massive network inputs

Postby ovanwijk » Tue Jul 05, 2011 2:19 pm

Yeah, as I thought. What are the requirements for such a network to compute and train comfortably?
1,000 inputs, 2 hidden layers of about 1,500 neurons each, and 5 outputs?
Will a normal PC suffice, or should a more powerful machine be used?
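A rough back-of-the-envelope for that topology (a sketch in Python; the layer sizes are just the ones proposed above):

```python
# Parameter count for a 1000 -> 1500 -> 1500 -> 5 network, biases included.
inputs, h1, h2, outputs = 1000, 1500, 1500, 5
weights = inputs * h1 + h1 * h2 + h2 * outputs  # 3,757,500
biases = h1 + h2 + outputs                      # 3,005
total = weights + biases                        # 3,760,505 parameters
```

About 3.8 million double-precision parameters is roughly 30 MB, and one forward pass is a few million multiply-adds, so evaluating the network once a minute is easy on an ordinary desktop; the repeated training passes over thousands of samples are what will dominate the cost.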

Greets
ovanwijk
 
Posts: 4
Joined: Mon Jul 04, 2011 10:38 pm

Re: Scaling massive network inputs

Postby ovanwijk » Mon Jul 11, 2011 3:12 pm

OK, I am still developing this, and I am facing a problem that I don't really know how to solve.

My network currently looks like this:
1,000 inputs, of which only 50 are used per training sample; the rest are set to 0.0
1 hidden layer (varying from 50-500 neurons)
14 outputs

I am using a sigmoid activation function, and I have tried varying the momentum and learning rate.

The problem is this:
When I run ActivationNetwork.Run() or RunEpoch(), the error simply does not change!
It does not matter how big or small the hidden layer is, or what learning rate or momentum I choose.

Not even Randomize() changes the error :S.
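One thing worth checking (this is only a guess at the cause) is sigmoid saturation: with 50 active inputs and weights of order 1, a hidden neuron's weighted sum can easily reach |z| of 10-20, where the sigmoid's gradient is essentially zero, so backpropagation makes no visible progress regardless of learning rate or momentum. A small Python illustration of the effect (not AForge code):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_grad(z):
    s = sigmoid(z)
    return s * (1.0 - s)

# The gradient collapses as the pre-activation |z| grows:
for z in (0.5, 5.0, 20.0):
    print(f"z={z:5.1f}  output={sigmoid(z):.6f}  gradient={sigmoid_grad(z):.2e}")
# At z = 20 the gradient is ~2e-9: weight updates become invisibly
# small, and the training error appears completely frozen.
```

If that is the cause, scaling the input values down (or constructing the activation with a smaller alpha, since AForge's SigmoidFunction takes an alpha parameter) keeps pre-activations in the sigmoid's sensitive range and lets the error move again.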

Maybe someone has faced a similar problem before!

Greets
ovanwijk
 
Posts: 4
Joined: Mon Jul 04, 2011 10:38 pm

Re: Scaling massive network inputs

Postby justforgarbage » Sat Oct 15, 2011 10:05 pm

ovanwijk wrote: OK, I am still developing this, and I am facing a problem that I don't really know how to solve.

The problem is this:
When I run ActivationNetwork.Run() or RunEpoch(), the error simply does not change!
It does not matter how big or small the hidden layer is, or what learning rate or momentum I choose.


I have now run into the same problem.
I'm trying to build a network with 140 or so inputs and 3 outputs (and 400+ training samples), and the error from RunEpoch does not decrease.
I've found that if I take the XORProblem sample and increase the number of inputs and outputs there (just by duplicating them), I get the same outcome.
How can this problem be solved?

Also, this: viewtopic.php?f=5&t=2057 seems slightly relevant.
justforgarbage
 
Posts: 1
Joined: Sat Oct 15, 2011 10:00 pm
