
### using backpropagation for single layer perceptron

Posted: Tue Jun 14, 2011 5:05 am
As the title says, can I use the backpropagation learning library for a single-layer perceptron?

So there are only 6 inputs and 26 outputs. That's all, no hidden layer.

### Re: using backpropagation for single layer perceptron

Posted: Tue Jun 14, 2011 9:04 am
You can try it actually ...

In general it should work. However, it is a bit of overkill. For a single-layer network, weight updates can be calculated simply by the PerceptronLearning class (AForge.NET), which implements a simple algorithm described here.
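For reference, the perceptron learning rule mentioned above can be sketched in a few lines. This is a minimal Python sketch with illustrative names and numbers, not the actual AForge.NET PerceptronLearning code:

```python
# Minimal sketch of the perceptron learning rule (illustrative;
# not the actual AForge.NET PerceptronLearning implementation).
def perceptron_update(weights, threshold, inputs, desired, rate=0.1):
    # Weighted sum of inputs plus threshold, then hard-limit activation.
    s = sum(w * x for w, x in zip(weights, inputs)) + threshold
    output = 1.0 if s >= 0 else 0.0
    error = desired - output
    # Move each weight (and the threshold) by rate * error * input.
    new_weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    new_threshold = threshold + rate * error
    return new_weights, new_threshold, error

# One update step for a 2-input neuron that initially outputs 0:
w, t, e = perceptron_update([0.0, 0.0], -0.5, [1.0, 1.0], desired=1.0)
# e is 1.0, so the weights move toward the input.
```

Since the error is nonzero, both weights and the threshold shift by rate * error * input in a single step.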

### Re: using backpropagation for single layer perceptron

Posted: Tue Jun 14, 2011 11:37 am
Sorry, another question:

In AForge.NET, is there a bias node in the ANN?

Actually, I tried to compare the result from AForge.NET with a manual calculation in Excel, and they are different. So maybe AForge has a bias node.

How can I access the initial weights that have already been randomized?

### Re: using backpropagation for single layer perceptron

Posted: Tue Jun 14, 2011 12:15 pm
rads wrote: In AForge.NET, is there a bias node in the ANN?

The question is actually answered in the documentation. There is a Threshold value, which is added to the weighted sum of inputs. See Compute() for details.
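To check a manual (e.g. Excel) calculation against the library, the computation described above can be sketched as follows. The bipolar sigmoid formula and the alpha = 2 default are assumptions about AForge.NET's BipolarSigmoidFunction, so verify them against the documentation:

```python
import math

def bipolar_sigmoid(x, alpha=2.0):
    # Assumed form of the bipolar sigmoid: 2 / (1 + e^(-alpha*x)) - 1,
    # which maps any real x into the range (-1, 1).
    return 2.0 / (1.0 + math.exp(-alpha * x)) - 1.0

def neuron_compute(weights, threshold, inputs):
    # Weighted sum of inputs, plus the Threshold value, then activation.
    s = sum(w * x for w, x in zip(weights, inputs)) + threshold
    return bipolar_sigmoid(s)

# Illustrative numbers only:
out = neuron_compute([0.5, -0.25], 0.1, [1.0, 1.0])
```

If your Excel sheet leaves out the threshold term, its result will differ from the library's by exactly that missing addend inside the sigmoid.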

### Re: using backpropagation for single layer perceptron

Posted: Tue Jun 14, 2011 5:31 pm
Thanks, it helps.
So, in review:

step 1.
sum of weight * input

step 2.
add the Threshold (bias) value to the sum.

step 3.
put the result into the activation function, which is the bipolar sigmoid.

step 4.
error = desired output - actual network output
Where can I get this? Not the squared error output.
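The four steps above can be sketched for a single output neuron like this (a Python sketch with illustrative numbers; the bipolar sigmoid's form and alpha are assumptions):

```python
import math

def bipolar_sigmoid(x, alpha=2.0):
    # Assumed bipolar sigmoid: 2 / (1 + e^(-alpha*x)) - 1, range (-1, 1).
    return 2.0 / (1.0 + math.exp(-alpha * x)) - 1.0

def forward_and_error(weights, threshold, inputs, desired):
    s = sum(w * x for w, x in zip(weights, inputs))  # step 1: weighted sum
    s += threshold                                   # step 2: add the threshold
    actual = bipolar_sigmoid(s)                      # step 3: activation
    error = desired - actual                         # step 4: raw (not squared) error
    return actual, error

# With zero weights and threshold, the neuron outputs 0, so the
# raw error equals the desired output:
actual, error = forward_and_error([0.0, 0.0], 0.0, [1.0, 2.0], desired=1.0)
```

Step 4 here is the raw signed error, which is what the question asks for, not the squared error that a learning routine would typically report.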

### Re: using backpropagation for single layer perceptron

Posted: Tue Jun 14, 2011 5:37 pm
I'm doing this to access the error.

First: get the network's output for the input:
outputLooping = network.Compute(input[i]);

Second: subtract the result of Compute() from the desired output:
outputLooping[j] = (valueOuput2[i][j] - outputLooping[j]);

But when I compare this to my manual calculation, it is still different.

Is my method wrong or not?