Posted 2011.11.07 17:51


Create feed-forward backpropagation network


  • net = newff(P,T,[S1 S2...S(N-1)],{TF1 TF2...TFNl},BTF,BLF,PF,IPF,OPF,DDF)


newff(P,T,[S1 S2...S(N-1)],{TF1 TF2...TFNl},BTF,BLF,PF,IPF,OPF,DDF) takes the following arguments:

  P    R x Q1 matrix of Q1 sample R-element input vectors
  T    SN x Q2 matrix of Q2 sample SN-element target vectors
  Si   Size of ith layer, for the N-1 hidden layers; default = [].
       (The output layer size SN is determined from T.)
  TFi  Transfer function of ith layer (default = 'tansig' for
       hidden layers and 'purelin' for the output layer)
  BTF  Backpropagation network training function (default = 'trainlm')
  BLF  Backpropagation weight/bias learning function (default = 'learngdm')
  PF   Performance function (default = 'mse')
  IPF  Row cell array of input processing functions
       (default = {'fixunknowns','removeconstantrows','mapminmax'})
  OPF  Row cell array of output processing functions
       (default = {'removeconstantrows','mapminmax'})
  DDF  Data division function (default = 'dividerand')

and returns an N-layer feed-forward backpropagation network.
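
For instance, a fuller call might look like this. This is only a sketch with made-up data: two hidden layers of 10 and 5 neurons, so three transfer functions in total, with the training, learning, and performance functions given explicitly.

  • P = rand(3,20);    % 3-element inputs, 20 samples
    T = rand(2,20);    % 2-element targets, so the output layer size SN = 2
    net = newff(P,T,[10 5],{'tansig','tansig','purelin'}, ...
                'trainlm','learngdm','mse');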

The transfer functions TFi can be any differentiable transfer function such as tansig, logsig, or purelin.
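
The three can be compared by calling them directly on a numeric range and plotting the results (a quick sketch):

  • x = -5:0.1:5;
    plot(x,tansig(x), x,logsig(x), x,purelin(x))
    legend('tansig','logsig','purelin')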

The training function BTF can be any of the backpropagation training functions such as trainlm, trainbfg, trainrp, traingd, etc.

    Caution    trainlm is the default training function because it is very fast, but it requires a lot of memory to run. If you get an out-of-memory error when training, try one of these (sketched in code after the list):

  • Slow trainlm training but reduce memory requirements by setting net.trainParam.mem_reduc to 2 or more. (See trainlm.)
  • Use trainbfg, which is slower but more memory efficient than trainlm.
  • Use trainrp, which is slower but more memory efficient than trainbfg.
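
In code, the first workaround is one property assignment on an already created network, and the other two amount to swapping the training function (a sketch):

  • % Option 1: keep trainlm but reduce its memory footprint
    net.trainParam.mem_reduc = 2;
    % Options 2 and 3: switch the training function instead
    net.trainFcn = 'trainbfg';   % or 'trainrp'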

The learning function BLF can be either of the backpropagation learning functions learngd or learngdm.
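
The learning function can also be assigned per weight and bias after the network is created; a minimal sketch using the toolbox's property paths:

  • net.inputWeights{1,1}.learnFcn = 'learngd';
    net.layerWeights{2,1}.learnFcn = 'learngdm';
    net.biases{2}.learnFcn = 'learngdm';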

The performance function can be any of the differentiable performance functions such as mse or msereg.
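
Switching to msereg, for example, is one property assignment; msereg blends mse with a mean-squared-weights term controlled by performParam.ratio (a sketch):

  • net.performFcn = 'msereg';
    net.performParam.ratio = 0.5;   % weight given to the mse term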


Here is a problem consisting of inputs P and targets T to be solved with a network.

  • P = [0 1 2 3 4 5 6 7 8 9 10];
    T = [0 1 2 3 4 3 2 1 2 3 4];

Here a network is created with one hidden layer of five neurons.

  • net = newff(P,T,5);

The network is simulated and its output plotted against the targets.

  • Y = sim(net,P);
    plot(P,T,P,Y,'o')

The network is trained for 50 epochs. Again the network's output is plotted.

  • net.trainParam.epochs = 50;
    net = train(net,P,T);
    Y = sim(net,P);
    plot(P,T,P,Y,'o')
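
To quantify the improvement, the performance function can also be evaluated by hand on the error matrix (a sketch; mse accepts the error directly):

  • err = T - Y;
    fprintf('MSE after 50 epochs: %g\n', mse(err))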


Feed-forward networks consist of Nl layers using the dotprod weight function, the netsum net input function, and the specified transfer functions.

The first layer has weights coming from the input. Each subsequent layer has a weight coming from the previous layer. All layers have biases. The last layer is the network output.
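
This structure can be inspected directly on a created network; for the one-hidden-layer example above, these properties look like this (a sketch):

  • net.numLayers              % 2: one hidden layer plus the output layer
    net.IW{1,1}                % weights coming from the input to layer 1
    net.LW{2,1}                % weights from layer 1 to layer 2
    net.b{1}                   % bias vector of layer 1
    net.layers{1}.transferFcn  % 'tansig' by default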

Each layer's weights and biases are initialized with initnw.
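
Calling init reapplies these initialization functions, which is useful for re-randomizing a network before retraining:

  • net = init(net);   % re-initialize weights and biases via initnw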

Adaptation is done with trains, which updates weights with the specified learning function. Training is done with the specified training function. Performance is measured according to the specified performance function.
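
A minimal adaptation sketch, assuming the sequential (cell array) data form that adapt expects for incremental updates; con2seq does the conversion:

  • Pseq = con2seq(P);              % convert concurrent samples to a sequence
    Tseq = con2seq(T);
    [net,Yseq,Eseq] = adapt(net,Pseq,Tseq);   % one incremental pass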

See Also

newcf, newelm, sim, init, adapt, train, trains
