python - Why does a trained PyBrain network yield different results, even for an input used in training?


I have trained a neural network using PyBrain. When I test the network using the same input as one used in training, I get a completely different result. Here is my code:

from pybrain.structure import FeedForwardNetwork
from pybrain.structure import LinearLayer, SigmoidLayer
from pybrain.structure import FullConnection
import numpy as np
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised import BackpropTrainer
from pybrain.tools.xml.networkreader import NetworkReader
from pybrain.tools.xml.networkwriter import NetworkWriter
from pybrain.utilities import percentError

n = FeedForwardNetwork()

inLayer = LinearLayer(2)
hiddenLayer = SigmoidLayer(3)
outLayer = LinearLayer(1)

n.addInputModule(inLayer)
n.addModule(hiddenLayer)
n.addOutputModule(outLayer)

in_to_hidden = FullConnection(inLayer, hiddenLayer)
hidden_to_out = FullConnection(hiddenLayer, outLayer)

n.addConnection(in_to_hidden)
n.addConnection(hidden_to_out)
n.sortModules()

x = np.array(([3, 5], [5, 1], [10, 2]), dtype=float)
y = np.array(([75], [82], [93]), dtype=float)
x /= np.amax(x, axis=0)  # scale each input column to [0, 1]
y /= 100                 # scale targets to [0, 1]

print(n.activate([1, 2]))
print(in_to_hidden.params)

ds = SupervisedDataSet(2, 1)
for i in range(len(x)):
    ds.addSample(x[i], y[i])

trainer = BackpropTrainer(n, ds, learningrate=0.5, momentum=0.05, verbose=True)
trainer.trainUntilConvergence(ds)
trainer.testOnData(ds, verbose=True)

Now, when I want to test on an input using the code print("testing", n.activate([3, 5])), I get ('testing', array([ 1.17809308])). I should have had around 0.75 for the input n.activate([3, 5]). I don't understand this strange result.
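Note that x and y are normalised before training in the code above (x /= np.amax(x, axis=0) and y /= 100), so the raw pair [3, 5] corresponds to the scaled row x[0]. A minimal sketch of activating on the same scale the network was trained on:

# x[0] is the raw input [3, 5] after scaling, i.e. [0.3, 1.0]
print("testing", n.activate(x[0]))

The output of this call is on the y/100 scale, so a prediction near 0.75 would correspond to 75.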

If I understand correctly, this is one aspect of model validation that you have to undertake. The network seeks to minimise its error against all of the training data, but it will not match each result exactly. You could improve prediction accuracy by running more epochs with more hidden neurons. However, doing so can lead to over-fitting through excessive flexibility. It's a bit of a balancing act.
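As a minimal sketch of watching that balance during training: PyBrain's trainUntilConvergence can hold out a validationProportion of the samples and stop once the validation error stops improving (the maxEpochs and proportion here are arbitrary, not taken from the question):

# Hold out 25% of the samples for validation; training stops when the
# validation error stops improving, or after maxEpochs.
train_errors, val_errors = trainer.trainUntilConvergence(
    dataset=ds, maxEpochs=1000, validationProportion=0.25)
print("final training error:  ", train_errors[-1])
print("final validation error:", val_errors[-1])

A validation error that keeps rising while the training error keeps falling is the classic over-fitting signature.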

As an analogy, take regression. In the linear case below, the model does not match any of the training (blue) data exactly, but it captures the trend of both the blue and the red (external test) data. Using the linear equation would give me a slightly wrong answer for any given data point, but it's a decent approximator. I could instead fit a polynomial trendline through the data. It has a lot more flexibility, hitting all of the blue points, but the error on the testing data has increased.

[figure: linear vs. polynomial regression fits over training (blue) and external test (red) data]
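A minimal numpy sketch of that effect (the data points here are invented purely for illustration): a straight line misses the training points slightly but generalises, while a degree-4 polynomial interpolates all five training points exactly and does worse on the held-out points:

import numpy as np

# Illustrative data: a noisy linear trend (values made up for the example).
x_train = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_train = np.array([1.2, 1.9, 3.2, 3.8, 5.1])
x_test = np.array([1.5, 4.5])
y_test = np.array([1.6, 4.4])

for degree in (1, 4):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(degree, train_err, test_err)

# Degree 4 drives the training error to ~0 by hitting every blue point,
# but its error on the test points is much larger than the straight line's.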

Once you have the network built, you need to rerun all of the data through it. You can then validate it on absolute average deviation, MSE, MASE etc., in addition to things like k-fold cross-validation. Your tolerance of error depends on your application: in engineering, I might need to be within 5% error, and anything exceeding that threshold (which would occur in the second graph) could have fatal consequences. In language processing, I might be able to tolerate one or two real mess-ups and try to catch them another way, so if the majority of predictions were very close, I'd possibly take the second graph.
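A minimal sketch of that kind of check, reusing n and ds from the question (nothing here is PyBrain-specific beyond n.activate and iterating over the dataset):

# Rerun every sample through the trained network and compare
# predictions against targets.
preds = np.array([n.activate(inp) for inp, _ in ds])
targets = np.array([tgt for _, tgt in ds])

mse = np.mean((preds - targets) ** 2)   # mean squared error
aad = np.mean(np.abs(preds - targets))  # absolute average deviation
print("MSE:", mse, "AAD:", aad)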

Playing with the learning rate and momentum might also help you converge on a better solution.
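For instance, a minimal sketch of trying a few combinations (the candidate values and epoch budget are arbitrary; buildNetwork with bias=False gives the same 2-3-1 sigmoid/linear architecture as the hand-built network in the question):

from pybrain.tools.shortcuts import buildNetwork

best = (None, float('inf'))
for lr in (0.01, 0.1, 0.5):
    for mom in (0.0, 0.05, 0.2):
        # Fresh network per combination so the runs don't share weights.
        net = buildNetwork(2, 3, 1, bias=False)
        t = BackpropTrainer(net, ds, learningrate=lr, momentum=mom)
        for _ in range(200):  # fixed epoch budget per combination
            err = t.train()
        if err < best[1]:
            best = ((lr, mom), err)

print("best (learningrate, momentum):", best[0], "error:", best[1])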

Edit: based on the comments

the comment "should have been able recognise it" implies me different basis of neural network. there not vague concept of memory in network, uses training data develop convoluted set of rules try , minimise error against data points. once network trained, has no recollection of of training data, it's left spaghetti of multiplication steps perform on input data. no matter how network is, never able reverse-map training inputs right answer.

the idea of "convergence" cannot taken mean have network. network might have found local minima in error , given learning. why must validate models. if not happy result of validation, can try improve model by:
- re-running again. random initialisation of network might avoid local minima
- changing number of neurons. loosens or tightens flexibility of model
- change learning rate , momentum
- change learning rule e.g. swapping levenberg-marquardt bayesian regularisation
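A minimal sketch of the first option, restarting the question's network n a few times with fresh random weights and keeping the best run (I'm assuming randomize() and _setParameters(), which PyBrain networks inherit from ParameterContainer):

best_err, best_params = float('inf'), None
for restart in range(5):
    n.randomize()  # re-initialise all weights randomly
    trainer = BackpropTrainer(n, ds, learningrate=0.5, momentum=0.05)
    trainer.trainUntilConvergence(ds)
    err = trainer.testOnData(ds)
    if err < best_err:
        best_err, best_params = err, n.params.copy()

n._setParameters(best_params)  # keep the weights from the best restart
print("best error over restarts:", best_err)

Because each restart descends from a different starting point, runs that get stuck in a poor local minimum are simply discarded.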

