machine learning - Why does Neural Network give same accuracies for permuted labels?


I have a dataset of 37 data points with around 1,300 features. There are 4 different classes, each with roughly the same number of data points. I trained a neural network with 2 hidden layers and got an accuracy of 60%, which is not bad (chance level is 25%).

The problem is the p-value. I am computing a p-value with a permutation test: I permute the labels 1000 times, and for each permutation I compute the accuracy. The p-value is then the percentage of permutation accuracies that are at or above the original accuracy.
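A minimal sketch of this permutation test in Python, assuming scikit-learn's `LogisticRegression` as a stand-in for the neural network (the data here is synthetic, and names like `n_permutations` are illustrative, not from the original setup):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-in for the real data: 37 samples, ~1300 features, 4 classes.
X = rng.normal(size=(37, 1300))
y = rng.integers(0, 4, size=37)

def accuracy(X, y):
    # Fit a simple classifier and score it on the training set,
    # mirroring an evaluation on the training data.
    clf = LogisticRegression(max_iter=1000)
    return clf.fit(X, y).score(X, y)

observed = accuracy(X, y)

n_permutations = 100  # the question uses 1000; fewer here for speed
perm_scores = np.array([accuracy(X, rng.permutation(y))
                        for _ in range(n_permutations)])

# p-value: fraction of permutation accuracies at or above the observed one
# (the +1 terms are the standard small-sample correction).
p_value = (np.sum(perm_scores >= observed) + 1) / (n_permutations + 1)
```

If the scores are computed on held-out data instead, the permutation scores should scatter around chance level and the p-value becomes meaningful.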

For the permuted labels I am getting the same accuracy as with the original labels, i.e. the neural network does not seem to take the labels into account during learning.

With an SVM, the permutations give different accuracies (overall, a Gaussian-looking distribution).

Why is this the case?

By the way, I'm using the DeepLearnToolbox for MATLAB.

Is the 60% success rate measured on the training data or on a validation set that was set aside?

If you are computing the success rate on the training data, you should expect high accuracy even after permuting the labels. That is because the classifier will overfit the data (1,300 features for only 37 data points) and achieve near-perfect performance on the training data regardless of what the labels are.
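A quick way to see this effect, again using scikit-learn's `LogisticRegression` as a hedged stand-in for the network on synthetic data: with far more features than samples, even completely random labels are fit almost perfectly on the training set, while held-out accuracy stays near chance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(37, 1300))   # far more features than samples
y = rng.integers(0, 4, size=37)   # completely random 4-class labels

clf = LogisticRegression(max_iter=1000).fit(X, y)
train_acc = clf.score(X, y)       # accuracy on the training data itself

# Held-out accuracy on the same random labels, via 3-fold cross-validation.
cv_acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=3).mean()

# train_acc is (near) perfect because the model memorizes the labels;
# cv_acc hovers around the 25% chance level.
```

This is exactly why the permutation test must score the model on data it was not trained on: otherwise every permutation "succeeds" and the null distribution collapses onto the observed accuracy.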

