I am training AlexNet from scratch on a self-collected dataset of 604 images covering 10 different mango leaf diseases, using 5-fold cross-validation. My testing accuracy is great, but my validation accuracy is not converging. I have pasted my model-evaluation code below and attached the training graphs from two runs.

The average of the five per-fold evaluation accuracies is 96%.

Since my dataset is small, can I use the validation set as the test set?
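
The alternative I can think of is to hold out a small test set once, before cross-validation, so the validation folds never double as the final test set. Here is a minimal sketch of that idea (the stratified split and the 15% test fraction are placeholder choices, not part of my current code):

```
from sklearn.model_selection import train_test_split, KFold

# hold out a final test set once, before any cross-validation;
# stratify so all 10 disease classes appear in both splits
# (pass integer class labels to stratify if dataY is one-hot encoded)
trainvalX, testX, trainvalY, testY = train_test_split(
    dataX, dataY, test_size=0.15, stratify=dataY, random_state=1)

# run the 5-fold cross-validation only on the remaining 85%;
# testX/testY are evaluated once, after all tuning is finished
kfold = KFold(n_splits=5, shuffle=True, random_state=1)
for train_ix, val_ix in kfold.split(trainvalX):
    trainX, valX = trainvalX[train_ix], trainvalX[val_ix]
    trainY, valY = trainvalY[train_ix], trainvalY[val_ix]
    # ... fit and validate the model on this fold ...
```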

I am also working toward publishing a research paper, so I have one more question: when authors report an accuracy of 90% or 92% in a paper, does that mean test accuracy, validation accuracy, or evaluation accuracy?

```
from sklearn.model_selection import KFold

def evaluate_model(model, dataX, dataY, n_folds=5):
    scores, histories = list(), list()
    # save the freshly initialized weights so every fold starts from scratch
    init_weights = model.get_weights()
    # prepare cross-validation
    kfold = KFold(n_folds, shuffle=True, random_state=1)
    # enumerate splits
    for fold, (train_ix, test_ix) in enumerate(kfold.split(dataX), start=1):
        print("k fold", fold)
        # select rows for train and validation
        trainX, trainY = dataX[train_ix], dataY[train_ix]
        testX, testY = dataX[test_ix], dataY[test_ix]
        # reinitialize the model: keras.backend.clear_session() does not
        # reset the weights, so without this each fold keeps training the
        # same model, which has already seen its held-out data
        # (note: the optimizer state is not reset; rebuilding and
        # recompiling the model each fold would be even cleaner)
        model.set_weights(init_weights)
        # fit model
        history = model.fit(trainX, trainY, epochs=200, batch_size=32,
                            validation_data=(testX, testY), verbose=0)
        # evaluate model on the held-out fold
        result = model.evaluate(testX, testY, verbose=0)
        print(result)
        # store scores
        scores.append(result)
        histories.append(history)
    return scores, histories
```
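
For reference, this is roughly how I call the function and average the per-fold accuracies (this assumes the model is compiled with metrics=['accuracy'], so each result from model.evaluate is [loss, accuracy]):

```
import numpy as np

# model, dataX, dataY are defined as in the question above
scores, histories = evaluate_model(model, dataX, dataY, n_folds=5)

# take the accuracy entry of each [loss, accuracy] result
accs = [s[1] for s in scores]
print("mean accuracy: %.3f (+/- %.3f)" % (np.mean(accs), np.std(accs)))
```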
