### 2.4 Test the model

The model can generate output predictions for the input samples.

In [10]:
# predict_classes was removed in recent Keras versions; taking the argmax
# of the predicted class probabilities is the equivalent operation.
prediction_values = np.argmax(model.predict(X_test), axis=1)
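To illustrate what the class-prediction step does without a trained model, here is a minimal NumPy sketch: the rows below are synthetic softmax-style outputs standing in for `model.predict(X_test)`, and the predicted class is simply the index of the largest probability in each row.

```python
import numpy as np

# Synthetic softmax-style outputs for 3 samples over 4 classes
# (a stand-in for model.predict(X_test); no real model is used here).
probs = np.array([
    [0.10, 0.70, 0.10, 0.10],
    [0.80, 0.10, 0.05, 0.05],
    [0.20, 0.20, 0.20, 0.40],
])

# Equivalent of the removed predict_classes: index of the largest probability.
prediction_values = np.argmax(probs, axis=1)
print(prediction_values)  # → [1 0 3]
```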


### 2.5 Accuracy

The test accuracy reported here is the mean validation accuracy across all training epochs:

In [11]:
print("Test-Accuracy:","%.2f%%" % (np.mean(results.history["val_accuracy"])*100))

Test-Accuracy: 98.91%
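The computation above averages the per-epoch validation accuracies stored in `results.history`. A minimal sketch with a hypothetical history dict (the values below are illustrative, not from the actual training run):

```python
import numpy as np

# Hypothetical history dict mimicking Keras results.history.
history = {"val_accuracy": [0.90, 0.96, 0.98, 1.00]}

# Mean validation accuracy over all recorded epochs, as a percentage.
mean_acc = np.mean(history["val_accuracy"]) * 100
print("Test-Accuracy:", "%.2f%%" % mean_acc)  # → Test-Accuracy: 96.00%
```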


### 2.6 Evaluate the model

Now we can evaluate the loss and accuracy of the model on both the training and test sets.

In [12]:
print("Evaluating on training set...")
(loss, accuracy) = model.evaluate(X_train, y_train.T, verbose=0)
print("loss={:.4f}, accuracy: {:.4f}%".format(loss,accuracy * 100))

print("Evaluating on testing set...")
(loss, accuracy) = model.evaluate(X_test, y_test.T, verbose=0)
print("loss={:.4f}, accuracy: {:.4f}%".format(loss,accuracy * 100))

Evaluating on training set...
loss=0.0042, accuracy: 100.0000%
Evaluating on testing set...
loss=0.0050, accuracy: 100.0000%
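For a classification metric, the accuracy that `model.evaluate` returns is the fraction of predictions that match the true labels. A minimal NumPy sketch with illustrative arrays (not the actual notebook data):

```python
import numpy as np

# Illustrative labels and predictions (not from the actual dataset).
y_true = np.array([0, 1, 2, 1, 0])
y_pred = np.array([0, 1, 2, 0, 0])

# Accuracy = fraction of matching entries; 4 of 5 agree here.
accuracy = np.mean(y_true == y_pred)
print("accuracy: {:.4f}%".format(accuracy * 100))  # → accuracy: 80.0000%
```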


### 2.7 Summarize history for accuracy

In [13]:
plt.plot(results.history['accuracy'])
plt.plot(results.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='lower right')

Out[13]:
<matplotlib.legend.Legend at 0x2930f4fcf28>

### 2.8 Summarize history for loss

In [14]:
plt.plot(results.history['loss'])
plt.plot(results.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right')

max_loss = np.max(results.history['loss'])
min_loss = np.min(results.history['loss'])
print("Maximum Loss : {:.4f}".format(max_loss))
print("")
print("Minimum Loss : {:.4f}".format(min_loss))
print("")
print("Loss difference : {:.4f}".format((max_loss - min_loss)))

Maximum Loss : 0.6824

Minimum Loss : 0.0043

Loss difference : 0.6781