Handwritten Digit Classification with TensorFlow

If the Exclusive OR (XOR) problem is the letter A of the machine learning alphabet, then handwritten digit classification on the MNIST database is the letter B. Even though this task requires image processing, the solution can be modelled similarly to the XOR example. Although newer approaches such as convolutional neural networks show higher performance for both training and prediction, traditional neural networks can classify these digits successfully. Today, we are going to mention how to model MNIST with TensorFlow.

[Figure: Handwritten digit instances]



We can already access the MNIST database under the TensorFlow library. Importing the following module enables us to retrieve it.

from tensorflow.examples.tutorials.mnist import input_data

Then, the read command below downloads the instances into the specified location on the first run, whereas it reuses the already downloaded files on subsequent runs.

MNIST_DATASET = input_data.read_data_sets(MNIST_STORE_LOCATION)
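For reference, the complete setup can be collected into one minimal block. The storage path here is only an assumed example (any writable folder works), and numpy and matplotlib are imported because they are used later in the post.

import numpy as np #used to cast images and labels to numpy arrays below
import tensorflow as tf
import matplotlib.pyplot as plt #used to plot a test instance later
from tensorflow.examples.tutorials.mnist import input_data

MNIST_STORE_LOCATION = "/tmp/MNIST_data" #assumed example path, any writable folder works
MNIST_DATASET = input_data.read_data_sets(MNIST_STORE_LOCATION)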

Handwritten digits are stored as 28×28 images of pixel values along with their labels (0 to 9). Moreover, the instances are already split into train and test sets.

[Figure: MNIST_DATASET.train.images]

There are 55K instances in the train set; that is why the x-axis states the train set size. On the other hand, each instance in the train set is stored as a 28×28 pixel image; that is why there are 28×28 = 784 features per instance.
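We can confirm these figures directly in code; the counts below come from the standard MNIST split shipped with TensorFlow (55K train and 10K test instances).

print(MNIST_DATASET.train.images.shape) #(55000, 784) - 55K instances with 784 features each
print(MNIST_DATASET.test.images.shape) #(10000, 784)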

[Figure: MNIST_DATASET.train.labels]

Each instance in the train set is labelled with a number from 0 to 9. A black field in a column states the label of an image, and only one field in a column can be black. For example, the 5th, 0th, 4th and 1st fields are filled black respectively for the first train set images.
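We can check the same labels in code; the first four instances of the standard train set are labelled 5, 0, 4 and 1, matching the figure above.

print(MNIST_DATASET.train.labels[0:4]) #expected output: [5 0 4 1]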

So, we can work on the train and test set pixels and labels as illustrated below.

train_data = np.array(MNIST_DATASET.train.images, 'float32') #pixel values are floats scaled to [0, 1]; casting to int would truncate most of them to 0
train_target = np.array(MNIST_DATASET.train.labels, 'int64') #labels are integers from 0 to 9

test_data = np.array(MNIST_DATASET.test.images, 'float32')
test_target = np.array(MNIST_DATASET.test.labels, 'int64')

Now, we can implement learning with a deep neural network classifier.

feature_columns = [tf.contrib.layers.real_valued_column("", dimension=len(MNIST_DATASET.train.images[1]))] #one real-valued feature per pixel, 784 in total

classifier = tf.contrib.learn.DNNClassifier(
 feature_columns=feature_columns
 , n_classes=10 #0 to 9 - 10 classes
 , hidden_units=[128, 32] #2 hidden layers consisting of 128 and 32 units respectively
 , optimizer=tf.train.ProximalAdagradOptimizer(learning_rate=learningRate)
 , activation_fn = tf.nn.relu
 , model_dir="model"
)

classifier.fit(train_data, train_target, steps=epoch)

The train set consists of 55K instances, and it is hard to process them all on a CPU. That is why you might want to feed randomly selected mini-batches instead of the whole set. You should change the training step as illustrated below; in this way, training is over in a couple of minutes.

def generate_input_fn(data, label):
 image_batch, label_batch = tf.train.shuffle_batch(
  [data, label]
  , batch_size=batch_size
  , capacity=8*batch_size #maximum number of elements held in the queue
  , min_after_dequeue=4*batch_size #minimum kept after a dequeue, ensures shuffling quality
  , enqueue_many=True #data and label each hold many instances
 )
 return image_batch, label_batch

def input_fn_for_train():
 return generate_input_fn(train_data, train_target)

#learning step
classifier.fit(input_fn=input_fn_for_train, steps=epoch)


We should evaluate the model on the test instances when network training is over.

accuracy_score = classifier.evaluate(test_data, test_target, steps=epoch)['accuracy']
print("accuracy:", 100*accuracy_score)

This configuration produces 97.83% accuracy on the test set and 99.77% accuracy on the (randomly selected) train set. Of course, the model would produce a different accuracy score on each run because of random weight initialization.
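The train set score can be retrieved in the same way; the sketch below simply re-uses the evaluate call on the training instances (steps=1 is an assumption to keep the check fast).

train_accuracy_score = classifier.evaluate(train_data, train_target, steps=1)['accuracy']
print("train accuracy:", 100*train_accuracy_score)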

[Figure: Accuracy score]

Funnily, this model misclassifies the following examples, which are hard to classify even for most people. The accuracy score might leave a wrong impression, but it is actually a good score.

[Figure: Incorrect classifications]

We can also plot a handwritten image based on its features, along with its predicted class, as demonstrated below.

predictions = classifier.predict_classes(test_data)

index = 0

for i in predictions:
 if index == 10: #to plot the 10th instance and its prediction
  print("prediction: ", i)
  #-----------------------
  pred = MNIST_DATASET.test.images[index]
  pred = pred.reshape([28, 28])
  plt.gray()
  plt.imshow(pred)
  plt.show()
  break #no need to consume the remaining predictions
 index = index + 1

[Figure: Classifying an instance in the test set]

Loss decreases over iterations as expected. We can trace it on TensorBoard.
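Since the classifier was created with model_dir="model", its loss summaries are written to that folder. Pointing TensorBoard at it should be enough (assuming TensorBoard was installed along with TensorFlow):

tensorboard --logdir=model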

[Figure: Loss change]

Importantly, we have used the following parameters to train the network.

epoch = 15000 #number of training steps
learningRate = 0.1
batch_size = 40

To Sum Up

So, we have developed a model for handwritten digit classification with only 12 lines of effective code, and it seems that the model produces fairly successful results. On the other hand, training lasts almost 3.3 hours for a 10K epoch value on my local computer (Intel Core i7-6600U CPU @ 2.60 GHz, 16GB RAM and 64-bit OS). We will mention how to model the same problem with convolutional neural networks in upcoming posts. In this way, we can produce more successful results with less computation power.

If you would like to dig deeper, you should check out the online course TensorFlow 101: Introduction to Deep Learning.
