Even Superheroes Need to Rest: Working on Trained Neural Networks in Weka

Applying neural networks can be divided into two phases: learning and forecasting. The learning phase is expensive, whereas the forecasting phase produces results very quickly. The epoch value (a.k.a. training time), the network structure, and the size of the historical data determine the cost of the learning phase. Normally, a larger epoch value produces better results; however, increasing it also makes training take longer. That is why picking a very large epoch value is not practical for online transactions if learning is performed on the fly.
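
In Weka's MultilayerPerceptron, the epoch value is controlled by the -N option (hence the alias "training time"), and the hidden layer structure by -H. As a sketch, the option string below requests a 10M epoch value and a single hidden layer of 3 nodes, matching the XOR example later in this post; the learning rate (-L) and momentum (-M) values are illustrative assumptions:

//illustrative options: learning rate, momentum, 10M epochs,
//one hidden layer with 3 nodes (-L and -M values are assumptions)
String backPropOptions = "-L 0.1 -M 0.2 -N 10000000 -H 3";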

[Figure: Even superheroes need to rest]

However, we can run the learning and forecasting steps asynchronously. We would perform neural network learning as a batch application (e.g. a periodic day-end or month-end calculation). Thus, the epoch value can be picked very large, and the network weights can be calculated under low system load (most probably in the late night hours). This way, it does not matter how long learning lasts, and we can still make forecasts for online transactions in milliseconds. You might think of this approach as the human nervous system updating its own weights while sleeping.


In a previous post, we covered a Java implementation of building neural networks with Weka for the XOR example; the project code is shared on GitHub. Now, we will modify that code a little to apply this approach, so we can make predictions faster.

Once the network is trained, we store its binary content as illustrated below. We can either save it as a file or store it in a database as a BLOB. I prefer to save it as a file on the operating system to keep the example simple.

//network training
MultilayerPerceptron mlp = new MultilayerPerceptron();
mlp.setOptions(Utils.splitOptions(backPropOptions));
mlp.buildClassifier(trainingset);

//store trained network
byte[] binaryNetwork = serialize(mlp);
writeToFile(binaryNetwork, "C:\\");

...

public static byte[] serialize(Object obj) throws Exception {
 ByteArrayOutputStream b = new ByteArrayOutputStream();
 ObjectOutputStream o = new ObjectOutputStream(b);
 o.writeObject(obj);
 o.flush(); //flush buffered object data before extracting the bytes
 return b.toByteArray();
}

public static void writeToFile(byte[] binaryNetwork, String location)
 throws Exception {
 FileOutputStream stream = new FileOutputStream(location + "network.txt");
 stream.write(binaryNetwork);
 stream.close();
}
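
For the database alternative mentioned above, a minimal JDBC sketch could look like the following. The table and column names are hypothetical, and the Connection object is assumed to be opened elsewhere; any JDBC-compliant database with a BLOB column would do.

//store the serialized network in a BLOB column
//(requires java.sql.Connection and java.sql.PreparedStatement;
//table and column names are hypothetical)
public static void writeToDatabase(byte[] binaryNetwork, Connection conn)
 throws Exception {
 String sql = "insert into trained_networks (name, content) values (?, ?)";
 PreparedStatement statement = conn.prepareStatement(sql);
 statement.setString(1, "xor-mlp");
 statement.setBytes(2, binaryNetwork); //binary network content as BLOB
 statement.executeUpdate();
 statement.close();
}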

For instance, the learning process for the XOR problem lasts 22.70 seconds for a 10M epoch value. This means that if this process sits behind an online transaction, you have to wait until training is finished. In this example, the network structure is very basic (2 nodes in the input layer and 1 hidden layer consisting of 3 nodes), and the historical data consists of only 4 instances. The calculation time would increase dramatically (even to hours) for problems that require complex network structures, larger training sets, or larger epoch values.

[Figure: First Time Network Training]

The network content actually includes the network structure (nodes and layers) and the final weights. The stored content lets us use the final weights without repeating any calculation. We would run the following code block to restore the network.

MultilayerPerceptron mlp = readFromFile(location);
System.out.println("network weights and structure are loaded...");

...

public static MultilayerPerceptron readFromFile(String location) {

 MultilayerPerceptron mlp = new MultilayerPerceptron();

 //binary network was saved to the following file
 File file = new File(location + "network.txt");

 try {

  //read the whole binary content at once; a single FileInputStream.read()
  //call is not guaranteed to fill the buffer (requires java.nio.file.Files)
  byte[] binaryFile = Files.readAllBytes(file.toPath());

  mlp = (MultilayerPerceptron) deserialize(binaryFile);

 }
 catch (Exception ex) {
  System.out.println(ex);
 }

 return mlp;
}

public static Object deserialize(byte[] bytes) throws Exception {
 ByteArrayInputStream b = new ByteArrayInputStream(bytes);
 ObjectInputStream o = new ObjectInputStream(b);
 return o.readObject();
}
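
As a side note, Weka ships with a small utility that wraps exactly this serialize/deserialize cycle. If you prefer not to maintain the helper methods above, weka.core.SerializationHelper achieves the same result (the .model file name below is illustrative):

//save the trained network
weka.core.SerializationHelper.write(location + "network.model", mlp);

//restore it later
MultilayerPerceptron restored = (MultilayerPerceptron)
 weka.core.SerializationHelper.read(location + "network.model");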

Testing

Thus, the same results are produced in 80 milliseconds, roughly 280 times faster than the first run, where the network was trained from scratch!
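
For completeness, a minimal forecasting sketch on the restored network could look like the following. The xor.arff file name is an assumption; any dataset with the same structure as the training set from the previous post would work.

//requires weka.core.Instances and weka.core.converters.ConverterUtils.DataSource
Instances testset = new DataSource("xor.arff").getDataSet();
testset.setClassIndex(testset.numAttributes() - 1);

MultilayerPerceptron mlp = readFromFile(location);

//forecasting is just a feed-forward pass over the stored weights
for (int i = 0; i < testset.numInstances(); i++) {
 double prediction = mlp.classifyInstance(testset.instance(i));
 System.out.println(testset.instance(i) + " -> " + prediction);
}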

[Figure: Working on Trained Network]

In this post, predicting with a neural network was treated as two phases: learning and forecasting. We tried to show the advantages of working on an already trained network and mentioned how this approach makes calculations faster. Finally, this approach has been added to the shared project on GitHub. You might change the dataset and monitor how the timings change.

