Building Neural Networks with Weka in Java

Building neural network models and implementing learning algorithms require a lot of math, and this might be boring. Fortunately, some tools help researchers to build networks easily. Thus, a researcher who knows the basic concepts of neural networks can build a model without applying any math formula.

Weka caution



Endemic Bird

Weka is one of the most common tools for machine learning studies. It is a Java-based API developed by the University of Waikato, New Zealand. Weka is an acronym for Waikato Environment for Knowledge Analysis. Actually, the name of the tool is a funny word play, because the weka is a bird species endemic to New Zealand. In this way, the researchers introduced an endemic bird to the whole world.

Although Weka provides a nice graphical user interface, we will focus on how to build neural network models in our Java programs by referencing its library (having weka.jar on the classpath is enough). This way, machine learning can be adapted directly into daily business flows.

Normally, Weka works on files with the .arff extension. The file content looks like the following form when the standard XOR example is transformed to ARFF format. The XOR dataset is chosen here because linear approaches cannot classify this problem successfully.

xor.arff

@relation xor

@attribute input1 numeric
@attribute input2 numeric
@attribute target numeric

@data
0,0,0
0,1,1
1,0,1
1,1,0

After that, a neural network model can be built as demonstrated below.

import java.io.BufferedReader;
import java.io.FileReader;

import weka.classifiers.Evaluation;
import weka.classifiers.functions.MultilayerPerceptron;
import weka.core.Instances;
import weka.core.Utils;

//network variables
String backPropOptions =
 "-L "+0.1 //learning rate
 +" -M "+0 //momentum
 +" -N "+10000 //training time (epochs)
 +" -V "+0 //validation set size (percent)
 +" -S "+0 //seed for random weight initialization
 +" -E "+0 //validation threshold (consecutive errors allowed)
 +" -H "+"3"; //hidden nodes
 //e.g. use "3,3" for 2 hidden layers with 3 nodes each

try{
 //prepare historical data
 String historicalDataPath
  = System.getProperty("user.dir")+"\\dataset\\xor.arff";
 BufferedReader reader
 = new BufferedReader(new FileReader(historicalDataPath));
 Instances trainingset = new Instances(reader);
 reader.close();

 trainingset.setClassIndex(trainingset.numAttributes() - 1);
 //final attribute in a line stands for output

 //network training
 MultilayerPerceptron mlp = new MultilayerPerceptron();
 mlp.setOptions(Utils.splitOptions(backPropOptions));
 mlp.buildClassifier(trainingset);
 System.out.println("final weights:");
 System.out.println(mlp);

 //display actual and forecast values
 System.out.println("\nactual\tprediction");
 for(int i=0;i<trainingset.numInstances();i++){

  double actual = trainingset.instance(i).classValue();
  //for a numeric class, distributionForInstance returns the prediction
  double prediction
   = mlp.distributionForInstance(trainingset.instance(i))[0];

  System.out.println(actual+"\t"+prediction);

 }

 //success metrics
 System.out.println("\nSuccess Metrics: ");
 Evaluation eval = new Evaluation(trainingset);
 eval.evaluateModel(mlp, trainingset);

 //display metrics
 System.out.println("Correlation: "+eval.correlationCoefficient());
 System.out.println("MAE: "+eval.meanAbsoluteError());
 System.out.println("RMSE: "+eval.rootMeanSquaredError());
 System.out.println("RAE: "+eval.relativeAbsoluteError()+"%");
 System.out.println("RRSE: "+eval.rootRelativeSquaredError()+"%");
 System.out.println("Instances: "+eval.numInstances());

}
catch(Exception ex){

 System.out.println(ex);

}
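By the way, Weka also ships a generic loader, weka.core.converters.ConverterUtils.DataSource, which can read ARFF files without manual reader handling. The following is a minimal sketch of loading the same dataset this way; its calls throw checked exceptions, so keep them inside a try/catch block as above.

import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

//alternative loading via Weka's generic converter
DataSource source = new DataSource(
 System.getProperty("user.dir")+"\\dataset\\xor.arff");
Instances trainingset = source.getDataSet();
trainingset.setClassIndex(trainingset.numAttributes() - 1);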

Of course, you do not have to work with files with the .arff extension. Alternatively, you might transform comma separated files to the attribute relation file format at runtime, as illustrated below.

xor.txt

input1,input2,target
0,0,0
0,1,1
1,0,1
1,1,0

Transforming a comma separated file to the attribute relation file format. The logic below is wrapped in a helper method (named loadCsvAsInstances here just for illustration); a short usage sketch follows it.
import java.io.BufferedReader;
import java.io.FileReader;

import weka.core.Attribute;
import weka.core.FastVector;
import weka.core.Instance;
import weka.core.Instances;

//note: Weka >= 3.7 deprecates FastVector and Instance
//in favor of java.util.ArrayList and DenseInstance
public static Instances loadCsvAsInstances() throws Exception {

 String historicalDataPath
  = System.getProperty("user.dir")+"\\dataset\\xor.txt";
 BufferedReader br
  = new BufferedReader(new FileReader(historicalDataPath));
 String line = br.readLine(); //header line holds the attribute names

 String[] headers = line.split(",");
 FastVector fvWekaAttributes = new FastVector(headers.length);
 for(int i=0; i<headers.length; i++){
  fvWekaAttributes.addElement(new Attribute(headers[i]));
 }
 Instances trainingset = new Instances("train", fvWekaAttributes, 0);
 //initialize an empty dataset with the attribute metadata

 while ((line = br.readLine()) != null) {
  String[] items = line.split(",");
  Instance trainingItem = new Instance(items.length);

  for(int i=0; i<items.length; i++){
   trainingItem.setValue(i, Double.parseDouble(items[i]));
  }

  trainingset.add(trainingItem);
 }
 br.close();

 return trainingset;
}
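For instance, the returned training set can feed the same training flow as before. This is a minimal usage sketch; remember to set the class index before building the classifier, and to handle the checked exceptions as in the earlier snippet.

Instances trainingset = loadCsvAsInstances();
trainingset.setClassIndex(trainingset.numAttributes() - 1);
//final attribute in a line stands for output

MultilayerPerceptron mlp = new MultilayerPerceptron();
mlp.setOptions(Utils.splitOptions(backPropOptions));
mlp.buildClassifier(trainingset);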
Outputs and Success Metrics of the Built Neural Network Model

So, building neural networks with Weka is very easy, and its performance is high. Although the epoch parameter is set to 10,000, the model is built in seconds. Moreover, the model produces very successful results: the correlation is almost 100% and the MAE is almost 0.

Handicaps

Although Weka makes it easy to build neural network models, it is not perfect. Firstly, it supports only the back propagation algorithm; MATLAB's nntool, for instance, also supports the Levenberg–Marquardt algorithm for learning. Moreover, only the sigmoid function is supported as an activation function by Weka, even though the activation function should be picked depending on the problem. Furthermore, weights should ideally be initialized randomly, but Weka initializes them with the same deterministic procedure for a given seed; that is why the classifier always produces the same outputs unless the seed option is changed. This might cause training to get stuck in a poor local minimum. Finally, Weka's incremental models can learn from one instance at a time, but its neural network implementation is not incremental, so memory problems might appear in large scale dataset studies. In other words, Weka is not applicable for nn-based big data studies.
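If you suspect a poor starting point, you can at least vary the seed parameter manually and compare the results. The following is a minimal sketch that reuses the training set and the imports from the first snippet; it simply retrains the same network with several seeds and prints the MAE of each run.

//retrain with different seeds and compare training errors
for(int seed=0; seed<5; seed++){
 String options = "-L 0.1 -M 0 -N 10000 -V 0 -E 0 -H 3 -S "+seed;
 MultilayerPerceptron mlp = new MultilayerPerceptron();
 mlp.setOptions(Utils.splitOptions(options));
 mlp.buildClassifier(trainingset);

 Evaluation eval = new Evaluation(trainingset);
 eval.evaluateModel(mlp, trainingset);
 System.out.println("seed "+seed+" -> MAE: "+eval.meanAbsoluteError());
}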

Even though Weka has some handicaps, it is still a practical tool.

Finally, I’ve shared this project on my GitHub profile under the nn-weka repository. You might clone the repository, play around with the code and observe the results.

