Programming homework: neural networks

Hello dear forum,
for a university homework assignment I have to take a neural network written in Python and write a Java program that does exactly the same thing.
Since I'm unfortunately completely out of my depth with both Python and Java, I'm asking for your help here!

This is the code I'm supposed to rewrite in Java:
Python:
In [1]:
# python notebook for Make Your Own Neural Network
# code for a 3-layer neural network, and code for learning the MNIST dataset
# (c) Tariq Rashid, 2016
# license is GPLv2

In [2]:
import numpy
# scipy.special for the sigmoid function expit()
import scipy.special
# library for plotting arrays
import matplotlib.pyplot
# ensure the plots are inside this notebook, not an external window
%matplotlib inline

In [3]:
# neural network class definition
class neuralNetwork:
  
  
   # initialise the neural network
   def __init__(self, inputnodes, hiddennodes, outputnodes, learningrate):
       # set number of nodes in each input, hidden, output layer
       self.inodes = inputnodes
       self.hnodes = hiddennodes
       self.onodes = outputnodes
      
       # link weight matrices, wih and who
       # weights inside the arrays are w_i_j, where link is from node i to node j in the next layer
       # w11 w21
       # w12 w22 etc
       self.wih = numpy.random.normal(0.0, pow(self.inodes, -0.5), (self.hnodes, self.inodes))
       self.who = numpy.random.normal(0.0, pow(self.hnodes, -0.5), (self.onodes, self.hnodes))

       # learning rate
       self.lr = learningrate
      
       # activation function is the sigmoid function
       self.activation_function = lambda x: scipy.special.expit(x)
      
       pass

  
   # train the neural network
   def train(self, inputs_list, targets_list):
       # convert inputs list to 2d array
       inputs = numpy.array(inputs_list, ndmin=2).T
       targets = numpy.array(targets_list, ndmin=2).T
      
       # calculate signals into hidden layer
       hidden_inputs = numpy.dot(self.wih, inputs)
       # calculate the signals emerging from hidden layer
       hidden_outputs = self.activation_function(hidden_inputs)
      
       # calculate signals into final output layer
       final_inputs = numpy.dot(self.who, hidden_outputs)
       # calculate the signals emerging from final output layer
       final_outputs = self.activation_function(final_inputs)
      
       # output layer error is the (target - actual)
       output_errors = targets - final_outputs
       # hidden layer error is the output_errors, split by weights, recombined at hidden nodes
       hidden_errors = numpy.dot(self.who.T, output_errors)
      
       # update the weights for the links between the hidden and output layers
       self.who += self.lr * numpy.dot((output_errors * final_outputs * (1.0 - final_outputs)), numpy.transpose(hidden_outputs))
      
       # update the weights for the links between the input and hidden layers
       self.wih += self.lr * numpy.dot((hidden_errors * hidden_outputs * (1.0 - hidden_outputs)), numpy.transpose(inputs))
      
       pass

  
   # query the neural network
   def query(self, inputs_list):
       # convert inputs list to 2d array
       inputs = numpy.array(inputs_list, ndmin=2).T
      
       # calculate signals into hidden layer
       hidden_inputs = numpy.dot(self.wih, inputs)
       # calculate the signals emerging from hidden layer
       hidden_outputs = self.activation_function(hidden_inputs)
      
       # calculate signals into final output layer
       final_inputs = numpy.dot(self.who, hidden_outputs)
       # calculate the signals emerging from final output layer
       final_outputs = self.activation_function(final_inputs)
      
       return final_outputs

In [4]:
# number of input, hidden and output nodes
input_nodes = 784
hidden_nodes = 200
output_nodes = 10

# learning rate
learning_rate = 0.1

# create instance of neural network
n = neuralNetwork(input_nodes,hidden_nodes,output_nodes, learning_rate)

In [5]:
# load the mnist training data CSV file into a list
training_data_file = open("mnist_dataset/mnist_train.csv", 'r')
training_data_list = training_data_file.readlines()
training_data_file.close()

In [6]:
# train the neural network

# epochs is the number of times the training data set is used for training
epochs = 5

for e in range(epochs):
   # go through all records in the training data set
   for record in training_data_list:
       # split the record by the ',' commas
       all_values = record.split(',')
       # scale and shift the inputs
       inputs = (numpy.asfarray(all_values[1:]) / 255.0 * 0.99) + 0.01
       # create the target output values (all 0.01, except the desired label which is 0.99)
       targets = numpy.zeros(output_nodes) + 0.01
       # all_values[0] is the target label for this record
       targets[int(all_values[0])] = 0.99
       n.train(inputs, targets)
       pass
   pass

In [7]:
# load the mnist test data CSV file into a list
test_data_file = open("mnist_dataset/mnist_test.csv", 'r')
test_data_list = test_data_file.readlines()
test_data_file.close()

In [8]:
# test the neural network

# scorecard for how well the network performs, initially empty
scorecard = []

# go through all the records in the test data set
for record in test_data_list:
   # split the record by the ',' commas
   all_values = record.split(',')
   # correct answer is first value
   correct_label = int(all_values[0])
   # scale and shift the inputs
   inputs = (numpy.asfarray(all_values[1:]) / 255.0 * 0.99) + 0.01
   # query the network
   outputs = n.query(inputs)
   # the index of the highest value corresponds to the label
   label = numpy.argmax(outputs)
   # append correct or incorrect to list
   if (label == correct_label):
       # network's answer matches correct answer, add 1 to scorecard
       scorecard.append(1)
   else:
       # network's answer doesn't match correct answer, add 0 to scorecard
       scorecard.append(0)
       pass
  
   pass

In [9]:
# calculate the performance score, the fraction of correct answers
scorecard_array = numpy.asarray(scorecard)
print ("performance = ", scorecard_array.sum() / scorecard_array.size)


performance =  0.9712

Thanks a lot for your help, I hope someone knows their way around this and can help me out!
 
A rough programming approach in Java would be really nice, because I also don't quite understand how the Python program works! I'm not asking you to hand me the whole Java code, but it would already help a lot if you could show me how the Java program should be structured.
 

Manuel.R

That's a neural network...

Do you know what that is?

Take a look at DeepLearning4J. Implementations of AI differ somewhat between programming languages. Python is dynamically typed, which is why you'll search the Python code in vain for explicit data type declarations.
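In Java, by contrast, every value needs an explicit type. A bare-bones skeleton of the class could look roughly like this; it's only a sketch with names I made up to mirror the Python code, not a finished solution (train and query are left as stubs):

```java
import java.util.Random;

// Rough Java counterpart of the Python neuralNetwork class (sketch only).
public class NeuralNetwork {
    private final int inodes, hnodes, onodes; // layer sizes
    private final double lr;                  // learning rate
    private final double[][] wih;             // weights input -> hidden
    private final double[][] who;             // weights hidden -> output

    public NeuralNetwork(int inputNodes, int hiddenNodes, int outputNodes, double learningRate) {
        this.inodes = inputNodes;
        this.hnodes = hiddenNodes;
        this.onodes = outputNodes;
        this.lr = learningRate;
        Random rng = new Random();
        // like numpy.random.normal(0.0, pow(nodes, -0.5), ...):
        // Gaussian weights with std dev 1/sqrt(number of incoming links)
        this.wih = randomMatrix(rng, hnodes, inodes, Math.pow(inodes, -0.5));
        this.who = randomMatrix(rng, onodes, hnodes, Math.pow(hnodes, -0.5));
    }

    private static double[][] randomMatrix(Random rng, int rows, int cols, double stddev) {
        double[][] m = new double[rows][cols];
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                m[i][j] = rng.nextGaussian() * stddev;
        return m;
    }

    // sigmoid activation, the Java stand-in for scipy.special.expit
    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    public void train(double[] inputs, double[] targets) {
        // forward pass + weight updates go here, mirroring the Python train()
    }

    public double[] query(double[] inputs) {
        // forward pass goes here, mirroring the Python query()
        return new double[onodes];
    }
}
```

The double[][] fields take the place of the numpy arrays wih and who.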

The application imports numpy, a numerical library for Python; most of the calculations run through it.
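The Java standard library has no numpy equivalent, so you either use a matrix library or write the operations yourself on double[][] arrays. A hand-rolled stand-in for numpy.dot(matrix, vector) might look like this (matVec is just a name I picked, not a library call):

```java
// Hand-rolled matrix-vector product, replacing numpy.dot(matrix, vector).
public class MatrixOps {
    static double[] matVec(double[][] m, double[] v) {
        double[] out = new double[m.length];
        for (int i = 0; i < m.length; i++) {
            for (int j = 0; j < v.length; j++) {
                out[i] += m[i][j] * v[j]; // row i of m times v
            }
        }
        return out;
    }
}
```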

Explaining everything here would go beyond the scope of this thread.

It's a neural network, after all...

Input layer ---> one or more hidden layers ---> output layer

The variables of the computation are represented as matrices.
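The forward pass from the Python query method (matrix product, then sigmoid, for each layer) could then be sketched like this; matVec and sigmoid are hand-written helpers here, not library calls:

```java
// Sketch of the forward pass from the Python query method:
// hidden = sigmoid(wih . inputs); output = sigmoid(who . hidden).
public class ForwardPass {
    static double[] matVec(double[][] m, double[] v) {
        double[] out = new double[m.length];
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < v.length; j++)
                out[i] += m[i][j] * v[j];
        return out;
    }

    static double[] sigmoid(double[] v) {
        double[] out = new double[v.length];
        for (int i = 0; i < v.length; i++)
            out[i] = 1.0 / (1.0 + Math.exp(-v[i]));
        return out;
    }

    static double[] query(double[][] wih, double[][] who, double[] inputs) {
        double[] hiddenOutputs = sigmoid(matVec(wih, inputs)); // hidden layer
        return sigmoid(matVec(who, hiddenOutputs));            // output layer
    }
}
```

The train method would do the same forward pass and then apply the two weight-update lines from the Python code to wih and who.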
 