Network Class for Neural Network¶
Author: Cole Howard
An extensible neural net designed to explore neural network architecture via extensive visualizations.
-
class finnegan.network.Network(layers, neuron_count, vector)¶
A multi-layer neural net with backpropagation.
Parameters: - layers (int) – Number of layers to use in the network.
- neuron_count (list) – A list of integers giving the number of neurons in each hidden layer. (The sizes of the input and output layers are dictated by the dataset.)
- vector (list) – An example vector, used to determine the size of the initial input.
-
possible¶
list
A list of possible output values
-
_backprop(guess_vector, target_vector)¶
Takes the output of the net and initiates the backpropagation.
- In the output layer:
  - generate the error matrix: [out * (1 - out) * (target - out)] for each neuron
  - update the weights matrix: [weight += l_rate * error_entry * input to that weight] for each neuron
- In each hidden layer:
  - generate the error matrix: [out * (1 - out) * dot(error entry in layer n+1, layer n+1 weight of that entry)] for each neuron
  - update the weights matrix: [weight += l_rate * error_entry * input to that weight] for each neuron
Parameters: - guess_vector (numpy array) – The output from the last layer during a training pass
- target_vector (list) – List of expected values
Returns: True, as evidence of execution
Return type: bool
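The update rules listed above can be sketched as plain NumPy helpers. The function names and the `l_rate` value here are illustrative, not taken from the source:

```python
import numpy as np

l_rate = 0.1  # learning rate (assumed value; the actual rate is not documented here)

def output_layer_deltas(out, target):
    # Output-layer error: out * (1 - out) * (target - out), per neuron.
    out, target = np.asarray(out), np.asarray(target)
    return out * (1 - out) * (target - out)

def hidden_layer_deltas(out, next_errors, next_weights):
    # Hidden-layer error: out * (1 - out) * dot(layer n+1 errors,
    # layer n+1 weights feeding out of this neuron).
    out = np.asarray(out)
    return out * (1 - out) * (np.asarray(next_weights).T @ np.asarray(next_errors))

def update_weights(weights, errors, inputs):
    # w_ij += l_rate * error_i * input_j, done for the whole layer at once
    # via an outer product.
    return weights + l_rate * np.outer(errors, inputs)
```

Each helper covers one bullet of the description; a real implementation would loop these over the layers from back to front.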
-
_pass_through_net(vector, dropout=True)¶
Sends a vector through the net.
Parameters: - vector (numpy array) – A numpy array representing a training input (without the target)
- dropout (bool) – Whether to perform random dropout on the pass through the net. (Set to False for the testing set vectors.)
Returns: Output of the last layer in the chain
Return type: numpy array
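A minimal sketch of one layer of such a pass, assuming a sigmoid activation and a 0.5 drop probability (both assumptions; neither is stated in this reference):

```python
import numpy as np

rng = np.random.default_rng(0)

def pass_through_layer(vector, weights, dropout=True, drop_prob=0.5):
    # Weighted sum into each neuron, squashed by a sigmoid (assumed activation).
    out = 1.0 / (1.0 + np.exp(-(weights @ vector)))
    if dropout:
        # Randomly silence neurons on training passes; skip for test vectors.
        out = out * (rng.random(out.shape) >= drop_prob)
    return out
```

Chaining this layer by layer, with `dropout=False` on the final call for unseen data, yields the behavior the method describes.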
-
_softmax(w, t=1.0)¶
Author: Jeremy M. Stober, edits by Martin Thoma
Program: softmax.py
Date: Wednesday, February 29 2012 and July 31 2014
Description: Simple softmax function. Calculates the softmax of a list of numbers w.
Parameters: - w (list of numbers) – Values to normalize
- t (float) – Temperature; higher values flatten the distribution
Returns: A list of non-negative numbers of the same length as w, summing to 1
Return type: numpy array
Examples
>>> softmax([0.1, 0.2])
array([ 0.47502081,  0.52497919])
>>> softmax([-0.1, 0.2])
array([ 0.42555748,  0.57444252])
>>> softmax([0.9, -10])
array([  9.99981542e-01,   1.84578933e-05])
>>> softmax([0, 10])
array([  4.53978687e-05,   9.99954602e-01])
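The doctests above can be reproduced with a few lines of NumPy; this sketch follows the documented signature, including the temperature parameter `t`:

```python
import numpy as np

def softmax(w, t=1.0):
    # Exponentiate each value, scaled by temperature t, then normalize
    # so the results are non-negative and sum to 1.
    e = np.exp(np.array(w) / t)
    return e / np.sum(e)
```

A production version would typically subtract `max(w)` before exponentiating for numerical stability with large inputs.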
-
report_results(guess_list, answers)¶
Reports the results of guesses on the unseen set.
Parameters: - guess_list (list) – The values guessed by the network for the unseen set
- answers (list) – The correct answers for the unseen set
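The core of such a report is a simple accuracy calculation; this is a plausible sketch only, and the actual method may print a fuller breakdown:

```python
def accuracy(guess_list, answers):
    # Fraction of guesses that match the known answers for the unseen set.
    correct = sum(g == a for g, a in zip(guess_list, answers))
    return correct / len(answers)
```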
-
run_unseen(test_set)¶
Makes guesses on the unseen data, and switches the test answers over to the validation set if the bool is True.
For each vector in the collection, each neuron in turn will either fire or not. If a neuron fires, it is collected as a possible correct guess. Not firing is collected as well, in case there are no good guesses at all. The method then chooses the vector with the highest dot product, from either the fired list or the dud list.
Parameters: test_set (list) – List of numpy arrays representing the unseen vectors
Returns: A list of ints (the guesses for each vector)
Return type: list
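The final selection step described above (taking the strongest activation) can be sketched as follows, where `possible` mirrors the class attribute of the same name:

```python
import numpy as np

def best_guess(output_vector, possible):
    # The index of the highest activation maps into the possible output values.
    return possible[int(np.argmax(output_vector))]
```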
-
train(dataset, answers, epochs)¶
Runs the training dataset through the network a given number of times.
Parameters: - dataset (Numpy nested array) – The collection of training data (vectors and the associated target value)
- answers (numpy array) – The array of correct answers to associate with each training vector
- epochs (int) – Number of times to run the training set through the net
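The epoch loop this implies is a minimal sketch; `one_pass` is a hypothetical name standing in for a combined forward pass and backprop step on a single vector:

```python
def train(one_pass, dataset, answers, epochs):
    # Run every training vector (with its answer) through the net,
    # repeating the whole set `epochs` times over.
    for _ in range(epochs):
        for vector, target in zip(dataset, answers):
            one_pass(vector, target)
```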