==Machine Learning Meetup Notes: 2009-03-11==
[[Image:IMG_4478.JPG|816px]]


We made perceptrons.  To learn how, you should go to Wikipedia [http://en.wikipedia.org/wiki/Perceptron]


Also reasonably good: [http://www.cs.usyd.edu.au/~irena/ai01/nn/travtute.htm Neural Networks Tutorial], from the Introduction to Part 5.
 
===Code people wrote===


* Mathematica, by Christoph ([https://www.noisebridge.net/wiki/Image:NoisebridgeNeuralNetworks.pdf PDF], because noisebridge is ''a little'' too anti-commercial)
* [[Machine Learning Meetup Notes Perceptron Matlab|Matlab/Octave perceptron]] by Jean
* [[User:Elgreengeeto/Python_Perceptron|Python]] by [[User:Elgreengeeto|Skory]]
* [[RachelPerceptronPython|extremely naive Python]] by Rachel
* Ruby by Zhao [https://www.noisebridge.net/wiki/Machine_Learning_Meetup_Notes_Ruby_Zhao]
* C by Cristian [https://www.noisebridge.net/wiki/User:Cortiz]
* [http://github.com/david415/ml-py/tree/master Python implementation by David Stainton]
* [[User:Kaufman/LISP_Perceptron|LISP!]] by John Kaufman
* [[User:Ping/Python_Perceptron]] by [[User:Ping]]


Everyone should upload their code!
===Follow-on notes===
Now, if you look at your weights and think about what they mean, you'll notice something odd.  At the end, the weights aren't equal!  We trained a NAND gate, so every input should have an equal opportunity to change the output, right?  Given that last leading question, what would you expect the ideal weights to be?  Do the learned weights match that expectation?  Why?  (Hint: What does "overfitting" mean, and how is it relevant?)
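
For reference, here is a minimal sketch in Python of the kind of perceptron we trained on the NAND truth table. The learning rate, epoch count, and zero starting weights are arbitrary choices for illustration, not anyone's actual code from the meetup.

<pre>
# Perceptron trained on the NAND truth table.
# Learning rate, epochs, and starting weights are arbitrary choices.
training_set = [
    ((0, 0), 1),
    ((0, 1), 1),
    ((1, 0), 1),
    ((1, 1), 0),
]

weights = [0.0, 0.0]   # one weight per input
bias = 0.0
rate = 0.1

def evaluate(inputs):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0           # hard threshold

for epoch in range(100):
    for inputs, target in training_set:
        error = target - evaluate(inputs)  # -1, 0, or 1
        for i, x in enumerate(inputs):
            weights[i] += rate * error * x
        bias += rate * error

print(weights, bias)  # are the two weights equal when training stops?
</pre>
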
Can you build a training set for an OR gate, and train it?  What other operators can you implement this way?  All you need to do is build a new training set and try training, which is pretty awesome if you think about it.  (Hint: What does "separability" mean, and how is it relevant?)
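
For example, only the truth table needs to change; the training loop from the NAND sketch above stays the same. (XOR is included below just to show a table a single perceptron can never learn, since it isn't linearly separable.)

<pre>
# Only the training set changes; the training loop is unchanged.
or_training_set = [
    ((0, 0), 0),
    ((0, 1), 1),
    ((1, 0), 1),
    ((1, 1), 1),
]

# XOR, by contrast, is not linearly separable, so a single-layer
# perceptron will never converge on it no matter how long it trains.
xor_training_set = [
    ((0, 0), 0),
    ((0, 1), 1),
    ((1, 0), 1),
    ((1, 1), 0),
]
</pre>
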
Let's say we wanted to output smooth values instead of just 0 or 1.  What would you need to change in your evaluation step to get rid of the thresholding?  What would you need to change about learning to allow your neuron to learn smooth functions?  (Hint: in a smooth output function, we want to change the amount of training we do by how far we were off, not just by which direction we were off.)

''One answer: [https://www.noisebridge.net/wiki/Image:NoisebridgeNeuralNetworks_2009MAR17.pdf PDF of Mathematica workspace]''
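
Another way to sketch it, in Python and reusing the names from the NAND example above: replace the hard threshold with a sigmoid and scale each update by the size of the error. This is the usual delta-rule style of update, not necessarily what the Mathematica workspace does.

<pre>
import math

def evaluate_smooth(inputs):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-total))    # sigmoid instead of a hard threshold

for epoch in range(1000):
    for inputs, target in training_set:
        output = evaluate_smooth(inputs)
        error = target - output              # a real number now, not just -1/0/1
        for i, x in enumerate(inputs):
            weights[i] += rate * error * x   # update scaled by how far off we were
        bias += rate * error
</pre>
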
What if we wanted to do something besides multiplying each input by its weight?  What if we wanted to do something crazy, like take the second input, square it, and multiply ''that'' by the weight?  That is: what if we wanted to make the output a polynomial equation instead of a linear one, where each input is x^1, x^2, etc, with the weights as their coefficients?  What would need to change in your implementation?  What if we wanted to do even crazier things, like bizarre trig functions?
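
One minimal way to sketch that: transform the inputs into a feature list before the weighted sum, and give each feature its own weight. The particular features below (x and x squared for each input) are just an illustration; any function of the inputs could go in their place.

<pre>
def features(inputs):
    # Each raw input x contributes both x and x**2 as separate features;
    # stranger transforms (sin, products of inputs, ...) work the same way.
    feats = []
    for x in inputs:
        feats.append(x)
        feats.append(x ** 2)
    return feats

# One weight per feature now, not per raw input; the training loop
# stays the same, it just operates on features(inputs) instead of inputs.
poly_weights = [0.0] * len(features((0, 0)))
</pre>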
