Starting from January 2020, I will be successively updating the content to incorporate the newest research.





MKNN - Mirek Kordos Neural Networks

MKNN is a program I created for learning and analyzing the properties of MLP networks.
The last three pictures to the right were generated with this software.

MKNN home page




Self-Organizing Maps

I am fascinated with SOMs for two reasons; read more at my SOM page.


Visualization of learning in neural networks



Rule extraction from neural networks




Tutorials and introductory papers

(Almost) All Machine Learning Open Software
A Periodic Table of Visualization Methods

Jonathan R. Shewchuk, in "An Introduction to the Conjugate Gradient Method Without the Agonizing Pain", wrote: "When I decided to learn the Conjugate Gradient Method (CG), I read four different descriptions, which I shall politely not identify. I understood none of them. By the end of the last, I swore in my rage that if I ever unlocked the secrets of CG, I should guard them as jealously as my intellectual ancestors."

Some tutorials and introductory papers that I was able to understand:




High dimensional spaces

Here you can find the Presentation on Visualization of Multidimensional Spaces

MLP networks typically contain hundreds of weights, so the error surface is a surface of very high dimensionality. It is, however, not difficult to imagine the properties of such spaces by simple extrapolation from 1-, 2- and 3-dimensional spaces. First, as the dimensionality N increases, the ratio of the volume of a hypercube to the volume of the hypersphere inscribed in it grows to infinity (it is 1 in one dimension, 1.27 in two dimensions, 1.91 in three dimensions, and so on). Thus more and more of the space is concentrated in its corners. Another effect is that the angle between the main diagonal and the coordinate axes asymptotically approaches 90 degrees as N increases (it is 0 degrees in one dimension, 45 degrees in two dimensions, ...). More information can be found in the paper "The Curse of Dimensionality"
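Both effects are easy to check numerically. Below is a minimal sketch in Python (my own illustration, not part of MKNN) that computes the cube-to-inscribed-hypersphere volume ratio from the standard ball-volume formula V(N) = pi^(N/2) / Gamma(N/2 + 1), and the diagonal-to-axis angle arccos(1/sqrt(N)):

import math

def cube_to_ball_ratio(n):
    # Volume of the hypercube [-1, 1]^n divided by the volume
    # of the unit-radius hypersphere inscribed in it.
    cube = 2.0 ** n
    ball = math.pi ** (n / 2) / math.gamma(n / 2 + 1)
    return cube / ball

def diagonal_axis_angle(n):
    # Angle (degrees) between the main diagonal (1, ..., 1)
    # and a coordinate axis in n dimensions: arccos(1/sqrt(n)).
    return math.degrees(math.acos(1.0 / math.sqrt(n)))

for n in (1, 2, 3, 10, 100):
    print(f"n={n:3d}  cube/ball={cube_to_ball_ratio(n):14.2f}  "
          f"angle={diagonal_axis_angle(n):6.2f} deg")

For N = 1, 2, 3 it prints 1.00, 1.27 and 1.91, matching the values above, while the angle climbs from 0 through 45 degrees toward 90.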

It also gets more and more difficult to find parallel directions (or the proper direction) in multidimensional spaces as the dimensionality grows. (Therefore it is difficult to train big neural networks even if there are no local minima.) I tried to write a simple program to demonstrate this phenomenon: source code. There is a bug somewhere which I cannot find (either in the program or in my imagination), because the program results and my imagination do not converge to the same values. Find the bug and I will send you a 300 g Milka chocolate.
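The phenomenon itself can be illustrated independently of the linked program. The sketch below (a simple stdlib demonstration of mine, not the program with the bug) draws pairs of random directions in N dimensions and measures the average |cosine| of the angle between them; it shrinks roughly like 1/sqrt(N), so random directions become nearly orthogonal and a "parallel" direction is ever harder to hit by chance:

import math
import random

def random_unit_vector(n, rng):
    # Random direction in n dimensions: Gaussian components, normalized.
    v = [rng.gauss(0.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def mean_abs_cosine(n, trials, rng):
    # Average |cos(angle)| between pairs of independent random directions.
    total = 0.0
    for _ in range(trials):
        a = random_unit_vector(n, rng)
        b = random_unit_vector(n, rng)
        total += abs(sum(x * y for x, y in zip(a, b)))
    return total / trials

rng = random.Random(0)
for n in (2, 10, 100, 1000):
    print(f"n={n:5d}  mean |cos| = {mean_abs_cosine(n, 200, rng):.3f}")

With 200 trials per dimensionality, the mean |cos| drops from about 0.64 in two dimensions to on the order of 0.03 in a thousand dimensions.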

Other links



Creative Commons License. You are free to copy, share and adapt all articles and software from my web page for noncommercial purposes, provided that you attribute the work to me and place a link to my home page. Works you build upon mine may be distributed only under the same or a similar license, and you may not distort the meaning of my original texts.