The simplest definition of a neural network, more properly referred to as an 'artificial' neural network (ANN), is provided by the inventor of one of the first neurocomputers, Dr. Robert Hecht-Nielsen. He defines a neural network as:
"...a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs."
In "Neural Network Primer: Part I" by Maureen Caudill, AI Expert, Feb. 1989
ANNs are processing devices (algorithms or actual hardware) that are loosely modeled after the neuronal structure of the mammalian cerebral cortex, but on a much smaller scale. A large ANN might have hundreds or thousands of processing units, whereas a mammalian brain has billions of neurons, with a corresponding increase in the magnitude of their overall interaction and emergent behavior. Although ANN researchers are generally not concerned with whether their networks accurately resemble biological systems, some do pursue biological fidelity. For example, researchers have accurately simulated the function of the retina and modeled the eye rather well.
Although the mathematics involved with neural networking is not a trivial matter, a user can rather easily gain at least an operational understanding of their structure and function.
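To make that operational understanding concrete, here is a minimal sketch of the structure described above: simple processing elements (neurons) arranged in layers, each computing a weighted sum of its inputs followed by a nonlinear activation. The layer sizes, weights, and sigmoid activation here are illustrative choices, not taken from the article.

```python
import numpy as np

def sigmoid(x):
    """Nonlinear activation squashing any real input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w1, b1, w2, b2):
    """One forward pass through a two-layer feedforward network."""
    hidden = sigmoid(x @ w1 + b1)    # each hidden unit: weighted sum + activation
    output = sigmoid(hidden @ w2 + b2)
    return output

# Illustrative network: 3 inputs -> 4 hidden units -> 1 output.
rng = np.random.default_rng(0)
w1 = rng.normal(size=(3, 4))  # input-to-hidden connection weights
b1 = np.zeros(4)
w2 = rng.normal(size=(4, 1))  # hidden-to-output connection weights
b2 = np.zeros(1)

x = np.array([0.5, -0.2, 0.1])   # an external input to the network
y = forward(x, w1, b1, w2, b2)   # the network's "response" to that input
```

Learning (adjusting the weights from examples, e.g. via backpropagation) is where the nontrivial mathematics comes in, but the forward computation itself is just repeated weighted sums and activations.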
(Score: 4, Informative) by goodie on Friday December 16 2016, @06:15PM
Andrew Ng's Machine Learning course on Coursera (free) has a class on ANNs and an assignment on them. The assignment is not exactly trivial in my experience (not much of a fan of Octave anyway), but the concepts are well explained, and there are a few examples of applications such as self-driving cars. Interestingly, this topic was deemed a dead end for a while until it re-emerged as a promising area of research with people like Ng, Bengio, etc. It seems that a lot of current advances in AI are based primarily on ANNs.