ArtificialNeuralNetworks perform arbitrary non-linear mappings between (potentially vague or misleading) inputs and outputs, and are typically used for pattern recognition and association. The computation is performed by a number of "neurons", which were inspired by, but rarely bear much resemblance to, biological neurons.
The neurons are connected in a graph, and typically propagate signals forward (FeedForward?) through the network. The signal's path through the network is controlled both by the structure of the network and by the weights placed at each branch.
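To make the FeedForward? idea concrete, here is a minimal sketch in Python. The 2-2-1 topology, the particular weights, and the sigmoid activation are all illustrative assumptions, not anything prescribed above:

  import math

  def sigmoid(x):
      return 1.0 / (1.0 + math.exp(-x))

  def layer(inputs, weights, biases):
      # each neuron outputs a squashed, weighted sum of its inputs
      return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in zip(weights, biases)]

  # illustrative 2-input, 2-hidden, 1-output network; weights are made up
  hidden_w, hidden_b = [[0.5, -0.4], [0.3, 0.8]], [0.1, -0.2]
  out_w, out_b = [[1.2, -0.7]], [0.05]

  def feed_forward(inputs):
      return layer(layer(inputs, hidden_w, hidden_b), out_w, out_b)

  print(feed_forward([1.0, 0.0]))   # -> a single output in (0, 1)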
Finding the network structure and weights is the trick, and the most common approach is to propagate errors backward through the network as well (BackPropagation?). Alternative training methods include HomeoBoxing?, GeneticAlgorithms, and other really mysterious techniques (nature seems to prefer that strategy).
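A bare-bones BackPropagation? sketch, continuing the Python example above; again, the 2-2-1 sigmoid network, the learning rate, and the training set are illustrative assumptions:

  import math, random

  def sigmoid(x):
      return 1.0 / (1.0 + math.exp(-x))

  random.seed(0)
  w1 = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(2)]  # hidden
  b1 = [0.0, 0.0]
  w2 = [random.uniform(-1, 1), random.uniform(-1, 1)]                      # output
  b2 = 0.0
  lr = 0.5   # learning rate (illustrative)

  def train_step(x, target):
      global b2
      # forward pass
      h = [sigmoid(w[0] * x[0] + w[1] * x[1] + b) for w, b in zip(w1, b1)]
      y = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
      # backward pass: push the output error back toward the inputs
      dy = (y - target) * y * (1 - y)                           # output delta
      dh = [dy * w2[j] * h[j] * (1 - h[j]) for j in range(2)]   # hidden deltas
      for j in range(2):
          w2[j] -= lr * dy * h[j]
          b1[j] -= lr * dh[j]
          for i in range(2):
              w1[j][i] -= lr * dh[j] * x[i]
      b2 -= lr * dy

  for _ in range(5000):   # train on XOR, the classic small test case
      for x, t in [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]:
          train_step(x, t)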
Over the years, a huge number of people seem to have been taken in by the hype and believe that neural nets are powerful little black boxes that can solve just about any problem.
Not true; see for instance pp 247-280 of the 1988 second edition of the famous book "Perceptrons" by Marvin Minsky and Seymour Papert; to a large extent, the results proven for single-layer networks in the 1969 first edition (which temporarily killed that whole field) still hold true for most kinds of multi-layer networks.
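The canonical single-layer result from that book is XOR: no single linear-threshold unit can compute it, because the four input points are not linearly separable, while one hidden layer makes it easy. A minimal sketch with hand-picked weights (these particular weights are just one of many workable choices):

  def step(x):
      # one linear-threshold unit, the "perceptron" of Minsky and Papert
      return 1 if x >= 0 else 0

  # No single threshold unit computes XOR; a hidden layer fixes it:
  def xor(x1, x2):
      h_or   = step(x1 + x2 - 0.5)        # hidden unit computing OR
      h_nand = step(1.5 - x1 - x2)        # hidden unit computing NAND
      return step(h_or + h_nand - 1.5)    # output unit computing AND

  for a in (0, 1):
      for b in (0, 1):
          print(a, b, '->', xor(a, b))    # prints the XOR truth table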
In particular, a non-looping, non-recursive neural net is quite limited in power, certainly less powerful than a TuringMachine; most of the nicer results that people have obtained with neural nets depended on recursive preprocessing algorithms to feed the nets easier work.
The huge surge of interest in back propagation as a way to train neural nets faded once it was recognized that it amounts to the ancient technique of hill-climbing/local optimization; there was very little press on the subject, but basically anything a neural net can do, older statistical techniques can also accomplish.
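To see the hill-climbing connection, here is generic local search on an illustrative one-parameter "error surface" (the surface and step sizes are made up for the demonstration). Note how the starting point determines which minimum it gets stuck in; back propagation inherits exactly this weakness:

  import random

  def loss(w):
      # an illustrative bumpy error surface with two minima
      return (w * w - 4) ** 2 + 3 * w

  def hill_climb(w, step=0.1, iters=2000):
      for _ in range(iters):
          candidate = w + random.uniform(-step, step)
          if loss(candidate) < loss(w):   # accept only downhill moves
              w = candidate
      return w

  random.seed(1)
  for start in (-3.0, 3.0):
      w = hill_climb(start)
      print("start %+.1f -> w %+.2f loss %+.2f" % (start, w, loss(w)))
  # from -3 it finds the deeper minimum near w = -2; from +3 it is
  # stuck in the shallower one near w = +2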
Fully recursive neural nets are more powerful, but less practical (e.g. they may not converge at all).
They're still a handy tool, of course, but they're not magic. If someone suggests a neural net to solve a problem, ask them why not use SimulatedAnnealing or a HiddenMarkovModel instead. If they can't answer that, they probably bought into the hype.
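For comparison, here is SimulatedAnnealing run on the same illustrative error surface as the hill-climbing sketch above (the temperature schedule is an assumption). By occasionally accepting uphill moves, it has a fighting chance of escaping the local minimum that trapped the pure hill-climber:

  import math, random

  def loss(w):
      return (w * w - 4) ** 2 + 3 * w    # same bumpy surface as above

  def anneal(w, temp=5.0, cooling=0.999, iters=5000):
      for _ in range(iters):
          candidate = w + random.uniform(-0.5, 0.5)
          delta = loss(candidate) - loss(w)
          # unlike pure hill-climbing, occasionally accept an uphill move
          if delta < 0 or random.random() < math.exp(-delta / temp):
              w = candidate
          temp *= cooling                # cool down: fewer uphill moves
      return w

  random.seed(1)
  print(anneal(3.0))   # can escape the shallow trap near w = +2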
On the other hand, hey, they are cool. :-) -- DougMerritt
Related: FuzzyNeuralNet, SimulatedAnnealing, GeneticAlgorithm, AnalogyProcessing