    quickie primer on neural networks

    Surveying the history of artificial neural networks (a field only about 80 years old), one discovers three vital functions that these technologies perform. And they are the simplest of technologies: reasonably unintelligent computational units ("neurons") connected to their neighbors by "synapses" of varying strengths and directions.

    Three essential functions are performed by these architectures, and each uses the system differently.

    1. Pattern recognition and feature extraction - the idea of using a neural network for pattern recognition goes back to John von Neumann and to Frank Rosenblatt, who invented the Perceptron (later improved by the Japanese scientist K. Fukushima with his Neocognitron). The use of neural networks in pattern recognition depends largely on the geometry of the local connections: in the visual cortex, for example, the connections are arranged in "bands", which allows the system to calculate and extract local spatial Fourier transforms. (A toy perceptron is sketched just below.)
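    To make that concrete, here is a minimal perceptron sketch in Python. It is my own illustration, not Rosenblatt's original formulation - the function names and parameters are mine - and it just learns logical AND:

    Code:
    import numpy as np

    def train_perceptron(X, y, lr=0.1, epochs=20):
        """Rosenblatt's learning rule: nudge the weights toward each
        misclassified example until the classes are separated."""
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            for xi, target in zip(X, y):
                pred = 1 if xi @ w + b > 0 else 0
                err = target - pred              # -1, 0, or +1
                w += lr * err * xi               # move the decision boundary
                b += lr * err
        return w, b

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 0, 0, 1])                   # AND truth table
    w, b = train_perceptron(X, y)
    print([1 if xi @ w + b > 0 else 0 for xi in X])   # -> [0, 0, 0, 1]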

    2. Information storage - along a whole separate historical and scientific track, a different group of scientists was working on memory, and discovered that neural networks are ideal for content-addressable ("associative") memories. The idea behind information storage in a neural network is that the weight or strength of each synapse changes in response to the input, so stimuli (and associations) get "burned in" to the memory. The Finnish scientist T. Kohonen was one of the pioneers of this approach. (A sketch of the burn-in rule follows below.)
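    Here is a minimal sketch of that "burn-in" idea, assuming a simple Hebb rule over +/-1 patterns (my illustration - Kohonen's actual models are considerably more sophisticated):

    Code:
    import numpy as np

    def store(patterns):
        """Superimpose +/-1 patterns into one weight matrix (Hebb rule):
        units that fire together get a stronger synapse between them."""
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)          # strengthen co-active connections
        np.fill_diagonal(W, 0)           # no self-connections
        return W / len(patterns)

    patterns = np.array([[1, -1, 1, -1],
                         [1,  1, -1, -1]])
    W = store(patterns)
    print(W)     # the memory lives in the synapses, not at an address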

    3. Recall and computation - these two items are grouped together because they always occur together; there is never any recall without computation. The reason is that recall requires something to be figured out, which is to say, calculated. Recall is important because it has given rise to a whole new perspective on how to make effective use of neural networks. This approach was initiated by J. J. Hopfield, who was the first to treat neural recall as a dynamical system. What distinguishes successful neural computation is that it makes use of global parameters like energy and temperature. (A toy recall is sketched below.)
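    And a toy version of Hopfield-style recall (again my own sketch, not Hopfield's code): recall here is literally a computation, a descent on a global energy function until the state settles on a stored pattern:

    Code:
    import numpy as np

    def energy(W, s):
        """Hopfield's global energy: recall runs downhill on this."""
        return -0.5 * s @ W @ s

    def recall(W, probe, sweeps=5):
        """Asynchronous +/-1 updates until the state stops changing."""
        s = probe.copy()
        for _ in range(sweeps):
            for i in np.random.permutation(len(s)):
                s[i] = 1 if W[i] @ s >= 0 else -1
        return s

    # Store one pattern with the Hebb rule, then recall it from a noisy probe.
    p = np.array([1, -1, 1, -1, 1, -1, 1, -1])
    W = np.outer(p, p).astype(float)
    np.fill_diagonal(W, 0)
    probe = p.copy()
    probe[:2] *= -1                                  # corrupt two bits
    restored = recall(W, probe)
    print(energy(W, probe) > energy(W, restored))    # True: energy decreased
    print(np.array_equal(restored, p))               # True: pattern recovered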

    What is important to realize is that all three of these functions occur within the same anatomical architecture. The synapses must already be configured for recall, whereas during storage they are assumed to be unconfigured (plastic). And the same synapses are used for feature recognition and computation; as a result, individual neurons are replaced by processing clusters, which can be either static or dynamic (this is necessary because the state of the neural network has to change whenever a particular type of operation is performed).

    For instance - the hippocampus controls attention, is involved in storage and recall, and is necessary for navigation and spatial orientation. The neurons there are "primed" into storage mode, which essentially takes a small subset of the cerebral cortex offline while something is being stored. However, when a stimulus appears that needs attention, the hippocampus redirects available cortical processing units to the task of identification and feature extraction ("determination of meaning").

    Exactly the same control systems are required in a real-time quantum computer. The quantum computers that exist today are not real-time; they are basically time-sharing systems: you submit a job, it runs whenever the computer is free, and sometime next week you get your results. "Real time" means the system is optimizing around "now", instead of being afforded the luxury of taking its time with vital computations.

    By now we understand the pattern recognition and computational powers of neural networks pretty well. And we have at least a clue about what's going on in the area of storage.

    My direct prediction is: you will see quantum computers that look exactly like brains within 5 years. Research will take them down this path even if the researchers know nothing about brains.

    Here are some references if you're curious:

    Perceptron - Wikipedia

    Neocognitron - Wikipedia

    Wilson–Cowan model - Wikipedia: https://en.m.wikipedia.org/wiki/Wilson–Cowan_model

    Models of the Stochastic Activity of Neural Aggregates | SpringerLink

    Hopfield network - Wikipedia

    Boltzmann machine - Wikipedia

    Self-organizing map - Wikipedia

    Adaptive resonance theory - Wikipedia

    None of these references had any inkling that they were talking about quantum computing, which did not exist at the time these papers were written.
