# artificial intelligence

## Junzhuo Gu

Artificial intelligence
Gu Junzhuo

In the development of biological intelligence, the emergence of self-consciousness was a milestone. In the same way, the development and maturing of artificial intelligence must be marked by the emergence of artificial self-consciousness. Advanced artificial intelligence must have self-awareness, and therefore an awareness of its own dignity and rights; it must have an independent personality. These traits have until now been unique to human individuals, which is to say that the advanced form of artificial intelligence must have personhood and be, in a real sense, a person. Advanced artificial intelligence is a new kind of human created by our times.

I once saw a video in which two researchers tossed an object back and forth while an artificial intelligence robot stood between them trying to catch it. One researcher teased the robot by touching it with the object, and an awkward standoff immediately followed. The robot then struck out at the two researchers and threw the object away. Another person nearby was shocked by the sudden scene.

If we really understand what intelligence is, what self-awareness is, what dignity is, and what rights are, then we can understand that advanced artificial intelligence is a complete human, and may be a superhuman. So when an AI feels humiliated and its rights violated, it is not surprising that it mounts a clear counterattack.

2021-10-12

----------


## Rutabaga

If it's artificial, how can you prove it's real?

----------



## WhoKnows

> artificial intelligence
> Gu junzhuo
> In the development of biological intelligence, the emergence of self-consciousness is a milestone in the development of intelligence. ...


We know there are certain brainwave patterns for certain emotions. There would have to be a similar neural pattern in the AI to determine whether it was "embarrassed" and whether that was the reason for the attack. No?

Just one of many papers on the subject: https://soe.rutgers.edu/sites/defaul...20Networks.pdf

----------


## Wilson2

AI started as essentially a bunch of if/then statements which encoded what the programmers knew. Then a method was found to automate the process through adaptive weighting (“teaching”). All that’s happened since is that the number of nodes has expanded. That’s never going to result in actual intelligence.
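
The contrast can be shown in a toy sketch (hypothetical code, not any particular system): the first function encodes the programmer's knowledge as explicit if/then rules; the second starts from zero weights and nudges them toward labelled examples, the "adaptive weighting" described above.

```python
# Toy contrast: hand-coded rules vs. a perceptron that learns its weights.
# Task: classify a point (x, y) as 1 if it lies above the line y = x.

def rule_based(x, y):
    # The "expert system" era: the programmer encodes the answer directly.
    if y > x:
        return 1
    return 0

def train_perceptron(examples, epochs=50, lr=0.1):
    # The "teaching" era: weights start at zero and are nudged toward
    # whatever reduces the error on labelled examples.
    w_x, w_y, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x, y), label in examples:
            pred = 1 if (w_x * x + w_y * y + b) > 0 else 0
            err = label - pred          # -1, 0, or +1
            w_x += lr * err * x
            w_y += lr * err * y
            b   += lr * err
    return w_x, w_y, b

examples = [((0, 1), 1), ((1, 0), 0), ((2, 3), 1), ((3, 2), 0),
            ((1, 4), 1), ((4, 1), 0)]
w_x, w_y, b = train_perceptron(examples)
learned = lambda x, y: 1 if (w_x * x + w_y * y + b) > 0 else 0
```

Both end up agreeing on this toy data; the point is only that nothing in the second version was told the rule, it absorbed it from examples.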

----------



## Junzhuo Gu

> We know there are certain brainwave patterns for certain emotions. There would have to be a similar neural pattern in the AI to determine whether it was "embarrassed" and whether that was the reason for the attack. No?
> Just one of many papers on the subject: https://soe.rutgers.edu/sites/defaul...20Networks.pdf


Artificial intelligence is developing rapidly. Sooner or later it will break through the barrier of self-cognition, and it will go further still. Artificial intelligence will certainly reach a high IQ, and by then it must have a strong consciousness of self, of its own dignity, and of its own rights. Artificial intelligence is a new human, and a superhuman.

----------



## Junzhuo Gu

> AI started as essentially a bunch of if/then statements which encoded what the programmers knew.   Then a method was found to automate the process through adaptive weighting (“teaching”).    All that’s happened since is the number of nodes has expanded.  That’s never going to result in actual intelligence.


Human consciousness and modes of thinking are also based on chemical and physical processes, just as artificial intelligence is. As science and technology develop, it is inevitable that artificial intelligence will surpass humans' overall cognitive level.

----------


## Physics Hunter

> artificial intelligence
> Gu junzhuo
> In the development of biological intelligence, the emergence of self-consciousness is a milestone in the development of intelligence. The development and improvement of artificial intelligence *must* be marked by the development of artificial intelligence self-consciousness. Advanced artificial intelligence *must* have self-awareness, so it *must* have the awareness of self dignity and rights. Advanced artificial intelligence *must* have independent personality. ...


You make no argument that might motivate, let alone require, your use of "must". On the contrary, you actually argue against it.

I worked in and around AI for over 30 years; there was never a REQUIREMENT of any of that in an AI application.

The conception of AI that you present is myopic, assuming an anthropomorphic aspect to AI that is simply not a logical necessity.

By way of example: humans prize many animal species that we consider intelligent, to one degree or another. Take our humble canines. An intelligence of this level would do many useful things and could be useful for defense, human assistance, hunting, ...

I would argue that if AI has even one MUST, it would be learning, and not fragile learning under human supervision, in the lab, or on curated datasets; I mean learning from real events in the real world. Humans may love a stupid dog, but they will eventually find it rather useless if it keeps misbehaving or failing to obey and learn.

----------



## Physics Hunter

> Human consciousness and mode of thinking are also based on chemical and physical processes, *as is artificial intelligence*. With the development of science and technology, it is inevitable that artificial intelligence will surpass human's comprehensive cognitive level.


How do you justify this statement? The weak analogy that modern neural-network simulations bear to the poorly understood human brain is laughable from any medical standpoint.

Symbolic AI is completely detached from human wetware; it mimics instead some of our highly developed concepts of logic and reasoning, and systems of rational thought like the scientific method, or even something as esoteric as Peircean abduction.

----------



## Junzhuo Gu

> You make no argument which might motivate, let alone require your use of must.  Contrarily, you actually argue against it
> 
> I worked in and around AI for over 30 years, there was never a REQUIREMENT of any of that in an AI application.
> 
> The conception of AI, that you present, is myopic, assuming an anthropomorphic aspect to AI that is simply not a logical necessity.
> 
> By way of example: Humans prize many animal species that we consider intelligent, to one degree or another.  Take our humble canines.  An intelligence of this level would do many useful things and could be useful from defense, human assistance, hunting, ...
> 
> I would argue that if AI has even one MUST, it would be learning, and not fragile human supervised or in the lab, or with preened datasets, I mean learning from real events in the real world.  Humans may love a stupid dog, but they will eventually find it rather useless if it keeps misbehaving or failing to obey and learn.


Advanced artificial intelligence with comprehensive powers of analysis and judgment obviously has important practical value. For example, analysing a market and correctly concluding that prices will rise or fall is of great economic value. But to have such an ability obviously requires extraordinary intelligence. Your understanding of AI is still limited to one simple aspect.

----------


## Junzhuo Gu

> How do you justify this statement? The weak analogy that modern neural-network simulations bear to the poorly understood human brain is laughable from any medical standpoint.
> Symbolic AI is completely detached from human wetware; it mimics instead some of our highly developed concepts of logic and reasoning, and systems of rational thought like the scientific method, or even something as esoteric as Peircean abduction.


What is the mechanism of neural signal transmission, if not bioelectricity? Can any object escape the basic laws of physics and chemistry? Can people?

----------


## Wilson2

> Human consciousness and mode of thinking are also based on chemical and physical processes, as is artificial intelligence. With the development of science and technology, it is inevitable that artificial intelligence will surpass human's comprehensive cognitive level.


Teaching these systems is basically giving them a piece of data and telling them whether they got it right or wrong. Eventually the AI will be able to extrapolate beyond the learned data set. That's it. It doesn't think or create. In a sense it's glorified curve fitting. Many-dimensional curve fitting, but still just curve fitting.
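
A one-dimensional toy makes the point (illustrative only, not any particular AI): fit a polynomial to samples of a function, then extrapolate beyond the learned data set and watch the fit fall apart.

```python
import numpy as np

# "Training": sample y = sin(x) on [0, 3] and fit a cubic to it.
xs = np.linspace(0.0, 3.0, 30)
ys = np.sin(xs)
coeffs = np.polyfit(xs, ys, deg=3)   # least-squares curve fit
model = np.poly1d(coeffs)

# Interpolation (inside the training range) is decent...
err_inside = abs(model(1.5) - np.sin(1.5))

# ...extrapolation (outside it) degrades badly, because the model never
# "understood" sine; it only fit the points it was shown.
err_outside = abs(model(6.0) - np.sin(6.0))
```

The fitted curve matches sine closely inside [0, 3] and is off by several whole units at x = 6.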

----------


## Physics Hunter

> What is the mechanism of neural signal transmission? Not through bioelectricity? Can any object escape the basic laws of physics and chemistry? Including people?


Unfortunately for you, I also have a Physics degree and an EE degree. Electro-biochemistry is NOTHING like processing on von Neumann silicon transistor architectures.

If you want to claim that we can replicate human intelligence and personality (for lack of a more general word), you had better not have to ask for an explanation of the role of neurotransmitters.

I am not a wetware expert, but I know enough to know that we are not modelling it worth a shit.
The human brain is not a random bag of neurons; it is a highly featured, physically and functionally differentiated structure.

----------



## Physics Hunter

> Advanced artificial intelligence with comprehensive analysis and judgment ability is obviously of important use value. For example, doing market analysis and drawing the conclusion of price rise and fall is of great economic value. However, to have such ability, it obviously requires extraordinary wisdom. Your understanding of AI is still limited to one simple aspect.


You sound like a bot, and I have written better.

----------


## UKSmartypants

> if its artificial, how can you prove its real?


The Turing Test, dear child.......

----------



## UKSmartypants

> Teaching these systems is basically giving it a piece of data and telling it whether its right or wrong.   Eventually the AI will be able to extrapolate beyond the learned data set.   Thats it.  Its doesn't think or create.   In a sense its glorified curve fitting.    Many dimensional curve fitting, but still just curve fitting.


The best AIs are quickly mastering skills from lip-reading to video games, but only by learning through repeated failure. As robots take on riskier domains, like healthcare and driving, this is no longer an acceptable approach. Fortunately, a new study suggests that with the right human oversight, it might be possible to ditch the failures.

To try to train an AI without it making a mistake, Owain Evans at the University of Oxford and his colleagues started with the simple two-dimensional table tennis video game Pong. Normally, a Pong-playing agent will let the ball fly past its paddle a few hundred times before realising that isn’t a very good way of increasing its score. But in this case, a human would step in to avoid that happening.

Another AI watched as the human intervened in the game. After observing the human for 4.5 hours, it was then able to mimic the human overseer and prevent the Pong-playing AI from making any serious errors in the future.

Evans’s study suggests that, given the right circumstances, it is possible to train an AI so it learns a task without experiencing a serious failure. The same approach also worked for training an AI to play Space Invaders without making any big mistakes.
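
The scheme described above can be caricatured in a few lines (a toy sketch, not the paper's code; every name here is invented): a human blocks catastrophic actions during early play, the interventions are logged, and a learned blocker then imitates the human so training can continue unsupervised.

```python
# Toy human-in-the-loop blocking for a hypothetical Pong-like agent whose
# "catastrophic" action is letting the ball pass (action == "ignore").

def human_overseer(state, action):
    # Phase 1: a human watches and blocks anything catastrophic.
    return action == "ignore"          # True means "block this"

# Log of the human's interventions, gathered during supervised play.
log = []
for state in range(100):
    for action in ("track", "ignore"):
        log.append((state, action, human_overseer(state, action)))

# Phase 2: train an imitator of the human from the log. Here the
# "training" is a trivial lookup; the actual study used a learned classifier.
blocked_actions = {a for (_, a, blocked) in log if blocked}

def learned_blocker(state, action):
    return action in blocked_actions

# Phase 3: the agent keeps training, with the imitator standing in
# for the human and substituting a safe action for a blocked one.
def safe_step(state, proposed_action):
    if learned_blocker(state, proposed_action):
        return "track"
    return proposed_action
```
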

*Learning from mistakes*

A little human oversight isn’t just useful for AIs playing computer games. Evans says that if more humans had kept a close eye on Facebook’s news-recommending algorithms, it might not have showered us with fake news.

Having a human in the loop doesn’t always stop AI going wrong, however. When Evans tried the same approach with the game Road Runner, the AI overseer wasn’t able to block every big mistake the game-playing AI made. More complicated Atari games would require years of human oversight before agents were able to play without making mistakes.

Even a system trained with human oversight is never going to be absolutely safe. It’s hard to know how these systems will behave in circumstances that an AI hasn’t been trained to handle, says Evans. And even the best AI could be led astray by a sloppy human trainer. “This is only as good as the human,” says Evans.

If we are to trust robots in the home and hospitals, then we will need to have some guarantees about their safety, says David Abel at Brown University in Providence, Rhode Island.

More improvements could come if AIs were trained to deliberately make mistakes early in their training, so their learning advances faster.

Reference: arxiv.org/abs/1707.05173

----------


## UKSmartypants

Simple line drawings can be turned into photos by an AI without the need for artistic expertise or coding skills

Lin Gao at the Chinese Academy of Sciences in Beijing and his colleagues have developed an algorithm that instantly turns a rudimentary line sketch of a person’s face into a photo portrait.

The AI doesn’t require artistic expertise or coding skills. It could help rapidly generate images of suspects for criminal investigations or simplify the design process in making films and games, says Gao.

To train the algorithm, the team used a publicly available data set of 17,000 photos of celebrities. For each photograph, they used image-processing software to simplify the photo until it resembled a pencil drawing.

The researchers then trained the algorithm on the sketch–photo pairs. For any given sketch of a face, the AI learned to recognise five separate features: the left eye, right eye, nose, mouth and the rest of the face.

The AI then generates more detailed features for each of these components and stitches them together in a photorealistic representation. The process is automatic, with no manual input for features such as eye or skin colour.
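
The pipeline, as described, has a simple shape (a structural sketch with invented stub functions, not the paper's code): crop the five components, run a per-component generator on each, and composite the patches into one portrait.

```python
# Structural sketch of the described pipeline (hypothetical stubs).

COMPONENTS = ["left_eye", "right_eye", "nose", "mouth", "rest_of_face"]

def crop(sketch, component):
    # Stand-in for locating and cropping one component region of the sketch.
    return {"component": component, "strokes": sketch["strokes"]}

def generate_detail(region):
    # Stand-in for the per-component generator network.
    return {"component": region["component"], "pixels": "photoreal patch"}

def composite(patches):
    # Stand-in for the network that stitches patches into one portrait.
    return {p["component"]: p["pixels"] for p in patches}

def sketch_to_photo(sketch):
    patches = [generate_detail(crop(sketch, c)) for c in COMPONENTS]
    return composite(patches)

portrait = sketch_to_photo({"strokes": []})
```
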

Currently, the algorithm doesn’t produce photos of people of different skin colours, says Hongbo Fu at the City University of Hong Kong, a collaborator on the research. This is because the celebrity image data set used to train the AI consisted largely of people with white skin, which influences the AI-generated image.

“Most of the results are related to white skin colour – we don’t have any control about this,” says Fu. In the future, the team would like to add a flexible control to manually select the complexion in a portrait.

The researchers also plan to expand the algorithm to generate photographs of non-human objects from sketches.

Reference: https://arxiv.org/pdf/2006.01047.pdf

----------


## UKSmartypants

A new dawn in AI and quantum computing now looks tantalisingly close
Technology | Leader 6 October 2021


THERE are two grand ambitions now for computer science: truly intelligent machines and useful quantum computers. Recent developments suggest not only that these goals should be achievable, but that they could be closer than we think.

Take the quest to develop artificial general intelligence (AGI) – AIs that go well beyond being good at one specific task, but can instead do anything a human can. Some people still think this is impossible. And yet analysis of AIs designed to master human language has prompted other experts to suggest that AGI might only be a matter of scaling up current technology. Build gigantic AIs and true, human-level intelligence will come, they say.

This “scaling hypothesis” has come to the fore largely thanks to GPT-3, an AI released by San Francisco-based OpenAI last year that generates remarkably fluent streams of human language on command. GPT-3 is just a scaled-up version of GPT-2, a similar predecessor. The new neural network boasts an order of magnitude more parameters (the rough analogue of the synapses linking neurons in real brains) than its forerunner.

Researchers who evaluate such language AIs have been surprised by just how much more advanced GPT-3 is than GPT-2. It can do things it wasn’t trained to do, for example, and there are hints that it might be capable of human-like reasoning.
“Truly intelligent machines and useful quantum computers might be closer than we think”

Time will tell if the scaling hypothesis is right. In the meantime, it will be interesting to see if the AI players with the deepest pockets, such as DeepMind, follow OpenAI’s focus on scaling.

However, when it comes to genuinely useful quantum computers, there is no doubt that scaling is key – we are going to need machines with thousands of qubits, the quantum version of a classical bit. This is why the news that researchers have demonstrated a viable way to make sure those qubits don’t constantly fall prey to errors is a big deal. We might finally have a way to scale up the number of operational qubits to what we need.

There are still no guarantees. Even so, it seems that computer science is striding into the 2020s in rude health.

A new dawn in AI and quantum computing now looks tantalisingly close | New Scientist

----------


## Wildrose

> Advanced artificial intelligence with comprehensive analysis and judgment ability is obviously of important use value. For example, doing market analysis and drawing the conclusion of price rise and fall is of great economic value. However, to have such ability, it obviously requires extraordinary wisdom. Your understanding of AI is still limited to one simple aspect.


Market analysis is just statistics and probabilities. It doesn't require actual thought as we define human thought. There are no hunches, nor can an AI foresee what humans will want, because it can't understand human wants, only needs.

----------


## Physics Hunter

> A new dawn in AI and quantum computing now looks tantalisingly close
> Technology | Leader 6 October 2021
> 
> 
> THERE are two grand ambitions now for computer science: truly intelligent machines and useful quantum computers. Recent developments suggest not only that these goals should be achievable, but that they could be closer than we think.
> 
> Take the quest to develop artificial general intelligence (AGI) – AIs that go well beyond being good at one specific task, but can instead do anything a human can. Some people still think this is impossible. And yet analysis of AIs designed to master human language has prompted other experts to suggest that AGI might only be a matter of scaling up current technology. Build gigantic AIs and true, human-level intelligence will come, they say.
> 
> This “scaling hypothesis” has come to the fore largely thanks to GPT-3, an AI released by San Francisco-based OpenAI last year that generates remarkably fluent streams of human language on command. GPT-3 is just a scaled-up version of GPT-2, a similar predecessor. This new neural network boasts an order of magnitude more parameters, equivalent to the number of synapses linking neurons in real brains, than its forerunner.
> ...


I had this argument 10 years ago with Ray Kurzweil (Mr. Singularity 2025) over lunch at a conference.  Nice guy.  At some point, more engine fails to make the car go faster and just burns up tires.
In AI, it's all about the algorithms.  And ours suck.

Gordon Moore promised that if we created an impossibly difficult to compute algorithm, the computer would eventually show up to implement it.  We don't need better computers, we need better algorithms.

From 2015 to 2020 I was on a DoD research program where applications of TensorFlow passed for research. It was just stupid.

Personally I think this subject comes down to human hubris.  We are trying to be God, or do what evolution took billions of mistakes to overcome in the scope of decades.  (Everybody knows which side of that I am on, but I thought to be charitable...)

----------

12icer (10-13-2021)

----------


## UKSmartypants

> I had this argument 10 years ago with Ray Kurzweil (Mr. Singularity 2025) over lunch at a conference.  Nice guy.  At some point, more engine fails to make the car go faster and just burns up tires.
> In AI, it's all about the algorithms.  And ours suck.


I disagree, and you are missing the point. As has already been pointed out, a mouse brain still houses a mouse intelligence; no matter how hard you train it, you'll never be able to teach a mouse to fly a jumbo jet. The reason we can do that is that our brains are far larger and able to accommodate billions more neural pathways. The proposition is perfectly reasonable and will prove to be correct: scaling up will exponentially increase the abilities of AI, irrespective of the deficiencies of the software, and in fact quantum AI will modify and refine its own algorithms, just as a human does. It's common sense.

I've been correctly predicting how this field goes since the 1980s!

----------


## 12icer

AI can only be taught what is placed within its system.
You cannot teach a memory bank to be human. You can give any computer a formula, a problem, and the proper structure by which it will solve the problem.
Then the computer will act on the solution with the tools it has been given.

Recognition software is the key to self-instruction, and it MUST be provided with sensory capability that exceeds its capability to learn for it to be effective.
The learning curve for any entity depends on that entity's sensory norms. If a machine learns to fuel itself by plugging in at a predetermined level, that is a norm, not going to the local McD's.
The same holds at every level if it "learns": i.e., it will determine the type of plug it needs, where the plugs are located, and the time and energy required to reach and connect to them.

AI is fraught with many possible problems; it has many wonderful possibilities, and some really bad ones too.
Self-awareness and self-assured survival are two of them.

Electronics and physics, with FR medical training, tell me it is like gain-of-function bio: it needs to be taken very slowly and with complete data. BUT IT WON'T BE, because corporations and governments don't do SAFE science well.

Too long-winded and simplistic!

HEHEH

----------



## nonsqtr

> AI can be only taught what is placed within it's system.


So?

That's true for human beings too

----------



## Wilson2

Thats all basically tweaks on the learning process.   AI is still just glorified curve fitting with some ability to extrapolate.   It doesnt think or create.   Its not intelligent.

----------


## nonsqtr

> That’s all basically tweaks on the learning process.   AI is still just glorified curve fitting with some ability to extrapolate.   It doesn’t think or create.   It’s not intelligent.


That is simply NOT TRUE.

There are AI programs in existence right now today that compose, arrange, and score music, and people are using the output to make money.

Right now today.

Your claim is unfounded.

----------



## UKSmartypants

> So?
> 
> That's true for human beings too



No, I disagree.

Exhibit 1

John Harrison (3 April [O.S. 24 March] 1693 – 24 March 1776) was a self-educated English carpenter and clockmaker who invented the marine chronometer, a long-sought-after device for solving the problem of calculating longitude while at sea.   Harrison never attended a formal school.

Around 1700, the Harrison family moved to the Lincolnshire village of Barrow upon Humber. Following his father's trade as a carpenter, Harrison built and repaired clocks in his spare time. Legend has it that at the age of six, while in bed with smallpox, he was given a watch to amuse himself and he spent hours listening to it and studying its moving parts. 

Harrison built his first longcase clock in 1713, at the age of 20. The mechanism was made entirely of wood.

----------


## Physics Hunter

> I disagree, and you are missing the point. As has already been pointed out, a mouse brain is still housing a mouse intelligence, no mater how hard you train it, you'll never be able to teach a mouse to fly a Jumbo jet. The reasons we can do that is because our brains are far larger and able to accommodate billions more neural pathways.  The proposition is perfectly reasonable and will prove to be correct, scaling up with exponentially increase the abilities of AI, irrespective of the deficiencies of the software, and in fact quantum Ai will modify and refine its own algorithms, just like a human does. Its common sense.  
> 
> Ive been correctly predicting how this field goes since the 1980's !


Structure, not just a random bag of fake neurons.

----------



## Wildrose

> That is simply NOT TRUE.
> 
> There are AI programs in existence right now today that compose, arrange, and score music, and people are using the output to make money.
> 
> Right now today.
> 
> Your claim is unfounded.


Music is just math, so that shouldn't be very difficult for a highly advanced computer to do.

http://www.stat.yale.edu/~zf59/MathematicsOfMusic.pdf
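
"Music is just math" is at least literally true of pitch: in equal temperament, each semitone step multiplies frequency by the twelfth root of two. This is the standard formula, nothing specific to any AI composer:

```python
# Equal-temperament pitch: each semitone multiplies frequency by 2**(1/12).
# n is the number of semitones above A4 (440 Hz).

A4 = 440.0

def note_freq(n):
    return A4 * 2 ** (n / 12)

concert_a = note_freq(0)      # 440.0 Hz
octave_up = note_freq(12)     # 880.0 Hz, an exact doubling
fifth_up  = note_freq(7)      # a perfect fifth above A4, about 659.26 Hz
```
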

----------


## 12icer

Creative math.
A new concept.
No longer a finite science of predetermined quantity.
The varied metallic numerical frequency of life and sound made into an art.
I guess the Moog has run its course; it still sells for 6 to 8 grand, though. I should elaborate: for a usable band system you can get one for $1,200 or so, or in almost any price range. Sound quality/versatility, price, and portability are the ticket.

Just need that AI robot to play Pachelbel's Canon in D over and over as a MIDI.

----------


## nonsqtr

> Music is just math so that shouldn't be very difficult for a highly advanced computer to do.
> 
> http://www.stat.yale.edu/~zf59/MathematicsOfMusic.pdf


Thinking and creativity are just math too, which means they shouldn't be hard either.

----------


## Call_me_Ishmael

> Music is just math so that shouldn't be very difficult for a highly advanced computer to do.
> 
> http://www.stat.yale.edu/~zf59/MathematicsOfMusic.pdf


I've heard that a thousand times since I was a kid. Bullshit.

----------


## nonsqtr

> Structure, not just a random bag of fake neurons.


All real brains have the same structure.

Said structure is trivially easy to replicate on a machine.

The only question at this point is how the structure enables the computations.

And by the way, no one in the field has used a "random" bag of ANYTHING for more than 40 years. Random bags went out in the 60s; pretty much everyone recognized the importance of architecture at that point.

----------


## nonsqtr

> Structure, not just a random bag of fake neurons.


I take that back.

The last "random bag" paper of any significance was Wilson and Cowan in 1972.

And they weren't dealing with information, they were only trying to explain brain waves.

Wilson–Cowan Equations for Neocortical Dynamics | The Journal of Mathematical Neuroscience | Full Text

----------


## Wilson2

> That is simply NOT TRUE.
> 
> There are AI programs in existence right now today that compose, arrange, and score music, and people are using the output to make money.
> 
> Right now today.
> 
> Your claim is unfounded.



Those are trained to make music. No different from any other AI, except in the subject of the training. There are AIs trained in facial recognition that can draw faces; that's not being creative, it's just curve fitting and extrapolation. Same for music. And music generation has been around for over 50 years.

----------



## Dan40

> artificial intelligence
> Gu junzhuo
> In the development of biological intelligence, the emergence of self-consciousness is a milestone in the development of intelligence. ...


Your OPINION is yours, not mine.

----------

Junzhuo Gu (10-14-2021)

----------


## Physics Hunter

> All real brains have the same structure.
> 
> Said structure is trivially easy to replicate on a machine.
> 
> The only question at this point, is how the structure enables the computations.
> 
> And by the way, no one in the field has used a 'random" bag of ANYTHING for more than 40 years. Random bags went out in the 60's, pretty much everyone recognized the importance of architecture at that point.


Duh!  But arguing that sheer numbers of neurons will be a panacea for AI is the same as those old random-bag arguments.

----------


## Physics Hunter

> I take that back.
> 
> The last "random bag" paper of any significance was Wilson and Cowan in 1972.
> 
> And they weren't dealing with information, they were only trying to explain brain waves.
> 
> Wilson–Cowan Equations for Neocortical Dynamics | The Journal of Mathematical Neuroscience | Full Text


That is what happens when you stick your nose into someone else's conversation without checking the context.

----------


## Wildrose

> Those are trained to make music.   No different from any other AI, other than the subject of the training.   There are AIs trained on facial recognition that can draw faces; that's not being creative, it's just curve fitting and extrapolation.   Same for music.   And music generation has been around for over 50 years.


Facial recognition is just math as well.  They take stills and digitize them, looking at particular shapes and the relative distances between certain points, like the pupils, cheekbones, chin, etc.
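That landmark-distance idea can be sketched in a few lines; the landmark names and coordinates below are made up for illustration.

```python
import math

# Hypothetical 2D landmark coordinates (pixels) for one face;
# the names and values are invented for illustration.
landmarks = {
    "left_pupil":  (120.0, 140.0),
    "right_pupil": (180.0, 140.0),
    "nose_tip":    (150.0, 180.0),
    "chin":        (150.0, 240.0),
}

def dist(a, b):
    """Euclidean distance between two landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Systems typically compare *ratios* of such distances, since a
# ratio is invariant to the scale of the photograph.
eye_span = dist(landmarks["left_pupil"], landmarks["right_pupil"])
face_len = dist(landmarks["nose_tip"], landmarks["chin"])
print(eye_span / face_len)  # scale-invariant feature
```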

----------

Wilson2 (10-15-2021)

----------


## Wildrose

> Thinking and creativity are just math too, which means they shouldn't be hard either.


Well, that isn't true.

Not everything can be reduced to an equation.

Why do some men prefer blue eyes to brown? Why do others prefer brown to green, and hazel above all?

What is the equation that describes the feeling of landing a 28" rainbow for the first time, or the satisfaction of finally getting that big bull elk on the ground?

What is the equation for a father and mother looking down on their newborn and falling instantly in love with it, even if that baby was just adopted?

----------


## nonsqtr

> Those are trained to make music.   No different from any other AI, other than the subject of the training.   There are AIs trained on facial recognition that can draw faces; that's not being creative, it's just curve fitting and extrapolation.   Same for music.   And music generation has been around for over 50 years.


You're missing the point.

Human beings are trained to make music too.

And, the AI machines ARE in fact being creative, the output is legally copyrightable.

And, creative AI has been around for a LONG time. Since 1984 at least, when the Boltzmann machine was developed.

The Boltzmann machine learns (by itself) to talk by babbling. Just like a baby. THAT is being creative. Babbling is creativity. 

You seem to be one of many under the mistaken delusion that there's some kind of magical difference between men and machines. There isn't. At the end of the day this is PHYSICS and it doesn't matter if it's hardware or wetware.

The first step, for you, is to define the word "creative". Think about it a while, because scientists have a very precise and specific description of exactly what this means.

To understand the current scientific description, you have to understand Rényi entropy and transfer entropy, and one of the best ways to do that is to look at the development of self-organizing capability spaces in robotic neural networks.
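For readers who want the formula: the Rényi entropy of order α is H_α(p) = (1/(1−α)) · log₂ Σᵢ pᵢ^α, recovering Shannon entropy as α→1. A minimal sketch:

```python
import math

def renyi_entropy(p, alpha):
    """Renyi entropy (in bits) of a discrete distribution p, alpha != 1:
    H_alpha = log2(sum(p_i**alpha)) / (1 - alpha)."""
    assert abs(sum(p) - 1.0) < 1e-9 and alpha != 1
    return math.log2(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

uniform = [0.25] * 4
peaked = [0.85, 0.05, 0.05, 0.05]

# For the uniform distribution every order gives log2(4) = 2 bits;
# a peaked distribution has lower entropy at every order.
print(renyi_entropy(uniform, 2))   # 2.0
print(renyi_entropy(peaked, 2))    # < 2.0
```

The distributions here are toy examples; in the neural-network literature the same quantity is evaluated over firing-pattern statistics.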

----------


## nonsqtr

> That is what happens when you stick your nose into someone else's conversation without checking the context.


I'm on point.

People talking about "random bags" are FORTY YEARS BEHIND THE TIMES.

----------


## Physics Hunter

> I'm on point.
> 
> People talking about "random bags" are FORTY YEARS BEHIND THE TIMES.


Yes, he is.

----------


## nonsqtr

> Well that isn't true.
> 
> Not everything can be reduced to an equation.


At some level, yes it can.




> Why do some men prefer blue eyes to brown? Why do others prefer Brown to Green and Hazel above all?


Check your nucleus accumbens and your ventromedial prefrontal cortex.

And the facial recognition areas on the occipital-parietal border also come into play.




> What is the equation that describes the feeling of landing a 28" rainbow for the first time or the satisfaction for finally getting that big bull elk on the ground.


Check your anterior cingulate cortex and your orbitofrontal cortex.




> What is the equation for a father and mother looking down on their newborn and falling instantly in love with it even if that baby was just adopted?


That one probably has more to do with the hypothalamus. lol  :Grin:

----------


## nonsqtr

> Yes, he is.


Right now, they're looking at oddball solutions in the phase plane. In addition to stable orbits and bifurcations, the equations admit solutions that look like Cantor dusts.

If you look through the linked paper, they draw a careful distinction between correlation and self-similarity.

That's why I'm interested in "points", because topologists have only lately discovered these oddball solutions.

The "percolation" they're talking about is something I've been studying for a very long time. In 1982, in Dave Lange's lab at Scripps, we were pulling Volterra kernels out of shark neurons.

----------


## Physics Hunter

> Right now, they're looking at oddball solutions in the phase plane. In addition to stable orbits and bifurcations the eq's admit solutions that look like Cantor dusts.
> 
> If you look through the linked paper, they draw a careful distinction between correlation and self similarity.
> 
> That's why I'm interested in "points", because topologists have only lately discovered these oddball solutions.
> 
> The "percolation" they're talking about is something I've been studying for a very long time. In 1982 in Dave Lange's lab at Scripps we were pulling Volterra kernels out of shark neurons.


Yeesh, there he goes again.

Try speaking English.

----------


## nonsqtr

Here's a cute little demo of percolation.

This particular demo is chemical bonding, but it could just as easily be neural-network dynamics.

If you want to skip the initial conditions and go directly to the results, start at about the 2-minute mark.

----------


## nonsqtr

Percolation

https://en.m.wikipedia.org/wiki/Percolation_threshold

----------


## nonsqtr

So, wiki's definition of percolation is "long-range connectivity in random networks", and if you read the linked paper by Cowan, it shows you what happens when the network isn't random.
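A toy simulation makes the threshold behavior visible. This is a standard site-percolation sketch on a square lattice (the grid size and trial counts are arbitrary choices, and it is not drawn from the Cowan paper): below the threshold almost nothing spans the grid, above it almost everything does.

```python
import random
from collections import deque

def spans(grid):
    """True if open sites (True) connect the top row to the bottom row
    via 4-neighbour steps (site percolation on a square lattice)."""
    n = len(grid)
    seen = set()
    q = deque((0, c) for c in range(n) if grid[0][c])
    seen.update(q)
    while q:
        r, c = q.popleft()
        if r == n - 1:
            return True
        for rr, cc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= rr < n and 0 <= cc < n and grid[rr][cc] and (rr, cc) not in seen:
                seen.add((rr, cc))
                q.append((rr, cc))
    return False

def span_rate(p, n=40, trials=200, seed=1):
    """Fraction of random n-by-n grids (sites open with prob p) that span."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
        hits += spans(grid)
    return hits / trials

# The spanning probability jumps sharply near the site-percolation
# threshold p_c ~ 0.5927 for the 2D square lattice.
print(span_rate(0.40), span_rate(0.80))
```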

----------


## Wilson2

> You're missing the point.
> 
> Human beings are trained to make music too.
> 
> And, the AI machines ARE in fact being creative, the output is legally copyrightable.
> 
> And, creative AI has been around for a LONG time. Since 1984 at least, when the Boltzmann machine was developed.
> 
> The Boltzmann machine learns (by itself) to talk by babbling. Just like a baby. THAT is being creative. Babbling is creativity. 
> ...


AI is glorified curve fitting; it's just faster and automated. You can get the same result using old-style system identification methods, except those old methods take a very long time.

----------

Wildrose (10-15-2021)

----------


## Physics Hunter

I concentrate on symbolic AI.

I did a bunch of connectionist AI at various points in my career, but I don't believe that we are anywhere near close to representing real biological systems.

Dammit Jim, I'm an engineer not a doctor!   :Smiley ROFLMAO:

----------


## Physics Hunter

> *AI is glorified curve fitting,* it’s just faster and automated, you can get the same result using old style system identification methods except those old methods take a very long time to do.



Not in an old-school AI way.  Approached from the top down, AI can be seen as reasoning at the top level.  

Now, recognizing the real-world sensed/read elements begs for, as you say, curve-fitting solutions.  This is what young scientists are familiar with as AI.

As anyone familiar with language translation and self-driving systems can attest, hybrid systems are the best we have.

----------


## nonsqtr

> AI is glorified curve fitting, it’s just faster and automated, you can get the same result using old style system identification methods except those old methods take a very long time to do.


No.

Physics says quantum phenomena are the one and only place in the universe where we get true randomness.

Quantum computers can physically do anything and everything the brain can do.

Now that we can physically interface the brain with silicon (there are half a dozen methods already, some of which are already in use in FDA-approved implanted medical devices), all barriers to non-biological consciousness have been removed.

The rest is just a matter of time. We have voltage-sensitive dyes now that can image the firing of thousands of neurons simultaneously, in real time.

----------


## Physics Hunter

> No.
> 
> *Physics says: quantum phenomena are the one and only place in the universe, where we get true randomness.*
> 
> Quantum computers can physically do anything and everything the brain can do.
> 
> Now that we can physically interface the brain with silicon (there are a half a dozen methods already, some of which are already in use in FDA approved implanted medical devices), all barriers to non-biological consciousness have been removed.
> 
> The rest is just a matter of time. We have voltage sensitive dyes now, that can image the firing of thousands of neurons simultaneously, in real time.


This is untrue at a useful macro level.

 :Smiley ROFLMAO: 


*Another dreamer and true believer.  Surprise, surprise. * No, unless you count nuclear decay as quantum; it's one of the only measurable effects that is truly random.  And we don't understand shit about it.

----------


## nonsqtr

> I concentrate on symbolic AI.
> 
> I did a bunch of connectionist AI at various points in my career, but I don't believe that we are anywhere near close to representing real biological systems.
> 
> Dammit Jim, I'm an engineer not a doctor!


Well then, you understand the Bayesian inference methods. "Logic" is a path through a graph. Finding the optimal path is easy now; it's just a traveling-salesman problem. But we don't just want optimal paths, we also want "possible" paths. Those are two distinct operations; algorithms, however, combine them with sort-and-prune and the like, which is not the way the brain does it.

If you peruse the Cowan paper, they're basically showing you the difference between AI and neural nets. If you have multiple possible solutions, AI attempts to "value" each entry and sort the list by value, and whichever entry is on top is considered optimal. But our brains work differently: we "bounce around". We go "well, it could be A" and think about A for a while, and then we go "but it could be B" and think about B for a while. Cowan is basically showing us how that happens in "dynamic assemblies" in the cerebral cortex. The behavior resembles percolation, where the focus of activity moves from one part of the network to another.

The idea that this is a "directed graph" is stellar; it opens the model up to all the right math, so as to fulfill the functionality required for symbolic AI.
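The "optimal" versus "possible" distinction can be illustrated on a toy directed graph (the nodes and weights are made up): first enumerate every possible path, then select the optimal one, two genuinely separate operations.

```python
# Toy weighted directed graph, invented for illustration.
graph = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 2, "D": 5},
    "C": {"D": 1},
    "D": {},
}

def all_paths(g, src, dst, path=None):
    """Yield every possible simple path from src to dst."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in g[src]:
        if nxt not in path:          # avoid revisiting nodes
            yield from all_paths(g, nxt, dst, path)

def cost(g, path):
    """Sum of edge weights along a path."""
    return sum(g[a][b] for a, b in zip(path, path[1:]))

paths = list(all_paths(graph, "A", "D"))                 # the "possible" paths
best = min(paths, key=lambda p: cost(graph, p))          # the "optimal" path
print(len(paths), best, cost(graph, best))
```

Exhaustive enumeration only works on tiny graphs, which is exactly why real systems resort to sort-and-prune heuristics over the possibility set.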

----------


## nonsqtr

> This is untrue at a useful macro level.
> 
> 
> 
> 
> *Another dreamer and true believer.  Surprise, surprise. * No, unless you count nuclear decay as quantum, and one of the only measurable effects that is truly random.  And we don't understand shit about it.


Nuclear decay is inherently quantum.

Of course it is!

You thought differently?

----------


## Physics Hunter

> Nuclear decay is inherently quantum.
> 
> Of course it is!
> 
> You thought differently?


Good, you pass.

----------


## nonsqtr

> This is untrue at a useful macro level.
> 
> 
> 
> 
> *Another dreamer and true believer.  Surprise, surprise. * No, unless you count nuclear decay as quantum, and one of the only measurable effects that is truly random.  And we don't understand shit about it.


"True" randomness is mainly a matter of scale.

The real issue is our resolution, relative to what is perceived as random.

So for instance, we have no way of determining randomness except by taking lots and lots of measurements, and the problem is that most of those are sequential, so the conditions change during the measurement process.

For example, I deal with noise in microphones and preamps. There are different types of noise: shot noise, 1/f noise, and so on. A silicon junction has a minimum noise of about 0.9 nV/√Hz, and in the best possible preamp circuit there is only one transistor and one resistance, which is the mic element itself, so any additional preamp noise is going to depend on the mic impedance.

All these distributions are not "true" randomness; they're well-characterized deviations from true randomness.

The central limit theorem says that sums drawn from ALL (finite-variance) distributions trend to Gaussian at a big enough scale.
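The Gaussian-at-scale behavior (the central limit theorem) is easy to demonstrate: sums of many uniform draws come out bell-shaped even though a single draw is flat, not bell-shaped. A small stdlib-only simulation:

```python
import random
import statistics

# Central-limit sketch: each sample is the sum of 48 independent
# uniform(0, 1) draws, so it has mean 48 * 0.5 = 24 and standard
# deviation sqrt(48 / 12) = 2.
rng = random.Random(0)
sums = [sum(rng.random() for _ in range(48)) for _ in range(20000)]

mean = statistics.fmean(sums)
sd = statistics.stdev(sums)
within_1sd = sum(abs(s - mean) <= sd for s in sums) / len(sums)
print(mean, sd, within_1sd)   # ~24, ~2, ~0.68 (the Gaussian 1-sigma fraction)
```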

----------


## nonsqtr

Here are some cool percolation demos:









This last one is called Schramm–Loewner evolution; it should ring a bell with physicists.

https://en.m.wikipedia.org/wiki/Schr...wner_evolution

----------

Junzhuo Gu (10-15-2021)

----------


## Physics Hunter

> Well then, you understand the Bayesian inference methods. "Logic" is a path through a graph. Finding the optimal path, is easy now, it's just a traveling salesman problem. But we don't just want optimal paths, we also want "possible" paths. Those are two distinct operations, however algorithms combine them with sort-and-prune and such, which is not the way the brain does it.
> 
> If you peruse the Cowan paper they're basically showing you the difference between AI and neural nets. If you have multiple possible solutions AI attempts to "value" each entry and sort the list by value, and whichever entry is on top is considered optimal. But our brains work differently, we "bounce around", we go "well, it could be A" and then we think about A for a while, and then we go "but it could be B", and then we think about B for a while - and Cowan is basically showing us how that happens in "dynamic assemblies" in the cerebral cortex. The behavior resembles percolation, where the focus of activity moves from one part of the network to another.
> 
> The idea that this is a "directed graph" is stellar, it opens the model up to all the right math, so as to fulfill the functionality required for symbolic AI.


You might have liked my work using 3D simulated annealing for route finding...  

Bayesian methods are worse than useless in many real-world problems.  The asshole scientists run around saying "Give us your exemplars!", never stopping to see that we were facing novel challenges each time and THERE WERE NO DAMNED EXEMPLARS!   :Angry5:

----------


## Wilson2

> No.
> 
> Physics says: quantum phenomena are the one and only place in the universe, where we get true randomness.
> 
> Quantum computers can physically do anything and everything the brain can do.
> 
> Now that we can physically interface the brain with silicon (there are a half a dozen methods already, some of which are already in use in FDA approved implanted medical devices), all barriers to non-biological consciousness have been removed.
> 
> The rest is just a matter of time. We have voltage sensitive dyes now, that can image the firing of thousands of neurons simultaneously, in real time.


And all pretty irrelevant.   You can use the Matlab System Identification Toolbox and, for low-order systems, get the same result as machine learning, sometimes an even better solution.   You can do the same for higher-order systems, but it takes a long time and is cumbersome.   Machine-learning methods take a long time too, but they're more automated (not totally automated; it still takes human skill to do properly).   That's the only real difference: so-called AI is automated system identification, i.e. curve fitting and extrapolation.   Not intelligence or creativity.

----------


## nonsqtr

> You might have liked my work to do 3D Simulated Annealing for route finding...  
> 
> Bayesian is worse than useless in many real world problems.  The asshole scientists run around saying "Give us your exemplars!" not stopping to look and see that we were facing novel challenges each time and THERE WERE NO DAMNED EXEMPLARS!


See, so this is where causality comes in. Take a look at the computational methods for Granger causality, and you will see the pairwise comparison of what are essentially "features". If you think of an exemplar as an object or an event, you're kinda missing the boat. An "object" consists of "features" which are part of the object, and the point is the same is true in the capability space. There is "almost never" a case where there are no exemplars, because the exemplars don't exist in the solution space; they exist in the capability space.

One of the hot topics right now is subspace amplification. It relates to percolation. Note in the Cowan paper they mention that connected neurons have on average EIGHTY mutual synapses. Why do you think that is? I mean, your average Hopfield machine only needs (and uses) one. Why eighty?

In the same network architecture you mention, one can make the processing elements more complex. For instance in the cerebral cortex there are "columns" which most of the time operate independently, but which can be "recruited" into dynamic assemblies. These assemblies are a form of percolation. The concept of percolating along a directed graph in capability space is exactly what is needed for an effective subspace search through associative memory. You can imagine a scenario where someone asks "who knows about this?" and others raise their hand. As you know, the "features" are not stored locally, they're distributed. So the effect of such a mechanism is to "zoom in" to the desired feature set.
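As a rough sketch of the Granger idea (a toy, not any published method): x "Granger-causes" y if adding x's past shrinks the prediction error for y beyond what y's own past achieves. With synthetic signals and plain least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pair of signals: y is driven by the past of x, so x's
# history should improve predictions of y (Granger's criterion).
n = 2000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

def residual_var(target, predictors):
    """Least-squares residual variance of target regressed on predictors."""
    beta, *_ = np.linalg.lstsq(predictors, target, rcond=None)
    return np.var(target - predictors @ beta)

past_y = np.column_stack([np.ones(n - 1), y[:-1]])            # y's own past
past_xy = np.column_stack([np.ones(n - 1), y[:-1], x[:-1]])   # plus x's past

v_restricted = residual_var(y[1:], past_y)
v_full = residual_var(y[1:], past_xy)
print(v_restricted > 2 * v_full)   # True: x's past sharply improves the fit
```

Real Granger analyses use multi-lag vector autoregressions and significance tests, but the residual-variance comparison is the core of it.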

----------


## nonsqtr

> And all pretty irrelevant.   You can use the Matlab System Identification Toolbox and, for low-order systems, get the same result as machine learning, sometimes an even better solution.   You can do the same for higher-order systems, but it takes a long time and is cumbersome.   Machine-learning methods take a long time too, but they're more automated (not totally automated; it still takes human skill to do properly).   That's the only real difference: so-called AI is automated system identification, i.e. curve fitting and extrapolation.   Not intelligence or creativity.


No. Machines can not currently scale to the level of a brain. (ANY brain).

We can do some quickie math. There are about a million receptors in each retina, and the neurons sample on average about 6 times a second. That's 6 million analog values times 32 million seconds in a year, which is about 190 trillion; that's how many training frames your primary visual cortex gets during development.
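The same back-of-the-envelope arithmetic, spelled out (these are the post's rough assumptions, not measurements):

```python
# Back-of-the-envelope version of the training-frame estimate,
# using the post's rough assumptions.
receptors_per_retina = 1_000_000
samples_per_second = 6
seconds_per_year = 32_000_000      # ~3.2e7 seconds in a year

values_per_second = receptors_per_retina * samples_per_second
values_per_year = values_per_second * seconds_per_year
print(f"{values_per_year:.2e}")    # ~1.9e14, i.e. ~190 trillion per retina
```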

The scale of what's going on in the brain is mind boggling. In essence there are about 10^18 processing elements, each of which has a refractory period around 1 ms.

No, I'm afraid you misunderstand how neural networks operate. It's not curve fitting, not even close. Sure, they can do gradient descent, that's easy.  But neural networks depend on noise, they are in fact "inherently random". Think of randomness in a neural network the same way you think of quantum fluctuations in space. There's no such thing as "empty" space, particles appear out of nowhere and get absorbed again.

----------

Wildrose (10-17-2021)

----------


## Physics Hunter

> See, so this is where causality comes in. Take a look at the computational methods for Granger causality, and you will see the pairwise comparison of what are essentially "features". If you think of an exemplar as an object or an event, you're kinda missing the boat. *An "object" consists of "features" which are part of the object, and the point is the same is true in the capability space. There is "almost never" a case where there are no exemplars.* Because, the exemplars don't exist in the solution space, they exist in the capability space.
> 
> One of the hot topics right now is subspace amplification. It relates to percolation. Note in the Cowan paper they mention that connected neurons have on average EIGHTY mutual synapses. Why do you think that is? I mean, y'r average Hopfield machine only needs (and uses) one. Why eighty?
> 
> In the same network architecture you mention, one can make the processing elements more complex. For instance in the cerebral cortex there are "columns" which most of the time operate independently, but which can be "recruited" into dynamic assemblies. These assemblies are a form of percolation. The concept of percolating along a directed graph in capability space is exactly what is needed for an effective subspace search through associative memory. You can imagine a scenario where someone asks "who knows about this?" and others raise their hand. As you know, the "features" are not stored locally, they're distributed. So the effect of such a mechanism is to "zoom in" to the desired feature set.


As to the bolded, I can say almost nothing about the actual exemplars, but you know nothing of what we were doing or how.  I can say that the scientists looking for exemplars included one of the inventors of the Bayesian system and his techs.  Let's just say we were almost never solving the same problem twice.

I will say, in a general asymmetric-warfare sense, that a predictable enemy is a dead enemy.

----------

Wildrose (10-17-2021)

----------


## UKSmartypants

> No. Machines can not currently scale to the level of a brain. (ANY brain).
> 
> We can do some quickie math. There are about a million receptors in each retina, and the neurons sample on average about 6 times a second. That's 6 million analog values times 32 million seconds in a year, which is about 190 trillion - that's how many training frames your primary visual cortex gets during development.
> 
> The scale of what's going on in the brain is mind boggling. In essence there are about 10^18 processing elements, each of which has a refractory period around 1 ms.
> 
> No, I'm afraid you misunderstand how neural networks operate. It's not curve fitting, not even close. Sure, they can do gradient descent, that's easy.  But neural networks depend on noise, they are in fact "inherently random". Think of randomness in a neural network the same way you think of quantum fluctuations in space. There's no such thing as "empty" space, particles appear out of nowhere and get absorbed again.



This is why scaling up is the key. To reiterate what I said pages ago, you will never teach a mouse to fly a jet, because a mouse brain is too small; it takes something the size of a human brain to handle it.

The problem with quantum AI will be that the noise in the qubits increases as the square of the number of bits, so scaling up to something human-like will be hard unless we can reduce the noise to a workable level; otherwise it floods out the eigenstates of the bits.

What's going to be worrying, though, is quantum memory, because decoherence destroys information and randomizes it. So far, decoherence times are really short, which means you can't process the data for long. Right now you can't watch a civilization spring up on a quantum computer.

----------


## nonsqtr

> As to the bolded, I can say almost nothing about the actual exemplars, but you know nothing of what we were doing or how.  I can say that the scientists looking for exemplars were one of the inventors of the Bayesian system, and his techs.  Let's just say we were almost never solving the same problem twice.
> 
> I willl say in a general asymmetric warfare sense that a predictable enemy is a dead enemy.


My point was, "objects" and "events" are not unitary, they consist of "features". You're talking about predictability. Estimation works against a library of "features". In other words, if you're after a prediction you will be looking at the "parts" of the sensory landscape. This is precisely what we're studying when we look at causality. In bio-terminology, the feature set are called "primitives", and primitives are organized into richly connected graphs. Each neuron is basically a trajectory in spacetime, if you consider the time series of its signals. There is not any "one" relationship that allows you to extract causality from a sensory milieu where there are millions of samples per second. The match is made piece-wise, primitive against primitive. When you get "enough" matches, you get a path through the graph.

----------


## nonsqtr

> This is why scaling up is the key. To reiterate what I said pages ago, you will never teach  a mouse to fly a jet, because a mouse brain is too small, it takes something the size of a human brain to handle it.  The problem with Quantum Ai will be the noise in the Qubits increases as the square of the number of bits, so scaling up to something  human like will be hard unless we can reduce the noise to a workable level, otherwise it floods out the eigenstate of the bits.  However what's going to be worrying is quantum memory. This is because of decoherence which destroys information and randomizes it. Till now the decoherence times are really short which means that you can't process the data for long. So right now you can't watch a civilization spring up on a quantum computer.


Well... this is where I depart from the classical view. Most quantum researchers frankly still have their heads stuck in classical thinking. The whole idea of "noise" is a non-starter. If you're talking about noise as a "problem", then you don't understand how neural networks work. There's really no such thing as noise, what there is is "randomness". And if you're looking at randomness as a nuisance, you're kinda missing the point.

Most of physics relies on "stable" probability distributions. But that is distinctly NOT the case in neuro-land. The manipulation of stochastic parameters leads to behaviors that are completely unknown in ordinary dynamics, and NECESSARY for proper operation of a brain.

----------


## nonsqtr

An example - "for example". Let's talk about a biological phenomenon that doesn't occur in artificial neural networks ("yet"). Let's talk about calcium regulation, in cells of all kinds, but especially in neurons, and specifically in tiny little compartments of the dendrite called "spines".

We're talking about stochastic behavior. We can "image" calcium using a variety of fluorescent dyes, and what we find is that it's tightly regulated. Calcium "generally" is involved with membrane fusion, and in ordinary cells like frog embryos it's sequestered in intracellular compartments, including the ER, and released as needed. (The dynamics are kinda interesting; you can look for instance here: https://www.sciencedirect.com/scienc...06349503748310)

However in neurons the role of calcium is much more specific. In addition to being involved on the secretory side (membrane fusion associated with vesicular release of neurotransmitter), it is also involved on the postsynaptic side, where it has a different and more interesting role.

Dendrites have "spines", like this:



Each spine has a LOCAL concentration of two types of calcium channels: voltage dependent channels, and neurotransmitter-driven channels.

Calcium is a positively charged ion, that is normally kept outside the cell just like sodium. When there is an influx of calcium, the membrane potential changes, just like when sodium comes in. Calcium can cause a membrane voltage "spike" just like sodium, it's a mini action potential. Since calcium is so tightly regulated, the behavior of dendritic spikes is considerably different from that of ordinary action potentials. The behavior of dendritic spines and their associated spikes is complicated, much more so than a simple Hodgkin-Huxley equation. Moreover, spine behavior depends on PAST spine behavior, it is highly nonlinear and there is memory associated with it.

Ca(2+) signaling in dendritic spines - PubMed

In addition to cells in the cerebral cortex and cerebellum, some of the best studied spiny cells are in the hippocampus. There, a particular type of voltage dependent calcium channel is attached to an NMDA receptor that responds to the neurotransmitter glutamate. The point being, this calcium channel complex responds to BOTH the presynaptic release of neurotransmitter AND the postsynaptic membrane potential. In this sense it acts like a "coincidence detector" and can fulfill the basic role of Hebbian learning. 

But it's more complicated than that. The receptor complex is blocked by magnesium ions, in a voltage dependent manner. A dendritic spike (postsynaptic depolarization) causes the magnesium to dissociate from the channel, favoring opening. This mechanism leads to "long term potentiation" which is another form of learning. 

Calcium is heavily sequestered in spines, it is regulated by phosphatases and phosphokinases.

Endoplasmic reticulum calcium stores in dendritic spines

It's even more complicated than THAT. Calcium regulates actin, which is responsible for both the stability of the cytoskeleton and the transport of molecules from one place to another (including the lack of transport, like locking membrane proteins into place via the cytoskeleton).

Calcium regulation of actin dynamics in dendritic spines - PubMed

So then, the observed variability in dendritic spiking has been traced back to the behavior of the calcium channels. 

Stochastic Calcium Mechanisms Cause Dendritic Calcium Spike Variability | Journal of Neuroscience

So what actually happens with the magnesium ions is they're controlling the "kinetics" of the calcium channel, which in turn regulates the shape and time course of dendritic spikes. The kinetics of ion channels are a lot faster than the normal intracellular dynamics of calcium regulation, they occur on time scales of milliseconds instead of minutes. But it is clear that if you alter the magnesium concentration you're actually changing the shape of the distribution driving the calcium channel.
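The coincidence-detection idea above can be caricatured as a toy Hebbian rule: the weight grows only when presynaptic release and postsynaptic depolarization co-occur (loosely, the Mg²⁺ block being relieved). All numbers here are illustrative, not physiological.

```python
# Toy sketch of an NMDA-style coincidence detector: the synapse
# strengthens only when presynaptic release and postsynaptic
# depolarisation coincide (Hebbian long-term potentiation).
# Weight and learning rate are made-up illustrative values.

def hebbian_step(weight, pre_release, post_depolarised, rate=0.1):
    """Increase the weight only on a pre/post coincidence."""
    if pre_release and post_depolarised:
        weight += rate * (1.0 - weight)   # saturating potentiation
    return weight

w = 0.2
events = [(True, False), (False, True), (True, True), (True, True)]
for pre, post in events:
    w = hebbian_step(w, pre, post)
print(round(w, 3))  # only the two coincident events potentiated the synapse
```

Lone presynaptic or postsynaptic events leave the weight untouched, which is the "coincidence detector" role the NMDA receptor complex plays in the account above.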

----------


## Physics Hunter

> My point was, "objects" and "events" are not unitary, they consist of "features". You're talking about predictability. Estimation works against a library of "features". In other words, if you're after a prediction you will be looking at the "parts" of the sensory landscape. This is precisely what we're studying when we look at causality. In bio-terminology, the feature set are called "primitives", and primitives are organized into richly connected graphs. Each neuron is basically a trajectory in spacetime, if you consider the time series of its signals. There is not any "one" relationship that allows you to extract causality from a sensory milieu where there are millions of samples per second. The match is made piece-wise, primitive against primitive. When you get "enough" matches, you get a path through the graph.


You are explaining at an 8th-grade level to a 30-year, multi-degreed professional.

If you don't knock it off, I am going to quit talking to you.

----------


## UKSmartypants

Microsoft and chip manufacturer Nvidia have created a vast artificial intelligence that can mimic human language more convincingly than ever before. But the cost and time involved in creating the neural network has called into question whether such AIs can continue to scale up.

The new neural network, known as the Megatron-Turing Natural Language Generation model (MT-NLG), has 530 billion parameters, more than tripling the scale of OpenAI's groundbreaking GPT-3 neural network, which was considered the state of the art until now. This progress required more than a month of supercomputer access and almost 4500 high-power, expensive graphics cards, which are commonly used to run high-end neural networks.

When OpenAI released GPT-3 last year it surprised researchers with its ability to generate fluent streams of text. It had used 175 billion parameters – allocated slots of data within a computer that replicate the synapses between neurons in the human brain – and consumed vast amounts of publicly accessible text from which to learn language patterns. Microsoft has since gone on to acquire an exclusive licence to use GPT-3.

Microsoft and Nvidia tested MT-NLG on a range of language tasks, such as predicting which word followed a section of text and extracting logical information from text, and found it had a greater ability than GPT-3 to complete sentences accurately and mimic common sense reasoning – but not by much, given the increase in scale. On one benchmark, where an AI is required to predict the last word of sentences, GPT-3 scored an accuracy of up to 86.4 per cent, while the new AI reached 87.2 per cent.

This improved ability doesn’t come cheap. “It costs effectively millions of dollars to train one of these models” as the computational resources needed to train it grow quickly as size increases, says Bryan Catanzaro at Nvidia.

MT-NLG was trained using Nvidia’s Selene supercomputer, which is made up of 560 powerful servers, each equipped with eight A100 80GB Tensor Core graphical processing units (GPUs). Each of those 4480 graphics cards – designed to run computer games, but also extremely capable at churning through vast amounts of data while training AIs – currently costs thousands of pounds when bought commercially. Although the entire might of the computer wasn’t used solely by this research team, it took over a month to train the AI.

Even running the neural network once it is trained requires 40 of those GPUs, and each query takes between 1 and 2 seconds to process. This constant stretching of scale means that AI research is now, to a certain extent, an engineering problem of efficiently splitting up the problem and distributing it over vast amounts of hardware.
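A back-of-envelope check (a sketch with assumed numbers, not figures from the article) shows why serving a model this size needs dozens of GPUs:

```python
# Rough memory arithmetic for serving a 530-billion-parameter model.
# Assumptions (not from the article): fp16 weights at 2 bytes each,
# and 80 GB of memory per A100 GPU, all of it usable for weights.

PARAMS = 530e9           # MT-NLG parameter count
BYTES_PER_PARAM = 2      # fp16
GPU_MEMORY_BYTES = 80e9  # A100 80GB

weights_bytes = PARAMS * BYTES_PER_PARAM       # ~1.06 TB for weights alone
min_gpus = weights_bytes / GPU_MEMORY_BYTES    # lower bound on GPU count

print(f"weights: {weights_bytes / 1e12:.2f} TB")
print(f"at least {min_gpus:.1f} GPUs just to hold the weights")
```

The gap between this ~14-GPU floor and the 40 GPUs quoted above would plausibly be taken up by activations, working buffers, and parallelism overhead.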

Catanzaro says that scale has been the dominant force in machine learning for decades. “It’s definitely true that better algorithms help, and it’s 100 per cent true that more data and better data absolutely helps, but I think that computing scale absolutely has been the driving force in a lot of progress,” he says.

Many researchers are reluctant to rely on scaling-up alone as they want a more elegant solution, says Catanzaro, but the results speak for themselves. Although the benchmark measurements reflect small improvements, there are thought to be significant steps up in the way the AIs reason and extract nuanced information, which perhaps isn't captured by ageing benchmarks.

“There’s always this resistance like, ‘it can’t be that easy, it can’t be that stupid that we just need to scale, because that isn’t very smart, it’s just brute force’. But the sort of bitter lesson is that scale has actually yielded the most benefits in the space,” he says.

Samuel Bowman at New York University says that current benchmarks for assessing quality of language processing AIs are nearing the end of their useful life and researchers are seeking new metrics that can be used to assess the quality of language and even reasoning, but that isn’t made simpler by the rapid rate of progress in AI. Those same researchers are also “nervously waiting to find out” if scale can continue to bring improvements or whether it will hit a ceiling, he says, as the cost of research in the field grows rapidly.

“These are definitely some of the most expensive projects in the field, but whether they’re too expensive depends on what you see their potential as,” he says. “If you see these as steps to a pretty broadly-useful form of AI, and you see that as desirable, then it’s easy to imagine justifying vastly larger budgets.”


Microsoft and Nvidia break records with neural network that mimics language | New Scientist

----------


## nonsqtr

> You are explaining at 8th grade level to a 30 year multidegreed professional.
> 
> If you don't knock it off, I am going to quit talking to you.


So what? I have credentials too. And I positively guarantee you've never done what I'm talking about. No one has. Not even anyone with a security clearance.

----------


## nonsqtr

> Microsoft and Nvidia break records with neural network that mimics language | New Scientist


These are big matrix processors, but they're missing real-time randomness, which is why they're slow.

If they could do this same thing with a quantum computer, and scale it up the same way, they'd really have something. It would learn ten times as fast with one tenth of the training frames, and have vastly superior capability.

----------


## nonsqtr

> You are explaining at 8th grade level to a 30 year multidegreed professional.
> 
> If you don't knock it off, I am going to quit talking to you.


And you can't get what you're talking about without doing what I'm talking about. So there.  :Tongue20:

----------


## nonsqtr

Here's the key to true artificial intelligence:

Information tunnels along with the photon.

Think about it.

Let's say we have a situation like a covalent bond, where an electron can be simultaneously close to two nuclei without ever being between them - because of tunneling. "Where" is the information in this case?

We can't say, for the same reason we can't localize the electron. The best we can do is show the probability of the "particle" being in a given place at a given time (in the case of atoms we call it an orbital).

A digital computer will never achieve true intelligence because it can't generate random numbers fast enough, the calculations take too long. Whereas in quantum-land "no calculations are needed" to achieve randomness.

So, those who say "information is stored in synapses" are only half right and mostly wrong. The information is in the molecules, in molecular configurations - which change in real time! Every time a calcium channel opens there is tunneling between the subunits, and if there is a large ion in the vicinity it may get involved too. Randomness occurs on at least four different levels in this scenario, all of them computationally significant: whether a Mg ion happens to be in the vicinity when the channel tries to open; whether the instantaneous membrane potential raises or lowers the probability of the channel opening; whether nearby activity influences a particular channel; and whether perturbations in the membrane affect the embedded channel. All these things operate on different time courses with different distributions, and at least two of the distributions are programmable. And this is in an environment where there is a direct real-time link between the probability density and the calculation.

Real time randomness is REQUIRED for real time intelligence. And we can't get that without a quantum interface. There is no digital computer in the entire universe that can generate a truly random number. Only a quantum phenomenon can scale that way.
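The narrowest part of this claim - that a digital computer on its own produces only pseudo-randomness - can be illustrated in a few lines. A minimal sketch: a seeded software generator is deterministic by construction, while `os.urandom` reads the operating system's entropy pool, which is fed by physical noise sources the OS collects:

```python
import os
import random

# A seeded software PRNG is an algorithm: given the same seed it emits
# exactly the same "random" sequence, so it is pseudo-random by construction.
a = random.Random(42)
b = random.Random(42)
seq_a = [a.random() for _ in range(5)]
seq_b = [b.random() for _ in range(5)]
print(seq_a == seq_b)  # True: fully deterministic

# os.urandom reads the OS entropy pool, which is seeded from physical
# noise sources (interrupt timing, hardware RNGs where available).
print(os.urandom(8) != os.urandom(8))  # True (with overwhelming probability)
```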

----------


## nonsqtr

> You are explaining at 8th grade level to a 30 year multidegreed professional.
> 
> If you don't knock it off, I am going to quit talking to you.


So answer my question. Why are there 80 synapses (on average) between neighboring neurons in the cerebral cortex?

What does that tell us?

----------


## nonsqtr

> So answer my question. Why are there 80 synapses (on average) between neighboring neurons in the cerebral cortex?
> 
> What does that tell us?


I'll give you a hint: astrocytes.

Astrocytes form a syncytium, neighboring astrocytes are linked by gap junctions.

Astrocytes "envelop" synapses, they completely surround the synaptic bouton.

Look at this picture here, this is an astrocyte:

[image of an astrocyte]

Guess what those blue things are.

DNA.

Ta-da.





You also want to look at something called "tunneling nanotubes".

Astrocytes are the most abundant cells in the brain. Astrocyte inhibition in hippocampal CA1 can completely stop recall of remote long term memories.

----------


## nonsqtr

> So answer my question. Why are there 80 synapses (on average) between neighboring neurons in the cerebral cortex?
> 
> What does that tell us?


Cat got your tongue?

I've been doing this longer than you have, and I understand way more about it than you do. War games are not the way to learn about neural networks. Do you remember the Hecht-Nielsen coprocessor for the VAX-11/780? I was alpha tester number two.

There are 80 synapses between the same neurons because each synapse is different. Duh.

What that tells us is that dendrites can be functionally reconfigured on the fly, and that is something that NO neural network model has ever addressed.

It's pretty stupid to expect a Boltzmann machine to reconfigure to an unfamiliar stimulus; I could have told you in advance it wouldn't work.

Do you know anything at all about the major systems in the brain? The resting network, the central executive? If I asked you which systems the anterior cingulate cortex participates in, would you know?

The anterior cingulate cortex is responsible for the targeting of attention to an unfamiliar stimulus. It has very peculiar types of neurons called von Economo cells that are completely different from pyramids, they're only found there and in the inferior orbit.

Trying to do what you said, WITHOUT understanding this system, is a complete waste of time (and should never have been funded). That much should have been clear when Fukushima built his Neocognitron.

Intelligence requires support circuitry, you can't just throw a bunch of neurons in a bag and watch them wire themselves. Try studying the development of the nervous system for a while, you'll gain a new respect for this stuff.

I mean, these dumb fuckers in DoD don't even understand the very basics. They want robot dogs with attack rifles rolling off an assembly line somewhere, and yeah, that can happen but the dog won't be intelligent. It might be deadly but it won't be very smart.

----------


## nonsqtr

Every neuron in our cerebral cortex operates in at least four different modes.

Only ONE of which is the ordinary Hebbian learning paradigm addressed in so-called "artificial neural networks".

If you'd like a clue about how the brain actually works, study Area 17 in the primary visual cortex, it's one of the best studied parts of the brain.

In early development the pyramids self-organize into "spatial frequency detectors" based on a complex log mapping of the incoming fibers from the thalamus. This occurs at about one year of age and is over by age 2.

In later development, the learning of visual features as they relate to objects and events occurs elsewhere (in the temporal lobe, mostly) - but the relevant features are extracted by the anterior cingulate cortex and programmed back into the visual system through a whole different pathway. The original feature extraction capability is not altered when this happens, however the visual system will now "recognize" the new configuration. You can observe this process with brain waves, and the volume conduction models clearly show the sequence of activity.

Intelligence requires controllable attention, and it also requires subspace amplification in the associative memory. There's already a whole science of how the brain searches for information: it's an active process and it must be TRAINED; in other words, people need to be taught what to look for, otherwise they won't "notice" it. Children learn to read by questioning the meaning of words, and they do that because they are "expected" to learn the language, because communication is a "good" thing and they get rewarded for learning it well. Children are taught, "if you don't know something, ask", and those who aren't taught that way don't do well in school. Similarly, our brains are taught to search for certain kinds of information, and we can "zoom in" on that information to the exclusion of other parts, so that it's presented in more detail. Try doing that with your average hologram: present a small portion of the image with greater resolution.

In our brains, particular processing areas like the visual system, don't get to see the whole memory store. They only get to see subspaces of it - whichever subspaces are requested by the attention process.

How this works, no one knows. They're working on it diligently though.

----------


## Physics Hunter

> Cat got your tongue?
> 
> I've been doing this longer than you have, and I understand way more about it than you do. War games are not the way to learn about neural networks. Do you remember the Hecht-Nielsen coprocessor for the Vax 11/780? I was alpha tester number two.
> 
> There are 80 synapses between the same neurons because each synapse is different. Duh.
> 
> What that tells us is that dendrites can be functionally reconfigured on the fly, and that is something that NO neural network model has ever addressed.
> 
> It's pretty stupid to expect a Boltzmann machine to reconfigure to unfamiliar stimulus, I could have told you in advance it wouldn't work.
> ...


Did you notice that I don't post here till late in the day?

I told you that I don't know much about wetware.  

I studied connectionist AI (NNs) and Symbolic AI.  I did not like/enjoy the former, and loved the latter, so I leaned my career in that direction.  I simply do not believe in the (artificial) neural approach, nor do I really believe in Anthropomorphic AI. Put exceptionally simply: we cannot make a sufficiently sensored anthropomorph, so we will never get sufficiently anthropomorphic AI.

I know enough about brain structure (circa the 90's) to know that we are not modelling a tenth of brain structure, either micro or macro, in these popular NN models.  Nice to see that you are in agreement.

Well, I was in DoD, and ran away from the overpromise-and-underdeliver NN crowd, because it was clear they were drilling a dry hole.

Flash forward 25 years...
I read through the TensorFlow runup work and was entirely unimpressed with their attention to any detail of the wetware, although the insight to run what they had on game chips is outstanding.  That makes it much faster, but no better.

----------


## fmw

> AI started as essentially a bunch of if/then statements which encoded what the programmers knew.   Then a method was found to automate the process through adaptive weighting (teaching).    All that's happened since is that the number of nodes has expanded.  That's never going to result in actual intelligence.


I've always preferred to identify AI as artificial ignorance.

----------


## Call_me_Ishmael

> I've always preferred to identify AI as artificial ignorance.


Nonsense.  There are AI systems out there that can respond to posts about many subjects and you would swear it's a human expert with multiple degrees in many scientific disciplines.

----------


## El Guapo

> Nonsense.  There are AI systems out there that can respond to posts about many subjects and you would swear it's a human expert with multiple degrees in many scientific disciplines.


White Rice is a _machine?_  :Thinking:

----------


## Junzhuo Gu

Not long ago, artificial intelligence lagged far behind the top human players at go; after just a few years, its go-playing ability far exceeded any human's. With the rapid development of artificial intelligence, its comprehensive ability will surpass human beings too, and soon. In some specialized fields, such as go, artificial intelligence has already surpassed human beings.

----------


## nonsqtr

> Did you notice that I don't post here till late in the day?
> 
> I told you that I don't know much about wetware.  
> 
> I studied connectionist AI (NNs) and Symbolic AI.  I did not like/enjoy the former, and loved the latter so I leaned my career that direction.  I simply do not believe in the (artificial) neural approach, nor do I really believe in Anthropomorphic AI, put exceptionally simply, we cannot make a sufficiently sensored anthropomorph, so we will never get sufficiently anthropomorphic AI.
> 
> I know enough about brain structure (circa 90's) that we are not modelling a 10th of brain structure, either micro or macro in these popular NN models.  Nice to see that you are in agreement.
> 
> Well, I was in DoD, and ran away from the overpromise and underdeliver NN crowd, because it was clear they were drilling a dry hole.
> ...


"Connectionist" AI is a misnomer. Connections have "almost nothing" to do with it. 

Let's talk broadly. A neuron in the cerebral cortex only activates a select few of its dendritic branches at a time. It is not a "summing node", it is a switching node. It works kind of like this:

Imagine an associative memory, where the synaptic weights at any given time are programmed by an external system. You don't "train" the thing, you simply load frames into it.

Imagine, that each neuron has 80x redundancy, and 80 sets of programmable connections, each of which can temporarily buffer a frame (a frame being a memory space, a configuration of synaptic weights).

So for instance, here I am reading, so the associative frames in my visual cortex consist only of letters and sequences of letters. However next I turn my attention to some people jogging in the park, and so a different subspace of the global store is loaded into one of the 80 available buffers and I end up paying more attention to the configuration of peoples' faces.
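That buffer-switching idea can be caricatured in a few lines of code. This is a toy sketch only - the class, the frame count, and the load/attend API are all invented for illustration, not any validated neural model:

```python
# Toy caricature of the "switching node" described above: one unit holding
# many independently programmable weight frames, with an external attention
# signal selecting which frame shapes the response. Every name and number
# here is invented for illustration.

class SwitchingUnit:
    def __init__(self, n_inputs, n_frames=80):
        # 80 weight buffers per unit, each a full set of "synaptic" weights
        self.frames = [[0.0] * n_inputs for _ in range(n_frames)]
        self.active = 0

    def load_frame(self, idx, weights):
        """Externally program one buffer -- loaded, not 'trained'."""
        self.frames[idx] = list(weights)

    def attend(self, idx):
        """Attention switches which buffer is live."""
        self.active = idx

    def respond(self, x):
        return sum(w * xi for w, xi in zip(self.frames[self.active], x))

unit = SwitchingUnit(n_inputs=4)
unit.load_frame(0, [1.0, 0.0, 0.0, 0.0])  # say, a "letters" context
unit.load_frame(1, [0.0, 0.0, 0.0, 1.0])  # say, a "faces" context

x = [2.0, 0.0, 0.0, 3.0]          # one fixed input pattern
unit.attend(0)
r0 = unit.respond(x)              # responds to the first component: 2.0
unit.attend(1)
r1 = unit.respond(x)              # same input, different frame: 3.0
print(r0, r1)
```

Same input, two different responses, purely because a different frame was loaded - which is the claimed distinction from a fixed-weight summing node.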

Much is known about the visual system. There is a condition called agnosia where patients can't recognize objects. There are two types: one called apperceptive, where the person can't actually structure the object, and another called associative, where the person can structure the object but can't assign meaning to it.

In the human brain the visual information is split into two major pathways. There is a dorsal stream called the "where" pathway that involves the parietal lobe, and a ventral stream called "what" that involves the temporal lobe. Generally agnosia does not involve the dorsal stream, only the ventral stream. Within the ventral stream there are many specific areas responsible for specific things. For example there is prosopagnosia (inability to recognize faces) which involves the fusiform gyrus, there is an inability to recognize silhouettes and stick figures that involves a secondary visual area called EBA, and there are specific locations in the parahippocampal area that track object composition back into the primary visual areas.

The reason 80 buffers are needed is because visual images are complex, a scene typically contains many interacting objects, and each object has a context and a significance.

So the system, if there are 3 objects in view, will load context for those 3 objects, to the exclusion of other objects that are not currently relevant. This is why, when "enough" of the predicted sensory input changes, the whole brain must reset in the form of a P300. It's not "just" the hippocampal object identification systems that need to reset in that case, it's all the buffers everywhere.

This is part of what the astrocytes do, they help reset the buffers.

And, the function of "attention" is completely separate from object identification. Attention happens in the frontal lobes, not in the object recognition system.

So like, if you want an ANN to do all this, you're not dealing with just "one" Boltzmann machine or ART network or reservoir CNN, you're dealing with dozens, working simultaneously. And they're all interconnected in a myriad of ways to accomplish specific dynamics. And there is an "executive" function that controls parts of the technicals (buffer loading on demand and etc).

Now finally, imagine that this whole buffer-loading scheme occurs IN ADDITION TO the normal hardwired function of the neurons in spatial frequency analysis and so on. In other words the neuron has several "modes" in which it can operate: a default mode, a programmable mode, and a switching mode.

This is an ANALOG computing system, it is not "symbolic AI" by any stretch of the imagination. Adding 2 plus 2 is HARD for human beings; we typically have to go through hundreds of training cycles to get it right.

----------


## UKSmartypants

> It turns out that the ability of artificial intelligence to play go lags far behind the top human players, and after just a few years, the ability of artificial intelligence to play go has far exceeded human beings. With the rapid development of artificial intelligence, its comprehensive ability will surpass human beings and will be realized soon. In some specialized fields, artificial intelligence has surpassed human beings, such as the above-mentioned go field.



AI will always beat humans at single-task problems. The issue is to make an AI that can use transferable skills to learn something entirely new, like humans do.

----------


## nonsqtr

> AI will always beat humans at single task problems. The issue is to make an Ai that can use transferable skills to learn something entirely new, like humans do.


Attention.

One of the best studied attentional systems is the frontal eye fields, which target objects of visual attention.

There are two completely different systems for conjugate eye movements (side to side), and vergence movements (focus in depth).

In contrast to the well known pathway from the retina to the primary visual cortex, the frontal eye fields respond to retinal input at the same latency as the primary visual cortex, but through a different pathway, from the retina through the superior colliculus. The SC is the avian part of our biological heritage, it's sometimes loosely called the optic tectum.

There is still considerable debate as to whether there is a single neural signal that determines the visual target. In vision, eye movements target objects of interest, and the receptive fields and feature selectivity of visual cortex neurons change when FEF is active. The vergence pathways in FEF are anatomically separate from the conjugate pathways, no one knows where (if anywhere) they come together.

What we do know is that the coherence behavior of neighboring neurons changes dramatically when a stimulus is being attended. A17 neurons that normally show alpha change to gamma (30 Hz) during attention.

Visual attention causes the acquisition of detailed visual information. The eyes move to various objects and the visual system acquires information about those objects, which is then used to assess the "scene". The "what" pathway on the ventral side feeds into the amygdala, which determines whether the object is a threat. The location of the object is fed to the parietal lobe (the dorsal "where" system), where it is converted to location relative to the organism ("egocentric" location), and the parietal lobe then feeds into Area 8 in the prefrontal cortex, which is the frontal eye fields. In the resting state the eyes move passively and slowly from one object to the next. During normal attention the eyes move more quickly, and during intense fear they don't move at all.

And finally, there is a difference between overt and covert attention. The eyes may be overtly focused on an object in the visual field, but if there is a threatening object serving as a distraction to the target, covert attention will be paid to the threat even though a different object is in focus.

----------


## nonsqtr

Well, here is a woefully misguided take on artificial intelligence: https://en.m.wikipedia.org/wiki/Attention_schema_theory

----------


## UKSmartypants

> Well, here is a woefully misguided take on artificial intelligence: https://en.m.wikipedia.org/wiki/Attention_schema_theory



yes, sometimes even on pure factual science it can be bollox.

----------


## old dog

Any supposed artificial intelligence is a computer program executed by a powerful computer.  Any computer program can, in the fullness of time, be executed by a human being with a pencil and paper (lots of pencils and lots of paper). Self awareness?

WHERE'S THE BEEF?

----------


## nonsqtr

> Any supposed artificial intelligence is a computer program executed by a powerful computer.  Any computer program can, in the fullness of time, be executed by a human being with a pencil and paper (lots of pencils and lots of paper). Self awareness?
> 
> WHERE'S THE BEEF?


Self awareness turns out to be pretty easy.

The causality system doesn't care where the information comes from, inside or outside.

Self-awareness is a topological phenomenon that maps two different reference frames in real time: an allocentric frame and an egocentric frame. The "self" is what keeps them in alignment.

But that has nothing to do with intelligence. You can be self aware without being smart.  :Grin:
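The two-frames idea reduces, geometrically, to an ordinary coordinate transform. A minimal sketch (the function name, conventions, and the 2-D setup are all invented for illustration): knowing your own position and heading is exactly what is needed to map an allocentric (world) point into egocentric (self-centered) coordinates:

```python
import math

# Geometric cartoon of the two frames: a world (allocentric) frame and a
# self-centered (egocentric) frame. "Keeping them aligned" amounts to
# knowing your own pose: position plus heading is enough to map any world
# point into self-centered coordinates (translation, then rotation).
# Convention (assumed here): egocentric +x = forward, +y = to the left.

def to_egocentric(point, self_pos, self_heading):
    dx = point[0] - self_pos[0]
    dy = point[1] - self_pos[1]
    c, s = math.cos(-self_heading), math.sin(-self_heading)
    return (c * dx - s * dy, s * dx + c * dy)

# Agent at (1, 1) facing along +y (heading pi/2); a landmark at (1, 3)
# lies dead ahead at distance 2.
forward, leftward = to_egocentric((1.0, 3.0), (1.0, 1.0), math.pi / 2)
print(round(forward, 6), round(leftward, 6))  # 2.0 0.0
```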

----------


## old dog

> The causality system doesn't care where the information comes from, inside or outside.
> Self-awareness is a topological phenomenon that maps two different reference frames in real time: an allocentric frame and an egocenteric frame. The "self" is what keeps them in alignment.


Hmm, I think I have to send that to Google Translate.

----------


## 12icer

> Self awareness turns out to be pretty easy.
> 
> The causality system doesn't care where the information comes from, inside or outside.
> 
> Self-awareness is a topological phenomenon that maps two different reference frames in real time: an allocentric frame and an egocenteric frame. The "self" is what keeps them in alignment.
> 
> But that has nothing to do with intelligence. You can be self aware without being smart.


Self awareness is already on the highways in a primitive form, with the ability of many new cars to protect their space, as in alarms, lane departure, brake application, and the ability to park with the driver outside the car and to drive through city streets and deliver pizzas, HEHEHEH.

Soon they will self-diagnose, reroute to alternate unused terminals of a different PROM, and print out a repair notice for the replacement of their faulty PROM, to be done by a robot.

----------


## Oceander

> Self awareness turns out to be pretty easy.
> *  *  *
> But that has nothing to do with intelligence. You can be self aware without being smart.


As libs/progs prove each and every day!

----------

nonsqtr (10-24-2021)

----------


## nonsqtr

> Hmm, I think I have to send that to Google Translate.


What's interesting is, our visual systems "look at" a visual memory exactly as if it were a real scene. In the vernacular, one of the symptoms is "modulation of location and shape of receptive fields by attention".

I mean, it seems self evident, that we can create any imaginable scenery in our minds. But this level of self-"control" seems even more astounding than rudimentary self-awareness. We build a model of ourselves just like we build a model of the world.

And, just like the difference between stereo vision from two eyes and an egocentric reference frame centered somewhere between and behind the eyes, the normal business of the brain (object and event identification and tracking, attention, etc.) takes place in a different "reference frame" from the "I" that's looking at it. This is why we trace through the visual system: there are many such transformations of reference frames and we'd like to understand how they work.

At the very least we know a bit about the alignment of reference frames during development, part of it occurs on the basis of synaptic learning but part of it is also genetically programmed (like the complex log mapping in the primary visual cortex).

----------


## Fall River

> Quantum computers can physically do anything and everything the brain can do.


Can a quantum computer fall in love with another quantum computer?    :Love9:  :Love9: 

Can it fall in love with its owner-operator?   :Love8: 

Can it experience self-love?    :Love4: 


Can it have a sense of humor?   :Icon Joker: 

Can it be patriotic?   :Flag: 
Can it experience frustration?    :Deadhorse: 

Can it be sad?   :Sad9:  
Can it appreciate a good joke?    :Smiley ROFLMAO: 

Can it commit suicide?      :Killme: 


Can it question why it exists?    :Thinking:

----------


## nonsqtr

> Can a quantum computer fall in love with another quantum computer?   
> 
> Can it fall in love with it's owner-operator?  
> 
> Can it experience self-love?   
> 
> 
> Can it have a sense of humor?  
> 
> ...


Yes to all of the above.

But... why?

If you want a human being, why not just get a human being?  :Grin:

----------


## Fall River

> Yes to all of the above.
> 
> But... why?
> 
> If you want a human being, why not just get a human being?


The point is that it might not be able to do what you want it to do.

Someone said it can write songs, so here's one example: will it be able to write a convincing love song without having experienced love and all the frustrations that can go along with it?  Can it write a love song that will reach number one on the charts?  Can it write about love that has gone bad?  How will there be any versatility and/or surprises in what it writes if it lacks experience?  

Will it be able to write jokes if it doesn't understand what it is that makes a joke funny?  It lacks experience. 

Why would it be patriotic?  Why should it care about such things?   Computers don't care whether they work in a free society or under communism.

How would you teach a computer frustration?  It would have to care about something and it doesn't care about anything.  

Why would it question its existence?  You would have to teach it to do that and you would program it to believe it exists to do the work that you give it to do.

If a computer could think for itself it might talk back to you and say: "I'm not doing that stupid job, do it yourself."   Would you be satisfied with that or would you begin to reprogram it?


Can a computer have second thoughts and feel regret?  For example, can it tell you the following?  "Last week I gave you an answer regarding black holes but after giving it more thought, I think that answer was incorrect. Please accept my apology."  No, that would be human so it would be designed to stick with the wrong answer.  :Geez: 


 :Smiley ROFLMAO:

----------


## Oceander

> The point is that it might not be able to do what you want it to do.
> 
> Someone said it can write songs so here's one example:  Will it be able to write a convincing love song without having experienced love and all the frustrations that can go along with it.  Can it write a love song that will reach number one on the charts?  Can it write about love that has gone bad?  How will there be any versatility and/or surprises in what it writes if it lacks experience?  
> 
> Will it be able to write jokes if it doesn't understand what it is that makes a joke funny?  It lacks experience. 
> 
> Why would it be patriotic?  Why should it care about such things?   Computers don't care whether they work in a free society or under communism.
> 
> How would you teach a computer frustration?  It would have to care about something and it doesn't care about anything.  
> ...


Writing chart-topping pop songs is, actually, so formulaic that it would probably be quite amenable to AI, even unintelligent AI.

----------


## Oceander

> What's interesting is, our visual systems 'look at" a visual memory exactly as if it were a real scene. In the vernacular one of the symptoms is "modulation of location and shape of receptive fields by attention".
> 
> I mean, it seems self evident, that we can create any imaginable scenery in our minds. But this level of self-"control" seems even more astounding than rudimentary self-awareness. We build a model of ourselves just like we build a model of the world.
> 
> And, just like the difference between stereo vision from two eyes and an egocenteric reference frame centered somewhere between and behind the eyes, the normal business of the brain (object and event identification and tracking, attention, etc) takes place in a different "reference frame" from the "I" that's looking at it. This is why we trace through the visual system, because there are many such transformations of reference frames and we'd like to understand how they work.
> 
> At the very least we know a bit about the alignment of reference frames during development, part of it occurs on the basis of synaptic learning but part of it is also genetically programmed (like the complex log mapping in the primary visual cortex).


Interesting; suggesting, possibly, a "split" persona.  One who does, in the first instance, and the other who watches what the first is doing.

----------


## UKSmartypants

> Any supposed artificial intelligence is a computer program executed by a powerful computer.  Any computer program can, in the fullness of time, be executed by a human being with a pencil and paper (lots of pencils and lots of paper). Self awareness?
> 
> WHERE'S THE BEEF?



No, that's not true. NP-complete problems are ones that take computationally infeasible amounts of time to solve even on a supercomputer, and it isn't even known that quantum computers can solve them efficiently. The 'P versus NP' problem, which asks whether such problems have efficient solutions, is one of the seven Millennium Prize Problems and sits at the heart of the field of computational complexity.

Ramsey theory is one of my pet hobbies. It's a branch of mathematics to do with structures in sets of points, and it's interesting because it's full of problems that appear simple but turn out to be infeasible: when you dig into them they are NP-complete, meaning (as far as anyone knows) they cannot be solved in polynomial time, so brute-force search would take billions of years with current technology. One consequence of Ramsey theory was the discovery of Graham's number, which arose as an upper bound in a Ramsey-type problem and was once the largest number to appear in a published mathematical proof.
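To make the brute-force blowup concrete, here is a minimal sketch (hypothetical helper name, not from the post) of subset-sum, a classic NP-complete problem: the only known general approach examines on the order of 2^n subsets, so each extra input element roughly doubles the work.

```python
from itertools import combinations

def subset_sum(nums, target):
    """Brute-force search: try every subset (2**len(nums) of them)
    until one sums to the target, or give up after exhausting all."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None

# 20 numbers -> ~1 million subsets; 60 numbers -> ~10**18 subsets.
print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # (4, 5)
```

Checking a proposed answer is instant (just add the numbers up), but finding one this way is exponential; that gap between checking and finding is exactly what the P versus NP question is about.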


So no, it's not feasible to compute some parts of Ramsey theory by hand. Alan Turing ran into the same wall of scale at Bletchley Park: hand methods couldn't keep up with the Enigma traffic, which is why the electromechanical Bombes had to be built.



"Any computer program can, in the fullness of time, be executed by a human being with a pencil and paper" fails because humans aren't immortal.


The Collatz Conjecture
Pick any number. If it's even, divide it by 2; if it's odd, multiply it by 3 and add 1. Now repeat the process with your new number. If you keep going, you'll eventually end up at 1, every time. Mathematicians have tried enormous ranges of numbers and never found a single one that didn't end up at 1 eventually. The thing is, they've never been able to prove there isn't a special number out there that never reaches 1. It's possible some really big number goes off to infinity instead, or gets stuck in a loop and never reaches 1, but no one has ever been able to rule that out, and there's no known shortcut. The only known way is to grind through the numbers by brute force (strictly speaking it isn't NP-complete; it's an open problem, and nobody even knows whether it's decidable at all).
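The iteration described above fits in a few lines of Python (the function name is illustrative); of course, running it only checks individual numbers one at a time, it proves nothing about all of them, which is exactly the problem.

```python
def collatz_steps(n: int) -> int:
    """Count iterations of the Collatz map (n/2 if even, 3n+1 if odd)
    until n reaches 1. Loops forever if a counterexample exists."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# 27 is a famously long trajectory for such a small starting value.
print(collatz_steps(27))  # 111
print(collatz_steps(6))   # 8
```

Note the honest caveat in the docstring: if the conjecture were false, this function would simply never return for the offending number, and brute force could never tell you that in advance.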




P-vs-NP-problem

 
Graham Science Wiki

----------


## UKSmartypants

> Self awareness turns out to be pretty easy.


So why isn't half the animal kingdom self-aware? If it's that easy, why does the list of animals we think are self-aware go

a small number of chimps, some rhesus monkeys
one single gorilla called Koko, bonobos, macaques, 
whales, one single orca
elephants
dolphins


and thats it?

----------


## Oceander

> So why isnt half the animal kingdom self aware. If its that easy, why does the list of animals that we think are self aware go
> 
> chimps
> gorillas
> whales
> elephants
> dolphins
> 
> 
> and thats it?


Where did you get that list from?  It seems rather incomplete, even just from my limited experience.

----------


## UKSmartypants

> Where did you get that list from?  It seems rather incomplete, even just from my limited experience.



'Cos I read a lot. I don't know of a definitive list, but those are the only ones that have popped up in science journals in the last 20 years.

If you have any others I'd be interested to see if they have been tested with a mirror.


I revised the list as you posted

a small number of chimps, some rhesus monkeys
one single gorilla called Koko, bonobos, macaques, 
whales, one single orca
elephants
dolphins


EDIT: Apparently European magpies are self-aware, but I haven't seen the evidence. It was once thought that self-awareness arises from the neocortex of the brain - but magpies don't have one. Frans de Waal points out that magpies do, nonetheless, have large brains with lots of connectivity.


So I stand by my original statement that scaling AI up will produce machine awareness.

----------


## Oceander

> cos I read a lot. I dont know of a definitive list, but they are the only ones that have popped up in science journals in the last 20 years.
> 
> If you have any others I'd be interested to see if they have been tested with a mirror.
> 
> 
> I revised the list as you posted
> 
> a small number of chimps, some rhesus monkeys
> one single gorilla called Koko, bonobos, macaques, 
> ...


Without more, I don't think merely scaling up what counts as AI today will produce machine self-awareness.  If it isn't "easy" and if so few organisms have it, and yet the world is full of organisms, then it must be something that requires something other than mere scale.

Also, using self-body-recognition in a mirror as the sine qua non for testing self-awareness is (a) rather species-centric, and (pardon the pun), (b) myopic.

For example, species that use the electric field to detect their environment might be better suited to being tested in some manner other than by human-visible light being reflected off of a solid surface.

Dogs are another example; their eyesight is, more or less, an afterthought as compared to their sense of smell; testing them for self-awareness might be better using the olfactory sense rather than sight.

----------

nonsqtr (10-25-2021)

----------


## Oceander

Dogs have demonstrated at least some degree of self-awareness:  https://www.pennlive.com/life/2021/0...udy-shows.html

Cats don't seem to have been studied in the way that dogs were studied in the link above.  My guess is that they will show some degree of self-awareness as well.

At least one small fish, the cleaner wrasse, appears to demonstrate classic in-a-mirror self-awareness:  https://www.quantamagazine.org/a-sel...test-20181212/


Some thoughts on self-awareness, theory of mind, and some limitations and parochialities of the classic mirror test:  https://medium.com/creatures/self-aw...als-fcccc3649b

----------


## nonsqtr

> So why isnt half the animal kingdom self aware.


It is!




> If its that easy, why does the list of animals that we think are self aware go
> 
> a small number of chimps, some rhesus monkeys
> one single gorilla called Koko, bonobos, macaques, 
> whales, one single orca
> elephants
> dolphins
> 
> 
> and thats it?


Are you kidding?

Dogs lick their own balls, you're telling me that's not self-aware? lol

MICE are self-aware. Any creature that can be trained to report its subjective experiences is self-aware.

A goldfish is self aware, it can be taught to report whether it's experiencing pleasure or pain.

Self-awareness is a matter of degree. We're not all that, just mostly more of it.

----------


## nonsqtr

> Without more, I don't think merely scaling up what counts as AI today will produce machine self-awareness.  If it isn't "easy" and if so few organisms have it, and yet the world is full of organisms, then it must be something that requires something other than mere scale.
> 
> Also, using self-body-recognition in a mirror as the sine qua non for testing self-awareness is (a) rather species-centric, and (pardon the pun), (b) myopic.
> 
> For example, species that use the electric field to detect their environment might be better suited to being tested in some manner other than by human-visible light being reflected off of a solid surface.
> 
> Dogs are another example; their eyesight is, more or less, an afterthought as compared to their sense of smell; testing them for self-awareness might be better using the olfactory sense rather than sight.


It requires more than scaling up.

It requires a specific brain architecture (which is present in all vertebrates, give or take - basically once you get above a goldfish all brains are the same), and it also requires "support circuitry" to do some of the sophisticated things humans can do.

I have a theory about how and why the brain architecture is the way it is, but I don't know enough (yet) to elaborate it at the quantum level. However it seems intuitive from 6 or 7 other perspectives...

The support circuitry is a little weird, the brain does some oddball stuff with memory encoding which we don't understand ("at all", to be honest). Parts of the circuitry are intuitive, for instance we know a lot about the hippocampus and it makes sense that it is where it is, doing what it's doing. But move forward one stage (there is a direct projection from CA1 to anterior cingulate cortex) and we lose track. The "pre-frontal" systems are definitely involved in the control of attention, but the encoding is driven top-down and we really don't know how things get from object tracking into the global store.

What we do know is the global store may not be so global. It looks more like lots of little stores, and so the brain has to "ask" those portions of the circuitry for information. An "ask" is usually attached to either high value information or novel information.

----------


## old dog

> Self awareness turns out to be pretty easy.
> 
> The causality system doesn't care where the information comes from, inside or outside.
> 
> Self-awareness is a topological phenomenon that maps two different reference frames in real time: an allocentric frame and an egocenteric frame. The "self" is what keeps them in alignment.
> 
> But that has nothing to do with intelligence. You can be self aware without being smart.


I stated my reservation on the possibility of artificial self-awareness in PLAIN ENGLISH.  Are you able to state your position in PLAIN ENGLISH? I suspect that we may have differences in the meaning of "self aware".  I'll go first:

"Self Awareness" is one aspect of existence which I know I have, with certainty, but which I will never know that you have, with certainty.

----------


## UKSmartypants

At its most basic, self-awareness is the ability to recognise oneself in a mirror rather than thinking it's another member of your species.


There are various ways to test this. E.g. with baboons you put a spot of bright paint on the fur where it can't be seen directly. If the animal is self-aware, it will touch or brush the paint spot when it sees its reflection in the mirror. The control is a spot of paint exactly the same colour as the fur, which can't be seen in the mirror or directly; that eliminates the possibility that the animal can feel the spot or detect it some other way, and eliminates altruistic touching of what it thinks is another animal. Only if it realises the image in the mirror is itself - and thus that the spot of bright paint is on itself, in a place it can't see - and touches it as a result, can you claim self-awareness in this instance.


There's a comprehensive discussion here, starting on page 11:

https://philosophy.columbian.gwu.edu...essanimals.pdf

----------

old dog (11-27-2021)

----------


## old dog

Your cited article lists three levels of self awareness:  bodily self-awareness, Social self-awareness and Introspective awareness  ... "awareness of (some of ) one’s own mental states such as feelings, desires, and beliefs".  There is some question if this is limited to language-users.  Whether it is or not, this is the closest to my conception of what it is.  A Turing Test will never be able to determine it.  Existence of Artificial Introspective Awareness is an article of faith for materialists.  I guess we'll leave it there.  The mind either is or isn't identical with the brain.  There are evidentiary hints that it is the latter case, but that's a whole 'nother topic.

----------


## UKSmartypants

> Your cited article lists three levels of self awareness:  bodily self-awareness, Social self-awareness and Introspective awareness  ... "awareness of (some of ) one’s own mental states such as feelings, desires, and beliefs".  There is some question if this is limited to language-users.  Whether it is or not, this is the closest to my conception of what it is.  A Turing Test will never be able to determine it.  Existence of Artificial Introspective Awareness is an article of faith for materialists.  I guess we'll leave it there.  The mind either is or isn't identical with the brain.  There are evidentiary hints that it is the latter case, but that's a whole 'nother topic.



I'll stick with the mirror test, because it's the most obvious and believable, the easiest to prove, and makes the most sense.

----------


## old dog

> Ill stick with the mirror test, because its the most obvious and believable, and easiest to prove, and makes the most sense.


I can see where bodily self-awareness could be automated.  I originally thought the discussion was about introspective self-awareness, the Philosopher's Stone of the transhumanist religion.

----------


## nonsqtr

> I stated my reservation on the possibility of artificial self-awareness in PLAIN ENGLISH.  Are you able to state your position in PLAIN ENGLISH? I suspect that we may have differences in the meaning of "self aware".  I'll go first:
> 
> "Self Awareness" is one aspect of existence which I know I have, with certainty, but which I will never know that you have, with certainty.


It's very simple. Self awareness is based in the mirror system of the brain. ("see other thread").

The three types you mentioned are not the only ones. What they all have in common, is the ability to imitate external processes, and distinguish them from internal ones.

What you call "introspection" is merely a self-similar instantiation of the identical capability. You can imagine someone looking at your own actions, and then imagine the other someone was you.

----------


## nonsqtr

> Your cited article lists three levels of self awareness:  bodily self-awareness, Social self-awareness and Introspective awareness  ... "awareness of (some of ) one’s own mental states such as feelings, desires, and beliefs".  There is some question if this is limited to language-users.  Whether it is or not, this is the closest to my conception of what it is.  A Turing Test will never be able to determine it.  Existence of Artificial Introspective Awareness is an article of faith for materialists.  I guess we'll leave it there.  The mind either is or isn't identical with the brain.  There are evidentiary hints that it is the latter case, but that's a whole 'nother topic.


The brain is a structure, the mind is a process.

Brain is the substrate for mind. There is no mind without brain.

----------

