This is the single most important topic in Computing today, and look here -

https://www.tandfonline.com/doi/abs/...9.2018.1445740

The Chinese! Chinese scientists, funded by the Chinese National Science Foundation.

Looks like they pulled a couple of Bernoulli distributions out of thin air, but still... they're thinking about it.

And meanwhile, Google is out there patting themselves on the back and trying to get publicity for a mere 53 qubits. When the chip finally gets here with 64,000 qubits, they're going to have no idea how to program it, because they're too busy patting themselves on the back.

I've been studying this from the perspective of neural networks, but obviously it pertains directly to Quantum computing.

"Incomplete information" is what you get in that little area around t=0 I was telling you about.

"Stochastic transmission times" is what you get in a neural network, or equivalently in a quantum computer, when you don't really know how fast things are happening, or which path exactly is being taken by the information.

The current quantum computing paradigm is a variation of the Hopfield neural network. It basically works like this: the system is excited into a higher energy state; initial conditions carrying the information about the problem being solved are imposed; a computational period follows in which the global energy of the system is reduced; and finally the "answer", stored in the configuration of the elements in the lower-energy state, is read out.
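To make that energy-descent picture concrete, here is a minimal classical Hopfield sketch standing in for the whole cycle. The stored pattern, the Hebbian weights, and the corrupted input are all illustrative assumptions, not the quantum hardware's actual mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stored pattern -- stands in for the "answer" configuration.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
n = pattern.size

# Hebbian weights: the stored pattern sits at a minimum of the energy landscape.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

def energy(state):
    # Hopfield energy: E = -1/2 * s^T W s
    return -0.5 * state @ W @ state

# "Impose initial conditions": a corrupted copy of the pattern, i.e. the problem input.
state = pattern.copy()
flip = rng.choice(n, size=3, replace=False)
state[flip] *= -1
print("initial energy:", energy(state))

# "Computational period": asynchronous updates that only ever lower the global energy.
for _ in range(10 * n):
    i = rng.integers(n)
    state[i] = 1 if W[i] @ state >= 0 else -1

# "Readout": the low-energy configuration is the answer.
print("final energy:  ", energy(state))
print("recovered pattern:", state, "matches:", np.array_equal(state, pattern))
```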

This is a "problem solving" paradigm, and it is the exact opposite of a real-time artificial intelligence.

In a real-time system, you do not have external control over the global energy level, and you cannot start and stop the calculation at a convenient moment. That has to happen internally, by design of the system.