So much of this reasoning can be summed up as:
"Current limitations mean AI will never be usable for expert-level tasks"
But then how do we explain AI out-diagnosing doctors? How do we explain the creativity in AI-generated images and music, or the ability to drive? How do we explain the silent fall of the Turing Test? If we're already seeing these changes, what will we see in another 10 years?
It's a moving target, because the tech keeps getting better and the applications keep expanding. We never want to put ourselves in the position of saying, for example, "the GM EV1 had such poor performance that it proved electric vehicles will never happen", or "digital music files have such poor sound quality and massive storage requirements they are unlikely to displace CDs anytime soon". Read The Innovator's Dilemma by Clayton Christensen before measuring the future off of the present.
Consciousness is necessary, but not sufficient, for cognition.
If current technology is not capable of creating consciousness, it won't ever produce cognition.
Thus the question becomes whether the limitations of current technology are such that they preclude consciousness.
There are reasons to believe that the current technology of digital computers is incapable of creating consciousness, but let's first look at where the idea that digital computers can develop consciousness comes from.
Basically, the metaphor of the brain as a computer leads to the reverse metaphor of the computer as a brain.
There is no question that both brains and digital computers perform computations, but digital computers operate within a restricted number space (floating-point values), whereas brains do not deal with numbers at all but with continuous physical variables (extended real numbers). A continuous variable can take infinitely many values, while every variable in digital computing is restricted to a finite set of discrete values (all continuous variables are expressed as discrete ones).
Thus, digital computers are deterministic machines with a finite number of states, whereas brains can assume an infinite number of states. Brains are therefore more like analog (or hybrid) computers than digital ones. Analog computers work by manipulating physical variables as analogues of real-world processes.
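The discreteness of the digital number space is easy to demonstrate. A minimal Python sketch (the specific values shown are just illustrations of 64-bit floating point, not claims about any particular machine):

```python
import math

# A 64-bit float is one of at most 2**64 bit patterns, so the set of
# representable values is finite. Between any two adjacent floats there
# is a gap that no digital value can occupy.
x = 1.0
next_x = math.nextafter(x, 2.0)  # smallest representable float above 1.0
gap = next_x - x

print(gap)               # ~2.2e-16: the spacing between floats near 1.0
print(0.1 + 0.2 == 0.3)  # False: 0.1, 0.2, 0.3 are each rounded to nearby floats
```

However fine, the grid of representable values stays finite; a continuous variable has no such grid at all.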
The brain also deals with physical (electrochemical) variables, but the comparison with analog computers breaks down somewhat, since beyond sense perception these variables are mostly not analogues of real-world processes.
The architectures of a brain and a digital computer are, of course, totally different, and the concepts of hardware and software (calling hardware "wetware" does not change a thing) do not apply to brains. Insofar as one can call it programming, analog computers, like brains, are programmed by altering their physical characteristics, and brains do this by rewiring on the fly. In addition, memory in analog computers is not a discrete structure but part of the active process.
Finally, the fastest supercomputers and the human brain are both estimated to perform up to about 1 exaflop/s, but the brain does this on only 12-20 watts, no matter the complexity of the task performed consciously.
Supercomputers, by contrast, use energy proportional to the number of operations performed and the memory used.
There is more but you get the idea.
Digital computers are so fundamentally different from brains that the widespread confusion between them is a real head-scratcher and requires explanation.
I think it might come from reasoning by analogy (pun intended) plus two misconceptions: (1) mind-body dualism, and (2) the idea of software that is agnostic to the physical hardware it runs on.
But, as we have seen, the dualistic distinction between hard- and software is already meaningless when talking about simple analog computers (slide rules etc.).
Digital computers deal with simulations of reality (ironically, digital computers are themselves simulations, created by analog machines), whereas brains are more like plugged into reality, with no clear border between external and internal reality (it may be more appropriate to see them as extensions of each other).
AI with consciousness will never be created with digital computers as we know them, but the first steps away from this dead end are being taken. Deep South is up and running.
Neuromorphic Computing
A primary objective in artificial intelligence research is to create computers capable of learning and reasoning in ways similar to human cognition. While there are various approaches to achieving this, one widely held view in the engineering community is that the most effective approach involves creating computer models that replicate the human brain's architecture.
Neuromorphic computing is a process that mimics the human brain's structure and functionality, using artificial neurons and synapses to process information.
Using artificial neurons and synapses, neuromorphic models simulate how our brain processes information, allowing them to solve problems, recognize patterns, and make decisions more quickly and efficiently than conventional computing models. A neuron, also called a node, is a basic computational unit that processes inputs and produces an output using a weighted sum and an activation function. A synapse is a connection between two neurons.
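The "weighted sum plus activation function" definition of a neuron can be sketched in a few lines of Python (the weights, bias, and sigmoid activation here are illustrative choices, not taken from any particular neuromorphic system):

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of the inputs plus a
    bias, passed through a sigmoid activation function."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes output into (0, 1)

# Two inputs feeding one neuron; values are purely illustrative.
out = neuron(inputs=[0.5, -1.0], weights=[0.8, 0.2], bias=0.1)
print(out)  # ~0.574, a value strictly between 0 and 1
```

A synapse in this picture is just one of the weights: the strength of the connection from one neuron's output to another's input.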
Unlike the von Neumann model, where processing units and memory are separate, the neuromorphic computing model integrates memory and processing units in the neurons and synapses. Neuromorphic algorithms are defined by the structure of the neural network and its parameters rather than by direct instructions, as in von Neumann architecture.
Another significant distinction is how input data is processed.
Instead of encoding information as numerical values in binary format, neuromorphic computing uses spikes as inputs, where the timing, magnitude, and shape of these spikes encode numerical information.

https://deepsouthai.gitbook.io/whitepaper/neuromorphic-computing

DeepSouth Makes Advanced AI Supercomputer that Simulates a Human Brain!
https://www.youtube.com/watch?v=Dr76kOfN64M
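The spike-based encoding described above is often illustrated with a leaky integrate-and-fire (LIF) neuron, the basic unit of many spiking neural networks. A minimal sketch, with illustrative threshold and leak parameters not tied to DeepSouth or any specific hardware:

```python
def lif_simulate(currents, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential leaks
    toward zero each step while accumulating input current, and emits a
    spike (then resets) when it crosses the threshold. Information is
    carried by *when* spikes occur, not by a binary-encoded number."""
    v = 0.0
    spike_times = []
    for t, i_in in enumerate(currents):
        v = leak * v + i_in          # leaky integration of input current
        if v >= threshold:
            spike_times.append(t)    # emit a spike
            v = 0.0                  # reset potential after spiking
    return spike_times

# A stronger constant input drives earlier, denser spikes:
print(lif_simulate([0.3] * 10))  # weak drive  -> spikes at steps [3, 7]
print(lif_simulate([0.6] * 10))  # strong drive -> spikes at steps [1, 3, 5, 7, 9]
```

Here the input intensity is read off from the spike timing and rate, which is the sense in which timing, rather than a stored binary value, encodes the number.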