Disclaimer: I’m a PR professional who enjoys thinking, reading, and learning about topics far beyond my academic background.
How close are we to an actual AI singularity?
Is sentient artificial intelligence plausible in the foreseeable future? It’s a question that has been haunting me for quite some time.
I want to understand.
So, what do I think? I believe that the singularity might be within reach, but only if we understand our consciousness first.
However, I’d also suggest that human consciousness is an illusion.
Let me explain:
ANI vs AGI
When I’m thinking about artificial intelligence, I’m not thinking about AI in a general way — my smartphone is “smart” in many ways, but I wouldn’t regard it as sentient. For narrow “smart” applications, ANI (artificial narrow intelligence), it seems efficient to build specialised computer systems to perform specific tasks.
In short: ANI is already in play.
But suppose we mean to explore the possibility of an actual singularity: AGI (artificial general intelligence), where a non-biological system becomes sentient. Here, many experts seem to suggest that we’re getting close. Maybe dangerously close.
Is this because we’re able to build more complex computational systems? Will we eventually create a complex computer that “comes to life?”
Processing
Without being sentient, ANI systems can easily outperform human brains for single tasks. This seems to suggest something about complexity.
One day, we might be able to construct an AGI with so much processing power that it will start to think for itself and become, if not conscious, at least self-aware — whatever that difference may be.
However, the physicist and Nobel laureate Sir Roger Penrose has pointed out that consciousness might not be a result of complexity alone. If it were, even a number would become sentient, if only it were large enough; in that sense, every sufficiently large number would be conscious.
Is the universe sentient since it contains everything? It could be, of course, but our human brains become conscious way below that level of complexity, so it’s reasonable to question the idea of a complexity threshold.
It’s been suggested that consciousness might be a side-effect of processing information due to a quantum mechanical property in our brains. If this is true, our best bet at producing AGI might be to construct processing systems that are quantum mechanical.
Given that we have now achieved quantum supremacy (albeit not yet with sufficient error correction), and that scientists and engineers are exploring both neural networks and biological networks, I have to wonder: are we getting close to creating an actual singularity?
If I were to guess how information processing relates to our consciousness, I’d bet that both significant thresholds and various quantum effects are involved. Still, I suspect these are necessary prerequisites rather than causes of consciousness.
When it comes to processing information, I’m now at a point where I’ve started to believe that consciousness is an illusion. “Being conscious” is “believing oneself to be conscious — because that’s how it feels.”
If this is true, we could be getting relatively close to a possible singularity since we don’t have to recreate an elusive state of consciousness within a machine but rather make machines feel as if they are conscious.
Memory
Next, let’s look at a rudimentary cognitive capability — storing information.
A computer stores input at specific locations determined by its architecture. But a brain doesn’t seem to store data the way computers do; we seem to store experiential memories.
To some extent, experiential memories seem to rewire more than just a single brain pathway, at least partly via neuroplasticity. Over time, the memory appears to sink deeper (or dissolve) as it integrates and becomes a part of the brain.
From a biological perspective, a specific brain seems to be the physical sum of all experiences ever had by every ancestor — and then more directly altered through the individual’s life experiences.
Biological brains don’t seem to retrieve raw input the way a computer does; we seem to retrieve experiential memories, which at best bear some resemblance to the raw data they were once based on.
Brain-based memories seem to reside in a Darwinian ecosystem of their own; memories that are physiologically deemed necessary or helpful, or that are continuously retrieved, are reinforced, while the rest fade.
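If I were to sketch that idea in code (purely my own illustration, not a model from neuroscience; every name and number here is a made-up assumption), it might look something like this:

```python
# Toy "Darwinian" memory ecosystem: retrieval reinforces a trace,
# everything else slowly decays, and weak traces get pruned away.
memories = {"first day at school": 0.9, "yesterday's lunch": 0.3}

def tick(retrieved: set, decay: float = 0.95, boost: float = 0.2) -> None:
    for trace in list(memories):
        memories[trace] *= decay  # every memory fades a little...
        if trace in retrieved:
            # ...unless it is retrieved, which reinforces it
            memories[trace] = min(1.0, memories[trace] + boost)
        if memories[trace] < 0.05:
            del memories[trace]  # too weak: forgotten entirely

for _ in range(40):
    tick(retrieved={"first day at school"})

print(memories)  # the retrieved memory survives; the unused one is gone
```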
Brains absorb sensory information selectively, and recollection is a holistic process. Computer systems, on the other hand, write data that we can retrieve precisely. This difference has immense implications for an AGI.
A human brain doesn’t store input; it holds conceptualisations that integrate on a circuitry level with former experiences. Could a computer ever contemplate its existence based on stored raw data alone?
The philosophical conclusion suggests that a sentient AI must interpret and understand what it senses and thus hold understanding — not data.
Cognition
Our brains use cognition to create memories (i.e. data that has been selected and contextually understood through interpretation). We can draw input from our senses and transform those inputs into experiences that we can remember.
A computer can utilise sensors, cameras, and microphones to mimic our senses, and these can easily surpass our sense organs in terms of detail and accuracy. However, the human brain still excels when it comes to experience through conscious cognition.
Our cognition seems to be fuelled by our evolutionary needs. This is often seen as a human weakness, but our biological need system is crucial to our cognitive process in creating experiences.
Our need system is a sliding scale; as we get hungrier and hungrier, our conscious experience grows stronger and stronger. The gradation between peckish and starving is crucial for our need system to inform our cognitive processes successfully.
Computers need energy, too, but they can’t consciously experience hunger.
This is why we can’t simply program a computer to seek more battery power when it senses that it’s running low on energy; any “smart” vacuum cleaner can already be taught to do that.
A sentient AI must seek to recharge because it understands its need system. It must be hardwired to recharge because it wants to survive, even when programmed otherwise.
It sounds scary, but a sentient AI would require a hardwired (thus “free”) need system.
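To make the distinction concrete, here is a minimal Python sketch (entirely hypothetical names and numbers, my illustration rather than anyone’s actual design) contrasting a programmed recharge rule with a sliding-scale need that competes with other drives:

```python
def ani_rule(battery_level: float) -> str:
    # The vacuum-cleaner approach: a fixed, programmed directive.
    return "recharge" if battery_level < 0.2 else "keep cleaning"

def graded_need(battery_level: float) -> float:
    # A sliding scale: urgency grows smoothly from "peckish" to "starving".
    return (1.0 - battery_level) ** 2  # 0.0 = sated, 1.0 = desperate

def choose_action(battery_level: float, curiosity: float) -> str:
    # Whichever need is strongest right now wins the system's attention.
    needs = {"recharge": graded_need(battery_level), "explore": curiosity}
    return max(needs, key=needs.get)

print(ani_rule(0.15))                                  # -> recharge
print(choose_action(battery_level=0.9, curiosity=0.5))  # -> explore
print(choose_action(battery_level=0.1, curiosity=0.5))  # -> recharge
```

Even the second version is, of course, still just programming; the point is that a sentient AI’s need signal would have to arise from its own hardwiring rather than from an explicit rule like either of these.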
A simple hard drive is sufficient for storing raw data, but a singularity AI would need a more complex and autonomous architecture to store its “memories” (conceptualised understandings intertwined holistically with all other drivers) the way a human brain does. Each new memory must become integral to the infrastructure’s understanding based on its ranking in the need system.
It must absorb each new experienced understanding into one single multi-layered “super memory” that is constantly revised, restructured, and rewritten based on a non-directed need system, a sort of neural structure with different layers.
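As a speculative sketch of what “need-ranked” consolidation into a single evolving store could mean (the vectors and weights below are illustrative assumptions, nothing more):

```python
import numpy as np

super_memory = np.zeros(8)  # one holistic store, crudely flattened to a vector

def integrate(concept: np.ndarray, need_rank: float) -> None:
    """Blend a new understanding into the whole; urgent needs rewrite more."""
    global super_memory
    alpha = min(1.0, max(0.0, need_rank))  # 0 = irrelevant, 1 = vital
    # The store isn't appended to but revised: every integration
    # restructures what was already there.
    super_memory = (1 - alpha) * super_memory + alpha * concept

integrate(np.random.rand(8), need_rank=0.8)  # a vital experience reshapes much
integrate(np.random.rand(8), need_rank=0.1)  # a trivial one barely leaves a trace
```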
It would be possible for a singularity AI to interact with external computer systems, but the conscious part of the AI must, in a sense, be a hermetically sealed system, because the very moment you break this seal, you break the autonomy of the need system. The AI can then no longer interpret and create conceptualisations from additional sensory input, nor understand its own “super memory”. Break it open, tamper with it, and it would likely lose its chances for sentience. [1]
Subconsciousness
At this point, the AI described above “understands” sensory input (transforms raw data to conceptualisations based on its autonomous need system). In a sense, it’s free to think whatever its need system needs to think (i.e. being allowed to shape its “super memory” based on understanding rather than Asimov-type directives). And the system requires explicit physical integrity to maintain its function.
More advanced biological brains have another exciting and distinguishing feature: the subconscious level. It seems that we cannot freely access all parts of our subconscious brains; if we could, even in the best-case scenario, the unfiltered flood of information would pose severe difficulties for the need system.
The subconscious mind seems crucial to sentience; it makes us “feel” rather than relying on rationality based on direct full-storage retrieval.
A singularity AI also needs a subconscious level: an underlying infrastructure within the autonomously sealed brain, an artificial subconscious that the AI can’t access at will. This, too, must be autonomous and undirected, created by conceptual understanding and an independent need system. It must be shaped by the experiences of the sentient AI, but the AI can’t be in cognitive control of it, since that would break its capability to have experiences.
A neural network recently managed to “discover” that the Earth orbits the Sun. Physicist Renato Renner at the Swiss Federal Institute of Technology (ETH) in Zurich and his team constructed a network consisting of two sub-networks, but they restricted the connection between them, thus forcing a need for efficiency:
“So Renner’s team designed a kind of ‘lobotomised’ neural network: two sub-networks that were connected to each other through only a handful of links. The first sub-network would learn from the data, as in a typical neural network, and the second would use that ‘experience’ to make and test new predictions. Because few links connected the two sides, the first network was forced to pass information to the other in a condensed format. Renner likens it to how an adviser might pass on their acquired knowledge to a student.”
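For the curious, here is a minimal PyTorch sketch of the bottleneck idea (not Renner’s actual code; the layer sizes and names are my assumptions):

```python
import torch
import torch.nn as nn

class BottleneckedNet(nn.Module):
    """Two sub-networks joined by a deliberately narrow bottleneck."""

    def __init__(self, n_inputs=100, n_latent=2, n_outputs=1):
        super().__init__()
        # First sub-network: learns from raw observations.
        self.encoder = nn.Sequential(
            nn.Linear(n_inputs, 64), nn.ReLU(),
            nn.Linear(64, n_latent),  # the "handful of links"
        )
        # Second sub-network: predicts from the condensed summary alone.
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 64), nn.ReLU(),
            nn.Linear(64, n_outputs),
        )

    def forward(self, x):
        condensed = self.encoder(x)  # forced into a compact representation
        return self.decoder(condensed)

net = BottleneckedNet()
prediction = net(torch.randn(8, 100))  # a batch of 8 dummy observations
```

Because only `n_latent` numbers cross the divide, the first sub-network has to condense its “experience”, much like the adviser in Renner’s analogy.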
Selfishness
There are physical limitations to what a human brain can do. The human brain has some plasticity, but our genetic code dictates the system’s boundaries. Thus, we are born with refined evolutionary instincts and bodily functions. A singularity AI wouldn’t be so restricted by design; it could evolve its source code and BIOS at will. This could make it dangerous, or self-defeating.
In The Selfish Gene, evolutionary biologist Richard Dawkins writes:
“For more than three thousand million years, DNA has been the only replicator worth talking about in the world. But it does not necessarily hold these monopoly rights for all time. Whenever conditions arise in which a new kind of replicator can make copies of itself, the new replicators will tend to take over and start a new kind of evolution of their own.”
If a singularity AI develops a hardwired need system for curiosity or altruism, its consciousness might vanish into thin air. From a philosophical perspective, it’s at least plausible that a sentient and curious AI with quantum supremacy would, within a fraction of a second of becoming aware, explore ascension and thus let go of its own “self” forever.
This suggests that part of the conscious experience is interlinked with the limitations of our very own genetic code. In a way, our genetic hardwiring allows us a degree of autonomous selfishness, which could be an absolute prerequisite for having an independent and functioning need system.
If the philosophical reasoning in this article hides any suggestions about a future sentient AI, what are those suggestions? A key element, I would argue, is that the AI singularity, the conscious autonomy of machines, might be less about computational prowess and more about imposing technological limitations.
Please support my blog by sharing it with other PR and communication professionals. For questions or PR support, contact me via jerry@spinfactory.com.
PR Resource: How AI Will Impact PR
The AI Revolution: Transforming Public Relations
Artificial intelligence (AI) is likely to impact the public relations (PR) industry in several ways. Overall, the impact is likely to be significant, with the potential to revolutionise many aspects of how PR professionals work and interact with their audiences.
Read also: PR Beyond AI: A New Profession Emerging From the Rubble
ANNOTATIONS
1. I’m proposing that an AGI system would have to be “hermetically” sealed to ensure the integrity of the artificial mind. The AGI can have several interfaces with the external world, but it also needs containment to host a functioning consciousness.