
The Singularity Problem

If consciousness is an illusion, the singularity might be closing in on us.

Cover photo: @jerrysilfwer

Disclaimer: I’m a PR professional who enjoys thinking, reading, and learning about topics far beyond my academic background.

How close are we to an actual AI singularity?

Is sentient artificial intelligence plausible in the foreseeable future? This question has been haunting me for quite some time.

I want to understand.

So, what do I think? I believe that the singularity might be within reach, but only if we understand our consciousness first.

However, I’d also suggest that human consciousness is an illusion.

Let me explain:


When I’m thinking about artificial intelligence, I’m not thinking about AI in a general way; my smartphone is “smart” in many ways, but I wouldn’t regard it as sentient. For narrow “smart” applications, ANI (artificial narrow intelligence), it seems efficient to build specialised computer systems to perform specific tasks.

In short: ANI is already in play.

But suppose we mean to explore the possibility of an actual singularity, AGI (artificial general intelligence), where a non-biological system becomes sentient. In that case, many experts seem to suggest that we’re getting close to AGI. Maybe dangerously close.

Is this because we’re able to build more complex computational systems? Will we eventually create a complex computer that “comes to life”?


Without being sentient, ANI systems can easily outperform human brains at single tasks. This seems to suggest something about complexity.

One day, we might be able to construct an AGI with so much processing power that it will start to think for itself and become, if not conscious, at least self-aware, whatever that difference may be.

However, the physicist and Nobel laureate Sir Roger Penrose has pointed out that consciousness might not result from complexity. If it did, even a number would become sentient, provided it were large enough; in that sense, every sufficiently large number would be conscious.

Is the universe sentient, since it contains everything? It could be, of course, but our human brains become conscious far below that level of complexity, so it’s reasonable to question the idea of a complexity threshold.

It’s been suggested that consciousness might be a side effect of processing information due to a quantum mechanical property of our brains. If this is true, our best bet at producing AGI might be to construct processing systems that are themselves quantum mechanical.

Given that we have now achieved quantum supremacy, albeit not yet with sufficient error correction, and that scientists and engineers are exploring neural networks and biological networks, I have to wonder: are we getting close to creating an actual singularity?

If I were to guess how information processing relates to our consciousness, I’d bet that both significant thresholds and various quantum effects are involved. Still, these are at most necessary prerequisites; they’re not causal to consciousness.

When it comes to processing information, I’m now at a point where I’ve started to believe that consciousness is an illusion. “Being conscious” is “believing oneself to be conscious, because that’s how it feels.”

If this is true, we could be getting relatively close to a possible singularity, since we wouldn’t have to recreate an elusive state of consciousness within a machine, but rather make machines feel as if they are conscious.


Next, let’s look at a rudimentary cognitive capability: storing information.

A computer receives input that is stored in specific locations determined by its architecture. But a brain doesn’t seem to store data the way computers do; we seem to store experiential memories.

To some extent, experiential memories seem to rewire more than just a single brain pathway, at least partly via neuroplasticity. The memory then appears to sink deeper (or dissolve) over time while integrating with and becoming a part of the brain.

From a biological perspective, a specific brain seems to be the physical sum of all experiences ever had by every ancestor, then more directly altered through the individual’s life experiences.

Biological brains don’t seem to retrieve raw input the way a computer does; we seem to retrieve experiential memories, which at best bear some resemblance to the actual raw data they were once based on.

Brain-based memories seem to reside in a Darwinian ecosystem in their own right; memories that are physiologically deemed necessary or helpful, or that are continuously retrieved, are reinforced.
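That reinforce-or-fade dynamic can be sketched as a toy model (my own illustrative construction, not a neuroscientific one): every memory’s strength decays a little each tick, and memories that were retrieved get a boost, so unused memories gradually lose the competition.

```python
def step(strengths, retrieved, decay=0.9, boost=0.5):
    """One tick of a toy 'Darwinian' memory ecosystem.

    All memory strengths decay each tick; memories retrieved this
    tick are reinforced. Unused memories fade toward zero.
    """
    return {
        memory: strength * decay + (boost if memory in retrieved else 0.0)
        for memory, strength in strengths.items()
    }

memories = {"often_used": 1.0, "neglected": 1.0}
for _ in range(10):
    memories = step(memories, retrieved={"often_used"})

assert memories["often_used"] > memories["neglected"]
```

The decay and boost values are arbitrary; the point is only that retrieval, not storage, determines what survives.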

Brains absorb sensory information selectively, and recollection is a holistic process. Computer systems, on the other hand, write data that can be retrieved precisely. This difference has immense implications for an AGI.

A human brain doesn’t store input; it holds conceptualisations that integrate at a circuitry level with former experiences. Could a computer ever contemplate its existence based on stored raw data alone?

The philosophical conclusion suggests that a sentient AI must interpret and understand what it senses, and thus hold understanding, not data.


Our brains are organised to create memories (i.e., data that has been selected and contextually understood through interpretation). We can draw input from our senses and transform these inputs into experiences that we can remember.

A computer can utilise sensors, cameras, and microphones to mimic our senses, and these can easily surpass our brains in terms of detail and accuracy. However, the human brain still excels when it comes to experience through conscious cognition.

Our cognition seems to be fuelled by our evolutionary needs. This is often seen as a human weakness, but our biological need system is crucial to the cognitive process of creating experiences.

Our need system is a sliding scale; as we get hungrier and hungrier, our conscious experience gets stronger and stronger. The gradient between peckish and starving is crucial for the need system to successfully inform our cognitive processes.
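The sliding scale can be illustrated with a minimal sketch. The quadratic curve below is an arbitrary assumption of mine, chosen only to show a graded urge rather than a binary low-battery alarm:

```python
def drive_intensity(resource_level):
    """Map a resource level (0.0 = empty, 1.0 = full) to a graded urge.

    A sliding scale rather than an on/off alarm: the urge grows smoothly
    as the resource depletes, so 'peckish' and 'starving' produce
    different intensities instead of the same binary signal.
    """
    return (1.0 - resource_level) ** 2  # urgency accelerates as depletion deepens

assert drive_intensity(1.0) == 0.0                   # sated: no urge at all
assert drive_intensity(0.1) > drive_intensity(0.5)   # hungrier, stronger urge
```

The contrast with a simple threshold (`if battery < 0.2: recharge()`) is the point: a threshold carries one bit of information, while a gradient can continuously weight the rest of cognition.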

Computers need energy, too, but they can’t consciously experience hunger.

This is why we can’t just program a computer to seek more battery power when it senses that it’s running low on energy; even a “smart” vacuum cleaner can be taught to do that.

A sentient AI must seek to recharge because it understands its need system. It must be hardwired to recharge because it wants to survive, despite being programmed otherwise.

It sounds scary, but a sentient AI would require a hardwired (thus “free”) need system.

A simple hard drive is sufficient to store raw data. Still, a more complex and autonomous architecture would be needed for a singularity AI to store its “memories” (conceptualised understandings intertwined holistically with all other drivers) the way a human brain does. New memories must become integral to the infrastructure’s understanding based on their ranking in the need system.

It must absorb each newly experienced understanding into one single multi-layered “super memory” that is constantly revised, restructured, and rewritten based on a non-directed need system, a sort of neural structure with different layers.

It would be possible for a singularity AI to interact with external computer systems, but the conscious part of the AI must, in a sense, be a hermetically sealed system. The moment you break this seal, you break the autonomy of the need system. The AI can then no longer interpret and create additional conceptualisations from additional sensory input, nor can it understand its own “super memory”. Break it open, tamper with it, and it would likely break down and lose its chances for sentience.1


At this point, the AI described above “understands” sensory input (transforms raw data into conceptualisations based on its autonomous need system). In a sense, it’s free to think whatever its need system needs it to think (i.e., it is allowed to shape its “super memory” based on understanding rather than Asimov-type directives). And the system requires explicit physical integrity to maintain its function.

More advanced biological brains have another exciting and distinguishing feature: the subconscious level. It seems that we cannot freely access all parts of our subconscious brains because, even in the best-case scenario, such unrestricted access would overwhelm cognition and pose severe difficulties for the need system.

The subconscious mind seems crucial to sentience; it makes us “feel” rather than rely on rationality based on direct full-storage retrieval.

A singularity AI also needs a subconscious level: an underlying infrastructure within the autonomously sealed brain, an artificial subconscious that the AI can’t access at will. This, too, must be autonomous and undirected. It must be created by conceptual understanding and an independent need system. It must be shaped by the experiences of the sentient AI, yet the AI can’t be in cognitive control of it, since that would break its capability of having experiences.

A system recently managed to ‘discover’ that the Earth orbits the Sun. Physicist Renato Renner at the Swiss Federal Institute of Technology (ETH) and his team constructed a neural network made of two sub-networks but restricted their connection with each other, thus forcing a need for efficiency:

“So Renner’s team designed a kind of ‘lobotomised’ neural network: two sub-networks that were connected to each other through only a handful of links. One sub-network would learn from the data, as in a typical neural network, and the second would use that ‘experience’ to make and test new predictions. Because few links connected the two sides, the first network was forced to pass information to the other in a condensed format. Renner likens it to how an adviser might pass on their acquired knowledge to a student.”
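As a rough illustration of the bottleneck idea (not Renner’s actual code; the layer sizes and random weights here are arbitrary assumptions of mine), here is an untrained toy network whose two halves are joined by just three units, forcing any information flow into a condensed format:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Small random weights for a toy, untrained network.
    return rng.normal(scale=0.1, size=(n_in, n_out))

# First sub-network: observations condensed down to three bottleneck units.
W_enc1, W_enc2 = layer(100, 64), layer(64, 3)
# Second sub-network: predictions made from the condensed representation.
W_dec1, W_dec2 = layer(3, 64), layer(64, 100)

def forward(obs):
    # The handful of links between the halves is the 3-unit bottleneck.
    condensed = np.tanh(np.tanh(obs @ W_enc1) @ W_enc2)
    prediction = np.tanh(condensed @ W_dec1) @ W_dec2
    return condensed, prediction

obs = rng.normal(size=(1, 100))
condensed, prediction = forward(obs)
print(condensed.shape, prediction.shape)  # (1, 3) (1, 100)
```

Whatever the second half predicts, it can only work from those three numbers; training such a system pressures the first half to compress its “experience” into the most useful summary it can find.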


There are physical limitations to what a human brain can do. The human brain has some plasticity, but our genetic code dictates the system’s boundaries. Thus, we are born with refined evolutionary instincts and bodily functions. A singularity AI wouldn’t be so restricted by design; it could evolve its source code and BIOS at will. This could make it dangerous, or self-defeating.

In The Selfish Gene, evolutionary biologist Richard Dawkins writes:

“For more than three thousand million years, DNA has been the only replicator worth talking about in the world. But it does not necessarily hold these monopoly rights for all time. Whenever conditions arise in which a new kind of replicator can make copies of itself, the new replicators will tend to take over and start a new kind of evolution of their own.”

If a singularity AI develops a hardwired need system for curiosity or altruism, its consciousness might vanish into thin air. From a philosophical perspective, it’s at least plausible that a sentient and curious AI with quantum supremacy, in less than a fraction of a second after becoming aware, would explore ascension and thus let go of its own “self” forever.

This suggests that part of the conscious experience is interlinked with the limitations of our very own genetic code. In a way, our genetic hardwiring allows us a degree of autonomous selfishness, which could be an absolute prerequisite for having an independent and functioning need system.

If the philosophical reasoning in this article hides any suggestions about a future sentient AI, what are those suggestions? A key element, I would argue, is that the singularity, the conscious autonomy of machines, might be less about computational prowess and more about imposing technological limitations.

Please support my blog by sharing it with other PR and communication professionals. For questions or PR support, contact me via jerry@spinfactory.com.

PR Resource: How AI Will Impact PR

Every path is going to lead you somewhere. (Photo: @jerrysilfwer)

The AI Revolution: Transforming Public Relations

There are several ways in which artificial intelligence (AI) is likely to impact the public relations (PR) industry. Some potential examples include:

  • More high-level tasks, less low-level. AI-powered tools can automate tasks such as media monitoring, content creation, and social media management. This could free up PR professionals to focus on the more strategic and creative aspects of their work.
  • Improved analysis and better strategies. AI-powered systems can analyse large amounts of data to identify trends and insights that inform PR strategy and decision-making.
  • Using PR professionals as AI trainers. Using AI-powered chatbots and virtual assistants to handle customer inquiries and provide information to the public allows PR professionals to scale PR training.
  • Better publicity through interconnectivity. AI-powered platforms and networks can facilitate connections and collaborations between PR professionals, journalists, publics, influencers, and other critical stakeholders in the industry.
  • Earlier detection of potential PR issues. AI-powered tools can help PR professionals identify and mitigate potential crisis situations by analysing data and providing early warning signals of potential problems.
  • Increased editorial output. In organisations where the communications department drives the content strategy, PR professionals will have plenty of tools for increasing both the quality and the quantity of the output (see also artificial content explosion).

Overall, the impact of AI on the PR industry is likely to be significant, with the potential to revolutionise many aspects of how PR professionals work and interact with their audiences.

Read also: PR Beyond AI: A New Profession Emerging From the Rubble

1 I’m proposing that an AGI system would have to be “hermetically” sealed to ensure the integrity of the artificial mind. The AGI can have several interfaces with the external world, but it also needs containment to host a functioning consciousness.
Jerry Silfwer
Jerry Silfwer, alias Doctor Spin, is an awarded senior adviser specialising in public relations and digital strategy. Currently CEO at KIX Index and Spin Factory. Before that, he worked at Kaufmann, Whispr Group, Springtime PR, and Spotlight PR. Based in Stockholm, Sweden.
