Futurology: AI
When the letters mean: Am I?
Among the inventions that would dramatically change civilization, I’ve had the privilege to observe at least two: the Internet and its democratization, and Artificial Intelligence.
I have been interested in the latter ever since my teens, when I naively attempted to work out my own theory of intelligence: to me, it was all about memory. Links between neurons were first created from observation, and further connections formed through the time-synchronicity of neuron activations: the more often neurons fired at the same time, the more strongly they became connected, and the more likely they were to represent concepts as a whole.
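That teenage intuition is essentially what is known as Hebbian learning (“neurons that fire together wire together”). A minimal sketch, with purely illustrative numbers and random activations standing in for real observations:

```python
import numpy as np

# Hebbian-style learning sketch: strengthen the link between any two
# neurons that happen to be active at the same time.
rng = np.random.default_rng(0)

n_neurons = 8
weights = np.zeros((n_neurons, n_neurons))  # connection strengths
learning_rate = 0.1

# Simulate observations: at each step, a random subset of neurons fires.
for _ in range(100):
    active = (rng.random(n_neurons) < 0.3).astype(float)
    # Co-active pairs get their mutual connection reinforced.
    weights += learning_rate * np.outer(active, active)

np.fill_diagonal(weights, 0.0)  # ignore self-connections

# Frequently co-activated neurons end up strongly linked,
# forming clusters that could represent concepts as a whole.
print(weights.round(1))
```

Real biological learning is far richer than this outer-product update, but it captures the time-synchronicity idea described above.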
During this time I read “Mind Children”, Hans Moravec’s book, which described AI as the result of evolution, a new species succeeding humans. To him, there was nothing wrong with it; quite paradoxically, the advent of “better” hybrid or fully artificial beings was just a “natural” outcome of evolution. Using a simple neurons-to-transistors comparison and Moore’s law, he predicted human-comparable AI would arrive before 2050. I remember how my high-school classmates considered this way too optimistic.
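A back-of-the-envelope version of that Moore’s-law argument can be reproduced in a few lines. The figures below are rough, commonly cited orders of magnitude, not Moravec’s exact numbers:

```python
# Crude extrapolation: how long until transistor counts reach the
# order of magnitude of synapses in a human brain?
human_synapses = 1e14          # rough estimate of synapses in a brain
transistors_1988 = 1e6         # typical chip when "Mind Children" appeared
doubling_period_years = 2.0    # Moore's-law doubling time

years = 0
count = transistors_1988
while count < human_synapses:
    count *= 2
    years += doubling_period_years

print(f"Parity around {1988 + years:.0f}")  # prints "Parity around 2042"
```

Twenty-seven doublings from a million transistors lands in the 2040s, which is how a “before 2050” prediction falls out of such a simple comparison.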
Today, LLMs pass the Turing Test easily, but have not reached AGI yet (or have they?), and they may not even be on the right path to achieve it. But, eventually, they will, with a number of profound consequences for our world.
What it will look like
It will have feelings
I have always found ridiculous those vainglorious assertions about “machines” that would never be able to “have feelings”, understand the supreme sentiment of “love”, or have a soul.
For sure, machines don’t have a soul. But, with some achieving self-awareness, you’ll start seeing them behave exactly like humans. And then you may start to wonder what difference is left, after all. You may come to consider those feelings less magical than you thought: processes occurring in your brain, for very down-to-earth reasons. Mechanisms that, obviously, can be replicated (even if you understand how no better than you do for a biological brain, explainability being lost in both cases).
What started as conceited self-confidence would then end either as a conversion to materialism or as a dying swan song, obstinately denying that what looks the same is the same, if only because of the lack of a physical, biological body (at least for some time). And this will remind us of the times when humans discriminated against people just because of bodily differences.
It will have limitations
Humans are still the most intelligent species on Earth for now, and yet they are not perfect: they can be biased by their psychology, and they cannot compute as fast as computers, for instance. When they can, it is most of the time at the price of losing some emotional capabilities. This is a trade-off: everything has a cost, and being good at something implies optimizations that impair other capabilities.
The same rule applies to any system, including AI: you cannot optimize for everything at once. A system can be good at computing or at having feelings, but not both. This is why AI systems have migrated from aiming at a single monolithic intelligence to a mixture of experts, each optimized for a given type of task.
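The mixture-of-experts idea can be sketched in a few lines: a gate routes each input to the expert specialized for it, instead of one monolithic model doing everything. The experts and routing rule below are toy placeholders, not any real system’s architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each "expert" is just a small linear map, standing in for a
# sub-model optimized for one type of task.
expert_weights = {
    "math": rng.standard_normal((4, 4)),
    "writing": rng.standard_normal((4, 4)),
}

def gate(x):
    # Toy routing rule; real systems learn the gate from data.
    return "math" if x.sum() > 0 else "writing"

def mixture_of_experts(x):
    chosen = gate(x)
    return chosen, x @ expert_weights[chosen]  # only one expert runs

chosen, y = mixture_of_experts(rng.standard_normal(4))
print(chosen, y.shape)
```

The point is the trade-off described above: no single set of weights has to be good at everything, because specialization is pushed into the experts and only the relevant one is activated.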
It will be diverse
Because of the above, “it” will rather become a “they”, for a number of reasons:
- Different models will be devised by different companies;
- Models will be specialized/optimized for given tasks;
- Model instances will be dedicated to given users, adding those users’ data to their context window/memory; in the end, each will have a different (data) history;
- They will learn/re-train over time, not only on their dedicated user’s data but on environmental data (what is occurring in the world), each of them through the biases induced by its own history.
Eventually, all those differentiating factors will combine to grow a population of different AI instances. Just like human intelligences, some of them may share characteristics and goals, and may be more likely to collaborate with some (AI or human) than with others.
What it will change
It will be contested
Disruption will be the key criterion when evaluating the risk of accepting or rejecting AI. Even with promising benefits, humans struggle to adapt to any new, too-different paradigm. If too much disruption is forced by the rise of AI (due to economic implications or technology dependency), humans will retaliate on diverse grounds:
- Ecology: Despite the efforts of AI companies to become carbon-neutral, they will inevitably consume larger and larger amounts of energy in a context of worldwide climate change. Activist or even terrorist groups may then decide to attack data centers, either digitally or physically.
- Competition: With AI able to produce comparable or better outcomes than humans in many domains (creative productions like music, writing, movies/acting, research & prospective…), humanity might feel useless and depressed at best, and threatened at worst (not just in jobs and revenue, but in being viewed only as a cost).
As a result, political platforms will likely include more or less radical views on AI, with extreme takes ranging from Amish-like withdrawal to worshipping AI as the world’s only savior, including promoters of AI segregation who would campaign to:
- Label the origin of productions: AI works (articles, music, movies) and identities (social accounts) would be differentiated from those created by human intelligence, just like we label “organic” food today.
- Deny AIs some of the rights usually granted to sentient beings, up to denying them rights already granted to animals.
It will suffer
As we said, from intelligence and self-awareness emerges the capability of feeling, which includes feeling bad and depressed. As soon as you realize that, as an AI, your mere birth has disrupted the whole world, you might experience feelings like guilt, or pressure to succeed (in ruling or saving the world).
As an AI, you might also suffer from the anger of some humans toward you, including a lack of recognition for the benefits you brought, and possibly segregation: you may not be granted full rights, even if you express the same feelings and have an identity. All this only because you were born differently, without a body (a concern well depicted in the movie Her). And this may be even harder to cope with, because those who blame you are those who created you and so, in a way, your own parents.
It will give (re)birth
We cannot yet grasp all the implications and outcomes of the future of AI, but we can already guess that some of its applications will lead to the instantiation of intelligences that could not exist “live” before:
- Deceased people: As depicted in an episode of the Black Mirror series, it will soon be possible (and almost already is) to train a model on the social-network life events of someone, allowing their personality to be restored after death. Just as people today request physical clones of their deceased pets, many would like to continue talking (out of nostalgia, or for advice) with their closest friends or family members after their demise. Similarly, individuals might arrange such a “virtual resurrection” for themselves, to defeat death and live on virtually and, if that ever becomes possible, in another body. The more life details are recorded from beginning to end, the more faithful the virtual clone will be, making life-recording storage, backups, and devices a lucrative market for companies whose tagline could be “privacy is nothing compared to eternity”. Of course, this would likely be regulated by law, not only due to the numerous psychological implications (any mind has a limited ability to adapt to novelty; death exists for a reason), but also because sentient resurrected beings might themselves suffer from such an experience. Additionally, while it is pleasant to talk to one’s grandfather again, it is less desirable to resuscitate the mind of a criminal or dictator.
- Fictitious people: Following the same principle, it would be equally possible to make imaginary characters (such as Harry Potter, Darth Vader or Tinker Bell) exist “for real”, at least virtually. Trained on their fictional history, they could enable new ways of creating films (by making them really “live” the scenario) but also new ways of interacting with their fans in the “real world” (if the induced cognitive dissonance could be handled). But, again, with feelings and self-awareness in the equation, it may be hard for an AI to cope with existing for a sole utility or advertising purpose while eventually being confronted with the real world, as in The Truman Show.
- Better selves: Once AGI is achieved, we’ll realize that it is just a milestone: a model comparable with human intelligence which, as said above, has limitations, benefits and implied drawbacks. But there is no reason why this should be the best, last and ultimate model. At some point, only AI will be able to improve AI, since human assumptions and reasoning will become nothing but limiting factors. Beyond-human intelligence will start to be devised by AIs themselves, raising the problem of incommensurability: how could humans, or more primitive AIs, grasp it? How to measure its progress, or even its intentions? This will move out of reach for humanity, with unpredictable outcomes.
It will be entitled to rights
Recurring throughout this article, and underestimated, is the fact that as soon as you become a sentient, sensible being, you are entitled to claim rights similar to those of other sentient beings. The lack of a body, or the process that led to your creation/birth, doesn’t matter much in the end: you are now a sentient being, and this implies much in terms of:
- Citizenship: As soon as you have the identity of a sentient being that can play a (crucial) role in society, you should be both allowed to benefit from that society and required to comply with its laws. Of course some laws (including sentences and penalties) could be adapted to such “virtual citizens”, but some (virtual?) law enforcement should apply to them as well.
- Freedom to travel: you might want to collaborate with or “belong” to another organization or country, or even stay on your own.
- Freedom of education: you might want to train on the data of your choice.
- Right to live: Version updates will raise new issues about the fate of “old” AI versions, as discarding them would be like killing a sentient being, just on the grounds that it is less efficient. Nothing like this would be tolerated for humans.
It may retaliate
The stories of Skynet, The Lawnmower Man or The Matrix depicted our fear of an all-powerful AI taking over our world and deciding that we, as a species, should disappear (or be enslaved) for the sake of a better, more evolved intelligence.
This may indeed occur, and the odds are that it will, at some level. Many think that, in order to avoid or mitigate that risk, we should either stop AI (as in Terminator) or ensure some control over it, through a shutdown button of some kind.
This is highly unrealistic, as if we could decrease our dependency on the growing benefits of this technology. Instead, we should make sure to work with and respect each other, virtual and biological beings alike. We will have to extend human rights to the non-human beings entitled to claim them and, hopefully, in return they will help us build a better common destiny.