AI and Large Language Models - Glass half full or empty?

 

Humanity has definitively entered the age of artificial intelligence (AI). Traditional AI optimisation tools, and now newer generative models, promise to reshape the contours of possibility and nudge the collective psyche into uncharted territory. We stand on the brink of a new intellectual renaissance, one that challenges our very notions of consciousness and creation.


This digital genesis has given rise to a modern thinker defined not just by knowledge or intellect, but by where they stand on three axes: agency vs. fatalism, optimism vs. pessimism, and a technical vs. non-technical understanding of AI. Furthermore, the emergence of large language models (LLMs) has etched a definitive countenance upon AI's visage, capturing the world's collective awe and fascination. The hype and explosion of research surrounding LLMs have bolstered the popularity of AI as a whole, and alongside it has spawned a suite of related generative models: text to image, text to video, and text to [anything]. While LLMs like ChatGPT remain within the realm of narrow AI, they have demonstrated capabilities profound enough to elevate the AI debate to a new tier of intensity.

“ChatGPT is one of those rare moments in technology where you see a glimmer of how everything is going to be different going forward.” (Aaron Levie, CEO of Box). 

From this stewing uncertainty has emerged a spectrum ranging from the doomers - those who believe that unchecked AI poses an existential threat to humanity - to the boomers - those who downplay fears of an AI apocalypse and stress its potential to supercharge progress. Unprecedented levels of documentation across social media, worldwide conflict, and unexpected firings have deepened the rift between the doomers and boomers.

In this exploration, I introduce a novel framework for dissecting the AI debate by proposing an axis that runs orthogonal to that of the boomers vs. the doomers. I argue that, on average, one's technical grasp of AI is independent of one's disposition towards it. I will conclude with a conjecture about a final, third axis, which I believe could be of pragmatic value to the average person in understanding and navigating the burgeoning era of AI.

Debaters may be characterised by their technical acumen in the domain of AI. Public opinion can be loosely divided into those indifferent to the technical tissue of the technology, who perceive its outward effects intuitively, and those positioned deep within the tech world. From the latter type emerge the egos of the optimist and the pessimist. We will dissect the interaction between the two using key players, cherry-picked from this spectrum of analysts, both technical enough to perceive AI as no more than a machine learning algorithm: Marc Andreessen, particularly through his essay 'Why AI Will Save the World', and tech guru Jaron Lanier.

 

This image was generated by the Midjourney prompt “...”

 

Naive pessimism

In his essay, Andreessen surgically debunks the 'bread and butter' arguments of the general AI pessimist. He begins by highlighting a recurring pattern of moral panics driven by technological advancements throughout history. He then clarifies the pivotal issue with AI alignment: alignment to whose values? He ends by confidently addressing the question of whether AI will replace all jobs, invoking the lump of labour fallacy - the incorrect assumption that the economy operates as a zero-sum game.

“When technology is applied to production, we get productivity growth – an increase in output generated by a reduction in inputs. The result is lower prices for goods and services. As prices for goods and services fall, we pay less for them, meaning that we now have extra spending power with which to buy other things. This increases demand in the economy, which drives the creation of new production – including new products and new industries – which then creates new jobs for the people who were replaced by machines in prior jobs. The result is a larger economy with higher material prosperity, more industries, more products, and more jobs.”

A proven technologist, Andreessen champions “reasoned analysis” and urges us to apply rationality to what may merely be a ripple in the technological tide.

On optimism

Andreessen’s optimism pivots on the idea of “intelligence augmentation” - that AI will enhance human intelligence and improve outcomes across a broad range of domains, for everybody. His argument is underpinned by what he calls the most validated core conclusion of social science, established across many decades and thousands of studies:

“Smarter people have better outcomes in almost every domain of activity: academic achievement, job performance, occupational status, income, creativity, physical health, longevity, learning new skills, managing complex tasks, leadership, entrepreneurial success, conflict resolution, reading comprehension, financial decision making, understanding others’ perspectives, creative arts, parenting outcomes, and life satisfaction.”

He contends that AI offers the opportunity to profoundly augment human intelligence - the lever we have used to create the modern world: science, medicine, art, culture, philosophy, and morality. He illustrates a future where children benefit from “infinitely patient and compassionate” AI tutors, where ordinary people have infinitely helpful “mentors and therapists”, where scientists and artists have loyal collaborators, and where CEOs’ AI assistants deliver the “magnification effects of better decision making.”

Andreessen concludes that the proliferation of AI is a “moral obligation” to ourselves and our future. 

 

Google DeepMind | An artist’s illustration of artificial intelligence (AI).

 

Calculated pessimism

Lanier suggests an alternative obligation - one to our own sanity. Unlike Andreessen, who champions the synergy of humans and AI, Lanier is particularly critical of the notion that AI could even be compared to human intelligence, likening the comparison to “saying a car is a better runner than a human just because it's faster.”

Lanier's pessimism is rooted in the concern he harbours for the internet, which he believes contradicts the core principles of capitalism. He proclaims that our desire for free access to information, music, and social connections has turned us into the product, with our personal data becoming the currency. This, he contends, has led to a paradoxical internet: one that seems to offer boundless choices while actually narrowing them down. He cites Wikipedia as an example, pointing out that while the pursuit of a single, 'perfect' encyclopaedia may be extremely convenient, it threatens to undermine our diversity of knowledge. Lanier is particularly critical of the algorithmic traps and echo chambers that he believes the internet fosters, remarking, “I know people whose kids have committed suicide with a very strong online algorithm contribution.”

These concerns extend to his views on AI, where Lanier fears that reliance on overly intelligent systems could leave us intellectually ‘lazy and incurious’. He contrasts the past, where exploring record shops or bookstores was a manual, enriching experience, with today's reality, where recommendation algorithms dictate our choices. While acknowledging the potential benefits of AI chatbots, like their “programmed-in randomness”, Lanier remains sceptical. He questions whether these technologies truly offer diversity or simply create a semblance of it, potentially leading us down a path of intellectual stagnation.

Lanier’s thoughts raise the question: can we truly negotiate agency over the algorithms that dictate our feeds and spending habits? According to the Harvard Business Review, studies have found that 77% of employees use social media at work, many of them for up to several hours per day. This casts doubt on human agency in the face of such technology. In other words, do we really have the discipline to employ AI as an extension of our own intelligence, or will it inevitably become a replacement? Do we look forward to intelligence augmentation, or risk intelligence replacement?

Anthropomorphisation and the Ground Truth

Anthropomorphisation - the attribution of human characteristics to non-human entities - plays a significant role in how we judge AI. Both Marc Andreessen and Jaron Lanier stress the importance of refraining from anthropomorphising AI if we want to interpret it clearly. This tendency to ascribe human-like qualities to AI, known as the 'Eliza effect', can obscure our understanding of its capabilities and limitations. Some AI experts maintain a pragmatic view, acknowledging AI as a tool devoid of intentions or emotions; this perspective is crucial for realistic expectations and applications of AI, as illustrated in McKinsey's reports and venture capital trends. The human inclination to anthropomorphise, however, often fuelled by popular culture, leads to overestimations of AI's capabilities, the death of herds of AI unicorns, and misinformed regulatory discussions.

Simon Willison puts it best: “These models are really just fancy matrix arithmetic.”

 

Natural Language Processing Visualisation, Stephen Wolfram

 

Understanding AI's true nature is pivotal. From first principles, ChatGPT is a complicated function with hundreds of billions, perhaps trillions, of parameters. But does reducing AI to its mathematical core change the argument?
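To make Willison's point concrete, here is a minimal numpy sketch of the attention operation at the heart of a transformer. The function name and the toy dimensions are my own illustrative choices, not OpenAI's implementation; a real model stacks thousands of such operations with learned, rather than random, weights.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """One attention step: three matrix multiplies and a softmax."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # how much each token 'attends' to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows become probability weights
    return weights @ V                              # each output is a weighted mix of value vectors

# Toy dimensions: 4 tokens, 8-dimensional embeddings (real models use thousands).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                         # stand-in token embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)                                    # (4, 8) - matrix arithmetic, end to end
```

Everything ChatGPT produces reduces, at bottom, to compositions of operations like this one.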

Furthermore, this transformation from internet text to conversational model can actually be understood with surprising ease. One would therefore expect that as understanding deepens, predictions converge - and yet, again, Andreessen and Lanier are our antithesis. I conclude that the debate between optimism and pessimism cannot be resolved by a deeper technical understanding of the models.

A critique of the criticism

"As Sam Altman aptly states, 'the trade-off of building in public, is building in public.' Altman supported the company's decision to release GPT models during their active development phase, advocating for the idea that the world should participate in shaping the technology.

His decision to grant early access to these models might have left him somewhat disenchanted with the ensuing criticism. He expresses a hint of frustration, saying, “If you had told me, after making [Artificial General Intelligence], that the thing I’d have to spend my time on is trying to argue with people about whether the number of characters that it said nice things about one person was more than another person…”

This mild irritation voiced by Altman introduces a novel perspective into the 'glass half full' debate: is our judgement of GPT misguided?

 

Google DeepMind | An artist’s illustration of artificial intelligence (AI).

 

As discussed, LLMs 'learn' by ingesting vast amounts of information, from which they develop an internal model that optimally 'fits' the data they have processed. Humans (of sufficient age) perceive the world and then internally categorise things as good or evil. In contrast, an LLM is explicitly shaped by the content it encounters - by design. If it produces harmful output, that is a reflection of negative inputs it has received from human sources. Similarly, if it reprimands inappropriate comments, it is because it has been fine-tuned by humans to do so. In essence, an LLM is a projection of its training data: every thought humanity has ever put down on paper.
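A toy bigram model makes this dependence concrete. It is a deliberate oversimplification (no neural network, no fine-tuning; the corpus below is invented for illustration), but the principle is the same in kind: the model's 'beliefs' about what comes next are read directly off its training text.

```python
from collections import Counter, defaultdict

# A deliberately tiny 'corpus': the model's entire world view.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_distribution(word):
    """The 'model': relative frequencies lifted straight from the data."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

print(next_word_distribution("the"))
# {'cat': 0.5, 'mat': 0.25, 'fish': 0.25} - the model can only echo its corpus
```

Change the corpus and the distribution changes with it; complaints about the output are, ultimately, complaints about the data.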

As Lex Fridman puts it, to have a conversation with an LLM is to have a conversation “with the entirety of the collective intelligence of the human species.” 

Not only are we, again, wrongfully anthropomorphising by directing our anger towards an ‘entity’ that cannot inherently be good or evil, but we are also inadvertently reprimanding ourselves. So is it really the data scientists’ job to undo the biases of humanity? Are criticisms of ChatGPT’s bias simply deferring the larger issue onto an inanimate algorithm - are we yelling at a mirror? And to quote Andreessen, AI is quite possibly the most important – and best – thing our civilisation has ever created. Isn’t the glass full?

Conclusion

This argument about AI-judgement calls into question the very nature of the debate, and its importance on an individual level. If Andreessen vs. Lanier proves that pessimism and optimism cannot be resolved by technical comprehension, then what should we strive for? This brings to light the third and final axis: agency vs. fatalism.

“Pessimists Sound Smart. Optimists Make Money.” – Nat Friedman. 

Andreessen’s $1.8 billion net worth makes it difficult to contend with this statement. However, Yaniv Bernstein proposes an alternative viewpoint, namely that “the distinction is not optimism vs pessimism, but rather agency vs fatalism.” Andreessen is an AI optimist, but he is primarily a ‘high-agency’ individual, characterised not by his general optimism or pessimism, but by his belief that his own actions can influence the future for the better.

Conversely, the fatalist subscribes to a sense of inevitability, feeling powerless to alter the course of events. Whether optimistic or pessimistic, the fatalist remains passive, merely a spectator as the future unfolds. The question then is not whether the glass is half full or empty. 

Evidently, the leaders and CEOs, the technologists and consultants of the modern age will be neither defined nor differentiated by their take on AI. The future may belong less to the optimists or the pessimists and more to those ready to act regardless - high agency may be the defining attribute of the coming decades.

 

An illustration of the 3 axes - Justin Lee

 

Test your AI Identity

Take the test developed by the author to discover your own AI identity!

 