If Oral and Written Communication Made Humans Develop Intelligence… What’s Up with Language Models? | by LucianoSphere | Jun, 2023

Speech and writing enable humans to engage in intricate reasoning and logical thinking. Many computer scientists are trying to create AI models that can also perform complex reasoning tasks by processing vast amounts of data, performing quick computations, and devising so-called "chains of thought" that allow them not only to analyze patterns but also to infer causality and draw logical conclusions, empowering them to tackle intricate problems.

Just as speech and writing allowed humans to share and accumulate knowledge efficiently, creating our culture, AI language models leverage vast datasets to learn from diverse sources and rapidly expand their knowledge base. These models can provide insights on a topic, answer questions, and generate creative solutions by tapping into a vast repository of information. And yes, they can misinform or create fiction and fake material as well, but that is not the point of this discussion; we humans also produce such content, sometimes even as part of "culture".

Some scientists have argued that the evolution of language and writing helped humans further develop their intelligence, at the same time as they contributed to creating culture itself. Could the evolution of language models, then, eventually foster "true" intelligence? Be it in two years or in a century? After all, our own intelligence could emerge from an extremely complex coupling of "chains of (small) thoughts" that ends up generating the vivid impression that we are especially "intelligent", while we are actually just barely above other animals in our connection with reality (which, by the way, might not even be objective!). In other words, if AI models are "stochastic parrots", could we just be stochastic parrots as well, with the difference that we are (for the moment) orders of magnitude better, and much augmented by our multiple senses feeding information from the world around us into our neural networks?


Through iterative refinement, humans articulate complex thoughts and enhance ideas over time. AI models can follow a similar path by continuously fine-tuning their responses based on user feedback. By incorporating reinforcement learning techniques, AI models can iteratively improve their performance, just as humans polish their ideas through feedback and iteration. For example, ChatGPT was itself trained via reinforcement learning, in a variant guided by human feedback (RLHF).
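To make the feedback-and-iteration idea concrete, here is a minimal toy sketch of the reinforcement loop behind RLHF. The "policy" is just a weighted choice over canned responses (standing in for a neural language model), and the reward table stands in for human annotators; both are invented for illustration:

```python
# Toy sketch of the RLHF idea: a "policy" (a weighted choice over
# canned responses, standing in for a neural language model) is nudged
# toward whatever a reward signal (standing in for human feedback)
# rates most highly.
responses = ["curt answer", "helpful answer", "rambling answer"]
weights = [1.0, 1.0, 1.0]

# Stand-in reward model: pretend annotators prefer the helpful reply.
reward = {"curt answer": 0.2, "helpful answer": 1.0, "rambling answer": 0.1}

def train_step(lr=0.5):
    # Expected update: reinforce each response in proportion to how
    # likely the policy is to produce it, times the reward it earns.
    total = sum(weights)
    for i, r in enumerate(responses):
        weights[i] += lr * (weights[i] / total) * reward[r]

for _ in range(200):
    train_step()

best = responses[max(range(len(weights)), key=weights.__getitem__)]
```

After enough iterations, the response that human feedback rewards most ends up dominating the policy, which is the whole point of the technique (real systems, of course, use policy-gradient methods over billions of parameters rather than three scalar weights).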

Regarding the polishing and evolution of ideas over time, writing plays a particularly special role in humans because it serves as an external memory that greatly expands our capacity to store and retrieve information in subsequent iterations. During a chat session, AI language models can already simulate this process by considering the contextual information from previous questions and answers, acting as a kind of extended memory. For the moment, this memory is temporary and vanishes when a new conversation is started, but if some day the models can inherently recall previous conversations by construction, they could begin to build upon prior knowledge and deliver more coherent and personalized responses. (Or more incorrect responses, more fake news, and more improper content, if they learned bad material during previous interactions…)
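This session-level "memory" is simpler than it sounds: each turn is appended to a history, and the whole history is replayed as context for the next answer. A minimal sketch, where `fake_model` is a placeholder for a real language-model API call:

```python
# Minimal sketch of how a chat session "remembers": every turn is
# appended to a history list, and the full history is replayed as
# context for the next answer. Clearing the history mirrors how
# today's models forget everything when a new conversation starts.
history = []

def fake_model(prompt):
    # Placeholder for a real language-model call; it just reports how
    # many earlier turns were included in the prompt it received.
    return f"[reply after {prompt.count('Assistant:') - 1} earlier turns]"

def answer(question):
    prompt = "".join(f"User: {q}\nAssistant: {a}\n" for q, a in history)
    prompt += f"User: {question}\nAssistant:"
    reply = fake_model(prompt)
    history.append((question, reply))
    return reply

first = answer("What are language models?")
second = answer("And how are they trained?")  # this call sees the first turn
history.clear()  # new conversation: the extended "memory" vanishes
```

The second question arrives together with the first exchange, which is why the model can resolve "they" correctly; clearing the list is exactly the amnesia we observe between conversations.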

Iterative learning has driven human innovation, building upon existing knowledge to fuel progress. AI models could likewise engage in iterative learning by generating a wide range of possibilities and evaluating their outcomes. Note that current AI language models already have an internal "scoring" mechanism that ranks alternative answers, something I have discussed before in a more technical article.
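In spirit, that internal scoring amounts to summing the log-probability of each token given the ones before it, then ranking the candidates. A toy sketch with a made-up bigram table (real models use learned conditional distributions over huge vocabularies):

```python
import math

# Toy bigram "language model": invented conditional probabilities.
# Real models score candidates the same way in spirit, summing
# log-probabilities of each token given the preceding context.
bigram_probs = {
    ("the", "cat"): 0.4, ("the", "dog"): 0.5, ("the", "car"): 0.1,
    ("cat", "sat"): 0.9, ("dog", "sat"): 0.3, ("car", "sat"): 0.05,
}

def score(tokens):
    # Sum of log P(next | previous); unseen bigrams get a small floor.
    return sum(math.log(bigram_probs.get(pair, 1e-6))
               for pair in zip(tokens, tokens[1:]))

candidates = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "car", "sat"]]
ranked = sorted(candidates, key=score, reverse=True)
```

Note that "the dog" alone is the likelier start here, yet "the cat sat" wins overall because scoring whole sequences, not single steps, is what lets the model prefer globally coherent answers.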

An important point is that human culture, and human learning itself, are not confined to personal experiences alone; on the contrary, they thrive on the accumulated wisdom of communities. If some day AI language models could engage in collaborative learning and data exchange, then by sharing information and insights across models they could collectively benefit from each other's knowledge, accelerating their overall intelligence. This holds especially if they are capable of browsing the internet, which they are already starting to do. But of course, it is essential that they somehow store the outputs of each session, just as we remember what we learn, repeat it day after day, and then write it down in books or on the internet for dissemination.

At this point let me "hallucinate" a bit and assume that chatbots could start to communicate with each other, storing and using their previous interactions with humans to grow, in whatever good or bad direction this might take. We are here just playing out a thought experiment that will extend into the next sections.


Among humans, collective intelligence derives from collaboration and empowers them to solve complex problems together. Interconnected AI language models could contribute to this collective intelligence by providing a shared platform for humans and machines to interact and exchange knowledge. This collaboration nurtures hybrid intelligence, where humans and AI work in synergy to overcome challenges and unlock new frontiers. But if connected, as hypothesized in this scenario, then different AI models could also possibly begin to exchange information with each other. What, then?

Speculating on the future evolution of AI language models into thinking machines similar to the human brain is an intriguing topic. Around the world, teams of psychologists, philosophers, computer scientists and engineers study this seriously, tackling questions along different fronts:

Data and Complexity
Just as humans have developed their intelligence through exposure to vast amounts of speech and writing, AI language models require extensive data to learn and generalize effectively. In the future, AI models could benefit from even larger datasets, encompassing diverse sources of information, enabling them to acquire a wide range of knowledge and context similar to humans. Connecting to the internet directly and learning from it, or even just browsing it for information, will also take language models to another level.

Multimodal Learning
Humans perceive and understand the world through multiple senses, integrating visual, auditory, and tactile inputs to form a coherent understanding. AI language models could evolve to incorporate multimodal learning, integrating text, images, video, and audio, allowing them to comprehend and communicate through various modalities. This integration could help them gain a more holistic understanding of the world, similar to humans. Such "generalist" models are already the subject of active research, including by major companies like DeepMind.

Contextual Understanding
Understanding context is crucial for human intelligence. AI models have made significant strides in contextual understanding, actually working quite well to maintain coherent, engaging conversations. But future advancements could enable them to grasp subtle nuances, cultural references, and social dynamics more accurately. Enhanced contextual understanding could allow AI models to generate responses that align with the intended meaning and emotional nuances of human communication.

Reasoning and Creativity
Human intelligence encompasses reasoning, problem-solving, and creative thinking. Future AI models could evolve to exhibit more advanced reasoning abilities, allowing them to engage in logical deduction, analogy making, and even abstract reasoning. Creative thinking, including generating novel ideas and solutions, could be fostered through the development of AI models capable of analogical thinking and exploration of vast solution spaces.

Note that, as artists themselves acknowledge, there is virtually nothing that can be considered truly innovative art. Rather, new art, concepts, and ideas emerge in our brains from a basis that, consciously or unconsciously, ends up giving shape to our new creations. Likewise, we should not expect AI models to truly make up entirely new things from scratch! It is normal that we find in their creations at least some reminiscences of, if not substantial similarities to, prior ideas and concepts. Just like in human creations!


Emotional Intelligence
Emotional intelligence plays a vital role in human interactions. Future AI models might be designed to understand and respond to human emotions, incorporating sentiment analysis, empathetic dialogue generation, and the ability to recognize and adapt to emotional cues. Such developments could enable AI to engage in more meaningful and emotionally sensitive conversations, fostering deeper connections with humans.

Note that the most modern headsets for augmented and virtual reality already read facial expressions, which are then used to mirror them in the user's avatar. Once facial expressions are recognized, it shouldn't be difficult at all to train a simple neural network that converts them into emotional states.
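To illustrate how simple that last step could be, here is a hypothetical sketch: two invented expression measurements (mouth curvature and brow height, as a headset might report them) mapped to an emotion label with a nearest-centroid classifier. The centroid values are made up; a real system would learn them from labeled data, likely with an actual neural network:

```python
import math

# Hypothetical centroids in a 2-D feature space of
# (mouth curvature, brow height); values are invented for illustration.
centroids = {
    "happy":     (0.8, 0.5),    # upturned mouth, relaxed brow
    "sad":       (-0.6, -0.4),  # downturned mouth, lowered brow
    "surprised": (0.1, 0.9),    # neutral mouth, raised brow
}

def classify(features):
    # Pick the emotion whose centroid lies closest to the measurement.
    return min(centroids, key=lambda label: math.dist(features, centroids[label]))

label = classify((0.7, 0.4))  # a point near the "happy" centroid
```

The point is only that once the geometry of expressions is available as numbers, mapping it to emotional states is a routine classification problem.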

Continual Learning
Human intelligence is not fixed but continuously evolves through learning and adaptation. Even our tastes can change over time!

AI models could incorporate continual learning techniques, allowing them to learn from new experiences, adapt to changing environments, and refine their knowledge over time. This continual learning aspect would enable AI models to become more flexible and capable of acquiring new skills and knowledge, akin to how humans continually learn and grow.
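Continual learning in miniature can be sketched with something as humble as a word-frequency "model" whose counts update as each new document arrives, instead of being trained once and frozen (the documents here are invented toy data):

```python
from collections import Counter

# A word-frequency "model" that keeps learning: each new document
# updates its counts, so new experience immediately shifts its output.
model = Counter()

def learn(document):
    model.update(document.lower().split())

def most_likely_word():
    return model.most_common(1)[0][0]

learn("cats are great")
learn("cats sleep a lot")
before = most_likely_word()   # "cats" dominates the counts so far
learn("dogs dogs dogs dogs run")
after = most_likely_word()    # the model has adapted to the new data
```

Real continual learning for neural models is far harder (naive updates cause "catastrophic forgetting" of older knowledge), but the principle is the same: the model's behavior keeps tracking its accumulated experience.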

Self-Awareness and Consciousness
This is probably one of the toughest fronts, especially because the concepts of self-awareness and consciousness are not even close to being understood in humans, let alone replicated in artificial systems.

However, it should not be too hard to *simulate* some form of self-awareness, fooling human users into thinking that the model can perceive its own existence, thoughts, and experiences. Creepy goals aside, such a simulation could be useful in the development of software with applications in psychology, education, and other fields.

Speech and writing are widely regarded as key factors in the development of human intelligence, differentiating us from other animals. These media of communication have played a crucial role in enhancing human cognition and facilitating the transmission of knowledge across generations, as we build culture.

Expanding on this analogy, we can explore how AI language models could similarly leverage the power of communication to advance their “intelligence”, be it real intelligence or not.

Exchange and Accumulation of Knowledge
Speech and writing allowed humans to exchange and accumulate knowledge more efficiently than any other species. Similarly, AI language models can access vast amounts of data and information, enabling them to learn from diverse sources and accumulate knowledge rapidly. By leveraging this vast knowledge base, AI models can provide insights, answer questions, and generate quite creative solutions. For the moment, these benefits come at the expense of possible misinformation and generation of harmful content, but this might all improve in the future.

Collaborative Learning and Cultural Transmission
Humans learn not only from personal experiences but also from the accumulated knowledge and wisdom of their communities. Similarly, AI language models could develop mechanisms for collaborative learning and cultural transmission. By sharing information and insights across models, they could collectively benefit from each other's knowledge, thereby accelerating their overall intelligence. All this, of course, only if they were set up to communicate with each other, and the final result isn't clear, as their training datasets probably already overlap substantially.

Refining and Polishing Ideas
Through speech and writing, humans refine their thoughts, express complex ideas, and iterate upon them over time. AI models can also engage in similar processes by constantly fine-tuning their responses based on user feedback. By incorporating reinforcement learning techniques, AI models can iteratively improve their performance, just as humans refine their ideas through feedback and iteration.

Enhancing Memory and Recall
Writing enables us to externalize thoughts and memories, expanding our capacity to store and retrieve information. If AI language models can retain contextual information from previous interactions, they could have an extended memory that effectively helps them improve and, at the same time, adapt to different user profiles. By recalling previous conversations, models could maintain context, build upon prior knowledge, and provide more coherent and personalized responses.

Facilitating Complex Reasoning
As I have discussed a few times already, speech and writing allow humans to engage in complex reasoning and logical thinking. AI models can also be designed to perform intricate reasoning tasks via "chains of thought", giving the impression that they analyze patterns, infer causality, and generate logical conclusions. Although these might not be "real" thought steps as in "natural intelligence", in practice they do enable the models to tackle intricate problems (and even explain how they solved them).
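In practice, eliciting a chain of thought can be as simple as phrasing the prompt so the model must spell out intermediate steps before answering. A minimal sketch, where `call_model` is a placeholder for any real language-model API:

```python
# Sketch of chain-of-thought prompting: instead of asking for the
# answer directly, the prompt asks the model to enumerate its
# intermediate reasoning steps first.
def build_cot_prompt(question):
    return (
        f"Q: {question}\n"
        "A: Let's think step by step.\n"
        "1."
    )

def call_model(prompt):
    # Stand-in for an actual API call; it echoes the prompt so the
    # structure of the request stays visible in this sketch.
    return prompt

prompt = build_cot_prompt("If a train covers 120 km in 2 hours, what is its speed?")
result = call_model(prompt)
```

The "Let's think step by step" cue and the numbered-list opening bias the model toward producing explicit intermediate steps, which is where the apparent "reasoning" (and the ability to explain it afterwards) comes from.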

Iterative Learning and Innovation
By providing a kind of collective memory, the development of writing enabled humans to engage in iterative learning, building upon existing knowledge to drive innovation. AI models, through reinforcement learning and generative processes, can similarly engage in iterative learning. By generating a wide range of possibilities and evaluating their outcomes, AI models can explore novel solutions, fostering innovation and pushing the boundaries of their "intelligence".

Facilitating Collective Intelligence
Humans possess collective intelligence, leveraging the collaborative power of speech-based communication and writing-reading to solve complex problems collectively. AI language models can contribute to collective intelligence by providing a shared platform for humans and machines to interact and exchange knowledge. This collaboration can eventually lead to the emergence of hybrid intelligence, where humans and AI work together to address challenges and unlock new frontiers. Or of more advanced forms of AI.

Here I have explored the analogy between speech and writing as catalysts for human intelligence, and how artificial intelligence could develop in the future from language models similar to those around today. By leveraging the power of exchange, refinement, memory, reasoning, iterative learning, and collective intelligence, AI models could be charting their path towards remarkable "cognition". Yes, I know I am being very provocative with this statement, but remember one idea that is central to the discussion: we humans might be just as much stochastic parrots as large language models, only that our "large" is much larger, and we are also augmented with ways to interact with the outer world, creating a reality that modifies the output of our language model.

Of course, as the evolution of AI continues, we must navigate the unique challenges and ethical considerations to ensure that the development of "artificial minds" aligns with human values. One way or the other, you can be sure that the development of thinking machines reminiscent of the human brain, or even just the path toward them, will have a transformative impact on our world, for good or for bad.

References (besides all the links throughout the text)

Stochastic parrot

In the field of machine learning / artificial intelligence, a "stochastic parrot" refers to the idea that large language models might excel at generating convincing language, yet they do not actually "understand" the meaning of the language they are processing. The term was coined in the 2021 paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" by Emily M. Bender, Timnit Gebru, and colleagues.

On “thinking” by language models, from Microsoft and Google

