Continuing the discussion at the frontier between the most modern technology, philosophical aspects of AI, and science fiction
AI has become a hot topic in recent times, with remarkable advancements in technologies like ChatGPT, Bard, and other large AI language models that can engage in natural language conversations. Let's explore the history of AI through two of its earliest and most famous benchmarks and thought experiments: the Turing Test and the Chinese Room Argument, discussing their ideas in the context of modern language models.
This analysis continues from a previous article I wrote that seems to have resonated with many of my readers:
We are barely past the first two decades of the 21st century, and we already have language models like ChatGPT and Bard that, let's be honest, we didn't even think were possible when the century began. These models use advanced machine learning techniques to ingest huge amounts of text and then perform highly complex text-related tasks, applying the patterns "learned" from the training texts in the form of a natural conversation between the user and the computer model.
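Real large language models are vastly more sophisticated, of course, but the core idea of "learning patterns from text and then generating new text from them" can be sketched with a toy bigram (Markov chain) model. Everything below (the `train` and `generate` functions, the tiny corpus) is an illustrative sketch of mine, not how ChatGPT or Bard actually work internally:

```python
import random
from collections import defaultdict

def train(text):
    """Build a toy bigram model: map each word to the words that follow it."""
    words = text.split()
    model = defaultdict(list)
    for w1, w2 in zip(words, words[1:]):
        model[w1].append(w2)
    return model

def generate(model, start, length=8, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:  # dead end: no word ever followed this one
            break
        out.append(random.choice(followers))
    return " ".join(out)

# A deliberately tiny "training corpus"
corpus = "the cat sat on the mat and the cat ran"
model = train(corpus)
print(generate(model, "the"))
```

A model like this only mimics surface patterns of its training data; modern LLMs learn far richer statistical structure at a vastly larger scale, but the philosophical question raised below (pattern imitation vs. understanding) applies to both.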
These models were shocking when they landed, because they seemed genuinely "intelligent". If you think I'm exaggerating, it's probably because you're surrounded by people who are deeply into science and tech, like you and me… just go ask people outside that circle.
While some claim that modern language models could likely pass the Turing test (see the next sections), it is essential to understand the limitations of such tests. Most importantly, the Turing test relies on the illusion of intelligence, not actual intelligence involving any kind of actual understanding. Moreover, given…