Sunday, March 12, 2023

The storm over ChatGPT

The Wall Street Journal recently published an article, "ChatGPT Heralds an Intellectual Revolution," which is possibly prophetic but more likely promotional hyperbole.  The company behind it, OpenAI, has recently released the product for general use, and two of the three authors are heavily invested in tech companies (they do not disclose whether they are invested in this one).  Other language AI systems are in the works at other tech companies, and the article may be an effort to position this version for investment.  The technology combines strings of verbal expressions into sentences and paragraphs that are perceived as responsive to the questions put to the system.  A summary of its features by Business Insider notes many pitfalls (https://www.businessinsider.com/everything-you-need-to-know-about-chat-gpt-2023-1), and the Microsoft version may involve the authors financially.  ChatGPT can also be used as a text generator for school essays, applications, and the like.  No one has yet proposed that it be used to generate legal documents, or to write "scientific papers" from a data source.  To some, the ability to generate "human-sounding" text related to an inquiry suggests a high level of "human intelligence."  The authors of the WSJ piece compare its importance to Gutenberg's printing technology and the emergence of the Enlightenment.  Similar pronouncements have accompanied other technological advances, like the internet, which in retrospect offers both opportunities and challenges to its effective use.  (The same might be said of any technology, including the printing press.)

Chomsky has written a blistering critique of the technology, arguing that it oversimplifies the far more complex generative processes of the human intellect (https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html).  From the op-ed: "OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Sydney are marvels of machine learning. Roughly speaking, they take huge amounts of data, search for patterns in it and become increasingly proficient at generating statistically probable outputs — such as seemingly humanlike language and thought. These programs have been hailed as the first glimmers on the horizon of artificial general intelligence — that long-prophesied moment when mechanical minds surpass human brains not only quantitatively in terms of processing speed and memory size but also qualitatively in terms of intellectual insight, artistic creativity and every other distinctively human faculty."
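The "statistically probable outputs" Chomsky describes can be illustrated with a toy sketch.  The bigram model below is my own hypothetical miniature, vastly simpler than the neural networks behind ChatGPT; it merely counts which word tends to follow which in a tiny corpus and then emits statistically likely continuations, with no understanding of meaning.

```python
import random
from collections import defaultdict

# A toy corpus; any text would do.
corpus = (
    "the printing press changed how ideas spread and "
    "the internet changed how ideas spread again"
).split()

# Count which words follow each word (the "patterns in the data").
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8, seed=0):
    """Emit a statistically probable word sequence, one word at a time."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:  # no observed continuation: stop
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

The output reads as plausible English precisely because every transition was observed in real text, yet the program has no communicative intent at all, which is Bender and Koller's point discussed below.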

Chomsky emphasizes the more sophisticated search and organization features of the human process, which depend less on brute statistical operations.  Also to the point is the question posed by Wladawsky-Berger, "Are GPT-3 and ChatGPT truly intelligent?" (https://medium.com/mit-initiative-on-the-digital-economy/are-gpt-3-and-chatgpt-truly-intelligent-38475c0c1692).  As he summarizes: "In their paper, Bender and Koller explain why LLMs like GPT-3 are likely to become innovative language tools — kind of like highly advanced spell checkers or word processors — while dismissing claims that they have the ability to reason and understand the meaning of the language they’re generating. Their explanations are couched in linguistic concepts which they carefully define: form, communicative intent, meaning, and understanding."

Viewed from this perspective, the WSJ article does seem a promo for investment.  But these systems do raise the interesting question of "true intelligence."  ChatGPT demonstrates that one specific human capacity, the ability to link verbal information into a pattern that appears meaningful (to humans), can be simulated by computers.  What more do humans add, and how do we do it?  The obvious answer is that we add data from our bodies: somatic patterns, hormones, "emotional reactions," in summary a range of responses that are not purely verbal and cannot always be easily translated into verbal form.  (That this might not immediately occur to computer scientists is not surprising.)  We also add the data of aggregate sensory experience, the combination of signals entering the body through our sense organs as we move through the world.  None of this is directly available to chatbots; it must be translated into verbal patterns that approximate human experience.  The result is that a chatbot's output is never time-stamped.  It is not about the current experiential space, though it may seem as if it is.  There are other differences as well, so it is surprising that some humans are impressed by the limited accomplishments of this interactive verbal process.  Do they suppose that chatbots can replace meaningful human exchange?  Can chatbots engage patients in psychotherapy?  Negotiate contracts?  Interrogate criminal suspects?  There is no evidence that they can perform any of these tasks.  Their major contribution is showing that verbal exchange based on recombining verbal information is a weak simulation of one facet of human intelligence.

The more extensive question of computation and intelligence was raised by Turing (a summary: https://medium.com/@jetnew/a-summary-of-alan-m-turings-computing-machinery-and-intelligence-fd714d187c0b; full-text PDF: https://redirect.cs.umbc.edu/courses/471/papers/turing.pdf).

Turing explored the range of issues through an interrogation, by a human, of either a computer simulation or another human.  The analysis illustrates the fundamental issue of human intelligence: the ability to interact with another intelligence in real time, in a manner that can effectively influence the other, across a range of situations.  (Computers can play chess, Go, and Jeopardy effectively against humans, but that range of situations is very limited.)  Current technologies are far from this capacity, but apparently they can be used effectively to sell themselves in online opinion pieces!
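Turing's imitation game is, at bottom, a simple protocol, and it can be sketched in a few lines.  Everything below is a stand-in of my own devising: the "machine" with its canned replies, the "human" stub, and the crude judge are illustrations of the protocol's shape, not real systems.

```python
def machine(question):
    # Canned, pattern-matched replies with a generic fallback --
    # the shallow verbal association the post attributes to chatbots.
    canned = {"what is 2 + 2?": "4"}
    return canned.get(question, "That is an interesting question.")

def human_stub(question):
    # A placeholder for a person's varied, situated answers.
    replies = {
        "what is 2 + 2?": "four, obviously",
        "how was your weekend?": "rainy, so I stayed in and read",
    }
    return replies.get(question, "hmm, let me think about that")

def naive_judge(transcript_a, transcript_b):
    # A crude heuristic standing in for the interrogator's judgment:
    # flag as the machine whichever respondent repeats itself verbatim.
    def repeats(transcript):
        answers = [answer for _, answer in transcript]
        return len(set(answers)) < len(answers)
    if repeats(transcript_a) and not repeats(transcript_b):
        return "A"
    if repeats(transcript_b) and not repeats(transcript_a):
        return "B"
    return "undecided"

questions = ["what is 2 + 2?", "how was your weekend?", "what is the weather?"]
transcript_a = [(q, machine(q)) for q in questions]
transcript_b = [(q, human_stub(q)) for q in questions]
print(naive_judge(transcript_a, transcript_b))  # prints "A"
```

The point of the sketch is Turing's insight restated above: the test is interactive and open-ended.  A machine passes only if it can sustain influence over the interrogator across an unbounded range of questions, which is exactly where canned pattern-matching gives itself away.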

