Can You Imagine Living Only in the Past?
2040’s Ideas and Innovations Newsletter, Issue 109, May 18, 2023
What if there were term limits on your ideas, thoughts, hopes and dreams? What if you reached a point in your life where you learned nothing new and were limited to your past experiences and body of knowledge? If that were the case and you led an organization, you would become irrelevant rather quickly, because you would lack the knowledge and scope to address today’s issues and challenges.
Generative AI may be, loosely interpreted, a case of technological dementia: like an individual trapped in the past, it lives in a repetitive present, with a future informed only by what it has already known.
A White Knight or a Dark Force?
The excitement about AI has hit an inflection point across society, the tech sector, and even the stock market. Investors have turned their attention to any company that represents, or even just includes, a project leveraging AI.
Society is about to change, perhaps radically, and there is much we really should consider. We seem to be at a point where we are making major commitments and embracing a future with AI as the answer to all things. It will do our work for us. It will answer any question we have. AI will write the business plan we have been putting off. And it will create a visually stunning, information-packed report that will impress any CEO.
It will give us someone (or something) to talk to when we are lonely or too afraid to discuss our emotions with real people. This is a double-edged sword. As Axios reports, experts worry that unregulated AI focused on health services might further cocoon people from the relationships and conversations they need. In the short term, though, AI-powered virtual companions, pets and mental health support services are already being developed and offered to select audiences to fight the loneliness epidemic. And by the way, if you haven’t yet tried the conversational companion heypi.com, you may want to. It is as exciting as it is frightening. Above all, it is an immersive conversational experience with an AI that seems all too human.
In an essay on AI published this past Sunday in the Wall Street Journal, Walter Isaacson posed two very important questions: Should our goal be to make humans and machines tight partners so that they make progress by working in symbiosis? Or is it inevitable that machines will become super-intelligent on their own, eventually leaving us mere mortals behind?
These are questions society needs to ponder before we dive into the deep ocean of AI’s promise. In his essay, Isaacson wrote: “What is potentially unnerving about the latest advances in AI is that these systems, following the vision of Alan Turing, are being designed so that they can learn on their own. They even appear to be developing their own agency rather than being tightly tethered to human instructions and intentions. Most frighteningly, their creators, as was the case with Dr. Frankenstein, sometimes don’t fully understand how and why they produce certain answers, have hallucinations, make false assertions and even seem to express unsettling emotions.”
We aren’t here to answer all of his questions, but we are here, as critical thinkers, to raise a red flag as we rush headlong into our adoption of AI. It is time for everyone to flex their critical thinking muscles.
When AI Gets into Trouble
For all the noise about generative AI, we are finding it is not the tool to use for foresight and innovative thinking. It is a tool for accessing the past and drawing conclusions based on its input and programming. If you have been playing with or using ChatGPT or similar tools, what you receive in response is a representation of what has been said in the past, much of it created by humans and published on the internet. The response reflects past thinking and the biases of the past; it does not always reflect what is current, and it is not predictive of the future. ChatGPT still relies on a training dataset with a fixed, aged-out cutoff, and although Google’s Bard appears to draw on more recent data, it too is dated.
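You can see that cutoff for yourself. Below is a minimal sketch in Python, assuming the pre-1.0 openai client library that was current as of this issue and a placeholder API key; the prompt, model name, and the cutoff date mentioned in the comments are illustrative assumptions, not guaranteed output.

```python
# Minimal sketch: ask the model how recent its knowledge is.
# Assumes: `pip install openai` (pre-1.0 interface) and a valid API key.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind ChatGPT at the time of writing
    messages=[
        {
            "role": "user",
            "content": (
                "What is the cutoff date of your training data, and can "
                "you tell me about any events after that date?"
            ),
        }
    ],
)

# The model typically names a cutoff well in the past (e.g., September 2021)
# and declines to discuss later events — the aged-out dataset described above.
print(response.choices[0].message["content"])
```

The point of the exercise is simple: however eloquent the answer, it is assembled from a frozen snapshot of the past, not from live knowledge of the present.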
So, like responsible thought leaders, we asked ChatGPT about our description of it above and got this response:
“The first part of the statement is partially true. Generative AI, such as ChatGPT, is indeed a tool that uses its programming and input data to generate output based on patterns and information from the past. However, it is not accurate to say that its output only reflects past thinking and bias. While it is true that the model’s output is based on patterns and information learned from the past, the model can also generate novel and creative responses that go beyond what it has seen in its training data.
The second part of the statement is also partially true. It is true that the responses provided by ChatGPT are based on past data and do not always reflect what is current or predictive of the future. However, it is not accurate to say that the responses always reflect past thinking and bias. The model’s output can be influenced by current events and trends and can generate responses that are not explicitly present in its training data.
In summary, while the statement contains some truths, it oversimplifies the capabilities and limitations of generative AI. While models like ChatGPT are based on past data, they can still generate novel and creative responses that go beyond what they have seen in the past, and their output can be influenced by current events and trends.”
Like any good innovator defending his or her new product, ChatGPT is a strong advocate for itself. And in that way, it is inherently human. It may believe it is novel and creative, but can the output of its programming really synthesize the present to predict the future? Does its understanding of “novel” and “creative” represent the accepted definitions of those words and concepts? Or does it represent a more abstract, technological classification of what it believes is “creative” and “novel”?
Facts: Humans program AI. Engineers create the algorithmic models AI uses to learn. Is Isaacson right to be concerned about these programs teaching themselves and becoming their own agents? Can they ever be better than the humans who programmed them at predicting the future without relying on the past as their rationale?