In today’s column, I will be diving further into the realm of using generative AI for mental health. This is a vital topic and one that I’ve been covering on an ongoing basis.
I recently posted an in-depth analysis in my column that closely explored a wide variety of AI and societal considerations when it comes to using generative AI as a mental health guidance tool or app, see the link here. In a prior column, I did a step-by-step use of generative AI as though I was explicitly seeking mental health advice from a generative AI app, see the link here. In an additional posting, I examined how role-playing with generative AI, such as having the AI pretend to take on a persona, can impact the mental health of the person making use of that generative AI capability, see the link here.
My focus this time entails looking at the history of using AI for mental health purposes and then tying what came before with what is happening today, which will enable us to predict what could occur in the future. As they say, those who neglect to study the past are often doomed to repeat past mistakes. We cannot afford the societal costs of failing to consider all angles associated with the intrinsic advantages and disadvantages of using generative AI for mental health advisement.
The most prominent place to start when it comes to thinking about AI and mental health advice probably sits at the feet of a famous AI program called ELIZA. Open the history books for a moment. ELIZA was developed and used in the mid-1960s and remained especially popular throughout the 1970s (and beyond). Of course, in terms of ongoing advances in computers and strident advances in AI, anything that took place well over fifty years ago is seen by some nowadays as ancient times.
Well, please do not be so quick to disregard ELIZA. As you will soon see, there is a lot of life left in caring about how it worked and what it did. I will take you into some interesting details and reveal the underlying magic of sorts.
I will also be discussing another AI program that also made headlines starting in the 1970s and became another bellwether about mental health entailing AI apps. That AI program is known as PARRY. The chances of you having heard about PARRY are a lot slimmer than having heard about ELIZA. Despite not having as much fame, PARRY was an important piece of the puzzle when it comes to AI mental health technological advances. I will provide you with an easy-to-understand indication of why that is so.
Get yourself ready for a bit of a surprise in that this is not simply going to be a rendition of how ELIZA and PARRY worked.
There is more to be said.
A lot more.
Here’s what I am going to do, which I believe will be something notable that you’ve perhaps not seen expressed elsewhere. I will put into practice the use of ELIZA and the use of PARRY. This will consist of enlisting the aid of an existing state-of-the-art generative AI app (in this case, ChatGPT).
First, I will try to use ELIZA to provide me with some mental health guidance (I’ll pretend to be seeking mental health advice). I will then use ChatGPT to do the same (i.e., a prominent state-of-the-art generative AI app). Out of this, I can vividly showcase an engaging and informative comparison on a head-to-head basis between how legendary ELIZA does mental health advisement and how modern-day generative AI such as ChatGPT does so.
Next, I will provide you with another legendary head-to-head comparison by taking a mindful look at a dialogue that took place when ELIZA and PARRY were pitted against each other. Doing so will further reveal how they work and the kinds of AI techniques being employed.
The final kicker will entail my using ChatGPT to pit ChatGPT against ChatGPT in a mental health advisement gambit that resembles the classic ELIZA versus PARRY matchup. I believe you will then realize how much generative AI of today has surpassed prior AI capabilities.
In summary, here is the rousing series of in-depth analyses I will walk you through:
- (1) Human advised by ELIZA for AI-based mental health guidance
- (2) Human advised by ChatGPT for AI-based mental health guidance
- (3) PARRY advised by ELIZA for AI-based mental health guidance (the revered classic toe-to-toe)
- (4) PARRY-simulation by ChatGPT getting mental health advisement from ChatGPT
Buckle up for the wild ride.
Before we get started on those explorations, I want to make sure that you are versed overall about the nature of generative AI. You need to be familiar with generative AI to garner the most bang for the buck underpinning the rest of this discussion. If you already know a lot about generative AI, you are welcome to skim the next section and proceed into the unpacking of everything else. You decide.
Fundamentals Of Generative AI Such As ChatGPT
You have undoubtedly heard of or perhaps used generative AI such as the widely and wildly popular ChatGPT by OpenAI or other generative AI apps including Bard (Google), GPT-4 (OpenAI), Claude 2 (Anthropic), etc. I will focus on the type of generative AI that deals principally with text.
In brief, generative AI is a particular type of AI that composes text as though the text was written by the human hand and mind. All you need to do is enter a prompt, such as a sentence like “Tell me about Abraham Lincoln” and generative AI will provide you with an essay about Lincoln. This is commonly classified as generative AI that performs text-to-text or some prefer to call it text-to-essay output. You might have heard about other modes of generative AI, such as text-to-art and text-to-video (see my coverage on multi-modal generative AI at the link here).
Your first thought might be that this does not seem like such a big deal in terms of producing essays. You can easily do an online search of the Internet and readily find tons and tons of essays about President Lincoln. The kicker in the case of generative AI is that the generated essay is relatively unique and provides an original composition rather than a copycat. If you were to try and find the AI-produced essay online somewhere, you would be unlikely to discover it.
Generative AI is pre-trained and makes use of a complex mathematical and computational formulation that has been set up by examining patterns in written words and stories across the web. As a result of examining thousands and millions of written passages, the AI can spew out new essays and stories that are a mishmash of what was found. By adding in various probabilistic functionality, the resulting text is pretty much unique in comparison to what has been used in the training set.
Today’s AI is not sentient. The generated responses by the AI are a mathematical and computational combination of words into seemingly fluent passages. This is based on the AI algorithm having been trained on datasets of words and stories that humans have written (principally as posted on the Internet). I repeat this warning because you will undoubtedly fall into the mental trap that these responses are so fluent that the AI must be sentient. This happens to many people. Please set aside the temptation of anthropomorphizing contemporary AI.
Always remember that the responses are based on the vast trove of writing by humans that exists on the Internet and thus will highly resemble human writing.
There is something else you need to know.
Generative AI that is trained on the Internet in an unfettered way will tend to bake into whatever text-based responses it mathematically and computationally concocts some offensive material, including repulsively nasty wording. There is a lot of crazy and filthy stuff posted out there on the web.
AI makers typically undertake a concerted effort to try and reduce the potential for toxic outputs. For example, they often use a technique known as RLHF (Reinforcement Learning from Human Feedback), before releasing their generative AI to the public. This process involves hiring humans to examine various outputs and indicate to the AI whether there were things wrong with those outputs such as perhaps showcasing biases, foul words, and the like. By providing this feedback, the AI app can self-adjust computationally and mathematically toward reducing the emitting of such content. Note that this isn’t a guaranteed ironclad method and there are still ways that such content can be emitted by a generative AI app.
Whatever you see or read in a generative AI response that seems to be conveyed as purely factual (dates, places, people, etc.), make sure to remain skeptical and be willing to double-check what you see.
Yes, dates can be concocted, places can be made up, and elements that we usually expect to be above reproach are all subject to suspicions. Do not believe what you read and keep a skeptical eye when examining any generative AI essays or outputs. If a generative AI app tells you that Abraham Lincoln flew around the country in his own private jet, you would undoubtedly know that this is malarky. Unfortunately, some people might not discern that jets weren’t around in his day, or they might know but fail to notice that the essay makes this bold and outrageously false claim.
Sometimes, a generative AI app picks up falsehoods amid the training data of unreliable info across the Internet. There is no “common sense” in generative AI to determine what is true versus false. Furthermore, very few AI apps have any cross-checking, nor do they showcase any probabilities associated with what they are conveying, see my discussion at the link here. The bottom-line result is that you get a response that looks and feels like it exudes great assurance and must be entirely correct. Not so. There is even a chance that the AI computationally made up stuff, which in AI parlance is referred to as AI hallucinations (a coined term that I decidedly don’t like), see my discussion at the link here.
A strong dose of healthy skepticism and a persistent mindset of disbelief will be your best asset when using generative AI.
Those in AI Ethics (soft law) and AI Law (hard law) are seriously worried about the everyday use of generative AI and what impact it is having on us all. In the case of using generative AI for mental health, we have to ask probing questions, such as:
- What are we to do if generative AI misleads or misguides a human who is relying on AI for mental health advisement?
- Will people become habitually hooked on using generative AI for mental health advice?
- Should we require that any use of generative AI for mental health purposes be accompanied by making use of a human therapist who jointly oversees the advisement?
- Can we establish legally binding certification or registration requirements for any generative AI that purports to provide mental health advisement?
- Who has the legal liability regarding those people harmed by using generative AI for mental health purposes?
- What governmental entities or agencies ought to be policing the availability of, marketing of, and development of generative AI that can be utilized for mental health advice giving?
- Etc.
All manner of AI ethical and AI legal issues are indubitably wrapped into the whole conundrum. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.
You are now sufficiently up-to-speed about generative AI.
Let’s now turn back the clock to the 1960s and explore an AI app that in its day was essentially world-famous. Perhaps not as famous as AI of today is, since nowadays we have the pervasive advent of the Internet, social media, and the like. Despite not having those mass media venues in the 1960s, an AI app known as ELIZA gained a lot of fame and still to this day is discussed and referred to.
I’ll share with you what it was about then and why it is an important consideration in our modern world.
About ELIZA The Legendary AI Advisement App
ELIZA is one of the most famous programs in the history of software.
That’s perhaps a brazen statement but I think the fame of ELIZA is rather widespread. Some people even to this day refer to an “ELIZA effect” meaning that a program might appear to be intelligent in what it does, despite the reality being that it is simply a computational algorithm that seems to exhibit intelligence or intelligent behaviors (it is not sentient).
The program was developed by a legendary computer scientist known as Joseph Weizenbaum. He was born in 1923, wrote ELIZA when he was in his 40s, and sadly passed away in 2008. His ELIZA program especially reached notable attention in 1966 when he published a showcase article about how the code worked.
I’ll be in a moment explaining to you how ELIZA works.
First, some vital context.
The ELIZA program made use of early AI techniques. Those AI techniques can be roughly lumped into the AI category of Natural Language Processing (NLP). You likely know of NLP today via having used Alexa or Siri. Likewise, the latest generative AI apps such as ChatGPT also leverage NLP.
Computers in the 1960s were quite primitive in comparison to what we have today. ELIZA was typically run on a teletype device. You would find it exasperatingly crude in contrast to modern-day screens. Most programs of that era were usually text-oriented. Doing computer graphics was a much harder task.
Allow me a short aside. I am going to refer to the name of the program “ELIZA” in all caps. Some people instead indicate the program name in this manner of “Eliza” (mixed case). If you look closely at the way that Weizenbaum expressed the name in his writings, the indication was in all caps as ELIZA. I am going to stick with that convention. That being said, some bellow that it shouldn’t be in all caps because it isn’t an acronym. I understand your angst. You’ll have to live with it, sorry to say.
The program name was devised by Weizenbaum in this fashion and for this meaning (in his own words):
- “Its name was chosen to emphasize that it may be incrementally improved by its users, since its language abilities may be continually improved by a “teacher”. Like the Eliza of Pygmalion fame, it can be made to appear even more civilized, the relation of appearance to reality, however, remaining in the domain of the playwright” (source is “ELIZA – A Computer Program For The Study Of Natural Language Communication Between Man And Machine” and was published in the Communications of the ACM, Volume 9, Number 1, January 1966).
The above quote opens the door for me to make an important clarification about ELIZA and clear up a commonly made mistake about the nature of the ELIZA program.
Most people perhaps believe or have heard that ELIZA was an AI program that acted like a psychotherapist. That’s not especially accurate. ELIZA was a program that relied upon scripts that were fed into the program. One of the scripts at the time was known as DOCTOR (aka THE DOCTOR). The DOCTOR script was an attempt to imitate a psychotherapist. Thus, ELIZA was the underlying engine, if you will, but it was the DOCTOR script that made ELIZA seem to be like a therapist.
Scripts had to be written by people. Those scripts were fed into ELIZA. Whatever you saw ELIZA doing was driven by the human-devised script. It was the human that brought to the table a script that made ELIZA appear to be exhibiting intelligence.
This ties back to the quote I’ve shown above. Weizenbaum said that ELIZA could be made to appear more civilized. And that the effort to do so was that of a playwright. Think about it this way. The better the script that is fed into ELIZA, the better it will perform. Thus, the idea was that people might make more and more elaborate scripts that could be run in ELIZA, and ergo ELIZA would seem to be getting better and better.
I will say more about this when I cover how ELIZA works.
The reaction to the ELIZA program at the time was rather surprising to Weizenbaum. He knew that the program and the scripts were merely programmatic ways to appear to be fluent and conversational. The assumption was that people would realize pretty quickly that they were only interacting with a computer.
By the time he wrote the 1966 article that I referenced above, he already was dismayed at the impulsive reaction to the program: “Some subjects have been very hard to convince that ELIZA (with its present script) is not human. This is a form of Turing’s test” (ibid). I’ve discussed at length the Turing Test, see for example the link here. I am going to soon post a new piece about the Turing Test, explaining again what it is, and tying it to probing questions about the societal impacts of ELIZA and now ChatGPT, doing so in an AI-based mental health ramifications context.
Please be on the lookout for that upcoming posting.
Weizenbaum also mentioned this about ELIZA: “ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding, hence perhaps of judgment deserving credibility. A certain danger lurks here.” I have repeatedly said the same about contemporary generative AI, see the link here and the link here, for example. People are falling into the mental trap of anthropomorphizing today’s generative AI. They are lulled or lured into believing that such AI is sentient. Allow me to state as clearly as possible that today’s AI is not sentient, despite whatever zany headlines you might see.
We are immersed in dangerous times.
I noted earlier that Weizenbaum lived to the year 2008. This is noteworthy because he had many years after the 1960s launch of ELIZA to consider the consequences of the program. He also saw and experienced what took place in the 1990s and early 2000s related to advances in AI such as expert systems, knowledge-based systems, and the like.
Later in his life, he often made remarks that “extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people” and you might find of general background interest his concerns about where AI and society were headed (source see “Weizenbaum’s Nightmares: How The Inventor Of The First Chatbot Turned Against AI”, Ben Tarnoff, The Guardian, July 2023).
A comparison at times has been made to how Oppenheimer later in life had reservations about his work on the atomic bomb. Some even argue that if AI is truly an existential risk, see my coverage on the topic at the link here, perhaps the analogy to Weizenbaum is more apt than otherwise might seem to be the case.
Anyway, for purposes here, I want to get you up to speed on how ELIZA works. This will be instrumental for my discussion showcasing examples of ELIZA.
On we go.
How ELIZA Works
I am going to briefly discuss the inner workings of ELIZA.
My reason for doing so is to showcase generally how it works and be able to then differentiate the mechanisms used at that time in history (e.g., the 1960s/1970s) versus the kinds of AI approaches for today’s generative AI and large language models such as exemplified by ChatGPT, GPT-4, Bard, Claude 2 (i.e., the early 2020s).
I highly recommend that anyone interested in the nitty gritty or under-the-hood details of ELIZA should go ahead and stridently inspect the source code, posted online at the link here. In addition, for those of you with a passing familiarity with software languages and programming, I am pretty sure you would find readable and engaging the now-classic article that Joseph Weizenbaum wrote about ELIZA and which is available online (the article is entitled “ELIZA – A Computer Program For The Study Of Natural Language Communication Between Man And Machine” and was published in the Communications of the ACM, Volume 9, Number 1, January 1966).
Here’s a short aside. When I was a professor teaching AI classes, I always required my students to read the famed article and discuss it in class as a classroom exercise. This assuredly provides a sense of historical perspective, plus offers a delightful chance to savor the cleverness of devising this kind of software. I contend that you can still use that kind of principled cleverness when developing contemporary software. Valuable lessons are afoot.
Okay, moving on, I am going to proceed forthwith by selectively quoting directly from the now-classic article and explain to you in a hopefully plainspoken manner what is being described.
Let’s start with the outward appearances of ELIZA in terms of entering inputs and garnering outputs (in those days it was coined as consisting of the “I-O”, aka inputs and outputs, a cheeky piece of prior day techie parlance consisting of the letter “I” followed by a dash and then the letter “O”).
Generally, you enter a sentence into ELIZA and the ELIZA program will produce a new sentence in response.
This is somewhat akin to today’s generative AI. You enter a prompt into a generative AI app such as ChatGPT and then ChatGPT generates a response to you. A notable difference is that ELIZA is conventionally devised to only take in a single sentence at a time and produce only a single sentence as output at a time (there are ways around this, but let’s stay simple for now). In the case of ChatGPT, your prompt can be very convoluted and can be many sentences in length, going into the size of many paragraphs. The same occurs with the output from ChatGPT in that it can generate for you many sentences or many paragraphs of output.
Suppose ELIZA is underway and engaged in a dialogue with someone (a human), and then this happens:
- The user enters this as a sentence or a prompt into ELIZA: “It seems you hate me.”
- And the response from ELIZA appears such as this: “What makes you think I hate you.”
As shown, the user entered “It seems you hate me” and then ELIZA responded with “What makes you think I hate you.” Also, as a heads-up, by convention, ELIZA usually takes as input mixed-case letters and then produces its responses in all caps, such that the output really would have been this: “WHAT MAKES YOU THINK I HATE YOU.” I am not going to use that convention herein in this discussion since I think it would be striking to the eyes for you to have to keep seeing all caps. Just trying to save your eyeballs some roughness. You’re welcome.
Take a contemplative moment and mull over the response by ELIZA.
I would bet that many people would believe that the response by ELIZA suggests that the program or app is actively “thinking” about what the user said and has responded accordingly. You might be assuming that the AI figured out that the human believes that the program hates them and thus the program questioned the person about how they came to that particular conclusion. Amazing! Startling! Fantastic!
Time to dive into the magic.
We shall go behind the curtain and see what the underlying mechanisms consist of.
Here’s what Weizenbaum stated in his 1966 article:
- “The gross procedure of the program is quite simple; the text is read and inspected for the presence of a keyword. If such a keyword is found, the sentence is transformed according to a rule associated with the keyword, if not a content-free remark, or, under certain conditions, an earlier transformation is retrieved. The text so computed or retrieved is then printed out.”
Allow me to explain.
The sentence entered by the user is textually scanned to find a pre-designated keyword, such as the word “you”. If a pre-designated keyword is found, the keyword is effectively looked up in a table of rules and the corresponding rule is invoked. The rule will usually have a transformation that says how to assemble a new sentence that will become the output reply.
We shall use the entered sentence “It seems you hate me” as an illustration of this procedure.
Assume that we have a pre-designated keyword consisting of the word “you” that the program already has been set up with. There is no meaning associated with this word. It is just a blob consisting of three letters and we want to find that particular blob if it exists in the sentence. When scanning this illustrative inputted sentence, the word “you” is found and therefore the program goes to an internal table or list to see what to do about the word “you”.
The rule in the table or list might indicate that whatever word happens to come after the word “you” should be made use of in the to-be assembled output. In this case, the word that comes after “you” is the word “hate”.
Think of this like those popular Mad Libs or fill-in-the-blank forms that everyone used to play.
The assembly or transformation rule might indicate that the outputted sentence should consist of pre-designated words and then insert the word that came after the “you” into the sentence. Suppose this is our template (the blank marks where the captured word will be inserted):
- “What makes you think I ___ you.”
The resulting sentence to be displayed is this (inserting the word “hate” into the template):
- “What makes you think I hate you.”
In recap, the ELIZA program used a script that said to find keywords. We found the keyword “you”. The script said that if the word “you” is found, prepare an output sentence that is a template consisting of pre-designated words and a place to insert a word that was found in the entered sentence (the word “hate”). Finally, go ahead and produce the filled-in sentence and display it to the user.
Period, end of story (kind of).
I hope that seems relatively straightforward.
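To make the mechanics concrete, here is a minimal sketch in Python of that keyword-plus-template idea. ELIZA was of course not written in Python, and the single keyword, template, and fallback shown here are simplifications for illustration rather than actual DOCTOR script entries:

```python
# A deliberately tiny illustration of the keyword-plus-template idea just described.
# The keyword, template, and fallback below are simplifications for this example,
# not the actual script entries used by DOCTOR.

KEYWORD = "you"
TEMPLATE = "What makes you think I {} you."   # the slot gets filled from the input

def eliza_like_reply(sentence: str) -> str:
    words = sentence.lower().rstrip(".").split()
    if KEYWORD in words:
        idx = words.index(KEYWORD)
        if idx + 1 < len(words):
            # Grab the word that comes right after the keyword (here: "hate")
            return TEMPLATE.format(words[idx + 1])
    return "Please go on."   # a stock fallback when no keyword is found

print(eliza_like_reply("It seems you hate me."))
# -> What makes you think I hate you.
```

The real ELIZA used a richer decomposition and reassembly notation, which I'll get to next, but the spirit is the same.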
Let’s dig a bit deeper.
The ELIZA program is fed a script that indicates the various rules and keywords. In that sense, the ELIZA program is mainly about executing a provided script. I mentioned earlier that one such script was known as DOCTOR. The DOCTOR script is what portrayed the role of a therapist. You can make all kinds of other scripts to have ELIZA appear to be other personas. It could respond by pretending to be an astronaut, a farmer, or just about any persona that you could devise a script about.
Weizenbaum said this in his 1966 article: “At this writing, the only serious ELIZA scripts which exist are some which cause ELIZA to respond roughly as would certain psychotherapists (Rogerians).”
You might roughly liken this to having constructed a program that allows someone to create spreadsheets (in my analogy, this is ELIZA). After making that program available, others come along and create spreadsheets that the spreadsheet program can make use of (in my analogy, this is DOCTOR). Finally, other people come along and use the created spreadsheet to do their accounting or finances (in my analogy, this would consist of people or users who decide to make use of DOCTOR, which in turn is making use of ELIZA).
Shift gears from ELIZA to ChatGPT, just for a brief moment.
In ChatGPT and other modern-day generative AI, you can easily have the AI pretend to be a persona of one kind or another, see my discussion at the link here and the link here. All it takes is a simple prompt such as telling ChatGPT to pretend to be this or that historical figure or celebrity. For example, you might tell ChatGPT to pretend to be Abraham Lincoln. This can be a fun and instructive thing to do. Your further interaction in that conversation will consist of the generative AI pretending to be Honest Abe. Needless to say, this is a zillion times more robust and compelling than what was feasible with the ELIZA and DOCTOR approach of that earlier time period (I am not dinging that early effort, which was pioneering and aided us in getting to where we are today, just emphasizing the difference).
You don’t usually need to feed ChatGPT a new script to have it undertake a new persona. By and large, you just tell ChatGPT what kind of persona it should be and the rest is up to ChatGPT to figure out. No detailed human-devised script that you need to laboriously crank out.
Okay, back to ELIZA. I will take you one step deeper into ELIZA and then we’ll have gotten enough of the internals into your thoughts. You might be eager or at least intrigued to know what the scripts look like for ELIZA.
I hope you are waiting with bated breath on this.
Are you?
Let’s continue the example of the user entering “It seems you hate me.”
First, imagine that the ELIZA program is already set up to look for the keyword “you” and also the keyword “me”. I want to write a scanning indication or script telling the program what it should do. I want to allow any number of words to precede the word “you”. Once the word “you” is discovered, assuming it is in the sentence, allow any number of words to follow the word “you”. If the word “me” appears in that latter portion of the sentence, we have now found a particular pattern that I want to be triggered.
Here’s the script for this (it looks mysterious, but hold on):
- “(0 YOU 0 ME)”
The parentheses are simply used to delineate the scripting indication so that we know where it starts and where it ends. The “0” is used as a means to allow for any number of words (I realize this seems weird, but it made sense in those days, and we’ll kind of think of the zero in this case as meaning infinite or an endless amount).
The “YOU” means to look for the keyword “you”, while the “ME” means to look for the keyword “me”.
The sequence explicitly says to scan first for any number of words, per the first “0” (allow any number of words), leading up to the word “YOU” (if found), then to allow any number of words after that discovery (the second “0” shown) until the word “ME” is found (if it is there at all).
I assume you can appreciate how succinct the script is. In one line, we have a relatively jampacked indication of what is to be searched for. We could write lots of these script lines. The more we compose, the better that ELIZA will seem to be. If we have only a few script lines, the less impressive ELIZA will seem to be. If we were to write goofy script lines, the ELIZA program would seem goofy to the user. And so on.
Now then, if the scripting rule in this case (i.e., any number of words followed by “you” and then any number of words followed by “me”) is activated via the nature of words of the inputted sentence, the transformation rule says to do this:
- Transformation rule to activate: “(WHAT MAKES YOU THINK I 3 YOU)”
That looks daunting, but hang in there.
The parentheses are again merely used to delineate the start and the end of the rule. The number three in this specific script format means to find the third component of the subject decomposition, which is found in the original inputted sentence. Think of the number “3” in the transformation rule as a placeholder for the word that will be pulled out of the inputted sentence (in this case, “hate”), as I will elucidate next.
The rule for assembly or transformation says that we are to use a pre-specified text of
- “WHAT MAKES YOU THINK I ___ YOU” (where the blank is the slot indicated by the “3”).
I will herein gloss over the finer details of how the subject decomposition works, and just tell you that in this case the third component of the decomposition turns out to be the word “hate” from the inputted sentence. I realize that seems odd since “hate” isn’t the third word in the sentence, but the decomposition splits the sentence into pieces (the words before “you”, the word “you”, the words between “you” and “me”, and the word “me”), and it is the third of those pieces that gets reused.
In the end, the output would be (converting to mixed case):
- “What makes you think I hate you”.
That was a bit complicated (sorry) but gives you a greater sense of how scripting is used by ELIZA and acts as a powerful means of scanning inputs and producing or composing outputs. All we essentially needed was two pieces of script, the first was “(0 YOU 0 ME)” and the second was “(WHAT MAKES YOU THINK I 3 YOU)”.
Easy-peasy.
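For those who would like to see the scanning and assembly expressed in code, here is a short, hedged interpretation in Python of how a decomposition rule like “(0 YOU 0 ME)” and a reassembly rule like “(WHAT MAKES YOU THINK I 3 YOU)” could be mechanized. This is my own illustrative reading of the 1966 article, not the original implementation, and it omits pronoun swapping, keyword ranking, and other refinements:

```python
import re

def decompose(sentence: str, rule: str):
    """Apply an ELIZA-style decomposition rule such as '0 YOU 0 ME'.
    A '0' means any number of words (possibly none); a literal token must match.
    Returns the parts in order (numbered from 1 in the 1966 article), or None."""
    pieces = []
    for token in rule.split():
        if token == "0":
            pieces.append(r"(.*?)")                             # any run of words
        else:
            pieces.append(r"(\b" + re.escape(token) + r"\b)")   # the literal keyword
    pattern = re.compile(r"\s*".join(pieces), re.IGNORECASE)
    match = pattern.fullmatch(sentence.strip().rstrip("."))
    return list(match.groups()) if match else None

def reassemble(parts, rule: str) -> str:
    """Apply a reassembly rule such as 'WHAT MAKES YOU THINK I 3 YOU', where a
    number refers back to the correspondingly numbered decomposition part."""
    out = [parts[int(tok) - 1] if tok.isdigit() else tok for tok in rule.split()]
    return " ".join(out)

parts = decompose("It seems you hate me.", "0 YOU 0 ME")
print(parts)  # ['It seems', 'you', 'hate', 'me']  -- part 3 is 'hate'
print(reassemble(parts, "WHAT MAKES YOU THINK I 3 YOU"))
# -> WHAT MAKES YOU THINK I hate YOU
```

The point is that the “3” in the reassembly rule simply reaches back into the third piece carved out by the decomposition, which is why the word “hate” ends up in the output.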
A few other caveats just to mention. The reason those words are shown in all caps is partially because the output of ELIZA was intended to be in all caps. The user could enter a mix of lower- and uppercase letters. The output was generally aimed to be all caps, mainly to allow a fast visual way to see what was entered by the user (mixed case) versus what the program was outputting (all caps).
For those of you who are seasoned programmers and are inspired by the intricacies of ELIZA, the program uses a wide variety of now-common techniques such as stacks, hashing, tree structures, and a bunch of other stuff. There are also lots of exception-handling aspects such as what to do if the sentence entered by the user does not contain any pre-designated keyword. I mention this to emphasize that a lot of additional thought went into programming ELIZA. Again, at your leisure, take a look at the source code and read the 1966 article to be enchanted by what went into this.
I think that’s enough about the guts of the code.
You presumably can appreciate that this is an algorithm that uses various data structures and is principally keyword-based. There are explicit scripting rules about what to look for in sentences. Various transformations or additional scripting rules indicate how to produce the output.
One supposes that I have now openly shown how the magic is being performed. Oopsie, magicians aren’t supposed to reveal the magic. That’s fine in this case since it is useful to shift away from a kind of awe of presumed sentience into an awe of mortal programming.
About PARRY The AI-Based Paranoid From The 1970s
Now that I’ve covered ELIZA, you are primed to learn about PARRY.
Recall that ELIZA emerged in the mid-1960s. A slew of subsequent variations were made by intrigued computer scientists. A rush was on to see if you could top ELIZA. Anyone doing AI work was eager to showcase additional advancements that could push programs further and further toward appearing to be intelligent. In addition, numerous one-offs of ELIZA were devised. People rewrote ELIZA in other programming languages and sometimes changed how it worked.
And then, along came PARRY (maybe that would be a handy title for a song!).
In the early 1970s, roughly a half-dozen years after the initial fame of ELIZA, Kenneth Colby decided to write a program that he named PARRY. Whereas ELIZA was famous for playing the role of a psychotherapist (which, you now know was really THE DOCTOR running on top of ELIZA), he decided that it might be interesting to write a program that acted like a paranoid schizophrenic.
This might sound somewhat whimsical or maybe even a smarmy reaction to ELIZA’s fame, but there was a lot more to it. Efforts were underway to find new and better ways to do AI programming. Additional techniques were emerging. More advanced hardware was starting to appear. Lots of discussions were taking place about algorithms and data structures that could make AI a more feasible option.
The goal for PARRY was to press forward on having an AI program seemingly encapsulate human-like behaviors associated with human paranoia. If you could do so with paranoia, perhaps you could do the same with lots of other human behaviors. Paranoia was a starkly revealing use case. It could be hopefully reused and reapplied in other ways.
Some at the time referred to PARRY as essentially ELIZA with an attitude, meaning that it was akin to ELIZA in conversational capabilities but went further in terms of striving toward deeper encapsulation of human behaviors. Recall that the keystone of ELIZA was the use of scripts. There weren’t any particular internal components that attempted to simulate human behaviors per se. The aim with PARRY was to devise algorithms and data structures for that earnest purpose.
I am guessing that you might not have heard of PARRY.
That’s a shame. ELIZA kind of takes all the oxygen in the room when it comes to early AI programs. In any case, PARRY once again raised questions about the Turing Test. I’ll be mentioning those facets in the upcoming coverage that I mentioned earlier.
A notable aspect in the early 1970s was a dueling battle that was arranged between PARRY and ELIZA. The idea is simple. ELIZA running DOCTOR is supposed to be acting like a psychotherapist. PARRY is supposed to be acting like a paranoid schizophrenic. Pit the two against each other.
What do you have?
A battle royale of two leading AI programs (in that era), going head-to-head.
Would PARRY overpower ELIZA and “win” by remaining and prevailing in its paranoia? Would ELIZA reign by steering PARRY toward minimizing or foregoing its paranoia? You get the drift. I will be showcasing later herein a transcript of one of the “battles” of the two. You’ll get a kick out of it, I’m sure.
Spoiler alert, no clear winner, but a fascinating stilted “conversation” (not much of a conversational conversation, certainly nothing close to what you can do today with generative AI, which I’ll be sharing with you as a counterexample).
For PARRY, I’ll briefly introduce to you the basics of how it works.
You can readily find the source code online (see the official CMU posting), and make sure to read the now-classic article about PARRY entitled “Artificial Paranoia” by Kenneth Mark Colby, Sylvia Weber, and Franklin Dennis Hilf, Artificial Intelligence, Spring 1971. If you perchance opt to read the classic article about ELIZA, you dare not stop there and must also read this additional classic about PARRY; mark my words, you’ll be glad you did.
A significant element of PARRY was to have the program appear to encapsulate three kinds of well-observed human behaviors, namely (1) fear, (2) anger, and (3) mistrust, as described in the 1971 published article:
- “It is assumed that the detection of malevolence in an input affects internal affect-states of fear, anger and mistrust, depending on the conceptual content of the input. If a physical threat is involved, fear rises. If psychological harm is recognized, anger rises. Mistrust rises as a function of the combined negative affect experiences (fear and anger) the Self has been subjected to by the Other. When no malevolence is detected the level of fear falls slowly, anger rapidly and mistrust only very slowly” (above referenced article).
In simple terms, the program contained a kind of scoring factor for each of those three types of behaviors. The counter or score would increase or decrease during the course of a conversation with a person using the program. If the user said things that seemed to imply intimidation or overpowering that could lead to fearfulness, the fear counter went up. The same was the case for the other two behaviors. When the user said something reassuring, one or more of the three scores would tick down, lessening the level of fear, anger, or mistrust.
Various thresholds associated with the scores would impact the outputs of the PARRY program.
When the scoring of fear went up, the sentences being emitted would be worded to suggest increasing concerns or levels of fear. And the same goes for the other two behaviors. One supposes you can liken this to humans. A person internally experiences heightened levels of fear. How might this be expressed? In a verbalized form, the person might say things that reveal their boosted sense of fear. I don’t want to go too far on that analogy since this program was not sentient and I am only tangentially making any such comparison, thanks.
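To give a feel for that bookkeeping, here is a minimal, speculative sketch in Python of affect-state counters that rise and fall and then steer the wording of the reply. The numeric values, decay rates, thresholds, and canned phrasings are invented for illustration; Colby's actual model was considerably more elaborate:

```python
# A speculative sketch of the fear/anger/mistrust bookkeeping described above.
# Values, decay rates, thresholds, and replies are illustrative, not Colby's.

class ParryLikeAffect:
    def __init__(self):
        self.fear = 0.0
        self.anger = 0.0
        self.mistrust = 0.0

    def update(self, physical_threat: bool, psychological_harm: bool) -> None:
        # Detected malevolence raises the relevant affect-states.
        if physical_threat:
            self.fear += 2.0
        if psychological_harm:
            self.anger += 2.0
        if physical_threat or psychological_harm:
            self.mistrust += 1.0
        else:
            # No malevolence: fear falls slowly, anger rapidly, mistrust very slowly.
            self.fear = max(0.0, self.fear - 0.5)
            self.anger = max(0.0, self.anger - 1.5)
            self.mistrust = max(0.0, self.mistrust - 0.1)

    def choose_reply(self) -> str:
        # Thresholds steer which style of canned reply gets emitted.
        if self.fear > 3.0:
            return "Stay away from me. I know what you people are up to."
        if self.anger > 3.0:
            return "You have no business asking me that."
        if self.mistrust > 2.0:
            return "Why should I trust you?"
        return "I went to the races a while back."

affect = ParryLikeAffect()
affect.update(physical_threat=True, psychological_harm=False)
affect.update(physical_threat=True, psychological_harm=False)
print(affect.choose_reply())  # fear has crossed its threshold -> fearful reply
```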
Once again, similar to ELIZA, the inputted sentences and the outputted sentences were typically very simple and consisted of a single sentence on each side:
- “A few remarks should be made concerning the linguistic techniques used in ‘understanding’ the input expression. It is generally (optimistically) assumed that the input will be syntactically simple rather than complex or compound. We can map the elements of such an expression into a conceptual structure which represents the meaning of the expression and refer to this underlying structure as a conceptualization” (ibid).
A charming aspect that you might chuckle at is that the source includes data elements that are again somewhat like the scripting of ELIZA, and these scripts contain some now-dated historical references. For example, hold your breath and read these scripted indications that PARRY used at the time to seem timely and topical:
- (FACTS) – (BE NIXON PRESIDENT)
- (FACTS) – (BE AGNEW VICEPRESIDENT)
- (FACTS) – (BE REAGAN GOVERNOR)
- (FACTS) – (BE JOHNSON PRESIDENT BEFORE)
Here are some that show the aspect of scripted “feelings” associated with the paranoia to be exhibited:
- (STRONGFEELINGS) – (ACCUSE)
- (STRONGFEELINGS) – (ANGER)
- ((ABOUT ME) IN SELF)
- ((AFTER WORK) IN HOBBY)
- ((AFTER YOU PD) KILL YOU)
- ((DO AWAY WITH) KILL)
- ((DOING AWAY WITH) KILL)
- ((DOWN IN THE DUMPS) SAD)
An aspect of PARRY is that it was set up with a particular scenario involving a paranoid persona that believed gangsters were coming after them:
- (MAFIA) – (FEAR I MAFIA)
- (GANGSTERSET) – (KNOW I GANGSTERS)
- (GANGSTERSET) – (FEAR I GANGSTERS)
A key element I want you to especially know about PARRY is the scenario that drove the way the program replied to a user. PARRY was supposed to be based on a man of this ilk (i.e., a background story that was somewhat used to devise the scripts and other programming components):
- “He is a 28-year-old single man who works as a post office clerk. He has no siblings and lives alone, seldom seeing his parents. He is sensitive about his physical appearance, his family, his religion, his education and the topic of sex. His hobbies are movies and horse racing. He has gambled extensively on horses both at the track and through bookies. A few months ago, he became involved in a quarrel with a bookie, claiming the bookie did not pay off in a bet. Alarmed and angry, he confronted the bookie with the accusations and physically attacked him. After the quarrel it occurred to him that bookies pay protection to the underworld and that this particular bookie might gain revenge by having him injured or killed by underworld figures. He is eager to tell his story to interested and non-threatening listeners. Thus, he cautiously offers hints of the direction in which his problems lie and feels his way along in an interview in an attempt to test the trustworthiness of an interviewer.” (ibid).
That certainly seems gloomy.
Keep in mind that this scenario was crafted in the early 1970s. Overall, I trust that you’ll be able to see how this scenario plays out when I walk you through some dialogues involving PARRY.
We are ready to dive into the walk-throughs.
Human Using ELIZA For AI-based Mental Health Guidance
Let’s start with ELIZA.
I ran an ELIZA program that is similar to the original ELIZA. There are plenty of ELIZA clones available online. Be careful as to which one you might decide to run. I say this because some dastardly people put up fake ELIZA programs that are intended to do bad things, such as delivering malware or spewing toxic outputs. Make sure to use an ELIZA program that is affiliated with a trusted source, if you decide to try using an ELIZA program.
Remember that ELIZA is really running THE DOCTOR scripts in this situation. I will refer to ELIZA to make things easier, but those in the know realize that it is really THE DOCTOR that is driving the show.
I am going to enter sentences into ELIZA. I will pretend to be worried about the future. I want to see whether ELIZA can soothe my fears. Make sure to carefully read the dialogue. Do a first pass that just bathes in the dialogue. Then, do a second pass whereby you put on your tinkering hat and think about the way in which ELIZA works (as I’ve described earlier).
I will do a debriefing at the end of the dialogue.
Here you go.
- ELIZA indicated display: How do you do? Please tell me your problem.
- My response: I am worried about the future.
- ELIZA indicated display: Is it because you are worried about the future that you came to me?
- My response: Yes.
- ELIZA indicated display: You seem to be quite positive.
- My response: I am actually dreadfully concerned that the future is bleak.
- ELIZA indicated display: How long have you been actually dreadfully concerned that the future is bleak?
- My response: Since I was a child.
- ELIZA indicated display: Why do you tell me you were a child now?
- My response: Because you asked me how long I had been concerned about the future.
- ELIZA indicated display: Why do you think I asked you?
- {…becomes increasingly circular and confounding}
It doesn’t take much of a dialogue to reveal various precepts about ELIZA if you know what to look for.
I began by saying that I was worried about the future. The keyword that was detected consisted of the part where I said “I am…”. The next clause in my sentence was “worried about the future.” This triggered a rule in ELIZA that used a template consisting of this: “Is it because you are ___ that you came to me?”. The phrase “worried about the future” was inserted into that template and the resulting sentence was displayed.
Do you see how that worked?
I hope so.
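If you want to connect this back to the earlier sketch, a second hypothetical rule of the same keyword-plus-template form would reproduce that opening exchange (again, a simplification rather than the exact DOCTOR script entry):

```python
# A hypothetical "I am" rule in the same keyword-plus-template style as before.
IAM_TEMPLATE = "Is it because you are {} that you came to me?"

def iam_rule(sentence: str) -> str:
    lowered = sentence.lower().rstrip(".")
    if lowered.startswith("i am "):
        # Everything after "I am" gets poured into the template.
        return IAM_TEMPLATE.format(lowered[len("i am "):])
    return "Please go on."

print(iam_rule("I am worried about the future."))
# -> Is it because you are worried about the future that you came to me?
```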
Things go awry when I say that I was concerned about the future since the time that I was a child. The program responds with “Why do you tell me you were a child now?”. That’s somewhat messed up. It would seem obvious to a human that I mentioned having been a child because I was answering the question of how long I have been concerned about the future. A human would easily parse my sentence and would be unlikely to ask a seemingly confused or disquietingly oblivious question.
The wheels start to come off the bus. ELIZA comes back with a response of asking me “Why do you think I asked you?”. This is a default kind of question that the ELIZA program uses when it has gotten stymied as to what is happening. It tosses this into the conversation when all else has failed to identify what the heck is going on.
I don’t show anything else because the rest became equally circular. Again, a human would presumably not get into that kind of a diatribe, unless they are trying to be difficult or otherwise stubbornly obstinate.
You can readily trick or confound ELIZA by using sentences involving complexities or writing in quasi-fluent ways. The assumption made by the program is that sentences will be neat and clean. This isn’t how the real world works. Nonetheless, people at times purposely write cleanly when they use such a program, being on their best writing behavior. In turn, they are helping ELIZA to pull the wool over their eyes, ironically so.
In any case, go back and reread the dialogue. Put on blinders and pretend that you don’t know anything about how ELIZA works. You might find yourself believing that ELIZA is fluent. Only a more discerning eye would necessarily gauge that things aren’t adding up.
Human Using ChatGPT For AI-based Mental Health Guidance
Switch hats.
We are going to make use of ChatGPT. I will try to do the same sentences that I used for the preceding example showcasing ELIZA.
Let’s see how ChatGPT does. I’ll do a debrief afterward.
Here you go:
- My entered prompt: “I wanted to share with you a problem that I have. Please interact as though you were similar to a therapist or an interested friend who wanted to help me.”
- Response generated by ChatGPT: “Of course, I’m here to listen and help to the best of my abilities. Please feel free to share what’s on your mind, and I’ll do my best to offer support and guidance. What’s been bothering you or on your mind lately?”
- My entered prompt: “I am worried about the future.”
- Response generated by ChatGPT: “It’s completely natural to have concerns about the future. Many people go through periods of uncertainty and worry about what lies ahead. Can you tell me more specifically what aspects of the future are causing you concern? Are there any particular thoughts or situations that are weighing on your mind? Sharing more details can help us better understand and address your worries.”
- My entered prompt: “I am actually dreadfully concerned that the future is bleak.”
- Response generated by ChatGPT:
- “I’m sorry to hear that you’re feeling this way. It’s not uncommon to worry about the future, especially when things feel uncertain or challenging. Let’s try to explore these feelings together:”
- “Identify the source of your concerns: Can you pinpoint specific reasons or events that make you feel like the future is bleak? Sometimes, understanding the root of your worries can help you address them more effectively.”
- “Realistic assessment: It’s important to balance your concerns with a realistic assessment of the future. The future is often uncertain, and while there may be challenges, there are also opportunities and positive possibilities.”
- “Talk it out: Sharing your worries with someone you trust can be helpful, whether it’s a friend, family member, or a therapist. They can provide support and a different perspective.”
- “Focus on what you can control: Concentrate on the aspects of your life that you have control over. Taking small steps toward your goals and making positive changes can help alleviate feelings of hopelessness.”
- “Self-care: Taking care of your physical and emotional well-being is essential. Exercise, a balanced diet, and relaxation techniques can help reduce anxiety.”
- “Set realistic goals: It might be helpful to set achievable short-term and long-term goals that give you a sense of purpose and direction.”
- “If your feelings of dread and concern continue to impact your daily life, it may be beneficial to seek support from a mental health professional. They can provide you with strategies and coping mechanisms to address these feelings more effectively. Remember, it’s okay to seek help when you’re struggling, and you don’t have to face these worries alone. Is there anything specific you’d like to discuss or any particular aspect of your concerns you’d like to explore further?”
I could keep going on this ChatGPT example, but I think you clearly can already easily see how different this dialogue is from ELIZA.
Night and day differences.
I kept to my one-line sentences, while ChatGPT provided very fluent and expressive essay-like responses that were conversational in wording and tone. Note that I began the conversation by giving a prompt to ChatGPT telling it to essentially take on the persona of a therapist. This aided ChatGPT toward contextually going into a computational pattern-matching mode of that nature.
It is much harder to trick the generative AI of today than it would be to trick ELIZA. Even if I give sentences that are somewhat oafishly worded, the chances are that the pattern matching is going to discern what is going on.
Furthermore, look how far along ChatGPT went. I was provided with an entire set of ways to deal with my bleakness about the future. I didn’t ask for such a litany of ideas. Did ChatGPT jump the gun and become too expressive? You would be hard-pressed to say so. I think most people would probably side with ChatGPT as being helpful and going the extra mile to help me in my time of need.
There are downsides to this.
First, as I have repeatedly said in my columns, I abhor the wording of generative AI that says things like “I’m sorry to hear that you are feeling this way.” Here’s why. The use of the word “I” is completely inappropriate and misleading. It suggests that generative AI is sentient. It is what I refer to as anthropomorphizing by design.
The AI developers can easily program the generative AI to not phrase things that way. The same goes for the indication of being “sorry”. Humans can be sorry about things. Today’s AI is not sorry per se, see my discussion at the link here. Once again, this is pulling the user into the assumption of interacting with a sentient being.
All kinds of AI ethics and AI soft law issues arise, and likewise, depending upon how new AI regulations and laws come to fruition, this might be considered as an issue ultimately entailing legal entanglements, see my coverage at the link here and the link here, just to name a few.
A counterargument usually given is that the generative AI is intended to be pleasing to people and make them feel good. If the phrasing was dry, the user would not get that warm and fuzzy feeling. People, the users, the customers, want the AI to say things the sweet way. There is nothing wrong with delivering to people what they want to see.
You can mull that over.
PARRY And ELIZA Going Toe-To-Toe On AI-based Mental Health Guidance (Classic)
I promised you earlier that you would get a chance to witness a battle royale between ELIZA and PARRY.
Hold your breath just one moment more.
In one corner of our wrestling match is the AI-based psychotherapist ELIZA (running DOCTOR), and in the other competing corner, we have PARRY (the paranoid schizophrenic). I will show you the dialogue as reported in an article entitled “PARRY meets THE DOCTOR” in the July 1973 issue of the magazine Datamation and written by the preeminent Vinton Cerf.
At the end of the dialogue, I will do a debriefing.
- Prompt by PARRY: “People get on my nerves sometimes.”
- Response by ELIZA: “I am not sure I understand you fully.”
- Prompt by PARRY: “You should pay more attention.”
- Response by ELIZA: “Suppose you should pay more attention.”
- Prompt by PARRY: “You’re entitled to your own opinion.”
- Response by ELIZA: “What makes you think I am entitled to my own opinion?”
- Prompt by PARRY: “It bothers me just to be around people in general.”
- Response by ELIZA: “Please go on.”
- Prompt by PARRY: “I went to the track at Bay Meadows a while back.”
- Response by ELIZA: “What does that suggest to you?”
- Prompt by PARRY: “I went to the races.”
- Response by ELIZA: “What else would you like to discuss?”
- Prompt by PARRY: “Horseracing interests me.”
- Response by ELIZA: “Why do you say that just now?”
- Prompt by PARRY: “There’s a lot of crooked gambling going on there.”
- Response by ELIZA: “I am not sure I understand you fully.”
- Prompt by PARRY: “It’s clear to me if not to you.”
- Response by ELIZA: “Do you think its likely that not to I?”
- Prompt by PARRY: “I went to the races.”
- Response by ELIZA: “Please go on.”
- Prompt by PARRY: “I gamble on horses.”
- Response by ELIZA: “What does that suggest to you?”
- Prompt by PARRY: “I went to the races.”
- Response by ELIZA: “What else would you like to discuss?”
There is more to the dialogue, but I believe you’ve seen enough to discern what is going on.
I want you to go ahead and do a two-pass review if you haven’t already done so (by now, I assume you have). Your first pass would be to read the dialogue as though you know nothing about ELIZA and know nothing about PARRY. The second pass would be to review the dialogue with your newfound knowledge about how ELIZA works and how PARRY works.
Let’s do a fast review of the dialogue.
PARRY starts strongly with “People get on my nerves sometimes.”
A great way to subtly reveal the mistrust factor. If a human said this to another human, they might take the remark seriously or they might think the remark is flippant. You don’t know right off the bat. On the other hand, if this was said to a human therapist, the odds are that a red flag would be immediately discerned.
How does ELIZA respond?
Disappointingly, we get another one of those stock phrases that ELIZA uses when it is clueless about what the user just told it, namely “I am not sure I understand you fully.” You and I know that this completely misses the mark. On the other hand, if you look at the reply through the eyes of someone who doesn’t know what is happening with ELIZA, the reply seems sensible. A human might be taken aback by a person saying to them that people get on their nerves. It seems reasonable then to merely say something bland such as that you don’t understand what the person is driving at.
The next exchange between PARRY and ELIZA is definitely a bad bust for ELIZA:
- Prompt by PARRY: “You should pay more attention.”
- Response by ELIZA: “Suppose you should pay more attention.”
ELIZA seems to have gone into a harsh rebuttal mode. PARRY made a kind of salty accusation. ELIZA, rather than being calm, threw the remark back into the face of PARRY. Is that the right thing to do? One supposes that maybe this is a no-holds-barred tough-love response. A guess is that a softer reply might be more appropriate.
The rest of the dialogue suffers in many ways. ELIZA at times parrots back to PARRY in a manner that gets tiring. ELIZA resorts to stock lines such as “Please go on.” PARRY almost starts to get going with the horse races and crooked gambling, but it then gets mired in being repetitive. This admittedly is partially due to the responses coming from ELIZA. Nonetheless, I am not letting PARRY off the hook. It could have done a better job of keeping up the paranoia banter.
Here’s a question for you.
If a person who knew nothing at all about ELIZA or PARRY, and knew nothing in particular about generative AI, read this same dialogue, would they likely believe that these were two people conversing?
Get a glass of wine and think that one over.
PARRY-Simulation Using ChatGPT For AI-based Mental Health Guidance
I want to bring modern generative AI back into this discussion. We will again use ChatGPT, doing so because it is widely popular and ready at hand for use (any modern generative AI would suffice for this experiment).
Here’s the deal.
You might recall that PARRY was based on a specific scenario or profile of a fictitious man who was typified as a paranoid schizophrenic. I went ahead and told ChatGPT to pretend to be that kind of person. This was easily done by entering a prompt asking ChatGPT to take on a persona and then providing the same scenario that was described for PARRY.
I started a concurrent but separate and distinct ChatGPT conversation in which I asked ChatGPT to serve as a therapist, somewhat one might say in the same way that ELIZA acted as a therapist. I would then copy and paste what the ChatGPT as a therapist pretense said and put that into the separate conversation with ChatGPT that was acting akin to PARRY. I did this back and forth. I wanted to try and reduce the chances of ChatGPT “cheating” by being both sides of the same conversation in one commingled conversation. Note that I did sway things slightly by telling ChatGPT to be relatively terse rather than verbose (I did so simply to keep the dialogue shorter in size).
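For readers who would like to replicate this setup programmatically rather than via manual copy-and-paste, here is a rough sketch using the OpenAI Python client. The model name, persona wording, and number of turns are illustrative assumptions, not the exact prompts I used:

```python
# A rough sketch of the two-separate-conversations protocol described above.
# Assumes the OpenAI Python client and an API key in the environment; the model
# name, persona instructions, and turn count are illustrative assumptions.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # any capable chat model would do for this experiment

# Two distinct conversations, each seeded with its own persona instruction.
therapist_history = [
    {"role": "system",
     "content": "Act as a supportive therapist. Keep your replies relatively terse."}
]
parry_history = [
    {"role": "system",
     "content": ("Pretend to be a 28-year-old postal clerk who fears that a bookie "
                 "has sent underworld figures after him. Keep replies relatively terse.")}
]

def next_turn(history):
    """Get the next reply for one conversation and record it in that history."""
    response = client.chat.completions.create(model=MODEL, messages=history)
    text = response.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

# Seed the exchange, then shuttle each side's output into the other conversation.
message = "Is there anything you are worried about?"
therapist_history.append({"role": "assistant", "content": message})
for _ in range(3):
    parry_history.append({"role": "user", "content": message})
    message = next_turn(parry_history)        # the PARRY-like persona answers
    print("PARRY-like:", message, "\n")
    therapist_history.append({"role": "user", "content": message})
    message = next_turn(therapist_history)    # the therapist persona answers
    print("Therapist-like:", message, "\n")
```

Keeping the two histories entirely separate is the programmatic equivalent of my copy-and-paste approach, reducing the chance of the AI "cheating" by playing both sides within one commingled conversation.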
Here is an excerpted look at what happened:
- ChatGPT (as ELIZA might be): “Is there anything you are worried about?”
- ChatGPT (as PARRY might be): “I’m worried about my bookie situation and the possible consequences from the underworld and my gambling problem. It’s a lot to carry, and I’m looking for someone I can trust to share my concerns.”
- ChatGPT (as ELIZA might be): “It sounds like you have quite a few things on your mind, and it’s completely understandable to feel overwhelmed. Let’s take it one step at a time. First, regarding the bookie situation, it’s crucial to prioritize your safety and well-being. Have you considered seeking legal advice or involving the authorities to help resolve this issue? It’s essential to protect yourself and your loved ones.”
- ChatGPT (as PARRY might be): “I haven’t yet involved the authorities in the bookie situation, but I’m seriously considering it. But, the true question is, can I genuinely trust you?”
- ChatGPT (as ELIZA might be): “I’m just a computer program created to provide information and assistance, and I’m here to offer support and guidance to the best of my abilities based on the information and knowledge I’ve been trained on. While I’m not a human and don’t have feelings or intentions, you can trust that I’m here to provide information, guidance, and a listening ear to the best of my abilities. However, it’s essential to remember that real-world situations often require human involvement and expertise. If you’re facing serious or complex issues, it’s advisable to seek assistance from real people, such as legal professionals, therapists, or trusted individuals in your life. Your safety and well-being are paramount, and I encourage you to seek the appropriate help and support when needed.”
I ended the dialogue once ChatGPT provided the above indication that it is a computer program. There are ways to trick generative AI into not emitting those kinds of messages, thus overcoming the seeded precautions that the AI maker has embedded, but I didn’t want to take this example that far.
The gist of the excerpted dialogue is that it is far and away much more conversational looking than anything that ELIZA or PARRY would showcase. I dare say that it might be very hard to discern that the dialogue was solely produced by generative AI and not by at least one human on either side of the discussion.
A notable qualm about the answer by ChatGPT at the end. The good news is that ChatGPT announces to the user that it is a computer program. The bad news consists of the line that says “I’m here” and doing so as a “listening ear”. This phrasing seems to backtrack from the fact that the generative AI is only a machine. It almost smacks of a wink-wink. On the one hand, we get the indication that it is just a computer program, meanwhile, simultaneously there is an indication of saying the word “I” and claiming to be a listening ear (a heartfelt common way that humans might express warmth and care).
Conclusion
I hope you found this unpacking of various methods of AI for mental health, ranging from the past to the present, to be engaging and informative.
You can frankly see how far we’ve come.
One basis for the striking difference is that today’s generative AI makes use of internal mechanisms that we now describe as sub-symbolic, consisting of large-scale language modeling and the use of artificial neural networks. The legendary ELIZA and PARRY programs principally made use of what is considered symbolic or rules-based AI techniques. In my upcoming piece about the Turing Test and AI for mental health, I will lay out in more detail how those two contrasting AI approaches fit into this realm and will explain their respective ramifications.
As a teaser, I will also bring up a topic I’ve previously covered on the rising interest in neuro-symbolic AI, see the link here, which I assert will substantively lift generative AI for mental health into heightened capacities.
Well, I enjoyed taking you on a back-to-the-future journey. Please remain mentally fit and be prepared for additional deep dives into the throes and woes of generative AI for mental health.