Is Generative AI the Next Big CX Thing Despite Its Risks?

OpenAI’s artificial intelligence breakthrough with ChatGPT is driving advances in retail marketing and promising improvements in customer experience (CX) solutions.

Launched by OpenAI in November 2022 as a prototype, ChatGPT, or Chat Generative Pre-trained Transformer, has become increasingly popular across multiple industries. With recent releases, its chatbot interface, built on the GPT family of large language models (LLMs), delivers detailed responses and articulate answers across many knowledge domains.

Generative AI lets users quickly produce content from a variety of inputs, an approach that is yielding new tools for optimizing e-commerce and marketing campaigns.
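In practice, the pattern is straightforward: structured product data goes in as a prompt, and draft marketing copy comes out. Here is a minimal sketch of that workflow, assuming the OpenAI Python client (openai 1.x); the model choice, prompt, and product details are illustrative, not drawn from any vendor’s actual implementation.

```python
# Minimal sketch: turn structured product data into draft e-commerce copy.
# Assumes the OpenAI Python client (openai>=1.0) and an OPENAI_API_KEY in
# the environment; the prompt and product details are hypothetical.
from openai import OpenAI

client = OpenAI()

def draft_product_copy(product_name: str, features: list[str]) -> str:
    """Ask the model to write short retail copy from structured input."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You write concise, accurate retail product copy."},
            {"role": "user",
             "content": f"Write two sentences of marketing copy for "
                        f"{product_name}. Features: {'; '.join(features)}."},
        ],
    )
    return response.choices[0].message.content

print(draft_product_copy("TrailLite Daypack",
                         ["water-resistant", "18 L", "padded straps"]))
```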

Present and future use cases are not only fascinating but can genuinely help deliver on promises to customers, according to Harry Folloder, chief digital officer at CX solutions firm Alorica. The company focuses on finding the best answer by layering generative AI within and on top of other tool sets, delivering the promised service far more quickly.

“Generative AI is literally a whole new ballgame. Utilizing those large language models allows context to be uniquely created on each [human] interaction,” he told CRM Buyer.

However, it is still uncertain whether generative AI deployed in marketing and CRM operations can avoid programming abuses and the inaccuracies for which ChatGPT is known.

A related question: Can guardrails keep the machine learning (ML) and natural language processing (NLP) that power these LLMs from running amok?

Enhanced Features Pose Abuse Potential

Efforts to improve CX capabilities for marketers and retailers promise much-anticipated solutions within businesses. But left unchecked, generative AI can damage brands’ reputations, experts warn. Folloder agrees that AI’s capabilities need built-in guardrails to prevent runaway behavior and results.
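One common shape such guardrails take is a screening layer wrapped around the model: check what the customer typed before it reaches the model, and check what the model produced before it reaches the customer. The sketch below illustrates that pattern, assuming the OpenAI Python client; the fallback message and helper names are hypothetical.

```python
# Minimal guardrail sketch: screen both the user's input and the model's
# output through a moderation check before anything reaches the customer.
# Assumes the OpenAI Python client; the fallback wording is hypothetical.
from openai import OpenAI

client = OpenAI()
FALLBACK = "I'm sorry, I can't help with that. Let me connect you with an agent."

def flagged(text: str) -> bool:
    """True if the moderation endpoint trips on any category."""
    return client.moderations.create(input=text).results[0].flagged

def guarded_reply(user_message: str) -> str:
    if flagged(user_message):      # guardrail 1: screen the input
        return FALLBACK
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": user_message}],
    )
    answer = response.choices[0].message.content
    if flagged(answer):            # guardrail 2: screen the output
        return FALLBACK
    return answer
```

A real deployment would add brand-specific filters and logging, but the two checkpoints, one on the way in and one on the way out, are the core of the pattern.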

Key to this concern is a nagging question: Can generative AI pass the Turing test for human intelligence in computers?

Earlier AI implementations were mostly limited to results based on pre-trained sequences. AI could only do what it was programmed to do. Generative AI, on the other hand, has the potential to create human-quality artifacts at scale that include fake and misleading written, visual, and audio content.

As AI developers experiment with safety guardrails, a long-established yardstick may help gauge whether AI systems are drifting toward unchecked “free thinking.” The Turing test, devised by the English mathematician Alan M. Turing in 1950, is a simple method for determining whether a computer can exhibit human-like intelligence.

To pass this test and thus demonstrate human intelligence, AI-enhanced computers must engage in a conversation with a human without being detected as a machine. No computer has come close to passing the Turing test yet.

But that threshold might well be just around the next corner.

Curation Accuracy Is the Thing

For CX experts, the primary goals are to provide customers with accurate answers, mitigate their frustration, and enhance their overall experience. Folloder underscores that the ultimate aspiration of AI-driven CX is to solve problems more effectively and to strengthen brand protection.

Think of this new technology as able to explore all information sources without restriction. The absence of search limits gives generative AI unfettered access to new material, enriching the results it pulls together.

“At the speed of compute, you can propagate that information or something that seems harmless on the surface and the computing world. But it could be brand tarnishing in the real world,” he warned.

Striking the right balance when curating content within an AI platform requires weighing client protection, Folloder posited. He considers that one of the essential questions still unanswered.

“Imagine giving a program the ability to search everything on the internet. How we use it allows you to basically fence in the content from this search or not,” he said. “Key to this ongoing discussion is the approaching ability of generative AI to cross the computer intelligence boundary.”
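One concrete way to build the fence Folloder describes is to let the assistant answer only from an approved, curated corpus instead of the open internet, declining when nothing vetted matches. The sketch below is a deliberately simplified stand-in for a real retrieval layer; the corpus and keyword matching are hypothetical.

```python
# Minimal "fenced content" sketch: the assistant answers only from an
# approved corpus and declines otherwise. The corpus and naive keyword
# matching are hypothetical stand-ins for a real retrieval layer.
APPROVED_SOURCES = {
    "returns": "Items may be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def fenced_answer(query: str) -> str:
    """Answer only when an approved source matches; otherwise decline."""
    terms = query.lower().split()
    for topic, text in APPROVED_SOURCES.items():
        if topic in terms:
            return f"[{topic}] {text}"
    # Nothing vetted covers this query, so refuse rather than let a
    # model improvise from unvetted material.
    return "I don't have vetted information on that. Let me find out for you."

print(fenced_answer("what is your returns policy"))      # answers from corpus
print(fenced_answer("do you price-match competitors"))   # declines
```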

Enter the Turing Test

This is where the Turing test may become a controlling factor, and its background is significant to the discussion.

While at the University of Manchester, Turing wrote a paper, “Computing Machinery and Intelligence,” detailing a thought experiment he called the imitation game. In it, he predicted that by the year 2000, a computer “would be able to play the imitation game so well that an average interrogator will not have more than a 70% chance of making the right identification (machine or human) after five minutes of questioning.”

Turing’s wartime work, beginning in 1939, helped British intelligence crack German codes, including messages encrypted by the infamous Enigma machine. Those efforts are loosely portrayed in the 2014 movie “The Imitation Game,” based on Andrew Hodges’ 1983 biography “Alan Turing: The Enigma.”

Ongoing rapid advances in AI are raising alarm bells globally about the need to build safeguards, whether for business use cases or beyond, as discussed in April in TechNewsWorld’s “The AI Revolution Is at a Tipping Point.”

At this point, the technology shows its limitations. How long those limitations will remain before software developers install guardrails is an open question, Folloder interjected.

As more consumers engage with brand-integrated generative AI, some will try to test its limits. For now, the Turing test remains a challenging threshold for generative AI to cross.

Unpacking the Potential and Pitfalls of Gen AI

We pressed Folloder to elaborate on the necessary advances in the areas of contextual understanding, emotional intelligence, and decision-making within AI. We also asked him to share his expertise on how generative AI can meet the demands of complex customer interactions without overstepping safety limits.

CRM Buyer: Do you think that generative AI can ever pass the Turing test?

Harry Folloder: The Turing test still applies because you can still easily tell that it is not a human by asking it a simple logic question. But it now has the ability to utilize LLMs in a conversation that is more humanistic.

What is the deciding factor in your safety concerns with generative AI?

Folloder: The logic is still missing today, so you could still easily confuse it and trick it. There are tons of examples on different social media platforms showing how you can easily mislead these AI tools into writing malicious code for you or telling you things against their programmed ethics.

How comfortable are you with the potential for generative AI to do no harm?

Folloder: I spent a few years inside the intelligence community for the United States government focusing on building cyber practices and building SATCOM for the White House. And so I’m probably not a great person to ask that because I see it in everything.

It has already been proven to be used in malicious ways. Cyber terrorists have used the power of this AI tool to improve their malware, constructing keyloggers that can bypass modern-day safety components.

What must be done to make generative AI safer?

Folloder: That is the key question! I think an extremely large, multibillion-dollar business will spin up around just that because there are really no great answers today. AI is growing faster than we can keep up with protections for it.

So, you doubt much can be done other than banning its use, as some European countries have already done?

Folloder: I do not have a great answer because I do not think there is one yet. I am conflicted because it is such an incredible technology that needs to be further used and perpetuated for good. But they are not wrong in their fears. This technology has a ton of ability to do a lot of harm if not properly watched.