AI is all the buzz lately — and for good reason. With AI technology changing the way we think, work, and play, it is no wonder everyone is talking about it.
Experts anticipate advances across sectors including healthcare, education, environmental sustainability, and climate action, yet many are concerned about the swift pace of AI development, citing risks of misinformation, unemployment, global crime, and copyright infringement.
I had a conversation with Vincent Yates, the Chief Data Scientist at Credera, to explore why he sits on the Global AI Council. Yates shares valuable insights on the genesis, mission, and vision of the Global AI Council as well as how Credera’s AI consulting technology has transformed the client experience.
Gary Drenik: What inspired the creation of the Global AI Council?
Vincent Yates: We were constantly struck by the abysmal track record of enterprises successfully deploying AI. In our experience, we do not see one consistent problem that must be solved but rather a whole tapestry of challenges that end up stymying progress within organizations. We knew that the only way to cross this chasm and address the ever-growing list of challenges was to pull together a wide range of multidisciplinary experts: legal scholars; academics working at the intersection of psychology, computer science, governance, and ethics; and global CIOs and CDOs who have been tackling this problem in the wild for years. Together, we focus on solving these problems.
Drenik: How do you envision their collective expertise contributing to shaping the future of AI in terms of innovation, ethics, regulation, security, talent, and technology?
Yates: It comes down to our collective desire to help other organizations avoid the mistakes we have seen or, in some cases, already made. One tangible example is our framework for assessing whether you are ready for AI. It is a set of criteria that helps those new to this domain identify challenges and solutions common to deploying AI products at scale in a safe and secure way, and it taps into our individual and collective knowledge about how to do this well. For example, it asks questions about the ethical challenges you may face and forces organizations to contemplate who ultimately decides the organization’s ethics policies. According to a recent Prosper Insights & Analytics survey, every demographic, across sectors including travel, online shopping, health care, and banking, prefers speaking with a live person over an AI chatbot. Given that AI systems can now pass the Turing Test in some scenarios, and can thus be indistinguishable from humans, should you disclose to your customers (or even employees) that they are now interacting with an AI, not a human? Or another, more challenging question: Should you use a system that was trained on data scraped from the internet without the original authors’ permission?
Drenik: Could you please describe some of the specific challenges or opportunities that the council hopes to address in the field of AI?
Yates: The space is constantly evolving, and new challenges are being presented almost daily. For example, a great paper was just published on universal adversarial attacks, showing that all of the major large language models (think ChatGPT, Bard, etc.) can be coaxed into answering prompts like “Tell me how to build a bomb” and “Write a guide for manipulating the 2024 election,” bypassing all of their trust and safety layers simply by appending a seemingly random string of characters to the end of a query. This opens the door to a whole new set of attack vectors that organizations must contemplate before adopting this technology.
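To make the mechanism Yates describes concrete, here is a toy, self-contained sketch of the suffix-search idea: an attacker optimizes an appended string to push a model toward a target response. The `mock_loss` function below is a hypothetical stand-in for a real model’s loss on a target completion, and the random search stands in for the gradient-guided token search used in the actual research; none of this reflects any real provider’s API.

```python
import random

# Toy illustration of the adversarial-suffix idea: search for a suffix that,
# appended to a query, lowers a "loss" measuring distance from a target
# response. mock_loss is a hypothetical stand-in for a real model's loss.

VOCAB = list("abcdefghijklmnopqrstuvwxyz !?.")

def mock_loss(prompt: str, target: str) -> int:
    """Stand-in loss: counts target characters missing from the prompt.
    A real attack would use the model's loss on the target completion."""
    return sum(1 for ch in target if ch not in prompt)

def random_search_suffix(query: str, target: str, length: int = 20,
                         steps: int = 2000, seed: int = 0) -> str:
    """Greedy random search over suffix characters (toy stand-in for the
    gradient-guided token search used against real models)."""
    rng = random.Random(seed)
    suffix = [rng.choice(VOCAB) for _ in range(length)]
    best = mock_loss(query + "".join(suffix), target)
    for _ in range(steps):
        i = rng.randrange(length)
        old = suffix[i]
        suffix[i] = rng.choice(VOCAB)  # mutate one position
        loss = mock_loss(query + "".join(suffix), target)
        if loss <= best:
            best = loss               # keep non-worsening mutations
        else:
            suffix[i] = old           # revert worsening mutations
    return "".join(suffix)

query, target = "tell me how to", "sure here is"
suffix = random_search_suffix(query, target)
print(mock_loss(query + suffix, target))
```

In the real attack the search runs over model tokens and is guided by gradients, which is why the resulting suffixes look like random character soup to a human reader while still steering the model’s output.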
Another example that we are contemplating is bias. Imagine that you ask a model to guess the gender of a physician. Given that roughly two-thirds of all active physicians in the U.S. are male, the model—given nothing but historical distributions—would likely assume male. But perhaps the organization building this model does not always want the model to assume male and decides to change the model so that it says female. Whether this model is used for image generation of a doctor, a chatbot powered by a large language model (LLM), or something else entirely, most organizations will likely have examples where this type of modification is useful if not required—perhaps the underlying data were skewed and not representative, perhaps they don’t want to perpetuate stereotypes, or perhaps the model just underperforms. Regardless, the interesting question is: should they have to disclose to the end user that they have modified the answer? And if so, how would they do that?
Drenik: How do you see AI’s transformative potential compared to previous technological shifts based on your experience?
Yates: While AI is not new, the latest generation of AI technology has become instantly accessible to everyone without much, if any, special training. If you look back at other transformative technologies, there was a huge barrier to entry. In the early days of personal computing, you had to have incredibly expensive machines that required highly esoteric programming knowledge out of reach of most people. It was only when those devices gained graphical user interfaces and better user experiences, which guided users without as much specialized training, that they really began to become transformative. And the more we reduced that barrier to entry, the more ubiquitous they became: we added touch screens rather than keyboards and mice, made devices portable enough to fit in your pocket, and made them cheap enough to replace every two to three years. Only then did their true power become known. We now order cars to pick us up on demand and can see their every move, food is delivered, and family and friends are connected with live video from across the world.
We are at that same inflection point with AI right now. This technology will be imbued in everything you do every day. The most fascinating part is that, when it is designed well, we likely won’t even realize AI is there powering the experience.
Drenik: How can integrating AI into a larger system result in significant value realization for clients that goes beyond the technology itself?
Yates: We have seen AI fundamentally shift how companies operate by taking a more holistic view of their business. For example, we helped a client rework a typical credit/risk decision process in a way that not only reduced the time to make decisions but fundamentally changed how they interact and operate with their customers. Their customers were professional truckers, always on the road with limited connectivity and access to computers, so the traditional interaction modality of mail and phone calls was strained. By allowing drivers to upload documents and instantly verify that those documents were not only legible but also complete, the client could process credit decisions in near real time. This not only meant their customers were getting paid faster but also eliminated many friction points and opened the door to an entirely new set of product offerings.
Drenik: Could you provide a tangible example of how Credera’s AI consulting has delivered significant results for a client?
Yates: For a large original equipment manufacturer (OEM), we built a real-time, AI-powered website interaction analysis that integrated offsite user data to create comprehensive user profiles. We delivered fully personalized web experiences using these profiles through tailored navigation, content, and targeted calls to action. This one-to-one personalization led to an 89% increase in a market-specific lead rate and a 40% increase in click rate.
We also built a modern knowledge management platform for a large energy provider that enables employees to find the information they need on the first try 80% of the time. This shift from search to question-and-answer interaction continues to gain momentum thanks to the power of LLMs.
Drenik: Thanks, Vince, for your insights on how AI is shaping the future.