It is a mild understatement to say the world has taken note of the release of ChatGPT, Google’s Bard, and Microsoft’s announcement and release of their Artificial Intelligence (AI) tool Copilot, all accompanied by greater awareness of AI in general. Most of the focus is on Large Language Models (LLMs) like ChatGPT and Bard. Much has been written about whose jobs are at risk, how AI is changing education, what it means to be human, and the inevitable Terminator references to SkyNet. We’ve even seen a group of scientists and entrepreneurs publish a warning that AI technology is advancing beyond our ability to comprehend the implications and that adoption should be slowed down. (Although at least one of those entrepreneurs is hastily developing their own AI tool, so the call seems a bit self-serving.)
But what is it about these LLMs that seems so unnerving at times? Let’s explore a few things and see if we can determine what all the fuss is about.
Five Reasons AI Freaks People Out
- LLM tools respond conversationally. It seems like we are engaged with an intelligent entity because it ‘sounds’ like one. That’s quite different from our prior experiences with other software tools where the responses were terse or presented in a list of options.
- They’re fast. Building on the conversational style noted above, the response comes fully formed very quickly – one hallmark of human genius.
- They’re confident. Well, they sound confident. Even when caveats are presented, the responses are authoritative in tone. In fact, the caveats themselves sound authoritative.
- They’re a little creepy. The initial releases were a bit sloppy and could generate responses that seemed – inappropriate. This was quickly contained by adding internal guardrails to the tool, but those initial ‘conversations’ were a bit creepy.
- They make us uneasy. Even though we know it’s a machine, it doesn’t seem like a machine. And the machines we can identify with were mostly bad robots from sci-fi/horror movies. So, we’re a little edgy.
The common denominator seems to be the anthropomorphizing of a piece of software. It seems, or is, more human-like than any other piece of software we’ve interacted with, and that is enough for us to project onto it the full range of human strengths and weaknesses, flaws and ambitions.
Two Questions and Two Examples to Put AI in Perspective
We’re struck by two questions, which may be rhetorical or easily answered by actual computer scientists (we’re marketers, after all). These two questions seem to diminish the novelty of this iteration of AI, a technology that has been in development for decades.
- While this machine-to-human communication seems fast to us, how fast is it compared to machine-to-machine (or computer to computer) communication? We’ll guess it is a lot slower.
- Is this really revolutionary? Or does it seem that way because we lay people haven’t been paying attention?
We’re drawn to compare these LLM tools to Google’s “semantic search” capabilities, which have been around for several years. Semantic search handles queries like “I feel like Chinese food”.
In this case, Google needs to comprehend that you are hungry, that you are not saying you have a medical condition with symptoms that make a person the same consistency as Chinese food, and you probably want the locations of nearby Chinese restaurants that are open. There’s quite a bit of background intelligence that is necessary to produce the list you are looking for. And those results come very quickly.
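To make the idea concrete (this is not Google’s actual implementation, and the vectors below are invented for illustration), one common approach to semantic matching is to map both queries and candidate intents into vectors and pick the intent whose vector is closest to the query’s:

```python
from math import sqrt

# Toy, hand-made "embeddings" -- real systems learn these from large datasets.
INTENT_VECTORS = {
    "find_restaurant": [0.9, 0.1, 0.0],   # hunger / dining intent
    "medical_symptom": [0.0, 0.2, 0.9],   # health-complaint intent
}

QUERY_VECTORS = {
    "I feel like Chinese food": [0.8, 0.2, 0.1],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm

def classify(query):
    vec = QUERY_VECTORS[query]
    # Choose the intent whose vector is most similar to the query's.
    return max(INTENT_VECTORS, key=lambda intent: cosine(vec, INTENT_VECTORS[intent]))

print(classify("I feel like Chinese food"))  # -> find_restaurant
```

The “background intelligence” is baked into the vectors: a query about craving food lands near the dining intent and far from the medical one, so the hungry reading wins without any hand-written rules.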
We think a comparison to chatbots is appropriate, too. While some chatbots are merely a decision tree-like tool with canned responses, others have more AI behind them. At least the technology has more intelligence behind it. We believe most lay people don’t draw a distinction, viewing them more as annoying (or infuriating) self-service, cost-shifting impediments to solving a problem. And they are not without precedent (e.g. multi-layered automated attendant prompts when calling your bank or the cable company), so people are desensitized to the tool.
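The “decision tree with canned responses” style of chatbot mentioned above can be sketched in a few lines. This is a minimal illustration, with all node names and replies invented; there is no language understanding here, just lookups:

```python
# Each node holds a canned reply plus a mapping from user choices to the next node.
TREE = {
    "start": ("Are you asking about billing or technical support?",
              {"billing": "billing", "technical": "tech"}),
    "billing": ("Please have your account number ready. An agent will assist you.", {}),
    "tech": ("Try restarting your device. Did that solve the problem?",
             {"yes": "done", "no": "escalate"}),
    "done": ("Great! Goodbye.", {}),
    "escalate": ("Transferring you to a human agent.", {}),
}

def run(node, choices):
    """Follow the tree along a fixed list of user choices, collecting replies."""
    replies = []
    while node:
        prompt, branches = TREE[node]
        replies.append(prompt)
        node = branches.get(choices.pop(0)) if (branches and choices) else None
    return replies

for line in run("start", ["technical", "no"]):
    print(line)
```

Every possible conversation is authored in advance, which is exactly why these bots feel like the multi-layered phone menus they descend from; an LLM-backed bot generates its replies instead of looking them up.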
The answer may be that while people perceive these AI LLM tools as revolutionary, they are more accurately considered evolutionary products.
Where Is All This Headed?
Putting aside the rise-of-the-machines scenarios, we think the public-facing LLM tools will specialize by industry or profession. The underlying technology will continue to evolve and the tools themselves will “learn” to give better responses through repeated interactions with users. But the usefulness of an LLM tool for lawyers, for example, will be more quickly improved by having a focused version that interacts primarily with users in the legal profession.
This specialization is also a way to monetize the tools (let’s not forget about the need for monetization). The tools themselves could become subscription-worthy if they become essential parts of functioning in an industry. And while we are not forgetting things, let’s also not forget that placing advertising remains the primary driver in the online world. While we don’t expect an ad to pop up inside the tools necessarily, you can bet that what you are searching for is still going to be fed back into the online advertising machine.
AI LLMs and Digital Marketing
We have finally come around to our area of practical expertise – marketing. We think there are two primary ways the LLM tools will impact marketers and their clients in the near term. The first we already mentioned above, namely as a better targeting filter for advertising and other content placement.
The second, which has all the copywriters worried, is around content generation. It may be possible that focused versions of the tools are developed that can generate copy good enough to make one consider replacing a really good writer. But that is far different from the tool being able to generate passable high school essays. We see really good writers using the tool to enhance their already good output, but frankly not as a replacement – yet.
Related are issues regarding originality, plagiarism, and who really owns the content that is generated by these LLM tools. When you start thinking in this direction, it is easy to imagine lawsuits galore and case law that will take years to generate established precedent. What if it turns out that the LLMs are generating similar content based on similar queries from different marketers on behalf of their clients? Certainly the effectiveness of that content will be diminished. (We work a lot in Financial Services, which has long been hampered by compliance-driven, cookie-cutter content, so we’ve seen how original content helps our clients break free of their competition.) What if content that was presented to clients as “original” turns out not to be? Do they get a refund? Do they sue for breach?
Ultimately, all these questions will be worked out, as they have been with every new technology that enters the commercial space. Until then, we believe the prudent marketer, and their prudent clients, would be wise not to dive too deeply, too quickly into this potential morass related to content generation.
What Does Bard Say About This?
It seems appropriate to pose a related question to an LLM and see what it says. Since Bard is a Google product, we thought we’d query it regarding the thoughts of Google engineer and futurist Ray Kurzweil.
Here’s a compilation of the replies:
Ray Kurzweil, a futurist and Google engineer, has long argued that the pace of technological change is accelerating. He believes that this acceleration will lead to a “technological singularity” in the 21st century, when artificial intelligence will surpass human intelligence and usher in a new era of human history.
Kurzweil’s predictions about the pace of technological change have been controversial, but they have also been influential. Many people believe that he is correct in his assessment of the future, and that we are on the cusp of a period of unprecedented technological change.
There you go – an original, human-composed article about Marketing, AI, and LLMs that includes a passage from Bard, an LLM, with (we hope!) appropriate attribution. Live long and prosper*.
*that last bit may belong to Paramount, but it’s not original…
JQLaCorte is a full-service digital marketing firm for the Professional Services and Manufacturing industries. To learn more, please email email@example.com, call 203-571-8067, or visit www.jqlacorte.com.