Meta’s new GenAI is theatrical. Here’s how to make it valuable

Meta just launched its AI assistant on Instagram, WhatsApp, Messenger and Facebook as if firing a missile onto a corporate battlefield, and the media played along, framing it as a ‘battle with ChatGPT’. But this chatbot war will not make it into the history books as a real war between machines. When Meta, OpenAI, Microsoft and others make seemingly competitive moves, they are not fighting for control of some source of great power. The contest is largely theatrical: a struggle for attention and status.

After all, the field of generative AI is largely an exercise in appearances. Instead of demonstrating concrete, proven value, it promotes itself primarily with grand visions of limitless possibilities.

But while chatbots like Meta’s AI Assistant and ChatGPT are easier to use than other forms of AI, they are harder to use well – that is, in a way that generates measurable value for a company. Other types of AI may not enjoy the same user-friendliness, but some, such as predictive AI, often deliver higher returns than genAI.

Given today’s abundance of sizzle without enough steak, consternation is growing. The Washington Post reports that the AI hype bubble is deflating, journalists “struggle to find examples of transformative change”, investors are pushing back against genAI’s over-promises, and many others agree.

Even studies intended to demonstrate the value of genAI sometimes find that it falls short. Stanford researchers, bullish on the technology, studied the productivity of teams using genAI to solve business problems, such as scaling B2B sales, and compared them with teams working without an AI assistant. Much to their surprise, the researchers found that using genAI led to more average ideas – partly because the data it is trained on naturally reflects common, inside-the-box thinking, and partly because people using genAI may become overly trusting of it and exert less cognitive effort themselves.

But these researchers remain optimistic and suggest new guidelines for how best to use this new technology.

Benchmark GenAI to determine its concrete value

With genAI, we usually don’t know what its return on investment is, because genAI projects simply neglect to benchmark. But if you don’t measure value, you won’t pursue value. Only by assessing the gains (or lack thereof) will you get the feedback needed to steer the project to success. Companies can measure those gains as efficiencies, such as time savings or increased productivity.
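
To make this concrete, here is a minimal sketch in Python of what such a benchmark could look like. All figures are hypothetical, invented purely for illustration; they are not drawn from any of the companies mentioned in this article.

# Hypothetical benchmark: a pilot team using genAI vs. a control team
# doing the same work. All figures below are made up for illustration.
control_hours = [40, 46, 38, 52, 44]  # hours per campaign, control team
genai_hours = [28, 30, 25, 33, 29]    # hours per campaign, genAI-assisted team

def mean(values):
    return sum(values) / len(values)

time_savings = 1 - mean(genai_hours) / mean(control_hours)
print(f"Average time savings: {time_savings:.0%}")

# The same idea works for productivity: issues resolved per hour
# before and after a genAI rollout (again, made-up numbers).
rate_before = 2.1
rate_after = 2.4
print(f"Productivity uplift: {rate_after / rate_before - 1:.0%}")

The arithmetic is trivial; the discipline is the point: choose a metric before the pilot, measure both groups, and let the result decide whether the project continues.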

One factor that slows down benchmarking is that if you stress-test your project, you may be seen as a party-pooper. The AI hype can be intoxicating, and nothing kills the buzz of fantasizing about immense future potential faster than a sober reading of today’s value. When you measure performance, you might find that genAI is merely good rather than great, and that’s a far cry from the dream of breakthrough AI.

But a proven victory is better than a dream. Consider the rare standard set by Ally Financial, the largest fully digital bank in the US. It reported that “marketers were able to reduce the time it took to produce creative campaigns and content by as much as 2-3 weeks, resulting in an average time savings of 34%.”

Or follow the example of an unnamed Fortune 500 software company studied by MIT Sloan. By using a conversational assistant, the company’s customer support team increased the number of issues resolved per hour by 14% – and the increase was even larger, 34%, for entry-level and lower-skilled workers. Such benchmarking is unusual. Other companies such as Airbnb, Intuit and Motorola report that they are just beginning to measure the value of genAI, but have yet to report what they find.

Such successes stem in part from the judicious application of genAI. For example, it can often generate useful first drafts for routine tasks, such as certain customer support activities. In contrast, it tends to produce generic content that is too obvious or clichéd to help with higher-order writing tasks, such as journalism, where it is better used for editing or preliminary research (as long as the facts are checked manually). In general, any attempt to use genAI involves an ad hoc, experimental process. We live in the Wild West of genAI, which is untamed and unpredictable. Its value is not guaranteed.

And yet, even if your genAI initiative proves valuable, you’ll likely find that it still doesn’t deliver the revolutionary victories that industry leaders want us to believe are possible. The current wave of hype continues a long tradition of AI theater. AI has always traded on the seductive but hopelessly vague word “intelligence.” AI in general, and genAI in particular, taps into an ingrained excitement born of decades of science fiction and breathless AI speculation. And genAI offers an immediate appeal that is arguably broader than that of any other technology: anyone can interact with it intuitively in English or other languages (although critics say it should support more). But the common narrative that the technology is well on its way to general, human-level capabilities is unfounded.

The best defense against genAI disillusionment is to benchmark business performance. Instead of indulging in the heady story of machine intelligence, focus on credible use cases that deliver concrete value – and measure that value to keep your projects on track.