
How do you create (and destroy) value with generative AI?

Benoît Mazzetti
March 20, 2024
5 min read

An experiment conducted by BCG (Boston Consulting Group), with the support of Harvard Business School, the MIT Sloan School of Management, the Wharton School of the University of Pennsylvania, and the University of Warwick, demonstrated that generative AI will be a powerful source of competitive advantage in the years to come for companies that manage to use it to address their operational challenges.

A paradoxical scientific study

In this novel scientific experiment, it was shown that when generative AI (in this case, OpenAI's GPT-4) is put in the hands of consultants, its capabilities are such that they can sometimes turn against them. In fact, it is not always easy to determine when AI is (or is not) suited to your needs, and misjudging this can have serious consequences. Applied to the wrong tasks, generative AI can lead to significant destruction of value.

Nevertheless, the potential for improving performance is staggering overall. When generative AI was used to design new products, a task involving ideation and content creation, approximately 90% of participants improved their performance. What's more, they converged on a level of performance 40% higher than that of people working on the same task without GPT-4.

Finally, the conclusions describe a paradox: participants seem to be wary of the technology in areas where it can provide considerable added value, and to trust it too much in areas where it is not competent. In addition, the study shows that the relatively uniform output of generative AI can reduce the diversity of thought in a group by 41%.

The exact extent of the effects observed may legitimately vary in other contexts. But these results highlight a crucial decision-making moment for managers in every sector: they need to think critically about the work their organization does and about which tasks stand to benefit, or suffer, from generative AI. This means approaching its adoption as a change management effort covering data infrastructure, rigorous testing and experimentation, and an overhaul of existing talent sourcing strategies. Perhaps more importantly, leaders will need to continually reassess their decisions as the frontier of generative AI capabilities expands.

The crucial issue of value

The results of the study clearly show that adopting generative AI is a double-edged sword. Participants who used GPT-4 for creative product innovation obtained results 40% better than those of the control group (those who completed the task without GPT-4). On the other hand, for business problem solving, using GPT-4 resulted in performance 23% lower than that of the control group.

The creative product innovation task required participants to develop new products and go-to-market plans. The business problem solving task required participants to identify the root causes of a company's difficulties based on performance data and interviews with executives. Quite counterintuitively, current generative AI models tend to be more successful at the first type of task: it is easier for LLMs to come up with creative, new, or useful ideas based on the vast amounts of data on which they have been trained. The margin of error is greater, on the other hand, when it comes to weighing nuanced qualitative and quantitative data to answer a complex question.

Even more strikingly, many participants who used GPT-4 for this task accepted the tool's erroneous results. It is likely that GPT-4's ability to generate persuasive content contributed to this outcome. Many of them confirmed that they found the justification GPT-4 offered for its results very convincing (even though, as an LLM, it produced the rationale after the recommendation, rather than deriving the recommendation from the rationale).

In what context should generative AI be used?

The close link between performance and the context in which generative AI is used raises an important question about training: can the risk of value destruction be mitigated by helping people understand how well the technology is suited to a given task? It would be rational to assume that if participants knew GPT-4's limitations, they would know not to use it, or would use it differently, in those situations.

However, the negative effects of GPT-4 on the task of solving business problems did not disappear when the subjects received an overview of how to use GPT-4 and the limitations of the technology.

Even more astonishingly, they performed significantly worse on average than those who did not receive this brief training before using GPT-4 for the same task. This counterintuitive result could reveal overconfidence among participants in their own ability to use GPT-4, precisely because they had been trained.

About StoryShaper:

StoryShaper is an innovative start-up that supports its customers in defining their digital strategy and developing tailor-made automation solutions.

Sources: StoryShaper, BCG.

