Technology

Gen AI’s Most Underrated Strategic Risk? It’s Likely Inaction

Editor’s Note: Tom Davenport will help keynote our upcoming May 29 Masterclass on AI strategy, a half-day of deep learning from three world-class experts, built exclusively for CEOs and senior executives keen on moving past AI apprehension and into action. Join Us >

In an era where generative AI is rapidly transforming business—or at least promising to—we’re all grappling with the urgency to integrate this groundbreaking technology into our strategies. But the journey from acknowledging its significance to deploying it effectively is fraught with hesitation about risks, fear of faltering, and the challenge of scaling beyond initial pilots.

Enter Tom Davenport, who’s been working at the crossroads of artificial intelligence and business for about as long as that crossroads has existed. With a decades-long career steeped in the study and application of analytics, information technology, and knowledge management, Davenport is a prolific author and a trusted advisor to organizations around the globe. His expertise is grounded in years of academic excellence as a professor at Babson College, complemented by his role as a fellow of the MIT Initiative on the Digital Economy and a senior advisor to Deloitte Analytics.

Davenport’s experience encompasses not only the theoretical nuances of AI but also its practical impact on the competitive business world. His counsel has guided many through the complexities of digital transformation, and his insights have been the lighthouse for organizations navigating the murky waters of innovation.

We talked recently with Davenport about the state of generative AI, what people are doing with it, what they aren’t—and why being a “fast follower” is likely a bad plan with this particular technology. The following is edited for length and clarity. 

When we last spoke, it was about a year ago, and ChatGPT had just started gaining traction. It felt like the beginning of something significant, but it hadn’t yet begun to pervade business as it has now. What has changed in the conversations you’re having about generative AI in business this year?

Well, we still have a long way to go. Sam Altman recently admitted that GPT-4 hasn’t significantly impacted the global economy. The technology itself is astounding, but using it effectively requires a lot of work and discipline that organizations haven’t cultivated. A couple of months ago, I did a survey of Chief Data and Analytics Officers with Amazon Web Services. Around 80% said they thought it would transform their organizations, but 57% had done nothing to prepare their data. Only about a third had a data strategy well-suited to generative AI, and only 11% felt strongly that they were ready for it.

There’s a lot of work to do on the data side. We’ve been managing structured data, like rows and columns of numbers, for a long time, but managing text, documents, and images is something companies are not very adept at. So we’ve still got a long way to go. There have been stories in the press about whether generative AI will lead to a productivity boom. I believe it offers a lot of potential for productivity improvements, but it requires significant work. You need to be good at experimentation: running controlled experiments with groups that use the technology and groups that don’t, so you can see the difference.
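Davenport doesn’t prescribe a particular method here, but the kind of controlled experiment he describes can be analyzed with standard tools. Below is a minimal sketch in Python, assuming you have recorded task-completion times for a pilot group using generative AI and a comparison group working without it; the numbers and the significance test are purely illustrative, not drawn from any study he cites.

```python
# Illustrative only: compare task-completion times (minutes) for a pilot
# group using generative AI against a comparison group working without it.
from scipy import stats

# Hypothetical data, not from any real experiment
with_genai = [22, 18, 25, 20, 19, 23, 21, 17]
without_genai = [34, 29, 38, 31, 35, 30, 33, 36]

t_stat, p_value = stats.ttest_ind(with_genai, without_genai)

print(f"Mean with gen AI:    {sum(with_genai) / len(with_genai):.1f} min")
print(f"Mean without gen AI: {sum(without_genai) / len(without_genai):.1f} min")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

The point is less the statistics than the discipline: you only learn whether the tool helps if some people work without it and you measure both groups the same way.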

Most organizations aren’t very comfortable conducting those kinds of experiments. You have to teach people the right way to use it. One thing that scares me is that, in one of the early studies on the use of generative AI by a couple of MIT professors, 68% of the participants who used ChatGPT didn’t edit the output at all. That’s dangerous, because it can lead to poor-quality content. Allen & Overy, for example, a prominent global law firm known for its use of generative AI, has pointed out the importance of having an expert in the loop to ensure the quality of legal content.

When I teach, I emphasize to my students the necessity of showing their work. They must show me their prompts, the generated output, how they edited it, and that they verified the facts. They often argue that it would be easier just to write the piece themselves from sources like Wikipedia, which tells you this technology isn’t an automatic shortcut to productivity. So I believe it will take time for us to adopt the disciplines necessary to truly derive value from generative AI.

How should companies start to think about strategy towards using this in a business model to get the benefit out of it? What should CEOs, CFOs, and board members consider now?

First, determine if this technology is existential for your industry. If it’s existential, it requires significant effort to figure out its use. If not, identify areas where it could have the most impact. For example, a telecom firm might not need generative AI for structured data but could find it useful for FCC filings or contracts.

I spoke with a board member at a consumer products company who mentioned an experiment with generative AI that far outperformed a human marketing team in creating appealing content. Are you seeing other organizations finding similar benefits?

I do think that understanding and reacting to the voice of the customer will be much improved with generative AI, partly because we’ve never had the time to fully read and make sense of all the customer comments that come in. What I find particularly appealing about generative AI is its ability to interpret sarcasm and humor, something most social media listening tools struggle with. Another strong point is its capability to determine where to direct specific pieces of customer feedback.

For instance, a consulting company I’m familiar with worked with a retailer that used a system listing all department heads. If a complaint came in about slow checkout lines at the south Charlotte store, the system could automatically forward that complaint to the store manager. Similarly, if there was an issue with the meat department at another location, it could direct the feedback appropriately. This type of task has traditionally been very labor-intensive, so it’s impressive that generative AI can handle it.
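The routing Davenport describes maps onto a simple pattern: have the model extract a location and department from the comment, then look the recipient up in a directory. Here is a rough sketch, assuming an OpenAI-style chat completion API and a hypothetical DEPARTMENT_HEADS table; this is not the retailer’s actual system.

```python
# Rough sketch: tag a customer comment with store and department, then look
# up who should receive it. Hypothetical directory and prompt; illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical directory of department heads
DEPARTMENT_HEADS = {
    ("south charlotte", "checkout"): "store.manager.southclt@example.com",
    ("uptown", "meat"): "meat.dept.uptown@example.com",
}

def route_feedback(comment: str) -> str:
    """Ask the model for 'store|department', then route to the right inbox."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Extract the store location and department from the "
                        "customer comment. Reply only as 'store|department' in "
                        "lowercase, or 'unknown|unknown' if unclear."},
            {"role": "user", "content": comment},
        ],
    )
    store, department = response.choices[0].message.content.strip().split("|")
    return DEPARTMENT_HEADS.get((store, department), "customer.care@example.com")

print(route_feedback("The checkout lines at the south Charlotte store were painfully slow today."))
```

The appeal of this pattern is that the model handles the messy, free-form language while the directory lookup stays deterministic, in contrast to a rules engine that needs a keyword list for every store and department.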

Is anyone truly ‘late’ to adopting generative AI now, or is there still time for companies to catch up?

I think there’s a significant advantage to being an early adopter. For instance, in an article about law firms, I examined Allen & Overy and Wilson Sonsini, one based in the UK and the other in Silicon Valley. As early adopters, they have made considerable progress. For law firms, adopting new technology is a major organizational change. It requires developing a culture of innovation, which can take time. This is not an area where being a ‘fast follower’ is beneficial: the technology will keep improving on its own, but the work of curating your data and changing how people operate takes time that can’t be compressed.

In an HBR article on whether data is ready for generative AI, I mentioned Morgan Stanley, an early adopter that had curated a hundred thousand wealth management documents for use with generative AI. When I asked Jeff McMillan, their Chief Data and Analytics Officer, about the timeline for this curation, he indicated they started five years ago in response to complaints about document quality. They set up a team in the Philippines to evaluate each document on various criteria, a process that takes a long time. Starting late means risking falling behind as other early adopters advance.

For mid-sized companies not on the scale of Morgan Stanley or Wilson Sonsini, what are the key actions to take in the next few months to stay competitive?

Well, don’t be overly concerned about the risk issues at this point. I think there are ways to get around most of those. It could be that at some point down the road, customers might start questioning us about what we’re doing with generative AI, but I think that’s unlikely. And if you have an agreement with your cloud provider not to share your own data with everyone else, you should be fine.

Put someone in charge of investigating this and conducting those kinds of experiments. Have your senior executives discuss where this might be influential for your organization and start large-scale experimentation to figure out where you need to move it into production more quickly. And I don’t think any of that is impossible for a midsize company. The technology is not that expensive, and someone can bring you up to speed quite quickly on what you need to know. So there’s not much excuse, except perhaps a lack of awareness of how important this might be.

So, to develop a strategy, start by experimenting.

[Laughs] Exactly.


Dan Bigman

Dan Bigman is Editor and Chief Content Officer of Chief Executive Group, publishers of Chief Executive, Corporate Board Member, ChiefExecutive.net, Boardmember.com and StrategicCFO360. Previously he was Managing Editor at Forbes and the founding business editor of NYTimes.com.
