Technology

Tariq Shaukat On GenAI: ‘The Devil Is In The Details’

From startups to global giants, the buzz about how to integrate Generative AI into daily operations has become ubiquitous. And it’s not just hype, but a reflection of the potential companies see in AI to revolutionize the way they work, innovate and compete. As stories of successful AI implementation begin to surface, the excitement grows, prompting even the most traditional businesses to take a closer look at how this technology might shape their future.

Tariq Shaukat has been involved with GenAI since its early days. While president of Google Cloud from 2016 to 2020, he was responsible for, among a wide range of functions, overseeing the company’s AI Solutions Lab. After serving as president of dating app Bumble for three years and helping take the company public, in September he became co-CEO of Sonar, a $4 billion leader in “clean code” solutions, which aims to help companies write their own code in a way that won’t come back to bite them. With AI in the picture, an increasing number of companies—and not just software providers, either—are relying on the technology to create personalized software solutions for their businesses, but that’s opened up a Pandora’s box of problems. Shaukat points to a Stanford University study that found that relying on AI leads to buggier code. “But the software developers believe that it is more secure,” he says. “So you have almost this worst of both worlds here.”

He adds that AI has led to “an explosion in the number of software developers and software organizations that are writing code with AI assistance,” and that has led to a huge uptick in business for Sonar, which offers tools to help companies analyze their code and identify where it might have security or reliability issues.
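To make the risk concrete, consider a minimal, hypothetical sketch, in Python, of the kind of flaw such scanners are built to flag: a database query assembled by string interpolation. An AI assistant can readily generate the unsafe version below because it looks plausible; the table, data and function names are invented for illustration, not taken from Sonar’s tools.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # SQL built by string interpolation: a classic injection flaw that
    # static analyzers flag and that plausible-looking AI output can hide.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the database driver handles the input safely.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    hostile_input = "' OR '1'='1"
    print(find_user_unsafe(conn, hostile_input))  # leaks every row
    print(find_user_safe(conn, hostile_input))    # correctly returns nothing
```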

But code development is just one of several GenAI use cases companies are now experimenting with. In the following interview, Shaukat, who is also a board member with Gap and Public Storage, talks about where GenAI is likely to have the greatest business impact, how knowledgeable CEOs need to be about the technology, and why it will prove a game-changer for small and midsize businesses.

How do you envision GenAI shaping corporate strategies in the near term?

There’s a world of potential that it unlocks because now a typical company can do things that were really, really hard to do previously. Take content generation, for example. Photo shoots for ad campaigns were financially out of reach for a lot of small companies. Now with AI, they can generate really custom creative. So the art of the possible has become much more interesting for a lot of companies.

For example, at Bumble, we did photo shoots for all of our ad campaigns. That meant you could only end up with one or two or three pieces of creative. Now, using AI, I can come up with creative uniquely targeted to each customer segment. That lets us service our customers better, and it saves money and improves effectiveness.

So it’s really a different world. Companies that have not thought of themselves as software companies can start writing a lot more code. They can start doing things that are a lot more sophisticated than they were able to do before in terms of analytics, data processing, customer engagement, that sort of thing. It really does change the game. And yes, there will be some functions where it will replace the people involved, but it creates new opportunities because of the use cases we’re talking about.

In your experience at Google Cloud and Bumble, and now at Sonar, how has AI been instrumental in scaling business operations?

I think every company over the last 10 years has recognized that data is at the heart of the company. It doesn’t matter if you’re a consumer product company or a car manufacturer or a tech company—data really is at the center. If you go back to the 2012, 2015 timeframe, how you managed and stored that data was the big question, and that’s where you saw a lot of innovation. That’s where cloud really came in to be super useful. As companies have gotten a handle on all that data, you start getting to the next question: What are you going to do with the data? And this is where AI has been really helpful, whether it’s in your supply chain algorithms—like, how do you use AI to optimize the routes that your supply chain is taking—or things like content creation. It kind of levels the playing field in a lot of ways. FedEx could always hire a ton of PhDs to do this, but a typical mom-and-pop operation could not.

So for smaller companies, this is a game-changer.

Yeah. I mean, think about chatbots as an example. Amazon probably could always have written a great chatbot that feels value-added. Now any company, even one with two people, can get a chatbot that sounds more or less like a human to answer its customer service questions. So where you needed a lot of expertise before in certain areas, you don’t need that same expertise now. That creates a lot of opportunity for midsize companies. I do think the web is an interesting analogy in that, before the web, only Walmart had distribution, but with the web anyone could set up a website and have global distribution. From a knowledge standpoint, AI is very similar. Now anybody can become an expert in a topic or can service people using advanced technology.

As AI becomes more and more integrated into operations, how should companies be approaching ethical guidelines?

One of the nice things about AI is it’s fairly easy to get access to. One of the downsides of it is it’s fairly easy to get access to. So I am a very strong believer that every company needs to make their AI principles and their AI acceptable use policies incredibly explicit.

At Google Cloud, we listed our principles and said, we will not do a deal that does not meet these principles—and we checked every deal against that. At Sonar, we have an acceptable use policy that says, here are the circumstances or the use cases we will allow for AI, here are the ones we won’t, and here’s how you have to do it—with this approved tool, with this enterprise contract, etc. It can’t just be the wild west. I do think companies are almost certainly putting themselves at risk today if they don’t have this acceptable use policy, and again, it needs to be very, very explicit. It can’t just say, you can use this for code development, but not for marketing content. You need to actually start saying, your partners can be Microsoft and Google, because we believe in their enterprise security and their data privacy, but not company X.

The devil is really in the details on the implementation of all of this. So being very explicit and taking a risk mitigation mindset is important. At one of the companies I’m on the board of, we’ve said, AI code generation is fine, but it must have a code scanner like Sonar, and it must have a human who reviews the results. So it can’t just be the human, and it can’t just be the tool—it has to be the combination. We are super explicit about that. I think everyone really needs to get to that level of detail.
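As an illustration of that “tool plus human” rule, here is a minimal sketch of how such a policy might be encoded as a pre-merge gate. Everything in it, the MergeRequest fields and the thresholds, is hypothetical rather than any real tool’s API.

```python
from dataclasses import dataclass

@dataclass
class MergeRequest:
    ai_assisted: bool       # was any of this change written with AI help?
    scanner_findings: int   # open issues reported by the code scanner
    human_approvals: int    # reviews by a person, not a bot

def may_merge(mr: MergeRequest) -> bool:
    # Changes written without AI still require one human approval.
    if not mr.ai_assisted:
        return mr.human_approvals >= 1
    # AI-assisted changes require BOTH a clean scan and a human reviewer:
    # the combination, never the tool or the person alone.
    return mr.scanner_findings == 0 and mr.human_approvals >= 1

# The tool alone is not enough...
print(may_merge(MergeRequest(ai_assisted=True, scanner_findings=0, human_approvals=0)))  # False
# ...and neither is the human alone.
print(may_merge(MergeRequest(ai_assisted=True, scanner_findings=3, human_approvals=1)))  # False
print(may_merge(MergeRequest(ai_assisted=True, scanner_findings=0, human_approvals=1)))  # True
```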

What are the AI risks CEOs and board members should be most concerned with?

Because of the world we live in right now, people are very attuned to cybersecurity risks. So when I talk to people, I find that the majority of the questions they’re asking on AI have to do with cybersecurity and data privacy, which are two sides of the same coin. That’s important, but it’s only one dimension.

You need to think about the reputational aspects. You need to think about the quality aspects of what you’re getting. The legal aspects: Is this potentially infringing on copyrights? And are you comfortable with that? One company I worked with said that they’ll allow AI code generation, but not AI-generated marketing. And I asked them, why not marketing? And they said, ‘Well, we are a creative company. If it gets out there that we are—accidentally, because of the vendors we’re using—infringing on copyright, that tarnishes our brand, our reputation. And that’s not something we can live with.’

So every company’s going to value that a little bit differently, and an industrial company may care less about that than a creative company. But approaching this as a 360-degree problem—reputational, legal, ethical, quality, security and privacy—is really important. Don’t let the CISO or the CTO be the only person weighing in on this. This should be a CEO-level topic.

Most CEOs don’t have a tech background. How up-to-speed do they need to be on the technology?

They don’t have to be experts. They should know what questions to ask, and they should be setting the tone: Where do we care about reputation? Where do we care about quality? Because there’s no black and white in any of this—a lot of it will be judgment. And the question is, is it a technology judgment or is it a business judgment? In most cases, these are business and brand and governance judgments, not simply technology judgments.

On that note, I have heard that some technologists don’t understand the business challenges around this, which can lead them to minimize the risks associated with certain implementations. Is that a thing? And if so, how do you mitigate that?

It’s certainly a thing, depending on who you’re talking to. There are a lot of people who will get very wrapped up in, what’s the definition of fair use according to copyright law, and therefore, are you really infringing or not really infringing? Legally, it’s a very interesting debate, but it’s not the most pragmatic debate for a company. So I find that there are some people in the tech world who are trying to understand the letter of the law so they can make sure they abide by it, but they don’t necessarily understand the pragmatic world that some of these businesses live in. So that is an issue.

One of the things that we did at Google Cloud was we started hiring a set of industry experts who were not necessarily technologists—in some cases they were, in some cases they weren’t. I hired, for example, the former chief compliance officer of Morgan Stanley to come in and help us work with customers to bridge the gap between technology and compliance. A lot of value was created from that. We hired the former chief digital officer of a large retailer who was not an engineer, but she had run digital operations. I think you’re seeing more and more of that, but it is something the tech world needs to keep doing.

What foundational knowledge do you think CEOs and board members need to have on this?

I may be extreme on this, but I think AI is going to be central to all parts of every business. I remember back when I used to do work in the manufacturing world, people would say, you can’t become CEO of a manufacturing company if you haven’t spent some time understanding how the manufacturing floor works. I think AI is going to be one of those areas. So again, do you need to be a technologist? No. But understanding the basics of how the models work, what the principles are and the limitations, that’s critical.

For example, there is fairly compelling evidence that ‘hallucinations’ are a feature of GenAI, not a bug. Meaning the way the models work, you will always have some level of hallucination. The question is, how do you operate in a world in which you will always have some level of hallucination? For a CEO or a C-suite exec to understand that changes how they approach the problem. Because it’s not, we’re gonna wait until Microsoft figures out how to solve the hallucination problem, because while it can be mitigated, it probably can’t be solved. So you need enough understanding of the foundational elements to have an appreciation for things like that.

A lot of companies are jumping into AI investment. Is this going to be like the early days of ERPs when companies threw megamillions at implementation and wound up with unwieldy systems that delivered lackluster ROI?

There will always be excessive hype, and that will lead to excessive activity in some ways. But I would argue that it’s a little different than ERPs, because of the learning that you will get from a lot of these AI experiments. If you implement an ERP system incorrectly, it’s kind of wasted money, right? You’re not getting partial value. I think with a lot of the efforts around AI, as long as you have a system in place that allows you to learn from what you’re doing, you’ll find it to be additive. It can get quite expensive, so I would make sure, again, that you’ve got governance around this. But I do think this is different because you can take incremental steps and you can learn along the way in a way that’s harder with some of these other big bang types of projects.

How can one measure ROI on AI initiatives? Are there any specific metrics to focus on?

Some of it is just 101. If you’re looking to improve customer service, then you look at handle times and NPS scores and things like that. Some of this is remembering that the AI should be an enabler of what you do as opposed to something distinct. Almost nobody should start at scale on day one, for obvious reasons. Companies need to recognize and embrace that this is so early that having a real learning agenda around AI is important—and the learning is on the AI itself, how to govern it and how to make sure that you are getting ready to scale. So have a program in place that says, okay, we’re going to pick a team, and that team is going to use GitHub Copilot to help write software more efficiently. Then you can measure developer productivity, etc. But it’s even more interesting to look at how many security issues and how many bugs got through the system: Did our system of scanning and human review and quality control work or did it not? Being very deliberate about this as a step-by-step approach is really how companies should think about it.
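To make that pilot concrete, here is a hypothetical sketch of the kind of arithmetic Shaukat describes: tracking not just developer output, but whether the scan-and-review safety net caught what the AI let through. All of the numbers are invented for illustration.

```python
# Invented results from a hypothetical one-team Copilot pilot.
pilot = {
    "prs_merged": 120,
    "bugs_caught_by_scanner": 14,
    "bugs_caught_by_human_review": 6,
    "bugs_escaped_to_production": 2,
}

total_bugs = (pilot["bugs_caught_by_scanner"]
              + pilot["bugs_caught_by_human_review"]
              + pilot["bugs_escaped_to_production"])

# Share of defects the combined scanning-and-review system stopped pre-release.
containment_rate = 1 - pilot["bugs_escaped_to_production"] / total_bugs

# Escaped defects per merged change, to compare against the pre-AI baseline.
escape_rate = pilot["bugs_escaped_to_production"] / pilot["prs_merged"]

print(f"Containment: {containment_rate:.0%}; escapes per PR: {escape_rate:.3f}")
```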

Are there any AI misconceptions or myths you want to bust for us?

The one I hear more often than I would like is that a lot of this depends on which tool you choose, whether you go with OpenAI or somebody else. But I think companies are missing the more fundamental question: What is the data you’re putting in, how is it structured and how well have you maintained it? Where I suspect a lot of people will go wrong is thinking they can just buy a tool off the shelf and it will just work. But you’ll get generic results because you haven’t tailored it for you as a company. So that’s something people should really keep in mind.

The second piece is, as cool as these tools are, we really are still in the early innings. There are other parts of machine learning that are much more mature, but on GenAI, people sometimes forget that we are just five, seven years in. It will be really interesting to see how things evolve.


C.J. Prince

C.J. Prince is a regular contributor to Chief Executive and other business publications. Her work has appeared in the New York Times, SmartMoney, Entrepreneur, Success, BusinessWeek, Working Mother, and others.
