S2 E2
May 30, 2024 | 39 mins

Choosing the Right
AI Technologies and Algorithms

About our Guest
Burhan Hamid
Chief Technology Officer
TIME

Nate Rackiewicz talks to Burhan Hamid, Chief Technology Officer at TIME, about the most successful strategies for implementing AI in organizations. From leveraging internal capabilities to navigating meaningful partnerships, Burhan shares insights honed through his experiences leading technology initiatives at the major media company. Don’t miss this discussion on selecting the right AI technologies and algorithms, structuring AI programs, avoiding pitfalls, and more.

Introduction

Nate Rackiewicz: Well, greetings again, my friends. I'm Nate Rackiewicz, and this is FuseBytes. It's a podcast about AI readiness in companies this season, and I'm excited to have joining us today Burhan Hamid, who is the CTO of TIME. Welcome, Burhan!

Burhan Hamid: Thanks, Nate. I'm so happy to be here.

Nate Rackiewicz: It's great to see you again, and congratulations on your recent promotion.

Burhan Hamid: Thank you. Thank you so much.

Nate Rackiewicz: It's been a long ride for you over there at TIME for sure, and TIME Inc. before that.

Burhan Hamid: Yeah, I've been in the tech space for 24 years, mostly at TIME Inc. and Time Media Publishing. It's been quite the ride. I've seen a lot of ups and downs. I've seen it all.

Nate Rackiewicz: So, could you explain a little bit about your background and how you came to be the CTO of TIME?

Burhan Hamid: How much time do we have, Nate?

Nate Rackiewicz: We have as much time as you'd like. You could take it all the way back to when we actually worked together as part of the same TIME Warner family.

Burhan Hamid: You know, we did. It's been a big family, TIME Warner, I mean. I started right when AOL bought TIME Warner in 2000. I started answering phones at a help desk, and way back then, I missed Y2K, but I remember showing up and seeing Y2K stickers on every approved computer across the entire office space.

Nate Rackiewicz: I was part of Time Warner for Y2K. So I went through that process over at HBO, and so we had Y2K, and they were still having us work on COBOL applications at the time.

Burhan Hamid: Wow, yeah. TIME Inc. specifically was a special place. I just got together last night with a group of people that had been there for 30-plus years. It's basically like everybody I worked with was family. But the most interesting thing, and the best part of TIME Inc. for me, was that I was able to bounce around from place to place to place. I started answering phones, then moved over to desktop support, helping people fix their computer problems at People magazine.

I grew from that into a management role at Entertainment Weekly, where I got to meet some of the most amazing people in the media space. Some of the smartest people I've ever worked with. So I was able to do so many things and evolve my career and learn so much about the different aspects of technology, both from what used to be called IT, to software engineering, to operations, to all of it, over the course of 17 years.

And so when TIME was spun off and bought by Marc and Lynne Benioff, I got an opportunity to come here and really separate the company from Meredith, who had bought TIME Inc., and we got to build everything from the ground up at the new independent TIME. That was super exciting to me. I really wanted to be a part of what that future for TIME would be.

Back then we were a 96-, 97-year-old company, and I wanted to make sure that I helped TIME get to its 100th year and set it up to be a force for another 100 years. I'm really grateful and honored to have had the opportunity to do that, now as CTO, but effectively leading the product, data, and engineering organizations over the four and a half years that I've been here.

So, title is a title, not a big deal. It's really about the work and doing the work and the team that I've got being able to deliver on some really fun and interesting work at the end of the day.

Nate Rackiewicz: Well, a huge congratulations to you on reaching TIME's 100th year. I know there have been a lot of celebrations about that. It's a huge milestone for you and for the company, and it's an honor to have you on the program today.

Burhan Hamid: Thank you, Nate. I really appreciate it. Looking forward to the conversation. You know, we've been partners with Fusemachines for 4 years now. And yeah, I'm really excited to see what you all are building as this grows.

Nate Rackiewicz: Yeah, for sure. And we appreciate the customership. That's really great.

Decoding AI Success: Strategies for Evaluation and Implementation

Nate Rackiewicz: So the podcast today and this season is focused on AI readiness in companies. We hear all about AI everywhere you go, you can't get away from it, and you think it would be easy for companies to implement AI given that everybody's talking about it. But the reality is, it's complicated to implement and what we're trying to do with this podcast is bring together thought leaders, like yourself, C-level executives, to share insights, best practices that you've seen as you've brought in AI for the first time, or the second time, or the third time. Whatever it's been.

You know, what are some of the things that you've seen? Because it certainly is complicated when you try to bring it in for the first time. So I'd love to learn a little bit more about your experience, because I imagine that AI is just one part of your portfolio as a CTO. You're looking at all technology within the company. So how are you thinking about evaluating AI and setting it up to maximize the probability of success, given that it can be so complicated in companies to get off the ground?

Burhan Hamid: Yeah, I mean, I think that, you know, this is not the first wave of AI, right? There have been companies, including TIME, and technologists who have been in the machine learning and AI space for many years, and we built AI products for years, right? Now, the new light that has been shined on AI really is exciting, because it brings it to the forefront of what every company is thinking about, and the emergence of large language models has really accelerated that. Now, in terms of maximizing the probability of success, or figuring out how to successfully deploy AI roadmaps:

I think it starts with first defining what success means to you as a company, right? So for me, I'm thinking about AI success through the products that we're building, right? There are AI products that we are working on that are meant to drive a particular metric for success. So, for example, for TIME.com, we would want to build a product that increases a very specific metric, like pages per session or time spent on a page.

So for us at TIME, very specifically, we've defined that metric of success and then thought about, okay, what are the ways that we can leverage all of the toolset that's available to us, including AI, to build a product that does help improve that metric?

“Now, if we're going to use AI, how do we make sure we do it in a way that's responsible and ethical but keeps a human in the loop, right? Like, that's super important, especially with large language models today. How do we listen to our audience and make sure we're building something that actually resonates with them, right? And enriches their experience on our platform.”

And then finally, we'll talk about this a little later as well, I'm sure, but it's super important for me to be able to get something to market relatively quickly, test it out, see what the feedback is, and see what the data tells us about how it's performing, right? You've defined the metrics for success; is this actually helping to drive the needle towards that success? And if it is, then how do we iterate on that and get it to scale in a way that will continue to drive that metric for success?
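
To make the metric side of this concrete, here is a minimal sketch of how a team might compare a success metric like pages per session between a control group and an AI-powered variant. The session data, group names, and lift calculation are illustrative assumptions, not TIME's actual analytics pipeline.

```python
# Minimal sketch (illustrative data, not TIME's analytics): comparing a success
# metric such as pages per session between a control group and an AI variant.
from statistics import mean

# Each record: (experiment_group, pages_viewed_in_session) -- hypothetical data.
sessions = [
    ("control", 2), ("control", 3), ("control", 1),
    ("ai_recs", 4), ("ai_recs", 3), ("ai_recs", 5),
]

def pages_per_session(group: str) -> float:
    """Average pages viewed per session for one experiment group."""
    views = [pages for g, pages in sessions if g == group]
    return mean(views) if views else 0.0

baseline = pages_per_session("control")
variant = pages_per_session("ai_recs")
lift = (variant - baseline) / baseline * 100  # relative change in the metric
print(f"control: {baseline:.2f}  ai_recs: {variant:.2f}  lift: {lift:.1f}%")
```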

Overcoming Obstacles: Scaling AI from Concept to Production

Nate Rackiewicz: That makes a lot of sense, starting with the business outcome that you're trying to achieve, then layering the AI applications on top of that, and making that success metric something you can measure so that you can really see the tangible output of AI. But getting it to production can be such an impediment, and there can be so many obstacles you face in trying to scale it up to a production level. With the traffic that TIME sees, I imagine even getting it to that point, to even test, is a challenge. I wonder if you could talk through some of the impediments or obstacles that you have to go through when trying to scale up these things, even to be able to test them?

Burhan Hamid: Yeah, I mean, I think the tech portions of that are relatively straightforward, right? Like, you can release to 1% of your audience or 10% of your audience to test things out, and we do that all the time. But I think some of the impediments for AI-specific projects, I mean, the most obvious one, and the most challenging one for us, is that all of this is really new.

Even though it's now, you know, a year and a half old, it feels like every few months there is an update to a model being released, right? We're at Gemini 1.5 Pro now. We've got small language models now, we've got, you know, there's so much that's being released.

“And it's really important, if you're an engineer working on this, to keep track of what the label on the release is. Is it a preview release? Is it a pre-release? Is it generally available? Because we've learned through our testing that with some things in preview, things might break without us even being aware that changes were made in the background to a model, right? So definitely keep an eye out for using stable versions of software.”

I mean, we've always known to do this, but with AI, it's just kind of like, Oh, I wanna test the latest and greatest. Let's check it out and see what we can do and test it. But for your product releases, make sure you're using stable code.
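
As a concrete illustration of the "use stable versions" point, here is a minimal sketch, assuming a simple config keyed by task and environment, of pinning explicitly versioned, generally available models for production while confining preview models to experiments. The model identifiers and task names are examples, not a description of TIME's actual stack.

```python
# Minimal sketch: pin stable, explicitly versioned models for production and
# keep preview models out of it. Model IDs and task names are illustrative.
PRODUCTION_MODELS = {
    "topic_extraction": "gemini-1.0-pro-001",  # pinned, generally available
    "summarization": "gpt-4-0613",             # pinned, generally available
}

EXPERIMENTAL_MODELS = {
    "topic_extraction": "gemini-1.5-pro-preview",  # preview: may change underneath you
}

def resolve_model(task: str, environment: str) -> str:
    """Return a model ID for the task, refusing preview or unpinned models in production."""
    if environment == "production":
        model = PRODUCTION_MODELS[task]
        if "preview" in model or "latest" in model:
            raise ValueError(f"Refusing unpinned/preview model in production: {model}")
        return model
    # Experiments may try the newest preview release, falling back to the stable one.
    return EXPERIMENTAL_MODELS.get(task, PRODUCTION_MODELS[task])

print(resolve_model("topic_extraction", "production"))  # stable, pinned version
print(resolve_model("topic_extraction", "experiment"))  # preview allowed here
```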

The other impediment is hype. You know, the expectations are sky high now for what the possibilities are with AI, and I have a job to do in managing that, right? People are throwing numbers out there like a 30 to 40% increase in productivity using AI. What does that mean?

Nate Rackiewicz: Right.

Burhan Hamid: How do you measure that, right? Now every executive is asking for a 30 to 40% increase in productivity, right? So managing the hype is probably the biggest impediment for an AI project, because there's an expectation that it will get done right away, it'll be perfect, and it'll transform the entire business overnight, and that's just not the reality of it for most businesses. It goes back to what we talked about earlier.

“The way to build a roadmap is to define the success, build out a product, build out a small version of it, test it out with a small user base, and then continue to invest in the growth of the product and pivot if you need to help it go in the right direction.”
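
A minimal sketch of the "small user base first" idea in that roadmap: deterministically bucket users so a feature can be exposed to 1% of the audience, then 10%, then everyone, while the success metric is watched. The hashing scheme, feature name, and user IDs are assumptions for illustration only.

```python
# Minimal sketch of a percentage rollout: each user lands in a stable bucket,
# so widening from 1% to 10% keeps the original 1% in the test group.
import hashlib

def rollout_bucket(user_id: str, feature: str) -> int:
    """Map a user to a stable bucket in [0, 100) for this feature."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def is_enabled(user_id: str, feature: str, rollout_percent: int) -> bool:
    """Enable the feature for roughly `rollout_percent`% of users."""
    return rollout_bucket(user_id, feature) < rollout_percent

# Start with 1% of the audience, then widen the rollout as the metrics hold up.
print(is_enabled("reader-123", "ai_content_recs", rollout_percent=1))
print(is_enabled("reader-123", "ai_content_recs", rollout_percent=10))
```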

What to Consider When Picking AI Technologies and Algorithms

Nate Rackiewicz: You talked about the hype, managing the hype. I imagine culture comes into play within an organization, and, you know, there are different layers of culture. Among them, you've got the layer that's executive management, you've got mid-level management, you've got staff. All of them have, you know, their own culture and their own expectations about that hype. How do you go about managing that across those different levels?

Burhan Hamid: Lots of conversations, right? Lots of one-on-one conversations. But it's not just horizontal layers, it's also vertical layers, right? Each department is going to have a different point of view on what AI brings to the table, right? There are going to be departments that are concerned about the loss of jobs, right?

So it's really about getting back to responsible AI and thinking about the ways that we can align as an organization on the best way to use this tool to help us grow, in a way that aligns with the overall values that we have as a company. And TIME's biggest value is trust, right? So we have to make sure that we are maintaining that not only externally, but also internally. So it's challenging, but once the leadership and the company are aligned, it trickles down from there.

Nate Rackiewicz: Got it. So on the episode today, we're honored to have you here as Chief Technology Officer at TIME, and we want to get into choosing the right AI technologies and algorithms as a topic as well. So I'd love to know what factors you consider when choosing the right AI technologies and algorithms for the solutions that you've identified to tackle.

Burhan Hamid: Yeah, I mean, for me specifically, I'm big on partnerships. I think the most important thing, going back to trust, is building a relationship with tech partners that will help both companies, right? We've got an excellent partnership with Google Cloud; they've come to the table and helped us build for several years now. We've got an excellent partnership with Fusemachines. We've got excellent partnerships with several other platforms out there. And, to me, that is key, right? Because that way you're all in it for the long run, and everybody has skin in the game.

After that, I think about speed to market, how fast we can get things built, when evaluating different AI tech. So, is something in preview, or is it actually stable, right? Is this just a marketing announcement, or is there an actual, real product behind it? That's important for me.

And then, you know, cost is a huge factor, right? If I'm thinking about what it's going to cost to deploy something, I'm doing an ROI analysis on it, making sure it aligns with what we talked about earlier, the success metrics, and whether those success metrics actually drive revenue or cost savings. That all comes into the equation as I'm thinking about the potential of the different AI technologies that we could be using.

In-House vs Off-the-Shelf AI Solutions

Nate Rackiewicz: When starting with a use case, how do you try to line up those technologies with the use case? So, I think that's great that you're talking about the partnerships and evaluating the specific technologies within those partnerships that they might have to offer. How do you think about lining those up, the business questions, the use cases with the specific technologies that are offered either by these third-party vendors or by custom solutions that you might build in-house?

Burhan Hamid: Yeah, it's really an interesting one to think about, Nate. You know, I'm hesitant to say this, but I feel like what's been happening is backwards, right? The technology has been released and everybody's looking to retrofit use cases to it, versus the other way around, which is that you have a business need and the technology helps solve that business need, right? Or you have a use case, and the technology helps solve that.

So, to me as a technologist, that's very exciting, right? Like there's a new thing out, and you want to see what you can use it for, and I've been very careful at TIME to make sure we're not just slapping a chatbot onto the user interface of TIME.

Because that's everybody's inclination: oh, there's this new cool tool, let's make TIME GPT and call it a day. So it goes back to what we talked about all the way at the beginning, thinking about how we build products that are going to enrich the audience experience on TIME.com. A use case for us, and it's a natural one for AI and has been for quite some time, is content recommendation, right? We've developed something with Fusemachines called CRX. It's our content recommendation engine at TIME, and it's been in market for 3 years now. So what is the evolution of that product, as a use case for driving more engagement on the site, now, in light of the advancements in AI?

So, can we now take content recommendation to the next level using LLMs? That's the type of use case I'm thinking of. And then, when we think about what types of technologies apply to it, which LLM do we work with, right? What we do is experiment with all of them. We'll run multivariate testing; we're running GPT-4, GPT-3.5, Gemini 1.0, and Gemini 1.5 Pro in preview mode, and we've set up ways to evaluate the performance of each of those for the product that we're building, and we're evaluating that performance with a human in the loop.

So it's not really a data-intensive way of evaluating performance, because, just to be transparent, what we're doing is having the LLMs read the article and pull out topics from it, right? It's an old NLP use case that we're now applying an LLM to. So now a human can review those topics and compare: okay, does one model extract the topics that I would have chosen, versus another, versus another?

And so, that's how we're actually evaluating different algorithms, different models with the products that we're trying to build for the use cases that we have.
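
To ground that evaluation loop, here is a minimal sketch of running the same article through several candidate models and laying each model's extracted topics side by side for a human reviewer. The model list is taken from the conversation above, but the extraction call is a stubbed placeholder; the actual prompts and client libraries aren't described here.

```python
# Minimal sketch of human-in-the-loop model comparison for topic extraction.
# extract_topics() is a placeholder; a real version would call each provider's API.
CANDIDATE_MODELS = ["gpt-4", "gpt-3.5-turbo", "gemini-1.0-pro", "gemini-1.5-pro"]

def extract_topics(model_name: str, article_text: str) -> list[str]:
    """Placeholder for an LLM call that returns the topics found in the article."""
    # Illustrative stub: a real implementation would prompt `model_name` to list
    # the article's main topics and parse the structured response.
    return [f"<topic extracted by {model_name}>"]

def compare_models(article_text: str) -> dict[str, list[str]]:
    """Collect each candidate model's topics so an editor can compare them."""
    return {model: extract_topics(model, article_text) for model in CANDIDATE_MODELS}

article = "..."  # full article text goes here
for model, topics in compare_models(article).items():
    # A human reviews each list against the topics they would have chosen.
    print(f"{model}: {topics}")
```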

Nate Rackiewicz: Got it. So you talked about the importance of those partnerships as well. How do you think about internalizing some of that capability as you move forward with these things? How do you think about the trade-offs between using off-the-shelf AI solutions or third parties versus doing this in-house? Is there a transition that goes on along that spectrum?
