S2 E2
May 30, 2024 · 39 mins

Choosing the Right AI Technologies and Algorithms

About our Guest

Burhan Hamid
Chief Technology Officer
TIME

Nate Rackiewicz talks to Burhan Hamid, Chief Technology Officer at TIME, about the most successful strategies for implementing AI in organizations. From leveraging internal capabilities to navigating meaningful partnerships, Burhan shares insights honed through his experiences leading technology initiatives at the major media company. Don’t miss this discussion on selecting the right AI technologies and algorithms, structuring AI programs, avoiding pitfalls, and more.

Introduction

Nate Rackiewicz: Well, greetings again, my friends. I'm Nate Rackiewicz, and this is FuseBytes. It's a podcast about AI readiness in companies this season, and I'm excited to have joining us today Burhan Hamid, who is the CTO of TIME. Welcome, Burhan!

Burhan Hamid: Thanks, Nate. I'm so happy to be here.

Nate Rackiewicz: It's great to see you again, and congratulations on your recent promotion.

Burhan Hamid: Thank you. Thank you so much.

Nate Rackiewicz: It's been a long run over there at TIME for sure, and TIME Inc. before that.

Burhan Hamid: Yeah, I've been in the tech space for 24 years, mostly at TIME Inc. and Time Media Publishing. It's been quite the ride. I've seen a lot of ups and downs. I've seen it all.


Nate Rackiewicz: So, could you explain a little bit about your background and how you came to be the CTO of TIME?

Burhan Hamid: How much time do we have, Nate?

Nate Rackiewicz: We have as much time as you'd like. You could take it all the way back to when we actually worked together as part of the same Time Warner family.

Burhan Hamid: You know, we did. It's been a big family, Time Warner, I mean. I started right when AOL bought Time Warner in 2000, answering phones at a help desk. Way back then, I missed Y2K, but I remember showing up and seeing Y2K stickers on every approved computer across the entire office space.

Nate Rackiewicz: I was part of Time Warner for Y2K. So I went through that process over at HBO, and so we had Y2K, and they were still having us work on COBOL applications at the time.

Burhan Hamid: Wow, yeah. TIME Inc. specifically was a special place. I just got together last night with a group of people that had been there for 30-plus years, and basically everybody I worked with was family. But the most interesting thing, the best part of TIME Inc. for me, was that I was able to bounce around from place to place. I started answering phones, then moved over to desktop support, helping people fix their computer problems at People magazine.

I grew from that into a management role at Entertainment Weekly, where I got to meet some of the most amazing people in the media space. Some of the smartest people I've ever worked with. So I was able to do so many things and evolve my career and learn so much about the different aspects of technology, both from what used to be called IT, to software engineering, to operations, to all of it, over the course of 17 years.

And so when TIME was spun off and bought by Marc and Lynne Benioff, I got an opportunity to come here and really separate the company from Meredith, which had bought TIME Inc., and we got to build everything from the ground up at the new, independent TIME. That was super exciting to me. I really wanted to be a part of what that future for TIME would be.

Back then we were a 96-, 97-year-old company, and I wanted to make sure that I helped TIME get to its 100th year and set it up to be a force for another 100 years. I'm really grateful and honored to have had the opportunity to do that, now as CTO, but effectively leading the product, data, and engineering organizations over the four and a half years that I've been here.

So, a title is a title, not a big deal. It's really about the work, doing the work, and the team that I've got being able to deliver on some really fun and interesting work at the end of the day.


Nate Rackiewicz: Well, a huge congratulations to you on reaching TIME's 100th year. I know there have been a lot of celebrations about that. It's a huge milestone for you and for the company, and it's an honor to have you on the program today.

Burhan Hamid: Thank you, Nate. I really appreciate it. Looking forward to the conversation. You know, we've been partners with Fusemachines for four years now, and I'm really excited to see what you all are building as well, as this grows.

Nate Rackiewicz: Yeah, for sure. And we appreciate the customership. That's really great.

Decoding AI Success: Strategies for Evaluation and Implementation

Nate Rackiewicz: So the podcast today and this season is focused on AI readiness in companies. We hear all about AI everywhere you go, you can't get away from it, and you think it would be easy for companies to implement AI given that everybody's talking about it. But the reality is, it's complicated to implement and what we're trying to do with this podcast is bring together thought leaders, like yourself, C-level executives, to share insights, best practices that you've seen as you've brought in AI for the first time, or the second time, or the third time. Whatever it's been.

You know, what are some of the things that you've seen? Because it certainly is complicated when you try to bring it in for the first time. So I'd love to learn a little bit more about your experience, because I imagine that AI is just one part of your portfolio as a CTO; you're looking at all technology within the company. So how are you thinking about evaluating AI and setting it up to maximize the probability of success, given that it can be so complicated in companies to get off the ground?

Burhan Hamid: Yeah, I mean, I think, you know, this is not the first wave of AI, right? There have been companies, including TIME, and technologists that have been in the machine learning and AI space for many years, and we've built AI products for years, right? Now, the new light that has been shined on AI really is exciting, because it brings it to the forefront of what every company is thinking about, and the emergence of large language models has really accelerated that.

Now, in terms of maximizing the probability of success, or figuring out how to successfully deploy AI roadmaps, I think it starts with first defining what success means to you as a company, right? For me, I'm thinking about AI success through the products that we're building. There are AI products that we are working on that are meant to drive a particular metric for success. So, for example, for TIME.com, we would want to build a product that increases a very specific metric, like pages per session or time spent on a page.

So for us at TIME, very specifically, we've defined that metric of success and then thought about, okay, what are the ways that we can leverage all of the toolset that's available to us, including AI, to build a product that helps improve that metric?

Now, if we're going to use AI, how do we make sure we do it in a way that's responsible and ethical, but keeps a human in the loop, right? That's super important, especially with large language models today. How do we listen to our audience and make sure we're building something that actually resonates with them and enriches their experience on our platform?

And then finally, we'll talk about this, I'm sure, a little bit later as well, but it's super important for me to be able to get something to market relatively quickly, test it out, see what the feedback is, and see what the data tells us about how it's performing, right? You've defined the metrics for success; is this actually helping to move the needle towards that success? And if it is, how do we iterate on that and get it to scale in a way that will continue to drive that metric for success?

Overcoming Obstacles: Scaling AI from Concept to Production

Nate Rackiewicz: That makes a lot of sense, starting with the business outcome that you're trying to achieve, then layering the AI applications on top of that, and making that success metric something you can measure so that you can really see the tangible output of AI. But getting it to production can be such an impediment, and there can be so many obstacles when trying to scale it up to a production level. With the traffic that TIME sees, I imagine even getting it to the point where you can test is a challenge, and I wonder if you could talk through some of the impediments or obstacles that you have to go through when trying to scale up these things, even to be able to test them?

Burhan Hamid: Yeah, I mean, I think the tech portions of that are relatively straightforward, right? You can release to 1% of your audience or 10% of your audience to test things out, and we do that all the time. But for AI-specific projects, the most obvious impediment, and the most challenging one for us, is that all of this is really new.
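(To make the 1% and 10% releases Burhan mentions concrete, here is a minimal sketch of a deterministic percentage-rollout gate. The hashing scheme, feature names, and user IDs are illustrative assumptions, not TIME's actual implementation.)

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Deterministically bucket a user for a gradual feature rollout.

    Hashing user_id together with the feature name gives each user a
    stable bucket per feature, so the same visitor always sees the
    same experience as the rollout widens from 1% to 10% and beyond.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100  # a stable value in 0..99
    return bucket < percent

# Start by exposing the AI-powered experience to 1% of the audience.
if in_rollout("visitor-123", "ai-recs-v2", percent=1.0):
    print("serve experimental AI experience")
else:
    print("serve control experience")
```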

Even though it's now, you know, a year and a half old, it feels like every few months there's an update to a model being released, right? We're at Gemini 1.5 Pro now. We've got small language models now. There's so much that's being released.

And it's really important, if you're an engineer working on this, to keep track of what the label on the release is. Is it a preview release? Is it a pre-release? Is it generally available? Because we've learned through our testing that with some things in preview, things might break without us even being aware that changes were made to a model in the background, right? So definitely keep an eye out and use stable versions of software.

I mean, we've always known to do this, but with AI, it's just kind of like, oh, I wanna test the latest and greatest, let's check it out and see what we can do. But for your product releases, make sure you're using stable code.
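(A minimal sketch of the version-pinning discipline Burhan describes: production traffic goes to a pinned, generally available model, while preview releases stay confined to experiments. The model identifiers and config shape below are illustrative assumptions, not TIME's setup.)

```python
# Pinned, generally available model for production traffic; the exact
# identifiers here are illustrative, not a real configuration.
STABLE_MODEL = "gemini-1.0-pro-001"        # GA release, version pinned
PREVIEW_MODEL = "gemini-1.5-pro-preview"   # preview: may change underneath you

def model_for(environment: str) -> str:
    """Keep preview models out of production releases."""
    return STABLE_MODEL if environment == "production" else PREVIEW_MODEL

print(model_for("production"))   # -> gemini-1.0-pro-001
print(model_for("experiment"))   # -> gemini-1.5-pro-preview
```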

The other impediment is hype. You know, expectations are sky high now for what the possibilities are with AI, and I have a job to do in managing that, right? People are throwing numbers out there like a 30 to 40% increase in productivity using AI. What does that mean?

Nate Rackiewicz: Right.

Burhan Hamid: How do you measure that, right? Now every executive is asking for a 30 to 40% increase in productivity. So managing the hype is probably the biggest impediment for an AI project, because there's an expectation that it will get done right away, it'll be perfect, and it'll transform the entire business overnight, and that's just not the reality for most businesses. It goes back to what we talked about earlier.

The way to build a roadmap is to define success, build out a small version of the product, test it out with a small user base, and then continue to invest in the growth of the product, pivoting if you need to, to help it go in the right direction.

What to Consider When Picking AI Technologies and Algorithms

Nate Rackiewicz: You talked about managing the hype. I imagine culture comes into play within an organization, and there are different layers of culture: you've got executive management, you've got mid-level management, you've got staff. All of them have their own culture and their own expectations about that hype. How do you go about managing that across those different levels?

Burhan Hamid: Lots of conversations, right? Lots of one-on-one conversations. And it's not just horizontal layers, it's also vertical ones, right? Each department is going to have a different point of view on what AI brings to the table. There are going to be departments that are concerned about the loss of jobs, right?


So it's really about getting back to responsible AI and thinking about the ways we can align as an organization on the best way to use this tool to help us grow, in a way that aligns with the overall values we have as a company. And TIME's biggest value is trust, right? So we have to make sure that we are maintaining that not only externally, but also internally. It's challenging, but once the leadership and the company are aligned, it trickles down from there.

Nate Rackiewicz: Got it. So for today's episode, we're honored to have you here as Chief Technology Officer at TIME, and we want to get into choosing the right AI technologies and algorithms as a topic. I'd love to know what factors you consider when choosing the right AI technologies and algorithms for the solutions that you've identified to tackle.

Burhan Hamid: Yeah, I mean, for me specifically, I'm big on partnerships. I think the most important thing, going back to trust, is building a relationship with tech partners that will help both companies, right? We've got an excellent partnership with Google Cloud; they've come to the table and helped us build for several years now. We've got an excellent partnership with Fusemachines, and with several other platforms out there. And to me, that is key, right? Because that way you're all in it for the long run, and everybody has skin in the game.

After that, I think about speed to market, how fast we can get things built, when evaluating different AI tech. Is something in preview, or is it actually stable, right? Is this just a marketing announcement, or is there an actual real product behind it? That's important for me.

And then, you know, cost is a huge factor, right? I'm thinking about what it's going to cost to deploy something, doing an ROI analysis on it, and making sure that it aligns with the success metrics we talked about earlier. Do those success metrics actually drive revenue or cost savings? That all comes into the equation as I'm thinking about the potential of the different AI technologies that we could be using.

In-House vs Off-the-Shelf AI Solutions

Nate Rackiewicz: When starting with a use case, how do you try to line up those technologies with the use case? I think it's great that you're talking about the partnerships and evaluating the specific technologies those partners might have to offer. How do you think about lining up the business questions and use cases with the specific technologies offered either by these third-party vendors or by custom solutions that you might build in-house?


Burhan Hamid: Yeah, it's really an interesting one to think about, Nate. You know, I'm hesitant to say this, but I feel like what's been happening has been backwards, right? The technology has been released and everybody's looking to retrofit use cases to it, versus the other way around, which is: you have a business need, and the technology helps solve that business need, right? Or you have a use case, and the technology helps solve that.

So, to me as a technologist, that's very exciting, right? Like there's a new thing out, and you want to see what you can use it for, and I've been very careful at TIME to make sure we're not just slapping a chatbot onto the user interface of TIME.

Because that's everybody's inclination: oh, there's this new cool tool, let's make TIME GPT and call it a day. So it goes back to what we talked about at the very beginning, thinking about how we build products that are going to enrich the audience experience on TIME.com. A use case for us, and it's a natural one for AI and has been for quite some time, is content recommendation, right? We've developed something with Fusemachines called CRX, our content recommendation engine at TIME. It's been in market for three years now. As a use case for driving more engagement on the site, what is the evolution of that product now, in light of the advancements in AI?

So, can we now take content recommendation to the next level using LLMs? That's the type of use case I'm thinking of. And then, when we think about which types of technologies apply to it, which LLM do we work with, right? What we do is experiment with all of them. We run multivariate testing across GPT-4, GPT-3.5, Gemini 1.0, and Gemini 1.5 Pro in preview mode, and we've set up ways to evaluate the performance of each of those for the product that we're building, with a human in the loop.

It's not really a data-intensive way of evaluating performance. Just to be transparent, we have the LLMs read an article and pull out topics from it, right? It's an old NLP use case that we're now applying an LLM to. A human can then review those topics and compare: okay, does one model extract the topics that I would have chosen, versus another?

And so, that's how we're actually evaluating different algorithms, different models with the products that we're trying to build for the use cases that we have.
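(A sketch of the side-by-side evaluation Burhan describes: each candidate model extracts topics from the same article, and a human reviewer compares the outputs against the topics they would have chosen. The endpoints, request shape, and response parsing are hypothetical placeholders, not TIME's or any vendor's real API.)

```python
import requests

# Hypothetical endpoints standing in for the models under test.
MODELS = {
    "model-a": "https://example.com/model-a/generate",
    "model-b": "https://example.com/model-b/generate",
}

PROMPT = "List the main topics covered in the following article:\n\n{article}"

def extract_topics(article: str) -> dict:
    """Ask every candidate model for topics; return raw output per model."""
    results = {}
    for name, endpoint in MODELS.items():
        resp = requests.post(
            endpoint,
            json={"prompt": PROMPT.format(article=article)},
            timeout=30,
        )
        resp.raise_for_status()
        results[name] = resp.json()["text"]  # assumed response field
    return results

def review(article: str) -> None:
    """Print each model's topics side by side for the human in the loop."""
    for name, topics in extract_topics(article).items():
        print(f"--- {name} ---\n{topics}\n")
```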

Nate Rackiewicz: Got it. You talked about the importance of those partnerships as well. As you move forward with these things, how do you think about internalizing some of that capability, and about the trade-offs between using off-the-shelf AI solutions or third parties versus doing it in-house? Is there a transition that goes on along that spectrum?


Burhan Hamid: Yeah, I actually think about it from both perspectives. I like to get to market with something off-the-shelf, because that helps get something out there to prove out and test, and we did exactly that for our CRX algorithm.

We started with an off-the-shelf recommendation engine, which gave the team internally the time they needed to build something that would compete with it. Until our internal product performed better than the off-the-shelf product, we kept the off-the-shelf product. But as soon as the internal product was performing better, we had a winner.

And so that's the way I think about it, because you unlock a bunch of things by doing that, right? You give your team time to build something. You give them a point of reference. You still get something out to market relatively quickly. And you collect the data you need to measure against, to see if it's even worthwhile continuing to invest in building the product in-house.

So that's the approach I like to take, unless there is an opportunity to build something that is internal IP, right? Something that will give you a competitive advantage that no one else has, and in that case, you absolutely invest internally. But even then, I would still continue the approach of test, iterate, test, iterate.
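(A minimal sketch of the champion/challenger pattern described above: the off-the-shelf engine stays in production until the in-house build beats it on the agreed success metric. The metric name and numbers are illustrative assumptions.)

```python
def pick_engine(champion_metric: float, challenger_metric: float) -> str:
    """Keep the off-the-shelf champion until the in-house challenger
    beats it on the defined success metric (e.g. pages per session)."""
    return "in-house" if challenger_metric > champion_metric else "off-the-shelf"

# Measured over a live split: champion at 2.1 pages/session,
# in-house challenger at 2.3 -> promote the in-house engine.
print(pick_engine(champion_metric=2.1, challenger_metric=2.3))
```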

ROI Strategization

Nate Rackiewicz: So, as you're running things in parallel, how do you think about ROI, and about justifying the additional parallel effort that's going on there with the team members?

Burhan Hamid: Well, with something off-the-shelf, that's where the partnerships come into play, right? You can leverage a partner to help build the off-the-shelf thing while your internal team has time to build something that competes with it. That's exactly how it all ties together with the partnerships, where we're looking to work with a partner. And it's an incentive for the partner, too, to build something better.

At the end of the day, they're getting product feedback to help their product grow while we're incentivizing our team internally to compete with this third-party product, and so everybody gets something out of that at the end of the day.

Nate Rackiewicz: And as they're building those things as third parties, how do you ensure that the chosen AI technologies align with the company's existing technology infrastructure? From an integration standpoint, I often find that integration testing and the integration of systems is the most complex part. So, how do you tackle that?

Burhan Hamid: We do a lot of work on Google Cloud, so our stack is built there, and we look for partners that are going to work on that platform. But that said, everything is so interoperable these days that it's less of a concern than it used to be, right? Even though we're working in Google Cloud, we're basically writing functions, Python code, or JavaScript code that is calling an API, and that API can be OpenAI's API; it doesn't have to be, you know, Gemini's API.

So it's really not been a challenge for us to be contained to one particular environment. As long as the technology we're working with makes itself available through an endpoint that we can hit, that's all we need.
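(A minimal sketch of the endpoint-agnostic pattern Burhan describes: the calling code only needs an HTTP endpoint it can hit, so swapping model providers becomes a configuration change. The environment variables, URL, auth header, and response field are hypothetical assumptions, not any vendor's documented API.)

```python
import os
import requests

# Whichever provider is configured: OpenAI-hosted, Gemini-hosted, or
# anything else exposing an HTTP endpoint. Values are hypothetical.
LLM_ENDPOINT = os.environ.get("LLM_ENDPOINT", "https://example.com/v1/generate")
LLM_API_KEY = os.environ.get("LLM_API_KEY", "")

def complete(prompt: str) -> str:
    """POST a prompt to the configured model endpoint and return the text."""
    resp = requests.post(
        LLM_ENDPOINT,
        headers={"Authorization": f"Bearer {LLM_API_KEY}"},
        json={"prompt": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]  # assumed response field
```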


Nate Rackiewicz: So it sounds like you've got some good approaches here to interoperability, bringing these things together, bringing in the third-party vendors to really fast-track the use of AI, and having the specific IP-focused AI handled by internal staff. It sounds like you're on top of it over there at TIME, which is great to hear.

How do you stay abreast of all the technologies and all these changes in order to be on top of it? What are some of the trends that you're watching out there? Because there is always a new thing, a new shiny object out almost every day.

Burhan Hamid: It's really hard, Nate. It's really difficult. It's funny, I set up a bot in my Slack that's just monitoring search terms in Google and sending across any search results tagged AI, and you can only imagine what that bot is sending me right now.

But no, first of all, a plug for TIME.com: TIME is writing quite a bit about AI. We're releasing the TIME100 AI in September, a list of the 100 most influential people in AI. TIME has invested in AI from an editorial standpoint as well, and there's some really influential and important coverage about ethical AI that the TIME team does editorially. The thing that keeps me in the loop the most, though, is, one, working on it, right? Literally being immersed in it on a day-by-day basis, you discover everything that's available to you.

Two, the partnerships are key, right? Going back to that, you have to lean on your partners. If we're partnering with, you know, one of the top cloud providers in the world, a company that's pushing forward an agenda in AI, they're going to keep you up to date. So we meet regularly with our partners and learn what their roadmaps are.

Then, you know, podcasts. I like the Hard Fork podcast there, I like Lex Fridman’s podcast. You know, I like the FuseBytes podcast.


Nate Rackiewicz: Nice.

Burhan Hamid: And then there are some AI tools out there that are actually great for improving how I operate on a day-to-day basis in terms of learning. I'm a big fan of Perplexity.AI; they have a Discover feed that covers the top 10 or 15 topics every day that you can look at. Some newsletters also.

So yeah, it's really a lot of consumption of media in very different formats. That helps. But really you have to just immerse yourself in it and build it and do it, and then you'll learn about everything that's out there.

Nate Rackiewicz: You've mentioned the ethical use of AI a few times over the course of the podcast, and I imagine that comes up as a topic pretty regularly in these news feeds that you're monitoring. How do you think about that in the context of deploying AI? Earlier in the episode, you talked about it relative to the LLMs that are out there, the ethical use of LLMs. How are you thinking about the ethical use of AI overall?

Burhan Hamid: I think it has to start with empathy, right? You have to think about the impact on humanity. I think about it from an ethical point of view, but also, and sometimes these things are juxtaposed, from a technologist's point of view, right?

Because the engineer in me wants to just plow forward at all costs, right? Figure out what we can solve. Can we cure cancer? Can we live in outer space? Can we colonize Mars? Can we do all these things? It really excites me to think about what the future, the science fiction turned reality, can be through this technology. But then you have to pull it back a little bit and make sure that what you're doing is not causing more harm than good.

And the real example of that is job displacement, although I don't think AI is necessarily going to replace a significant number of jobs.

I think people are at risk of losing their job if they don't learn how to use AI versus people being replaced by AI, right? And I think there's definitely a fine line there. But there is something to think about there.

Every person in the job market today should be thinking about how they can use AI to improve the work that they do. It's like the smartest person you know is right there for you, anytime you need them, and if you're not leveraging that, then you risk falling behind.

But ethically, as a leader at TIME, I'm encouraging my teams to learn and start to use the technology. At the same time, I wanna be very careful about our editorial use of AI, right? I think there are lots of challenges with content out in the marketplace today that is not human-produced or human-reviewed, and that's just making for a terrible internet for everybody. So, you know, there are risks there.

There are also risks around elections. I mean, this is an election year not just in the United States; there are tons of elections happening all over the world this year. So we're very, very careful about making sure we understand what the implications are there, and communicating that. Some of the coverage TIME is doing is around that, right? So we don't use AI to produce any content at TIME. That's one of the things where we've said absolutely, we're not going to do this right now.

And those are the judgments we have to make. That doesn't mean we're not trying to use it for all other parts of the business: trying to improve efficiency, trying to build interesting digital products using the content we already have, or even using it for use cases like tagging and suggesting headlines for authors, right? Those are all things I think we can use it for in the future. But making sure that we're being honest with the people coming to TIME.com or reading TIME Magazine, making sure those people know that what they're reading is real journalism and not something an AI produced, is super important. And I think that's an ethical concern for us.


Foundations of a Successful AI Project

Nate Rackiewicz: Yeah, I certainly agree that there are a lot of ethical considerations when you speak about journalism in particular, so I imagine that's really close to your heart, as you've made clear.

So we've got a great audience of senior executives that listens to this podcast, and they're battling the same challenges that you've gone through. And you've really outlined nicely how you've tackled these things, from the start of aligning them with the business case and testing them against that.

How would you frame or structure an AI program if you were one of them? What are some of the key pillars you would really have at the foundation of your AI program if you were just getting started?

Burhan Hamid: Yeah, it depends on what your goals are, right? But I suppose I would say, start with your goals. Are you going to use AI to build products? Are you going to use AI to improve workflow? Are you going to use AI for quality assurance? Think about what your goals are and what areas of your company you want to apply AI to, and once you do that, make sure, as we discussed earlier, to get alignment across the company or across those departments.

Then build out a plan, build out a roadmap, and evaluate the ROI for each of the things on that roadmap, and continue to do that over the course of executing it, because you may find that you're three months into an AI project and it's not going the way you thought it would. You're losing money on it, or it's costing you more than it's worth. Be okay with failure. Be okay with saying this is not something we want to continue to invest in, and move on to the next thing.

So I think that's just kind of the way I would approach it. But one of the most important things is making sure there's clear alignment across the board on what the plans are and how we're moving forward.

Nate Rackiewicz: Sometimes people call that governance, I think. Is that similar to what you're talking about in terms of alignment?

Burhan Hamid: Yeah, I think so. We can apply that term to it, though to me, if I'm being honest, governance sounds a little bit rigid, right? It may apply to certain companies, and other companies might not want that rigid a process. If you're a relatively small company or just starting out, process can sometimes slow you down a little bit, right? And sometimes that's good, and sometimes that's not good.

So I think it depends on the company, the size of the company, the talent that you have, and their ability to stay within certain bounds. Or do you want to let them go free, build the coolest stuff they can possibly build, and bring it back to you? It really depends on the culture of the company and how you'd approach governance, but I'd call it getting everybody aligned.


Avoiding Common AI Adoption Pitfalls

Nate Rackiewicz: Yeah, get in line and get everybody aligned. That's good. So what are some of the pitfalls you would recommend they avoid, things they should be looking out for? I guess it's the inverse, to a degree, of what you said they should be doing. What are some common pitfalls you would recommend C-level executives keep an ear to the ground for, and investigate to avoid?

Burhan Hamid: Yeah. So when I talked about partnerships, right, that's a double-edged sword. What I would encourage any executive to do is prove out the partnership. Ask for a proof of concept and see if the company you're partnering with, or potentially partnering with, can deliver on their promises, and make sure that they can before signing a long-term deal with them, right?

There are lots of hungry startups in the AI space that are willing to go above and beyond to get your business, so make sure that they can back that up. That would be one pitfall. The other is to understand what your costs are going to be. It's very confusing, especially in AI, with tokens and this whole new way of thinking about how resources are consumed. Make sure you understand what those costs are. I gave an example earlier of using or not using preview releases of software, right?

One of the benefits, in certain cases, of using preview releases is that they don't cost anything, right? You're part of the feedback process. But you might realize that as soon as the thing you built a product on moves from preview to general availability, your cost structure has changed.

So I would make sure to stay on top of that. And then, I mean, this isn't specific to AI, but I think there always needs to be ownership for anything anybody is working on, right? In terms of a project, make sure you have an owner, and that can be at the highest level or at the lowest level.

But owners and deadlines are super important; it's one of the things I learned from a former CTO at TIME Inc. Every task you put on a list of notes from a meeting has to have an owner and a deadline. So, those are just some recommendations from my experience. I'm sure there are several other pitfalls I'm missing, but I'd love to hear other people's feedback on that as well if you're listening.

Nate Rackiewicz: Excellent. And how could people provide that feedback to you?

Burhan Hamid: You can email me, or, I imagine this is going to be posted on LinkedIn, you can comment on it. I think that would be great: start a conversation. It'd be interesting to also see if there are opportunities to bring people together and talk about this kind of stuff. I've been part of a couple of groups of media CTOs and others that have come together, just kind of talked, and stayed in touch with each other. But maybe, you know, that's something the Fuse team can help bring us together on as well.

Nate Rackiewicz: Yeah, that's part of how you stay abreast of current technologies, right?

Burhan Hamid: Exactly, exactly. Talk to your peers.

Nate Rackiewicz: Network, network, network. Right? Well, Burhan, I'm really grateful for your time today, and really grateful for you sharing some insights and best practices here on how you've done it and made it successful. Certainly, it sounds like you have an AI-ready company and you've set up an AI-ready process, and I'm very appreciative of you sharing that here with us today.

Burhan Hamid: Nate, thank you so much for having me. This was a lot of fun. Maybe we'll do it again sometime in the future. It was great chatting with you, and thank you.

Nate Rackiewicz: Sounds great, and to our audience, thank you for tuning in. This is FuseBytes. I'm your host, Nate Rackiewicz, and this is a show sponsored by Fusemachines. Thank you very much.


Get to know our speakers

Nate V. Rackiewicz
EVP, Head of Data & Analytics, North America
Fusemachines

Burhan Hamid
Chief Technology Officer
TIME