Ep
12
April 1, 2025
37 minutes

AI Isn't Magic...But This Chief Magic Officer Knows Its Biggest Secret

On the Podcast
Speaker

David Reich

Speaker, Coach, Technology and Thought Leader, Executive Advisor, Customer Success and Chief Magic Officer at IBM, Creator of the Tactical Communication method
Host

Thiyagarajan M (Rajan)

Partner, Upekkha

AI is everywhere nowadays.

But is it really as intelligent as we think?

In our latest episode of Pivotal Clarity, David Reich joins us to explore the hidden psychology behind AI adoption.

In today’s conversation, David dives into:

  • Why generative AI is hitting a plateau while predictive AI is gaining traction
  • The misconception that AI is truly "intelligent"
  • How businesses misunderstand AI's capabilities
  • The secret to making AI projects succeed
  • Why trust is the real barrier to AI adoption

This episode is particularly valuable for:

  • AI and tech leaders
  • Business executives and decision-makers
  • Developers and data scientists
  • Startups and entrepreneurs
  • AI enthusiasts
Transcript

[00:00:02] Rajan:  AI is reshaping businesses, and ignoring it could leave your industry disrupted. I'm Rajan, and this is Pivotal Clarity. We talk to those building or using AI, founders and engineers with real-world experience. Our aim is to cut through the hype and see where AI is truly making an impact. If you're running a business or following tech trends, these conversations offer clearer insight than most of the press. Let's get into today's episode. Welcome to Pivotal Clarity, an AI podcast. Today, I have with me David Reich, who sits at a fascinating intersection of technology and human psychology. Officially, David is the Chief Magic Officer of IBM, and he's a veteran technologist. David, welcome to the show. I'd like you to give a quick introduction about yourself.

[00:00:50] David:  Well, thank you, and thank you for having me. Thanks to everybody for listening and watching. So like Rajan said, my name is David Reich. I am based in Boston, Massachusetts, and I have a long history in technology. And the other piece of my background is rather unique. Um, it's a psychological background. I'm actually, um, IBM's chief magic officer as well, and people say, what the heck is that? I'm actually a professional magician. And, no, we're not gonna do magic tricks here, but the idea is magic and technology. The magic part of it is the psychology of how people process information, deal with technology, deal with things that are strange, that they're not used to, where they don't understand how it works. And as far as my technology side, I have a couple of degrees in computer science. I have been doing technology my entire professional career. And in fact, a lot of the things we're doing with AI today, I was doing, I won't tell you how many years ago, um, when I was working on speech recognition technology and natural language processing, which, through evolution over the last fifteen-plus years, has turned into what we know as the large language models of generative AI, the semantic interpretation of when you speak: does the system know what you're saying and what you want? And the intersection of the psychology and the technology is something that's extraordinarily fascinating, and that's what we'll talk about. And before we get into that, let me just pop this up so you guys can get some extra information. Please go to my website. I'm hoping it shows up properly. It's showing reversed on my screen, but I think it is. It's, uh, my name, davidreich.com, d a v i d r e i c h dot com, slash tactical communication, all one word. Um, you go there, and you will be able to download some information. There's a whole bunch of video links and a whole bunch of educational stuff that I've got going on there, because in addition to what I do for IBM, I teach courses, I coach executives, and I guide, you know, decision makers based on not just the technology, but how people work with it, the acceptance of the technology. And there you go. And, you know, Rajan, let's get into some questions. Enough about me. Let's talk.

[00:03:04] Rajan:  So, David, uh, let me start with a bet. I know that the whole, uh, space of AI is going through a lot of changes. But if we were to take a bet, five years from now, what aspects of AI from today would not have changed?

[00:03:17] David:  I think things are moving very quickly, but I will tell you that from what I've seen, it's almost like an exercise routine where it takes a lot to get started, and that's very similar to where I was in the industry back in the early two thousands doing speech recognition and natural language processing, and the limitations that were there with respect to processing power, memory, the ability to build models without having a PhD, and all the data that needed to be collected. And we were at an inflection point. It was kinda like the bottom of a hockey stick. Okay? Sorry for the people outside the US, but it's basically a hockey stick. Then all of a sudden, things started to ramp up a little bit. And a few years ago, with ChatGPT and generative AI, all of a sudden, this stuff came about, and people said, oh my god, this is all brand new. But we'd been working on this for years. It finally got to an inflection point. And what I'm seeing now is, two-ish years later, I'm seeing a lot of companies working on how can we monetize this. You know, the providers like IBM and others, the companies, how should we use this? Alright? I'm gonna take a quick dive away from generative AI for a second, because there's also, and the term I use, and there's a lot of terms for it, is what I call analytic AI or predictive AI. And that's where you're taking data and crunching it and coming up with things and variables and permutations that humans could eventually figure out themselves, but can't process as fast as computers. So there's that, and then there's generative AI, which is taking the knowledge, reforming it, creating, you know, the text and the emails and the marketing copy and so on, and being able to understand things. And what I'm kinda seeing now is, in the generative AI space, I'm seeing that plateauing a little bit because everyone is now trying to figure out where do we go from here, how do we put it in. And for the most part, the vast majority of implementations are intelligent chatbots. They're intelligent chatbot assistants, and people who want to write copy and don't wanna spend tons and tons of time on it. So writing marketing emails and invitations to conferences and all kinds of ad copy. So I'm seeing that kind of plateauing a bit right now, because everyone's trying to figure it out and they're trying to put that stuff in. It's a much lower barrier to entry, but what's happening now is people need to start accepting that. Okay? Psychologically, the users start to say, this doesn't sound quite natural, and they need to start being able to trust this, and you have to be able to trust the data that went into those models. And I'm seeing a lot of companies backing off a little bit and saying, okay, we're going a hundred miles an hour. Let's put on the brakes for a second. Let's just tap on the brakes. Let's not go to zero, but let's come down to 35 or 40 miles an hour, something realistic. Because you're finding, of all the models that are out there, models are being trained on data that's copyrighted, and there are a lot of things that are being infringed upon. There are a lot of arguments about that. There are a number of things that need to get fleshed out. In five years, I see something similar to where we are, just growth in scope. I see people doing more things with the assistant technology and being able to interact naturally, right, rather than type questions and look at reports.
I see the intersection of the generative AI as well as the analytic or predictive AI. So rather than saying what I need in a sales report and predicting what customers are going to want to buy this holiday season, now it's going to be a lot more integrated, and it's gonna actually come up with those answers and say, this is what people are doing, that's what people are doing. So rather than a business analyst who needs to type in a whole bunch of parameters and mess with pivot tables and things like that to get their reports, now they're gonna be able to ask for what they want. Today, that will generate their reports and their graphs. Next is going to be making those recommendations. And that's where things are also going to get very interesting with people accepting those answers, because, let's face it, okay, and, again, as a magician: are you really gonna trust it? I mean, I come up to you, I wanna do a card trick or something, and I say, by the way, can I borrow a $20 bill? Let's do something with this. You're gonna say, no, I'm not gonna do that. You wanna do a card trick? Okay. But what they're gonna wanna do is supervise that. So I see a lot of people wanting the results, but not necessarily trusting them quite yet to say, just do it for me, and I take a step back and delegate. I mean, heck, first-line managers don't delegate their work to people they've been working with for six years. You're gonna trust some box? So that's kinda where I see that. I see some more of that happening, but I also see that, um, you know, that next inflection point is going to be the trust of those answers.

[00:08:47] Rajan:  I know. It's fascinating. So, David, you said, you know, there has been an inflection point since two years ago. AI has been around longer. But in those two years, what is the most surprising thing that you have seen within that two-year window, uh, in terms of, like, usage of AI, adoption of AI?

[00:09:00] David:  The most surprising thing? I'm gonna say it's surprising but not shocking. And that is, well, first of all, that it took so long to get to this point. Because quite frankly, I was doing this stuff twenty years ago, but it took the degrees we had in computer science to build these models, and huge amounts of processing and everything else. I thought that was going to happen a bit quicker than it did. Okay? And very similar to, like, what happened when Siri came on the scene. That stuff was there for a long time. I worked on Siri, by the way. I worked on some of the core technology of that. But all of a sudden, boom, it just rocketed the use of the voice assistant, if you will. And it's the same thing with the generative AI. I personally thought that the analytic AI was going to come out first, because all of that was number crunching coming up with pure business value, you know, very similar in a superficial way to, like, what Netflix and Amazon do. They look at your buying pattern, and they offer you some other things. I thought that was going to be the first real big usage of AI in business: business analytics, not just predicting user behavior, but, for example, predicting failure of parts. And the example I always give is in an aircraft, a commercial aircraft, you have the landing gear, and there's tons of parts in the landing gear, everything from the tires to the struts to the brakes and everything. There's hundreds of sensors in there, and they inspect those all the time. Really good thing. Very necessary. But at the same time, wouldn't it be really useful to know when those things are going to fail based on all the sensors, the runway conditions, how much brake was applied, all those pieces of data? Analyze that data so now you can proactively replace the brake pads, proactively replace the struts. You can save money and keep people safe. That's the example that I like, and that's pure data analytics. Then you could also go back and say, these pilots are rougher on landing gear than anybody else. Let's send them back for a little extra training. That'll save us some money and make people safe. Those kinds of things. I thought that was going to happen before the generative AI, and I thought the generative AI was gonna be a lot more evolutionary, not ChatGPT comes out and now everybody's gotta have it. That was the most surprising thing to me.
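
To make the predictive-maintenance idea concrete, here is a minimal sketch of the kind of analytic AI David describes: a model trained on historical landing-gear sensor readings to flag parts likely to fail soon. The CSV file, column names, and label below are hypothetical, purely for illustration.

```python
# Sketch of "predictive AI" for proactive maintenance: learn from past
# landing cycles which sensor patterns preceded a part failure.
# The file and column names are assumptions for illustration only.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("landing_gear_cycles.csv")  # hypothetical: one row per landing
features = ["brake_pressure_peak", "brake_temp_max", "touchdown_g",
            "runway_friction_index", "cycles_since_overhaul"]
X, y = df[features], df["failed_within_50_cycles"]  # 1 = part failed soon after

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = GradientBoostingClassifier().fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# In operation: score each aircraft after every landing and open a
# maintenance ticket when the predicted failure probability is high.
```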

[00:11:30] Rajan:  Yeah. So, David, is that a timing thing? You work with a lot of AI projects, and some are successful, some are not. So what are the things that make an AI project successful while so many others are not?

[00:11:42] David:  Oh, I can simplify that. Alright? And this goes back to my whole tactical communication, when I work with executives and do a lot of classes and stuff like that: setting expectations. People come in, and they think it's going to slice, dice, and cure world hunger. And all of a sudden, my revenue is gonna shoot through the roof and my costs are gonna go through the floor, and they have just these unrealistic expectations. I think the most important thing is to go in and say, this is what my goal is, and how am I going to get there, and what am I realistically going to expect? In fact, I'm working with a large client right now, um, with a customer support center, and they spend way too much money on people picking up phones. And, you know, this is not a rocket science case, but they haven't taken the leap into this assistant. How can I deflect physical calls into self-service, and then have the self-service be able to do that? And the realistic expectation is: don't try and do this all at once. The first thing you do is take the questions and answers. Okay? And now start to deflect, I'm making this up, okay, 15% of your calls, 30% of calls, whatever that is. Then take that money you just saved and now put it into some of the self-service applications, and make the self-service applications, going back to the psychology, think like the user. When I do these coaching sessions and everything else, I bring a lot of my magic performance to the business world. And the most important thing I say is put the audience first. It's not what you want. It's not what you need. It's what they need. Why is the audience there? Why is that customer calling you? Think about what they need. Try and get there, and here's a secret of magicians, quite frankly: be a step ahead of them. Be two steps, three steps, five steps ahead of them, and think about where they want to go, not where you need to take them. Because if you need to take them and you're grabbing them by the collar and pulling them over here, that's not a great customer experience. They're gonna use you because they don't have another choice. You want them to want to use your services. So by thinking about what they need, where they want to go, what they need to accomplish, and setting some realistic expectations... the biggest reason I've seen a lot of the failures is people just don't go in with the right expectation levels.
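
As a rough back-of-the-envelope illustration of the phased deflection David describes, here is a tiny sketch. Every number in it is made up; the point is only that modest deflection rates free real money to reinvest in better self-service.

```python
# Hypothetical back-of-the-envelope model of phased call deflection.
# All figures are invented for illustration, not taken from the episode.
monthly_calls = 100_000
cost_per_agent_call = 6.00      # fully loaded cost of a human-handled call
cost_per_self_service = 0.50    # cost of a call resolved by the assistant

for deflection_rate in (0.15, 0.30, 0.50):
    deflected = monthly_calls * deflection_rate
    monthly_savings = deflected * (cost_per_agent_call - cost_per_self_service)
    print(f"{deflection_rate:.0%} deflection -> ${monthly_savings:,.0f}/month "
          "freed up to reinvest in self-service")
```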

[00:14:14] Rajan:  Are there any hidden costs that people don't account for, which is why costs blow up and lead to failure?

[00:14:20] David:  From a cost perspective, wow. I could get in a lot of trouble with this. So, quite frankly, I'm gonna put it into an extraordinarily positive light. You can't ask too many questions of your technology partners. There are no stupid questions. People go in, and either they're afraid to ask, or the technology people, the salespeople, they know their scripts. They know their feature sets and everything else. Nobody is a cookie cutter out of the box, so you need to ask what's right for you, and don't be afraid to keep asking. If they wanna be your partner, okay, and that's where I am in this, I don't care how many questions you ask me. I'm gonna have infinite patience in answering everything you need, because that's gonna set your expectation level. That's gonna make both of us successful.

[00:15:15] Rajan:  it. Any counterintuitive, uh, insights that you've gained while working with folks and getting these large company projects implemented that make things more successful?

[00:15:25] David:  Not so much counterintuitive. I think, and again, you know, I keep talking about this because, again, my passion is... I love the technology, but my real passion is the human aspect here. And a lot of the communication, and again, I'm just gonna drop this in here for a second in case anybody didn't write it down and you wanna do it while I'm talking. With my tactical communication method, what you're doing is you're exposing what the users, the clients, the customers, what they want and what they need, oftentimes even before they know it, because you're using these mechanisms. And, again, that's why I love teaching this stuff: to teach you how to get that information from people that they didn't even realize they needed or wanted yet. And I will tell you, that's another secret of when I do a mentalism show, when I'm doing mind reading. I'll give you the secret. There's no such thing as magic. Okay? It really is... I mean, I'm not reading people's brainwaves. What I am doing is I am asking questions, and I am listening, letting them drive the conversation even though I need to know kinda where we need to wander through. And oftentimes, again, you said counterintuitive, I'm gonna say it's almost hidden, where they think they want something, and by the time we're done with a thirty-minute conversation, they're like, you know, here's what I thought I wanted, but this is really where the important stuff is. And, you know, I'm not telling them what to do. I'm helping them figure out what is most important to them. And I think that's the most important thing, or the most prolific thing, when you said counterintuitive. In fact, I'm working with one client right now where they said, this is what we need to do, and this is what we need to do, and this is what we need to do. And I started off by saying, okay, I hear you, and I get it. It makes sense. Tell me why. Help me understand what's the motivating factor behind what you're doing. Now, did I have another idea for them in mind? At the beginning of the conversation, I didn't, but that's what we do. We go on this journey together. And, again, I can talk about AI technology forever, but this is the human aspect of what's evolving. We are on the bleeding edge of what is going to happen with AI. And people see these things: that looks cool, that looks fun. And, you know, they come up with these intuitive ideas, but that might not be what the right thing is for them. And by understanding what their motivation is and what they need to do, that's where we start having the real fun, where they say, wow, I can do this. And I say, and by the way, that's even easier to implement than what your original need was, so we can do those things.

[00:18:12] Rajan:  Alright. So that makes sense. So let's talk a little bit about what you described as tactical communication. If we were to explain tactical communication to an 11-year-old, what would it be? How would you explain it to them? And then we can talk about, like, you know, the overlaps of that with AI.

[00:18:25] David:  So look. They always say, when a magician does a trick, that perception is reality, and that's what, as a magician, we do. You know, we make you see something disappear. Now it didn't really disappear, but you perceived it. And all communication is perception.

[00:18:45] Rajan:  All communication is what is received, right, not what is sent.

[00:18:48] David:  All communication is perception, how they are seeing you. And tactical communication is the art and the skill of connecting with someone so they will talk to you. And there's a whole bunch of things, again, that I do in these courses, where at the end of the day, the most important thing, and this is some of the, uh, stuff that you'll get when you download those things from me, I have a whole bunch of documents and some videos and some cool stuff, uh, there's a TEDx talk that I did in there, which was really cool, but you need to be candid, authentic, and vulnerable. The whole idea is we do business with people we know, like, and trust, and this is how you get people to know, like, and trust you. And in doing that, then you can start to expose what they need, and they will be a lot more open with their problems, their concerns, their fears. Okay? You know, how much they wanna spend, how much they don't wanna spend, and being able to bring those things out into the open. And, yes, while not AI specifically, that's the success for any communication, whether you're communicating with your employees, your investors, customers, or even implementing some sort of a system that is going to interact with your customers. By figuring those things out through tactical communication, you can develop better systems, cheaper systems, and, look, instead of having customers, you're gonna make friends, and you're gonna make clients, and they're gonna be coming back to you. And that's the idea, and AI is just the latest new thing that we have to get people used to. So, perfect example, in an AI assistant: I want some information. I'll make this up. You're renting a car, and I want to change the return date for the car. I haven't started renting it yet, or even if I have started. And you go in, you say to the assistant, I wanna change the return date of my car. And the assistant says, go to the website and do this and do this and do that. Easy enough. The assistant can even say, go to the website and click on this and click on that, and now do this and now go here. It can even say, I can take you there right now. How about the customer hearing or seeing: I can do that for you right here. Would you like to do that now? And then it takes you right into that transaction. Okay? So now you're doing what the customer wants, and they're saying, wow, I thought it was just gonna tell me how to do it. It's gonna do it for me? That's awesome. So those are some of the things where you can use that predictive skill. Now, going a little bit further, and I will, again, give you a secret of some magicians. There's something that we do called outs. When I perform, not every trick is going to go the way it's supposed to go. It just doesn't happen. Simply, hey, I never go anywhere without a deck of cards. Right? Let's say I have somebody pick a card, and they lose it in the deck, and some stuff happens, and I say, okay, was this your card? And they say, I don't remember. How am I going to recover from that when things don't go the way they're supposed to? Which is gonna happen in any system that interacts with a customer, like generative AI, or even generative AI with predictive AI. I want a report on the most problematic component of the front landing gear in a Boeing seven thirty seven eight hundred max, and you don't get the right answer. The system needs to be able to not make you start all over again.
It needs to be able to say, what was right? What was wrong? How do I get where we need to go? So as you implement the system, you need to think like your users and be able to put those systems together and be there. And even if it's something that wasn't exactly right in the answer, you can make it look like that's exactly what was supposed to happen, and you're just taking them through the next step. So, again, these are the things, and by doing some of this, some people would say, oh, that's just pandering to people, it's a little disingenuous. I look at it in a completely different way. I look at it as: people will now trust the system more, because the system knows that not everything happens perfectly, and it's ready for me no matter what I need. The systems that are wildly successful are the ones that can be there no matter what happens. When I say no matter what happens, I mean, we know things will get totally crazy sometimes, but by being able to handle the vast majority of those things, that's what gets the users to accept your systems. I've seen a number of cases where a company put in a lot of money to put in an assistant-type system, and people didn't use it. How many times have you called some company, and the first thing you say is operator, or you start pressing 0 on the phone, because you don't wanna deal with the system because it's terrible? You don't wanna be that system. You don't wanna waste that money, and you need to be there and have that continuous feedback. That goes back to the tactical communication. That goes back to being candid, authentic, vulnerable, asking questions, listening to what they need, thinking about the users, okay, and putting that audience first rather than what you need. Think about what they need, what they want to do, and get there before they even know that's where they wanna go.
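
One way to picture the "outs" idea in software terms: when the assistant's answer misses, keep whatever the user already told you and ask only for the missing piece, instead of forcing a restart. The sketch below is a simplified illustration with hypothetical state and slot names, not a description of any particular product.

```python
# Sketch of an "out" for an AI assistant: recover from a missed answer
# by reusing the context already collected and asking a targeted follow-up.
# The dataclass and slot names are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class ConversationState:
    intent: str | None = None
    slots: dict = field(default_factory=dict)

def next_message(state: ConversationState, answer_ok: bool, missing: list[str]) -> str:
    """Choose the assistant's next message after an attempted answer."""
    if answer_ok:
        return "Here's your report. Anything else?"
    if missing:
        # Confirm what we DID understand; ask only for the gap.
        kept = ", ".join(f"{k}: {v}" for k, v in state.slots.items())
        return f"I've got {kept}. I just need the {missing[0]} to finish. Can you give me that?"
    # Last-resort out: offer choices rather than a dead end.
    return "I didn't quite get that. Did you want a report, a prediction, or a comparison?"

# Example: the landing-gear report failed because one detail was ambiguous.
state = ConversationState(intent="component_report",
                          slots={"aircraft": "737", "system": "front landing gear"})
print(next_message(state, answer_ok=False, missing=["component name"]))
```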

[00:24:21] Rajan:  Now, as people think about, like, you know, bringing technology and bringing AI in, what are some of the mistakes that, uh, we are making right now when we are bringing generative AI technology in, like, you know, making a system like this? Are people not thinking through this tactical communication well enough that the system can be trusted more?

[00:24:38] David:  I think people are way too business focused, and they're not focusing on the people. And if you focus on the people, the users, the customers, you know, the business analysts who are going to need the output of these systems, if you think about them first, the business successes are naturally going to follow. If you're trying to do things based on, this is how much it's gonna cost me, and I wanna cut cost here, and I wanna get it out faster there, now you're measuring your success based on artificial metrics and not on how usable, how accepted your system is. You focus on those things first rather than the business. I've been working with one client, and they decided, we have enough skill, we're gonna do this ourselves, we're not gonna pay for this, we're not gonna pay for that. They coulda had their project done a year ago. They're still working on it, because they decided to cut corners. So, you know, I think the biggest mistake companies are making, I go back to, uh, a very old adage: measure twice, cut once. You can buy something high quality and do it high quality, or you can pay to do it twice. And that I see as the biggest mistake. People are too anxious, rolling it out too fast. And, again, this is another thing, as a magician, you know, look, psychologically, perception again, perception is reality. So if you roll out your initial system, your bootstrap system, to a certain subset of customers, now you start getting that real feedback, and you can start iterating on that and rolling it out to more customers. As far as the broad customer base sees, you've released this system. So their perception is, look what they just did, that's awesome. It doesn't matter that not everybody can get to it yet. It doesn't matter that it's still in its infancy. The fact is you got there. And by doing it in these incremental steps and iterating on it and growing it from there, you're giving the people that you're focusing on that high-quality, high-effectiveness experience. And the perception out there is you've got this, and people then say, I can't wait to get it, I can't wait to get it, and you build up that anticipation. That goes to the marketing. Again, I keep going back to the performance and the magic stuff, but that's what you want. You wanna build that anticipation. So it's not just the technology. At the end of the day, it's all about the people. Those are the people who are gonna pay money to you, and they're gonna use your systems. That's what keeps your lights on, and there you go.
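
One common way teams implement the kind of subset-first rollout David describes is deterministic percentage bucketing keyed on a stable user ID, so a given customer always sees the same experience while the percentage is gradually raised. This is a generic illustration, not something prescribed in the episode; the function and feature names are made up.

```python
# Generic sketch of a phased rollout: expose the new assistant to a
# deterministic percentage of users, then widen as feedback comes in.
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Stable bucketing: a given user always lands in the same bucket
    for a given feature, so their experience doesn't flip-flop."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

# Week 1: 5% of customers see the new assistant; later raise to 25%, 50%, ...
if in_rollout("customer-1234", "ai-assistant-v1", percent=5):
    print("Route this session to the new AI assistant")
else:
    print("Route this session to the existing flow")
```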

[00:27:16] Rajan:  Yeah. Last few questions, David. What is the biggest misconception that, uh, we have about AI today?

[00:27:23] David:  Oh, the biggest misconception is that it can do more than it really can, and that it really is intelligent. Because at the end of the day, it's really not intelligence. And they say, oh, but it's putting together things that have never been put together before. It's like, it's not really creating. It's coming up with things and putting them together in different orders, and it's almost like the million monkeys idea, where eventually the monkeys are gonna do all this too. You give them enough time, give them a few million years, a few million monkeys, they're gonna come up with all of this stuff too just through random statistics. And, yes, you can direct AI a lot better than you can direct a million monkeys, but people, I think, are putting too much faith into what it's doing, um, and, you know, into what it's generating. And a personal view of mine is people are taking what comes out of AI, and they are sending it out as though it's theirs: marketing copy. I have some friends who do magic shows, and they do a lot of their marketing copy straight out of AI. And I said, so now you have somebody hiring you based on what they read, but what they read is not what you wrote. And, personally, I think there's a much more effective use of this, and this is what I use it for. I will ask it questions around what I need so I can use it as fertilizer for my own ideas. I say, what are some better terms for this? What are better ways to say this? Give me a two-word phrase that conveys this. I am a professional speaker and a course creator and executive coach, and I am looking to name my next course that talks about these things; give me 10 of your top options. And, you know, so I start giving it those prompts. And then I'm not taking those names. I mean, quite frankly, that's how I came up with tactical communication, because one of them had the word tactical in there. I was like, oh, I need that. That's so descriptive of what I do. And, you know, I use that as fertilizer to come up with the things that I want to write. And that to me is the best use of the generative AI in that context. And then, of course, you've got, you know, the models that can, you know, scrape websites and find information and actually build assistants. So there's a whole bunch of facets to this.

[00:29:51] Rajan:  So there's a whole bunch of of facets to this. David, my last question is gonna be, what is a question that most people should be asking today, but they're not asking as of today on AI?

[00:30:02] David:  Well, there's a whole bunch of them, but I would say for them to ask their, what I'll call their AI professional, whether it's their AI team, whether it's a vendor that they're talking to for language models or services or anything like that, and say, give me some examples of some of your successes with AI. Show me. Don't tell me all the cool stuff you've done. Don't tell me the cool stuff you can do. Don't tell me why your large language models are better than anybody else's, or faster, whatever. Show me some results. And that goes back to, let's use that as fertilizer for me to have some of my own ideas. Show me some of the things you've done. Let me tell you what my business is. Tell me what you think and where you think I could, you know, benefit from putting various types of AI into my business, into my products, or delivering services for my customers. Again, as a technologist, okay, as a consultant and so on, we all jump into the solution space way too fast. Something else, again, it's all part of my tactical communication stuff, is first seek to understand, then to be understood. And so many people are jumping in and saying, I know what we can do. And they do that before they really understand what the problem is. And right there, they're already off course, because they can't possibly know anything that quick. So even the people who are looking to implement AI projects, they should say, here's my business. Let's talk about some ideas. Tell me what your ideas are for me, and show me how things you've done relate to this and what successes they've had. And it's not like you're gonna try and mirror those successes or replicate them, because everyone's business is different, but use that as fertilizer for how you want to move forward in your business.

[00:31:59] Rajan:  Dude, I'm hearing you say that most people ask the question of how AI is gonna replace things. What you are suggesting is that people are better off asking how to use AI as fertilizer, as a catalyst. That's a better question and a more powerful question.

[00:32:12] David:  You know, in some cases, people say, I need to simply deflect calls in my call center. Okay? And that's an assistant. That's a pretty cut-and-dried thing, but a lot of people don't quite know what they wanna do with it yet. And even if you just spend a couple of days on it, no one's saying spend six months to a year, you know, sitting around pontificating about this, but it doesn't hurt to ask those questions, because that time spent upfront is going to save you tons of money and tons of time on the back end in putting the projects together.

[00:32:45] Rajan:  David, it was so wonderful having you on the show. Thank you for joining us.

[00:32:49] David:  Thank you. Um, I'm just throwing it up one more time in case anybody didn't get a chance. And, look, when you get that stuff, you're gonna get a couple of emails from me. I would love to talk to anybody about this stuff. My passion, as you can tell, is helping people through this stuff. I'm not trying to sell everybody everything. Okay? I want people to make better decisions, and that's why I do what I do.

[00:33:12] Rajan:  Wonderful. David, thank you. Thank you. That's it for this episode of Pivotal Clarity. This is an Upekkha podcast. Upekkha is an accelerator for global Indian founders building AI software companies. We're exploring the fast-changing world of AI together with our listeners. If you like this podcast, you can find more on our website and other popular podcast apps. Subscribe if you want to keep up.

Latest Podcasts
Ep
1
What's Real in AGI: A Conversation with Peter Voss
November 1, 2024
36:15 Min
Ep
2
How AI is Actually Changing Accounting with Dr Mfon Akpan
October 30, 2024
39:14 Min
Ep
3
AI's Leverage - Timing, Data and Human Relationship: Insights from Aravind Krishnaswamy
November 1, 2024
42:19 Min
Ep
4
The Surprising Upside of AI Regulation with Maya Sherman
October 30, 2024
36:07 Min
Ep
5
Robotics Inflection Point: Arshad Hisham on The AI-Robotics Stack & Traditional Labor Markets
November 4, 2024
28:52 Min
Ep
6
When Laws Meet AI, with Laura Carr
November 19, 2024
19:10 Min
