Ep 4 | October 30, 2024 | 36:07 Min

The Surprising Upside of AI Regulation with Maya Sherman

On the Podcast
Speaker

Maya Sherman

AI policy researcher, advisor, and ethicist at the Embassy of Israel
Host

Thiyagarajan M (Rajan)

Partner, Upekkha

In this Pivotal Clarity AI Podcast episode, Maya Sherman, an AI policy researcher, advisor, and ethicist at the Embassy of Israel, joins host Rajan Maruthavanan to discuss:

  • How AI, ethics, and climate change intersect
  • Ethics, bias, and human-machine collaboration
  • Why founders must embrace AI regulation
  • How AI could revolutionize climate action

An Innovation Attaché at the Embassy of Israel in India and AI Literacy Project Co-lead at the Global Partnership on Artificial Intelligence (GPAI), Maya has spent over 11 years helping global organizations and governments navigate AI transformation.

With a focus on ethical governance, she has co-authored key books on AI in the public sector and on adapting public service for millennials. Her work spans continents and cultures, offering expert guidance on integrating AI responsibly.

Transcript

Rajan: AI is reshaping businesses, and ignoring it could leave your industry disrupted. I'm Rajan, and this is Pivotal Clarity. We talk to those building or using AI: founders and engineers with real-world experience. Our aim is to cut through the hype and see where AI is truly making an impact. If you're running a business or following tech trends, these conversations offer clearer insight than most of the press. Let's get into today's episode. Welcome to Pivotal Clarity, an AI podcast. Today, we have with us Maya Sherman. Maya works on AI policy and ethics. She has spent over ten years at the intersection of AI ethics, climate technology, and what you might call digital diplomacy. Right now, she's the senior innovation officer at the Israeli Embassy in India, where she's trying to get India and Israel to work together on climate tech and AI. She also helps run an AI education program at the OECD, where she's teaching people in climate-related jobs how to use AI. Basically, she's trying to make AI more useful and less scary in areas that really matter. Maya, welcome to the show.

Maya: Thank you so much. It's wonderful to be here.

Rajan: Maya, what made you decide to work on AI and climate? Was there a specific moment when you realized this was an area you wanted to focus on?

Maya: It was a very interesting and gradual process. I started in cybersecurity, worked with enterprises and a bit more with governments, and I gradually shifted into AI, and I would say into what was then the beginning of data ethics as an academic discipline. But as time passed, I was looking for the social avenues, and it was during Covid-19. Back then, I felt that the discussion on climate existed, but it was not booming as it is now. So I actually started a bit more with inclusion technologies. What led me to work more on the intersection of AI, climate change, and ethics was my work and research on gender and AI. It's not about the climate aspect in a way, but for me, it was the small moment of understanding that intersectionality matters. It doesn't matter if I work on AI-related matters at very big Fortune companies; there will always be this component connecting all of us to the discussion. So it will never be just me in front of the product designers, the core tech practitioners. It's actually about the intersection of all of us under one big ecosystem, the environment. And working on ESG with these companies made me realize that the discussion should focus on what we can actually do that requires AI, which I believe is climate change, trying to see how we can enhance resilience. So it was an interesting process that actually happened to me while working with enterprise clients. I think it surprised me in a way. But perhaps it was also my wish to try to shift and create something new in a different industry, leading me to India, which was also not always as expected.

Rajan: You work in the innovation department, and you also just mentioned working at the intersection of ethics. Innovation is about change, and ethics is about safety. Sometimes this can come across as a trade-off or a balance. How do you actually navigate this? Are there any specific examples that you could share?

Maya: I think this is such an interesting perception, because it's true that ethics is about safety, but I would also say that ethics is about what we want to be as a society. So I think it's broader. Sometimes ethics sounds very frightening, very academic, everything that we shouldn't do. But if we think about it thoroughly, it's also about what we should do, meaning it's also an enabler. Of course, ethics gives us a lot of restrictions on what we wouldn't like to see in society, and of course, when we're speaking about innovation, we want to see growth. So in the most basic aspect, whenever I had to design AI-driven products for mass populations, specifically in HR tech companies, and it's also true for climate-related services, the goal of innovation is gains: we want to return the investment, so we want to increase the scope and reach those who would be able to pay back the investment. But the ethical trade-off here is the fact that we also need to think about who has access. If I'm going to make sure that Netflix is accessible in all the places that can actually afford it, there are many places that won't be able to get it. What about all the places that do not have digital infrastructure? What about all those who do not know how to use mobile devices properly, or do not know how to access computers? Today that sounds a bit odd in certain places, but there are still places that do not have the infrastructure. So there is a kind of trade-off, because in the end, private companies, and innovation in general, want to grow; we want our services to be wide and strong enough to sustain business models. This ethical perspective is a bit of a hurdle, because we are actually saying: okay, we don't really care just about the clients, we want to make sure that whoever wants to be a client can be. That's the tricky part. But what I perhaps love the most about the climate discussions in AI is that specifically in this space, we need everyone involved, so we don't have this selectivity. And I think that's why I shifted from the industrial side of innovation, working with product designers (I worked as a technical product manager and used to launch AI-connected product features), to what I think we're seeing today in the climate space, and also in edtech and HR tech: we are trying to design products for the most inclusive audiences we can think of. So there is a balance between inclusiveness and the commercial aspects. But I do feel that in the climate space this trade-off is slightly better understood, and it's almost required. There are countless examples, but I think the climate space brings this intersection to light in a very interesting way and enables this trade-off in a more sustainable way.

Rajan: You mentioned something very interesting that actually changed my perspective: ethics is not about what you shouldn't do, it's about what you should do. So what are some other misconceptions about ethics when people talk in the context of bringing in technology? When you talk to technology founders, everybody is just thinking, like you said, about product: how do I create a great experience? But ethics is also an enabling design tool rather than a regulating design tool. What other misconceptions do people have about ethics?

Maya: First of all, there is a very big misconception that it's purely academic, a core theoretical school of thought spanning many disciplines, and that companies should not be involved in it because it's a scholarly domain. I think that if there is a discipline that went through such a massive transformation in the innovation space, it would be ethics. The thing is, we need to think of it in the different taxonomy it has received. When we speak today about AI fairness, responsible AI, algorithmic fairness, social AI, we are actually speaking of some of the ethical roots. Does it mean that we need to know everything about ethics? No. In the end, especially those of us working in the tech space don't have the time to go into every discipline. But the ethical discipline brought some core fundamental changes, in my view, to the way that we see AI today. The fact that we're speaking about responsible AI is the result of a very intensive and gradual process of analyzing what ethics and AI bring to our world. It took a few years, actually, to understand it properly and to gain this commercial perspective. The other misconception is that we can be fully mindful of our biases and control very clearly how we design a product so that it will not be biased against certain populations, and that we can enable discrimination-free services. Most studies and most products show that there are so many biases that we are just not aware of, and there are also biases that do not always result from the developers. There are biases within the process, biases within the algorithm. Sometimes it's just beyond what we think we know about biases. This is a bit tricky, because I'm actually saying here that we're not able to prevent biases in our products, and this is perhaps a sad realization. The thing is, it's important to be aware of it, because we do have some debiasing tools, and we have ways to make services more accessible while understanding that we cannot encompass all the biases. But first of all, we need to understand that we are limited in our ability to grasp this ethical concept of bias. The third one, and I think that's the thing that sometimes bugs me the most, because I feel that I also started with this conception, is the idea that algorithms are bad, they're going to destroy us, they will take our jobs, and the ethicists are here to protect us and tell us not to use technology in such a way. This, I would say, was at the beginning, and some of the discourse highlighted AI as the biggest devil: the algorithm that will replace us, singularity in its most dystopian meaning. But I think as time passed, we were able to balance these perceptions. Today we understand that humans and machines actually work together quite well in some cases, that we learn a lot from the way machines interpret human inputs, and that if we use machines properly, they can fill in a lot of the gaps we cannot deal with on our own. So I think this understanding of human-machine teaming is something that has been under discussion for quite a while.
Sometimes the warfare discussions and the place of drones frightened all of us away from thinking about what AI will do next. But as we're seeing today, especially in the climate space and in education, AI can do magic in certain disciplines. And I think that policies and regulations are also trying to capture this, in the sense that we should not prohibit AI from happening just out of the fear that it will replace us. We should also think more about how AI can complement us and become our extension.

Rajan: Yeah. The last or third part that you said, I think, comes straight out of the movies. We watch too many movies like Terminator and The Matrix, and you can't make a popular movie unless it is very dystopian. That's why AI gets painted in that dystopian version, and that creates the misconception about the role of ethics. Like you said, AI can be such a force of positive change. You also talked about the role of regulation in innovation. Now, the world I work with is founders. They don't use the word innovation; they use the word startup. When I was earlier working at Intuit, we would use the word innovation, and one description I heard was that innovation is the meeting point of a change in user behavior that aggregates into market behavior, a change in technology, and a regulation that comes in to make sure it is more for the benefit of society. Now, usually I don't see founders thinking about regulation. Founders are always thinking about the next day, the next year. Regulation has a horizon of, let's say, five or ten years: you release a policy and it has an impact three or five years later. How do you have conversations with business folks, or with founders, about how they should think about regulation? And specifically from an AI perspective, are there specific things founders should think about to make sure that this innovation really transforms into that force of positive change?

Maya: This is probably one of the biggest clashes, and probably one of the more interesting ones I've seen in South Asia. I used to live in Europe before, and in Europe it's very different, because regulation has an almost sacred place, and there are a lot of reasons behind that. It's kind of a strategic asset of some of these think tanks and governments that they're able to produce very strong, cross-cultural regulatory pieces that can afterwards be shared globally. But today, and we're seeing this specifically in India, a country that promotes innovation in a responsible manner, because there is a very big population, a vast number of startups, and a very skilled tech workforce, we need at the same time to make sure this population is not harmed, especially when it's a diverse population. So today we're seeing the regulators much more involved in the technological discussions. If in the past we had almost echo chambers, the tech folks meeting the tech folks and the product designers in technical meetups, today, perhaps again more from the Indian angle, but in many other countries as well, we are seeing many more multi-stakeholder initiatives, many more meetups bringing everyone to the table, understanding that we will not be able to innovate and create without limitation. It won't happen, because it can't. If AI continues to be as pervasive and as strong as it is right now, it will need to have some limits, because that's how we behave as humans: we sometimes need guidelines. What I usually say to the founders and startups I'm meeting today is that it's crucial to understand how regulators think, also because they are compromising and becoming more agile and flexible. They want to understand the technical discipline; they want to understand how products and services can scale. The thing is, AI is new. Because it was so new in the beginning, it was sometimes hard for regulators to invest the right time in learning it. But today, in the era of generative AI, regulators are taking the time to understand it because it's more accessible. At the same time, founders have to understand what's happening on the regulation side, because so much is happening, and it matters: in the end, regulation can of course be a kind of advisory document, but it can also have repercussions. If things do not go in the right direction, if certain populations are harmed, if an AI-driven service goes out of control, that's the place where regulation comes in. Startups today have to understand what kind of advisories and guidelines come from these regulators. It's almost like two forces coming from opposite directions, but we have to meet. This is specifically what we've been doing with the Global Partnership on Artificial Intelligence, because we come as a group of individuals and entities across disciplines, and these discussions are the ones that enable us to talk more and also see where the gaps are.
So that's what I usually recommend to startups and founders: understand that there has been a very big change in the regulatory landscape that enables this discussion much more often.

Rajan: Great point. I recently read a paper on the topic of regulation in AI, and it said metaphors and language shape a lot of policies. When the tech folks come in, they know what they are talking about and perhaps use esoteric tech language. But when you are having conversations with bureaucrats or other stakeholders, are there languages and metaphors that people carry over from their exposure to other industries that get in the way, or that have helped, in the regulation conversation? I'll give you a simple example. I watched a video almost a decade ago where a bureaucrat was talking about cloud technology as if it were a literal cloud, saying that if you don't regulate it properly, our data will go up into the cloud and be accessible to everybody. Those from the tech field know that's not how the cloud works. But language gets in the way of understanding. How do you deal with it? Have you seen any examples that made you chuckle?

Maya: I feel that there are certain terms and terminologies that have started to become common between both sides. For instance, I saw that ethics by design, as a term, started to act as a kind of strategic bridge. Everything connected to the iceberg, the tip-of-the-iceberg illustration, is something that has been repeating on both sides. And I think it's a good one, not because it's necessarily so technological or so strategic, but simply because it shows that both sides understand there is so much about AI that we don't know. There is something in this narrative of uncertainty, of not being so sure where we're going, that brings both sides together. In addition, these illustrations repeat themselves, and there are attempts to create technological variations to solve policy problems, and vice versa. So all of a sudden you see discussions on agile methodologies in the policy space, everything connected to policy simulation and policy prototyping. Who would have thought this would come into such a non-dynamic discipline, something that is supposed to be very thorough, very slow, very waterfall, if we're using the technological meaning here? This entire discipline of testing and product testing really got into the policy space. To me this is fascinating, because it shows how much regulators are trying to engage. And I think it's the uncertainty that brings all of us together, because in most cases we're just not so sure what will happen afterwards. Will we see other things beyond AI? What about quantum, what about climate, what about all the other problems that are still there? So yes, I think the testing discipline has been a very interesting bridge between these two blocks of knowledge.

Rajan: I also follow a professor from Princeton who has a book coming out this month called AI Snake Oil, and his focus is on certain terminologies. Double-clicking on that question: he says that when you put these terms in the hands of a marketer or a Hollywood movie maker, they blow them out of proportion. Are there terms that you feel have just gone out of proportion? What are your thoughts on that?

Maya: It's especially true of the way culture has shaped the perception of it. That's perhaps the problem I feel: in many ways, series and movies created the image around AI, and this produced a lot of exaggerated examples and scenarios of cyborgs, of superhumanity, of the world being destroyed and controlled by machines. In a way, human creativity, and the way we try to think of better-selling scenarios for movies, has shaped the way we perceive technology, without considering that technology is perhaps not in such a scary place, for various reasons. First of all, for technology to be as frightening as it is perceived, we would need vast infrastructure everywhere, and that doesn't exist; not everyone has access to it. And even when they do, there's a question of motivation, a question of the why. So I think a lot of the exaggeration comes from the very strong hypothetical visuals and representations we consume in our culture, but it doesn't always have such a strong connection to the way technology is developed. There are discussions I found very interesting about how culture has shaped technology and actually inspired developers to try to build something similar. It's kind of the chicken and the egg, in a way, but it definitely led to a lot of hype and a lot of exaggeration, not always needed, I think, for innovation purposes.

Rajan: Maya, you're trying to teach farmers in India about AI. What is the hardest part? Have you found any trick that makes it easier to help them understand AI?

Maya: It's tricky, because AI has some scary possibilities of replacing certain functions of farmers. So the first fear, and that's what we usually encounter, is how we make sure they're able to trust an alien, artificial component, and more than anything, one that doesn't show its benefits and costs right away. Compared to a lot of other devices, it's not always clear to farmers that if they put an AI-driven product or service into their day-to-day, they will gain more and be able to provide for their families in a more sustainable way. What we do, and this is the vision of the project with the OECD and GPAI, is make sure we are supporting the informal sector and those who usually do not have access to technology or the knowledge, because they work in other disciplines. We focus on common case studies, everything that shows how AI directly benefits agricultural processes, and we try to show cases from across the country, so it is more familiar to the farming community and they can start to feel a bit more comfortable with the concept. It's an introduction, and of course we use a lot of visuals. And the main point in India: there are more than 20 official languages and some 900 non-official ones. So whenever we do the pilots and the courses, we make sure the course is translated into the regional or more common dialects. It's a long process, it's very interesting, and of course it's very different from implementing AI literacy for other sectors, but it enables the ecosystem to be more diverse, and more than anything, it enables more actors to be part of the AI evolution, in a way.

Rajan: Maya, you've worked with governments all over the world, and you may have encountered businesses and startups working in different countries. Have you seen common mistakes that people make when going to different countries and trying to bring change with new technologies, such as AI, or in climate?

Maya: I think the biggest mistake is usually trying to implement whatever we have from home, from our original country, automatically in the new country. I'll speak about myself, because I've always worked in startups, and when I came to India, I came directly from the UK, and before that, Israel, and it's a completely different ecosystem. For instance, everything connected to languages was not such a complex issue before. Israel is a very small country where mainly Hebrew is spoken; in the UK, of course, it's English; and then in India, it's a whole different discussion. So first of all, just be open to seeing changes in the new place you will be, and don't come with biases from the previous place you were. It's good to have some assumptions, but it's more crucial to be able to break them, to come into the process as bias-free as possible. Another thing, perhaps more on the governmental side: if there is a variance between you as a practitioner and the ecosystem you'll be in, this variance is even bigger if you're coming as a foreigner working with a different government, because governments behave differently and have different priorities. It takes time to understand the ecosystem, especially the governmental ecosystem and the way it works in that country. So in addition to understanding how a new place behaves, it's also important to understand who the actors are that can lead you to the more strategic places to drive change, and more than anything, to understand what the new place needs. That's perhaps the biggest misconception, because sometimes we think we know what the new place will need, and in the end it's different. All of this sums up to being open: when you move to a new place, and I know this was true in my case, a lot of your assumptions will be broken, and that's good, because such a big part of the process should be dedicated to understanding where you currently are, understanding what this place needs, and then seeing what you can give in return. And if you think your value proposition is not strong enough, then that's what you need to work on. I think it's a healthy process, but it requires some time. It was at least the case for me.

Rajan: Maya, we are coming to the close of the podcast, and I want to ask you one last question. What do you think AI will actually do about climate change in the next ten years? Is there some new way of working together that could change everything? And as part of this: given what you know now, if you were doing this right out of college, how would you do things differently?

Maya: I think we will see the discussion move into a much broader discourse. Today we are very service-specific, very app-related, very sector-connected. And it's not just an AI phenomenon; sometimes we over-focus on AI and forget that there is a whole other set of technologies shaping the way that AI is shaping our reality. It's shaping the shaper. I think we'll see a lot of discussion on the collective aspect of technologies: smart cities, smart infrastructure. We are seeing this a lot in the discussions on open source and digital public infrastructure, and I feel the next ten years will be about consolidating that. How can we shift AI, which is a kind of tricky black-box model, meaning we don't know how it's actually structured and not everyone knows how to access it? Even with generative AI, there are more complex forms of this, but not everyone is aware of it. Still, there are so many funds and startups and accelerators just focusing on AI. What I think will be interesting to see is everything connected to distributed forms of AI: how the evolution that happened with cryptocurrencies and blockchain can apply to the broader form of data and, more interestingly, to AI-driven outputs. This is such an interesting development in the climate space, because open source is probably one of the more relevant things when it comes to tackling public and societal challenges. To be able to have, for instance, all our natural-hazards data in an open-source dataset that everyone can access at the same time, in real time, regardless of where you are, would put us in such an interesting position to solve some of the trickier climate issues in society. And I think that's where we're heading. If I were in college, I would say: don't just focus on AI, which I know is kind of paradoxical, because the entire podcast was about that. It's good to learn what the technology is, but more than anything, it's important to develop critical thinking. Today it's AI; tomorrow it might be another technology. More than anything, it's important to have tech practitioners who understand the intersectionality with other things. It's important to have policymakers who understand how innovation shapes reality. It's important to have educators who understand AI, and crucial to have climate specialists who understand how technology can save the planet. And the list doesn't end. So I would say that being relevant in a few disciplines and understanding how they're connected will probably be a smart way forward for anyone who is graduating or thinking about future career steps.

Rajan: From your perspective, are there any questions that I should have asked in this discussion that I didn't?

Maya: I think the question of whether AI is here to stay, or whether something will replace it. What are your thoughts?

Rajan: If you ask me, I'm a big believer. I built a startup 15 years ago, and since the ChatGPT moment, I've seen that AI as a technology can actually create a lot of business impact. I do believe that AGI will eventually come, and if you use the Turing test, then the Turing test has already been passed. Most people keep moving the goalposts on what AGI really is. I don't get involved in the philosophical discussion of AGI; I'm more keen on asking what business impact it can create and what societal impact it can create. So if you ask me whether it will be around: it is definitely going to be around. And I'm an optimist. I like watching movies like Terminator and The Matrix, they're all favorite movies of mine, but I actually believe that technology brings positive change, and I believe that AI will have more positive impact than negative. I always have this conversation with people: technology is neutral, it's how you use it and what you do with it. You can use fire to destroy neighborhoods, but you can also use fire to cook food, and that led to the evolution of humanity. Fire was such an important invention. So technology is always neutral; what you do with it is just as important. I think the role of regulation is in making sure that you use the technology for more good than harm. So, AI is here to stay, and I think it will bring positive change.

Maya: Wonderful. I'm glad you took on this question.

Rajan: Thank you, Maya. It was wonderful chatting with you and understanding technology and its intersection with regulation. I loved your explanation of ethics as a positive design tool, about what to do as opposed to what not to do. Thank you for coming on the podcast.

Maya: Thank you for having me. It's been wonderful.

Rajan: That's it for this episode of Pivotal Clarity. This is an Upekkha podcast. Upekkha is an accelerator for global Indian founders building AI software companies. We're exploring the fast-changing world of AI together with our listeners. If you liked this podcast, you can find more on our website and on popular podcast apps. Subscribe if you want to keep up.

