Ep 1 · November 1, 2024 · 36:15 Min

What's Real in AGI: A Conversation with Peter Voss

On the Podcast

Speaker: Peter Voss, CEO & Chief Scientist @Aigo.ai
Host: Thiyagarajan M (Rajan), Partner, Upekkha

In this Pivotal Clarity AI Podcast episode, Peter Voss, Founder and CEO of Aigo.ai, discusses the evolution of Artificial General Intelligence (AGI) with host Rajan.

He highlights AGI's origins and current misconceptions, and argues for a focus on real-time learning and adaptability rather than the narrow AI applications that dominate today.

Peter critiques reliance on large language models and advocates for a cognitive approach that mimics human learning.

Here are the key talking points from the episode:

  • The Evolution and Future of Artificial General Intelligence
  • Why AGI Makes People Uncomfortable
  • Hollywood's Influence on AI Perception and Industry Dynamics
  • The Future of AGI and the Limitations of Large Language Models
  • Integrating Neurosymbolic Architecture for Enhanced AI Reasoning
  • Pursuing AGI Over Narrow AI for Greater Impact
Transcript

Rajan: Welcome to the Pivotal Clarity AI Podcast. Today, we are thrilled to have Peter Voss with us, a pioneering force in the field of AGI. Peter has built many companies. He's a scientist, he's an engineer, and he's also the founder and CEO of Aigo.ai. Peter has been working in the field of AI for a very long time, with a special focus on AGI. His life mission is to make human-level intelligence accessible to everyone.

Rajan: Welcome to the podcast, Peter.

Peter: Yes, thanks for having me.


Rajan: Awesome. So, Peter, 20 years ago, I fell into this rabbit hole of AGI when I had just graduated. Back then, it was called strong AI. While I was exploring it, I stumbled upon Roger Penrose, and we had this framing of strong AI versus weak AI. Today it's called AGI and narrow AI. And where I stopped was the conclusion that neither the math nor the conceptual structures were ready for AGI.

Rajan: And ever since then, a lot has happened. Now, thanks to the Kitty Hawk moment of November 2022, most of the world is excited about the possibility of AGI and what it can do. Can you talk about what has changed? You've worked in the field of AGI for a very long time. So what has changed in the history of AGI, if you were to call it that? And why is it exciting now?

Peter: Yes, certainly. Well, let me go back really far, to the beginning of AI, when the term AI was coined 69 years ago. It was really about what we now call AGI. It was about building thinking machines: machines that can think, learn, and reason the way humans do, that can learn interactively, form hypotheses, be scientists, and learn really any job. That was sort of the original ideal.

Peter: And people actually thought they could crack this problem in just a few years, 69 years ago. Of course, it turned out to be a much, much harder problem. So what happened? The field of AI really changed into a field of narrow AI. What we've seen in the last 60 years has really been narrow AI, where you take one particular problem that requires human intelligence, and you use human intelligence to write a program to solve that problem, whether it's container optimization or, a perfect example, Deep Blue, IBM's world chess champion.


Peter: But it's not that the chess program has intelligence like a human. It can't even play checkers, you know. So it's really the external intelligence, the intelligence of the programmer or the data scientist, that solves the particular problem, or set of problems, as we now have with ChatGPT. So I spent several years studying intelligence, because I was interested in achieving this goal of AGI, of thinking machines.


Peter: And that's when I came to understand that everybody was really just working on narrow AI, which is very different. So in 2001, 2002, I got together with a few other people, and we thought the time was right to go back to the original ideal of building thinking machines. So we decided to write a book on the topic. And three of us came up with the name: artificial general intelligence, AGI.


Peter: Now, of course, 22 years ago that didn't get much traction; there was a very small community of people pursuing AGI at that time. But it's really going back to the original idea of building thinking machines. Now, as you mentioned, with ChatGPT, suddenly people get a taste of: wow, this seems really close to human-like intelligence. And in some ways, it is very close. In other ways, it's really far, far away from that.

Peter: So there has now, of course, been a major resurgence of interest in AGI, and the term AGI is used and abused. And it's very annoying, from my point of view, having coined the term to get away from narrow AI, that people are now using it again for things that really aren't general intelligence. A particularly egregious example is Sam Altman. A few months ago, he said we'll have AGI soon, but it's not going to be such a big deal.

Peter: No, he's not talking about AGI. If we have AGI, it'll be a big deal. There are all sorts of marketing reasons, to raise money or whatever, to say, hey, we'll have AGI soon. And then, so people don't get scared, they'll say, well, it's not going to be such a big deal. You know.

Rajan: Yeah. So why does it make people so uncomfortable? There are two things you talked about. One is that everybody has their own definition of AGI, and like you said, it's so annoying to see so many wildly different definitions; in the hands of marketers, the term is completely butchered. But it also makes people very uncomfortable to talk about AGI. Why is that?

Peter: Okay, so why does it make people feel uncomfortable? Well, big changes always make people uncomfortable, some people more than others. I mean, some people look forward to new and exciting things and are kind of adventurous, but that's not the norm. Most people are inherently more comfortable with slow improvement, steady progression, or they even yearn for the way things used to be in the good old days, you know?

Peter: So I think that's kind of a pretty standard part of the human condition: change is scary, because you need to adapt. But then you build on top of that Hollywood, and Hollywood has always portrayed AI as the bad guy, as the scary thing. So you have this vision, from Hollywood, of AI that's going to kill us all, because that makes a good story.

Peter: I was once asked to consult on an AI film by a well-known director. I read through the script and made a few suggestions from a technical point of view, but then I said, why can't the story end with the AIs and the humans actually working together to help human flourishing? Well, I didn't get any traction. If it is not dystopian, it's not going to sell, you know?

Rajan: Yeah, yeah. If it is not dystopian, it is not going to sell.

Peter: Right.

Rajan: That was one reason.

Peter: But, you know, it's actually a very complex story. On the one hand, you have the inherent reluctance, people being afraid of change, and then you have the Hollywood stories. But then you also have this massive AI-risk industry, now worth hundreds of millions of dollars. It's literally an industry; it's been around for 20 years now, but it's been growing tremendously, with some Bitcoin billionaires putting hundreds of millions of dollars into funding research on AI risk.

Peter: Now, there's virtually no money going into the contrary arguments. So here you have a whole industry of people who make a living by saying AI is going to kill us unless you give us money to solve the problem, either through legislation or by coming up with provably safe AI and so on. So you have this whole scare industry. And then you have some very prominent people who, based on what I think are very faulty arguments and very faulty assumptions, have come to the conclusion that AGI will be dangerous. I've studied this quite extensively, and written and talked about it.

Peter: Those conclusions are really based on a lot of faulty assumptions. I believe the rational analysis of AGI, proper AGI, is that it's much, much more likely to be beneficial to humanity than to be a risk, certainly an existential risk.

Rajan: Peter, how do you think about the future from an AGI perspective? Without sounding like a textbook, how would you define AGI? And how does it look five years from now?

Peter: Yes. The best way to describe AGI is really to think of, say, college-graduate-level human intelligence, where a smart college graduate can potentially become anything if they have the right motivation. They can become a programmer, a lawyer, an accountant, or a customer service representative, whatever. The big thing is that it's the inherent intelligence that allows you to learn new things in real time, to adapt to changing circumstances, and to generalize.

Peter: You get a few examples, and you can now generalize. This is a key part of human intelligence. So it's the ability to really learn new things in real time, to generalize them, and to think and reason about them. That's really what AGI is. Any system that cannot learn and adapt in real time is not AGI. And there are profound implications there, because it means, unlike what you have to do now with large language models, you don't need all the engineering that goes into fine-tuning the model, designing a RAG pipeline, figuring out how to use the input buffer, how to train the system, where to get the data, how to tweak the parameters, prompt engineering.

Peter: There's a lot of external human intelligence that goes in to actually get these things to do a particular job. And then if the job changes, the human basically has to go back and retrain and retweak the system and adjust it. That's not AGI. AGI needs to be able to learn and adapt to changing circumstances in real time, autonomously, by itself. Now, of course, the other thing is, it's absolutely ludicrous that people are talking about building new power stations for data centers.

Peter: The CEO of one of these companies is talking about how we will probably soon have models that cost $100 billion to train. They're talking about a 100x improvement every two years or something, and extrapolating how big and how expensive these models will be. Well, even if those numbers aren't exact, it's completely ludicrous when you consider that our brain runs on 20 watts, not 20 gigawatts, and is able to learn; children learn language and reasoning with maybe a few million words, not tens of trillions of words or tokens. It's clearly the wrong path. It's like trying to build bigger ladders to get to the moon instead of building rockets.

Peter: So there needs to be a fundamentally different approach to get to AGI, and of course that's what we are doing; it's called cognitive AI. As Yann LeCun, the head of AI at Meta, or Facebook, is saying, large language models are an off-ramp to AGI, a distraction, a dead end. He puts it very clearly, and in fact he tells students: don't build your career on LLMs, don't study LLMs, that's not where the future is. And I would totally agree with him.

Peter: Meanwhile, people can make a lot of money, especially if you can hype it up and raise a billion dollars at a $5 billion valuation with no product and no revenue, as long as you get away with that. But clearly it's a dead end. It's not the way to get to AGI.

Rajan: So Peter, with that background of a definition of AGI, that it needs to be real-time, it needs to learn, and it's not about a language model or, as I've heard you say, a statistical representation: founders are seeing this change happening in the world, this technology shifting, this inflection point, if you were to call it that. How should founders think about using this right definition of AGI when they're building a business?

Rajan: How do you think about it?

Peter: Right. So there's definitely money to be made with large language models, and in fact there's a lot of money to be made, because they do require all of this external intelligence. On the one hand, you have consulting firms and Nvidia and all the large companies hyping large language models. On the other hand, you have enterprises listening to this and saying: we've got to use large language models in our company, we've got to keep up to date.

Peter: So they start these projects, basically saying: we've got to use large language models to solve problems in our business, to be more efficient, to replace humans or, usually, to augment human efficiency. And then they spend a lot of money trying to get these large language models to actually do something useful. In some cases they do, and in many cases they don't, because, as I say, it takes fine-tuning, external databases, RAG, prompt engineering, and whatnot.

Peter: And so there's a lot of money to be made, because companies generally want to use this stuff, and it requires a lot of effort. But there's also an increasing understanding of what these large language models are good at, what they aren't good at, and what is and isn't worth implementing. As companies spend millions and millions of dollars to implement large language models for a particular task and find it's not really working that well, there's starting to be a backlash, because there are other technologies, deep learning and older AI technologies, that are often much more appropriate for solving a particular problem. But large language models are always the shiny new thing. Every time a new model is released, which of course is like every other week, you have to consider: should I now redo my whole application on this latest model, because it's so much better than the previous one? It's a little bit better; it can do something more.

Peter: So for now, companies can make a lot of money implementing, or trying to implement, these large language models. And for some applications, they really are very good and effective. But there isn't really an active alternative path to AGI right now. I mean, I did a survey again recently to look at who is really working on technology other than large language models, and it's practically nobody.

Peter: Intel has a research project based on cognitive AI, where you need much less training data and much less compute, and which learns more the way a child does, incrementally, from the ground up. But Intel just announced a few weeks ago that they're laying off 10% of their workforce, so I don't know if this project will even survive that headcount reduction. So apart from my company, Aigo.ai, I'm actually not aware of anybody else working on a cognitive AI approach.

Peter: There's been so much money, and deep learning in particular has been so successful in improving translation, speech recognition, image recognition, the fundamentals needed for self-driving cars, and so on. The technology of big-data statistical systems has been incredibly successful, but that success has also sucked all of the oxygen out of the air: other approaches are simply not getting any love.

Rajan: So for furthering the vision from an AGI perspective, what do you think are the key areas that need to develop? Is it about things like memory, within the cognitive approach you talked about? Are there specific things that need to be solved before we can see AGI systems that learn on their own in real time? What are some of the building blocks that need to come up?

Peter: Right. By far the most important thing is not the specific technical issues, though I'll talk about those too. It's really just for people to actually work on it. As I say, if nobody's working on it, you're not going to make progress, if everybody is saying: hey, we have a lot of data, we have a lot of compute power, that's the hammer we've got, so let's see what nails we can find.

Peter: So you really need people to start from first principles and ask: what is important in human intelligence? What are the key requirements of human-like intelligence? And to come to the conclusion that it is incremental learning, conceptual learning, and so on, and to then start working on these learning algorithms. That's what our company is doing right now. And there are just a lot of details to be worked out on how to get these incremental learning algorithms to actually work effectively together.

Peter: Think about it: our brain doesn't have very complex computer algorithms in it. It has maybe a few dozen basic mechanisms that give us the intelligence we have. One of the most common and well-known ones is Hebbian learning: fire together, wire together. That's one of them, but there are obviously more sophisticated mechanisms that allow us to generalize and so on.
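Peter's "fire together, wire together" reference is the classic Hebbian rule. As a minimal illustration of what such a basic mechanism looks like (a generic textbook formulation, not Aigo's implementation; all names and values here are illustrative):

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.01):
    """One Hebbian step: connections between co-active units
    are strengthened ("fire together, wire together")."""
    # Weight (i, j) grows in proportion to the joint activity
    # of post-unit i and pre-unit j.
    return weights + lr * np.outer(post, pre)

# Toy example: 3 input units feeding 2 output units.
w = np.zeros((2, 3))
pre = np.array([1.0, 0.0, 1.0])   # input activity
post = np.array([0.5, 1.0])       # output activity
w = hebbian_update(w, pre, post)
print(w)  # only co-active pre/post pairs gained weight
```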

Peter: So it's developing these core algorithms, these relatively simple algorithms, in a way that they work together. One of the things we discovered, for example, is that your knowledge representation, the actual graph structure of how you represent your knowledge and skills, needs to be super efficient. So we developed a technology that is literally a thousand times faster than any commercially available graph database.

Peter: So anybody who works in this field and starts with a commercial graph database has kind of already lost, because if your response time would normally be one second and is suddenly a thousand seconds, you clearly can't get off the ground. Once people work on it, they'll realize what the particular problems are that need to be solved.
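The episode doesn't describe Aigo's graph engine, but the performance point can be illustrated with a toy example: keep the knowledge graph as plain in-process adjacency structures, so a lookup is a couple of dictionary operations rather than a round trip to an external database. A purely illustrative sketch, with all names invented here:

```python
from collections import defaultdict

class TinyKnowledgeGraph:
    """Toy in-memory knowledge graph: integer node IDs plus
    adjacency lists keyed by (node, relation). Lookups stay
    in-process instead of querying an external database."""

    def __init__(self):
        self.ids = {}                    # label -> node id
        self.labels = []                 # node id -> label
        self.edges = defaultdict(list)   # (src id, relation) -> [dst ids]

    def _node(self, label):
        if label not in self.ids:
            self.ids[label] = len(self.labels)
            self.labels.append(label)
        return self.ids[label]

    def add(self, src, relation, dst):
        self.edges[(self._node(src), relation)].append(self._node(dst))

    def neighbors(self, src, relation):
        return [self.labels[d] for d in self.edges.get((self.ids[src], relation), [])]

g = TinyKnowledgeGraph()
g.add("canary", "is_a", "bird")
g.add("bird", "can", "fly")
print(g.neighbors("canary", "is_a"))  # ['bird']
```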

Peter: But it's a very, very different field. It really has nothing to do with how many GPUs you have or how much training data you have. And you need to create your own benchmarks. You can't take an early-stage cognitive AI system, where you may be building something equivalent to the learning ability of a two- or three-year-old child, and throw a ChatGPT benchmark at it.

Peter: So you really have to rethink the whole approach. You have to start with understanding intelligence and then build a system that meets the requirements of human-like intelligence.

Rajan: Yeah. So you mentioned that, look, there is money to be made in language models, in data, and in compute. But really, if you're making progress towards AGI, you need to work on the important problems and make sure you really understand how the human brain works. And the knowledge graph is an area that needs investment. I've also heard about neurosymbolic architecture, and I know you've done work in that area.

Rajan: Maybe you could start with what neurosymbolic architecture means, and what some of the problems are, both on the technical side and on the business side, that somebody could pick up and work on. Perhaps start with what "symbolic" means, for someone who doesn't know?

Peter: Yeah. So as people are starting to realize what the inherent limitations of large language models are, one of them is that they hallucinate; they really don't know what they're saying, and they don't know what they don't know. They're trained on tens of trillions of pieces of information, good, bad, and ugly, and they can't tell which is which.

Peter: Human intelligence has what Daniel Kahneman called system one and system two thinking. System one thinking is sort of our automatic responses. And that is, in a way, very similar to large language models: they predict the next word. It's what comes automatically, without you thinking about it. Now, they do this in a super sophisticated way because of the massive amount of data they're trained on.

Peter: But there isn't a system two, which is metacognition, the ability to think about your own thinking. We have access, limited access, to our own thought processes, and we can direct them. And that's an absolutely crucial part of human intelligence: system two thinking, or metacognition. Large language models don't have that, and really can't have that, because they're a black box.

Peter: So a large language model can't really direct its own thought processes through some higher-level process. People are trying to do that by having one large language model monitor another one, but that goes through the prompting loop, and they're really two separate systems. Whereas in our brain, system one and system two aren't completely separated; in fact, they're highly integrated and they interact. If system one has some uncertainty, system two kicks in immediately and can redirect it.

Peter: And this is really what the idea of neurosymbolic is. We used to have symbolic AI: the seventies, eighties, and nineties were dominated by expert systems, symbolic formal logic, and so on, which were good at formal reasoning. The problem is they were very brittle, because they didn't reason conceptually, which is probably the best way to put it. So formal logic has quite severe limitations. But people are now looking at how to combine the power of formal logic, of reasoning systems, with large language models.

Peter: And that's essentially what's called a neurosymbolic approach; there's a big annual conference on neurosymbolics and so on, where people try to do that. But there's a problem with taking this brittle symbolic reasoning system and trying to glue it together with a large language model: there just really isn't an interface between the two systems. You end up with the disadvantages of both: the brittleness of the one system, plus the hallucinations, the cost, and the inability to learn in real time from both.

Peter: So I think the idea of combining these two modes of thinking is the right approach: the system one, more automatic thinking that doesn't require as much computation and is quicker, together with the higher-level thinking. Now, our architecture, we call it INSA, integrated neurosymbolic architecture. And the way our system works is that the two systems are actually fully integrated. They're really one system that operates in two modes.

Peter: So it has the same data structure, the same representation of knowledge and skills, which is this high-performance knowledge graph. But it can operate in system one mode, the neural mode, or it can operate in the symbolic mode. In fact, it does both of those together, in a similar way to how our mind, our brain, works.
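The episode doesn't go into INSA's internals, so the following is only a hypothetical sketch of the "one representation, two modes" idea: the same store of knowledge serves both a fast, automatic lookup (system one) and a slower, explicit chaining step that kicks in when the fast path comes up empty (system two). All names and facts below are invented for illustration:

```python
# One shared knowledge store, accessed in two modes (hypothetical sketch).
FACTS = {("canary", "color"): "yellow"}   # well-learned direct associations
RULES = {("canary", "is_a"): "bird",      # structured knowledge for reasoning
         ("bird", "can"): "fly"}

def system_one(entity, relation):
    """Fast, automatic recall: a single cheap lookup."""
    return FACTS.get((entity, relation))

def system_two(entity, relation, depth=3):
    """Deliberate mode: chain through is_a links over the same
    store until the relation resolves (or give up)."""
    for _ in range(depth):
        if (entity, relation) in RULES:
            return RULES[(entity, relation)]
        entity = RULES.get((entity, "is_a"))  # generalize one level up
        if entity is None:
            return None

def answer(entity, relation):
    result = system_one(entity, relation)   # try the automatic mode first
    if result is None:                      # uncertainty: escalate
        result = system_two(entity, relation)
    return result

print(answer("canary", "color"))  # system one answers directly: 'yellow'
print(answer("canary", "can"))    # system two chains canary -> bird -> fly
```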

Rajan: So the neural mode is the system one mode, and the symbolic mode is the system two mode. Is that correct?

Peter: Yeah. System one is the neural mode: fast thinking, automatic, subconscious prediction. A good example: for competent, experienced drivers, driving down a road is kind of automatic. You really don't have to pay attention. That's system one. But then something unusual happens: there's new construction, something has changed on the road, or there's an accident. Then your system two kicks in, where you reason: okay, I can't continue on automatic pilot, basically.

Peter: And we do that switching between these different modes automatically, for things we're used to doing, things we can just do without thinking. It's the same when you're typing and a key gets stuck, and suddenly, whoa, a higher level kicks in. So I think that's the right approach. But the two systems really need to be one; they need to be completely synergistic and highly integrated.

Peter: And you don't achieve that by taking a large language model and trying to somehow plug a reasoning system into it. And the reasoning systems that are out there are extremely limited anyway, because they are based on formal logic.

Rajan: So Peter, you've also spent the last 20 years building companies, taking some of them public, and this was all before the madness of the ChatGPT and AGI conversation happened. So how does one think about picking the right use case? We know these technology architectures are changing, but when you think about business, you need to focus on the right use case, and sometimes that may force you to think in a narrow AI type of way.

Rajan: But then you need to pick the use case. You have to solve a business problem. You need to think about scaling the business, and you need to think about defensibility. So how do you think about it? How should founders think now about what use case to pick, how to scale the business, and how to approach defensibility?

Peter: Well, at the moment there are a lot of opportunities with large language models and other related AI technology in particular narrow areas, where you have the expertise to solve a particular problem, whether it's in medicine, legal, selling houses, or whatever it might be. Now, of course, there's also fierce competition; a lot of people are trying to do that. But being an expert in a particular area, in a particular market, and potentially partnering with well-established companies in that market, I think that makes a lot of sense for building a business.

Peter: Now, that's not AGI. And an important piece of advice I try to give everybody on large language models and generative AI for real applications: really think about what the right technology is. Don't always force everything into a large language model. There may be older AI techniques that are actually much more appropriate for solving the particular problem. An LLM may be way overkill, because training and running a large language model is not cheap.

Peter: And often these projects end up just not being financially viable. As long as you can get it for free, it's fine; you build your prototype and get stuff for free or whatever. But then you try to deploy it, and suddenly you find: whoa, this is costing more than having a human do it. And you still need a human in the loop anyway, because of the hallucinations. So I think there are a lot of opportunities, but that's not AGI.

Peter: Now, we had actually developed a very, very powerful customer support system with our technology, with our cognitive AI. A very well-known customer of ours is the 1-800-Flowers and Harry & David group of companies, using our technology. And last Valentine's Day, we replaced 3,000 agents; they normally needed to hire 3,000 agents for just one week, three days before Valentine's Day and three days after.

Peter: And we did that very successfully. But we actually put our commercial business completely on hold, because we can't really concentrate on building AGI, on getting to AGI, while also building a commercial business. Building a commercial business is hard: you're constantly building out the technology, improving it, getting new customers, new use cases, and so on. So we decided to shut down our commercial business and focus 100% on AGI.

Peter: Because with a commercial business, if you're doing well, you end up with maybe hundreds of millions in a SaaS company, or maybe a billion or two, and then you sell it off to somebody or whatever. But AGI is a multi-trillion-dollar opportunity. And the benefits of AGI for humanity are so tremendous in my mind that why would I want to spend even an hour of my life pursuing a commercial opportunity when I could be spending my time pursuing AGI?

Rajan: Awesome. What are some of the areas of research, or some of the other projects you're seeing, that excite you, either related to AGI or something else that has caught your attention?

Peter: Well, AGI is so overwhelmingly powerful that the other areas I'm interested in, like nanotechnology, anti-aging research, and research generally, better materials, better batteries, and all of that, all of those will be massively accelerated by AGI. So AGI becomes an enabler for all of these other technologies and problems we want to solve. Having a strong belief that we can have AGI, full human-level AI, in less than five years, there really isn't anything more exciting to work on, because it'll enable so many of these other things that are exciting and useful and valuable.

Peter: So it's really that. And it's in a way distressing that so few people are pursuing AGI proper; they're paying lip service to it in order to raise money, or they just have completely the wrong idea of what AGI is. So I wish there were more projects, more people actually working on it. But I've seen this over the last 20 years: projects that try to work on AGI invariably fall into what I call the narrow AI trap.

Peter: The narrow AI trap is basically this: as soon as you try to productize an early AGI, or try to beat benchmarks, or try to build an impressive demo for your investors, or try to get a paper published, in all of these cases, as soon as you have that kind of benchmark, commercial or otherwise, your project basically turns into a narrow AI project. Because you can always improve the performance of your product, or your benchmark score, or make a more impressive paper, by adding external human intelligence to your system, by basically hard-coding stuff.

Peter: Because if you build an early AGI that's capable of learning like a two-year-old, a three-year-old, a four-year-old, it's just not that impressive. It's only impressive in the context of really believing in that approach. So it's really, really hard: you need to have that vision and stay focused, and you need the funding to go from two-year-old to three-year-old to four-year-old to five-year-old, until you get to kind of college level, where you now have something that is commercially viable and clearly impressive.

Rajan: Peter, any final thoughts or advice you'd like to share with anybody who is working on improving AI or building a business around AI?

Peter: Well, I'd very much like to hear from other people who are interested in really achieving AGI: to brainstorm how it can be introduced to the world, how we can make it happen, what the risks and benefits of real AGI are, and to really address those questions. It's surprising how few people are really interested in talking about it. It's often just: I've got to run a business, I've got to get more customers, or whatever. So I'd really like to hear from people who are genuinely interested in talking about how to get to AGI and the implications of real AGI.

Rajan: Peter, thank you so much for joining the Pivotal Clarity Podcast. Good luck with Aigo.ai, and we hope to see AGI really soon.

Peter: All right, well, thank you.
