Audrow Nash Podcast Transcripts

Full transcripts from the Audrow Nash Podcast.

Join me as I learn about robotics, AI, and manufacturing.

We'll talk with experts about their technology, business model, and personal experience.

I'm an engineer, and this is a technical podcast. But we assume no specific knowledge, so if you're interested, you should be able to follow along.

Find an error?

Oops. These transcripts are made with Descript and then checked over quickly. I'm sure there are lots of errors, unfortunately.

If you'd like to help, feel free to make a pull request with a fix. If you're not sure how, here's a good page on getting started.

Also, if you're into it, this repo has a .devcontainer setup, so you can easily make a Codespace to preview your changes. (It takes a couple of minutes to build the Docker image.)

Rethinking Robotics: Electric Sheep's Journey to Safer, Smarter Machines

Table of Contents

Start

[00:00:00] Audrow Nash: I've talked to a lot of people about this interview, and I'm excited that I get to share it with you. In it, I talk with Nag and Mike from Electric Sheep. They're doing many things differently than most robotics companies that I've talked to, and they're making big bets that I think will pay off. Here are three examples to show you what I mean.

First, they're throwing away classical robotics approaches, and instead... using machine learning. I'm not just talking about for perception or parameter optimization, but even for things like localization or high level control. Second, they've turned the lawn mowing problem on its head to make robots that are intrinsically safer.

And third, instead of selling their robots or doing a subscription model, they buy profitable landscaping businesses and give those companies robots. There are a lot of advantages to this last point and you'll see the details of each during the interview. I hope it surprises you as much as it did me. I think you'll enjoy this interview if you're curious about how AI and robotics can fit together in a real application and if you want to see a new robotics business model that I think will be very popular in the near future.

I'm Audrow Nash, this is the Audrow Nash Podcast, so let's see why Electric Sheep is doing things so differently. And I'd love to know in the comments or on X: what do you think about their bets? Would you take them? Are you convinced? Or do you see some weak points? I hope you enjoy.

Hi everyone.

Nag, can I have you introduce yourself?

Introducing Nag + Mike + Electric Sheep

[00:01:37] Nag Murty: Yeah. Hi everyone. I'm Nag Murty. I'm the CEO and co-founder of Electric Sheep Robotics.

[00:01:44] Audrow Nash: And Mike, would you introduce yourself?

[00:01:46] Michael Laskey: Hi everyone. I'm Mike Laskey. I am the Vice President of Autonomy at Electric Sheep Robotics.

[00:01:52] Audrow Nash: Awesome. Now, Nag, would you tell me a bit about Electric Sheep? What do you guys do?

[00:01:58] Nag Murty: Sure, yeah. We are an outdoor maintenance company powered by artificial intelligence and robotics. So the way we work is we acquire traditional outdoor service providers. And we then progressively transform their operations by deploying our own proprietary AI software and robots. And basically what we're doing at the back end is we're building this large world model that can effectively drive all outdoor autonomous equipment.

And then over time, we plan to use those automated robots to improve our own margins. So that's what we do.

[00:02:34] Audrow Nash: And you're starting with lawnmowing at the moment. So that's the first embodiment, I suppose?

[00:02:39] Nag Murty: That's right. So that's the first instance of what the model, you know, automates.

[00:02:46] Audrow Nash: Gotcha. And so there's so much that's interesting to unpack there. Let's start with why, like, the approach you guys are taking. Yeah, I suppose. Like, so why is it these world models? Like, what is that? And why is that interesting? Maybe Mike, is that, is that a good one for you?

Large world models

[00:03:09] Michael Laskey: Yeah, I can take that one.

So I think, I think there's like a, there's kind of two fronts of it. One is like, what's, what's the architecture? What's the approach of how we go about designing these robots? And then the other side is like, how do we get data? How do we actually train these models? So at Electric Sheep, we're very much like, we believe that...

The future of robots is a learning robot. A robot that can understand the world in an intuitive way, can relate to it with more of a natural instinct that progresses over time from experiences, and can be interacted with in sort of the, the UX that you might interact with like an outdoor work animal. So natural language, visual, like there should be no app.

So none of our robots have apps. They should just have simple, like, on-off buttons, and that's about it from a sort of button-interface standpoint. Now, to create this sort of intelligence and this experience, there's sort of like two ways you can do it in robotics, right? There's direct policy learning, where you say, okay, well, I'm going to, like, push mow and just take all my push mowing experience and train the robot to do push mowing.

And then on the flip side of that, when you look at sort of different RL, like, MDP formulations, there's: you learn the model. So instead of saying, I'm going to teach this thing to only push mow, you're saying, I'm going to teach this thing to learn how the natural world works. And then you can tell it to do different tasks.

Then you can say, oh, oh yeah, sorry?

[00:04:34] Audrow Nash: So you're saying like in a reinforcement learning context, you are, it's almost like you're generating like an embedding or something. You're learning how to learn by getting like an efficient representation of reality or something like this. And then you use that to get more specialized into a specific task.

Is that, like, is it along those lines or what do you think?

[00:04:56] Michael Laskey: It's along those lines. Like if you, you know, if you're familiar with kind of like the old ideas of reinforcement learning, it was the idea of the agent's policy. So it was like given a goal, how it would try to like respond and achieve that goal.

But then there was the other concept of the environment model, or the transition function. And that was how the agent would in its head see, okay, well if I'm in this state and I take this action, what happens next? And what's cool about that representation is it's kind of like task agnostic. so it's more like if we can give the robot intuitive physics and intuitive sense of how the robot behaves, we can then have it synthesize arbitrary tasks.

Because today we're push mowing, but tomorrow we might say, well, can you string trim? And it's like, well, I know how movement works, I know how semantics works, I know collision avoidance. Let me go and do those tasks with that.

[00:05:47] Audrow Nash: So you're learning a lot of, like, fundamental things about how reality works.

That's the goal? Nag, do you wanna?

[00:05:55] Nag Murty: Yeah, that's exactly it, right? At, you know, basically, that's the world model that, you know, the robot has in its head. And so, it uses, it uses those intuitions to sort of perform these different tasks. so all these tasks are like policies that you can learn that are specific to that task being performed.

So mowing, sweeping, snow removal, but at its core, there is a task agnostic world model, which is independent of the task being performed.
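One way to picture the split Nag and Mike describe, a shared task-agnostic world model with task-specific behavior layered on top, is a toy sketch like the following. Everything here (the unicycle dynamics, the cost functions, the function names) is invented for illustration; Electric Sheep's actual ES1 stack is learned and far more complex.

```python
import numpy as np

# Task-AGNOSTIC part: a transition function predicting the next state
# from (state, action). Here it's a hand-written unicycle model; in a
# learned world model this would be a neural net trained on experience.
def world_model(state: np.ndarray, action: np.ndarray) -> np.ndarray:
    """Toy transition: state = (x, y, heading), action = (velocity, turn rate)."""
    x, y, theta = state
    v, omega = action
    dt = 0.1
    return np.array([x + v * np.cos(theta) * dt,
                     y + v * np.sin(theta) * dt,
                     theta + omega * dt])

# Task-SPECIFIC behavior is synthesized on top: score candidate actions
# by rolling them through the shared model against a per-task cost.
def plan_one_step(state, candidate_actions, task_cost):
    predicted = [world_model(state, a) for a in candidate_actions]
    costs = [task_cost(s) for s in predicted]
    return candidate_actions[int(np.argmin(costs))]

# A "mowing" task: drive toward a goal point. The model itself never changes.
goal = np.array([5.0, 0.0])
mow_cost = lambda s: np.linalg.norm(s[:2] - goal)

best = plan_one_step(np.zeros(3),
                     [np.array([1.0, 0.0]),   # drive forward
                      np.array([0.0, 0.5])],  # turn in place
                     mow_cost)
# best -> the forward action, since it reduces distance to the goal
```

Swapping in a different `task_cost` (say, one keyed to snow coverage instead of a mowing goal) changes the behavior without touching `world_model`, which is the point of the task-agnostic representation.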

Learning physics

[00:06:24] Audrow Nash: So that sounds amazing to me. and I'm wondering, like, how do you, so I would wonder, my first thinking would be physics. like, our models of physics are pretty good for probably like lawn mowing and things like that.

They're not good for fine manipulation yet, probably. I mean, I haven't seen work anyways. how, how do you, do you just, do you kind of like, where are you using things like a physics emulator, and where are you learning stuff? Are you learning everything, or how does it, how does it work? Maybe Mike again?

[00:07:00] Michael Laskey: Yeah, no, great questions. So I think when you look at, you know, tasks in outdoor landscaping, you're right. You kind of have, like, two camps of tasks right now. There's your mechanized surface coverage: your snow plowing, your push mowing, your street sweeping. And then there's the really fine-grained stuff, like tree trimming: pick an apple, prune a bush.

So what we want to do is definitely take this as a roadmap. And we first started off saying, like, okay, well, there's a huge amount of bulk labor in mechanized surface coverage. So let's first just think, what would the world model need to do to solve these tasks? And then build in a way, though, where it can be scaffolded to dexterous tasks later.

So, like, it's, it's pretty generic, like... AI kind of tech, so it's definitely scalable. But now when you look at something like snow plowing or push mowing, you really have, like, four key challenging things that need to be done in order for the robot to perform the task, like four kind of key AI concepts.

So one of them is, so think just of mowing as a clean example: you need to have collision avoidance. But it's subtle, because some things you actually have to get very close to and touch. So you have to have this concept we would call, like, semantic collision avoidance. Like understanding, oh, I can push my mower right next to a bush and mow an area, but when I'm next to, like, a human or a tree, I can't.

And then you need, obviously, mapping. You have to have some sort of global map of the world to know what you've mowed before and what you haven't mowed. And then for commercial mowing, you need, actually, a lot of people don't realize this, it's actually a really hard, nuanced problem of mowing: you need to lay down these very precise, straight lines, so you need high-quality localization.

Better than, like, most people would try and get in, like, a self-driving-car application. 'Cause a professional landscaper looks at these lines and they basically say if this robot was good or not. And to do that you need to have this very precise ability to follow lines and lay them down over, like, hundreds of meters of stripes.

so when you combine all that, you need mapping, localization, 3D awareness, the ability to understand what's traversable, what's not. And these are the core concepts. So, like you're saying, you don't really need...

[00:09:19] Audrow Nash: What was the fourth one? I have collision avoidance, mapping, and straight lines. What was the fourth one?

[00:09:24] Michael Laskey: Semantic traversability, I would say. Like, really understanding, like, what you can mow over versus not. which is almost like a semantic problem of like, is this mowable, basically.

[00:09:35] Nag Murty: So take leaves on the ground in the fall, right? It's okay to go over them. Like a toy bunny rabbit on the ground, not okay.

You know, so, it's... sprinkler heads, not okay. Not okay.

[00:09:48] Audrow Nash: Big bummer. Mm hmm.

[00:09:50] Michael Laskey: Yeah, and like, what most products in the market do today is you teach it, through, like, you know, precise joysticking, how to just avoid things. What's very different about what we're building is there's no teaching. This robot must semantically see the world and understand if it can go over or not.
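A toy illustration of the semantic traversability idea from this exchange: the decision is keyed on what a thing is, not just that something is there. The class names and rules below are invented for the example, not Electric Sheep's actual taxonomy, and their system learns this from data rather than using a lookup table.

```python
# Hypothetical class-to-rule table: leaves are fine to mow over,
# a toy bunny or a sprinkler head is a hard stop.
TRAVERSABLE = {
    "grass": True,
    "leaves": True,            # okay to go over in the fall
    "toy": False,              # the child's bunny: do not go near it
    "sprinkler_head": False,   # big bummer if you hit one
    "human": False,
    "manhole_cover": False,
}

def can_mow_over(semantic_class: str) -> bool:
    # Unknown classes default to not traversable -- fail safe.
    return TRAVERSABLE.get(semantic_class, False)
```

The fail-safe default matters: anything the perception stack can't classify is treated as an obstacle rather than mowed over.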

Robot sensors

[00:10:09] Audrow Nash: Okay, so how do you do that? How do you semantically understand the world? and maybe the best way to start would be like, are you using, what sensors are you using? Is it your cameras primarily, is what I would imagine? Maybe depth cameras, or what are you guys doing there?

[00:10:25] Michael Laskey: Yeah, so our robots are equipped, I'm a huge believer in stereo vision.

So the robots are equipped with just one stereo camera. Just one.

[00:10:36] Audrow Nash: That's super cool. Just one, yeah. You don't have like 40 all posted around, which is what it seems like many do. I mean like autonomous cars will have them everywhere. Okay,

[00:10:47] Michael Laskey: Yeah, I mean, we very much reduced the weight and size of our platform, so that we felt, from a safety standpoint, you could do one. And one's interesting because now you're in the domain of, like, a human, right?

You have, like, similar to human eyes, what you mostly see in other animals. This also forces you to reason across time and space, so you can keep track of what's around you, how you look around, and, like...

[00:11:08] Audrow Nash: Which is super cool. Yeah, that's why it's surprising that you have one. Okay, so then you take that stereo image data.

So you're getting 3D information about the world and image information. You kind of are superimposing them. And then you're running that through some sort of neural network to establish semantic understanding, like object recognition, or what do you do from there? Or does it go into just some neural network and you'd say good, bad, and it eventually does the good thing? Or how does it go?

[00:11:39] Michael Laskey: Yeah, so right now how it works is we feed a stream of stereo video into what we're calling ES1. And then the other thing we feed in is uncorrected GPS, so non-RTK GPS. This is relatively cheap, simple to get, and you kind of want... like, Nag has a good analogy that kind of relates it to, like, pigeons, like how they use magnetic sensing to navigate.

So, you know, it's kind of like, it's something that is there, it's very easy to get when you don't need RTK, so we feed that in as well.

[00:12:11] Audrow Nash: I love that approach, because, so you're learning, like, RTK GPS is very accurate. It's like, what is it, like centimeter or millimeter level accuracy? On the good

[00:12:21] Nag Murty: days, yeah.

[00:12:23] Audrow Nash: Yeah. Well, yeah, it depends on coverage, depends if you have beacons or something set up for it to triangulate with, all sorts of things. But, it's very expensive too. and all this thing. So you're just using, or maybe it's come down in price, right?

[00:12:36] Nag Murty: It's, it's not so much the expense. I think, sure, there is, you, you get like those Trimble GPSs, which cost like seven, ten thousand, maybe more, but then you also have, like, these ArduSimple GPSs, right, which are like in the... but the problem with the RTK corrections is you can't rely on them.

Right. So you have all these multipathing issues. When you go close to walls or, you know, when you're in an urban canyon, right, you really don't get, like, the right, you know, signals getting to you, or there's a lot of distortion of the signal that happens. Ionospheric activity can, like, wreak havoc with your GPS, right?

All kinds of crazy things like affect it. Cause I mean, think about it, right? Like you're having like these little basketball size satellites, like whizzing past it, like, I don't know, God knows how, how fast you're trying to like get like signals and synchronize them using atomic clocks or something, right?

Like, so it's, it's again, like very classic in its sort of modality, right? Like precision GPS. That's kind of where all of this is. And we don't like classic modalities fundamentally because you have to get everything right and everything precise.

[00:13:47] Audrow Nash: Oh, so you, I think I want to talk about that quite a bit, but you, at a high level, think they're finicky because of the precision that you require. And so you want robust solutions, and that's why you rely on GPS and these kinds of things.

[00:14:02] Michael Laskey: When you scale the products with it, on good days when you get it, it's really amazing. But there's a lot of bad days.

[00:14:09] Audrow Nash: But finicky and lots of failure cases and, uh, sorry, Mike, go ahead.

[00:14:14] Michael Laskey: Yeah, exactly. It's an unreliable service that in a lot of places in the country you just can't rely on at all. Or on different weather days.

And you also are pushing your bill of materials to be cheaper, which is a big advantage too. Like, having only one stereo camera means you need only one input.

One chip to accept one stereo camera pair of information. Like, it's not like you need a custom board that goes to 30 stereo cameras. And then you don't need a crazy processor to process all that. You're not storing an insane amount of data that you have to upload and figure out all that.

It makes all the downstream problems also easier, I would imagine, to make these kinds of decisions. It seems like power moves to me, like you guys are making power moves in how you're doing stuff.

We said ML was going to drive every major decision, and then we designed the stack to force ML to do that.

So we said, no external calibration, no ability, no, yeah, no redundancy in sensing.

[00:15:22] Nag Murty: Here's the, yeah, this is the crazy part, right? You gotta, there's a history to all of this, right? And we have flirted with like classical approaches, scaled classical robots. In the first two years of our existence, right? And we are deeply aware of like how amazing it can be, but also deeply aware of how painful it gets when you go down that route, right?

Like we had, like, these big massive Ouster lidars and, like, you know, all kinds of crazy things on our robot. And then we just said, like, throw away all that. Yeah. To hell with it.

[00:15:56] Audrow Nash: You know, I wonder if it's like a pendulum and you guys have swung, you started one way and you're going to swing the other, and then you're going to find a happy medium eventually that's a bit more in the middle.

I like, I don't know. And maybe the other methods are working wonderfully.

[00:16:11] Nag Murty: I think, yeah, I think the thing is, one way to think about this is, right, I think that we won't swing back primarily because this is now a data problem at its core. It's not like a sensor problem. It's not a calibration problem. It's not a sensor quality problem.

None of that.

[00:16:30] Audrow Nash: Interesting, yeah, so you don't have most of those problems now to solve with classical methods. Interesting.

[00:16:36] Nag Murty: Every time you have a classical method, you always come down to, like, how good is your accelerometer, right? Like, do you have a high-quality accelerometer? Does it, like... 'cause that's the thing that ties your whole SLAM stack together.

[00:16:47] Audrow Nash: Yeah. So how does, so with your normal GPS, you're basically saying, we think this is going to be somewhat unreliable and therefore your system will behave accordingly. For this kind of thing, it's just that you'll adjust for the information that isn't good, but is something. Is that, is that kind of the thinking with not using RTK and just using regular GPS?

[00:17:13] Michael Laskey: Yeah, 'cause, so this all feeds into what we call ES1. One of the outputs of ES1 is essentially maps and localization. Now... non-corrected GPS is only so good, but when you combine that with, let's say, a learned VO that's happening interior to ES1, you can... Visual odometry.

[00:17:34] Audrow Nash: Oh, okay. Acronyms. I don't even know them in my own field.

Okay, visual odometry.

[00:17:38] Michael Laskey: Perfect. Yeah, so another way of doing localization is to use, of course, camera data. It's like how humans do it: walk around, we see movement with our head relative to vision. So, internal to our network, it's able to kind of fuse the GPS with visual data and produce better pose estimates.
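As an analogy for the fusion Mike describes (ES1 learns it inside the network; the explicit filter below is only a sketch of why fusing helps): visual odometry is smooth but drifts, uncorrected GPS is noisy but drift-free, and blending the two keeps the error bounded. All constants here are made up.

```python
import numpy as np

def fuse(vo_pose: np.ndarray, gps_pose: np.ndarray, alpha: float = 0.9) -> np.ndarray:
    """Complementary-filter blend: trust VO short-term, GPS as a long-term anchor."""
    return alpha * vo_pose + (1.0 - alpha) * gps_pose

truth = np.array([10.0, 5.0])     # where the robot actually is headed
fused = np.array([0.0, 0.0])
vo_only = np.array([0.0, 0.0])
rng = np.random.default_rng(0)

for _ in range(500):
    drift = np.array([0.05, 0.05])                      # per-step VO bias
    # VO tracks the motion smoothly but accumulates bias every step.
    vo_only = vo_only + (truth - vo_only) * 0.02 + drift
    vo_pred = fused + (truth - fused) * 0.02 + drift
    # GPS is noisy (meters-level) but has no systematic drift.
    gps = truth + rng.normal(0.0, 0.5, size=2)
    fused = fuse(vo_pred, gps)

err_fused = np.linalg.norm(fused - truth)   # stays bounded
err_vo = np.linalg.norm(vo_only - truth)    # grows to a large offset
```

The fused estimate ends up far closer to the truth than VO alone, even though the GPS on its own is too noisy to lay down straight mowing stripes.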

Introducing ES1 as CatGPT

[00:17:57] Audrow Nash: Mm hmm. Okay, so you get your pose estimate. So maybe Nag, what is ES1?

[00:18:04] Nag Murty: I think that's a question better answered by Mike, but I'll take a stab at it. Like ES1 is essentially, I joke about it and call it like CatGPT, if you want to call it that, right? Which is essentially

[00:18:15] Audrow Nash: Like computer-aided design, like 3D parts.

[00:18:18] Nag Murty: No, no, no. Like, no, it's CAT, so it's, if you think, yeah, so if you think of it this way, right: ChatGPT was to language... ChatGPT fundamentally groks language, right? Fine. All the naysayers aside who question whether it has an internal representation of what language is and all that, but it clearly has some kind of, you know, some kind of magic going on in there where it sort of understands the structure of language, the grammar, the nuances, right?

Not just one language, across languages. And we're saying there's got to be something similar when it comes to embodied AI, and it's non-verbal intelligence, if you want to call it that. So ES1 really is something that sort of captures that non-verbal intelligence.

Like, it's a way for a robot to sort of navigate its surroundings, or it's the means by which the robot navigates its surroundings, manipulates physical reality. And it's this one giant sort of world model neural net, if you want to call it that. Maybe Mike can add to this, but that's how I describe it as a product guy.

When

[00:19:30] Audrow Nash: I love the, the cat GPT, so cat, like the animal, it like gets around. It's a physical representation. That's super cool. Okay. Mike, do you have anything to add to it?

[00:19:41] Michael Laskey: No, I'm gonna just completely go with what Nag said there. No, I think it is that. It's basically, like, right now it is, like, embodied AI, and it's supposed to be going after the predictions that a physical agent would need to solve tasks. And, like, look at the key concepts that we're trying to do of, like, you know, just surface coverage, right, with no map, no teaching. And right now that's what ES1 is trying to predict: those fundamental representations for an agent to do a task like push mow, snow blow.

and then we want to keep building on it to get more and more of the embodied AI full stack.

ES1’s input / output

[00:20:17] Audrow Nash: So, you, with ES1, you take in GPS and video, and you output pose and maybe actions, like motor commands, or, what is that, what is the output again?

[00:20:29] Michael Laskey: So the output, right now it kind of outputs like, kind of a plethora of representations.

[00:20:34] Audrow Nash: Yeah, it's probably a big tuple of a lot of things.

[00:20:36] Michael Laskey: It's a big tuple. Yeah. It's like, which you kind of see in, like, you know, the, I think, like, Perceiver IO papers, where you have, like, now, like, many, like, one-to-many type of representation outputs.

[00:20:48] Audrow Nash: Yeah. 'cause you need lots of information for your robots.

For sure.

[00:20:52] Michael Laskey: Exactly. So, some of the, so, you know, it's kind of interesting 'cause right now we're sort of testing it by predicting some of the more classic representations. And then we have a planner that can, like, synthesize the... basically, policies can be synthesized with a planner on top. But I think as a whole, like, we'll probably at one point start getting to learned planning.

But what it does today is it outputs, like, a point cloud of the world, a semantic understanding of the world, so like a 3D semantic point cloud. A bird's-eye-view map, so it says, here's where you are, here's what you've covered before, the exact position of the robot on the map. So... and then, also, what's kind of interesting is, like, low-level obstacle detection.

So it would say, like, oh, here's a manhole cover. Like, oh, here's your child's bunny. This is a 3D cuboid. Like, do not go near it. Yeah.
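The "big tuple" of outputs Mike lists could be sketched as a container like this. Field names and shapes are hypothetical; ES1's real output schema isn't public.

```python
from dataclasses import dataclass, field
import numpy as np

# Illustrative container for ES1-style outputs: a 3D semantic point cloud,
# a bird's-eye-view coverage map, the robot's pose on that map, and
# low-level obstacle cuboids. All names/shapes are assumptions.
@dataclass
class ES1Outputs:
    semantic_points: np.ndarray  # (N, 4): x, y, z, semantic class id
    bev_coverage: np.ndarray     # (H, W) bird's-eye view: 1 = already mowed
    pose: np.ndarray             # (x, y, heading) of the robot on the map
    obstacles: list = field(default_factory=list)  # (center, size, label) cuboids

out = ES1Outputs(
    semantic_points=np.zeros((1000, 4)),
    bev_coverage=np.zeros((256, 256)),
    pose=np.array([1.0, 2.0, 0.0]),
    obstacles=[(np.array([3.0, 0.0, 0.1]),   # cuboid center (m)
                np.array([0.2, 0.2, 0.3]),   # cuboid size (m)
                "toy")],                     # e.g. the child's bunny: keep away
)
```

Keeping the fields human-interpretable (points, maps, cuboids) is exactly the debuggability argument Mike makes later, versus collapsing everything into one opaque latent vector.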

[00:21:42] Audrow Nash: How do you, I mean, there's a lot of questions, I have a lot of questions about that, just in general. So, how much is it like... So I interviewed, like, many years ago, Sergey Levine from Berkeley.

So he's a professor who's, like, end-to-end robotics stuff, so you basically pass in raw sensor input and you output, like, actions to do the task. This seems a bit like that, and maybe you have a better description of his work for this kind of thing, but it seems like this is very similar in spirit to me, and it might be the same thing.

what do you think?

End-to-end learning for robots

[00:22:22] Michael Laskey: Yeah, I mean, so he was, he was on my quals committee, so I'm pretty used to debating with Sergey, actually. But, so I think, I'm, like, at a very high level, like, I really agree with a lot of his approaches. And if you look at, like, the body of his work, it really says, like, yes, robots need to learn from actions and experiences.

You need to collect a lot of data, and they need to have representations that are good enough they can actually efficiently learn, and search algorithms in RL. Where I think ES1 is slightly different is instead of just predicting, like, let's say, like, its trained policy, for example... well, okay, just a bit of a caveat.

It's kind of hard to argue with Sergey as a whole because he's done so many papers that it's like, what's the specific one that, you know, you're trying to predict?

[00:23:09] Audrow Nash: Yeah, for sure. Because his thinking is moving. It's like a big tree advancing. Right, exactly.

[00:23:15] Michael Laskey: For sure. So, but if you just look at maybe, like, the RL community, rather than learning a specific policy, what we're trying to learn is like a representation of the environment.

And then this is something that you can call like planners or learn planners that can operate on top.

[00:23:35] Audrow Nash: Okay, the way that I would think of it, and maybe this isn't correct, and I, maybe just, when I was in grad school, all the things were, like, embeddings, and you learn efficient embeddings for things. So you learn a good way of representing something, and then you can do, like, one-shot learning or something if you have a really efficient...

embedding that lets you learn a new task really quickly or something like this. Is it, is it at all similar to this, or is it, I don't know, how does it fit with that? Or is that just, like, an outdated model? Or just a different model.

[00:24:09] Michael Laskey: I think that's where you would actually want to go. So right now we actually predict representations that are more human-interpretable, like a point cloud or a semantic map.

Yeah. And then we call, like, you know, a planner that we might code on top of that. And that, that works for today. It also works for scaling a product, because you want that interpretability as a, as an engineer, especially when you're trying to find product fit. There's a lot of UX that you have to code. So there's a lot of practical reasons that we do that.

But as we evolve and scale, our sole goal would be to replace everything, you know, to just get to a point where it actually predicts, like, here's a thousand-dimensional float vector. And then the planner takes that.

[00:24:53] Audrow Nash: but raw input from the sensors. Yeah. All that.

[00:24:57] Michael Laskey: But like having worked a lot in my PhD with systems like that, when you're actually deploying something, you have an end customer and they're like.

Oh yeah, why'd your robot, like, hit a tree? Yeah, then you, you really want something to fall back on, of saying, well, this is what it thought, and you can as a human interpret that, because it's a representation that makes sense to people, really.

[00:25:19] Audrow Nash: Yep, so what's the, what's the point of, so if you're outputting a point cloud...

How different is it than just a normal point cloud that you'd get from a depth camera? I guess it has some semantics included, or like, why not just go right to a point cloud from your sensor? For this kind of thing.

[00:25:41] Michael Laskey: Yeah, I mean, well, basically because we're just feeding in left and right stereo, you know, one of the representations that we'll do is create...

[00:25:47] Audrow Nash: Oh, you're not doing any of the calibration and things like this you were saying? Is that true?

[00:25:53] Michael Laskey: Yeah, so you can just feed in, like, two images. With minimal calibration, you can have a pretty, like, rough estimate of your baseline, but you don't need, like, machine tolerance on that. Like, you can be off by a couple of centimeters.
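The point about a rough baseline can be seen from the classic stereo relation depth = f · B / disparity: a small baseline error just rescales depth proportionally rather than breaking the pipeline. A sketch with invented numbers (Electric Sheep's camera parameters aren't public):

```python
import numpy as np

def depth_from_disparity(disparity_px: np.ndarray,
                         focal_px: float,
                         baseline_m: float) -> np.ndarray:
    """Classic pinhole stereo: depth = focal_length * baseline / disparity."""
    # Guard against divide-by-zero for points at infinity.
    return focal_px * baseline_m / np.maximum(disparity_px, 1e-6)

disparity = np.array([20.0, 40.0])  # matched-pixel offsets, in pixels

# "True" baseline vs. a rough estimate that's 5 mm off.
exact = depth_from_disparity(disparity, focal_px=700.0, baseline_m=0.120)
rough = depth_from_disparity(disparity, focal_px=700.0, baseline_m=0.125)

# The error shows up as one constant scale factor across the whole cloud,
# not as structural garbage -- which is why machine tolerance isn't needed.
ratio = rough / exact
```

A learned stereo network can absorb (or be fine-tuned around) that kind of global scale error far more easily than a classical pipeline that assumes exact extrinsics.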

[00:26:08] Audrow Nash: Okay. So let's just back up a bit. So what we have so far is you guys have this ES1 system. This ES1 system takes camera and GPS, and it outputs a big bunch of information, including, like, a point cloud, maybe motor commands, all this stuff. And so the goal is to have this be more of, like, a learning system that you can make more flexible over time.

And it's starting with lawn mowing, but potentially getting into a bunch of other applications. that all seems wonderful to me. It's, I wonder how you, so how do, how do you get your data, I suppose, for all this? Like, and like, do you use simulation or how did, how did you get started? I suppose. cause getting started to me sounds like it sounds wonderful, but it sounds like you need a lot of data to even get started and maybe I misunderstand something.

Getting data

[00:27:11] Nag Murty: Yeah. I think Mike can go into the sim-to-real part, but, like, there is a precursor to this, which is, you know, when we were in operation for the first couple of years, we generated a ton of data, right? That helps seed some of these efforts. And then Mike brought in sort of the sim-to-real angle to this, which he could talk about, and then sort of using both of those got us to, like, an 80, 90 percent good-enough product. And then from there on, yeah, it's, it's like our data engine that really drives those...

Like, corner-case improvements over time. But Mike can delve more into the sim-to-real part.

Electric Sheep’s background

[00:27:47] Audrow Nash: Or just before we get into the sim to real part, so, more background on Electric Sheep. So, how long have you guys been around, and Nag, you've been, you're a co founder, right? That's right, yeah. How long have you guys been around?

[00:28:01] Nag Murty: Three and a half years. We got our first, we raised, like, a pre-seed round in, like, Jan of 2020.

[00:28:10] Audrow Nash: Wow, right as the world shut down.

[00:28:13] Nag Murty: I know, like just snuck it right in, you know, just like before the world shut down.

[00:28:19] Audrow Nash: Okay. Yeah. So then for the first two years or something, you said you were trying classical methods, like it was the same idea in a sense, because, I mean, we talked before this conversation, Nag, you and I, and, like, you guys have a lot of interesting ideas about how to do things, and it's, it's surprising to hear. It's, like, revolutionary, I don't know, awesome on the technical side, awesome on the business side, awesome on the, everything.

So it's, it's cool to see it coming together. But it makes sense that you guys have a bit more history where you've been trying things and you probably had those like different business model ideas back then and then you're coming around to the technical decisions on things and bringing on great people like Mike.

For this.

[00:29:06] Nag Murty: Exactly. I think it's been, we've always wanted to sort of build this full-stack business. You know, when we first sketched out the idea on a paper napkin, between me and Jared, we felt like... Jared is my co-founder. He comes from the commercial landscaping industry, and he's, like, yeah, he's, like, the man when it comes to all things landscaping.

Right. So, and it's, it's, it's important to sort of bring that up because what shapes our company, right, is this sense of intense pragmatism about what we want to solve, right, and how we want to solve it. And it's, it's sort of the confluence of, you know, three things really. One is this sort of mindset that the business model is the ultimate goal.

That mindset, combined with sort of the operations know-how that Jared brings, combined with the machine learning and autonomy smarts that Mike brings. Those three things all shape our company and shape our vision, and they reinforce each other, really, at the core.

[00:30:13] Audrow Nash: They're the pillars that everything builds on.

You have the strong business model, the great operations, and then the great technical side.

[00:30:20] Nag Murty: And you need all three to sort of come together for this to actually happen, because this is a complex business to build. But, but we feel like this is the only way to build it. To do it.

Yeah. Like, if your end goal is this moonshot, you know, large world model, which is a ChatGPT equivalent for outdoor robotics, this is the only way to do it. We cannot think of any other way, and we've tried a bunch of ways. We don't... yeah.

Business philosophy at Electric Sheep

[00:30:50] Audrow Nash: Was that the goal from the beginning? To get the large world model?

Or was it like, as you're in it, you're like, we could do this? And then it's like, maybe this is feasible? Or how, how did that come around?

[00:31:02] Nag Murty: I think the goal was always... we knew the value of data, right? Like, it was clear that data would sort of lead to an AI revolution. The question was, how do you time it, right?

And what business model do you need to?

[00:31:16] Audrow Nash: Oh, definitely. Yeah. The timing is the really tricky thing. Right?

[00:31:19] Nag Murty: Exactly. But then you can, you can sort of abstract away the timing, and this is what we started with. We said, look, the technology is going to catch up. You know, all of this is going to catch up. But if you solve for profit and revenue, and if you can solve for a profitable way to get the future on your side, exactly, then, you know, whatever is the latest advance in technology can sort of be folded into what you're doing. Right. So you don't take on too much technical debt at the beginning by committing to, like, you know, this one approach, right? Profits are always good. Like, revenue is always good, right? Deployed robots are always good. So solve for that, and then all these other things kind of sort themselves out.

That's always been our philosophy, since day one.

[00:32:03] Audrow Nash: Yeah, I love that. It is aggressively pragmatic, as you were saying. I think that's wonderful. Cause, yeah, if you just have the business alive... like, if you just keep it going for a bunch of years and it's doing well and you are making profit, you can always invest in whatever you think the best thing is the whole time. And that's a wonderful thing. Versus going and heavily betting on one technology and not optimizing for profit. Like, this is going to be big, and one day it's going to pay off. And then, I don't know, funding dries up like it is now. Or any of these things... you don't find good customer fit right away, whatever, and then the company dies. So just optimizing for profit seems really, really good to me. And Mike, when did you join?

[00:32:51] Michael Laskey: So I joined about a year and maybe, like, three months ago. Wow. Yeah. And I remember in the interview, Nag was like, I want you to make the mind of a sheep.

And I was like, seems like a really interesting, like, place to join.

[00:33:11] Audrow Nash: Okay, hell yeah. Just, okay, so that's the context. So you've been working on this now for a year and three months. Tell me a bit about... so we were talking about the sim-to-real gap for this. So we, we generated some data from the first two years of trying things.

And then, so you add that to seed things with, we have examples of mowing and it working. How, how do we get from there to more sophisticated behavior? How do we fold that in so that you have these learned controllers effectively? Like, how does it all work?

Sim-to-real + long tail events

[00:33:47] Michael Laskey: Yeah, so I think with the data that we started with, you start realizing, like, the breadth of the problem, the scope, right? Because you've seen the diversity of mowing.

It's a lot more complex than what you might think at first, especially when you're trying to solve it generically with no teaching. But just given my background, I was always drawn towards synthetic data to start, you know, building models from. There's a few reasons why. First, pre-training on synthetic data can be a nice advantage to a model, but then you actually have the model go out and learn from the real data, from the interactive data.

So that's generally the paradigm we've been taking now, is using like the landscaping companies that we acquired as like an RL sandbox to like refine the model and teach it about the long tail. But where we start is...

[00:34:40] Audrow Nash: What do you mean, the exceptions? You mean the things that don't happen frequently?

You're... What's the long tail?

[00:34:46] Michael Laskey: Yeah, the long tail of the real world. So, what happens is... Like a kid's toy being left in the lawn or this kind of thing? Kid's toys, like, interesting properties where you have very unclear boundaries of, like, grass. One of the properties that we're actually looking at, there was grass that led right up to the pool side.

So the robot would actually have to know, oh, I can't put my casters outside the edge, because I would fall down into the water, right? And that, yeah, exactly. Like, the interactions with the physical world and that world model we didn't necessarily cover in simulation, right? Thinking, like, we would be mowing right next to a pool and it was a complete, like, drop-off.

So you can never really... I think, you know, Karpathy has a really good quote in his Tesla talk, of, like, you can never really simulate the world, because you don't understand the true long tail. So you need this interactive data to cover the last bit. Because honestly, when you think about robotics, 90 percent and even 99 percent is garbage.

What you really need is, like, six nines, when you talk about these systems.

[00:35:54] Audrow Nash: Oh, I get what you mean. I wasn't quite sure what you were saying. So you're saying that if you solve only 95 percent, it doesn't really... you, you have failure all the time because of that 5 percent. If you have 99 percent, you still... if you have thousands of robots running for thousands of hours, you're gonna come up into these really uncommon events that are going to be showstoppers all the time. And that's going to be a giant pain. Okay.

[00:36:19] Michael Laskey: Yeah. And then that's why there is this long-tail problem, where in, like, a more SaaS-like framework, you can ship an 80 percent model. Like, if you actually held ChatGPT to, like, a safety-critical standard of, like...

That'd be terrifying. Right, yeah. Absolutely terrifying. Like don't mess up with the blade. You know, yeah, like.

ChatGPT

[00:36:38] Nag Murty: This is a bit of a, you know, like, an abstract sort of question, maybe for both of you guys. Like, do you see the analogy between sort of ChatGPT... you know, ChatGPT is sort of this... it's great at Q&A, right?

And there's a bit of, like, Clever Hans syndrome going on over there, where the human in the loop is sort of saying, yep, that's great, and that's not great. And so you attribute a lot of smartness and intelligence and agency to the model at the end of the day, right? Like, so you think it's amazing, but really you're sort of discarding all the bad stuff it gave you and only latching on to the good stuff it gave you, right?

Robots, on the other hand, okay, they're not as clever sounding as ChatGPT. But they have to execute a series of actions and tasks that have to converge to like high levels of reliability, right? And you're seeing the same challenges that, that are happening out there when it comes to things like auto GPT and like some of these other sort of agent driven things that people are trying, right?

They're running into the same issues, the long tail of issues that robotics runs into, but they're running into it in the software realm. And if you ask a roboticist, they'll be like, yeah, it's dumb to, like, ask an ML model to act as an agent. You know, it's just... you need to, like... I guess, put another way:

You can't just RL it or, like, expect good output, or you can't just train it to give you a good answer to a question. You have to train it to converge, to perform a series of tasks, to plan and sort of reason across the series of tasks. And that's what we mean by the long tail over here, in many ways.

[00:38:19] Audrow Nash: Interesting. Yeah, I like that. That it's, it's you doing actions over time, and it has to finish the actions before it can proceed. Whereas ChatGPT can just have a witty one-time response. Like, the response before was garbage? You just discard it, and it doesn't matter.

But this one, it's very sequential in nature when you do robotics. And I totally have seen what you're saying with, like, AgentGPT and things like this, where they try to do a sequence of things, and, like, in the middle they write a Python script that has an error in it, and then it fails, and then it's like, I don't know what to do now, failing, like, trying 10 other things, all equally failing, and then it doesn't do anything.

[00:39:02] Nag Murty: Yeah, it's a loose analogy, but that's kind of like, and I think it's, it's similar to, you know, the challenges.

[00:39:09] Audrow Nash: I agree. It's, it's interacting with complex unstructured systems. Like if it goes and tries to pull the web, the web has a bunch of formats and a bunch of exceptions. Yup. Robotics, the world has a bunch of exceptions.

[00:39:21] Nag Murty: Exactly. Exactly. That's, that may be the stronger analogy than, like... I don't know, it just stands, right? Like, one of the really fascinating examples we saw in the sim-to-real sort of translation was the, the notion, like, you know, how do you model the sun? And this was all, like, mind-blowing to me when Mike was talking about it, right?

How do you model for sun glare? And until we actually deployed across so many different geographies at different times of the day, we could not get a true sense for what the range is. So there's a lot of values you sweep for sort of modeling the sun in sim that is relevant to what we do as a company, which I thought was really interesting, you know. That example sticks with me.
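To make the sun-sweep idea concrete, here is a minimal sketch of that kind of domain randomization for synthetic training frames. The parameter names and ranges are assumptions for illustration, not Electric Sheep's actual values:

```python
import random

# Hypothetical parameter ranges (assumed, not from the interview) for the
# sun settings you might sweep when rendering synthetic training frames.
SUN_RANGES = {
    "elevation_deg": (5.0, 85.0),   # low morning sun up to near-noon
    "azimuth_deg": (0.0, 360.0),    # any compass direction
    "intensity": (0.3, 1.5),        # relative brightness, including glare
}

def sample_sun_params(rng=random):
    """Draw one randomized sun configuration for a synthetic frame."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in SUN_RANGES.items()}

def widen_range(name, observed):
    """Grow a sweep range when a real deployment observes a value outside it."""
    lo, hi = SUN_RANGES[name]
    SUN_RANGES[name] = (min(lo, observed), max(hi, observed))
```

The interesting part is the feedback loop Nag describes: deployments across new geographies and times of day reveal conditions outside the ranges you guessed, and the sweep widens to match.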

[00:40:09] Audrow Nash: Oh yeah. Mike, any thoughts with Nag's, uh, connection to GPT or AutoGPT and all those things? Or, I don't know, any thoughts on it?

[00:40:22] Michael Laskey: He's definitely right, though. Like, when you see the consequence of an action in an agent, then you really start evaluating its true intelligence. When it's these kind of zero-shot text prompts, you fill in a lot, because you come in with your sense of the world, your understanding, and it's easy to ascribe intelligence to something. But with, like, a robot, psychologically, it's, it's kind of, like, you know, when the robot fails to mow, as humans, we're like, why can't you... yeah, it's the classic, like, Moravec thing, like, why can't you do these simple things? Like, why is this so hard?

yeah, the failure modes of robots just, they hurt extra more, I feel like, in that way.

[00:41:05] Audrow Nash: Yeah, and there's risks, and it could be expensive, and like, I mean, you do a terrible job mowing, or you like, run over a sprinkler, or, like, all sorts of things, like, it can be expensive, it can ruin the client's day, like, all these things, there's real consequences.

For these kinds of things.

[00:41:25] Nag Murty: not so, yeah, sorry, not as bad as like the consequences for an autonomous, like self driving car, like, you know, yeah, yeah.

[00:41:32] Michael Laskey: A lot of it's more like just sad moments.

[00:41:36] Audrow Nash: You're like, damn it. Oh, what a pain. Yeah, for sure. Or like the robot breaks itself. Like if it, if it did just go right into the pool, it's like, Oh, got to get it out of the pool.

All the hardware's broken, whatever, for sure. Yeah, that's funny. At this scale, it's just sad. It's not catastrophic or anything. so how, how do you build in these safety systems? To what you're doing. And I don't know who's the best one to answer this.

Building for safety

[00:42:12] Nag Murty: Yeah, I can talk about it because like my background is in building medical devices before this, right?

Like, that's kind of, I guess... yeah, maybe I'll give you a bit of a segue, right? Like, I built my first company out of Stanford grad school. And that was a low-cost medical devices company that, you know, has been in the market for a long time now. It's saved, like, 300,000 lives across 22 countries.

And it's been a great sort of learning opportunity to really understand how do you build safety-critical systems. Because, you know, the devices that I designed before, you put premature babies in them. And, like, that's the most fragile life form you can think of, right? Yeah, really. You put them in, like, a device.

And, like, so this approach of, like, safety first, trying to be very sort of risk-driven, risk-minimization driven, has always been at the core of what we do as a company. And so, when we first started, right, we started by automating a lot of these... automating these large 60-inch, 72-inch zero-turn mowers, which is what a lot of our competitors are also still doing today.

Yeah, like Greenzie is doing that kind of thing. Right, yeah. And, like, great companies, right? Like, really respect these guys. The core concern that we've had, right, with these systems... and we built, like, we probably had the largest deployed fleet of these large machines two years ago. Right. And we were, we were about to scale them, like, an order of magnitude more. But then we were working with, like, a, you know, IEC 61508 functional safety consultant to sort of figure out, how does this apply, you know, when you scale, right?

When you don't have sitters on the ground who are watching these, you know, safety sitters on the ground, how does it really scale? The short answer is nobody really has an answer today, right? Because you're doing...

[00:44:12] Audrow Nash: you're going zero to one in a sense. So

[00:44:13] Nag Murty: there's right, exactly. There is no model. 80 companies have solved for this by like, you know, self insuring or whatever.

Right. Like, or, I mean, you know, daddy Google like puts in a lot of money, like, you know, there's like, there's like all that money to blow. Right. So they, they self insure and they get away with it. Tesla has taken the other approach where, you know, they put a guy behind, like there's implicitly a guy behind full self driving all the time, right?

And he's paying for the privilege of doing self-driving testing, which is incredible if you think about it, right? And so they've solved... they've addressed the safety problem that way. They always have a sitter. But then, like, companies in the robotics space, they only have, like...

This notion of, okay, let's put a SICK LiDAR... and SICK has a monopoly in this market, right? Or maybe Hokuyo is, like, some other company that does this. But let's put a SICK LiDAR, and let's cover our machine with SICK LiDARs, right? And, like, that way you can ensure safety at the end of the day.

[00:45:12] Audrow Nash: And so, yeah, they make sure it doesn't bump anything and it's, but then it's a huge data problem and everything.

Like I think of like cruise or Waymo or something, and they just have like sensor, sensor, sensor, sensor, sensor, big rack on top with a gigantic LiDAR on top and everything.

[00:45:28] Nag Murty: Exactly. So I think then we came to the realization that, look, maybe there's a different way to think about safety. Which is, you still want to build in redundancies.

You still want to go through, you know, like, a risk analysis and all that. That's a given. But then if you look at, like, how do you impact the risk: you can reduce the probability of occurrence, and you can also reduce the severity of occurrence, right? You can reduce the probability of occurrence by building in some redundancy.

So you can have a mechanical bumper in addition to, like, your stereo camera, right, to sort of not hit objects. But then you can reduce the severity of it by making these machines smaller, right? And by making them lower powered than some of these large, beastly sort of 60-, 72-inch machines.
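One way to see the severity lever Nag is describing: collision energy scales with mass and with the square of speed. A quick back-of-the-envelope sketch, with made-up masses and speeds (assumptions for illustration, not Electric Sheep's specs) for a large ride-on zero-turn versus a small push robot:

```python
def kinetic_energy_joules(mass_kg, speed_m_per_s):
    """KE = 1/2 * m * v^2 -- a rough proxy for collision severity."""
    return 0.5 * mass_kg * speed_m_per_s ** 2

# Illustrative numbers only: a big ride-on zero-turn vs. a small electric pusher.
ride_on = kinetic_energy_joules(mass_kg=600.0, speed_m_per_s=4.0)    # 4800.0 J
small_bot = kinetic_energy_joules(mass_kg=40.0, speed_m_per_s=1.0)   # 20.0 J
severity_ratio = ride_on / small_bot                                 # 240.0
```

Under these assumed numbers, the small robot carries a couple of hundred times less energy into any impact, independent of the probability-reducing redundancies (bumper, stereo camera) layered on top.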

[00:46:16] Audrow Nash: The ones that people ride on and this kind of thing.

And like ride on for industrial things where it's a huge fan beneath them and this kind of thing cutting all the grass.

[00:46:25] Nag Murty: Exactly. So then now the question is, do you need that paradigm? Because that paradigm, right, has evolved to meet the needs of efficiency when it's being driven by a human, right? Bigger, better, faster, right?

Because there's a dude driving it. Right. Robots don't need that paradigm, right? Because you can swarm robots, and you can have a lot of little, like, lighter, electric... basically, yeah, electric sheep. So it's safe. Exactly. Safety is driven by a change in how you think about form. That's the key thing.

Interesting. I like that, you know. And then you sort of layer in redundancies, which you always have to, you know. But that's kind of it, really, at the end of the day. And, and that also explains why we took this sort of turn as a company. Because we're like, we can scale a hundred of, like, the big machines, but really, is there a future, right?

For, like, these? Yeah, it's clearly, like... the writing's on the wall that machine learning, cheap cameras, small robots, that's going to dominate. So let's just go that way.

[00:47:37] Audrow Nash: Interesting. Yeah. I like how you've looked at a lot of forces that are kind of shaping where technology is going and what's working, and picked what you think to bet on, like stereo cameras.

But, so, okay, safety first. You've basically made it so that the form is much, much safer. And so your robots, just looking at the videos online, they're basically like push lawnmowers that someone would have. And it's, like, an electric robot push mower, basically. They even have the handle, so you can flip it up and someone can use it like a regular lawnmower, I suppose.

[00:48:15] Nag Murty: It's more... the handle flipping over is really just to help the robot be driven from one site back to the van. You know, it's for UX, pull it back on a cart or something. Yeah. We don't, like, encourage... we don't even design it to be, like, dual use. Because... one assumption about it...

No, no, no, no. That's not the idea at all. You know, because, like, these robots are already fairly good and autonomous. Like, they do 80, 90 percent of a given yard. So, you know, we don't really need, you know, this thing to double up as, like, a secondary mode. Yeah.

[00:48:55] Audrow Nash: Okay. So, that's awesome. I love, I really, I feel like I need to think more about the safety driven by change in form idea.

I feel like that's a very interesting... it's like you just change what you're doing, and that changes everything about the risk, and that changes the safety. I think that's a really cool idea. I'm going to have to think more about that. Let's see. Mike, so tell me about safety from the, like...

I don't know, the AI side of things, or how you put things in place in addition to maybe the AI controller. I don't know if you have, like, here's the AI part of everything, and then on top of that you have a thing that says, whoa, whoa, whoa, terrible idea, or something. Or how do you... from a software architecture perspective, I guess, how are you thinking of safety?

Safety with AI

[00:49:54] Michael Laskey: Yeah, that's a really good question. So, there's definitely... like Nag said, we reduced the probability of a severe, life-threatening accident down significantly. And then, from that, you know, we then said, okay, well, let's make the camera the primary source of safety. And this is where it's going to get a little complicated, my thoughts about different types of vision problems and the robustness you obtain.

So, when you look at the safety system of a push-mowing robot, right, what's your number one concern? It's going to be running into people, right, running into a kid or some other human. Now that is just pure, like, collision avoidance, and that's what we would call more, like, low-level vision, in a way.

Where, yeah, like, depth sensors, like stereo, are trying to just solve those problems. And it's actually not a high-level vision problem. You don't have to consider semantics of the world. You just have to think about... when you look at the fundamentals of stereo, you're just looking at, like, where does this pixel match this pixel?

So you're actually just doing, like, a mapping of two images, so you don't have to think about, like, true semantics.

[00:51:07] Audrow Nash: Yeah, so you're inferring depth and you're using that depth to not hit stuff, I suppose.

[00:51:11] Michael Laskey: Exactly. And this is actually something that, in the literature, it's been shown time and time again: if you pre-train on synthetic data, the raw point cloud is actually extremely robust.

So you can get away with a lot. You're not going to get, like, LiDAR, you know, precision and accuracy, but your ability to estimate core shape, especially at a distance of, like, two meters in front of you, is very reliable. We haven't really seen this ever fail, testing now for over a year on our platforms.

[00:51:44] Audrow Nash: Do you ever like someone has like a metal fence or a glass? Wall or something like this or something like that. Are those pretty easy to detect still?

[00:51:55] Michael Laskey: Yeah, and actually with like Learn Stereo, especially when you train on like heavily domain randomized images, I've had papers before that show you can like precisely manipulate glass objects with robotic grippers.

That was some work coming out of like TRI, but today we run it next to fences. There's not a lot of glass in the outdoor world, sadly. But, you know, cars and things, which are kind of optically tricky, it works very well on. we've never had an incident where it like ran into an object. Cool.

[00:52:29] Audrow Nash: Nag, you look like you have something to add.

[00:52:31] Nag Murty: No, it's just, it's just interesting, right? Like, when you design for safety, the standards really call for, like, do no harm when it comes to living... like, it's specifically humans, really, right? They don't even mention dogs and cats and whatnot, right? So then, this is really a question of...

Sort of what you're designing towards. For us, sort of the do-no-harm is basically do no harm to living objects, right? The non-living objects, you can easily sort of RL your way through, right? Like, with more and more data points as you go. So from a product perspective, it's really not as relevant to be solving for, like, you know... even though it works pretty well.

Yeah, we have to solve for the living things. And then, you know, it does do pretty well on the non-living stuff, and where it doesn't, you can always improve it. But that's the minimum bar you want to meet when you're considering safety.

[00:53:30] Michael Laskey: I will say it's more like the animal world, not necessarily plants.

we've, I mean, things that we definitely.

[00:53:39] Nag Murty: Yeah. Okay. Plants are

[00:53:41] Michael Laskey: also living. Yeah, yeah. Like, plants aren't gonna count, but, like, yeah.

[00:53:46] Audrow Nash: Yeah. I guess

[00:53:46] Nag Murty: technically living. Okay. I mean, we, we, we are cutting a plant at every single second of operation, like grass. So, like , I mean, like, sure. Okay. Gotta draw the line somewhere.

Yeah. Yeah.

[00:53:59] Audrow Nash: Don't want to kill all the roses in the yard or something too.

[00:54:02] Nag Murty: There we go.

[00:54:04] Audrow Nash: Okay. So do you have, in terms of architecture... is it like, here is the front-level analysis of the sensor data? Like, the first thing it gets to is, like, is this alive? And to do that, you go, has it moved? Or is it warm?

Like, if you had... I guess you probably just have your stereo one. I'm imagining if you had, like, an infrared one... oh, I'm getting the Apple thing.

Apple’s surprise new features

[00:54:29] Nag Murty: That's funny. How do you do that?

[00:54:32] Audrow Nash: It's hand gestures that make it do things. Let's see, now that it's doing it... look at this. I am unable to turn this off.

So now it just is here, but

[00:54:45] Nag Murty: Where you have AI agents everywhere and they're like, you know, oh, can I get you this like did you ask for that?

[00:54:52] Audrow Nash: I know, I know. It's so silly. Also, like, if something is bad, let's see if it'll do it. Oh, it's not doing it. Oh, there it is. You can do all sorts of things. It's like a device level, too.

It works in all of the video things. It's not, but anyways. Yeah, crazy. So, I know, I can't believe, the update, I didn't even know. It just, like, I was talking to a friend, and I, like, did, like, a thumbs up or something, and it showed some emoji with that. It's like, ugh. I guess this is what it is now, but so you're doing like a detecting life thing.

Are you doing it right at the beginning in a sense? So like the first thing is don't hit living things and maybe that has some semantics built into it. Like here's a human pose detector or just this thing moved or how do you deal with this?

[00:55:47] Michael Laskey: Yeah, so we don't explicitly, like, have a life detector. That'd be kind of... that'd be interesting to go down.

But, you know, you can start off by just looking at the geometry of the world, right? So, like, how big are humans normally? What size of objects do you know you can detect as, like, a geometric problem? So can you just say, like, there's an object in front of the camera?

That'd be step one. Because, you know, in this space, the robot should never mow into something that's, like, camera height, for example. So, like, that's gonna pretty much eliminate anything that can stand taller than, like, half a foot.

[00:56:29] Audrow Nash: Okay. Yeah, so you don't need to do too much special for this. You just say, don't run into things that are, like, camera height and whatever.

[00:56:37] Michael Laskey: Yeah, so that's gonna... I mean, that's what's nice about a lot of these tasks is, like, you can do pretty simple collision avoidance. But then it... Go ahead.

[00:56:48] Audrow Nash: I was going to see how that... You go.

[00:56:51] Michael Laskey: Well, okay, so that's, like, your, your foundational level. But then it does get more challenging when you actually think about the semantics of mowing.

Because, like, what stops our robot from just running into the street? And that could also cause, like, a huge thing. Here the robot now needs to know, oh, I can mow over this, and I need to turn around on this property. And that's where the more advanced ML is coming in, like, high-level vision.

[00:57:23] Audrow Nash: Gotcha. I think my wife and dog just got home. He's at the door. Let's see. Well, that sounds cool. So, I feel like there's a lot to discuss with this. I really want to get into more of, like, the business model that you guys are going to. But is there anything else? I mean, it seems like a wonderful approach that you guys are using.

Your ES1 system, you're learning a lot. You have simple heuristics that are robust for don't-hit-stuff and things like this, and then you have a lot of physical-world understanding that you're developing. Actually, one thing before we go on to the business side: tell me more about how to generalize for other tasks.

Like, how do you imagine this mowing data teaches us some sort of physics or navigation or whatever it is that we might have? And how do we extend that to snow plowing? And how do we extend that to picking apples, eventually? Like, how does that go? And whoever, whoever wants to jump in.

Generalizing learnings for other tasks, like snowplowing

[00:58:31] Nag Murty: It's Mike.

[00:58:33] Michael Laskey: Okay.

Yeah. So, okay, let's think about, like, snow plowing, for example. It's very similar to push mowing, right? You have the understanding of, like, what can you move... your, you know, plow is basically like a blade, but it's now kind of vertical. What can you move that over versus not? So there, the idea is, like, deep awareness of collision avoidance.

So, like, understanding small obstacles, manhole covers. All of that comes into play. Then there's localization. You obviously are trying to lay down stripes efficiently. All of that also comes into play. And then there's the semantics of, like, where is snow versus not. And that is very similar to, like, where is there mowable grass versus not.

Snow plowing, I think there's some nuances there that are gonna refine the model, because you have significant occlusions, right? You're gonna have to have somewhat of a prior in some spaces, or be able to detect, like, the edge of boundaries. Very hard. Yeah. Yeah, I think that's gonna be one of the key challenges. But when you look at how humans do it, it's very possible that with the central stack we have, these could be done.

So we feel like the intelligence is capable of doing this. We just have to, like, test, refine, and build. And then if you want to get into, you know, more dexterous things, like, let's say, apple picking: a lot of it is just, like, geometric motion planning, semantic awareness, understanding, like, how do you move your arm in free space?

But then there's also the contact physics, right? Like, knowing how to grasp something and pull it. I'm pretty hopeful because, you know, coming from UC Berkeley, one of the papers I worked on was, like, the Dex-Net project, where we used simulation to learn grasping. So you definitely see there's already, like, huge amounts of research pointing that simulation can learn contact physics to a pretty reliable degree.

So I don't think that's impossible for, like, our world model to produce.

[01:00:31] Nag Murty: Yeah, I think we... No, just one more thing, right? Like, a lot of this needs to be evaluated also from a product lens. I think this is where, you know, the question is, like, what is the true... you care about generalization, sure, but across what dimension, right? Wait, did we lose Mike?

So, you know, the one thing I was sort of, you know, going back to was this notion of, like... as we build this kernel, right? It's already pretty generalizable to a lot of surface coverage, even without going to something as complicated as snow.

So you take things like weed treatment, right? Or, like, fertilizer application on a lawn. Like, that's exactly the same motion on turf. That's something we're able to do right away today. Right? So that's a task that you get paid for, and it's a very high-margin task. A lot of interesting angles there.

[01:01:36] Audrow Nash: So you're saying you'll be pulled by different markets as they are, like you'll basically evaluate which ones are the good things to go into, in addition to which ones you're well suited for. So it's a pragmatic choice, basically.

[01:01:49] Nag Murty: Yeah, it's always a pragmatic choice. And you know, it's, it's, these are the things that we do today, right, as part of our existing business.

We're already collecting data ahead for some of these other tasks that we need to do, right, with that view. And what's also interesting is, like, this 3D point cloud that Mike talked about, it's already generalizable to things like, you know, trimming and blowing, which are sort of very different tasks than pure surface coverage.

But this point cloud is already able to predict, you know, sort of the attributes. You can still use it. Correct. So, all together, these sort of more than add up to, like, 30 to 40 cents on the dollar of revenue. So if you're making, like, a dollar of revenue in landscaping, all these activities, right, mowing, blowing, trimming, and then weed treatment application, they will total up to more than,

I'd say, at least, like, 40 cents of labor cost when it comes to your own business. So now you're effectively talking about sort of 40 cents of automation impact with just, you know, basically the model that we already have today. Do we want to go into snow? Yes. You know, should we? Like, we could probably go IPO and not even touch snow.

Right. Like, that's the way to think about it. And again, it goes back to the philosophy of, like, what should drive a robotics company, right? It has to be, like, you know, can you get a margin out of this? And if you can't, like, you know, that's research and that's not engineering. We really don't want to go there.
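The back-of-envelope math Nag walks through here can be sketched roughly as follows. The per-task splits below are illustrative placeholders, not Electric Sheep's real figures; only the roughly 40-cent total reflects the range mentioned in the conversation.

```python
# Illustrative split of labor cost per $1.00 of landscaping revenue.
# Individual task shares are made-up placeholders; only the ~40-cent
# total matches the range discussed in the interview.
labor_cost_share = {
    "mowing": 0.20,
    "blowing": 0.08,
    "trimming": 0.07,
    "weed_treatment": 0.05,
}

automatable = sum(labor_cost_share.values())
print(f"Automatable labor per revenue dollar: ${automatable:.2f}")
```

The point of the arithmetic: each additional task the same coverage model can do stacks onto the fraction of each revenue dollar that automation touches.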

[01:03:28] Audrow Nash: That's a great way to put it. Let's see,

uh, so Mike,

going back a little bit, so you're mentioning all these tasks and you kind of broke down mowing into four different components earlier. So is it like each of these components is a different model that you're kind of putting together in a smart way?

Or is it, are they all connected in some way? Are they all implicitly handled as, like, constraints in whatever you're doing to solve for your current policy? Or how do they fit? When you have these four modes, and you add additional modes when you go to weed treatment or you go to snow or whatever it is, how do these all sit together, these different capabilities?

AI architecture

[01:04:12] Michael Laskey: Yeah, so on the robot, there's one single neural network that's producing all these things. So it's just, like, a single... that's why it's kind of like a, you know, like a transformer. But I'm not going to reveal too much about the architecture. It is heavily optimized for embedded devices. So it's not like a completely, like, fat GPT.

Obviously, there's no 70 billion weights on a Jetson. But it is a single model that can produce all these representations. And I think as we scale... so right now we are very inclined to say, like, let's produce, like, human interpretable representations that a robotics engineer could use and maybe, like, code heuristics on top of.

I want to get away from that, but in a pragmatic way, to the point where you do have much more, like, learned representations that just feed directly into a learned pattern and execute tasks.
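The "one network, many representations" idea Mike describes could be sketched like this: a single shared backbone feeding several task heads, so one forward pass produces every representation at once. The head names, sizes, and the toy numpy "network" below are hypothetical stand-ins, not the actual embedded architecture.

```python
import numpy as np

class MultiHeadPerception:
    """One shared feature extractor, several task heads (names are hypothetical)."""

    def __init__(self, in_ch=3, feat_dim=16, seed=0):
        rng = np.random.default_rng(seed)
        self.w_backbone = rng.standard_normal((in_ch, feat_dim)) * 0.1
        self.heads = {  # one linear head per representation
            "traversability": rng.standard_normal((feat_dim, 2)) * 0.1,
            "semantics":      rng.standard_normal((feat_dim, 8)) * 0.1,
            "ego_motion":     rng.standard_normal((feat_dim, 3)) * 0.1,
        }

    def forward(self, image):
        pooled = image.mean(axis=(0, 1))                      # global average pool -> (in_ch,)
        feat = np.maximum(pooled @ self.w_backbone, 0.0)      # shared ReLU features
        return {name: feat @ w for name, w in self.heads.items()}

model = MultiHeadPerception()
out = model.forward(np.ones((64, 64, 3)))
print({name: vec.shape for name, vec in out.items()})
```

The design point is that the backbone's compute is shared across every output, which is what makes running many representations feasible on one embedded GPU.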

[01:05:08] Audrow Nash: Why do you want to get away from that? Cause to me, it's like making the black box bigger, which... Do you think it'll be significant performance gains, or do you just want things opaque so I can't understand anything, or what's that?

[01:05:26] Michael Laskey: I think the more, the more you apply learning, the more interesting, like, feedback... Well, okay, so if you're an end to end system, one thing that's very interesting is, like, your feedback becomes really easy, right? Because if you make a mistake and you, like, crash your robot, then all you have to do is say, hey, on that action, you shouldn't have done that.

There's no human labeling. There's no sense of, like, fine tuning on some sort of, like, weird point cloud. You can literally just scale these models and then, like, they will observe data. So the scalability becomes way higher, in a way, if that makes sense.
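The label-free feedback loop Mike describes could look something like this sketch: a physical event (a bumper trigger) automatically marks the preceding action as bad, with no human annotation. The field names and reward values are hypothetical illustrations, not Electric Sheep's pipeline.

```python
# Sketch: converting raw robot logs into training signal without human labels.
# A bump event marks the preceding action as bad; everything else is neutral.
from dataclasses import dataclass

@dataclass
class Transition:
    observation: list   # whatever the robot sensed at that step
    action: str         # what it did
    bumped: bool        # did the bumper fire right after?

def auto_label(log):
    """Turn logged transitions into (obs, action, reward) tuples automatically."""
    return [(t.observation, t.action, -1.0 if t.bumped else 0.0) for t in log]

log = [
    Transition([0.1, 0.2], "forward", False),
    Transition([0.9, 0.8], "forward", True),   # hit something: negative reward
]
print(auto_label(log))
```

Because the label comes from the event itself, the amount of usable training data scales with deployment hours rather than with annotation headcount.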

[01:06:05] Audrow Nash: Ah, yeah, because you are not labeling things. And because you're not labeling things, you're not limited by, like, people labeling things, or efficient software for labeling things, all that difficulty.

[01:06:18] Michael Laskey: So we don't have... because we find, with sim, our labeling pipelines are just in house, very small. We don't have more than, like, you know, a very small R&D labeling team. But the more you can unlock true unsupervised learning or, like, reinforcement learning, the more interesting it becomes at digesting data.

[01:06:37] Audrow Nash: Yep. Yeah. And so you're using simple heuristics like you bump it, it's not good.

[01:06:45] Michael Laskey: Exactly. You bump it, your point cloud is bad. What we're definitely starting to add to our model is, like, action heads that would synthesize things and then be able to, like, say... so basically, you can learn representations through data, but you don't necessarily have those override the, like, point cloud.

But through fine tuning, they could adjust things. The bumping into things is actually not a problem, so I don't want to give the impression that it's a wide problem. I would say the main problem right now is, like, it's actually, let's see, what's the biggest problem? It's probably just laying down these, like, perfect lines for landscapers.

Like, perfect near-zero overlaps, no matter how a human does it. I would say that's one of the more, like, core technical challenges in mowing. To do it with, like, almost zero overlap is actually really interesting.
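The near-zero-overlap target can be made concrete with a little geometry: for straight, parallel passes, the overlap per pass is deck width minus pass spacing. The 0.52 m deck width below is an illustrative number, not the real machine's.

```python
# Toy illustration of the striping trade-off: too much overlap wastes time,
# too little leaves uncut strips. Deck width here is an illustrative value.
def pass_overlap(deck_width_m, spacing_m):
    """Overlap (m) between adjacent passes; negative means an uncut strip."""
    return round(deck_width_m - spacing_m, 3)

deck = 0.52
print(pass_overlap(deck, 0.45))   # conservative spacing wastes width every pass
print(pass_overlap(deck, 0.52))   # perfect spacing: zero overlap
print(pass_overlap(deck, 0.55))   # too wide: leaves a missed strip
```

The hard part Mike is pointing at is not the arithmetic but holding the spacing at the zero-overlap edge from perception alone, where a small tracking error flips instantly from wasted overlap to a visible missed strip.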

Training to mow straight lines

[01:07:41] Audrow Nash: So, to train your system to do that, do you look at the result and kind of evaluate it?

Or is there some way for you to avoid manual labeling? so that you can train this, or how do you approach that?

[01:08:00] Michael Laskey: Yeah, I mean, that's what we're... it's actually pretty tricky, because this is an open problem at the moment. Yeah, so, like, we train heavily in sim, where you have ground truth, right? These kinds of performances.

And then it's really, like... cause yeah, you could have people kind of, like, dictate and say, like, this line was bad. How do we, like, get that in a more unsupervised way? I think it's kind of open. The robot today lays, like, pretty clean, straight lines, but periodically it could mess up, and that's, I think, the last challenge of push mowing.

[01:08:32] Nag Murty: This is actually a really interesting question again, right? Which sort of, which goes back to the whole product technology debate, right? when you sell it to landscapers, right? They will like go on and on about sort of, you know, perfect, precise lines. And, you know, and then you talk to the property owners or the property managers, they just don't care.

Right. So now the question is, like, whose problem should you try and solve for at the end of the day? Right. And there's a level of engineering, right? Yeah, exactly. But there's a level of engineering pride to get, like, these really straight lines. Great, and the research team, or in research time, should pursue all that.

They can high five and everything. Yeah, right. But it's not needed. And the other thing is, you know, okay, so what? Like, if you miss a couple of stripes, you can always swarm the damn thing, right? Like, build a bunch of redundancy into, like, how you cover, you know, the ground and be done with it at the end of the day.

[01:09:31] Audrow Nash: Yeah, it is so funny, because we can fixate on those things that are now performance measures. Like, did you get perfect straight lines? Does some mowing guy who's, like, the best mower get impressed by how it does it? Or is the customer like, yeah, here's money, I'm happy with what you did, this kind of thing? But...

[01:09:49] Michael Laskey: here's the thing right I feel like

[01:10:00] Nag Murty: That's what makes robotics such a fascinating field, right? Because you have this interplay of operational pride with, like, business or product pride and engineering pride. You don't want all of them to converge, right? Like, I mean, you want to converge them, but you don't want them to lose their independence and lose their voice.

Because then you have, like, you know, three shitty sort of, you know, endeavors all converging, versus three perfectionists who will sort of vote for their, you know, thing to happen. But there'll always be a consensus that forms, you know, at the end of the day.

[01:10:37] Audrow Nash: And if you put the legs super close together, they fall.

[01:10:41] Nag Murty: Exactly. That's a great analogy. Yeah. That's what I'm imagining as you're speaking. That's such a great analogy right there. No, it's a really nice way to put it.

[01:10:50] Audrow Nash: Yeah. And it's, I really like... I feel like each of you, so Mike, Nag, and then your, like, operations founder, I feel like you guys are probably the three great legs voting for your different perspectives.

[01:11:05] Nag Murty: That's pretty much how it works. Yep, exactly.

[01:11:08] Audrow Nash: Oh yeah. Let's see. So that sounds really cool. the simulation side is, I mean, we're just, the approach is very, very cool. one thing that I wanted to make sure we got to talk about, because Nag, when we talked earlier, there were many interesting things brought up.

And so, one thing that I would love to hear your perspective on again is... robot as a service model and why you don't like it.

Why not Robot-as-a-Service?

[01:11:40] Nag Murty: Yeah, I think, like, it's not that we don't like it. I think it's just lazy pattern matching by a bunch of software VCs, you know, who had to, like, justify opportunity costs to their LPs, who ended up saying, you know, I'll change one letter and we'll call it RaaS.

As opposed to SaaS. Instead of SaaS. Yeah. Yeah. And they all high fived each other. That's, that's basically what happened. It is a model that you do want to get to in the limit, right? Like, when you've built this sort of common sense robot that can, like, navigate the world, right?

It really understands what the world is. It's kind of like OpenAI didn't, like, start by licensing ChatGPT from day one, right? They took a bunch of time to develop it to the point where you could put an API call to it and, you know, you could make some use out of it. I think robotics, if you start enforcing, like, a RaaS-like approach,

what that leads to is you're forced to make a lot of classical choices when it comes to your stack. And it's clear that the future is not classical. Like, we should take it as an article of faith... you know, I'm not saying everybody should, but we've taken it as an article of faith that the future is not classical.

So if you buy that piece, right, exactly. So if you buy that argument, then you have to sort of say, okay, fine. Like, why would you want to go with RaaS, knowing that, you know, a few years from now, a fully ML based system is going to blow everything out of the water?

[01:13:18] Audrow Nash: Yeah. The thing that struck me when we talked earlier about this perspective was the alternative was just: have the robot do the job and charge for the job like it's any other job. So like, if I want my lawn cut, I don't care that it's robots or people cutting it, I just want my damn lawn cut. And if that's the case, what you can do is you can charge the exact way that the customer was paying before.

It's on a job by job basis, not a subscription. And, like, a big benefit of that was it's easier for the customer, and they can try you out without big commitments. Whereas with robot-as-a-service, it's often a big upfront payment, or probably is sometimes, and then there's the subscription cost, where you say, I'm gonna do six months at least and pay you several thousand dollars for supporting the robot, and this kind of thing, which is a lot more friction to adoption.

[01:14:18] Nag Murty: Yeah, I think we're going, like, even more extreme than that, right? We're basically saying, you know, buy a company that provides the service. And, you know, we're not even saying by the job, we're saying by the business, right? It's like, take the argument to its ridiculous extremes, right?

Like, okay, classical: take it to the ridiculous extreme of, like, all ML. And then take the business model, take RaaS, and take it to the extreme of saying, just get paid for the service. Right. And, yeah, I think both of them sort of work well together. And the reason why we're doing this... again, I want to say that, you know, it's not true that we'll never sell a robot to a landscaper, right?

Like, maybe that'll happen, you know, when we're, like, a couple billion in revenue, right? Ten years from now, when we have, like, the best model out there with the deepest moats, then we'll basically happily sort of destroy the competition, right? Like, that's not a problem, right? But step one is to build that moat and build it as deep and as wide as you can, right?

With the data. Because machine learning, again, it's like, to the person with the model go the spoils. Everyone else is going to build a wrapper around ChatGPT, right? But OpenAI is gonna eat everybody's lunch. Like, you know, they started to eat Midjourney's lunch with, like, you know, DALL·E 3. And you look at what happened to Jasper and, like, some of these other sort of wrappers that were built on OpenAI.

They don't stand a chance, right?

[01:16:06] Audrow Nash: Yeah, because they're not doing the thing. They're just using the thing. And so they can... I feel like Apple does this too, where it's like, oh, they let everyone make a note-taking app and then they make the best one. Not that their notes app is that great, it's okay.

But they keep folding things in that other people were doing. Let's see, I'm on...

There it is. Everything... that one I could have lived without, but maybe not, never know. But, so yeah, so you're doing the thing versus just wrapping something else. You're creating rather than wrapping, you're going heavily in that direction, and you're building that moat by getting out there, by doing the model that's easiest for customers, this kind of thing.

And you're also, just by buying businesses... like, that is a very cool step. Cause I've heard of, like, boring businesses and this kind of thing, where you can buy, I don't know, a landscaping company and then you add some technology. And what you guys are doing is you're literally doing that, where you buy a landscaping company, and then you augment them

with your robot platforms, making them even more efficient and even better able to do their tasks.

Boring businesses + robots + AI

[01:17:29] Nag Murty: Exactly. That's the goal. Again, though, the, you know, the value of the company... So if you think about when you buy a landscaping business, right, you're sort of valuing it on, like, an EBITDA multiple, right?

So you're buying it for sort of X times its earnings. But then, like, to us, the value of the company is not just what we buy it for in terms of its earnings. It's the data, and you're buying essentially the services of all the operators who can perform reinforcement learning for you on the ground.

What a clever model. That's the crux of the whole thing. And again, this is all driven by like this sort of, it's an article of faith that machine learning will eat everything, you know, going forward.
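The acquisition framing Nag describes can be sketched with rough numbers: the purchase price is an EBITDA multiple, and automating part of the labor cost raises earnings afterward. Every number below is an illustrative placeholder, not a real deal term.

```python
# Back-of-envelope acquisition math: buy at an EBITDA multiple, then lift
# earnings by capturing part of the automatable labor cost with robots.
# All inputs are illustrative placeholders.
def acquisition_math(revenue, ebitda_margin, multiple,
                     labor_share_automatable, automation_capture):
    ebitda = revenue * ebitda_margin
    price = ebitda * multiple
    # extra margin if robots capture some of the automatable labor cost
    uplift = revenue * labor_share_automatable * automation_capture
    return price, ebitda, ebitda + uplift

price, before, after = acquisition_math(
    revenue=1_000_000, ebitda_margin=0.15, multiple=4,
    labor_share_automatable=0.40, automation_capture=0.5,
)
print(price, before, after)
```

Under these made-up inputs, capturing half of a 40-cent labor share more than doubles the acquired company's earnings, which is the crux of buying the business rather than selling it a robot.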

[01:18:14] Audrow Nash: I wonder, I mean, it seems like it probably will to me that just like all the advances we've been seeing.

And I mean, like, even... so you look at, like, perception and now the way everyone does it. It was classical when I started robotics, and now it's all learned. Like, it's all perception that's running through AI, which is totally nuts. And then you get a pose estimation or whatever it is out of it, and they're all learned.

[01:18:44] Nag Murty: You know, what's also interesting is, like, it's not like we discovered this religion, you know, in isolation, right?

It's because we tried a bunch of other approaches, and we tried to scale hard with those approaches, right? Like, I built, like, the first, like, GPS based mower in my garage, right? Like, I built it on, like, a drone autopilot stack. I learned, like, literally, right. And we got our first, like, you know,

couple hundred thousand dollar contract on the basis of this rickety thing that I'd built in my garage. It was crazy, right? And we're deeply aware of the painful sort of aspects of these classical systems, whether you're talking about GPS. Then we found lidars, and we're like, okay, this is amazing, right?

You can map the whole world, you can teach a path. But then you start deploying those and you realize how brittle that approach is, because the environment is changing all the time, right? Unless you have semantic awareness, you don't have any hope of solving, like, a robotics problem in the unstructured outdoors to the level of reliability where it can actually, you know, generate, like, a significant value unlock.

and then what's interesting about machine learning is that you can get to an 80 percent good solution pretty quickly. So it's, it's not like a moonshot bet where you, you have to wait for it to become a hundred percent. So these approaches, our mowers today are generating value, right? They're fully like, they're all ML, but they're generating value.

And so that allows us to sort of plow back that value back into the business, right? So you're not trying to like take a deep hit on the cost. In any case, when it comes to this.

[01:20:29] Audrow Nash: Yeah. Cause you're staying profitable. Like you're just, you're buying a business, you are scaling and getting more data and maybe getting investment, but it's on your terms cause you're profitable.

and it just, it seems like a really good way to proceed to me. Like the whole thing of, cause one thing I really have, I've seen so many companies like, so I've been podcasting for, I think it's almost 10 years now, which is nuts. So I've talked to a ton of companies for this. So Sense Think Act, but then RoboHub before that.

And so I've talked to... and the whole time I was interested in startups. And I've talked to so many companies, and a lot of them are out of business, which is nuts. And a big reason that that seems to occur is they take on a bunch of investment and then they go and try to find a market or something like this.

Maybe they found one, but then they are forced to become profitable or they're forced to try to cash in so that the venture capitalists can cash out on some timeline. And a big way to fight against that in a sense, like you want investment to scale and you want it for the connections and things like this.

So it's not all bad. But by having revenue, by having kind of, like, a very good long term plan, you put yourself in a much better position to accept investment on your terms. And by buying businesses that are already profitable and then just making them a little more efficient... like, that to me seems like everything is going to be on your terms.

[01:22:12] Nag Murty: Exactly. On your timeline, your terms. And, you know, not everyone can raise, like, a billion dollars in charity investment like OpenAI did, right?

Like, we've got to figure out, like... They're all for profit, though, I think. I don't know exactly about their situation. Well, yeah, they pulled a for benefit thing. But what's interesting also is that, you know, OpenAI could, like, scrape the web and get all the language data that they could. Now they can watch YouTube and they can build multimodal LLMs, right?

But where are you going to get the data to build robots? Yeah, it doesn't exist. And sure, Google and DeepMind and all these guys can, you know, like rally together and like collaborate to like, you know, build this massive dataset. But it's not enough to build a dataset. You need to have like boots on the ground to do that sort of reinforcement learning and provide the feedback.

So, without a boots on the ground approach, there's no hope in hell to build outdoor robotics at scale.

[01:23:09] Audrow Nash: And especially if you're not in Google or whatever, some huge company that can fund all this. Like, if you want to do it as an individual, you need to probably do this approach. I'm sure you've thought about this quite a lot, um, and this seems like the way to scale.

You have to bootstrap.

[01:23:27] Nag Murty: Yeah, it's the way to fund yourself and, you know, and honestly, there's another angle to this, which is you end up building something, like robotics is a workflow optimization problem and it's a people plus machine optimization problem and the only way you can solve this is by sort of injecting robots and trying to inject robots in many different ways into an existing operation, right?

Because it's hard to just say, here's the perfect machine, I'll drop it in and, like, boom, you know, everybody's high fiving each other. That doesn't happen. Yeah. You know, it's very hard. And so it's almost like you've evolved this robot to fit like a glove into your existing operation.

And it is, the data engine is not just for ML. The data engine is also to influence all other aspects of your product. you know, and so it, it's, it's just, it's just like a rational way to do it. Yeah, exactly.

[01:24:27] Audrow Nash: For sure. Yeah. It's a very cool idea. So Mike, building this data model for all sorts of things, like, how are you doing that?

Like, how are you collecting lots of data? How do you make it so things can generalize from the mowers to the snowblowers, or the snow pushers, whatever it is, all the things? How do you organize... how does all this data work together, I suppose? And I'm sure that's a very interesting challenge.

How do we build the data model?

[01:25:02] Michael Laskey: Yeah, I mean, so, I think we have to, the way we're kind of structuring is like, we're trying to build, like, an intuitive physical model of the world for the robot that can have, like, key concepts, like, I know where I am, I know where I've been, I understand the semantic 3D structure of the world, I understand what I can move over and what I cannot move over.

Those core concepts, they apply to all the tasks you just mentioned. So what we try to do these days is, like, you know, this year we shipped the mowing product. Next year, we're going to want to see, like, okay, let's test our stack and do something like trimming or edging or blowing. We'll take the same model and then put it on a different platform and have the same basic deployment cycle with that.

But we keep it being trained as the same model, and the same model should run on every robot. I think that's very feasible, especially because, as Nag said, a lot of these tasks are so similar that you would have to change almost nothing to do different tasks right now. So cool. Yeah, and what's gonna get really interesting over the years is when we get into, like, dexterous manipulation. When you look at landscaping, it's almost like this goldmine of mobile manipulation in outdoor unstructured environments.

Like you actually probably couldn't think of a better benchmark than landscaping because every day you have a crew like, you know, right now we have a hundred people go out and do complex manipulation in random parks, random cities all over the country, right? So, like, that's not, you're not gonna get that, like, Google, they have, like, maybe 10 people demo to the robot in a conference room.

Same conference room, same robot every day. The type of data that we're getting is very much, like, real, unstructured mobile manipulation in the wild, and doing complex tasks. Like, if you ever see someone tree trim, that's insane. If a humanoid was able to climb a tree and cut a branch, that would be like a huge AGI moment.

Oh my god, yeah. Yeah, so these are some of those complex tasks that you see people do daily. And we can collect and train on that data, which is really interesting.
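The "intuitive physical model" Mike outlined earlier (where am I, where have I been, what can I move over) could be sketched as one shared world representation consumed by many task policies. Everything below, names and structure alike, is a hypothetical illustration of that separation, not their actual software.

```python
# Sketch: one generic world representation, many task policies on top of it.
# All names and the toy policies are hypothetical.
from dataclasses import dataclass

@dataclass
class WorldState:
    pose: tuple        # where am I
    visited: set       # where have I been
    traversable: dict  # cell -> can I drive over it?

def mow_policy(state):
    # cover the nearest traversable, unvisited cell
    frontier = [c for c, ok in state.traversable.items()
                if ok and c not in state.visited]
    return f"go_to {min(frontier)}" if frontier else "done"

def snow_policy(state):
    # identical structure: the representation, not the task, is what's shared
    return mow_policy(state)

state = WorldState(pose=(0, 0), visited={(0, 0)},
                   traversable={(0, 0): True, (0, 1): True, (1, 0): False})
print(mow_policy(state))
```

The design point is that new tasks reuse the same perception outputs, so adding snow pushing or weed treatment changes the policy layer, not the world model.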

[01:27:09] Audrow Nash: Huh, so for people to do these manipulation tasks and everything, are you just having your crews wear cameras? Or like, what's the, or is it just your robots are going out, is that what you mean?

But eventually, maybe you'll add more sensors around the crew, so you can get data on them?

[01:27:27] Michael Laskey: The number one data right now is like they bring the robots with them and deploy them every day. So we're getting that like interactive point of view perspective of the robot. as we start scaling to like dexterous, you would have to ask the question like is the robot platform we're trying to do good enough that you can just do like RL type data, like interactive feedback, or would you want to have passive data collection?

That's gonna be the challenge. But then you can also, like, you can use these people as basically ops, right? Like, every robotics company has an ops team. Our ops team is likely the biggest ops team, because we have, like, entire crews of people doing this work for us, right? Yeah.

[01:28:09] Audrow Nash: It seems a lot like what Tesla did with their self driving for everything, which is they just kind of outsourced it and then they get the benefit from millions or billions or I don't even know how much, driving data.

Billions of miles of driving data.

[01:28:25] Nag Murty: It's, it's what every company has done though, right? Like if you take a look at like, like Facebook and Google or whoever else who sits on all these like massive data sort of, you know, data, what a units, Which, you know, people buy to train their LLMs that was generated by users who sort of paying in kind that data for the privilege of using a Facebook or a Google or a Reddit or a Twitter, right?

So it's the same thing. Tesla's basically doing the same thing for the physical world, and we are doing the same thing for landscaping. That's super cool.

[01:29:01] Audrow Nash: Yeah. What a cool connection. So where do you guys see yourselves going in the next, like... What's the future, you think, for, like, I don't know, five years, ten years? What do you think? Maybe start with Mike?

What do you think? maybe start with Mike?

Getting into dexterous tasks

[01:29:22] Michael Laskey: Yeah, I mean, so I'll speak more technology wise. I think what we're wanting to get to is like, into the dexterous tasks, right? Like we've definitely, where we're at today, I feel like surface, mechanized surface coverage, is obtainable within the next couple years.

And then we really want to have hardware platforms where we could do dexterous tasks. and I think you're seeing, with hardware it's actually really interesting because you're just seeing so many prototypes come out of like hardware platforms. so you could, you might even have like some sort of commodity thing that can sort of move in a dexterous way and then apply AI to it or we build our own.

which we're very capable of. I think we have a pretty decent in house like hardware team these days. and then, we want to get to the scale though, where it is like, you know, every state we're in, every like, county we're in, and our robots are learning from every data possible. and if you look at, like, this space, it's very possible.

You have so much fragmentation in the industry, it's just waiting to be rolled up in a massive way.

Building a monopoly

[01:30:29] Audrow Nash: Oh yeah, what do you think?

[01:30:32] Nag Murty: I think it's like in five years, it's, you know, given like our sort of acquisition pipeline and sort of the state of the industry, it's very easy to see getting to something in like the billion dollars of revenue range when it comes to rolling up this industry.

And then I think we'll have this really mature model, and, like, these robots that are fully baked, right, for our own operations, which we can then start opening up to, like, you know, other people in the industry. But really, it would be awesome to build a monopoly. And just, like, you know, just take the whole industry.

Cause I mean, yeah, like, yeah, deliver the service, build your own robots. It doesn't get like more sort of, you know, a bigger moat than that. Sort of take the industry. Yeah. Yeah.

Advice for our new world of AI

[01:31:23] Audrow Nash: Cause, I mean, related to that, I would love to hear what advice you have for people, other than join your company. So with this kind of new world of machine learning and robotics, and data being very valuable,

how do you think someone can skill up, or best position themselves, or start a company? I know it's a super broad question, but what do you think would be some of the most important things for doing well in this kind of new world we're in? Maybe you start with Nag this time.

[01:32:09] Nag Murty: Yeah, I'd say it really comes down to like people from different disciplines really need to talk to each other.

So if you're a roboticist, like don't talk to other roboticists, go talk to like the business folks. Right. and if you're, yeah. And vice versa, right? If you're someone in the PE space or someone in like the VCs, I don't know, they've got their own sort of, you know, their own sort of swim lanes that they swim in.

But I think robotics is going to benefit a lot from, like, the PE folks talking to the roboticists. Again, there are reasons why it'll never happen, but I wish that, like, more of it would happen.

[01:32:49] Audrow Nash: How do you... what are some of the reasons, and how can you kind of best them? Or, like, what... cause individuals can do things.

[01:32:55] Nag Murty: Individuals should and will talk to each other, right? Like, it's already happening, right? So you have, well, it's already happening for non robotics things is what I mean, right? Like where you have PE companies, you know, that are looking to inject like LLMs, LLM agents, you know, their operations. We use LLMs to sort of, you know, do bidding on like new job sites and things like that.

So there's... yeah, it's fun how all that is shaping out. But what happens with PE is typically they don't know how to evaluate technical risk, right? It's like, I think the world today is sort of siloed into people who know how to evaluate technical risk and people who know how to evaluate financial risk, right?

Private equity is, like, financial risk. So they know how to take a mature business and then inject a bunch of debt into it and then blow it up to be bigger. The VCs come in, and they know how to evaluate technical risk, right? But for anyone who's trying to build something similar, or wants to, like, you know, explore more along these lines, it just feels like there's got to be sort of a way

where these two models can merge for robotics to really take off. Because otherwise you're going to have the same sort of repeated cycle of companies that go public through SPACs. And, like, you know, I don't want to name names here, but they're trading on, you know, the stock exchange, and they're worth less than the cash they have on hand, which is really weird. And this is because they're massively cash burning entities who are selling robotics and AV and whatnot, but they really don't have a way to generate cash flow.

because these things are so, yeah, exactly. And that's why Tesla is what it is, and, like, you know, any other RaaS company is what it is, at its core. And so really there needs to be a rethinking of how you think about what kind of business model robotics will need, and for people to start talking, you know, with each other to sort of evolve that. Because left to itself, like, the pattern matching is sort of assured. Yeah, it's just going to be pattern matching all the way down. And it's really not, I mean, really, yeah, it's not gonna lead to anything exciting. You know, you're not gonna change the world with, like, you know, these same repeated ways of thinking.

It's not gonna happen.

[01:35:25] Audrow Nash: Yeah, for sure. So do you have any, like, great investment thinkers or anyone to recommend for this kind of thing? Like, I mean, I'm in technology and I know very little about private equity. I talk to a lot of people through podcasting, but who to learn more from, who to become like, or even, I mean, you guys are, in my opinion, from this interview, a great example of kind of merging these two, but.

Who else can we learn from?

[01:36:00] Nag Murty: It's, yeah. So I think we should make a distinction to say, you know, private equity, I don't think, like, you can have a private equity company that merges with a technology company tomorrow, right? Like, there are different fund structures and whatnot, so there are logistical challenges to doing that.

But I think there are examples out there, like, of companies that are building these vertically integrated solutions, like an Amazon or a Tesla. And the way they've approached building their business is what I think, like, we can learn to emulate. By focusing on cash flow, right? By sort of asking yourself, like, what are you getting paid for?

And, like, where is the margin accruing from? And then what are the jobs to be done? And what do you truly need to automate? And what can you automate, right?

[01:36:46] Audrow Nash: Yeah. Okay. I really like that. I mean, it's just simple pragmatism in a sense.

[01:36:52] Nag Murty: Cash flow. Exactly. I mean, follow the cash flow, right?

Like, follow the cash flow, but also, like, you have to aim for the moon here, because again, classical methods are not going to solve it for you. But find a way to, like, link the two in a way that works for your industry. It may not work for all industries, right? I think it works for landscaping. You know, we're growing, we'll find out, right, as we grow further.

We can certainly tell you how not to do things, and then hopefully, like, in another year or so, we can tell you, like, you know, how to do things really well, so.

[01:37:24] Audrow Nash: Yeah, for sure. It's iteration. And, Mike, how do we keep up with all this LLM stuff, all of the large language models, all the AI? How does one, I don't know, operate in this, or skill up, or just do well in this coming world, if machine learning is going to eat everything, or everybody's lunch? How do you be a part of that?

[01:37:53] Michael Laskey: Yeah, I mean, keeping up with it is definitely a chore, you know. Like, your week is reading papers. It's a very fast moving field.

On the pure software side, I'm definitely not an expert in that. When I think about robotics, though, like, how do you do ML in robotics? How do you actually, like, build a career and, like, actually ship these products? The one lesson I've learned, and this is actually why Electric Sheep is really appealing, is, like, it's really easy to overestimate timelines, or, like, be overconfident and say, like, yeah, tomorrow the robot will work, or, like, the next day the robot's gonna work.

These are very subtle problems that require a lot of iteration and feedback. So you really do want to think about, like, what is the cash flow of your business? What's the runway? And what is the realistic timeline for these ML products to actually deliver value? And then plan businesses in a way that gives you the R&D runway to actually, like, scale and ship product.

[01:38:58] Audrow Nash: Yeah, I really like that. So keeping the R&D runway, because that's how you are differentiating yourself. But you need to keep cash flow positive so you don't have to do things on bad terms, or just get bought for way less than you're worth by some company because you're in a bad moment and you ran out of runway, or all these things.

What, Mike, do you have any other, like, I don't know, advice for learning about AI and ML and applying it, or how do you get really good technically, I guess?

[01:39:39] Michael Laskey: I think for robotics, the number one thing that's really guided me is, like, put it on a physical robot and actually put it out there and see the performance.

The benchmarks, they matter, but what really matters is how the robot's doing, because that's going to be your ultimate benchmark. Like, does it really work on a robot? Can you see the physical embodiment of the AI? That is your ultimate, like, North Star when it comes to shipping robotic products that actually work.

It's really easy to get lost in, like, the Twitter esque sphere of, like, what's hot, but when you really see it fundamentally on a robot and delivering value, I think that's when you know you've actually, like, done ML research in a cool way that's exciting.

[01:40:24] Audrow Nash: Mm hmm. Yeah, when you've actually provided value using these methods, it's awesome.

Okay, well let's see, we've gone very long. It's been a blast talking to you guys. Do you have any, links or contact info you'd like me to include with the episode?

[01:40:47] Nag Murty: it's just, we have a new website, right? So, shiny new website, like you should include that.

[01:40:53] Audrow Nash: Yes, for sure. Of course, the shiny new website will be included.

It's great.

[01:40:58] Michael Laskey: Yeah. Much more information. We're doing a lot of PR soon, so there'll be more stuff, but for now, just the shiny website. Yeah.

[01:41:07] Nag Murty: And then, my email is nag at electricsheep.company, and Mike's is michael.laskey at electricsheep.company, so. Both of those, you know, we can provide you, and, yeah, so that people can sort of get in touch with us there.

And then, yeah, we're based in San Jose, you know, and we keep, like, doing demos all over the Bay Area. So anyone who wants to come and hang with us, we'd love to, or when you're in the Bay Area, you should come and hang with us.

Hanging out

[01:41:37] Michael Laskey: Yeah. You can literally just pick a park and we'll show up.

[01:41:44] Audrow Nash: Yeah. I'll reach out, once a quarter I'm there. So I will reach out and we'll hang out.

[01:41:49] Nag Murty: Yeah, absolutely.

[01:41:51] Michael Laskey: Where are you at? Where are you living right now?

[01:41:54] Audrow Nash: San Antonio, Texas. I was in San Francisco. We moved like a year and last, last June, we moved here. So I like it a lot. We don't have family or anything.

We thought it was a nice place to live.

[01:42:09] Michael Laskey: Oh, cool. Do you have a massive like house? Like, is it like

[01:42:13] Audrow Nash: Relatively. I was in a 400 square foot apartment in San Francisco. And my mortgage is the same for a four bedroom house that has a yard and stuff. And we're in, like, the best part of town, or one of the best parts of town.

It's bonkers. So, I don't know, I like it here. And I love barbecue. My wife and I, we went and looked at Austin a while ago, and we had the barbecue. And we were thinking about where to live, and at some point in the meal, I was like, I'm ready to beg. I will. Like, I want to live here. I want to be close to this barbecue.

[01:42:56] Nag Murty: You know what's interesting? Like, Roon put out, like, the guy on Twitter, like, who, you know, who's known for sort of these takes on AI. He put out a tweet like a couple of days ago where he said, you know, the future is going to be sort of these immaculate sort of living spaces. I'm just paraphrasing. It's not the exact tweet, right?

Where it's, like, all these gorgeous built outdoor and indoor living spaces for us, right? At a fraction of the cost, and anyone, right, whether or not they work in tech, is able to afford it. I was like, yeah, that is such a cool vision, right? Because, like, that's the promise of robotics, and that's the promise of, like, these AI agents that we're building, right?

Just massive improvements in standards of living for everyone at the end of the day, which should be insanely cool to make happen.

[01:43:44] Audrow Nash: I mean, even also just, like, easing labor shortages and stuff in the short term. Yeah.

[01:43:50] Michael Laskey: Which also means, like, things become cheaper, and the abundance of labor. Yeah, it's a whole new equation when you have infinite labor, or

[01:44:00] Audrow Nash: I was listening to this thing on ChatGPT, or it was the Sam Altman interview with Joe Rogan, and Sam Altman was saying how, like, their goal is to make intelligence free, which is quite cool.

[01:44:16] Nag Murty: So then it's like all the knowledge work, in a sense, becomes free, which is really interesting, and robots would do that for the physical world,

[01:44:24] Audrow Nash: which would be quite cool. Yeah. Well, we got to get towards our utopia.

[01:44:30] Nag Murty: Yeah, we got to get you like permanent barbecue. Like, you know, like that's the goal.

[01:44:35] Audrow Nash: And if you guys come visit San Antonio, it's lovely here.

The barbecue is wonderful and there's plenty of grass to mow.

[01:44:42] Michael Laskey: Yeah. I think it's, it's in the options of places we'll be going to, yeah.

[01:44:47] Nag Murty: Yeah. Hell yeah. We might be there sooner than you think, like, we're looking at a few, yeah, acquisitions in that area, so like, we'll be there sooner than you think.

Awesome.

[01:44:58] Audrow Nash: Yeah. Keep me posted. Okay. Hell yeah. So we'll wrap this up. great talking to you guys and I'm looking forward to following and seeing what you do more.

[01:45:08] Nag Murty: Likewise. Thank you so much for having us and it was a pleasure chatting with you.

[01:45:13] Audrow Nash: Hell yeah. All right. Bye everyone.

That's it! If you made it this far, you're either asleep or enjoyed the interview.

If you enjoyed it, consider subscribing. I'm not planning on keeping any kind of regular schedule, so subscribing or following me on X or LinkedIn is the best way to hear about new episodes. If you're asleep, I hope it was restful and that you wake up and build something. Until next time, bye everyone!

Table of Contents

Episode

Start

[00:00:00] Audrow Nash: Like, I think the feeling is because of things like ChatGPT that there's no reason to go into something like computer science because you're just going to get in a thing and be automated right away for this kind of thing. I think that's the feeling. I wanted to hear your thoughts on it. Like, do you think it's a good time to be a roboticist, or,

[00:00:21] Melonee Wise: I think it's a great time to be a roboticist. This is the, probably the next 50 years are going to be like the heyday of robotics.

Episode introduction

[00:00:33] Audrow Nash: Are humanoids the next big thing? How long before they take our jobs? To get some perspective on this, I talk with Agility Robotics CTO, Melonee Wise. Her answers will probably surprise you.

I think you'll like this interview if you want to understand the current technology around humanoids, what's possible, and where the opportunities are. I also think you'll like it if you're curious about how LLMs like ChatGPT will impact robotics and jobs, or if you'd like to know about manufacturing in the US, especially the challenges and opportunities.

As always, I'm Audrow Nash. This is the Audrow Nash Podcast. After you listen, I'd love to know, on X or in the comments, if you agree or disagree with Melonee's perspective on the timeline for humanoids and where we'll see them first. Also, if you want to talk about this interview or robotics in general, I host a weekly space on X on Thursdays at 9 p.m. Eastern time or 6 p.m. Pacific time. It's free and has been a lot of fun. All right, here's the interview. I hope you find this conversation as enjoyable and enlightening as I did.

Alright. Hi, Melonee. Would you introduce yourself?

Introducing Melonee + Agility Robotics

[00:01:45] Melonee Wise: Hi, Audrow. I'm Melonee Wise, CTO of Agility Robotics.

[00:01:51] Audrow Nash: Now, last time we talked, you were at Fetch. Tell me a bit about what's happened, like, your path for the last, I don't know, year and a half since we've done an interview?

[00:02:01] Melonee Wise: Yeah, sure. well, last time we talked, Fetch had just been acquired.

I spent about a year and a half, working at Zebra Technologies and then I decided to take some time off. So about six months off, traveled the world, that was a lot of fun. Went all over Asia Pacific, went to Antarctica and then South America. Then I decided I should get back in the game. And, I decided to go join Agility Robotics.

[00:02:39] Audrow Nash: would you give a little bit of background on Agility Robotics?

[00:02:44] Melonee Wise: So, Agility Robotics is a mobile manipulation company. It has a humanoid-ish form factor that is targeting tote manipulation for machine-assisted operations. So every time you order something online, it potentially passed through a tote.

And Digit, our mobile manipulation robot, is one of the robots that might be handling a tote that has something you ordered in it. Or even parts for a thing that you might buy.

[00:03:21] Audrow Nash: Oh yeah, and so a tote, it's just a small bin? Is that what a tote is?

[00:03:24] Melonee Wise: Yeah, a tote is a plastic container that can range in size quite a bit. So it can be anything from, like, 2 feet by 2 feet to, like, 10 inches by 10 inches. So the range on the size of the container is very large.

[00:03:43] Audrow Nash: Mm hmm. Okay. And so you have Digit. Digit's a humanoid. Digit is helping with these tote-related tasks. And you mentioned it's humanoid-ish. Can you tell me what it looks like?

Introducing Digit

[00:03:56] Melonee Wise: Yeah, so Digit is probably about five feet two or three inches tall. It has two six-degree-of-freedom arms, and a head with LED lights for a face.

And it has two legs. However, the leg architecture is a little bit reversed from the leg architecture that we have as people. So a lot of people would say the knees are backwards. However, because it has more of an avian leg structure, it's actually the ankles that are backwards. So it's a little bit different from that perspective.

[00:04:42] Audrow Nash: Yeah, I think of it like having ostrich legs or something like that. And it's cool, because I was imagining while watching it that it's kind of pragmatic to have the legs go backwards, because then you can have it drop its center of gravity and keep the arms in front of it, but you don't have the knees occluding whatever the arms may want to get into?

[00:05:02] Melonee Wise: Yeah. The knees don't get in the way. And it also biases the center of gravity to drop straight down, which is nice.

[00:05:13] Audrow Nash: It's got a nice squat. Hell yeah. Okay. And so how did you pick to come to Agility? Tell me a bit about that decision.

How'd Melonee come to join Agility?

[00:05:24] Melonee Wise: So when I, you know, looked out in the robotics space, there were a lot of things that I was interested in.

I thought very briefly about starting my own company, another one, and I decided I wanted a little bit of a break. And so I thought I'd take a less stressful role, as, like, a CTO, not a CEO, cause that's a very stressful role. And I spoke with a couple of different companies, and the thing that really excited me about Agility was a couple of things.

One, it was very much in a market that was similar to Fetch, so I knew a lot of the customers, I knew a lot of the customer challenges, and why people buy robots. So it helped me really understand whether I believed that the product had product market fit and whether people would even want to buy the thing.

Two, Agility's product is relatively mature. They have physical robots. They have been working with customers. I didn't want to start with a company that was still in the does the technology even work, you know, past the first prototype stage. and so that was another aspect of why I was very interested in joining Agility.

And then the third thing was, you know, one of the things I've always been interested in is mobile manipulation. I mean, at Fetch we had a mobile manipulator, but also as we worked in the AMR market, it was very clear that there's a whole set of tasks that are better suited towards mobile manipulation and not just mobile.

And I thought that that would be an interesting direction to go. And if you're wondering why I didn't go and do more mobile, well, I had a non compete. So I couldn't just go off and do another AMR company.

[00:07:36] Audrow Nash: That's very funny. How long does, I mean, I'm just curious about the non competes. How long does that last?

Is it like for five years you can't be in an AMR company?

[00:07:44] Melonee Wise: Yeah, so my non compete, because I was a key person in a material transaction, as they call it, my non compete is three years from the date of acquisition.

[00:07:56] Audrow Nash: Gotcha. That's pretty interesting as a constraint for how to pick the next robotics company.

[00:08:02] Melonee Wise: Well, it is a material constraint, you know,

[00:08:07] Audrow Nash: Yeah, for sure. Okay. So then because of this, I guess, because of the constraint, then Digit having legs makes it so that it's a different product in a sense. And so the non compete is not valid for this kind of thing?

[00:08:24] Melonee Wise: Yeah, because my non compete was very specific to autonomous mobile robots, so AMRs, and didn't apply to other types of mobile manipulation technology.

[00:08:37] Audrow Nash: Okay, and so I interviewed Jonathan Hurst a number of years ago. Is he one of the founders of Agility? And so he's a professor at Oregon State, right? Who was deeply involved. Yeah, one of the founders and everything. And so when we talked, Agility Robotics was really about the last hundred feet of delivery, this kind of thing, so like going from the FedEx truck to the curb, I guess curb to doorstep or something.

And I understand from talking with you earlier, I think at ROSCon or some other time, that it's pivoted just a little bit. Can you tell me kind of what market you guys are going into, and also why not the last little bit of delivery, this kind of thing?

Pivoting from curb to doorstep delivery

[00:09:31] Melonee Wise: So, like all startups, our product intentions have evolved.

And when you look at last mile delivery, or the last hundred feet of delivery, let's call it, there's a lot of challenges that go beyond the robotics technology. Safety, compliance, weather, many of the challenges that you see even in the autonomous car market, Digit would have to face if it was going to do the last hundred feet.

And you're seeing today that, you know, a lot of companies have been successful in indoor semi structured environments. And before I even got to Agility, they were already converging on product market fit and product alignment that was more geared towards indoor material handling. And since I joined, I've been working with the team to really get us focused down on a set of solutions that really play to Digit's strengths and are scalable and repeatable within our customer sites.

And this is something that, you know, many startups have done in the AMR space, Locus, Fetch, OTTO Motors, we all ended up getting into kind of a set of repeatable workflows that our customers really were excited about. And we're doing the same thing at Agility.

[00:11:14] Audrow Nash: Hell yeah, and so that workflow for Agility is the moving totes.

Is that correct?

[00:11:20] Melonee Wise: Yeah. And there's tons of workflows in the warehouse that involve moving totes.

Audrow Nash: Oh, hell yeah. And then, so, I mean, like, just thinking about humanoids, a thing that strikes me is, and I'd love to hear kind of the why, why Digit is a very good fit for this, but, like, I imagine a humanoid is going to be more expensive than a robot with a wheeled base, walking is a challenge, probably limits the payload, um, why a humanoid form factor for this kind of task?

Why not just a mobile base with a big pinch, like a two foot pinch gripper, that could just grab the totes and move on?

Why Humanoids?

[00:12:07] Melonee Wise: So as someone who spent a lot of time building mobile robots, one of the things that you'll find is there's a lot of companies, including Fetch and OTTO Motors, who built base platforms and then got into the business of building lots of accessories, right?

You can see that with Fetch, Geek+, OTTO Motors, and even Locus now with their acquisition of Waypoint, right? Whatever. But the thing is that when you look at it, the accessories start to eat into the payload, right? So, like, if you look at a lot of autonomous mobile robots, right?

Like, everyone wants to put a hundred kilograms or more on top of these things, but every bit of shelving or cart or whatever you put on there reduces the payload that you can put on. And then the next thing is, say you want to put a hundred kilograms of, you know, e-commerce goods on there. Now it usually gets to some conveyor endpoint, and they typically want you to push the tote off onto the conveyor.

But now you have to build the conveyor, uh, you know, tooling, and typically they want to put in multiple bins. And so now you need, like, multi-layer conveyors.

And then you end up in this situation where you're building all of this mechanism to do these, like, really complicated things. And all at the same time, you're trying to battle the other problem of having a small footprint, because most warehouses are not meant to have, you know, 10 foot aisles. They're meant to have relatively narrow aisles, because people are relatively narrow. And you're also fighting against stability.

So, as you put mass higher on a very small platform, it wants to tip a lot more. And so one of the nice things about legged or dynamically balancing systems is as you reach higher, you can do other things with your stability platform to enable you to reach and pull weight from, you know, kind of outstretched positions back into your footprint.

And so one of the advantages of having a dynamically balancing system or a bipedal robot is you can have a relatively small footprint, you can reach relatively high, and you can carry a relatively competitive payload.

[00:14:56] Audrow Nash: Very cool. Yeah, it's interesting. I guess the alternative of an AMR has a lot of trade offs that you run into, and so a humanoid is very flexible in what it can do.

You can change the dynamics. It's almost like these environments were made for people, which is very interesting, and so you can kind of leverage that. What do you think about that perspective, where it's like one of the big benefits of a humanoid form factor is that the majority of the world and a lot of the infrastructure was built around people?

So the aisles in the warehouses are thin so that a person can walk through, not a big robot carrier with a huge footprint. What do you think of that idea?

[00:15:43] Melonee Wise: Yeah. I mean, that's just the reality of the world, right? And the thing is that it's very hard and costly to change over these environments, the infrastructure, right?

And our customers are very targeted at return on investment in under two years. And so remodeling an entire warehouse for a robot typically puts you in the 5 to 10 year return on investment timeframe. But if you can drop a robot right in, your return on investment is pretty rapid.

How does Digit compare to people?

[00:16:18] Audrow Nash: That's awesome. And so, how does Digit compare to people in terms of speed of doing tasks or this kind of thing?

Or what's, maybe it's not even a fair comparison, but how do you think about that?

[00:16:32] Melonee Wise: Yeah, so typically it's not about direct speed comparison, it's about throughput comparison. Because what you'll see when people do tasks is they do a lot of compound, rapid tasks, and then they wait around a lot of the time.

They do. People are really good.

[00:16:55] Audrow Nash: So it's like the tortoise and the hare, yeah, kind of thing. So a person goes and does, like, six things at once. Yeah. And then they just, like, take a 15 minute break kind of thing, I guess. Whereas the robot can be doing, like, one every three minutes or something like that.

Yeah.

[00:17:08] Melonee Wise: You find, if you go and watch the way people do a standard activity, it's very bursty, where they have lots of activity and then no activity. And so what our customers really care about is total activity over some timeframe. The other thing that you see is people take breaks, right?

Like, people over the course of an eight hour day take at least one hour of breaks. And so that's a lot of time for a robot to catch up in as well, um, in terms of total utilization and total throughput.

[00:17:53] Audrow Nash: What, on the order of, how does it compare? So if you consider a long period of hours, I guess long enough so that the breaks and everything average out, what's kind of the ratio of throughput for Digit versus, say, a human doing a similar role?

[00:18:12] Melonee Wise: Yeah, so we don't have, I would say, a large enough data set to make any direct claims on that.

[00:18:23] Audrow Nash: Can you give any guesses? Like, is it 1 to 1, is it?

[00:18:28] Melonee Wise: Yeah, 1 to 2, I mean. I would say that as we continue to go into the field, we are at parity or slightly better over the long run in the use cases we're deployed in.

So like, obviously there are some high speed activities that Digit most likely will not be doing anytime soon, but in the use cases that our customers have us targeted at, we are at parity or slightly better.

Ethics of humanoid robots

[00:19:01] Audrow Nash: That's amazing. Oh yeah. And then, how do you think of, like, while we're on the topic of people doing these jobs, I hosted a space on X, and so we talked a bit about this, kind of about humanoids, as, like, food for thought for this interview.

One of the concerns brought up was kind of the ethical one of having people moved, like basically replacing people in different jobs. And I brought up labor shortages and things like this, but I'd love to hear your perspective on kind of the ethics of robots, especially humanoid robots, and kind of the complexities there.

Yeah.

[00:19:42] Melonee Wise: So let's separate some of the concerns. So let's start with just labor in general. When I first started in this industry, the logistics and manufacturing industry that robotics kind of focused on, in the 2014 to 2016 timeframe, there were about 600,000 jobs available. And in that time, all of the robotics companies that you know and love have been deploying robots

into that environment

[00:20:19] Audrow Nash: and as fast as they can,

[00:20:20] Melonee Wise: As fast as they can. I mean, I don't know if you've seen the latest numbers from Amazon with their Kiva fleet. It's like 750,000 robots they've deployed. Yeah. Remarkable. So, since then, the labor gap has grown to a million unfulfilled jobs.

So we have been throwing robots at the problem as fast as we can, and the labor shortage has grown by 400,000 jobs. So that's something, food for thought. However, the thing is

[00:21:03] Audrow Nash: 400,000, and since when did you say that? That's since 2014? Between 2016 and today. That's bonkers. Oh my goodness.

[00:21:10] Melonee Wise: Yeah, just look at the Bureau of Labor Statistics.

[00:21:12] Audrow Nash: I think it's the, is it the baby boomers retiring? Is that one of the forces on this?

[00:21:19] Melonee Wise: It's the aging labor population. It's a lack of interest in the jobs. It's a wage pressure problem, right? And it's also just, you know, I'm one of them and you are one of them. Our parents told us that when we would grow up, we were going to be special flowers and go to college and do whatever our hopes and dreams were.

I don't know about you, but none of my hopes and dreams were to work in a warehouse.

[00:22:01] Audrow Nash: That is, what a way to put it. Yeah. But yes, very true.

[00:22:05] Melonee Wise: That's one aspect of it. But I think, you know, the flip side of it is, as time approaches infinity and robots approach infinity, we eventually are going to have to have a conversation about the nature of certain jobs and whether people do those jobs, right?

Technology is continuing to prove that it can create more jobs. Less than six months ago, or nine months ago at this point, no one had ever heard of a prompt engineer. And now it's, like, one of the more random jobs out there for the kind of, quote unquote, artificial intelligence economy that we have. So technology is creating jobs.

The problem is whether the people who are currently doing these other jobs have the skill set, the training, and the capability to move into these new labor positions. And the problem is, that's a socio political issue in large respect. There is some onus on robotics professionals to develop tools that are easier to use

and that can be cross trained from someone who's doing a warehousing job, but we've got a long way to go. I mean, most people struggle to use their iPhones and web browsers. And we're talking about, like, robots are basically iPhones and web browsers on steroids with legs and arms. And so, although technology is going to continue to create more jobs, we have a problem of helping people transition into those jobs long term.

And eventually, the definition of work is going to change. And, you know, this is a hard one for me, because from a personal perspective, I believe in universal basic income, and I believe that the way we solve this problem is changing the way we view work, and creating a social safety net, and then creating a basis for everyone to, you know, live.

Yeah. To live. And if they want more, they can earn more. Like, we create an economy that allows for that. And that doesn't preclude capitalism. It just, yeah, it enables a basis for living.

[00:24:48] Audrow Nash: It raises the bottom, but you can still be super ambitious. Yeah. Yeah, I know, I feel like that is such a complex issue, and I don't know where to go on it.

Because I've heard sides for everything. What do you think, so related to all the prompt engineers and stuff, I feel like we all thought all the blue collar jobs were going to be automated, and then here comes ChatGPT, and it looks like a lot of the white collar jobs are going to be automated.

Cause, and then that would be interesting, because for the people that are doing fairly sophisticated work, it's like, okay, now go learn how to be a plumber or electrician from being a lawyer or something like this. There was a very funny South Park on this a little bit ago. What do you think about that?

Like, what are your thoughts on this in general, all the AI and knowledge workers and automation in general?

LLMs, automation, and white collar jobs

[00:25:53] Melonee Wise: Yeah, I think that it has come a long way. It's very interesting. I think that there is a class of worker out there today that is going to struggle, specifically in the area of knowledge consolidation.

And so that's where you're seeing a lot of the pain right now. Like, you know, you can go to ChatGPT and ask for basically a travel schedule, which challenges travel agents. You can go ask for consolidation of case law, you know, legal opinions for law arguments, right? And in my mind, a lot of what ChatGPT is good at right now is consolidation of knowledge.

And so any job that basically is that activity, yeah, there's a problem. But what we are seeing is ChatGPT struggles with creative tasks, algorithmic tasks in some ways, ironically enough. Well, complex algorithmic tasks, anything that needs intuition or intuitive thought, anything that needs specialized knowledge.

And so, yeah, it's hollowing out the center, but I don't think that it is getting to the point where it's challenging the specialized knowledge sets, or anything that requires a human interface, or reasoning contextually in a social framework, right? Like, a lot of what lawyers and doctors and other people do is make decisions based not only on case law.

But also on the social context, the social framework, and the emotions of what is happening at the time. You know, can you imagine ChatGPT diagnosing you with cancer and just reading it out?

[00:28:00] Audrow Nash: Like, oh my god, right, that little prompt pops up and it's like, congratulations! With a sad face emoji.

[00:28:06] Melonee Wise: Yeah, you have cancer.

So I think that, well, we're not there yet. It might get there. It depends on how we evolve as a society and how comfortable we get with these kinds of interactions. You can see some of that in cultures that are more robot-facing or more technology-adopting. But where we're seeing a lot of the pain right now is in the knowledge consolidation kind of work group, where, yeah, if that's what your job is, ChatGPT can do it.

It's basically a database searching tool.

[00:28:49] Audrow Nash: It is, and it just formats the information for you. I was reflecting on this recently, and the conclusion I arrived at was that there's knowledge, like, how do you do stuff? And then there's, what do I do?

And it's not very good at the what. Like, if I'm solving a thorny programming problem, it's not much help if I describe it and ask what I can do; it's super generic advice. But if I'm having to use a new, large framework at work, and I don't know how to do things in it, I can be like, I know exactly what I need to do.

What should I do? And it tells me very well. So I'm getting up and running probably like 10 times faster than I would before this. So it's like knowledge is cheap, but the intuition on how to proceed, understanding all the complexities or nuances of what you're doing, and common sense checking it.

It seems like there's a ways to go there, in my opinion.

[00:29:57] Melonee Wise: Well, and I think one of the other things is you have to make sure to check it, right? Because it does storytell and make up information. Actually, one of the funnier things my co-founders and I did was ask ChatGPT who founded Fetch Robotics, and it didn't get it right at all.

And we were wondering about this, because two of my co-founders are very under the radar. They don't have much social media, they don't have much about themselves online, and it didn't even know who they were. So it's one of these things where it has only the information that it has access to, right?

And so, when I say knowledge consolidation, that's what it's doing. It's consolidating knowledge that's available and that it has access to.

[00:30:53] Audrow Nash: Yeah, I think that's a big point. I really like that way of phrasing it: knowledge consolidation. Let's see. Going back to Digit, are you guys attempting to put an LLM or anything on Digit? Or I guess, what do you think about LLMs and their use in robotics, and are you guys interested in trying this at Agility?

[00:31:26] Melonee Wise: Yeah, so yes, we have an innovation group within Agility, and we have done some pretty interesting demos with LLMs, largely showing how, if you assume that Digit has a set of skills, like walk, pick up, place, and you have that interface of skills, then you can start asking Digit, through a large language model, to do arbitrary tasks, as long as they can be broken down into those composite skills.

So, very recently we did a demo day in San Francisco with a large group of interested parties, and one of the things that we showed off was a large language model demo. Someone was like, take the box that is the color of Darth Vader's lightsaber and put it on the podium labeled with the number of the movie that Darth Vader appeared in for the first time. And it did it.

And like, there's so much to unpack there in so many ways. Basically, Digit was in an environment with some boxes and some podiums that were numbered and labeled, and they had colors and other iconography on them. And so the large language model basically unpacked that all into an action tree, you know, a behavior tree, and then, using the skills that Digit had, it went and executed.
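
The pattern described here can be sketched roughly as follows. Everything in this sketch is invented for illustration — the skill names, the plan format, and especially the stubbed-out LLM call — but it shows the core idea: the model's only job is to turn a natural-language instruction into a sequence of calls drawn from a fixed, registered skill set, which the robot then executes.

```python
SKILLS = {}

def skill(fn):
    """Register a primitive the planner is allowed to call."""
    SKILLS[fn.__name__] = fn
    return fn

log = []  # record of executed actions, for inspection

@skill
def walk_to(target):
    log.append(f"walk_to:{target}")

@skill
def pick_up(obj):
    log.append(f"pick_up:{obj}")

@skill
def place_on(target):
    log.append(f"place_on:{target}")

def plan_from_llm(instruction):
    # Stand-in for the LLM call. In the real system the model would
    # resolve the riddle ("the color of Darth Vader's lightsaber" -> red)
    # and emit steps restricted to the registered skill names.
    return [
        ("walk_to", "red_box"),
        ("pick_up", "red_box"),
        ("walk_to", "podium_4"),
        ("place_on", "podium_4"),
    ]

def execute(plan):
    for name, arg in plan:
        SKILLS[name](arg)  # only registered skills can ever run

execute(plan_from_llm("take the red box to podium 4"))
```

Keeping the skill set as the only interface is what makes this tractable: the LLM never controls motors directly, it only composes primitives the robot already knows how to do.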

[00:33:22] Audrow Nash: Mm hmm. That's very interesting, and it's so funny that now you can give your robot instructions by programming it with a riddle it has to solve, this kind of thing. You can be esoteric and whatever. It's so funny. What do you think about the impact of LLMs in robotics? Do you think it's going to be very useful for high level control, or is it kind of a flash in the pan where we can do some neat demos with it? Or is it just user interface changes, where you can talk to it and it's a bit easier to do action selection from there?

How do you think it sits?

[00:34:06] Melonee Wise: I think we'll initially see it, you know, because one of the things that we are building at Agility is kind of this skill framework for Digit. So as we start doing more and more with our customers, probably the first place you'll see us potentially using these tools is for our customers to describe in natural language what they want

the robot to do, from a workflow perspective. It's like, I want you to move this tote from the put wall to the conveyor. If the tote has an error, I would like you to put it in the, quote unquote, hospital area of the warehouse. But, you know,

[00:34:56] Audrow Nash: So it's super fast templating of actions effectively, like building a behavior tree.

[00:35:02] Melonee Wise: You're taking the natural language of someone who talks in business logic, and making it easy for them to describe that without having to know, specifically, okay, Digit has to walk over here and identify the tote, and then Digit has to grab the tote. All of that will basically be derived from the natural language description.

[00:35:29] Audrow Nash: That's super cool. Yeah, I could see that kind of thing speeding up robotics adoption significantly, because you don't have to have someone who's an expert in the field, per se, translate it. Like, an expert in manufacturing or logistics goes and takes all the words, relates them to what they mean, then programs it into the robot.

So you'd need someone who's good at both, or who can communicate with teams that do both. And now it's like, ah, you just tell the robot, and it translates it and does it. That's very interesting. Okay, so that's kind of the short term. Do you think, eventually, it's going to be a bunch of robots all using LLMs for high level control, where they'll just have action sets that they can do, and they'll get the high level commands

from the LLMs and they'll just do it? Or what do you imagine?

[00:36:29] Melonee Wise: I'm guessing that what you'll have is a natural language description that you can give, and then you basically evolve it until it's what you want, especially in manufacturing and logistics. And then you say freeze, and then run forever,

because people want repeatable, deterministic labor from their robots. They don't want it to run the LLM every time you give it an instruction and be like, well, today I decided I'm going to take a tour around the warehouse before I do this.
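
This "evolve it, freeze it, run forever" flow might be sketched like this. The workflow steps, the JSON snapshot format, and the stubbed authoring step are all invented for illustration; the point is the design choice that the LLM is only consulted while the workflow is being authored, so at runtime the robot replays a frozen plan and every run is identical.

```python
import json

def author_workflow(description):
    # LLM-assisted authoring step, stubbed out here: turn a business-logic
    # description into steps over the robot's skill set, iterated with the
    # customer until it matches what they want.
    return [
        {"skill": "pick", "object": "tote"},
        {"skill": "walk", "to": "conveyor"},
        {"skill": "place", "on": "conveyor"},
    ]

def freeze(plan):
    # Snapshot the approved plan; the LLM is never consulted again.
    return json.dumps(plan)

def run_frozen(frozen):
    # Deterministic replay: same frozen plan in, same actions out.
    return [step["skill"] for step in json.loads(frozen)]

frozen = freeze(author_workflow(
    "move the tote from the put wall to the conveyor"))
```

Here `run_frozen(frozen)` returns the same action sequence on every call, which is the repeatability industrial customers are asking for.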

[00:37:03] Audrow Nash: Like, I'll move this box over here, and every single time it has to decide that, this kind of thing. That'd be pretty silly. Okay.

[00:37:12] Melonee Wise: But you could believe that that would be a normal thing in, like, a home robot, right? And so it's really a domain specific behavior and way of looking at the problem. It's just that in our domain, we care about stability, determinism, repeatability, you know, throughput, reliability, all of those things.

Robot factory + suppliers in North America

[00:37:36] Audrow Nash: Yeah, for sure. Let's see. So going back to Digit, one thing that I saw that was very interesting on the website is you guys have recently opened a robot factory, like a big humanoid robot factory. Will you tell me a bit about that?

[00:37:56] Melonee Wise: Yeah. So we haven't opened it yet. We've kicked off construction of it.

You know, it's opening in the spring, and it's designed to produce, long term, up to 10,000 robots, in anticipation of the customer contracts that we currently have. Mm hmm.

[00:38:23] Audrow Nash: That's awesome. And so what parts of making the robot will be done there? Will it be a lot of assembly? Okay, and where do you get the parts from otherwise?

Like the motors, the sensors, where are they all coming from? I guess it's a diverse

[00:38:43] Melonee Wise: Yeah, it's a diverse set of suppliers and fabricators, largely in North America.

[00:38:52] Audrow Nash: Wow, hell yeah. Is that, I guess that's probably by choice, to have it mostly in North America?

[00:39:00] Melonee Wise: You know, we're making sure that we are choosing suppliers and vendors that are not in, I guess, conflicted geopolitical regions.

We're trying our best to do that, and we're trying to make sure that we have a good diversity of suppliers so that we're not single-sourced. But a lot of the North American focus is because our initial market is North America.

[00:39:35] Audrow Nash: Very interesting. Can you tell, I mean, we talked about it in, I think, our last interview, about just

There's actually a good amount of manufacturing here in the US, or in North America. One of the things, in another space, so I was talking to Rgs, and he was saying that the US is really good for, and if I'm mistaken in anything, I apologize in advance, but what I understood was the US is very good at very specialized,

like, high precision manufacturing, but when you get more to volume, it's tricky, 'cause it doesn't seem that there's that much of that. But what's your perspective on this?

[00:40:23] Melonee Wise: Yeah. I mean, I think that that is relatively true, except for in automotive. I think, if you look at it, it really comes down to the cost of bringing the product back into the United States, in what volume, and what kind of import tariffs and things like that you might encounter,

for very large, high value goods. For smaller electronics, high value electronics, you know, things in the $1,000 to $5,000 range, one of the things that we basically run into in the United States is just the labor pool. Like, we're already talking about logistics and manufacturing having a million open jobs right now in the United States, right?

Can you imagine producing the iPhone in the United States? Like, I saw an estimate that you would have to basically build a whole town with millions of people to support iPhone production in the United States. And so one of the big things that actually precludes us from building certain things, especially at high volumes, in the United States is we just don't have the infrastructure and the people to do it,

which is different in other parts of the world. Like, if you look at Shenzhen in China, they produce so much there. You could walk down the street and go to like 14 injection molders in the span of five city blocks, which is something you would never find in the United States.

And so there's just this contrast in suppliers, the density of them, and the sheer amount of labor that's available to support some of that manufacturing.

[00:42:20] Audrow Nash: Yeah, so where are most of your manufacturers in North America? Like, where in the US are they? In Mexico? Maybe some in Canada? Where are you seeing most of them?

[00:42:35] Melonee Wise: I can't speak to all of that, because that's not my job at Agility, but we do use several large North American machine shops and other vendors for making our machined parts and things like that, and they're located in the United States.

Reestablishing a trade class in the US

[00:42:56] Audrow Nash: Gotcha. One thing that has been interesting is the US, I think, is investing pretty significantly in reshoring a lot of manufacturing.

So I think there's, I don't know, trillions of dollars or billions of dollars, some huge amount of money in flight. And that infrastructure is going to take, I don't know, 10 years or something to spin up. But how do you imagine this changing in, say, 10 years, 15 years, if you have opinions on what's been going on?

[00:43:31] Melonee Wise: I think that in order to achieve that, we have to reestablish our trade class as a country, and I think that's something that a lot of people are working on. But I think that there are not strong incentives right now for people to go become tradesmen, to become skilled electricians or skilled mold makers, welders.

Yeah. Yeah. And so it kind of creates this vacuum. Even if we wanted to reshore it all, could we reshore it all? Because over the last 20 to 50 years, we have tended towards a higher and higher higher-education population that is trending away from trade based jobs, which leaves a gap for trying to reshore manufacturing in the United States.

[00:44:36] Audrow Nash: Yeah. It's such an interesting thing. So, on the Sense Think Act Podcast, I talked with AMT, the Association for Manufacturing Technology, and a few other manufacturing organizations, and some of the things that stuck out to me: community college is a wonderful way to get a trades education, and you can train for six months or something fairly short and immediately get a high paying job from that.

Like, not CEO high paying, but enough to raise a family, probably a couple hundred thousand or more.

[00:45:15] Melonee Wise: Yeah, because there are so few tradespeople, right? And they're trying more and more to get the word out. I know AMT is, I mean, they're really trying. A3 is too; they're trying to pull more people into being, you know, robot operators, for example.

That's a very high paying job, but it's about getting young people interested in it, them seeing the value in it, and seeing it as a career path that is meaningful.

[00:45:46] Audrow Nash: Do you think, so a thing that I have been aware of, and this definitely doesn't speak for everyone, but I've been seeing a lot of people thinking universities are not a terribly good deal for this kind of thing.

So some people are saying things like, I won't have my kids go to college, because maybe trades and this kind of thing could be a good alternative, and a lot of people just end up getting a lot of debt. But do you think society is going to move back into a better balance of trades versus everyone going to a four year college, with not as many exceptions, this kind of thing?

[00:46:32] Melonee Wise: I don't know. I doubt it, if I were to guess, because in the United States we have a very manifest destiny approach to life, right? What does that mean? Well, if you have a desire, you should go get it, right? And so we raise our children, and our cultural bias is towards achieving what we want, living our dreams.

Passion, passion, passion. Yeah. And until we start reframing some of these things as something that you can be passionate about. I mean, engineering, for example, has had a really bad rap for a long time, right? You go and look at TV, and every engineer is homely and weird and doesn't have a girlfriend and, you know, sits at home on their computer all the time, and everyone's been told that they have to be the smartest person in the world.

And what have we had for, you know, the last 20 years? A dearth of engineers. And so we've been trying to change it as a community, trying to help people understand engineering isn't just about being smart. It's about being creative, it's about problem solving. You're not going to be Sheldon. Maybe you want to be, but you don't have to be Sheldon if you don't want to be.

And that is something that, societally, we have to change about any of these roles that we want people to go into, whether it's engineering, or a trades job, or something like that. But if young people don't believe that they're either qualified for the role, in the case of engineering, or that it's something to aspire to, in the case of some trades, you know, how do we convince people that they should aspire to it?

And so a lot of it is reframing what people should be aspiring to, and raising our children to believe that it's okay to do these things. But if you spend all your time telling your kids the best thing is to change the world,

[00:48:56] Audrow Nash: Yeah, these kinds of things.

[00:48:58] Melonee Wise: Yeah, or go to university, then they're going to feel like they're failing if they don't.

[00:49:04] Audrow Nash: Yeah, it's such an interesting problem, in a sense. Do you have any ideas about what a good solution is? It's to reframe it, but how would you reframe it? I know clearly we've been struggling with this as a community, but what do you think?

[00:49:21] Melonee Wise: I think that's a hard one, because it really comes down to what people value, right?

Like, do they value stability? Do they value, you know, wealth? Do they value career growth? Those are stories you have to tell. But I also think that one of the reasons people have historically not wanted to take trade jobs is because they have limited career growth at some point, and limited wage growth, for example.

And this comes back to some of the question before, which is, you know, universal basic income. What does it mean to be successful? How do you go and reach beyond and get more? And we don't have great stories there.

[00:50:25] Audrow Nash: Yeah, so coming up with some sort of good story, so that people can just chill and do a job, and it can be a meaningful one, say a trade. Or, if you want to go really big, I mean, you with Fetch, for example, I imagine that was a hard path to choose. Like, you did very well, and everything is great because of it.

But it was also probably very difficult, I imagine.

[00:50:53] Melonee Wise: And it was high risk, right? Yeah. High risk. But I think the other thing that has changed very much in the last 50 or 60 years is the disappearance of the pension, which has also had a very big impact, I think, on trade adoption.

[00:51:12] Audrow Nash: I don't, so pensions are like the retirement accounts that your company contributes to?

[00:51:19] Melonee Wise: No, no. With a pension, you got 80 percent of the salary that you made in your last year of work for the rest of your life. Oh wow. Oh wow. Yeah.

[00:51:35] Audrow Nash: So why did they disappear? They're expensive. I guess that makes sense.

[00:51:41] Melonee Wise: Yeah. Go look up what happened with GM. They had a very large pension program, and it almost bankrupted the company.

[00:51:48] Audrow Nash: So it's kind of like, I don't know, you hear about the fall of different countries, and they eventually end up way over-leveraged financially to their citizens because of generous retirement plans. This kind of happened with pensions, and so we stepped back from them, but it removes some incentive to go do trade jobs that had good pensions.

Interesting.

[00:52:15] Melonee Wise: I mean, there are a couple of jobs still left in the world that still get pensions. Teachers tend to get pensions. Government jobs get pensions. But it used to be that tradespeople got pensions. Huh.

[00:52:30] Audrow Nash: Would the, it's the pension, the pension is paid by the employer, correct? So yeah, that does sound expensive for this kind of thing.

And also it's interesting, because, like, teachers, I have family who are teachers or have been teachers, and they're not paid very fairly, not very much. So it's like, you give them a pension, but you're only paying them maybe a third of what they probably should be paid, at least.

Yeah, you get 80 percent of that, but it's still super, super low. What an interesting thing. So, back to robots, with Digit. One thing that struck me from looking at Digit in some of the videos is that it doesn't really have hands. It has like flippers where we have fingers, that it can use to pull things in and, I guess, grab things, and then it probably pinches.

I don't remember seeing thumbs. No. Tell me a bit about that.

Digit’s hands

[00:53:45] Melonee Wise: So, that's one instantiation of Digit's hands.

[00:53:54] Audrow Nash: Yeah, very modular, I suppose.

[00:53:57] Melonee Wise: Yeah. You know, when you look at where we're going with the end effectors of Digit, as we evolve the design, we'll have kind of an interchange point, like all

robots, where we will be able to change out the end effectors of the robot based on the task. It's what industrial robots have been doing for a long time. We've tried to focus on having relatively simplistic grippers, or I guess end effectors I should say, to start, that solve a large swath of problems with the simplest design.

And that's one approach that we've taken for doing tote manipulation, and it's fairly robust for some of the tasks that we've been focused on. But we are right now going to add a different type of gripper to Digit's repertoire for handling totes. So it's, you know, we're trying to create MVP products, right?

And so we are not trying to solve and swallow all of the complexity at once. Digit has the ability to have other end effectors. We will make other end effectors for Digit. But our priority is not to make high dexterity hands, because honestly, I haven't seen a problem yet that we need high dexterity hands for, for the set of use cases that we're tackling right now.

[00:55:44] Audrow Nash: Gotcha. So it's just unnecessary complexity, and you can get most of the way there with very simple pinching grippers, kind of thing. Yeah. So, to me, and this kind of goes back even before you were involved with Agility, it seems like it's, as you said, an MVP, a minimum product that can do something useful and find market fit, this kind of thing.

What do you think of the several companies entering the humanoid space? They seem to be making full fledged humanoids without, to my knowledge, too much of an application in mind for them, though maybe they do have one. What are your thoughts on some of the other humanoid

robot companies or humanoid initiatives?

Thoughts on humanoids

[00:56:43] Melonee Wise: Yeah, so if you look at them, some of them are very impressive and have been around for a long time. Like, look at Boston Dynamics.

[00:56:49] Audrow Nash: Mm-hmm. Oh yeah. I mean, super impressive. Some of them doing backflips and stuff, just bonkers.

[00:56:57] Melonee Wise: Yeah, and well, remember, they started from a very different place, right?

They started a long time ago with PETMAN, and they were testing hazmat suits, right? Yeah. And they're hydraulic based, so they have different challenges. Super powerful. Yeah. But I think that when you look at a lot of the stuff that has sprung up recently, a lot of it is very startup heavy. They're still even just trying to figure out why they're building it.

Many of them haven't declared a market or, my impression too, shown working robots. I mean, let's be honest, there are a lot of videos, not a lot of reality. And I'm not trying to criticize, I'm just saying that, you know, I went through this in the early days of the AMR market.

There were like five companies who were actually building hardware, and everyone else was showing really cool videos of hardware that they were going to build. Right. And we're in that stage right now with mobile manipulation robots. And the thing that I find interesting, though, is some of the players that are getting into the game recently are kind of funny.

They're just throwing a lot of money at the problem and hiring any engineer they can. And they're like, you roboticists, you guys are taking too long. It's like, what the hell? As a community, we've been working on these problems for a very long time. And magically thinking that you can throw a ton of engineers at it and get good results is, I don't know.

It's The Mythical Man-Month, basically, and we'll see. But I think that there are some really interesting competitors out there, and I'm excited about what they're building, because there's plenty of room. I mean, look at what happened in the AMR market. Many companies, yeah.

Yeah, there's Locus, OTTO, MiR, Fetch. You know, we all did very, very well.

[00:59:24] Audrow Nash: So you're thinking of it like AMRs, I suppose, seeing all these humanoid companies. I was feeling like it was a bit like autonomous vehicles, with the investor interest and the hype cycles around it.

It feels a lot like the early days of autonomous cars, in like 2014, when people were saying we're going to have self driving cars in four years or something like that.

[00:59:57] Melonee Wise: They haven't put enough money in for you to believe it's like autonomous cars yet. It's more like AMRs. Gotcha. Yeah, the total amount of money in mobile manipulation right now, in humanoids, quote unquote, is still closer to the AMR space.

Like the total dollars put in from venture.

[01:00:16] Audrow Nash: How much would that be, out of curiosity, if you have a good guess? Is it like tens of billions, or?

[01:00:23] Melonee Wise: It's under 5 to 10 billion. That's AMRs, and then over 5 to 10 billion is automotive.

[01:00:33] Audrow Nash: Over 5 to 10 billion. How much more over? Is it like 50 billion, or?

[01:00:37] Melonee Wise: I don't know. I'm guessing it's in that range, because, like, wasn't there an autonomous car company recently that got some massive, insane amount of money? What was it? I thought it was 80, no, 8 billion or something insane like that?

[01:01:09] Audrow Nash: That's quite a lot. I wonder which company that was.

[01:01:15] Melonee Wise: Yeah, I don't know, let me see

[01:01:19] Audrow Nash: if I can. Look if you like. Yeah, we have time. One of the wonderful things about these long form things is it doesn't really matter, and I'll cut it out if we wait too long.

[01:01:30] Melonee Wise: Yeah, I'm trying to find it.

[01:01:37] Audrow Nash: So we couldn't find the company that had the high valuation. But, thinking more about humanoids, what do you imagine is a timeline for them? You guys are working on them in the logistics and manufacturing space. What do you imagine the progression is? Like, when will I see one in a grocery store, or in my day to day life? Or will they be relegated mostly to manufacturing and logistics for a while?

Okay, I mean, but there seems to be a lot of excitement, where people are like, in two years, there's going to be one in your home.

Timeline for humanoids

[01:02:28] Melonee Wise: Yeah. Okay, Audrow, let's reflect. Yes. In 2004, they said that everyone would have an autonomous car in 10 years. Oh, man. So, if you derated that and said autonomous cars would be part of your everyday life, do you feel that's true today?

Because I don't. Okay. So now, today, we're starting with, you know, humanoid bipedal mobile manipulation robots, right? I don't think that they'll be part of your daily life for another 20 or 30 years. I think we'll spend the next 10 years in industrial and light industrial environments. There's a lot of safety work that has to be done to get them out of the warehouse and into your house.

[01:03:41] Audrow Nash: And I feel like I have to ask: this doesn't get super accelerated because of LLMs? Or what are you thinking there?

[01:03:53] Melonee Wise: No, because it doesn't make the hardware any cheaper. It doesn't make the controls any easier. It makes the programming of them a little bit easier, but it doesn't solve some of the more fundamental problems.

Like, do LLMs make autonomous cars go faster? So why would they make bipedal navigating robots go any faster?

[01:04:19] Audrow Nash: Yeah. Now, a thought experiment: what if OpenAI reveals that they have made a superintelligent something or other, and it's like an Einstein for every single field all at once. How does that work?

Do we still have all these hard problems, and the timeline kind of stays unchanged, or what do you think?

[01:04:45] Melonee Wise: I think the timeline remains unchanged. And maybe that's naive of me, but, I mean, technology has been constantly progressing. And although ChatGPT is super interesting and showing very interesting progress, how much has it fundamentally changed your day to day in the last 12 months?

[01:05:12] Audrow Nash: It's a better Google, is what it's been to me.

[01:05:15] Melonee Wise: It's not, I mean, yes, people are very excited about it. It's very powerful in very specific ways, but it's not the, you know, singularity moment that, that everyone, it's nowhere near that, right? It's, it's like you said, it's a fancier Google right now. And I, I think though that, like, you know, some of the thoughts that Bill Gates put, Bill Gates put out about, like, personalized agents is kind of interesting.

but there's a lot of other things that we have to, to deal with. Or I'm like, what if, what if you had your own ChatGPT?

What if you had your own ChatGPT?

[01:06:04] Audrow Nash: Hmm, like they had a history and this kind of thing.

[01:06:07] Melonee Wise: That, you know, that was like, it took all of your data, all of it. It has everything about you: your medical records, everything, everything you ever wrote, everything you ever did, all of it.

And was like your own personal agent, and could be the business version of you and your home version of you, and did all of these things and could manage the complexities of your life. Super cool, right? Like that's probably the next thing that might happen with this kind of technology. It would be super cool.

Yeah. But there's so many problems we have to figure out in order for someone like me, and potentially you, to even want to give it all of our data. And then what, what do we do when people want to advertise to us with that data? And how do we set our own boundaries? And how do we deal with the complex social interactions that come out of that?

Like, okay, so you and I have personal assistants that are these agents, right? Yeah. And you, you say, you say to your agent, hey, see if Melonee wants to have dinner. Okay. And my agent is like, well, Melonee's already got plans with friends that we are both friends with. Oh yeah, yeah. But you weren't invited, right?

And so do you want my agent to go to your agent and be like, well, you know... and so how do you even keep secrets between agents? How do you define that? How do we, like... and so I think it's a super interesting space, but I think there's a lot of social fallout that we haven't thought about. And it's the same with all of HRI, right?

Like there's all this contextualized social interaction, and it's not just, you know, in quotes, social interaction. It's, it's highly contextualized to our, you know, region, our, our cultural backgrounds, things like that. And I don't know if we're ready, or we're like, we're at that point where the technology has that sophistication yet.

[01:08:14] Audrow Nash: that's a good point. Yeah, I guess. Yeah, there's a whole bunch of things that need to be worked out and you really have to see how people react. So this kind of thing can't really be rushed, I would imagine.

[01:08:25] Melonee Wise: Well, I mean, they'll always be the people at the forefront and they'll be learning all the painful lessons.

[01:08:31] Audrow Nash: Yep. Yeah, that distribution of like the early adopters and this kind of thing. What do you, so one thing that's been interesting to me, there's again, from a recent space, we had someone come on, that's like just about to graduate with a degree in computer science where they, feel like it's a hard.

Like, I think the feeling is because of things like chat GBT, that there's no reason to go into something like computer science because you're just going to get in a thing and be automated right away for this kind of thing. I think that's the feeling. I wanted to hear your thoughts on it. Like, do you think it's a good time to be a roboticist or,

Best time to be in robotics or worst time?

[01:09:15] Melonee Wise: I think it's a great time to be a roboticist.

This is the, probably the next 50 years are going to be like the heyday of robotics. Like of, of mobile robotics, mobile, mobile manipulation robotics. I think from 1960 to 2000 was probably the heyday of industrial robotic arms. 2000 to 2015 was probably the heyday of collaborative robotic arms. You know, 2014 till now and going forward is the heyday of autonomous mobile robots.

You know, I, I think that, I think that, that now is the time. I mean, I think that Willow Garage really kicked something off, and we're like, we're in it. So yeah, you should definitely become a roboticist. It's our time.

Advice for new roboticists

[01:10:11] Audrow Nash: That's my feeling too. I have a feeling that it is probably the best time so far to be a roboticist.

What do you, so for someone who's feeling lost, especially with all the advancing technology, what advice do you have for them? Like, how, how do they, how do they get a foothold in this world? And, I don't know, how do they do well?

[01:10:34] Melonee Wise: Learn to, yeah, learn to program, learn ROS.

[01:10:39] Audrow Nash: Hell yeah. The, the thing that I was thinking, it's, It's almost like everyone gets a bunch of junior programmers beneath them, for working. And it's like with JGPT, you get an assistant for this kind of thing. It's like everyone is kind of their own CEO, and you have a bunch of people to delegate to that are all AIs, is how I've been thinking of it a bit.

It's not so much that. Like, you still have a lot of autonomy in how you choose to move forward, and you don't have to do a lot of the grunt work, from these things. And like, I mean, I'm, I'm using Angular, which is a complex web framework made by Google, in my work for Intrinsic. And I am up and running super fast because of ChatGPT.

It's like, that's a, that's a real power there.

[01:11:27] Melonee Wise: Yeah. And I think the other thing is, is. I don't know what your early career space was like as an engineer, but there, there was this time early in my career where I was always afraid to ask a question, you know, and like, go bother someone who knew more than me.

And now you don't have to, you can bother ChatGPT, you know. Like, so your ability to ask questions and to fail, you know, or, or ask dumb questions, is totally enabled by having this knowledge consolidator.

[01:12:10] Audrow Nash: Yeah, I think that's a good point. I do think there's an interesting case there where it's actually not very good at answering questions about the bleeding edge of things.

And so it's very good at answering everything that's kind of in the, like, very well known space. But I wonder, I feel like that's something to be a little wary of, where if you're like, I don't know, how do I do this super hard thing, and it gives you a super generic answer. Like, talking to you or someone who's actually done very hard things is going to be so much more informative than just asking these same questions to ChatGPT.

[01:12:48] Melonee Wise: Sure, sure, but that's when you're in the mid cycle of your career. I was speaking, like, early stages of your career. You just got out of university and you're struggling with something that, you know, you feel like would be almost wasting your mentor's time, right?

[01:13:06] Audrow Nash: Yes, that is a very good point. Yeah, you could just bug ChatGPT.

Yeah, what an interesting thing. So we are coming to the end of the time. One of the things that I wanted to ask about, and we've just been talking about other things, but, so, Agility was doing Amazon trials with their robots. Yeah. How did that go? And tell me a bit about that. It's great.

Amazon trials

[01:13:32] Melonee Wise: We, you know, there were a whole bunch of, news media articles put out.

There's a bunch of videos. It's going really well. So we've been working with Amazon for quite a while, in, in some different applications. You saw one of the applications in some of the videos that were highlighted as part of their demo day. And now we're moving on to some other phases of the project, which are really exciting.

and we're continuing to work with them on, on these projects and excited to start deploying more robots with them.

[01:14:13] Audrow Nash: Yeah, that's so cool. Are they, how does it, I suppose you have other customers too, but they're one of the big ones and they're one of the, like, I mean, you said they have 750,000 robots, if I remember correctly, Kiva robots, which is bonkers.

So, I mean, there's clearly lots of potential to scale. I think that's really awesome, but you're, you're also working with other companies too. Oh, yeah. And all very same, similar use case at the moment, which is picking up those totes. Okay. Super cool. I'm glad that's going well. It's so cool. I, I really, it's exciting to see a humanoid robot that's doing a really practical job, in a sense, and making a lot of sense for the ROI of these, like justifying itself with a good return on investment for the companies that are investing in it. Because I think that was my big skepticism for the space: I think it'd be hard to get a good ROI for a lot of the more complex ones, at least initially.

[01:15:23] Melonee Wise: Yeah, but customers wouldn't be working with us if they didn't believe that there was a return on investment.

Agility Robotics in the next 2-5 years

[01:15:30] Audrow Nash: Hell yeah. Let's see. So what, what do you think is the future? Like, tell me the next two, five years for Agility. Where are you guys headed?

[01:15:42] Melonee Wise: Yeah. So over the next couple of years, we're going to be very focused on expanding our, let's call it skill set.

So, you know, as I was telling you, we look at Digit as a platform that has composable skills. And as we start working with more and more customers, we're going to be expanding the set of skills that Digit has. So that would be in the areas of, you know, tote manipulation, but also tote stacking, destacking, tote, tote wrangling, those types of things.

But then also moving into other types of containers, like corrugate boxes, palletization, depalletization. And so it's just looking at the, the space of, of activities in the warehouse and slowly branching out across a swath of similar activities within the warehouse. So there's a lot of processes that require taking some kind of container, whether it's a tote or a box, um, from a shelf or to a shelf.

From a conveyor or to a conveyor, from a cart or to a cart. and so now you've got all these skills. It's like, okay. And then now, if you know how to take something to and from a conveyor and to, and from a shelf and to, and from a cart, now you can go from a shelf to a conveyor, from a conveyor to a cart.

Right. And so we look at it as building up a composable skill space that then eventually can be deployed into different applications. And eventually, Digit has all the skills to form the basis for an app store for labor. And then you start looking at the, the workflows and the tasks that Digit can do.

And, you know, building out the next thing based on the skill set that Digit already has. And so as you start gathering all these skills, it's like any person: the more skills you can do, the more jobs you can do.
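The composability Melonee describes, pairwise to-and-from skills multiplying into full workflows, can be sketched as a toy example. The fixture names and functions below are invented for illustration and are not Agility's actual skill set:

```python
from itertools import permutations

# Base skills: the robot can pick FROM and place TO each of these fixtures.
fixtures = {"shelf", "conveyor", "cart"}

def derived_workflows(fixtures):
    """Every ordered (pick, place) pair over distinct fixtures is a
    workflow you get 'for free' once the base pick/place skills exist."""
    return {(src, dst) for src, dst in permutations(fixtures, 2)}

flows = derived_workflows(fixtures)
# 3 fixtures with pick and place skills yield 3 * 2 = 6 distinct workflows,
# e.g. shelf-to-conveyor or conveyor-to-cart, without teaching each one.
```

The point of the sketch is the combinatorics: each new fixture skill multiplies the workflow count rather than adding one job.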

[01:17:55] Audrow Nash: I like that. Yeah. It's a, it's a cool thing. You keep building the capabilities that keeps opening up applications that keeps letting you grow your market.

And then it's just, it's like a nice flywheel in a sense. And you said app store at some point like that. Yeah. Yeah. It's, it's so interesting that app stores are, like it's kind of, I guess maybe where

[01:18:15] Melonee Wise: we'll call it a skill store. Fine.

[01:18:17] Audrow Nash: Skill store. Yeah. Yeah. Skill store. But it's just, it's so interesting because I, in a lot of these interviews, I talk with them and then it's like the long term vision is to get to something like a, an app store or a skill store for this kind of thing.

And it makes a lot of sense because then you have the diversity of application, that, like it becomes generally useful. Do you, do you think that humanoid robots are going to become, like, will they be, it's a silly metaphor, but will they be like the spreadsheet of the computer age, where you have something flexible enough that it justifies places getting one?

And then from that, like you get it, it does its core application and now a bunch of people are buying it, but then you can also add other programs on it that also provide some value, like spreadsheets justifying computers.

Humanoids and spreadsheets? Is this the tipping point?

[01:19:16] Melonee Wise: I think so. I think, I think the thing is, in the industrial landscape, it's harder to do that because the primary motivator for a return on investment is, is like the primary work task. But I think when you look at like, if you look at like, retail applications or storefront applications, you know, forgoing the fact that Pepper had limited utility, because it had limited utility, if Pepper was a fully capable humanoid robot, like, you could believe, then yeah, then you would have kind of Pepper's primary activity, but there was probably a whole bunch of other things that Pepper could have done where LLMs, ironically enough, would be perfect.

Yeah, for sure. Like imagine walking into a retail store and saying, Hey Pepper, can you help me find a pink blouse? You know, then that becomes very interesting in that space. But the return on investment in the industrial application is very, you know, task oriented, as opposed to, like, a retail or grocery space or a hardware store, for example. You know, like, have you ever tried to find someone to help you find the screw you're looking for at Home Depot or Lowe's?

It's, it's like searching for a needle in a haystack.

[01:20:51] Audrow Nash: And then they're walking somewhere and now you're following them. Yeah, this kind of thing. Yeah, for sure.

[01:20:56] Melonee Wise: And so, but imagine if, if the utility of those types of robots is, is more of what we're talking about, where maybe the, the original application that you maybe have is restocking for the robot, but when it's not restocking, an individual can walk up to it and ask it to find a screw.

[01:21:16] Audrow Nash: Yeah. Or even while it's restocking, just like the people that are working there. Okay, very interesting. do you, let's see, like, I guess, wrapping up, what are you excited about in robotics now, in general?

What are you excited about in robotics now?

[01:21:34] Melonee Wise: Hmm, what am I excited about in robotics? I don't know. I, I think I'm, I think I'm most excited maybe about, the, the growing interest in making robots usable.

And, and I, I, I think that that's, that's something that's still going to take us a long time as a community, but I'm, I'm very excited by the progress we're making there. I, I somewhat wish there was, like, an academic version of an industrial conference, that was more like an academic conference where, like, companies could go and just present their HRI work.

I think it would be very interesting because one of the things, when you look out into the HRI or usability work for robotics, is there aren't a lot of places for us to talk about it as a community. And, and a lot of the research has limited data sets that are limited to, to, like, university students or whatever users they could scrounge up on a Sunday.

As opposed to some of the companies that have, you know, thousands of hours of interactions with hundreds of people at a time kind of data sets. And I'd really, I, I wish that we had more of a community and an opportunity and a venue for talking about how do we advance usability for robotics.

[01:23:23] Audrow Nash: Mm hmm. Why do you think someone's not doing that? Or is it a new idea? Because I think it seems like a great idea.

[01:23:32] Melonee Wise: probably some of it is proprietary work. Like, I will admit that I had a very strong interest in it, but Fetch never showed any of its UI ever. It like, like, there's very few videos of it online.

Wrapping up

[01:23:47] Audrow Nash: Yeah, gotcha. Everyone's holding their cards close to their chest for that kind of thing. Yeah. Okay. Well, uh, do you have any links or contact info you'd like to share with our watchers and listeners?

[01:24:02] Melonee Wise: I don't know. I'm on Twitter, Twitter or X, whatever it's called these days. And, at Melonee Wise, I mean, all my handles are Melonee Wise.

[01:24:13] Audrow Nash: Okay. Hell yeah. And I'll put a link to Agility in the episode. Hell yeah. Okay. Well, it's been great talking to you. And hearing your opinion on a lot of things. it's an awesome perspective and I really value it.

[01:24:26] Melonee Wise: Awesome. it's nice seeing you and hopefully we'll grab a beer sometime.

[01:24:31] Audrow Nash: Hope so. All right.

See ya.

That's it. I, for one, had my opinions changed on humanoids from this interview, but what did you think? Do you agree with Melonee that we're not going to see humanoids outside of manufacturing and logistics for 10 years or so? What other low hanging fruit might humanoids be used for? If you're not already, consider subscribing to never miss an interview and I'll see you next time.

Table of Contents

Start

[00:00:00] Stefan Seltz-Axmacher: this is a, super key problem and is a big part in our belief of why the robotics industry sucks so hard.

[00:00:09] Audrow Nash: We all know robotics is hard. One reason it's hard is because it's hard to find off the shelf components you can build a business on. Because of this, most robotics companies end up reinventing the wheel. For example, building their own mobility stack. And this is a bummer because it's a huge barrier to entry for robotics startups.

Specifically, you may have to raise millions of dollars and work for a few years to solve undifferentiated problems, like autonomous navigation, before you can even start on your startup's core problem.

This is why what Polymath Robotics is doing is exciting. They're creating a platform that solves robot navigation for a large number of industrial use cases so you can just build on top of it and not reinvent the wheel.

They're also doing a better job than you probably would because it's their core problem and you might cut corners because of deadlines.

I really like where this is going, and I expect more companies to follow Polymath Robotics' lead and make services for hard but undifferentiated robotics problems so you can use them off the shelf. I think this is a great thing for the robotics community, as it makes robotics startups a little bit, or even a lot, easier.

And that makes it so that we're more likely to see robots doing dull, dirty, and dangerous tasks no one wants to do sooner.

This was a tremendously fun interview for me. I think you're going to like it too, especially if you're interested in or curious about robotics opportunities for mobile robots in industrial spaces, if you're curious about new and potentially powerful business models for robotics companies, or if you're a hardcore nerd and love safety certifications and requirements testing.

I know some of you are out there. You're gonna love it. Check out the episode description for timestamps if you cannot wait.

Lastly, if you enjoy this conversation and want to hear more from Stefan and Ilia, they have a podcast called Automate It, which is available on all the podcasting platforms you'd expect. It's a lot of fun, and I recommend you check it out.

After you watch the interview, I'd love to hear what you think. Do you think the approach Polymath is taking is good for the robotics community? Do you imagine more robotics companies doing something similar? I'd love to hear what you think in the comments or on X.

Without further ado, here's our interview.

[00:02:46] Audrow Nash: Hi everyone. Stefan, would you introduce yourself?

Introducing Stefan, Ilia, and Polymath Robotics

[00:02:51] Stefan Seltz-Axmacher: Yeah, of course. My name is Stefan Seltz-Axmacher and I'm CEO and co-founder of Polymath Robotics. At Polymath, we're making it really easy to add autonomy to any off-highway vehicle. And before this I worked on self-driving trucks at, at Starsky, where we put the first driverless truck on a public highway.

[00:03:06] Audrow Nash: And Ilia, would you introduce yourself?

[00:03:09] Ilia Baranov: Hi, I'm Ilia Baranov, co-founder and CTO here at Polymath. Before this grand adventure with Stefan, I was working at Amazon on their home robot, Astro, and have lots of fun stories to talk about there. And then prior to that I was at Clearpath Robotics, was one of the early folks there, developing a bunch of the research and industrial autonomy.

[00:03:28] Audrow Nash: Now Stefan, would you tell me about Polymath?

[00:03:32] Stefan Seltz-Axmacher: At Polymath, we're building a modular stack where we're doing everything from sensor drivers all the way through throttle and steering commands, for the software that runs onboard a vehicle to make it move from one point in space to another. So we're not building hardware, we're not building apps to tell robots what to, what to do, but the really difficult but undifferentiated software packages that make a robot move through space, we've productized that, and we, we have it on more robots than we have people on our team.

[00:04:01] Audrow Nash: That's so cool. How many people on your team and how many robots do you have it on?

[00:04:06] Stefan Seltz-Axmacher: 10, and we're on, I think, 11 robots, going on 13 or 14.

[00:04:11] Audrow Nash: That's exciting. Hell yeah.

[00:04:13] Stefan Seltz-Axmacher: Yeah. And they're all different types of robots too. It's not the same thing 10 times. We're on things as varied as, like, bulldozers and tractors and articulated dump trucks and all sorts of stuff.

[00:04:23] Audrow Nash: That's so cool. And how are, so you mentioned you're not doing hardware. How are you interfacing with these different vehicles?

Let's stop reinventing the wheel

[00:04:33] Ilia Baranov: Yeah. So one of the things we had talked about, with Stefan, early on is that we had thought robotics keeps reinventing the wheel. And even ROS's conception, the Robot Operating System, was that we wanna reduce the amount of reinventing the wheel by having a common set of communication packages and examples and the community. And that helps a lot. But we had seen that a lot of people will build this kind of vertical stack for their particular robot or particular application, and we just decided that we want to abstract that away. So we have, we call it a hardware abstraction layer, pretty standard term, for input sensors. And then we have the same hardware abstraction layer for the output to the vehicle. And that output vehicle hardware abstraction layer takes care of things like: what is the kinematic model, turn radius, maximum speed, what does the footprint look like? And so the core piece between those two hardware abstraction layers doesn't really change, because all the complexities that are unique per vehicle are captured by those two layers.
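The output-side abstraction Ilia describes, a per-vehicle model of kinematics, turn radius, max speed, and footprint behind a common command interface, could be sketched roughly like this. All class and field names here are invented for illustration; this is not Polymath's actual API:

```python
from dataclasses import dataclass

@dataclass
class VehicleModel:
    """Per-vehicle parameters captured by the output hardware
    abstraction layer: kinematics, turn radius, max speed, footprint."""
    kinematics: str              # e.g. "ackermann", "differential"
    min_turn_radius_m: float
    max_speed_mps: float
    footprint_m: tuple           # (length, width) bounding box

class VehicleHAL:
    """Output-side HAL: the autonomy core talks only to this interface;
    vehicle-specific details (CAN frames, hydraulic valves) live below it."""
    def __init__(self, model: VehicleModel):
        self.model = model
        self.last_cmd = None

    def send_command(self, speed_mps: float, steering_rad: float) -> None:
        # Clamp requests to what this particular vehicle can physically do.
        limit = self.model.max_speed_mps
        speed_mps = max(-limit, min(speed_mps, limit))
        self._write_actuators(speed_mps, steering_rad)

    def _write_actuators(self, speed: float, steering: float) -> None:
        # Vehicle-specific in a real system; recorded here for illustration.
        self.last_cmd = (speed, steering)
```

The autonomy core only ever sees `send_command`, so swapping a tractor for a bulldozer means swapping the model and the actuator layer, not the core.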

[00:05:33] Audrow Nash: Very cool. So you have a good way of talking to, oh, go ahead, Stefan.

[00:05:37] Stefan Seltz-Axmacher: yeah, because the problem that we're solving is, if you were to leave your day job to go start, say, an autonomous tractor company, the vast majority of the code you'd write in the first two to four years would be fundamentally undifferentiated from that of any of the other 150 or so autonomous tractor companies.

You'd just be, like, working really hard on a really hard set of problems that no one cares about, even if you solve them successfully.

[00:06:01] Audrow Nash: Yeah,

[00:06:02] Stefan Seltz-Axmacher: What we're doing is

we're giving you that as a product, so you can focus on integrating it into an actual product that some farmer somewhere, or some mine operator somewhere, or someone, whoever, cares about. And you can focus on actually just making robots valuable, as opposed to being the 1500th team to reinvent point-to-point navigation.

[00:06:21] Audrow Nash: So are you thinking of it like, you have this software that handles mobility for robots and you are selling it to businesses that make like tractors or big vehicles. And so then they outfit their vehicles with your software, and they're selling that complete package. So you are selling them the autonomy that they use as a service on their hardware in a

[00:06:52] Stefan Seltz-Axmacher: So we might sell it to an OEM like you're describing. We might

sell it to some big industrial company who's really sophisticated and can write their own software to tell robots what to do. Yeah.

[00:07:02] Audrow Nash: I love it. So how does it, I guess, Ilia, how does it work? In terms of, are you connecting over the CAN bus with the vehicles, or how are you actually interfacing with the, what do we call it, the vehicle, whatever it is?

[00:07:21] Ilia Baranov: The, vehicle, the robot. we have this discussion back and forth is it a vehicle, is it a robot, is it machinery? Is it like industrial equipment? that's a marketing problem. Luckily I don't have to deal with that too much.

[00:07:32] Audrow Nash: Yeah.

[00:07:33] Ilia Baranov: but, yeah, so, yeah, as part of the interface layer, one interface we have is SocketCAN, which can then talk to a whole bunch of CAN-based equipment. But the most extreme case we have is actually one of our hardware partners, called Hardline, in Canada. They're now part of Hexagon. They have a 1996 bulldozer, which is entirely mechanical and hydraulic.

And so the only interface we get to it is open this valve or close this valve

that's the most control we get of it. And so in that case, our hardware abstraction layer is abstracting away all the valve controls, up to a level where it's, okay, we do positional loop closure using all the sensors we have, and then we control this thing like a, like an unknown plant that we throw valve controls at and get speed out of.
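Controlling an "unknown plant" with only open/close valve authority comes down, in the simplest case, to a feedback loop: command a valve opening, measure the resulting speed, correct. Here is a generic proportional-integral sketch; the class, gains, and interface are invented for illustration and are not Hardline's or Polymath's actual controller:

```python
class PISpeedLoop:
    """Generic PI speed loop: the hydraulics are treated as a black-box
    plant that turns a valve opening in [0, 1] into vehicle speed."""
    def __init__(self, kp: float, ki: float, dt: float):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0   # accumulated error, corrects steady-state offset

    def step(self, target_mps: float, measured_mps: float) -> float:
        """Return a valve opening in [0, 1] given the latest speed measurement."""
        error = target_mps - measured_mps
        self.integral += error * self.dt
        command = self.kp * error + self.ki * self.integral
        return max(0.0, min(1.0, command))  # valve can't open past 100%
```

Because the loop only sees measured speed, the same structure works whether the plant is a modern drive-by-wire tractor or a 1996 hydraulic bulldozer; only the gains change.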

[00:08:21] Audrow Nash: So you are doing, I thought you were not doing any hardware, but you are doing hardware for custom solutions like

[00:08:28] Stefan Seltz-Axmacher: So we don't, so

[00:08:29] Ilia Baranov: even in

[00:08:30] Audrow Nash: Oh, they're doing it.

[00:08:31] Stefan Seltz-Axmacher: Yeah. Yeah. So, like, an interesting thing, and I think everyone who's done robotics has seen it: when you try to build the vehicle, or when you try to retrofit the vehicle, there's an incredible amount of really hard, custom, one-off work to retrofit this 1997 bulldozer or this 2023 tractor or this whatever.

And that work is really hard to do when you're a five person team. And the same people doing that mechatronic design are also writing the code or also building the ML algorithms or also building the app are also figuring out how to operate tractors as a service.

So there are, however, a whole bunch of teams all over the world, including Hardline up in, in Canada, who are just mechatronics consulting shops, where they can show up, look at a vehicle, they have a pre-fit retrofit kit.

We'll customize it for that specific vehicle, and then we can talk to their API and issue commands to it.

[00:09:22] Audrow Nash: that's super cool.

[00:09:23] Stefan Seltz-Axmacher: So we get to just be a software company and not go be in cold places to bring up robots.

[00:09:28] Ilia Baranov: so hopefully I'm not, like, I'm not overstating our position here, but the equivalency I like to draw is that you can think of us as Microsoft in some sense. And there's a whole bunch of laptop manufacturers in the world, and, like, an ASUS laptop and a, whatever, a Dell laptop are fundamentally different beasts, but they comply to some underlying hardware abstraction layer standard that then our OS can just sit on top of and control.

It's stretching the analogy a little bit too far, but that's the direction we're going, where there's these retrofit and service and OEM companies all over the planet, very specialized in their particular vehicles, and all of them know how to treat their particular vehicle, and all they have to do is give us some level of

[00:10:10] Audrow Nash: You just need the interface. Yeah. And are you doing it all with onboard sensors for these, I suppose you probably. So how would you, is it all GPS driven?

[00:10:23] Ilia Baranov: can

[00:10:24] Audrow Nash: trying to imagine.

Yeah. So you are actually, your approach is quite modular, depending on the needs of it, is what it sounds like.

Okay.

[00:10:31] Stefan Seltz-Axmacher: because sometimes GPS solves all your problems and sometimes it doesn't exist.

[00:10:36] Audrow Nash: yeah. And then, yeah, and sometimes it doesn't exist, and sometimes you have crazy reflections if you're using RTK, and all sorts of things. Okay. So you have this modular layer, your software you can run on a whole bunch of different hardware. I like that approach. What exactly, being that you're modular, I suppose there's a bunch of things that you have the ability to do, but what are the core competencies of the, I dunno, your, what do we call your software?

Like what, how do we refer to it?

[00:11:13] Ilia Baranov: I, we've been, I don't know, again, if this is a marketing problem, we

just call it the autonomy core, that kind of

[00:11:18] Audrow Nash: The autonomy core. I'll just call it

[00:11:20] Ilia Baranov: the core.

Then the layers around

it or the core. And Yeah.

maybe that's a good movie too. Anyway,

there's, the hardware abstraction layers around it, but the core is the piece that we really put a lot of effort into and we try to change as little as possible

[00:11:34] Audrow Nash: So what

[00:11:35] Stefan Seltz-Axmacher: think of that as everything.

[00:11:37] Audrow Nash: Oh, go ahead Stefan.

[00:11:38] Stefan Seltz-Axmacher: yeah, so think of the functionality of that as everything from literally running the drivers on whichever sensors we happen to have, to make sure that we're getting data from the LIDAR, we're getting data from the RTK GPS, we're getting data from the radar, or whatever combination of things we happen to have. Turning those into some presets, things like point clouds, turning that into cost maps.

[00:12:01] Audrow Nash: on a lot of things.

[00:12:02] Stefan Seltz-Axmacher: And use over and over

[00:12:04] Ilia Baranov: lot of the,

[00:12:05] Stefan Seltz-Axmacher: And then do,

[00:12:06] Ilia Baranov: hardware abstraction layer is basically turning into common data types and having as few of those as possible that are as widely defined as possible.

[00:12:14] Audrow Nash: Makes sense. That's probably the best way to scale, is what I would imagine for this.

[00:12:18] Stefan Seltz-Axmacher: basically our secret evil plan. And honestly it's part of our sales strategy as well, is we tend to just talk to our customers about how we've built things. And it tends to be how they wish they could have gotten to build things if they didn't have a six month deadline when they were first building their whatever. And 'cause the reality was when they, thought they were gonna start whatever autonomy project they're currently on, they thought, this time I'm gonna do it right. It's gonna be reusable. and then, some jerk squad, CEO changes the deadline or some customer says, I want some additional feature or whatever.

And the engineering team cranks out a bunch of garbage spaghetti code in the last three weeks that then works for the rest of the robot's life. everyone wants to build robots this way. We've been free of the original sin of being tied to a specific use case that's enabled us to build a highly reusable, adaptable, modular autonomy code.

[00:13:12] Audrow Nash: Yeah, so that sounds awesome to me. And you guys are using the robot operating system pretty heavily with this, so I'm, the way that I am thinking of this now is it's like it's an additional abstraction on top of ROS that manages a lot of things

[00:13:30] Stefan Seltz-Axmacher: Yep.

[00:13:31] Audrow Nash: Okay. Is it, could you almost say it's like a behavioral level where you say, I wanted to do these things?

Or how, or is it just unifying a lot of things? Or maybe it's

[00:13:40] Ilia Baranov: behavior control is a lot of what we do.

So we do have a behavior tree and a behavior engine, as well as an API layer to command our vehicle and how that all interacts. But yeah, I get that a lot from roboticists; people say, what you're describing is basically ROS, which like

[00:13:55] Audrow Nash: well, higher level, easier to input stuff and get it to do what you

[00:14:01] Ilia Baranov: and again. Yeah, and I

[00:14:03] Audrow Nash: Probably does resource

[00:14:03] Ilia Baranov: I give is

[00:14:04] Audrow Nash: all sorts of things.

[00:14:06] Ilia Baranov: exactly, and, the comparison I give is ROS 2 is a, is an excellent open source toolbox and we're providing just the whole house.

And yes, you could use the toolbox and build the house, or you could just go to us and we'll build you the house, or we have a house that we can just copy paste.

[00:14:21] Audrow Nash: Do you guys know in web development, there's like Ruby on Rails or something like this, which streamlines a lot of things, would you guys, would that be a fair comparison? Ruby on Rails, but for Robotics in some sense? Or is it not quite broad enough and

[00:14:35] Ilia Baranov: I, would

[00:14:36] Audrow Nash: on Rails, but four mobile robots I suppose,

[00:14:41] Ilia Baranov: for mobile robots, I'd say it's, even one layer above that. I, would say,

[00:14:47] Stefan Seltz-Axmacher: So, our, API, we have a relatively straightforward rest, API, where like commands might be, go to this GPS coordinate with this heading, and then we just do that. So like we have a customer, for example, who's a big conglomerate. they have an in-house built ERP, written by

[00:15:05] Audrow Nash: an ERP?

[00:15:07] Stefan Seltz-Axmacher: an enterprise resource planning program.

So basically it's a software program that runs their operation, their multi-billion dollar operation. it's been written by this guy, like literally a guy, over the last 20 years as I think a monolith. In C#

[00:15:25] Audrow Nash: That's horrible.

[00:15:27] Stefan Seltz-Axmacher: can issue commands to our robots.

He doesn't know much about geospatial data, he knows nothing about safety engineering.

He knows nothing about machine learning. He can issue commands to robots that we drive in their fleet. So, for example, the first time we were doing an integration test, he was like, huh, I'm sending messages and you guys aren't moving. You've been making all these promises about your robots working; what's going on? And we looked at the message, and the GPS coordinate he was sending us to was zero comma zero, which, for those of you not familiar with GPS, is the middle of the Atlantic Ocean. We can enable our customers to just send simple commands, and even if they're unsafe, we won't do unsafe things.

Like we won't leave the geofence in that use case. If you try to order us to drive into an obstacle, we won't, unless you for some reason configured us to do that. So now a regular software engineer... yeah,

[00:16:21] Audrow Nash: saved something. Yes.

[00:16:22] Stefan Seltz-Axmacher: a VC during a demo.

[00:16:24] Audrow Nash: Ah, ha That's hilarious. Okay. Yeah. Ilia, were you gonna say something?

[00:16:31] Ilia Baranov: Yeah. So I think I was gonna expand a little bit on the API layer. our kind of mission is to make the API just behave as any other web API standard request and response so that anybody, any web developer out there that you can hire for 20 bucks an hour can suddenly hook up whatever system you have in house to a robot and do so without damaging equipment.
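The web-API framing Ilia describes can be shown with a toy sketch. None of this is Polymath's actual API; the command shape, field names, and geofence values are invented for illustration, echoing the zero-comma-zero story above:

```python
# Hypothetical server-side validation for a "go to GPS coordinate" command.
# A goal outside the geofence is rejected instead of driven to.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class GoToCommand:
    lat: float
    lon: float
    heading_deg: float

# Invented rectangular geofence around an example work site.
GEOFENCE = {"lat_min": 29.0, "lat_max": 30.0, "lon_min": -99.0, "lon_max": -98.0}

def validate(cmd: GoToCommand) -> Tuple[bool, str]:
    """Return (accepted, reason). The robot never acts on rejected commands."""
    inside = (GEOFENCE["lat_min"] <= cmd.lat <= GEOFENCE["lat_max"]
              and GEOFENCE["lon_min"] <= cmd.lon <= GEOFENCE["lon_max"])
    if not inside:
        return False, "goal outside geofence"
    return True, "accepted"

ok, reason = validate(GoToCommand(lat=0.0, lon=0.0, heading_deg=90.0))
# (0, 0) -- the middle of the Atlantic -- is rejected rather than driven to.
```

The design point is that safety checks live behind the API, so a web developer with no robotics background can integrate against it without being able to damage equipment.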

[00:16:55] Audrow Nash: I like that. The biggest question that comes to my mind with that is, how do you manage state? Because it's complex. You tell the robot, go here, you have to wait till it's there. There's some sort of underlying action-like thing, and then with a REST API, it's very: you send something, it sends something back.

how do you manage that? Does it open up a socket or is, but then that gets more complex, or I guess how do

[00:17:21] Ilia Baranov: So we, actually, yeah, so there's two layers to that. The first layer of your question on, robot management is really using the behavior tree.

we fund Steve Macenski's work on Nav2, in part.

And we're fans of it. We use big chunks of it. Yeah, absolutely, it's fantastic work. And he has a behavior tree internal to Nav2.

We have one, a kind of a larger one that controls other stuff outside of Nav2, but very similar kind of concepts, similar, inspiration. And that manages things like the client has requested five different goals and halfway through it they interrupt us and give us a new set of four goals.

Should we then resume the original goals or should we change, or should, what logic state should we take?

So that's all on robot and that's all function of our behavior tree.

[00:18:11] Audrow Nash: if I send, two, two positions or something, it creates a queue. How do you, but so to me. That's probably the behavior expected, but it may be like, oh no, I actually changed my mind. I want to go to this one. I don't want to go to the first one.

[00:18:25] Ilia Baranov: so our rest, API,

[00:18:27] Audrow Nash: Okay. Yeah,

[00:18:27] Ilia Baranov: rest, API, when you send commands, you have either a preempt or a queue

[00:18:32] Audrow Nash: Oh, you guys have thought

[00:18:33] Ilia Baranov: we just provide all the, we, yeah, we provide all of those knobs. And actually a big challenge of building our API is we have hundreds of knobs we can adjust under underneath our layer and being very judicious on which ones we expose.

So our API doesn't just become this enormous surface area where we have to test thousands of options, but just the minimum possible set that is intelligent enough to work in most circumstances.
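The preempt-versus-queue knob described above, including the earlier example of five goals interrupted by four new ones, can be sketched in a few lines. This is a hypothetical illustration, not Polymath's implementation:

```python
# Toy goal manager with the queue/preempt semantics described in the interview.
from collections import deque

class GoalManager:
    def __init__(self):
        self.goals = deque()

    def submit(self, goals, mode="queue"):
        """mode='queue' appends after pending goals;
        mode='preempt' drops all pending goals and replaces them."""
        if mode == "preempt":
            self.goals.clear()
        self.goals.extend(goals)

mgr = GoalManager()
mgr.submit(["A", "B", "C", "D", "E"])             # client queues five goals
mgr.submit(["F", "G", "H", "I"], mode="preempt")  # interrupts with four new ones
```

Exposing this as a single `mode` parameter, rather than surfacing every internal scheduling knob, is the kind of judicious API-surface choice Ilia is describing.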

[00:18:59] Audrow Nash: Yeah. Oh, I like that a lot. Okay, so I really like the idea of these, Ilia, is this your kind of, are you the master architect of all of this, or how

[00:19:11] Ilia Baranov: I wish

[00:19:12] Audrow Nash: how is

[00:19:13] Ilia Baranov: gonna nod, but it's actually, the engineering team. I'd say I, I'd say I, I do the least amount of engineering outta the team, thankfully. I think they're usually pretty happy when I'm not messing in the code because my code tends to be experimental and tests

[00:19:29] Audrow Nash: You're prototyping, you're trying to get it

[00:19:31] Ilia Baranov: I, I'm per, I'm, very much that sort of CTO and engineer Exactly. And I can do things quickly and kind of test things out. But really it's a huge credit to the engineering team who really get it to a quality level where it works every time and is safe and is functional.

[00:19:47] Audrow Nash: I like

[00:19:48] Ilia Baranov: and as much as I'd like to be that kind of person, I'm, not nearly enough of that kind of person.

[00:19:54] Audrow Nash: the thing is, so I've, seen companies where they have the master architect who builds everything. and they, or it could be build or it could be designs, everything. But the result that I've often seen is that some of the deci, like many of the decisions might be amazing, but some of the decisions were not, and they create unbelievable chafing.

And then it's also very hard to onboard new people into the system. So if you're working with a team, that's probably a very good way to distribute it and get some sort of consensus around the decisions, especially about the things that you're exposing, I would imagine.

[00:20:37] Ilia Baranov: Yeah. And I'd say really my function is in a lot of ways to keep refocusing the team on, let's make as general-purpose a piece of software as we can. 'Cause often when we're working with a client, it's a very natural tendency to say, the client has requested X,

I'm gonna build a function, right?

I'm gonna build X. And I'll often come in and remind people, like, X is a subset of this larger function that more people would want.

Let's build that larger function and teach the client to use that, 'cause then it immediately applies to our whole fleet

because.

[00:21:10] Audrow Nash: and you guys had, was it eight, eight people at the company?

Or something around there. 10.

[00:21:16] Stefan Seltz-Axmacher: Yep. Yep.

[00:21:17] Audrow Nash: And, so 10 people are, is almost everyone an engineer for this kind of thing or?

[00:21:23] Stefan Seltz-Axmacher: everyone. Eight

[00:21:24] Audrow Nash: eight out of 10. Yeah.

[00:21:25] Stefan Seltz-Axmacher: Yeah.

[00:21:27] Audrow Nash: Okay. And how long have you guys been going Just to, like how long has this company been around?

Okay.

[00:21:34] Stefan Seltz-Axmacher: Two, maybe. Two and a quarter.

[00:21:36] Ilia Baranov: Yeah, two and a quarter about right.

[00:21:38] Audrow Nash: How, how are you funding,

the initiative so far?

[00:21:44] Stefan Seltz-Axmacher: So we've, we've raised some money, a decent amount. we haven't announced it publicly. more, than a little, less than a lot.

[00:21:53] Ilia Baranov: one.

[00:21:54] Stefan Seltz-Axmacher: and, we also have this weird thing of customers paying us and SaaS margins, Yeah.

[00:22:02] Audrow Nash: What a thing.

[00:22:03] Stefan Seltz-Axmacher: Yeah. it's, crazy after trying to do a big monolithic, vertical Robotics company before where the, lift to get to enough robots to be profitable, was a, okay, we need $40 million.

But there's a very real world where we're just like a profitable company this year, and I don't know how to grok that. My inner, like my roboticist brain is just very upset with what do we do when more money comes in than goes out?

I think buy Lamborghinis

[00:22:31] Audrow Nash: get into hardware and then you can

[00:22:33] Stefan Seltz-Axmacher: Oh.

[00:22:33] Audrow Nash: You lambing you get

into,

[00:22:36] Ilia Baranov: that's the money pit.

That's the money pit we shovel the money into.

[00:22:40] Audrow Nash: yep,

Okay. I really like

[00:22:42] Ilia Baranov: it's, funny, on that note, just a, small anecdote, Stefan and I keep trading emails with our bookkeeper. 'cause we're always like, am I miscounting something,

[00:22:51] Stefan Seltz-Axmacher: this can't be right.

[00:22:52] Ilia Baranov: with our business plan?

it should be doing this and it's like doing this.

I don't understand.

[00:23:00] Audrow Nash: so exciting for those listening pointing up instead of down.

[00:23:03] Stefan Seltz-Axmacher: Yeah.

[00:23:04] Ilia Baranov: Yes. yeah,

[00:23:06] Audrow Nash: Very exciting to hear for robotics companies. Hell yeah. Love it.

[00:23:09] Ilia Baranov: yeah,

Is your code open-source?

[00:23:11] Audrow Nash: what, I guess another kind of fundamental question. I look on the website, why are you guys not open source? is it because of the kind of competitive advantage of this, but why not?

why not? I guess there were probably discussions around this 'cause you guys like ROS and things like this, but what are you thinking?

[00:23:34] Ilia Baranov: I, took this, from a good friend of mentor of mine, Ryan Gariepy, at,

[00:23:41] Audrow Nash: He's wonderful. Yeah.

[00:23:42] Ilia Baranov: he had told me this. Yeah, he's great. and he had set this a long time ago at Clearpath and I, quite like the logic behind it, which is, we'll open source, anything that is useful to the community and not directly harmful to us.

So we actually open source as much as we can. A lot of the Nav2 stuff that we're trying to help fund, of course, is open source. We have a big discussion going on about how to fuse GPS into fuse, Locus Robotics' factor-graph localization system; there's some efforts we're doing there. We have an open container initiative that's fairly public, with a bunch of the containers we build and use for build and infrastructure. But the specifics, for example, of how our behavior tree dynamically generates nodes and makes sure that they're all talking to the same system are useless for anybody else, because we've already built enough that it's weird and special

and so even if we did open source it, nobody would care. and also would cause more problems for us than it solves.

so whenever something clears those two bars, we open source it. We try to open source anything that doesn't run into those two problems.

[00:24:51] Audrow Nash: Makes sense. I would wonder I don't know, there are, it seems like there's a few companies that are in kind of similar space. I'd like a lot how you guys have carved out the mobile robots space. I think that was a very smart thing rather than we're gonna do all Robotics. That, that to me seems like a good decision.

I'd like to talk about that later. But some of them are going open source, hoping for additional adoption. I guess, maybe, Stefan, how do you think about this? Would it accelerate adoption if you made certain parts open source? You could even have a high-level Python library that just calls your API or something.

why, are you thinking about

Or

[00:25:38] Ilia Baranov: be clear, the API is open source, by

[00:25:40] Audrow Nash: Oh, okay. I just don't see

[00:25:41] Ilia Baranov: and at talking to the Oh, yeah, No, it's there. we,

we've gotta do a better job of marketing it, but like any documentation and code on, and I've even posted some YouTube videos on here's how you talk to our robots, here's how you use them.

All of that, everything that would actually make sense for customers, is all open source. Throw in pull requests; we'd love to review them. Yeah, no question.

[00:26:02] Audrow Nash: Okay. That's

[00:26:02] Ilia Baranov: it's the underlying ma, the

underlying pieces that are weird and special.

[00:26:07] Audrow Nash: as someone who's very lazy, I would expect at the footer on your website that there's a GitHub link and this kind of thing. That's exactly

[00:26:16] Stefan Seltz-Axmacher: good, maybe by the time you publish this podcast that will exist.

[00:26:21] Audrow Nash: Okay. Because that would be great. okay, so I misread the website in that case. that seems awesome. And, okay, so I guess,

Why are you carving out the niche of mobile robots only? Why aren't you doing arms? Why aren't you doing all sorts of other things?

Why focus on mobile robots?

[00:26:42] Stefan Seltz-Axmacher: Yeah. so I think that has to do with a lot with what we've done before. I'm not an armed guy. I'm not a quadcopter guy. There's also a lot of people wanting to do arm stuff like this. There's a lot of people doing quadcopter stuff.

in both of our past experiences, we rebuilt stuff that we had already built and other people had already built a whole bunch of

times and then found it wasn't incredibly reusable.

so previously, when I had a vertical self-driving truck startup, every time we'd get some big piece of press, I'd get a bunch of inbound from random industrial companies saying, hey, air quotes, you've solved autonomy. If we pay you $8 million, can you automate this super niche piece of equipment that only eight of exist in the world, that no one else uses, but is really valuable to us and we can't hire operators for? And because of how we were architected, as like some big monolithic vertical thing, we'd basically always have to say no to that. It would be an entire pivot. It would be throwing out 75% of the code base. And if you look at the big OEMs who've built autonomy multiple times,

like some of 'em, one of them, who likes the color green, has built autonomy at least like 15 or so times.

None of those rebuilds have used more than 25% of any existing code base.

None of them have cost less than $10 million. None of them have been completed in less than three years.

This is a super key problem, and it's a big part of my, of our belief of why the robotics industry sucks so hard. Because people want robots to grade construction sites so they can start building, or to till fields, or to spray fertilizer on crops, or to do whatever. And to even look at a use case like that, pre-Polymath, you might have to hire a consulting firm for five to fifty million dollars to build something that might only work for a demo day. And that does not build a good, strong industry. So, like, the arm people have done cool stuff in manufacturing. The drone people have done some cool stuff in toys and inspection and whatever. But for mobile robots that move through the earth, we need to make it not so hard to create value.

[00:29:09] Audrow Nash: I think that's a great idea. And it's so cool to me that, so going into the founding story

[00:29:16] Stefan Seltz-Axmacher: Yeah.

[00:29:17] Audrow Nash: coming to Polymath, Robotics, you guys, you,

so Stefan, you were doing a highway autonomous car startup, or autonomous truck, or what was it exactly?

Stefan's experience starting Starsky Robotics

[00:29:27] Stefan Seltz-Axmacher: Yeah. So Starsky Robotics, we were doing self-driving trucks. We did, we, we started with a similar thesis as Polymath that robots are hard. the way we, that we were overcoming robots being hard is we do autonomy on the highway where it's a relatively constrained environment.

We do teleop in the first and last mile, where it's kind of chaos. And then to make that problem easier, we'd control our own routes by being our own trucking company. And at the time, when we were making a lot of hard technical decisions, 2017-ish, basically no vendors would give you any part unless you were willing to sign something that said you'd never rely on it for a driver-out test. So as a result, that's what Starsky became. Yeah. So we talked to LIDAR companies, we talked to Velodyne, we talked to,

[00:30:16] Audrow Nash: stuff, but don't use our stuff for real things. Is that what basically was?

[00:30:21] Stefan Seltz-Axmacher: Yeah,

yeah. legitimately that was what it was like.

So we ended

[00:30:27] Audrow Nash: They don't wanna be affiliated with it, I suppose if it goes poorly,

[00:30:31] Stefan Seltz-Axmacher: it wasn't illegal, it was just fear. I

think it's important to note, in on-road autonomy, I haven't done an actual count in a while, but I think there's five companies ever that have taken the person out on public roads. Like

something like that. And there's in the range of probably $25 billion invested in the space. It is mostly not a real industry, or I think

actually the number might be a hundred billion dollars invested. But so, previously I'd done that, and because of robots being hard, we grew into this Frankensteinian monolith vertical company, where we had to do an incredibly not-built-here-syndrome hardware stack, an incredibly not-built-here-syndrome autonomy, teleop, and safety stack, software for running your own trucking company, and then running a trucking company.

I'd have to go from meetings where

[00:31:27] Ilia Baranov: Yeah.

[00:31:27] Stefan Seltz-Axmacher: just really, really hard. And I don't know if most people can be successful in that, in, in that type of business if

you

reinvent

[00:31:37] Audrow Nash: have to be like four CEOs at once to do that kind of thing. It seems like so many different businesses and industries that you're, and I don't know what your trucking experience is, but like you have to be a trucking business manager as well for all these things. So that sounds to me tremendously difficult.

[00:31:52] Stefan Seltz-Axmacher: I, think the re one of the main reasons that company didn't raise a Series B was because I was left less good at talking about the financials of a trucking company than a Fortune 500 trucking company CFO was,

which was a hard thing to be good at talking about right after explaining, safety engineering to someone for

[00:32:09] Audrow Nash: yeah, totally. Yeah. That's so interesting. Okay, and in that experience, you saw a bunch of these, it'd be like, okay, you get press. Then, it's oh, you've solved autonomy for these problems. Some people see it and then go, Hey, we really need something similar to that. And you saw these over and over again, and this is how you've come to Polymath,

[00:32:33] Stefan Seltz-Axmacher: Basically, big enterprise believes in autonomy.

Like big mining companies, big farms, big factories, they all have labor shortages.

They've seen enough headlines about how self-driving is here, and they're looking around and thinking like, I can't hire people to work in my lumber mill. I can't hire people to work in my port.

I can't hire people to work in my rail yard. maybe some robots could move this stuff from A to B. And before Polymath, maybe there was someone who happened to be building the right solution on the right vehicle, like a, perfect unicorn of a vertical lineup.

Or maybe you had to hire a consulting firm to build you something from scratch.

[00:33:15] Audrow Nash: So you are allowing companies to build up around specific vehicles that big companies are using, and you provide a lot of the research and basically the infrastructure that they need to build to do

[00:33:30] Stefan Seltz-Axmacher: We give them a code base that will just work. we can connect them to, with people to do the retrofits. and we can take commands from whatever their weird system for running their businesses

[00:33:40] Audrow Nash: Cool. Cool. That seems so, I really, oh yeah, go ahead.

A good task to automate and not get lit on fire

[00:33:45] Ilia Baranov: a I'll, give you a fun, illustrative example, which is my favorite one. We haven't done this yet. I keep hoping that eventually we'll get to this mission,

but there's this, problem in metal recycling where if you recycle a bunch of metal, you end up with this big molten pot with a bunch of slag on the top. Once you dump out the, useful

[00:34:04] Stefan Seltz-Axmacher: and by like big molten pot. imagine maybe, a cauldron with a, radius of like 10 feet and like maybe

[00:34:16] Ilia Baranov: a backyard pool.

[00:34:17] Audrow Nash: like something a villain would fall into at the end of a

[00:34:20] Ilia Baranov: yeah, exactly. Exactly.

[00:34:22] Stefan Seltz-Axmacher: what kills the bad guy in Terminator

[00:34:24] Audrow Nash: Yep. Yep.

[00:34:26] Ilia Baranov: Yeah. So it's this enormous crucible. And what they have to do is they have to pick it up, drive it outside to a particular dump site and dump it over And there's a purpose built machine. There's a purpose-built machine that drives this thing. And it's a heinously dangerous job because if you accelerate too quickly or

suddenly brake or hit a bump, liquid metal splashes the vehicle and catches on fire.

[00:34:49] Stefan Seltz-Axmacher: and you die.

[00:34:49] Ilia Baranov: the vehicle's pur and you die. Yeah. The vehicle's purpose built that, the like liquid metal fire death is at the back and the propane tanks running the vehicle are all the way at the front

to make sure they're as far away as possible. There's probably like 200 of these vehicles on the planet, but they're so ludicrously dangerous that it's like a perfect case for autonomy: same spot every day, same exact conditions, perfectly designed space. But the guy's getting paid a ton of money

[00:35:21] Audrow Nash: I was gonna say they

[00:35:21] Ilia Baranov: because it's dangerous.

[00:35:23] Audrow Nash: year or something

[00:35:24] Stefan Seltz-Axmacher: they also, a thing Ilia, I don't know if I've told you

[00:35:26] Audrow Nash: at a high rate.

[00:35:28] Stefan Seltz-Axmacher: at, another thing I learned about this use case recently is it's also incredibly carcinogenic. So even though the guys might be making 300k a year after 10 years, they have all sorts of unpleasant cancer.

so if you think about, like, vertical autonomy for that solution, there's 200 of those vehicles in the world.

Does it make sense to spend $50 million to learn how to automate just that?

it literally might be cheaper to just give people cancer than to build one off autonomy for that.

use case. Yeah. The market has accepted cancer over the status quo in robotics.

[00:36:07] Audrow Nash: Oh my gosh.

[00:36:08] Ilia Baranov: But meanwhile, it's the same steering wheel, the same gas and brake pedals, like a human's driving it the same way that we drive our test tractor.

So there's zero reason we can't stick a few sensors and actuators on there with a retrofit partner and then have day one perfect autonomy just like we have on the rest of our fleet.

[00:36:27] Audrow Nash: That's super cool. I don't, so

one thing that I don't fully understand is, the sensors, so you said earlier that you use a lot of the sensors that are on these vehicles. I would imagine you want additional ones, for autonomy. So I would think vision would be really useful. Maybe LIDAR would be useful.

You might want an IMU or whatever. so do you have a little box of sensors you put onto these vehicles sometimes or

Integrating with sensors

[00:36:55] Ilia Baranov: I wouldn't say a box, I would say. Yeah, case by case, but generally speaking, the main ingredients are some sort of feedback from our vehicles. Just even a, live or not a live state is useful. So on that 1997 bulldozer, we at least know the engines turning over.

So we have some concept of health. but even better than that, if it has a speedometer, feeding in the speedometer is a nice ground truthing for us

[00:37:20] Audrow Nash: Okay. And would, do you do

[00:37:21] Ilia Baranov: If we're standing still, we're standing still.

So even that layer of data, even though it's coarse, is useful for us as an error checker for everything else.
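The coarse-feedback-as-error-checker idea can be sketched like this; the function and tolerance value are invented for illustration, not Polymath's actual logic:

```python
# Toy sanity check: compare a coarse speedometer reading against the
# localization system's speed estimate, as described in the interview.
def speed_check(speedometer_mps: float, estimated_speed_mps: float,
                tolerance_mps: float = 1.0) -> bool:
    """Return True when vehicle feedback and the state estimate agree."""
    return abs(speedometer_mps - estimated_speed_mps) <= tolerance_mps

# Speedometer says stopped, but the estimator thinks we're moving at
# 2.5 m/s: the check fails, flagging a problem somewhere upstream.
consistent = speed_check(0.0, 2.5)
```

Even though a speedometer alone can't localize the vehicle, "if we're standing still, we're standing still" is exactly the kind of cheap cross-check this sketch captures.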

[00:37:30] Audrow Nash: Do you tap the vehicle system in that case, or do you put a camera to just look at it or, oh, okay.

[00:37:37] Ilia Baranov: we tend to just use the CAN bus if it's there.

if it's not, we'll put a little encoder on the, again, it's not us, right? It's our partner teams, our integrators. They'll put a little encoder on part of the drive shaft, or, worst case, a camera staring at the dashboard. But that one, usually the update rate is, eh,

[00:37:54] Stefan Seltz-Axmacher: and then we'll have a bunch of different sensors added, depending on the use case. So it could be LIDAR, it could be stereo cameras, could be radar, it could be, whatever. an interesting thing that we found is people have really strong opinions about sensors. if we were just a LIDAR shop, there are, we would have fewer customers than we have right now.

if we were a never LIDAR shop, we would have fewer customers than we are right now. And I

[00:38:19] Audrow Nash: gotta support em all. Yeah.

[00:38:21] Stefan Seltz-Axmacher: Yeah. And like the thing is when everyone builds their robot as like a monolith, what happens is there's so many crappy parts of your code that at some point some demo was ruined because a velodyne LIDAR wasn't behaving properly.

Or at some other point a demo was saved because, oh look, we replaced that LIDAR with an Ouster, or replaced it with a Zed camera. So now every robot forever needs a Zed camera on it. And we need to be able to work with both of those strong opinions. And my hunch is every listener here has some sensor out there that they hate and another one that they love.

[00:38:58] Audrow Nash: Yeah. That's so funny. Yeah. And you guys, it makes sense. Catering to, or not catering to, but allowing all of the different sensors that people might want to use.

[00:39:10] Stefan Seltz-Axmacher: Yep.

[00:39:12] Audrow Nash: Okay. So that's really cool.

And I like, so a thing that is very interesting to me about this is what you guys are doing.

It's like, okay, so you mentioned those 200 vehicles that are doing the super dangerous metal hauling, the liquid metal terror. That just sounds awful. But so you have 200 vehicles doing that. Some companies want to pay a lot of money to automate that problem, but it doesn't make sense for a venture capitalist to be investing in a company that does it, because there are 200 of these vehicles; it's not gonna grow that much.

Maybe you can apply it to other things. but they're looking at the market that's gonna buy right now. And so it's really interesting to me 'cause I've seen, I feel like with a lot of the Robotics companies that I see is they pick one little vertical like that and then they go build something in that.

And it's not that attractive to venture capitalists because they want to buy in at a fairly late stage where they go, oh you are, you clearly have market traction, whatever, and you need this big injection of cash for doing something, but I want to see a 10 or a hundred times return in five years or something like this.

You guys are doing it clever where you're agnostic to all these problems and you let other companies go and be profitable on those vehicles.

but there's a lot of growth that you guys can do, which is quite exciting, I

Approaching the problem from another side than most robotics startups

[00:40:39] Stefan Seltz-Axmacher: yep. And we can make a lot of betts in a lot of different industries, where if our, customers are successful, we'll be successful. but we can simultaneously make betts in metal recycling as well as forestry, as well as ports, as well as this, that, and the other. And as long as some people build robots that create real value, we can make sure the robots work.

But in a lot of these use cases, the real value comes from what they're told to do.

So if you think about robots in a port, for example, driving a yard truck with a trailer, is not special. There's something like five to 15, yard truck autonomy companies that are doing just that.

There's an incredible diversity of yard truck manufacturers. So if you're vertical, you have to figure out which ones you're gonna integrate with and which ones you don't. The really important thing that yard trucks do is they get commanded around by some system that manages the yard, a yard management system. And that system might be configured so that maybe there's five lanes going from north to south, and on Wednesday morning a big ship is coming in, so four of those lanes are going away from the ship while one lane is going towards the ship. And an hour later, you reverse the traffic the other way around.

and that way you can get the ship empty two hours earlier, which means you can have five more ships visit per year and blah, blah, blah.

That's the stuff where you're actually gonna make a billion dollars with robots. If you're reinventing our part of the stack,

[00:42:11] Audrow Nash: Oh,

[00:42:12] Stefan Seltz-Axmacher: you're missing

[00:42:13] Audrow Nash: you mean.

[00:42:14] Stefan Seltz-Axmacher: because a lot of teams doing yard trucks are spending all their effort building what we build and not like intelligent yard management based on machine learning statistics and blah, blah, blah. And that's really the cool part of being a yard truck company.

[00:42:30] Audrow Nash: Yeah. So yeah, I agree. So if you do the super big tasks made by these super big consumers, you can be rewarded handsomely and be a billion dollar Robotics company,

[00:42:45] Stefan Seltz-Axmacher: Yep. And if you're, and if you build and if your autonomy stack sucks, you don't get any of it.

[00:42:50] Audrow Nash: Yeah. And you have to build that. And that's a heavy investment initially. So you guys are helping enable that, which is quite cool.

Now, I would love to hear what your ambitions for this company are, and I'd love to hear both of your perspectives on this: like, where do you want this to go? Because it seems like a slightly new business model to me, which is quite cool. Maybe we start with Ilia: where are we going in the long term?

Like how, do you see you guys growing?

Ambitions for Polymath Robotics

[00:43:20] Ilia Baranov: Yeah. From, the technical standpoint, I think the interesting part of our business is, again, drawing that operating system kind of comparison. you buy Windows to then run Excel or games or whatever, right? I think because we're really focusing on A software and B only motion software. and the edges of what motion means will blur over time. concrete example really clearly, really quickly that bulldozer will control, the bulldozer will control the blade somewhat, but if a builder says, here's a site plan, dig the site plan autonomously, we'll probably partner with somebody who specializes in translating site plans to blade control to whatever, and we'll drive it and work with them. So I think the, technical interesting part is we're building a foundation to then have. a web store. If, you talk about what, universal robots did, I really like what they did for arms, right? Is here's the arm, here's the foundation, but like here's 50 different companies that will take our arm and then make Apple picking or make CNC machine tending or whatever.

It's the same thing is, take our autonomy core and then build on top of it and here's a bunch of vendors that will sell you pre-baked modules to do agriculture with autonomy, to do mining with Polymath autonomy.

We can do those pieces, but I think we'll never outcompete thousands of people using our

[00:44:44] Stefan Seltz-Axmacher: And I, and we don't know how to run a mine. We don't know how to run a forestry operation. We don't know how to

[00:44:48] Audrow Nash: saw this with your trucking company. I

[00:44:50] Stefan Seltz-Axmacher: Yeah. I had to learn how to run a trucking company, which turns out is not simple.

[00:44:56] Audrow Nash: No, everything's hard when you get into it, I think. Yeah. Okay. So that, I really like that point. It's interesting to me how, it seems like I. So many of the companies that I've talked to on this podcast, it's like the vision eventually scales to being something like an app store with this kind of thing where you allow the functionality, you provide the core platform, and then people can build off.

And that sounds like where this may go. and it's just, to me, it feels like an exciting time in Robotics that maybe this will actually occur. 'cause it seems like a lot more people are working on it recently, and have good go-to market strategies to get there for this kind of thing. At least from my perspective.

[00:45:39] Stefan Seltz-Axmacher: Yeah, absolutely.

[00:45:41] Audrow Nash: and Stefan, what do you hope,

like where are you, what are your ambitions with this?

[00:45:46] Stefan Seltz-Axmacher: Yeah, so my last company, Starsky, was my first ever startup, and it was really hard. And at the time I didn't know how much it was hard because startups are hard and how much it was hard because I was building robots and running a trucking company and writing regulations and blah, blah, blah, blah,

[00:46:04] Audrow Nash: You were five CEOs at once? Yeah, for sure.

[00:46:07] Stefan Seltz-Axmacher: and then after that I went and hung out with some SaaS companies and holy crap.

Like, assuming, Audrow, that you've never worked at just a normal SaaS company, a DocuSign 5.0 SaaS company. The way that you start a SaaS company like that is you have a producty, salesy CEO. You have a generalist, 50th percentile engineer who can talk to people but can mostly write glue code. That product guy figures out what you should be building. The glue code engineer finds a bunch of microservices and ties them together to build it. And theoretically in five years, they're each worth a couple hundred million dollars. Whereas Robotics looks like the world of PCs back in the seventies, where, screw Steve Jobs, you need Steve Wozniak, who can code in binary and design his own circuit boards and build GUIs and make them useful. And without an army of Steve Wozniaks, you have no product.

Like the Robotics industry will never get beyond that phase if everyone has to keep on building point-to-point navigation.

If Polymath can enable the market to just focus on things that work and it's still not gonna be as easy as DocuSign 5.0. But if Polymath makes it so that building a robot is as easy as making a website in 1998,

then like we're gonna start to actually have robots.

We're gonna actually start to have cheaper, better food. We're gonna have, cheaper materials. We're gonna have cheaper cost of living. We're gonna enable a whole lot of the world to live as nice as we do in, in the, west. And I think

that like along the way, if we enable that, we'll make a whole bunch of money and build a cool thing.

So that's what I think, that's what I'm hoping

[00:47:57] Audrow Nash: Good for you.

And, exciting problems. 'cause to me what it seems like it's not, it's a lot more exciting than this, but it sounds a little bit like a lifestyle startup in some way. And what I mean by

Running a different kind of business with fewer hats

[00:48:13] Stefan Seltz-Axmacher: What do you mean?

[00:48:15] Audrow Nash: so what I mean is that you get to leverage the power of the, SaaS model in some sense where you don't have to be, you're, only one CEO at this time.

You're not five.

[00:48:26] Stefan Seltz-Axmacher: yeah,

[00:48:26] Audrow Nash: And so it, it seems like it's a bit easier. You guys are profitable early, so you're not like lighting the ground on fire behind you and running like you would be if you're taking investment. maybe lifestyle isn't the way to say it, but like it's more on your terms in a

[00:48:43] Stefan Seltz-Axmacher: Yeah. we get to be,

[00:48:45] Ilia Baranov: You accidentally slandered Stefan there by using

[00:48:49] Stefan Seltz-Axmacher: I, I,

[00:48:50] Ilia Baranov: lifestyle. No kidding, kidding, Purely kidding. VCs will use lifestyle business as like something

we shouldn't invest in, which is not

[00:49:00] Stefan Seltz-Axmacher: like, why would you ever want to just be rich and have a nice life that's terrible.

[00:49:05] Audrow Nash: You,

[00:49:06] Ilia Baranov: No, you have to make them rich first. That's your goal. Yeah.

exactly.

[00:49:10] Audrow Nash: real good, you do our 10, 10 CEOs. Yeah,

[00:49:13] Stefan Seltz-Axmacher: What's funny is, compared to a normal full stack vertical Robotics company, this feels incredibly easy. Compared to my friends in SaaS, this is still incredibly

[00:49:23] Audrow Nash: hard.

[00:49:25] Stefan Seltz-Axmacher: And I think, even in 10 years or so, for some of the modules that we sell as individual products as a part of our stack, there will be billion dollar companies that just

do things like SLAM or just do really clever path planning for diff drive, vehicles.

there will be industries like that, but right now it's rather radical to say, oh, we'll just take the autonomy core off your hands

[00:49:55] Audrow Nash: You mean make it so they don't have to do that?

[00:49:58] Stefan Seltz-Axmacher: Yeah.

[00:49:58] Audrow Nash: you can

[00:49:59] Stefan Seltz-Axmacher: Right now that, seems radically easy. In 10 years it will seem radically hard.

[00:50:04] Audrow Nash: Yeah. It is amazing how that seems. Like, building a website now is still difficult if you wanna build fully featured SaaS stuff. A lot of my side projects are SaaS stuff, and it's still a lot of work, but it's so much easier than Robotics projects, which is just bonkers.

Stefan's bankrun t-shirt campaign

[00:50:22] Stefan Seltz-Axmacher: Lemme tell you a version of that. I had an e-commerce side project a couple months ago.

I've never actually publicly, in any setting, admitted it was me. So this is like an Audrow Nash exclusive.

[00:50:35] Audrow Nash: Cool. You heard it here.

[00:50:36] Stefan Seltz-Axmacher: So yeah, so early last year, I got wind on a, WhatsApp group for YC founders that, something weird was going on at SVB. and like a panic was happening and

[00:50:49] Audrow Nash: That was Silicon Valley Bank, that big one that collapsed a while ago.

[00:50:52] Stefan Seltz-Axmacher: yep. And I panicked early, like I panicked on that Wednesday night, which meant we got our money out like relatively soon.

But while I panicked, I had, like, a dark joke that the Silicon Valley Bank run was kinda like a marathon run. So I went on Fiverr and I basically paid someone on Fiverr, I think 20 bucks, to make a t-shirt design for what a bank run marathon t-shirt would look like.

so 25 bucks,

I bought the domain name, I think SVB run 23, which sounded like a good marathon run.

so that was 10, $15. And then I set up a store on Teespring, and I don't know if you're familiar with Teespring,

[00:51:34] Audrow Nash: Yeah, Open Robotics uses it.

[00:51:37] Stefan Seltz-Axmacher: Oh, cool. So for the listeners who don't know Teespring: Teespring makes it easy to do, like, a screen printed shirt business, where you can just upload a shirt design and they will print the shirts and ship them and do all of that.

So I paid for the premium version of Teespring for I think 20 bucks.

And for $75, I had an e-commerce site with a line of nine different takes on SVB bank run marathon shirts, which I posted in a handful of links. And I suddenly had an e-commerce site that got me, I was a top 1% Teespring creator that month.

and if that was a real business, that could be a thing that I did. And, it has nothing to do with robotics in terms of how hard it is. It is a

[00:52:23] Ilia Baranov: Can I add a little bit of late breaking news? This will be a surprise to Stefan too. In the full circle of e-commerce, you were also immediately ripped off by a

[00:52:32] Stefan Seltz-Axmacher: yeah.

[00:52:34] Audrow Nash: Oh,

[00:52:34] Ilia Baranov: This is your exact design,

[00:52:36] Audrow Nash: no way.

[00:52:37] Ilia Baranov: seller

[00:52:37] Stefan Seltz-Axmacher: that's on Etsy.

[00:52:40] Ilia Baranov: Etsy.

[00:52:41] Stefan Seltz-Axmacher: What a schmuck. There's also another person who ripped me off on Teespring, and Teespring refused to do anything about it.

[00:52:50] Ilia Baranov: yeah.

[00:52:52] Stefan Seltz-Axmacher: But, I think

[00:52:52] Ilia Baranov: of e-Commerce Life.

[00:52:54] Stefan Seltz-Axmacher: but like you can literally set up an e-commerce store without like even opening up Photoshop, to make your own products.

Like Robotics is not gonna look like that for at least 20 to 30 years, but that's. If, robots are gonna be a real part of our life, if robots are gonna bring us the promise that I think everyone in this conversation wants them to bring, they need to get a lot easier. And

that's what we're hoping to do.

[00:53:16] Audrow Nash: Hell yeah. Okay. I really like that. what's the, so you guys are 10 people now.

[00:53:24] Stefan Seltz-Axmacher: Damn Etsy store.

[00:53:26] Audrow Nash: I know, that's so fricking funny. You made it; you got ripped off. That's great.

[00:53:32] Ilia Baranov: I think we need to open an Etsy store for Polymath shirts. By the way, like the one that Stefan's wearing right

now. For the listeners, he's wearing a shirt that just says Total ROS Bag on it, because we had an internal joke at Polymath that, oh, anytime anybody would do something bad, it'd be like, ah, you ROS bag. So

[00:53:50] Stefan Seltz-Axmacher: It doesn't work as well with mcap.

[00:53:52] Ilia Baranov: Yeah. We also have MCAP hats too. Yeah. Anyway,

[00:53:59] Audrow Nash: Hell yeah. Let's see, I lost my train of thought with this. That's very funny. All the swag and everything. Oh, so what's the growth plan for this? So you guys are 10 people now, you're,

that line is going up, so it sounds like you're profitable or becoming like you're,

you'll

[00:54:20] Stefan Seltz-Axmacher: we're not profitable yet, but it seems like we're, it seems like

[00:54:22] Audrow Nash: on the trajectory, two years and starting to move in the right direction.

That's wonderful.

How do you imagine growing? Do you need many more people for the work you're doing, or will there be, like, system integrators that help push it out, and maybe you can focus on a few promising,

How Polymath plans to grow

[00:54:40] Stefan Seltz-Axmacher: Yeah.

[00:54:41] Audrow Nash: areas or like what are you thinking for how to grow in the future?

[00:54:45] Stefan Seltz-Axmacher: So it's funny, like right now more of our problems are like business problems than they are, Robotics

[00:54:50] Audrow Nash: Technical

[00:54:51] Stefan Seltz-Axmacher: is also weird. Like I didn't think that

could happen in Robotics. So, essentially, we might go raise again or we might just grow to profitability, depending on where the VC market's at, which could be a whole other conversation.

but if we suddenly had 10 more people, probably half of them would be business people doing stuff like account management and like splitting functions into other things.

[00:55:17] Audrow Nash: It's interesting to me 'cause it feels like with a lot of hardware companies, say like medical device companies, as the company moves across time, the staffing changes pretty significantly. So you have a lot of researchers and R&D engineers at the beginning, and then once they make something, it's, okay, now you need way less of them, and now you need like a big legal team or certification or, I don't know, whatever.

Then marketing and who knows. but it seems like you guys are going through that as well where, okay, we had a bunch of engineers, eight of the 10 were engineers. Now you probably need another 10 business people.

[00:55:50] Stefan Seltz-Axmacher: another five.

[00:55:52] Audrow Nash: Okay, something like

[00:55:54] Ilia Baranov: and I think the key metric that I convinced Stefan over time to start tracking is our robot to employee ratio.

I really my

[00:56:02] Audrow Nash: it right at

[00:56:02] Ilia Baranov: is to get to 10 to a hundred

Yeah. Robots to employees.

[00:56:08] Stefan Seltz-Axmacher: and also when

Ilia says that, Ilia actually means different types of robots, not even specific robots.

[00:56:14] Ilia Baranov: not even copies.

Yeah. 'cause copies to me are like, eh, whatever. It's

the exact same

[00:56:19] Audrow Nash: they're free.

You already have it working,

[00:56:21] Ilia Baranov: yeah, exactly.

[00:56:22] Audrow Nash: but I guess those 1% cases will come up more, for this kind of thing. So not totally free.

[00:56:29] Ilia Baranov: No, yeah. I'm, big, a

[00:56:32] Audrow Nash: Facetious. Yeah, for sure.

[00:56:35] Ilia Baranov: But yeah, no, I think as we grow, we will still need quite a significant engineering presence, because one of the reasons we sell ourselves as SaaS is that we're continuously improving every piece of our stack. Especially, actually, we can get into this in technical detail in a bit, but the machine learning part of what we do is actually a very small, thin sugar sprinkling on top of the cake

in a lot of ways where it's, a key differentiator in some ways, but in other ways actually, there's so much fundamental to do at the underlying layer that we gotta get that large and expanded first.

How Polymath uses machine learning

[00:57:14] Audrow Nash: What's the, what is the

[00:57:15] Stefan Seltz-Axmacher: I think to,

[00:57:16] Audrow Nash: of your

[00:57:17] Stefan Seltz-Axmacher: yeah.

[00:57:18] Audrow Nash: about that at all. I, don't know what

it

[00:57:20] Ilia Baranov: I'll give you a concrete example. So we have an application in forestry, and we have a LIDAR system. And you can imagine in a dense forest, you're not getting very much GPS. So there's a lot of complexity on localization, orientation and heading management, obstacle avoidance, those kinds of things. But from a LIDAR point cloud, you can't really tell if the thing you're seeing is a bush, which you're supposed to just drive right over 'cause you're a multi ton vehicle, or a boulder, which, if you try to drive over, will damage your drive train,

right? And that sort of stuff is where ML will come to play a role, with image data or LIDAR data, which classical algorithms can't handle quite as well. You can think of some heuristics maybe, but honestly, like,

any heuristic you can think of, a bush and a boulder will look similar at some point from some angle. Whereas with camera data, humans can tell 'em apart pretty easily.

[00:58:12] Audrow Nash: So the.

[00:58:12] Ilia Baranov: That's a case where, even though all the underlying pieces to navigate and path plan and obstacle avoid are there, the extra guidance allows us to not make suboptimal decisions of avoiding a bush when we can just drive right through it.

[00:58:26] Audrow Nash: Gotcha. Yeah. So these are in your modular framework. These are components you can add that might improve applications. Maybe you say, okay, I have this path planning, and this path planning effectively has a plugin or something that says, is bush or not, is bush or rock, or something like this. And you can have machine learning determine which one it is.

So that's interesting that you guys are getting into that.
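The plugin idea Audrow sketches here could look something like the following minimal Python sketch. All the names (`PlannerPlugin`, `toy_model`, the `hardness` feature) are hypothetical illustrations, not Polymath's actual API; a real classifier would be a trained model over camera or LIDAR features, not the toy threshold used here.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, Dict

class Obstacle(Enum):
    BUSH = auto()     # traversable: a multi-ton vehicle can drive right over it
    BOULDER = auto()  # must be avoided: driving over it damages the drive train

@dataclass
class PlannerPlugin:
    """The path planner asks the plugin about each detected obstacle;
    any classifier (ML or otherwise) can sit behind the callable."""
    classify: Callable[[Dict[str, float]], Obstacle]

    def should_avoid(self, obstacle: Dict[str, float]) -> bool:
        # Reroute only around obstacles the classifier deems non-traversable.
        return self.classify(obstacle) is Obstacle.BOULDER

# Stand-in for a learned model: a trivial threshold on a made-up
# "hardness" feature, purely illustrative.
def toy_model(obstacle: Dict[str, float]) -> Obstacle:
    return Obstacle.BOULDER if obstacle["hardness"] > 0.5 else Obstacle.BUSH

plugin = PlannerPlugin(classify=toy_model)
print(plugin.should_avoid({"hardness": 0.9}))  # True: treat as boulder, plan around it
print(plugin.should_avoid({"hardness": 0.1}))  # False: bush, drive right through
```

The design point is that the planner itself stays deterministic; only the classification behind the callable is swapped out for a learned model.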

[00:58:52] Stefan Seltz-Axmacher: But a relatively weird opinion that we have is, in the world of Robotics and the market of Robotics and the products of Robotics that you can buy, ML is a lot more commodified than really good Robotics software that is reliable and turns on every time and spits out metrics for why it doesn't work. Or deployment infrastructure, so that you don't have to send people to West Nowheresville to go fix a robot in the middle of the night with eight hours' notice. Or controls engineering that can smoothly drive vehicles of various different sizes. And I think there's this weird market dynamic going on where the VCs are really stoked about machine learning.

'cause they just got to see what like LLMs can do and like a way that they can directly interact with and they're pretty amazed. Whereas I don't know, I'm pretty stoked by robots that turn on every time. And I know how few large Robotics organizations that is true for, so we're focusing a lot more on those kind of fundamentals.

[01:00:07] Audrow Nash: Yeah, no, I think that's smart. I don't know, it's very interesting to me, because there are some things that machine learning seems well suited for, but there are a lot of things where we think it's good, and then it isn't when you try it. And the problem with Robotics now really seems to be that a lot of exceptions pop up when you bring robots into a less structured environment.

And it's very hard to imagine a lot of machine learning doing very well with a lot of those. I don't know, I'm doing some thorny coding for work, and it's complex. There's all sorts of hidden stuff that's hard to understand, and I'm asking LLMs for advice on how to do this kind of thing, and

all their advice is garbage for this kind of thing. I've actually gone from looking at these to just Googling, because it will look convincing and it looks like it's gonna work, and then I try it and it just doesn't. And I can imagine that this pops up a lot in Robotics applications.

[01:01:24] Stefan Seltz-Axmacher: so the exceptions thing is actually part of why off highway or like large vehicles are really neat.

So basically all of these different industries, from like ag to mining to whatever suffer labor shortages they like, they

Big and expensive vehicles are easier to automate?

[01:01:38] Audrow Nash: everything is suffering labor

[01:01:40] Stefan Seltz-Axmacher: They can't spend enough money

to hire enough people to do the work.

So as a result, they need, a smaller number of people to do the same amount of work, which means that machines get bigger and bigger. when machines

[01:01:53] Audrow Nash: It's like shipping containers that are just absurdly large.

[01:01:57] Stefan Seltz-Axmacher: yep.

[01:01:58] Audrow Nash: Yeah. So they make one person do what 20 used to do

[01:02:02] Stefan Seltz-Axmacher: so there's

[01:02:02] Audrow Nash: 20 times larger machine, this kind of thing. Okay.

[01:02:06] Stefan Seltz-Axmacher: And what's cool about that? First of all, the machine does

more valuable work. You, can spend more money on sensors for it.

So that's useful for us. but more than that, big machines are really dangerous, so they drive more cautiously. a 500 ton dump truck.

[01:02:24] Audrow Nash: interesting.

[01:02:25] Stefan Seltz-Axmacher: yep. Basically the most valuable machines in the world to automate are among the easiest

[01:02:33] Audrow Nash: Aha. That's wonderful. Yeah. 'cause Oh,

[01:02:35] Ilia Baranov: A giant mining truck or a molten metal transporter is way easier than an Astro in your house, like orders of

[01:02:44] Stefan Seltz-Axmacher: order. Yeah.

And like the 500 ton dump truck, I think Caterpillar gets a million dollars a year for each one of those.

[01:02:53] Ilia Baranov: recurring,

[01:02:53] Stefan Seltz-Axmacher: recurring SaaS.

[01:02:55] Audrow Nash: Wow. That's so crazy. What a funny thing that just, it makes complete sense, but I hadn't thought about it. Where the ones that are the most valuable and then also like you could strap expensive sensors on them, and relative to the total cost of the thing, it's

[01:03:09] Ilia Baranov: And they're big, so there's a lot of space for them. There's a lot of power, there's a lot of storage

for compute. Like all of your problems get easier at larger scales. There's also an interesting, to harken back a little bit to where we see things going as our type of approaches are hopefully us get more and more successful.

You don't need these large vehicles anymore.

'Cause once you solve the labor shortage, you can go back to small, which kind of dovetails nicely with sensors getting smaller and compute getting more efficient over time. But there's a funny kind of story I heard from the South African mining industry. They're very active in mining.

They have good regulatory body for autonomy, but they have a black market for tires for these large, vehicles

where getting a new tire, if one blows, is like a four month process.

And in that

time

[01:04:02] Stefan Seltz-Axmacher: And

[01:04:03] Ilia Baranov: of dollars.

[01:04:04] Stefan Seltz-Axmacher: that truck might be responsible for moving $10 million in material a day. So if it's outta commission for 90

days, yeah. We've lost a lot of money.

[01:04:13] Ilia Baranov: Enormous money. And so the incentives are so high that they have people running around stealing tires from each other.

[01:04:19] Audrow Nash: Ah,

[01:04:20] Ilia Baranov: like at that point, you'll pay a group a million dollars to get you a tire, no problem.

And these vehicles are so large that they get shipped in pieces to the site,

assembled

[01:04:33] Audrow Nash: then assembled by

[01:04:34] Ilia Baranov: their entire life on site,

[01:04:35] Audrow Nash: Yeah.

[01:04:37] Ilia Baranov: and then left there.

When the site is done, the vehicles are

[01:04:42] Audrow Nash: They're not even worth

[01:04:43] Ilia Baranov: You don't have to

[01:04:44] Stefan Seltz-Axmacher: Yeah.

[01:04:45] Ilia Baranov: Yet you don't have to do any of that if you have a smaller vehicle.

You just put 'em on the back of a flatbed truck, and instead of one giant one, you have 10 medium or 30 small size.

If one of them breaks, you only lose a 30th of your effectiveness.

[01:04:59] Stefan Seltz-Axmacher: and when they get smaller, they get easier to electrify too.

[01:05:03] Audrow Nash: Oh, that's a cool point.

Okay. Which that's desirable because it's just clean, easy to maintain. You don't need big, like diesel, high pressure, hydraulic,

[01:05:15] Stefan Seltz-Axmacher: If you like the idea of there sometimes being snow, more electric vehicles seem nice.

[01:05:24] Audrow Nash: is that? That doesn't make.

[01:05:26] Stefan Seltz-Axmacher: Sorry.

if you like the idea of not

[01:05:28] Ilia Baranov: change

[01:05:29] Audrow Nash: facetious. Ah,

[01:05:31] Stefan Seltz-Axmacher: Sorry.

[01:05:32] Audrow Nash: it went right over my head. Yeah. Yeah. Still don't get it.

[01:05:35] Stefan Seltz-Axmacher: wasn't my best climate change joke?

[01:05:39] Audrow Nash: One thing that's interesting to me: Electric Sheep, the first interview that I had on this podcast, they also had a similar thing. The constraint was exactly as you've said, where they were trying to do lawn mowing. And companies have fewer people, so they get bigger mowers, and bigger mowers are more dangerous, but they probably also go fast 'cause they gotta mow quickly.

And so what they said is, we can remove that assumption and have a whole bunch of small mowers. So that seems like where it goes eventually, but you guys are preparing for the near term, where you can automate these single large vehicles and get a lot of bang for the buck there. And then, as you automate, I suppose we can remove that restriction as we build more robots and things like this.

Yeah.

So basically, we need more deployed, and the new generation can start to be smaller ones, but we get to start on the big ones, and it's easier, which is just glorious. Like, what a great

[01:06:41] Stefan Seltz-Axmacher: And you need to deploy on the equipment people have today. Like, I've heard of autonomy programs that, for some use case, have gone to customers we've talked to and said, all right, for us to save you $500,000 of labor cost per year, or $750,000 of labor cost per year, step one is you need to buy $15 million of new equipment.

'cause we only can work with this one specific type of vehicle. And yeah, weirdly enough, those autonomy deployments never happened.

So we have to be able to meet the market where it is today, on the vehicles that it's using today, and then, as the needs around having physically present labor go away, the market can move towards smaller, more electric, more nimble vehicles.

[01:07:26] Audrow Nash: I like that a lot. Yeah. And then you don't have, I guess then you can remove the black market for these tires and things like this. 'cause smaller ones are probably easier to procure. That's great. Okay. What a cool thing. What, so we touched a little on AI and machine learning.

What do you guys generally like, just because it's so topical, what do you guys generally think about all the generative artificial intelligence stuff?

And also, there's a lot of other exciting work where it's like learning from demonstration. There've been some cool demos lately. and just, and even like end-to-end Robotics learning stuff where they do a lot in simulation and then learn the final things in the real world. what do you guys think there?

Maybe start with Ilia or Ilia. Do you wanna think for,

Thoughts on AI and Machine Learning

[01:08:18] Ilia Baranov: I think,

[01:08:21] Audrow Nash: thoughtful for.

[01:08:24] Ilia Baranov: I think there's a lot of usefulness there, as if I put it on purpose, eh? No, I think there's a lot of usefulness there for high variability tasks, like working in the home again, or high degrees of freedom manipulators, where you can't, by inspection or algorithmically, test every possible outcome.

So image generation, for example: why it's so interesting in machine learning, or generative networks for images in general, is that the image space is so enormous that there's no kind of algorithmic approach that can generate enough data. So you have to have these probabilistic, machine learning based approaches to actually get anything reasonable.

There's no good way to do it otherwise, 'cause the data content of an image that you're seeing right now is incredible, right? And so for those kinds of use cases, especially in Robotics, I think it's very promising, because there we don't have enough compute or algorithms to solve it any other reasonable way today. However, in our space, where it's very much the opposite, very well-defined kinematic models in somewhat controlled spaces, I find the ML approaches are a little bit overkill, and they tend to overfit to their space. Kind of the answer I give people, when they say you should use photorealistic simulation and test all your stuff in sim: we do use simulation, but we put almost no effort into making it photorealistic.

[01:09:57] Audrow Nash: You don't need it.

[01:09:58] Ilia Baranov: Yeah. A, we don't need it, but B, the gap between us and somebody like Tesla is several tens of billions of dollars, and they have the resources and are at that bleeding edge where every incremental dollar makes a difference. Whereas us, we don't need a billion dollars to make a 90% difference in what we're doing. We only need a fraction of that.

And so it just wouldn't be a wise investment for us, at this time, in our use cases, to put the bulk of our effort into ML.

[01:10:30] Stefan Seltz-Axmacher: I also think, with the types of vehicles that we're driving, they're supposed to do very discrete things. They're not supposed to do fuzzy things.

They might be like told to do what they're supposed to do in a fuzzy way, right? Like it might be that someone says, Hey, yard truck, can you bring this over to crane 72 right now?

Or they might say, yard truck 14, bring this to crane 72. Both of those two things should lead to the same discrete action, but the way that we drive that yard truck through that port, the speeds that we drive at, the way that we interact, that should be fixed. And if it's not fixed, it's terrifying.

[01:11:12] Audrow Nash: Oh yeah. Totally.

[01:11:13] Stefan Seltz-Axmacher: You don't wanna wonder like, why is that bulldozer doing that thing it's doing towards me?

Like, it needs to be like, oh, it goes exactly there and if it's going somewhere else, something's really bad.

[01:11:27] Audrow Nash: Yeah.

[01:11:28] Stefan Seltz-Axmacher: So I think for our application, with the broader thing people are currently calling AI, what I'm really excited about is the UI, the interaction between our highly deterministic systems and fairly non-deterministic people.

So whether that's voice commands for robots to do things, or whether that is using an LLM to scan an organization's manual to figure out how their facility operates, and to try to turn that into a list of booleans that we could have a person read through and see if they make

[01:12:06] Audrow Nash: starting point. Yeah.

[01:12:08] Ilia Baranov: Yeah.

[01:12:08] Stefan Seltz-Axmacher: That second thing I just said would probably save 20 hours of stupid meetings that no one wants to be in for implementing a hundred robots in a multi-billion dollar facility. But I don't think us figuring out a novel path, based on the vibes of the transformer model, across that facility, that's probably not gonna be the right strategy anytime soon.

[01:12:35] Audrow Nash: What a thing. Yeah, the thing that's really interesting to me with it is, a lot of times, these big companies, they have this huge handbook, and they tell you how to do something, and they want it done in that way. And if you have an LLM go and make an initial thing, you still have to pay the expert to go and confirm its results for these kinds of things.

I've been thinking about this more broadly, and I feel like programmers, copywriters, all these different things that we all thought were gonna be automated by ChatGPT, I think they're safe for a while, because you want that person to validate it. Like, you need that check on it to make sure it's reliable and does what you want.

And then with programming especially, if you wanna build off of what you've done, now you have to give it back to the thing. And if it says it can't do it,

[01:13:35] Stefan Seltz-Axmacher: Do it

[01:13:37] Audrow Nash: and, but it's still doing the wrong thing. It's just interesting 'cause a programmer could probably figure it out. But an LLM may just get stuck like it is on the problem that I'm trying

[01:13:45] Stefan Seltz-Axmacher: though. what's useful for us is, like the guidebooks for a lot of our machines are written, they're written like booleans for idiots.

'cause oftentimes the managers of these facilities don't have the world's highest level of respect for the vehicle operators. like I've read a manual for a, particular facility. Where, there's vehicles that lift up 30,000 pounds and then drive around with 30,000 pounds and then move it. and there's a list of maybe like 50 rules of operation for when you're driving these vehicles. And one of them was like, if you have lifted a load and you're in reverse and you see someone standing within a hundred meters behind you, turn off the engine, get on the radio, ask for help. If you're driving from here to there and this gate is open, turn off the engine, get on the radio call for help. and there are rules written like that like an LLM could make quick work of. and that we could deploy into a use case really easily. But there's also the, rules that you'd only learn by sitting down with all of the management staff of the site for three weeks in, in 14 different meetings that are each scheduled for an hour and a half, but none of them go less than two and a half hours.

Where in the 15th meeting someone happens to say, oh, yeah, you can't drive in that corner 'cause that's the corner where this thing drains and all these people died once. So you gotta go around that. And

[01:15:11] Audrow Nash: these

[01:15:12] Stefan Seltz-Axmacher: if that's not written down somewhere, the LLM won't learn it.

[01:15:15] Audrow Nash: Yeah, that context. Yeah, it's interesting, 'cause there's a lot that's implicit, I think, in how people do things. And it may not be in a manual, it just has to be learned, for this kind of thing. Interesting. So, has it changed? Oh, go ahead, Ilia.

[01:15:31] Stefan Seltz-Axmacher: go ahead.

[01:15:33] Ilia Baranov: yeah, there's, a bigger meta point here on machine learning in general and Robotics and our particular view on it.

I was careful earlier to say that machine learning is this kind of sugary sprinkle on top, to really solve these corner use cases, because in a lot of our vehicles, we actually want to safety rate our system.

Making safety certified systems

[01:15:52] Audrow Nash: Oh,

[01:15:52] Ilia Baranov: And again, safety rating is something where, if you want to do an actual formal certification, it's not impossible with machine learning, but the bar is so much higher than with very deterministic systems, which we can prove out a hundred percent, exactly what they will do in every scenario. And so the lower down the stack you get, towards the underlying safety layer, the more and more rigid we become, and the more and more rules driven we become. So the machine learning side at the top can, like, maybe differentiate between a tree and a boulder, but it's never gonna command the vehicles directly.

[01:16:27] Audrow Nash: Yeah. it has to

[01:16:29] Ilia Baranov: because we need to deploy these things in the real world.
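
The layering Ilia describes can be sketched as a stubbed learned classifier that only informs labels, while a small deterministic layer gates all motion commands. Names and numbers are illustrative, not the actual system:

```python
def ml_classify(obstacle_size_m: float) -> str:
    # Stand-in for a learned model: it only labels, it never commands.
    return "tree" if obstacle_size_m > 1.0 else "boulder"

def safety_gate(requested_speed: float, obstacle_distance_m: float,
                min_stop_distance_m: float = 5.0) -> float:
    """Deterministic safety layer: behavior is provable in every scenario,
    regardless of what the ML layer above it says."""
    if obstacle_distance_m <= min_stop_distance_m:
        return 0.0  # hard stop, no ML involved
    return requested_speed
```

The point is that `safety_gate` is simple enough to verify exhaustively, which is what makes formal certification tractable.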

[01:16:31] Audrow Nash: Wow, I love that. I can't believe I didn't think about talking about all of the safety certification stuff, but that makes a lot of sense. So these companies, they come and they try to build their own, and they spend, you said, like $50 million or some bonkers amount of money trying to build their autonomy stack, whereas you guys are making a business out of it.

And then what you can do is, you can invest as you need, or you can invest so that you actually safety certify things, so they can just drop in a safety certified system. That's really amazing.

[01:17:10] Stefan Seltz-Axmacher: Yep.

[01:17:11] Audrow Nash: that's probably a huge competitive advantage and probably a very good moat, I would imagine for you guys as a

[01:17:17] Stefan Seltz-Axmacher: Yep.

[01:17:18] Ilia Baranov: Yep.

[01:17:18] Stefan Seltz-Axmacher: Safety's really hard. It's also hard to build a culture where people are smart about safety, where people are thinking about safety the right way. And essentially, because we use the same safety core on all of our machines, we are basically constantly getting learnings.

We're constantly getting hours of testing to prove that the system actually works.

[01:17:40] Audrow Nash: Yeah. So part of it is putting the case together and because you have deployed on systems and your systems have monitoring, and then you can log, so that you can prove that this has happened for some amount of time. So you can make these claims.

That's nice. And so you can solve cases where they don't require the certification, but you can use those to bootstrap justification for additional safety certifications.

That's clever. Hell yeah.

[01:18:10] Stefan Seltz-Axmacher: yeah.

[01:18:12] Ilia Baranov: Yeah.

And that's from a business side, that's also a big

competitive moat for us,

[01:18:16] Audrow Nash: that's what I was

[01:18:17] Ilia Baranov: over time, because we really, yeah, we really try to keep that autonomy core the same, and the same for the safety subsystem. So we'll build up hours of operation that are directly applicable across units faster, because we deploy it across a much more varied fleet. And of course, as a roboticist, you can imagine, the more varied use cases you have, the better quality of actual proof you have of safety. It's not just the same test a hundred times, it's a hundred different tests.

[01:18:45] Audrow Nash: That's super cool. Okay. What a thing. I really like that. Do you have safety certifications yet? Or which ones do you have, and what are you working towards?

[01:18:58] Ilia Baranov: Yeah, we're working. So there's two different threads we're working towards. One is, internally, we wanna certify to UL 4600, which is a fairly new standard.

[01:19:07] Audrow Nash: Yeah, I'm not familiar with

[01:19:08] Ilia Baranov: in particular we're targeting that one. Yeah. So,

[01:19:10] Stefan Seltz-Axmacher: so

[01:19:11] Ilia Baranov: past the, the,

ISO 26262

[01:19:15] Audrow Nash: Oh, okay. I know that one. Okay.

[01:19:16] Stefan Seltz-Axmacher: yep.

[01:19:17] Ilia Baranov: So what's nice about it is, ISO 26262, if you go through the certification, at some point it assumes human takeover, whereas UL 4600 does not.

And that's one key difference. And UL 4600 is more of an umbrella standard of how to design a safe autonomous system, without being prescriptive on, like, specifically follow this pattern. But it itself suggests things like IEC 61508 for electrical subsystems and those kind of things, which build up proof to get to a UL 4600 certification standard. So again, it allows us to be more flexible. And if we want to go that direction, I can talk about how I'm really excited to turn safety certification into another CICD code problem.

[01:20:03] Audrow Nash: I am, I would love to talk about that. No, I would love that. That sounds wonderful.

[01:20:07] Ilia Baranov: Okay.

[01:20:08] Audrow Nash: I love safety certification stuff. I, host spaces on X. I actually have one tonight. and you guys are welcome to come anytime, but they're like an hour and a half every week or something like this. We had one, before Christmas sometime, and it was an hour and a half of talking about safety certifications and after it's like I found my people, this is just wonderful, Okay. Tell me it's, I would love to hear how you're thinking of it. Like a continuous integration system, testing this kind of thing.

[01:20:41] Ilia Baranov: yeah. Yeah. So, having gone through this kind of process before. A lot of it is a very manual, like engineer sits down, writes a whole bunch of failure modes, get that reviewed, then they get testing, they get, that reviewed. And then maybe a small piece of it is, there's some requirement on code test coverage and maybe that piece is part of the CICD, but I want to actually wrap the entire everything I've described as part of a CICD so that every build we, we run for our, specifically our safety system.

So we try to keep the scope very small and very simple.

[01:21:14] Audrow Nash: Of course.

[01:21:15] Ilia Baranov: Not only does unit testing, but also auto generates a lot of the okay to be in compliance with this part of U

[01:21:22] Audrow Nash: Oh, I love it.

[01:21:22] Ilia Baranov: you have this certification that refers to this thing. Here's this particular vehicle, which is configured this way. So when we do a build for this vehicle, it has these characteristics, which we can pull in as their own proof for their use cases.

So this vehicle goes at this speed, hence we need this speed of look ahead. Here's our proof of that look ahead, here's our proof of the data. And we auto generate as much of that as we possibly can, so that for the end requirement, of course, you still need a human review,

But a lot of that is auto built.

[01:21:53] Audrow Nash: Yeah. That's

[01:21:55] Ilia Baranov: rather than hand calculating all of it.

[01:21:57] Audrow Nash: is laborious for sure. That's awesome. I, so it would be like, effectively, it'd be like you have requirements. So you, could take a standard, you could parse it and turn it into a bunch of requirements. The requirements might have parameters, and then you could use these to define specific tests that then you run that are part of your continuous integration and testing.

is that, oh, that sounds so cool.
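
The "this vehicle goes at this speed, hence we need this speed of look ahead" derivation Ilia mentions can be sketched as a worst-case stopping-distance calculation that a build pipeline could evaluate per vehicle configuration. The parameter names and numbers are made up for illustration:

```python
def required_lookahead_m(max_speed_mps: float,
                         reaction_time_s: float = 0.5,
                         decel_mps2: float = 2.0) -> float:
    """Worst-case look-ahead: distance traveled during the system's
    reaction time, plus braking distance at a guaranteed deceleration."""
    reaction_distance = max_speed_mps * reaction_time_s
    braking_distance = max_speed_mps ** 2 / (2.0 * decel_mps2)
    return reaction_distance + braking_distance
```

In a CI setup like the one described, a number like this would be computed from each build's configuration and emitted alongside the evidence that the perception stack actually meets it.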

[01:22:22] Ilia Baranov: Yeah, UL 4600 in particular is written as this, like, giant spanning tree of a bunch of things. To be compliant with this, you have to be compliant with this, and then you go to those. To be compliant with that, you have to be compliant with this. So it's basically a big tree traversal problem of, to hit this, here's all my branches, and here's proofs for every one, that then build on top of each other to get that
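
The tree traversal Ilia describes can be sketched as a recursive check: a clause is satisfied only if it has its own evidence and every sub-clause is satisfied. The structure below is illustrative, not the real UL 4600 clause tree:

```python
from dataclasses import dataclass, field

@dataclass
class Clause:
    name: str
    has_evidence: bool                      # proof artifact exists for this node
    children: list["Clause"] = field(default_factory=list)

def compliant(clause: Clause) -> bool:
    """A clause holds only if it has evidence and all sub-clauses hold."""
    return clause.has_evidence and all(compliant(c) for c in clause.children)
```

A CI job could build this tree from parsed requirements, attach auto-generated evidence per build, and fail the build on the first unsupported branch.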

[01:22:45] Audrow Nash: Oh, that's so

[01:22:46] Ilia Baranov: That's What I

[01:22:47] Audrow Nash: a moat that will

[01:22:48] Ilia Baranov: that's what we're building.

[01:22:51] Audrow Nash: Ah-Huh?

[01:22:51] Ilia Baranov: actually I do want to open source as much

[01:22:54] Audrow Nash: I want that to be open sourced. Yeah.

[01:22:58] Ilia Baranov: At least, the learnings from it and the pitfalls we find and those kind of things. I do want to build that out because I think for too long safety has been this like pay us X millions of dollars and we're a safety certification organization. We'll give you the check mark, like here's the golden check mark. I wanna actually build the infrastructure to let other groups do that themselves. And of course provide that service, we will as well. But I just as basic autonomy, you should be able to build this yourself. But if you want a shortcut, you work with us because we know how to do it. We have the proof points.

[01:23:28] Audrow Nash: Yeah, actually that could even be like a side business, or a side thing within your business, where you basically have all this infrastructure to facilitate UL 4600, and you could help robotics companies certify their software more easily. 'Cause it's a pain that people have to go and learn all of these safety certifications and all the details of it.

And if it was like, here, hook up your robot to our, like, some sort of interface, and we're gonna see if it passes in our environment, this kind of thing. That would be just amazing. That would be so cool. That would accelerate all of robotics, I would think. Like all different

[01:24:16] Ilia Baranov: yeah, and I think an additional detail is that while that's the umbrella standard we've picked, the reason we picked it is it because a high, it has a high enough barrier of proof and demonstration, that kind of stuff that I, I believe at least,

and we're working through the proof exactly.

The same techniques we've used on UL 4600, we can then deploy to whatever people need, because mining standards in South Africa are different than Australia, are different than the US. Instead of killing ourselves trying to comply with everyone, we'll generate the products and then go to specialists on site and say, for your mine, you know how to certify.

Here's all the data we have. What are we missing? What do you need that isn't part of this stack?

[01:25:02] Audrow Nash: That will be really cool. Yeah, I really like that a lot. I think, yeah, making certifications, it's interesting, because if we made them very algorithmic in a way, like, they're in PDFs or whatever, and you can parse them to make them like a tree of things that you have to go check, and then you have some document that shows all the requirements, that would just be so valuable.

So I was the ROS boss for Humble for ROS 2, and built a tool that helped us with requirements. And so I did a pretty good investigation of the requirements tools space, to try to see, okay, you have these requirements, they lead to these test cases and whatever, and how do you do it?

And they were all horrible.

So we ended up building our own.

[01:25:51] Ilia Baranov: All awful. And super expensive.

Like ludicrously expensive.

[01:25:57] Audrow Nash: many of them had like free trials or something, so I would try them and they're just terrible. Like awful, Everything. Awful and expensive. but I think if you could make that good, that would be fantastic.

That would be a big value to the whole robotics community, I would expect. And if you open source that, please let me know. I would love to see it too. Hell yeah.

[01:26:20] Ilia Baranov: Yeah. Yeah, no, for sure. you know what? After this send me that requirements. If you have any notes on that, I'd love to see if we can integrate that. That'd be really cool.

[01:26:28] Audrow Nash: Oh hell yeah. Okay.

[01:26:29] Ilia Baranov: If there's anything public about it, throw it my way. I'll take a look.

[01:26:34] Audrow Nash: the tool I ended up making was a bit simpler and it was basically for, it's all open source and it has aged a bit 'cause I haven't had maintenance time for it, but we are still using it for every ROS 2 and Gazebo release. but yeah, I, can send it to you. It's called yet another test case manager was the dumb name that I gave it.

But,

[01:26:56] Ilia Baranov: I love it.

[01:26:58] Audrow Nash: It was fun to build. And then, so, we're coming to the end, but we didn't talk at all about your podcast, so I'd love to talk a bit about that. Yeah, go ahead.

Live demo!

[01:27:12] Ilia Baranov: before we jump into that, can I do, one quick diversion?

[01:27:17] Stefan Seltz-Axmacher: Is there, a chat feature in this? Can we send, you Audrow a link?

[01:27:20] Audrow Nash: Yeah. it's in the bottom right

[01:27:23] Stefan Seltz-Axmacher: Cool.

[01:27:23] Ilia Baranov: All right. So for our listeners, you'll have less fun than we do currently. But just to describe it, I sent Audrow a link to our live test tractor in Modesto, California, and I'm gonna share my screen so you can see a video feed from it. Now, what Audrow's gonna do

see a

[01:27:44] Stefan Seltz-Axmacher: let, lemme let me, lemme, let me walk you through a little bit of what's going on. So, live streaming here on the screen is a first person view from our test tractor in Modesto, California. There's no one in this tractor, no one on site. The closest member of our team is about a hundred miles away.

The last time anyone in our team was physically near this tractor was probably Ilia and I back in September. Audrow, did that, link work for you? Do you have it

[01:28:12] Audrow Nash: I have a, it's like a Google Maps or something, and I click the location, so it looks like

it's

[01:28:18] Stefan Seltz-Axmacher: Oh,

[01:28:18] Ilia Baranov: you, you, clicked a location. Yep. There you go. So

[01:28:22] Stefan Seltz-Axmacher: you just,

[01:28:23] Ilia Baranov: our robot.

[01:28:25] Stefan Seltz-Axmacher: so the app that, re

[01:28:28] Audrow Nash: Oh, so

[01:28:29] Stefan Seltz-Axmacher: was. Was written by a buddy of mine who is a self-described 50th percentile engineer. he's just a front end engineer at a big boring SaaS company. He knows nothing about controls, he knows nothing about safety engineering. He knows nothing about machine learning, mechatronics any of that. He basically, in 20 hours he took map box's, API to pull in satellite data of the field and took in our API to be able to see location of where the tractor is and issue, commands to the tractor.

And from that, Audrow seems to have now sent us on yet another, point

that the tractor's driving itself to.

[01:29:10] Audrow Nash: That's super cool. I love that.

[01:29:13] Ilia Baranov: it's doing a three point turn and then it's 'cause it will, so for listeners, Aras clicking in a little yellow geofence and we're seeing a live feed of the vehicle drive around. What the vehicle will always try to do is, end up oriented from where it started to where it is. So usually that'll involve a three point turn.

So right now it's facing north. If Audrow clicks somewhere south, we're gonna turn around and then end up facing south.

There you go. And this kind of easy to use interface, where you just click it and tell it where to go, is just one example app.

But the underlying REST API is what we actually use to control our vehicles.

And this tractor could easily be a mining vehicle or a forestry vehicle and it would behave completely differently. But Audrow's clicking would be identical.

[01:30:01] Stefan Seltz-Axmacher: Just the other week we were co-hosting a happy hour at CES and we were using the same app, at 9:00 PM Pacific time to issue commands to, an AGV that our autonomy's on in Australia, made by a partner of ours, BIA5.

So the reason for this, and the whole thought process here: if you wanna automate a tractor, the special thing about automating a tractor is not the hardware that goes onto the tractor.

It's not the point-to-point navigation. It's figuring out the five screens and four buttons per screen that Farmer John needs to see and interact with, to make something valuable for Farmer John happen.

And I don't know what Farmer John needs, 'cause I don't know if Farmer John is in Ohio and grows wheat, or if it's really Father Juan in Spain, growing whatever they grow in Spain. Either farmer probably needs a different UI. They probably need different behaviors to happen. We let our customers build that. We just solve the point to point nav part.

[01:31:05] Audrow Nash: Huh. That's super cool. Did not know we were gonna get a live demo. I love that.

[01:31:11] Stefan Seltz-Axmacher: Yeah, we did that demo, I think, a thousand times last year. That is a real number.

[01:31:15] Ilia Baranov: Literally something. Yeah.

[01:31:18] Stefan Seltz-Axmacher: like if you listener are on a call with me and it's during business hours, You'll almost certainly end up getting

[01:31:25] Audrow Nash: ha.

[01:31:26] Stefan Seltz-Axmacher: and that's part of our secret evil plan to prove that our robots just work is by like doing constant demos. of this is not a big deal.

This is not a big production. We're not even treating you all that special right now, Audrow. We're just doing the thing we do every time.

[01:31:40] Audrow Nash: That's great. It's great to not be treated special with the get the good demo.

[01:31:46] Stefan Seltz-Axmacher: I know how you make,

I know how to make you feel.

[01:31:50] Audrow Nash: Yeah. Okay. Do you think

So it's very interesting to me. The web connects all of us. It's just like, all our devices are running web stuff, everything is making requests. I feel like in the robotics community, we've looked at REST APIs, which, I don't know, they have their GET and PUT and POST and whatever their requests are. And we've looked at them and gone, that will never work for robotics. And you guys are making these nice demos, and you have these simple REST API endpoints for querying. Tell me a bit about that. Is it the future of how we're going to interact with robots, do you think? Like, a high level REST API will manage things, and you'll handle all the complexity on the robot side, but keep it simple for the user.

Is that kind of how it will be? Or tell me about rest APIs and why them?

Web tools and robotics

[01:32:46] Ilia Baranov: I think, yeah, I think we, when we were just getting started, we wanted something that every web developer and every language out there has a easy to use getting started guide on. So I think like Python or C or Rust or HTTP, Ruby, whatever.

[01:33:02] Audrow Nash: Whatever it is. Yeah.

[01:33:03] Ilia Baranov: And REST was this very simple to understand that you could even run from a command line if you wanted to or run from a browser.

Exactly. You can curl it. And it is a polled interface in a lot of ways. So if, for example, we want to stream video, you'll notice the video I showed was using Formant as an actual web backend. We're big fans of Formant, we use them

for this sort of stuff. So REST is not the path for streaming large amounts of

[01:33:32] Audrow Nash: No,

[01:33:33] Ilia Baranov: it at all. but Right, but the right tool for the right process.

If you have a large fleet of vehicles, let's say a hundred, and you just need to be able to tell them, vehicle A, go to this position, vehicle B, go to this position, it actually scales relatively well for that purpose, without having to set up unique connections for each one, without having to manage WebRTC stuff.

It's just, spill out a bunch of commands from anywhere you are in the world, and they'll get to your robots.

Because it's built on standard web technology, you get a lot of that security, you get a lot of that multi-threading, multi-user agent stuff, it's all well figured out.

So we don't have to reinvent any of that stuff either
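
Commanding a vehicle over a REST API of the kind described might look like the sketch below, using only the Python standard library. The endpoint path and payload shape are invented for illustration; the real API will differ:

```python
import json
from urllib import request

def goto(vehicle_host: str, lat: float, lon: float) -> request.Request:
    """Build a POST request telling one vehicle to drive to a point.
    (Returned unsent here so the sketch stays network-free.)"""
    body = json.dumps({"lat": lat, "lon": lon}).encode()
    return request.Request(
        f"http://{vehicle_host}/api/v1/goto",   # hypothetical endpoint
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Fanning the same call out over a hundred vehicles is just a loop over hosts, which is the scaling property Ilia is pointing at: no per-robot connection setup, no WebRTC session management.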

[01:34:11] Stefan Seltz-Axmacher: And, it can work both for this fun demo, which is a neat thing to do from a distance, but it can also work if everything is on the same site and there is no external internet connection. The same API can work for the vehicle in that use case.

[01:34:24] Audrow Nash: Oh, that's a big

[01:34:26] Ilia Baranov: the API's hosted locally, so the API's on the vehicle.

And so the API could even be self triggered. So, something we don't have ready yet, but I mentioned that behavior tree. The long-term goal here is to be able to give our customers the ability to generate little subtrees and inject them back onto the robot for itself. Concrete example: if they're building their apple picking robot,

they focus on their arm to pick the apple, and then onboard they trigger an API function, move to the

[01:34:52] Stefan Seltz-Axmacher: forward a foot.

[01:34:53] Ilia Baranov: which they've, or, move to the next tree, which they've defined as move until you see a barrier under these conditions, whatever. Like a little subtree of logic.

[01:35:03] Audrow Nash: I like that. Yeah, so it can self trigger. So you, can basically have a system make its own calls from another call so it can cascade in this kind of thing. That seems powerful to me. very interesting. Seems a lot more complex, but seems like you could do some cool things or maybe you can get, behavior that's more responsive with that kind of thing.

'cause it can trigger something right then I bet. Is that the big benefit that you get with it? Ah, cool.

[01:35:37] Ilia Baranov: it's, local interactivity is definitely one of them. So, again, and this isn't fully ready yet, so our current API, we can publish the API specs. They're online, but they don't, they won't mention this subtree behavior yet.

But one of our early clients, they just wanted a button that you could press. Press a button, and the robot does

X. And we thought, oh, we could write a custom ROS interface, we could do all these things to get it to behave. But then we thought, we're already running an API, let's just loop back the API call to itself.

So they just have their little button do a little programmatic REST API call.

Which sounds funny, but it actually ended up being the most flexible approach.
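
The loop-back pattern is just the same public API pointed at localhost, since the API is hosted on the vehicle itself. A hypothetical sketch (URL, port, and payload are invented):

```python
import json
from urllib import request

# The vehicle's own locally hosted API (hypothetical address).
LOCAL_API = "http://127.0.0.1:8080/api/v1"

def on_button_press() -> request.Request:
    """Onboard button handler: instead of a custom ROS interface, reuse
    the same REST API the fleet tools use, but against localhost.
    (Returned unsent so the sketch stays network-free.)"""
    body = json.dumps({"relative": {"forward_m": 0.3}}).encode()
    return request.Request(f"{LOCAL_API}/goto", data=body,
                           headers={"Content-Type": "application/json"},
                           method="POST")
```

Because the handler speaks the same interface as every external client, any subtree or behavior defined through the API works identically whether triggered from across the world or from the robot itself.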

[01:36:16] Audrow Nash: Yeah, I would imagine it would be the most flexible approach. That seems great. And do you think this is gonna be a larger trend in robotics? I know a lot of companies end up wanting to put things into some sort of interface you can view from the web, and I guess this often means making endpoints for something.

And sometimes they'll use WebSockets and things for streaming video, or whatever it might be. But do you think that we're gonna see a larger trend of robots being interacted with through a REST API or something like this? Is that something that you think will catch on?

[01:37:03] Ilia Baranov: I, think, if I could jump in,

sorry, Stefan, but, not necessarily REST in particular, but I think

[01:37:10] Audrow Nash: Just web standards.

[01:37:11] Ilia Baranov: in general has. like web, different web websites, whatever it is. If there's a new thing next, next month, ROS special communication protocol, like happy to write an interface layer to it.

'Cause we don't care, ultimately, right? None of our actual safety or behavior happens up there. It's just a translator. And I think we are moving in a direction where there's more connectivity for lower cost on more devices.

There's a lot of this edge computing that's taken hold, a lot of cloud computing. So I think covering our eyes and pretending that doesn't happen in robotics isn't the right way forward.

I think we have to accept that.

[01:37:45] Audrow Nash: Definitely. And we might as well leverage everything that exists too.

[01:37:49] Stefan Seltz-Axmacher: Yeah.

[01:37:50] Audrow Nash: there's just so much there where huge sums of money have been invested into improving these systems. and we might as well not reinvent that. So the web is great for a lot of things like that, is my opinion on it.

Hell yeah. What a cool thing. Thank you for sharing that.

[01:38:08] Stefan Seltz-Axmacher: Absolutely.

[01:38:09] Audrow Nash: Yeah. In the podcast episode that I listened to, that Nic shared, you guys talked about demoing, like, your culture where you just demo constantly, so it's not like you do a big demo once in a while and it breaks, this kind of thing.

And I did not know that I would be seeing a demo now. So

[01:38:36] Stefan Seltz-Axmacher: I didn't know that was

[01:38:36] Audrow Nash: see. You're still

[01:38:37] Stefan Seltz-Axmacher: you listened to, so that worked out.

[01:38:39] Audrow Nash: Hell yeah.

[01:38:41] Ilia Baranov: CICD stands for continuous demo by

[01:38:44] Audrow Nash: Ah ha.

[01:38:44] Ilia Baranov: just to be

[01:38:45] Audrow Nash: Oh yeah. Yeah. Instead of deployment. I love it. Continuous demo. For you guys it might be demos. Yeah. So what do you,

Tell me about the podcast. Like, at a high level, what is the Automate It podcast, and what are you guys trying to achieve with it? Or, I don't know, what kind of questions do you guys investigate?

Maybe Stefan

Automate It podcast

[01:39:11] Stefan Seltz-Axmacher: co of course. Yeah. So every one to 12 weeks. we release a podcast on our podcast called Automate It. and, basically automate it is the podcast version of the fun Robotics conversations you have with your friends at the end of the long day.

[01:39:31] Audrow Nash: does seem

[01:39:31] Stefan Seltz-Axmacher: two sections of it, where the first is we each draw a random card to represent a technology and represent a business case, and then we have to spitball into an existence, a robot that uses that technology in that business case.

And we try to make them stupid. Sometimes it ends up making them good. Often we make them stupid, but representative of a real company, which is troubling every time. And then in the second half, we'll talk about different challenges in robotics, like why indoor localization is hard, or why demoing often is good, or any number of these kind of normal things that happen when you're working on robots.

[01:40:12] Audrow Nash: Let's see. Ilia, anything to add?

[01:40:18] Ilia Baranov: yeah, I would just add that generally we do this talk after a beer or two, so it tends to be a little bit it off the rails at the start. I think my favorite was we invented a bartending robot that was a bunch of arms on electrified rails that would fly around the bar to pour drinks or bring them to people.

And, yeah, very economically viable. It's

[01:40:41] Stefan Seltz-Axmacher: Yeah, it'd be, a great business. but in terms of why we started it, so weird thing about our company is

I'm sure there's more personalities in this, but maybe there's three archetypes of people starting robotics projects. There's the optimistic, producty business guy who thinks robots just work, and, let's figure out this, let's build a robot around this idea that I

[01:41:05] Audrow Nash: it's an opportunity basically.

[01:41:07] Stefan Seltz-Axmacher: Yeah, this is a great opportunity. there's the like young, dumb and incredibly confident, roboticist who maybe they just got out of CMU, maybe they just left some big Robotics lab and like that Robotics lab, they did everything slightly wrong. And as an individual roboticist, I one time made a project that looked neat for a minute.

So I'm gonna rebuild everything in robotics myself to solve every single problem and it's gonna go great.

[01:41:39] Audrow Nash: I've been

[01:41:39] Stefan Seltz-Axmacher: kind of third archetype who either of those first two become after robots are jerks for a while. like the type of roboticist you are when you once had dreams of, solving general problems.

And now you're just hoping, God, I hope it turns on tomorrow and can do something, and that people let me keep on working on this. For Christ's sake, why won't it turn on?

And that third group, they tend to be our best customer base. Because the people who think that robotics is solved, the people who think that they alone can solve it, they're not gonna let us solve point to point autonomy for them.

'Cause that will take them like a weekend, probably. It's really the people who have been beaten up by robots, chewed up, spat out, who really wanna make a robot work this time and create real value. That's who we sell the best to. So as a result, we made a podcast to help people get to that stage of their robotics journey.

[01:42:41] Audrow Nash: Oh, that's really cool. Yeah. So basically you're discussing all the things and you're letting people in on the process, and the idea is it helps people become educated in robotics sooner, about how to make things and what they can't do

[01:42:58] Ilia Baranov: see the back,

to see the backstage pain and suffering. We were talking at CES just recently, and we were joking that we should have actually named it Robotics Anonymous, as, like, just a support group for, here's all the pain we've faced.

[01:43:14] Audrow Nash: Yeah. Yeah. In the demo one, you guys talked about some of your past experiences, and they seemed super painful, with spinning parts that were hot, and it was super cold out, and it was just crazy. Yeah, I feel, the thing that is interesting to me is, I was definitely stage two, or whatever, of what you were describing, while I was in grad school.

Like, I tried to build ROS at some point, and then by the time I

[01:43:41] Stefan Seltz-Axmacher: you are gonna do it the right way.

[01:43:43] Audrow Nash: Yeah, I know, but, and then I got to ar I needed something like Rviz and something like ROS2bag and it was like, I guess I'll just use it and this kind of thing. But now, it's been interesting 'cause I've been mostly on the software side. and so I'm trying to do some Robotics side projects occasionally and buy a bunch of stuff from Sparkfun and hook it up to ROS 2 and things like this. And it's a lot of work. and the thing that is really becoming clear to me is to do a good job at robotics and to like actually build value.

You need a lot of people. Which is interesting, 'cause there's so many things to do, where you need the CEO who goes and figures out the market, and you need a domain expert, and you need the technical guy, and you need all the manufacturing knowledge, and there's a million things.

[01:44:38] Stefan Seltz-Axmacher: Yeah. And I would argue that, for robotics to be the space we all want it to be, you need to build a product, you need to build a company, you build an organization where you only need to be good at one part of it.

[01:44:53] Audrow Nash: Definitely.

[01:44:54] Stefan Seltz-Axmacher: maybe you're doing.

the hardware, maybe you're doing the retrofit. Maybe you are getting really smart about what farmers in Ohio need as a ui, ux, and, as Robotics behaviors, and you're using somebody like us to do the driving and someone else to retrofit the vehicles

And if you're willing to do just that, it's still gonna turn out to be incredibly hard.

'Cause it turns out, as much as we all might make fun of DocuSign 5.0, it's hard to build DocuSign 5.0, or Hot or Not for cats. Those are hard businesses to build in their own right. And by focusing just on the specific part of the value that you have unique insight into, we can all go make robots.

[01:45:38] Audrow Nash: I like that a lot. Ilia, anything to add?

[01:45:43] Ilia Baranov: I would say, from the content creator perspective: so Nicole, our kind of marketing guru, suggested this to Stefan and me, to start a podcast. And at first, kind of, both Stefan and I were

[01:45:57] Stefan Seltz-Axmacher: Hard

[01:45:58] Ilia Baranov: eh, a little leery. I got enough...

[01:46:02] Audrow Nash: You are doing a lot.

[01:46:04] Stefan Seltz-Axmacher: Who talks on

[01:46:04] Ilia Baranov: at them this week at some point.

Yeah. But realistically, it ended up being super fun for us. And I think that kind of unstructured part helps get us in the mood to talk about random, crazy Robotics Anonymous topics. And secondly, I think it ended up being hilariously effective as a marketing and outreach tool.

[01:46:28] Audrow Nash: Oh

[01:46:29] Ilia Baranov: It's not... we don't even have a thousand listeners. We don't have a lot; it's a very small listener base.

[01:46:35] Stefan Seltz-Axmacher: It's a niche podcast.

[01:46:36] Ilia Baranov: Yeah, it's very niche, right? But the funny thing is, any conference we go to, invariably at least one, if not multiple people will come up and be like, oh yeah, I've listened to your podcast.

[01:46:48] Stefan Seltz-Axmacher: And they'll have,

[01:46:48] Ilia Baranov: And it's the people we wanna work with. Yep.

[01:46:52] Stefan Seltz-Axmacher: Yep. And they'll specifically be like, yeah, maybe you can help us. What we're doing is really hard. It seems like it's been hard for you before too, and maybe you can do some things better than we can. Yeah, we'd love to commiserate with you on robots and help automate it.

[01:47:06] Audrow Nash: Yes. Oh, that's so funny. Yeah, I have that too, where it's a small fan base, but they are awesome.

[01:47:16] Stefan Seltz-Axmacher: Yep.

[01:47:17] Audrow Nash: I get recognized just a handful of times at every conference. They're like, oh, I love the pod, or, you should talk to this person. It's just, I don't know, such a cool, wonderful thing.

I feel like talking to a small audience that really likes the content that you make is really wonderful. There's something just glorious in it, like very enjoyable.

[01:47:39] Stefan Seltz-Axmacher: I think it goes back to a basic Paul Graham thing of make a hundred people love you, or make 10 people love you, as opposed to a thousand people like you.

[01:47:47] Audrow Nash: Ah, yeah. It's a good way to look at it, Paul.

Yeah.

[01:47:52] Stefan Seltz-Axmacher: Yeah.

[01:47:53] Audrow Nash: Hell yeah. That's super cool. I love the reason for doing the podcast. And it's funny that it can bring you people to commiserate with and help educate, to move people towards this: robots can be really cool and they can solve a lot of things, but they're really hard, and I dunno.

Okay. So you gotta approach it that way, like they're super...

[01:48:18] Stefan Seltz-Axmacher: Yep.

[01:48:21] Audrow Nash: Let's see. Where do people find your podcast? Is it on all of the different...

[01:48:27] Stefan Seltz-Axmacher: So we're on all the major outlets, Spotify, Apple Podcasts, whatever. It's called Automate It. And there's a nice cartoonish image of Ilia and me, both donning facial hair and looking incredibly dapper.

[01:48:41] Audrow Nash: Nice. Yeah, it's funny 'cause I've listened to a couple of your episodes, and the beginning is very much complimenting each other's hair,

[01:48:50] Stefan Seltz-Axmacher: hair.

[01:48:50] Audrow Nash: this kind of thing. And it's oh, your

[01:48:53] Stefan Seltz-Axmacher: for anyone

[01:48:54] Audrow Nash: today. A little goofy.

[01:48:56] Stefan Seltz-Axmacher: For anyone viewing this, that just makes obvious sense. And for the people who are just listening, imagine the most incredibly good looking people ever. And that's exactly what we look like, especially in the hair department.

Just yeah,

[01:49:11] Audrow Nash: And then do not go onto the YouTube. And

[01:49:14] Ilia Baranov: Stefan usually starts the episodes by critiquing whatever I'm wearing that day,

[01:49:24] Audrow Nash: really,

[01:49:25] Ilia Baranov: always a good start to our podcast episode.

It's ah, same shirt. I see. Huh.

[01:49:31] Audrow Nash: I wear a black shirt in every episode, but

[01:49:34] Stefan Seltz-Axmacher: This sounds a lot like my wife who tells me I look the best with a mask on.

[01:49:40] Audrow Nash: how funny. Not gonna touch that one.

Let's see, we're coming to the end of the time. I can talk a little bit more; I don't know if you guys have a hard cutoff or anything. But we're at the end of... okay. Okay.

I wanted to see what you guys have for advice for people wanting to get into robotics. Say it's someone who's in the middle of their university studies or something. What should they know, or what would save them time?

Maybe start with Stefan.

Advice for getting involved in robotics

[01:50:20] Stefan Seltz-Axmacher: Yeah, so maybe I'll answer this more for people who are interested in robotics products, and Ilia can talk about it more from an engineering perspective.

[01:50:27] Audrow Nash: it.

[01:50:28] Stefan Seltz-Axmacher: When you have not built a robot yourself and you're looking in, a very common intellectual fallacy, or flaw, that you fall into is that you see things that have been built before and you assume they're no big deal. I'll just grab an arm from a Toyota factory, I'll grab a Waymo car, and then I'll have a car that can drive around and put mail in mailboxes, and it's gonna be great. The weirdest and shittiest and hardest thing about robotics from a product and a business and investing perspective is that, unlike the world of SaaS apps where any feature is purchasable as a microservice, in robotics basically every feature that you see, every supporting piece of tooling, every supporting piece of infrastructure, almost everything has to be reinvented from scratch. With the exception of some of the stuff in ROS, which is in general a lot harder to set up than any piece of tooling you would get for starting an e-commerce site.

So a good robotics business, a good robotics product, is a matter of: what is a thing that is super expensive for humans to do,

like there's an incredible labor shortage around it, and is drop dead simple for a robot to do, 'cause it needs the same thing to be done over and over again. And start with that. And don't just start by combining robots you've seen in TechCrunch.

[01:52:05] Audrow Nash: Love it. Yeah. Ilia, what do you think?

[01:52:12] Ilia Baranov: So on my end, my thoughts are summarized by a comic. I really love XKCD; I'm sure a lot of people listening

have heard of that webcomic.

[01:52:19] Audrow Nash: Yeah, they're great. I.

[01:52:21] Ilia Baranov: There's a particular one that's like, oh, we built this robot that shoots lightning and is also an air balloon and whatever.

And somebody asks, what is that for? And it's: search and rescue. That's like the default answer for robot projects. It's, oh, search and rescue. So I think the trap that engineers fall into is solutionism, or robots for robots' sake. And I love to do that in my own life.

Like I build lots of fun trinkets just for fun for myself, but that is not a business. And so as an engineer, really try to step back and ask the question: is this the most effective way I can solve this problem? Are robots really the answer here? If not, you're gonna bang your head against that problem regardless of how good your tech is.

You're just not gonna solve it. I'll give a really quick concrete example. There's a mining project that's interested in building autonomy and stuff like that, but in the short term, they're saying their immediate goal is just to reduce the amount of idling that their fleet does, because across their fleet, when their drivers just idle the engine, they waste like multiple millions of dollars a year.

[01:53:30] Stefan Seltz-Axmacher: $8 million a year of diesel

wasted, because the operators won't turn the engine off. Because if the engine's turned off, it looks like they're not working.

[01:53:41] Audrow Nash: huh?

[01:53:42] Ilia Baranov: Or 'cause they want to keep the AC on, right?

And as an engineer, my thing is, oh, we can automate their vehicles, we can do all this, we can put in all these monitoring systems. And then the answer is: or you could just have a 10 cent circuit that, if you're idling for longer than X, automatically turns off your engine.

And that 50 cent or $2 solution is worth $8 million. Not particularly interesting, it's a 555 timer and a switch, but worth $8 million off the bat. So when you're being an engineer, really try to think: do you actually need robots here? And if the answer's no and you want to be a roboticist, look for a different idea. You're not gonna tech your way through this problem,
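The idle cutoff Ilia describes is a hardware timer, but the underlying logic is easy to sketch in software. A rough illustration: the threshold and signal names below are invented for the example, not from the actual mining project.

```python
# Sketch of an engine idle cutoff: if the engine is running but the
# vehicle isn't moving for longer than a threshold, shut the engine off.
# The 5-minute threshold and field names are made up for illustration.

IDLE_CUTOFF_SECONDS = 5 * 60  # hypothetical 5-minute limit


def should_cut_engine(engine_on: bool, speed_kph: float,
                      idle_seconds: float) -> bool:
    """Return True when the engine should be shut off."""
    idling = engine_on and speed_kph < 0.5  # running but not moving
    return idling and idle_seconds > IDLE_CUTOFF_SECONDS
```

The whole decision fits in two lines, which is the point of the anecdote: the value came from the problem framing, not from sophisticated tech.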

[01:54:24] Stefan Seltz-Axmacher: Yeah,

[01:54:26] Audrow Nash: oh, go ahead, Stefan.

Stefan's former startup experience and path to Polymath

[01:54:27] Stefan Seltz-Axmacher: I was gonna say, so an interesting thing: when I was working at my self-driving trucking company,

we got to the point where being better at writing software to manage a trucking fleet would make us more profit than solving general autonomy. Like, we were autonomous 90, 95% of the time when we were on highways, teleoperated for the last 5%. And being more clever about how we route trucks, being more clever about automatically telling more brokers that trucks are available for a load, that type of stuff would add more cents per mile of profit than the $10 billion AGI solution type of thing. And that realization made my board member think... or, maybe I should scratch that: made senior financial people connected to us

think, why don't you guys just let go of the robotics team? Just focus on being a software-focused trucking company.

And when robot trucks get solved, you buy whoever makes the best ones. Which I think is gonna happen to a lot of the vertical folks, where it's gonna become like, oh, running the day-to-day business in this heavy industry, these companies suck a lot. Just by bringing in standard Silicon Valley practices of how to manage resources, you'd be more efficient than most of the world's biggest industrial players.

Yeah.

[01:55:55] Audrow Nash: That's bonkers. What a thing. That was probably a hard time, I would imagine, with that

[01:56:02] Stefan Seltz-Axmacher: Yeah. Yeah. Yeah. That was a like, huh, I can't tell a lot of people on the team about this, or morale would drop pretty sadly.

[01:56:10] Audrow Nash: Yeah. What a thing. Actually, I know we're late in the thing, but what happened to that company, and where are they at now?

[01:56:20] Stefan Seltz-Axmacher: Starsky ran outta money. We failed in like late 2019, right before 2020.

Around then. By that time, we were, to my knowledge, the most commercially successful L4 trucking company. We were doing about $7 million in annualized revenue run rate in freight that we were hauling, mostly with regular trucks.

But our customers thought we were hauling freight with like unmanned self-driving trucks.

We did the world's first ever driver out autonomous run on a live highway.

So we drove a truck for seven miles, like in regular traffic, with no person in it. And we went to VCs and were like, hey, look, the tech works.

Here's our safety testing plan that will allow us,

in about 12 months, to have five trucks driving full-time with no person in them.

The business more or less works. Here are the things that we need to do to make it outright profitable, and they're interesting and very cookie-cutter, running-a-trucking-company types of things.

We've proven all the stuff. Let's go, give us $50 million.

[01:57:28] Audrow Nash: Yeah, we did the hard

[01:57:28] Stefan Seltz-Axmacher: Super profitable. And VCs looked at us and they're like, wow. So you're working really hard. Running a trucking company is hard. Robots are hard.

[01:57:37] Ilia Baranov: hard.

[01:57:38] Stefan Seltz-Axmacher: And you're saying that with another $50 million, maybe you can be a trucking company with 50% margins. That kind of sucks.

[01:57:47] Audrow Nash: Oh

[01:57:47] Stefan Seltz-Axmacher: I would way rather give $50 million to DocuSign 5.0, where for $50 million they'll create $40 million of revenue, but they'll have 90% margins on it. And anyone in San Francisco can run that company. You don't need some special weird unicorn. Anyone in the world can run a unicorn SaaS company.

This autonomy thing isn't a great business, is it?

[01:58:15] Audrow Nash: Oh, what a thing. Okay. And then, so after that, you effectively picked yourself up,

[01:58:25] Stefan Seltz-Axmacher: Yep.

[01:58:26] Ilia Baranov: Yep,

[01:58:26] Audrow Nash: started Polymath, and used a lot of the lessons. Interesting. I see.

[01:58:34] Stefan Seltz-Axmacher: So, COVID happened. I went and hung out with a bunch of SaaS companies. I had an interesting experience where, while I was hanging out with SaaS companies, I took a deep look at the NetSuite ecosystem, if you're familiar with it.

[01:58:47] Audrow Nash: Is that like Microsoft, what? What's NetSuite?

[01:58:51] Stefan Seltz-Axmacher: So NetSuite is this big, crappy piece of enterprise software to do accounting, set up in a super special way.

So if you've worked at a company with more than 500 people, they probably use NetSuite or something like it. Think of it like QuickBooks, but a couple million dollars, and then a couple million dollars to implement it in a super special way, because Coke likes to do its accounting slightly differently than Pepsi does, who likes to do it slightly differently than, I don't know, Snapple or whatever. And when I looked at that ecosystem, I saw software that didn't just work, right? It wasn't like Gmail where you swipe a credit card and you have an email account. It wasn't like Uber where you download an app, hit a button, and a car shows up.

It was software where you buy it, you spend a bunch of time integrating it, not even just business to business, but you spend a bunch of time configuring it to work just for you. And then it is one of the most valuable pieces of software in your entire stack.

[01:59:53] Audrow Nash: whoa.

[01:59:54] Stefan Seltz-Axmacher: And I don't think Robotics is ready to be Gmail yet, and I don't think

robotics is ready to be Uber yet. But I think robotics is ready to be something that you buy, configure, and then it's a key part of how your business operates.

And that's why we're building Polymath the way we are.

[02:00:10] Audrow Nash: I love it. Almost like a, I don't know, Zapier or something, like some automation

[02:00:15] Stefan Seltz-Axmacher: Like Zapier, but without the nice UI. Zapier, but you have to hire consultants to set it up.

[02:00:23] Audrow Nash: Yeah. Cool.

[02:00:25] Stefan Seltz-Axmacher: Zapier's pitch was probably, we make it so you can do Oracle-type integrations without a team of consultants. That was probably their seed pitch.

[02:00:35] Audrow Nash: huh. Yeah. What a thing.

Okay.

[02:00:39] Stefan Seltz-Axmacher: Okay.

[02:00:41] Audrow Nash: Let's see. So there's one more thing I wanted to ask. We had advice; you guys both gave your advice.

Oh, Ilia, I wanted to ask, 'cause you mentioned side projects, that you build your own trinkets, these kinds of things. I wanted to see what you think of side projects.

I think they're super important as a way of growing your skills and staying relevant, but I wanna see what your perspective is on them, and from a CTO perspective too.

Thoughts on side projects

[02:01:18] Ilia Baranov: Yeah, for sure. Two things. One is my side projects keep me sane, because especially as you get into the more CTO roles, like, I would love for most of my day to be coding, but realistically maybe 10% of it is, and 90% is everything else. So doing the side projects keeps me happy.

But the other funny thing is that my side projects end up being aligned with Polymath some of the time, which is interesting. So early on, we didn't really have a good way to emergency stop our vehicles remotely. As in the demo we gave, we have this completely remote vehicle, so none of the line-of-sight e-stops will work for us, right?

And Fort's wireless-over-network stuff wasn't ready, and nobody else was providing anything of real value. So because I had built up some stuff in Arduino land, basically, I was able to quickly throw together a prototype emergency stop that ended up being relatively robust.
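A remote e-stop over a network usually reduces to a heartbeat watchdog: the vehicle must keep hearing from the operator, and it stops the moment it doesn't. A minimal sketch of that pattern; the timeout, class name, and injectable clock are my own choices for illustration, not Polymath's actual design:

```python
import time


class HeartbeatEStop:
    """Stop the vehicle if no operator heartbeat arrives within the timeout.

    Fail-safe by default: until the first heartbeat is seen, motion is
    not allowed, so losing the network acts like pressing the button.
    """

    def __init__(self, timeout_s: float = 0.5, clock=time.monotonic):
        self.timeout_s = timeout_s
        self.clock = clock      # injectable clock, handy for testing
        self.last_beat = None   # no heartbeat seen yet

    def heartbeat(self) -> None:
        """Call whenever a heartbeat packet arrives from the operator."""
        self.last_beat = self.clock()

    def motion_allowed(self) -> bool:
        if self.last_beat is None:
            return False        # fail-safe before any contact
        return (self.clock() - self.last_beat) <= self.timeout_s
```

The design choice that matters is the fail-safe default: the safe state is reached by doing nothing, so any dropped link, crashed process, or unplugged radio stops the vehicle.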

[02:02:25] Audrow Nash: Nice

[02:02:27] Ilia Baranov: And those sorts of things, I think... the TLDR is that side projects are critical. I'd say there's almost like a bathtub curve: when you're junior, they let you iterate quickly, and when you're very senior, they let you keep your sanity, because seniority tends to correlate with meetings.

Not always, but in a lot of ways. So they let you just quickly prototype something and test it out. And I find the risk, I would say not as much in robotics companies, but for software companies in general, is that over time, as they become larger, there's this growing divide between the people making decisions and the people writing code. And if the people making decisions don't make a habit of writing code and being connected to the system, that divide grows. And basically being a chaos monkey is another thing that I do quite often that I highly recommend.

[02:03:16] Audrow Nash: I dunno what it

[02:03:17] Ilia Baranov: We'll put out an API and I'll just write like the worst possible code.

[02:03:21] Audrow Nash: Yeah.

[02:03:22] Ilia Baranov: Yeah, it's just, I'll write the worst possible code to talk to our API and break a whole bunch of stuff by accident. But by doing that, we'll figure out, oh, we didn't handle this exception. Or, a dumb user like me would've thought it works this way, but it works that way. All of this stuff. And all of that flows into keeping your sanity and having a good business.
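Ilia's "worst possible code" trick is essentially cheap fuzzing: hammer your own API with the inputs a careless user would send and collect whatever it fails to handle. A toy sketch of the idea; `parse_speed` here is a stand-in endpoint I made up, not a real Polymath API:

```python
def parse_speed(value):
    """Stand-in for an API endpoint: parse a speed command in m/s."""
    speed = float(value)  # blows up on non-numeric junk
    if speed < 0:
        raise ValueError("negative speed")
    return speed


def chaos_call(fn, inputs):
    """Call fn with deliberately bad inputs; collect unhandled surprises."""
    failures = []
    for bad in inputs:
        try:
            fn(bad)
        except ValueError:
            pass                  # rejected cleanly with the documented error
        except Exception as exc:  # anything else is an unhandled gap
            failures.append((bad, type(exc).__name__))
    return failures


junk = [None, "", "fast", [], {"speed": 3}, float("nan")]
gaps = chaos_call(parse_speed, junk)
```

Running this surfaces the `TypeError`s the endpoint never anticipated, and note that `float("nan")` slips through silently, exactly the kind of "a dumb user like me thought it works this way" finding the exercise is meant to produce.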

[02:03:45] Audrow Nash: Hell yeah. Yeah, I think it's an interesting thing. I feel like I've been doing side projects really regularly for a few years now. Like every evening I'm working on a side project, which I absolutely love. It's some of my favorite time each day. So one of the things with that is, I feel like you learn through your own pain how to make good architectural decisions.

And I feel like that's a very hard thing to learn unless you're going end to end on something you're building. If I just focus on the one specific narrow thing that I'm working on at work, I don't understand the end-to-end process. But when I'm working on that one specific thing, it's easier to make better decisions if I understand the larger context, through doing the whole process in side projects.

That's been one of my thoughts on it recently,

but

[02:04:43] Ilia Baranov: totally,

[02:04:43] Stefan Seltz-Axmacher: Makes a lot of sense.

[02:04:44] Audrow Nash: Hell yeah. Alright, so, questions.

What are you guys excited about in robotics? What are the things you're looking forward to? Ilia, let's start with you.

What Stefan and Ilia are excited about in robotics

[02:04:56] Ilia Baranov: I'll come back to my experience with Amazon and the Astro program. For your listeners, if they haven't seen it: Amazon has a home robot called Astro. You can find it online; theoretically you can buy it. I don't know, I don't make their business decisions, but you can actually play with this thing.

It works reasonably well. We have one at the office just for fun. But they and Samsung and Sony and basically everybody and their dog have made these kinds of human-interactive robots.

But the little piece that they're missing is the question, the very small question, of what the hell does it do, which nobody has answered yet. The killer app for robots in the home seems to have been vacuuming, from like the 1990s till today, and nobody's come up with a better idea. I'm still waiting for the next killer app for robotics in the home. Putting away dishes or laundry, that's the first thing everybody jumps towards.

But manipulation is very expensive and complicated. And even if ML magic fixes it, just the actuators are very expensive.

Nobody's gonna spend even $5,000 on this, right? And $5,000 for manipulators is ridiculously cheap, right? For a good one that moves around and stuff. Anyway, I think we're still missing the killer app in the home for robots, and I'm excited to see what that ends up being.

Maybe it's an embodiment of these assistant, machine-learning-based approaches, where you just shout in your room, hey, find me a path to this theater, and a screen magically comes over and talks to you and tells you what to do. I don't know. If I knew, maybe we'd do that company, but for now we're in the industrial space and a different business.

[02:06:37] Stefan Seltz-Axmacher: This whole thing sounded like what we do at the beginning of our podcast, where we'll start coming up with these things and be like, yeah, but wait, I can already ask that question anywhere in my house and it gets answered for me, and the screen that pops up is my cell phone. And it just happens already.

So why do I need a screen on an arm in each room of my house? I'd say what I'm most excited about is, for better or worse, the self-driving car industry threw a lot of money and interest into robotics. I think more than other spaces. I know there's a lot of quadcopter people, but from my own perspective, it seemed like way more people got way more excited about self-driving cars than quadcopters.

And now there's a lot of people who have worked in and around robotics, and some of those people have been siphoned off to LLM wrappers for XYZ, and some of those will become real businesses or not. And there's a similar flavor of self-driving car hype happening in humanoid robotics right now.

And my hunch is it will go roughly the same as it went with robotaxis, which you can decide how much sarcasm you wanna read into. But I think we now have a critical mass of people who care about robots, who are hooked on these sorts of problems in such a way that Dropbox 5.0 is no longer an appealing job, and now they need to build robots for something.

And many of those people will work on nonsense, like robots for robots' sake. But I think we're gonna have more real robotics companies doing valuable and interesting things founded in the next five years than we had in the last 100 years.

And I think right now, probably the most successful robotics company of all time, outside of defense and mining, is iRobot. And I think you'll have five to 10 companies as successful as iRobot in the next five years.

[02:08:41] Audrow Nash: Cool.

[02:08:42] Stefan Seltz-Axmacher: Maybe even 50. And that's gonna be fucking cool.

[02:08:48] Audrow Nash: Hell yeah.

That's awesome. I think you're probably right with that, and I think that is such an exciting thing, and that matches my observation too, that there's a bunch of people that are hooked on this and they want to be using it for

[02:09:05] Stefan Seltz-Axmacher: Man, boring SaaS just is not something... Like, I desperately did not wanna start another robotics company. I was so wanting to do something simple that I was even working on an e-commerce for planning funerals idea for six months. That seemed more cheerful than going to be miserable in the mud with robots.

And the thing is, I just can't work on anything else now.

And I think a lot of people feel the same as me. So maybe right now some of them are doing OpenAI for asking your plumber questions, and there's money in that right now. But there's a lot of people at those companies thinking, how do I get back to robotics?

Where do I find a robotics application that is fundable and buildable and valuable? And I can't wait for that to start bearing fruit.

[02:09:55] Audrow Nash: Me too. Hell yeah, I love that answer. Okay. Hell yeah. Thank you both. It's been so much fun having you on the...

[02:10:02] Stefan Seltz-Axmacher: Thanks so much, Audrow.

[02:10:05] Audrow Nash: Hell yeah. All right. Bye everyone.

[02:10:07] Stefan Seltz-Axmacher: It was a pleasure hanging out. Thanks so much for joining us.

[02:10:10] Audrow Nash: You made it!

I should be quiet just in case you're sleeping peacefully. In any case, I hope you enjoyed the interview.

What did you think? Is Polymath Robotics on to something? Would you use their service? Did you like the demo? I'd love to hear what you think in the comments or on X.

Also, I host a weekly space on X on Thursday evenings, US time, where we talk about all things robotics, including this interview. Feel free to come and chat.

Okay, that's all. Happy building.

[00:00:00] Episode Intro

[00:00:00] Audrow Nash: What do you think when a robotics company comes into an old industry? Do they shake it up? Tell everyone that their old technology is out of date? Make things so complex that only people with PhDs can understand what's going on? I think that's the fear of a lot of people, especially people outside of robotics.

But it's not always the case. In this episode, I speak with Ben Alfi, CEO and co-founder of Bluewhite Robotics, a robotics startup in agriculture that just closed their Series C investment round. Bluewhite is doing something different. They're trying to blend in. They want to make it easy for farmers, so they can use their existing equipment, work with the same dealers, and do things their way.

All while adding robots to make things more efficient, and, especially, get the work done in spite of increasing labor shortages. You'll like this episode if you're interested in robots and agriculture, the ethics of disruptive tech, and a clever business model that creates opportunities for more jobs. I'm Audrow Nash.

This is my podcast. I hope you enjoy my interview with Ben Alfi.

[00:01:09] Introducing Ben Alfi and Bluewhite Robotics

[00:01:09] Audrow Nash: Hi Ben. Would you introduce yourself?

[00:01:12] Ben Alfi: Hi, I'm very happy to be here today.

My name is Ben Alfi, and people call me Alfi. Too many years in the Air Force, where you're called by your surname. And I'm 50 years old, a very young entrepreneur though, only six years in the startup ecosystem.

[00:01:37] Audrow Nash: Wow.

[00:01:38] Ben Alfi: And, CEO and founder of Bluewhite.

[00:01:43] Audrow Nash: Yes. And tell me about Bluewhite.

[00:01:45] Ben Alfi: Bluewhite is a data-driven autonomous farming company. We are helping to create disruption around autonomy in the agriculture market. And the way we do it: we are bolting onto existing tractors, transforming them to autonomy by adding a kit. That way, one person can operate different types of tractors on his farm with an iPad or a laptop or a cell phone, whatever he needs.

And the idea is that, with the labor shortage in the market today and food getting more and more expensive, Bluewhite has created a model that enables quite fast adoption, by dealing with the existing people you have and the existing tractors you have in the field.

What's nice about what we're doing is that we're focusing on permanent crops, meaning vineyards, citrus, apples, almonds, and suchlike, what have been called high-value crops. So it's all year long, a lot of tractors per acre in the same area. These are the places where the labor shortage is very high and the demand is very high.

And on the technological side, which you and the listeners are very keen on, these are areas with no connectivity, no GPS. So relying on standard GPS or standard connectivity won't get you anywhere. So we have nice technology, which I'm sure we'll dig into along the discussion.

[00:03:51] Audrow Nash: We definitely will.

[00:03:53] Outfitting a tractor with Bluewhite's tech

[00:03:53] Audrow Nash: Okay, going back just a little bit, you brought up a lot of very interesting things there. First, you mentioned it's a kit, so you're bolting it onto existing tractors. Tell me a bit about why you take that approach and what that actually means. Do you have a box you're just strapping onto tractors and then integrating into the system, or what does it mean?

[00:04:18] Ben Alfi: It means that we are a commercial autonomy company. So today, the go-to-market is that dealerships of John Deere or New Holland are selling and installing Bluewhite capabilities on tractors. And how does it go? We're sending, like, an IKEA kit to a dealership. The dealership has an order from a grower who wants his existing fleet transformed to autonomy.

They take his tractors. In the morning, the tractor is totally manual, nothing on it. And in the afternoon, it's autonomous, after two people worked on it for a few hours, and after checkups, after all the integration, all the sensor integration and everything, from actuation to being connected to the cloud, so you can start operating it. And that's about it.

This is just the first phase. The second phase is how to implement it on the farm itself, and this is also something that we're helping with, or the dealership can do it too.

[00:05:34] Audrow Nash: Okay. So tell me more about this kit. When these two people are installing it in just a few hours, what are they taking as components, and how are they connecting them to the tractor, or whatever equipment they're going to automate?

[00:05:55] Ben Alfi: It depends on how smart the tractor is. Most of the tractors today are just manual tractors with no computer on them. Some of them are already digital in some places. And some of them, in the future, will be drive-by-wire, so you can actually connect to some kind of computer and work it. The way it's done today, the kit includes actuation: the gas, the brake, the digital electric gear. Another component is the sensing: front sensing and other sensors to understand what is happening. So the actuation is the muscles, the ability to move the muscles, and the sensing is the ability to sense and understand what's going on around me.

And you cannot depend on just one sensor; it can be GPS, LiDAR, visual, and others, because we're doing sensor fusion. A set of communications. And compute, because you also need onboard computing: if the tractor doesn't have connectivity, it needs to know how to drive safely, in a good way, even without somebody looking after it.

[00:07:18] Audrow Nash: Very cool. It's really cool to me that you are outfitting tractors that may have no digital systems in them, like purely mechanical...

[00:07:28] Ben Alfi: Most of them are, today. We're guessing in five years it will be different, but most of them are just without anything. Maybe some kind of CAN bus to identify faults, like oil status or fuel gauges or other things like that, I don't know. But most of them are really classic mechanical tractors.

[00:07:56] Audrow Nash: And then, so you'll have, I don't know, probably something that looks like a cam or a little foot on a servo, and that servo can push the accelerator, and another one will push the brake. So part of the technicians' installation work is to place these motors so that they're actually accessing the controls.

Is that correct?

[00:08:21] Ben Alfi: You're doing it on top. I look at it as if it's like a handicapped-accessible vehicle. But these are off-the-shelf capabilities. The big things are the algorithms and the smarts behind it, and understanding how to operate with the same software all those different types of tractors, different types of crops, day and night, and also different types of implements, whatever is in the back.

[00:08:52] What types of jobs are they automating?

[00:08:52] Ben Alfi: We're not just tractoring, we need to make sure the job is being done.

[00:08:58] Audrow Nash: Yeah. So not just tractoring, what other jobs or what other vehicles are you automating?

[00:09:04] Ben Alfi: When we say not just tractoring, it means that the tractor needs to mow, to spray, to apply herbicide, to apply pesticide. Whatever is at the back over there needs to be very accurate, if it's working correctly. We are saving so much money on chemicals, and saving the earth from just putting extra chemicals on the vineyard or almond trees. Our sensors understand if there is or there isn't a tree, so you can stop the sprayer, start the sprayer. Is it a big tree, a small tree? Is it the end of the row and now you just need to stop it? Today it doesn't happen. Today people just spend more and more on chemicals, and it's not good for the people that are driving next to it and dying from cancer too early.

And it's not good for the environment.

[00:10:00] Audrow Nash: Definitely. Yeah. I do think that there's a lot of opportunity for robotics to improve environmental things by making it so that you don't have to use as much fertilizer or pesticide, because you can have more targeted use through automation, and also maybe more persistent use, because you can have it running more frequently, versus having to spray more heavily because you're going to run less frequently because people are scarce, or something like this.

[00:10:29] Ben Alfi: And availability. You're not doing it at the correct timing, or the weekend costs you 150 percent because of the labor costs, or nighttime is too costly, and then you are not doing it at the correct time, so you just overspray, overuse your chemicals. And also, think of the amount of tractors that you need on the farm. We are saving around 35 percent of the amount of tractors you need, because they can do double.

[00:11:01] Audrow Nash: That's so cool.

[00:11:02] Ben Alfi: And we're saving 85 percent on the chemicals because we're doing it accurately.

[00:11:08] Audrow Nash: Okay, so that sounds wonderful to me.

[00:11:12] How do you do localization? What sensors do you use?

[00:11:12] Audrow Nash: I wanna understand the system really well before we get into the cloud infrastructure and everything that you guys provide. So you have actuators that are moving the tractor and controlling things. What kinds of sensors do you typically use?

'Cause you mentioned you're often in GPS- or connectivity-denied environments, what are you relying on? And you mentioned sensor fusion, but how are you approaching it from a sensing perspective?

[00:11:41] Ben Alfi: And I think this is a key question, and also key to how we're approaching it.

We've been talking about autonomous vehicles for quite a long time. There are a few, very few, companies that can say that they are commercial in autonomous vehicles, like Bluewhite.

[00:12:00] Audrow Nash: Definitely.

[00:12:01] Ben Alfi: Being commercial means that you need redundancies and you need to be safe, first and foremost.

It's easy to demonstrate a tractor running in the farm or in the field, but it's totally different to have 50,000 hours of those tractors running around day and night, tractors of a few tons, with the grower. So how do we do that? It's all about redundancies, and the way we are creating those redundancies is by creating parallel navigation solutions.

It can be a navigation solution done by GPS and RTK, where you have it. It can be a navigation solution by LiDAR, which is an amazing, broad sensor. A lot of people invested through the urban mobility companies, who invested in so much sensing, and we're taking those capabilities off the shelf to the agriculture space.

LiDAR sensor, visual sensor, odometry, and other tricks that we have added inside. And think of it that you are driving and all the time the computer gives you four types of solutions. So you can have decision making on who is now incorrect, because they are checking each other. And if you see an abnormality, you can decide who should be the main navigation system.

For example, a vineyard during autumn, and it's open skies, good reception: give the GPS and RTK, let them be number one, and keep LiDAR and visual as obstacle detection only. Almond trees, August, 120 degrees Fahrenheit in Fresno, California, under the foliage at night, no reception, no nothing: you'll give the odometry and the LiDAR, you'll let them be number one and two.

And then you'll put the visual at number three. And only then, you don't just rely, but understand where you are with the other one. All this is happening automatically. You as an operator are not deciding, because the operator is, you know, somebody who doesn't understand technology.

Everything needs to be very simple. Stop, play, spray, mow. This is the speed I want. This is the block I want. Other than that, we are counting on the machine to be safe. So the machine automatically knows, hey, I'm in an almond block, it's summer, this is what I see, I understand what the prioritization should be.

I will give a certain prioritization, but if something is changing in real time, I will change the prioritization accordingly. And unlike urban mobility, at the end of the day, it's only up to five miles per hour. So I can just stop.
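
The context-based prioritization Ben describes could be sketched roughly like this. All names, fields, and thresholds here are my own illustrative assumptions, not Bluewhite's actual code:

```python
from dataclasses import dataclass

# Hypothetical sketch of context-based navigation prioritization.
@dataclass
class Context:
    gps_quality: float   # 0.0 (no fix) to 1.0 (RTK fix under open sky)
    under_canopy: bool   # e.g. almond foliage blocking reception
    night: bool

def rank_navigation_sources(ctx: Context) -> list:
    """Return navigation sources ordered by priority for this context."""
    if ctx.gps_quality > 0.8 and not ctx.under_canopy:
        # Open sky, good reception: GPS/RTK leads; LiDAR and vision
        # stay on for obstacle detection and cross-checking.
        return ["gps_rtk", "lidar", "visual", "odometry"]
    if ctx.under_canopy or ctx.gps_quality < 0.2:
        # Under foliage, no reception: odometry and LiDAR lead,
        # vision third; GPS is consulted but not relied on.
        order = ["odometry", "lidar", "visual", "gps_rtk"]
        if ctx.night:
            # At night a camera is less reliable, so demote it further.
            order = ["odometry", "lidar", "gps_rtk", "visual"]
        return order
    return ["gps_rtk", "lidar", "odometry", "visual"]

# Vineyard in autumn, open skies: GPS/RTK comes out on top.
print(rank_navigation_sources(Context(gps_quality=0.95, under_canopy=False, night=False)))
```

In the real system the other solutions keep running in parallel as cross-checks; this sketch only captures the "who is number one right now" decision.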

[00:15:14] Audrow Nash: Yeah, that's a big advantage that makes this problem a lot easier, even though it's super hard: you can just stop. If something's weird with the connectivity, just stop, and it's not that big of a deal.

[00:15:27] Ben Alfi: It is a big deal for the quality of work, and the idea that we're not stopping a lot is great. But on the safety side, it's a much bigger deal if you have a safety event.

[00:15:40] Audrow Nash: Yes, for sure.

[00:15:41] Ben Alfi: This is how we're working.

[00:15:44] Audrow Nash: So you have several different tracks that are all figuring out localization independently, and you can figure out which of those you wanna listen to.

So I'm imagining like a weighted consensus for figuring out where you are. And you mentioned that it changes based on certain factors: if you have the almond trees and they're blocking out GPS or whatever it might be, then you switch to just using sensors on the robot. How are you picking between these?

'Cause you mentioned that the farmer doesn't have to do it. Is it like you do an evaluation of the environment, and then you use the environment and your sensor availability and maybe some other heuristics, like time of day, to select a weighting for these different localization types? Or how do you pick from there?

[00:16:38] Ben Alfi: So I've tried to simplify it, but it's complicated. The way it's happening is that you have basic assumptions upon time of day, where I am, what is the field, what is the mission, spray or mow. If I mow, there will be more dust. If I'm spraying, it will be more humid, things like that.

What do I depend on? After that, how old is the orchard? Is it a young orchard, an old orchard? A lot of things like that are happening also. And also in the algorithms that we're using: should we rely on the classic algorithms or the AI? The AI is much more flexible, yet less predictable. It needs more maturity.

So these are the balances. Do I have an AI algorithm that is running well enough, and mature enough, and has a good scoring already, that I can let it be a part of the decision making? Or will it still piggyback, just riding along and collecting hours until the false positives and false negatives are at a good level?

So AI helps for scale. Classic algorithms help for starting to run, and these are balances. What is nice is that it happens in a way that's transparent. You have around four types of sensors and, let's say, anything between eight to sixteen types of solutions that are running, and that way you are able to run it.

Some of them are based on pre-assumptions, but the real-life scenario, what actually is happening, can change the assumptions in real time. And then at the post-processing, the planning for next time will be, okay, for this block the pre-assumptions were wrong, and these are the pre-assumptions that should apply.
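
The "piggyback until mature" idea could be sketched as each localization solution accumulating a trust score from how well it agrees with the fused estimate, with only solutions above a threshold getting a vote. Names, the threshold, and the update rule are all assumptions for illustration:

```python
# Illustrative sketch: a solution earns trust by agreeing with the fused
# position; untrusted solutions just ride along, collecting hours.
TRUST_THRESHOLD = 0.7

class Solution:
    def __init__(self, name, trust=0.5):
        self.name = name
        self.trust = trust  # running score in [0, 1]

    def update_trust(self, error_m, alpha=0.1):
        """Nudge trust toward 1 when this solution agreed with the fused
        position (error in meters below half a meter), toward 0 otherwise."""
        agreed = 1.0 if error_m < 0.5 else 0.0
        self.trust = (1 - alpha) * self.trust + alpha * agreed

def voting_members(solutions):
    """Solutions trusted enough to vote on the pose estimate."""
    return [s for s in solutions if s.trust >= TRUST_THRESHOLD]

classic = Solution("classic_gps_rtk", trust=0.9)
ai_visual = Solution("ai_visual", trust=0.5)  # still piggybacking

print([s.name for s in voting_members([classic, ai_visual])])
```

After enough consistent agreement, the AI solution's trust crosses the threshold and it joins the vote, matching the "collecting hours" progression Ben describes.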

[00:18:39] Audrow Nash: That seems very cool.

[00:18:41] Adding sensors to the tractor's tools + levels of support

[00:18:41] Audrow Nash: Okay. And so your tractors have these actuators to control them, and then they have these sensors that you are using to figure out things like localization. And then you have your implement, which is whatever you're towing: the mower, the sprayer, these kinds of things.

Are you putting sensors on the implement, on the sprayers? Am I calling it the right thing?

[00:19:08] Ben Alfi: Yes, you are. Yes.

[00:19:09] Audrow Nash: Okay, just making sure. So are you putting sensors on the implement, or how are you evaluating how you're doing with the implement? Or do you just have a model of what it is that you're towing and when you turn it on and off?

Or how sophisticated is your control of the implement?

[00:19:29] Ben Alfi: So just like the relationship we have with the tractor: we don't own the tractor. The tractor is the grower's or the OEM's, and we are blending in. The same is the idea with the implements. Because we're commercial, the benefit for the implement providers or makers is that they want to work with us, and we provide APIs and help them make their implement smart.

There is a sprayer and there is a smart sprayer. What we're doing is we have four grades, four levels of how smart an implement should be. The first one is on and off: can I just switch it on and off? Then, how can I assure the quality of the work?

Can I sense the height of the mower with a sensor, yes or no? Sometimes the company that created that mower will talk to us, and we'll help them with how to adjust and what should be implemented, and they will do it. In some places we'll just add a sensor in key areas. For scale, we see ourselves more as helping those companies create this next generation and make their implements what we call

Bluewhite Ready. So it's on-off; sensing on the quality of work; and what we call preemptive maintenance: if I know that I can drive at a certain PTO level, a certain engine RPM, and a certain miles per hour, and now I see that I need more force and nothing has really changed,

Okay, what is happening over there with the mower? What is happening over there with the implement? And then the last thing is the ability to report and also to control: control nozzles, certain nozzles to shut off, certain nozzles to open. These are things that we're doing, working with the implement companies or enabling it ourselves in certain areas.

This is how we're doing those capabilities.
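
The four grades Ben lists could be written down as a ladder, with his "preemptive maintenance" check as the concrete grade-three example. The grade names and the tolerance value are my paraphrase, not Bluewhite terminology:

```python
from enum import IntEnum

# A sketch of the four implement "grades" described above.
class ImplementGrade(IntEnum):
    ON_OFF = 1            # can only be switched on and off
    QUALITY_SENSING = 2   # sensors assure work quality (e.g. mower height)
    PREEMPTIVE_MAINT = 3  # expected PTO/RPM/speed vs. actual load flags wear
    SECTION_CONTROL = 4   # reports back and controls individual nozzles

def needs_more_force_than_expected(expected_rpm, actual_rpm, tolerance=0.1):
    """Grade-3-style check: at a fixed PTO setting, an RPM droop beyond
    tolerance with no change in conditions suggests something is wrong
    with the implement (e.g. the mower is stuck or dulling)."""
    return actual_rpm < expected_rpm * (1 - tolerance)

# Engine expected at 2200 RPM but sagging to 1900 under the same load:
print(needs_more_force_than_expected(2200, 1900))
```

Because the grades are ordered, a "Bluewhite Ready" check can be as simple as comparing an implement's grade against the minimum a job requires.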

[00:22:00] Audrow Nash: Okay, I like that a lot. And those are very exciting. The maintenance one in particular is very exciting to me.

[00:22:06] Ben Alfi: It's huge. Real time, for safety, what we call critical understanding. It's amazing how we neglect the understanding of a person that is operating, who can just sense that something went wrong. How do we understand the same thing? Noise, vibration, something is stuck, I need more power, all kinds of things like

[00:22:32] Audrow Nash: Anomalous? Yeah, something's different, something might've happened. Yeah, I think that's one of the huge gains from automating these systems. So it sounds to me like the core competence of your business, of Bluewhite, is to create this mobility kit for farm vehicles, and then there's an added thing.

It's almost like a side business within it: to expose what you're doing and give yourselves systems so you can control implements, and to help other companies make their implements work well with your systems. And so it's outside of your work, but other companies can have lucrative jobs of just retrofitting or designing their systems to work well with your system.

[00:23:31] Ben Alfi: First of all, we see ourselves as also responsible to make it happen with

[00:23:35] Audrow Nash: to help them,

[00:23:36] Ben Alfi: and I think

[00:23:36] Being a full solution for agriculture companies

[00:23:36] Ben Alfi: there is a huge element that we've disregarded until now: the operating system.

[00:23:44] Audrow Nash: Ah, okay.

[00:23:44] Ben Alfi: So this is why we see ourselves as a data-driven autonomous farming solution, and not an aftermarket kit for autonomy. What does that mean?

It means that on a certain farm, you don't have just John Deere, you don't have just New Holland, you don't have just one type of tractor. You have different types, and as an operator you don't want to have six types of operating systems because you have six types of tractors, or implements, or crops. One operating system knows how to operate everything.

And the last layer is that those tractors are running, the sensors are there already, and they collect a lot of data. There is no reason not to share that data with the grower, with the farm manager, with the agronomist, with whatever agriculture company the grower wants to work with that wants to do yield prediction or weather, or with his insurance company, to show them that his farm is working correctly.

The ability to also be a data enabler for the grower, for ourselves, to give him operational insight, to share it with third parties that can also give him agricultural insight, all that. This is how we look at ourselves: as a full package.

[00:25:06] Audrow Nash: Yeah, it's the whole vertical stack of farming using large machinery. Okay, I love that.

[00:25:14] Sending data to the cloud

[00:25:14] Audrow Nash: Tell me more about the cloud component. I guess the first thing, 'cause I'm going up the stack: we started from actuators, then we go to sensors, with implements included in that.

Then after that, it's how are we sending data to the cloud? Like, how does that actually work? 'Cause you're in these connectivity-denied environments. Is there like a home base that the tractor returns to where it has upload ability, or how does this work?

[00:25:47] Ben Alfi: This is one of the biggest things that we didn't know that we didn't know. And I think we have achieved, first and foremost, the system engineering, and second, how to actually make it happen. What is the relationship between a cloud-based operating system and, what we know from other markets, an IoT device?

So if we look at this robot as an IoT device: one thing is what needs to be computed on the tractor. Now, how do I also relay it? Okay, you are the user, you're operating. It was under the foliage, running all that row for half a mile. How do I know what happened? When should I know?

Should I know it in real time? Should I make an assumption and then upload it? How do you also cache it correctly to the operating system so it won't overlap with what really happened? So this is the coordination that we've done. We've created a classification of what is really important.

Where are you? Or stop, start: I want you to stop now, I want you to start. Then, are you healthy? What is going on? Do you need my help? What is really happening? Then the data of the camera: collect it now on your onboard computer.

But upload it when I need it, or when I ask for it, or at the end of the shift. Not all the information is important right now. And black-box it: it is a machine, something will happen, and I need a black box to understand when a fault has happened. Record: I need you to record all the time and track what has happened, so I can extract logs and all those layers of information.

So some of it will be on the cloud, some of it will be on the endpoint, on the computer itself, and it should be synced in a way that it's not just overwriting, so I know exactly which tractor it came from, and also privacy-wise and security-wise, all those layers.
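
The tiering Ben describes, where some messages go out in real time, some on demand or at shift end, and everything lands in an on-board "black box" first, might look roughly like this. Tier names and the class shape are illustrative assumptions:

```python
import json
import time
from enum import Enum

# Sketch of telemetry tiering for a connectivity-limited tractor.
class Tier(Enum):
    REALTIME = "realtime"          # position, stop/start, health
    ON_DEMAND = "on_demand"        # camera data: upload when asked for
    END_OF_SHIFT = "end_of_shift"  # bulk logs, synced later

class TractorUplink:
    def __init__(self, tractor_id):
        self.tractor_id = tractor_id
        self.black_box = []  # everything is recorded locally first
        self.deferred = []   # cached until requested or shift end

    def report(self, tier, payload):
        record = {"tractor": self.tractor_id, "t": time.time(),
                  "tier": tier.value, "payload": payload}
        # Always record locally, like a flight recorder.
        self.black_box.append(record)
        if tier is Tier.REALTIME:
            return json.dumps(record)  # would be sent over the air now
        self.deferred.append(record)
        return None

uplink = TractorUplink("tractor-07")
uplink.report(Tier.REALTIME, {"event": "stop", "reason": "lost_connectivity"})
uplink.report(Tier.ON_DEMAND, {"camera_frame": "frame_0042"})
print(len(uplink.black_box), len(uplink.deferred))
```

The key property is that the black box log is a superset of everything uploaded, so records are tagged by tractor and never overwritten, matching the sync behavior Ben mentions.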

[00:28:16] Audrow Nash: So it makes sense: you are prioritizing what to send out first, given your connectivity and things like this. When you were mentioning the black box, I was a bit confused, but I think what you're referring to is like in airplanes, where they have that box that's supposed to survive no matter what.

So there's an analogous system that records logs and things like this on tractors?

[00:28:43] Ben Alfi: Although there is no definition that there should be. We are coming from 20 years of autonomous vehicles, mostly air vehicles and others, and we believe that this is how it should be. So we created, like the urban mobility standards, agriculture standards. This is a great way to understand what has really happened, and I can also pull information on demand.

And to troubleshoot, understand what is happening. And also, if something has happened, to know why it happened and where.

[00:29:24] Audrow Nash: Yes, definitely. And then, if you're in a situation where you're operating in a fully internet- and GPS-denied environment, so you just don't have access to those, maybe you then just tell the user up front that you are operating in those kinds of environments?

[00:29:44] Ben Alfi: Yeah, we created local 5G networks, as an example; we worked with Intel in the past.

[00:29:52] Audrow Nash: Ah, do you put 'em throughout the orchard or something like that, or...

[00:29:56] Ben Alfi: We try not to spend too much money on infrastructure right now, also for the grower. You need it to be cost-effective. Each farm somehow has an internet connection; it can also be at the farm manager's office. And from there you are linking with an antenna, if needed with local 5G networks. And we're working with third parties to enable that.

We will see more and more symbiotic private and public networks running together. And again, it should be transparent to the tractor and transparent to the tractor operator.

[00:30:41] Audrow Nash: That's really cool. Okay, so where connectivity is denied, not GPS, maybe you don't need GPS, you'll provide some infrastructure. And maybe the robot can still go in and out of connectivity, but you are checking in occasionally and prioritizing things like, I can turn it off if I need to, so it doesn't drop out for huge amounts of time. It's just a second here or a few seconds there.

[00:31:09] Ben Alfi: Exactly. And you can also decide what the rules are. Okay, if I have no connectivity, should I stop? Am I allowed to get to the end of the row? Before I turn at the end of the row, should I stop and wait until there is connectivity? The grower can decide what the rules are: okay, how free is that place from any others, to just run and play?
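
These grower-configurable rules could be captured in a small config like the following. Field names, the rule values, and the timeout are assumptions for illustration:

```python
from dataclasses import dataclass

# Sketch of grower-configurable connectivity-loss rules.
@dataclass
class ConnectivityRules:
    on_loss: str = "finish_row"   # "stop_now" or "finish_row"
    wait_at_row_end: bool = True  # pause at the headland until reconnected
    max_offline_s: float = 30.0   # hard stop if offline longer than this

def decide(rules, offline_s, at_row_end):
    """Return the action the tractor should take right now."""
    if offline_s > rules.max_offline_s:
        return "stop"  # safety backstop regardless of preference
    if rules.on_loss == "stop_now":
        return "stop"
    if at_row_end and rules.wait_at_row_end:
        return "wait"  # hold at the end of the row for connectivity
    return "continue"  # allowed to finish the current row

# A cautious grower who wants an immediate stop on any dropout:
cautious = ConnectivityRules(on_loss="stop_now")
print(decide(cautious, offline_s=5.0, at_row_end=False))
```

The point of the dataclass defaults is that onboarding just means overriding a few fields per farm, rather than writing new behavior.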

[00:31:43] Configuring their robot's behavior + onboarding

[00:31:43] Audrow Nash: So what strikes me with this is that the growers often have a lot of configuration available to them. So if they wanna set the rule, don't drive this far without connectivity or something, 'cause I wanna be able to shut you off at any time. Or, I don't know, maybe there's more specific, like, farm procedural stuff.

One farmer wants to do it this way versus another wants to do it that way. How do you manage and expose the growers to that kind of configuration? I imagine it's like customer onboarding and it's a one-time thing?

[00:32:20] Ben Alfi: Exactly. Exactly, spot on. So it's the customer onboarding. In an orchard you spray around 20 times a year for 20 years.

[00:32:31] Audrow Nash: a

[00:32:32] Ben Alfi: The idea is, these are the efforts at the beginning. And also we have our own recommendations. So I want it to be with recommendations and automatic: okay, it's an almond orchard.

This is how it looks, these are the automatic settings. But I also want the grower to have the ability to be flexible, if he's used to skipping every two rows. So I will give him a planning, but automatically it's only every row. You just tell me how you want to do it: you want to skip every three rows, every four rows.

You like a different speed. Anything that you want should be addressable. And again, he has either a Bluewhite team on support online that he can get more help from, or dealerships that are helping him with the onboarding. So he's not alone. This is the main idea: not to be alone. Tell me what to do, but also, when I want to do something, let me have it.

Let me do it. And as long as it's safe and doesn't infringe on safety, we will help.

[00:33:43] Audrow Nash: It's all good. Yeah. So when you send out a person or a small team of people to onboard a new customer, is it like engineers are going out, or is it

[00:33:58] Ben Alfi: It's not.

[00:33:58] Audrow Nash: They're not. Okay. So a lot of this is exposed at a high-level configuration.

[00:34:06] Ben Alfi: It's operators. It's even just operators. We used to operate in the past. Now we're moving from a service company to a product company, because it's mature enough. But the know-how on how to do it is there, to help adoption and to help the growers use it in a good way.

We're not talking about a year of work; it's days and you're good to go. Or a day, it depends. We have growers who have 100,000 acres of orchards. So it depends on how big you are.

[00:34:52] Audrow Nash: Yeah. Oh man. That's awesome.

[00:34:54] Working with farm equipment dealerships

[00:34:54] Audrow Nash: So it sounds like these dealerships are really playing a critical role for you guys, where you were doing this yourself, but now you're letting them handle the setup and you're turning into more of a product company. Tell me more about the dealerships.

[00:35:11] Ben Alfi: All the market is under transition. Think of it: those dealerships have sold metal for the last century, and now they need to be a precision provider, and there is a lot of discussion about it. Okay, how?

[00:35:29] Audrow Nash: They sold metal. That's

[00:35:30] Ben Alfi: Yeah, it was a big question. When a dealership is installing a kit on a tractor, and we want to see that every section was done correctly, will he be opening a Jira ticket, yes or no? Can he do it, yes or no? And we found that the dealerships understand that they must be able to do it. They must, and we are working with them together, hand in hand, on how to do it correctly. So the technician is also, it's not a software technician, it's not DevOps.

But it's what we are starting to see in the world as a robotics technician. So 95 percent of their job will be on the mechanical side or the concept-of-operations side, but when needed, yes, you open a laptop and connect.

[00:36:33] Audrow Nash: Wow. How large is your operation with everything? Like, how many dealerships are you working with? How many robots have you deployed?

[00:36:44] Ben Alfi: We are just in the first years; we are now seeing ourselves growing more and more. We have around 100 tractors running around already, and there are 400,000 tractors waiting for aftermarket, just in permanent crops. Our goal is to get to 10,000 in the next few years, together with those dealerships. It is an amazing experience. The main idea is how do you grow while making sure that safety is a top priority. How do you grow where gross margin is important for yourself, for the dealerships, and also for the growers to have a positive ROI. We have all that fixed, and now we're just scaling more and more capabilities. It's like a spiral development: okay, I didn't know how to run in apple trees, and now I know. I didn't know how to do spraying, now I know. So another capability, along the life of the project.

[00:38:00] Audrow Nash: Yeah. Okay.

[00:38:01] Spiral development

[00:38:01] Audrow Nash: And you mentioned spiral development, so it means gradually. Can you describe the spiral

[00:38:08] Ben Alfi: Yeah, sure.

[00:38:11] Audrow Nash: as you're going.

[00:38:12] Ben Alfi: Yeah, it's an agile process, yet you can see it in a long-term roadmap. Okay, how many types of tractors do you know how to transform? So we know around 20 already. How many types of crops do you know? So we started with almonds, then pistachios, then trellis, then citrus, then apples, then all the others. How many types of implements? We started with spraying, then herbicide, then mowing; now we're doing harvesting with a harvesting company. So all those capabilities that you're adding. And it's great because we are transparent also with the dealerships, transparent with the growers.

They know what to expect over their upcoming years. All the updates are over the cloud. And it's, again, in a way that you can enable progress without buying a new iPhone every year.

[00:39:22] Audrow Nash: Because you keep shipping updates for this kind of thing, so they don't necessarily have to buy more tooling. I would even imagine that if, say, you upgrade or something, you can just add more compute. You can leave the sensors, you can leave the actuators. And so the upgrades may be somewhat painless.

[00:39:42] Ben Alfi: Yes.

[00:39:43] Audrow Nash: Like relatively, in terms of cost,

[00:39:45] Ben Alfi: The idea

[00:39:46] Audrow Nash: back into the same system.

[00:39:47] Ben Alfi: On the hardware, we're trying to maximize as much as possible. Those sensors are amazing, and we can maximize much more. I'll give you an example: we started with using the LiDAR for navigation, and now, with the blue spray, we also know how to save on spraying, because while you navigate, you know that there is a missing tree.

Let's tell the sprayer to stop. these are examples of how you create more and more applications along the years with the same hardware. And our goal is not to change hardware as often as others might be. We just don't want to go through that effort.

[00:40:28] Audrow Nash: Oh, for sure. Yeah. Especially if it's already working. Makes sense. Just keep going.

[00:40:32] Scaling their operation

[00:40:32] Audrow Nash: Now tell me about scaling. So you've gotten a hundred tractors operating how many hours?

[00:40:41] Ben Alfi: We've done more than 50,000 hours already of autonomous running. Our growers have 300,000 acres. We're even doing a roadmap with each grower. A grower would have anything from 10 tractors and up; we have growers with 400 tractors. So you'll go with four tractors, and then 12, and then go up to 80 percent of the fleet.

You don't need all the fleet. So it's a land-and-expand mode in a lot of places. We've been in California, and now we're also operating in Washington state. We're going to be in Europe, I'm guessing by the end of the year, with checkups over there, and 2025 with more goals over there.

And Australia also has huge demand and is asking us to come. I just find it hard to be awake on three continents at the same time, so we'll see how we're growing. The need is huge. The market is totally huge; there's enough room for another 10 Bluewhites in the world. And this is how it is.

[00:41:57] Audrow Nash: To 400,000. Yeah, and it'll be exciting. Just for more context, how has growth been looking? I imagine it's on an exponential curve, like in the next year, I would imagine maybe you have a thousand tractors or something like that.

[00:42:14] Ben Alfi: It depends on what method we want to go with. We are pacing ourselves this year. We are going out now with generation three. Generation one was to take the person out of the cabin; you still see other autonomous companies in the world, in other areas, where you still have a safety driver in the tractor.

Generation two was that you don't need to look at the tractor; it can just run, and it will work, and it will know when to stop or run, and you have the redundancies. Generation three, that we are now going out with, is that all the tractors are operated by the customer. And to get to operated-by-the-customer, we are looking for all the feedback that will happen along the year and getting more and more understanding of what is needed.

Then, boom, we can send a kit to a dealership in Australia. We don't need to go there; over the cloud, we know how to upload. He does the onboarding and we're done. So it's more of stages and maturity, and then you can keep on running.

[00:43:27] Audrow Nash: I see. So if I understand correctly, the scaling challenge at the moment is you are letting in only enough customers that you can handle the feedback, and then as you get ready, you'll take on more and more. Basically, it's biting off what you can chew right at the moment.

[00:43:47] Ben Alfi: And in parallel, also: how do you educate a dealership to do all that work? How do you deal with two sides, the customers and the dealership, and also your kits, and mean time between failures, mean time between losses, and things like that? You want to make sure that you are maturing it, and then you can scale it up.

Part of this last round that we just closed, round C, is around that area: that we can go to scale, just like you talked about, go to those big numbers. Getting it from zero to one is one obstacle, then one to 10 is a huge obstacle, then 10 to 100, and from 100 to 1,000, this is the area that we're dealing with.

[00:44:42] Audrow Nash: Every change in magnitude. Yeah, for sure. Very exciting.

[00:44:46] How large is the tractor market?

[00:44:46] Ben Alfi: Just to get a guess of how big a market we're talking about: to operate a tractor today in California costs the grower $100,000 yearly.

[00:44:59] Ben Alfi: Okay, 400,000 tractors are running, and think about the yearly cost. In the US it's more expensive; in other places, in Europe, it's also expensive.

So think about how much money we can save the world on the operation cost, on the chemical cost, on the amount of tractors, and in that way also maybe make sure that you and I can keep on eating food at a reasonable price. Because food is getting more expensive; almonds, good food, is just getting more and more expensive, and we need to control that. And on the other side, the amount of money that the growers are willing to pay us, because we save them money. It's easy for them to say, oh, you give me a 70 percent cost reduction? Yeah, sure, you can have $30,000 a year on the tractor, no worries. Okay, so this is how big the market is.
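
A rough back-of-envelope from the numbers Ben gives in this section (400,000 tractors, roughly $100,000 per year to operate one, and a grower happy to pay around $30,000 per year for a ~70 percent cost reduction). This is just arithmetic on his figures, not a Bluewhite projection:

```python
# All inputs are Ben's round numbers from the conversation above.
tractors = 400_000          # tractors waiting for aftermarket, permanent crops
cost_per_tractor = 100_000  # $/year to operate one tractor in California
fee_per_tractor = 30_000    # $/year a grower might pay for ~70% cost reduction

total_operating_cost = tractors * cost_per_tractor          # spend today
addressable_revenue = tractors * fee_per_tractor            # fee pool
net_savings_per_tractor = cost_per_tractor * 0.70 - fee_per_tractor

print(f"${total_operating_cost / 1e9:.0f}B operating spend, "
      f"${addressable_revenue / 1e9:.0f}B addressable fees, "
      f"${net_savings_per_tractor / 1e3:.0f}K net grower savings per tractor")
```

So even with the grower keeping most of the benefit, the fee pool alone is on the order of tens of billions of dollars per year, which is the scale Ben is gesturing at.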

[00:46:01] Audrow Nash: Do you think that kind of savings, like 70%, is feasible, or...

[00:46:09] Ben Alfi: Now, I think it can be more. And this is without dealing with the yield, what the data can give you on even improving yield. Our approach is, first and foremost, let's take down the operation costs, because you can compare that, and then talk about yield. Because, okay, who will pay until yield is proven?

This was the big question. And,

[00:46:38] Audrow Nash: mm-Hmm.

[00:46:39] Ben Alfi: It's season after season, every time. So first of all, when you have certain costs and you immediately bring the cost down, the grower sees it automatically. We will...

[00:46:52] Audrow Nash: That's really remarkable. So as you guys get more economies of scale in what you're doing, so that the dealerships are the ones mostly interfacing with the customers, and the dealerships aren't fielding you many questions because most of the things the customer wants you already support, as you get to that level of scale,

what kind of reduction in tractor expenses do you expect? Do you think it could be like 90%? Where's the upper limit of this?

[00:47:26] Ben Alfi: It's about trust. The more trust and adoption there is, the higher it will be. We'll see. Our assumptions are around 85 percent on cost, and around 30 to 40 percent on the number of actual tractors that you need running around. And this is a huge amount of capability.

It's the only way. Also, we're going to be 10 billion people in 2050, and the way we are making food today is just not enough to feed everybody. So this is a critical part.

[00:48:14] Labor shortages and people not wanting to be farmers

[00:48:14] Audrow Nash: I agree. And then, getting into kind of the labor discussion on this, I think we have the same opinion here, where we just don't have enough people for these roles. There are such significant labor shortages that a lot of farmers are probably trying to hire people and unable to.

A lot of people don't want to go into farming and don't wanna be tractor drivers and this kind of thing. But can you talk a little bit about the labor...

[00:48:42] Ben Alfi: You know how to drive with a shift gear?

[00:48:46] Audrow Nash: Yeah.

[00:48:47] Ben Alfi: Okay, and most of the people your age already don't know. You need to press a clutch to drive a tractor, and not burn the clutch. Young people, they just don't know how to drive a tractor, and not only do they not know, they don't find any reason to drive for eight or ten hours

with the chemicals on them and suffering, or being out in the high sun and suffering from that part. We see that even in Latin America and any other place, you don't have people who want to do that job anymore. And the baby boomers are now starting to reach their pensions.

They are starting their pensions. They were the last...

[00:49:42] Audrow Nash: than half of them are retired now

[00:49:44] Ben Alfi: Yeah, exactly. And they were the last tractor operators running. And the number of tractors that is needed is huge. I think just in Washington State there is a shortage of 30,000 tractor drivers.

[00:50:06] Audrow Nash: Wow.

[00:50:07] Ben Alfi: And the prices, they passed $20 an hour a long time ago to drive a tractor in California.

And the quality is not good enough. When you want a person to drive two miles per hour for eight hours, the same speed, okay, he's unable. And downtime: you need to stop, you need to have a rest, to eat, and everything. In an eight-hour shift, we found out that the tractor is driving around four and a half hours.

When it's an autonomous vehicle doing the same thing, it's seven hours and ten minutes, because you just stop for refills.

[00:50:56] Audrow Nash: Ha! That's...

[00:50:58] Ben Alfi: You don't stop for a chat. You can coordinate when the refilling will be, so there'll always be one in refill while all the other three are still running. And somehow, when it's manned operation, everybody's gathering around at the same time for a refill, to have a chat about Sunday's football.

So this is what happens. It's not just the quality, being accurate: when you're asking a robot to drive two miles per hour, it drives exactly that. When there is an abnormality, you react immediately, you don't postpone it. Availability, all those issues, are just huge. And what is also beautiful is that we see people coming to Bluewhite, to Fresno, to the Central Valley, to Washington State, to Yakima.

Coming from Seattle, coming from San Francisco, from the Bay Area, saying, hey, I love this, I can work with the iPad. And diversity-wise, it's not just men. We have men, women, handicapped people, everyone with the ability to operate. And when we're talking to growers, they say, at last there is a reason for my son or daughter to come back and work with me, and not to send them to work at Amazon in Seattle or something like that.

So we are transforming the new blue collar of the 21st century from pilots and drivers to robot operators. And we see it live in front of us.

[00:52:42] Audrow Nash: That's so cool. So I guess, what role will the farmers have in this? I love the vision. Where is the person, the farmer, adding value to the operation in this?

[00:53:01] Ben Alfi: The autonomous farm is not a farm without people. It's a farm with people that have robots that help them, and they have data systems that give them recommendations. But bottom line, the decision making, the hunch, the what-is-needed: it's data-based decision making.

And this is what's happening. We see it in other places, in the classic day-to-day jobs that we're doing, where we get recommendations. If it's how to drive from one place to another, you still drive, right? And other things like that. The idea is that we want those people, who have so many skills and hunches about how the farm should run, to have the ability to just deal with that, and not deal with the simple things of just driving that block of 200 acres at two miles per hour. We shouldn't put a human on that part. Put them on: okay, should it be today? Should it be tomorrow?

Let's look at the, oh, it's going to rain right now, okay, so let's go and do this and that. These are the areas where we see humans bring the maximum factor, and it will be like that for quite a while.

[00:54:29] Audrow Nash: Yeah, I like that. So they're the decision makers, and I imagine there are also a lot of tasks around the farm that are just hard to automate, that plenty of people will be involved with too. But they can shift from also doing the tractor work to just doing these very hard-to-automate tasks. And it makes the labor gap a little smaller.

[00:54:52] Ben Alfi: Great point. Great point. I will use that the next time I talk. Because yes, for example, we insisted on keeping the seat vacant, so that with the switch of a button you can move from autonomy to manual. Because you cannot do everything. Or, I don't know, you want to take it somewhere.

Cross it over a paved road and bring it to the other side of the block, to another place, all things like that. The ability to go from manual to autonomy, the ability to do those hard tasks, or those tasks that we talked about that are yet to be accomplished? Definitely, yes.

[00:55:43] Audrow Nash: So what strikes me is, I think this is an incredibly smart approach, where you allow the humans to still use the machinery, the tractor, just as they would before it was outfitted with autonomous capabilities, so that they can do certain unstructured things that would probably be a pain to get the robot to do.

And maybe it's just a one-off thing.

[00:56:10] Ben Alfi: A lot of growers use it as meditation, by the way. They just say, hey, I need just one hour to drive with my tractor, don't take it away from me. I'm guessing it's like horses. Okay, you still like to just ride a horse. But to do the job with the horse? No, I'm okay without that.

[00:56:30] Audrow Nash: Yeah, I like driving just a little bit,

[00:56:33] Ben Alfi: Yeah, exactly.

[00:56:34] Audrow Nash: I don't wanna drive all the time.

[00:56:35] Ben Alfi: Exactly. Exactly.

[00:56:36] Audrow Nash: Eight hours a day is too much.

[00:56:38] Ben Alfi: Too much.

[00:56:39] Audrow Nash: That's so funny. They're like, ah, just get in there, see how it feels, see how it's running.

[00:56:43] Future of unmanned tractors

[00:56:43] Audrow Nash: I love that. But going a little further, further into the future, do you imagine that we're gonna start building agricultural equipment that isn't manned?

Maybe, and I wonder about the timeline for this, building tractors and things like this that are not manned. Maybe we can optimize them so that they don't have... I'm sure there are different considerations in making a tractor that would result in a slightly different form if we say, oh, it doesn't need a human on it.

Do you think that we're gonna be moving towards that? And I think the approach of retrofitting existing tractors is much smarter than initially trying to create custom-made tractors, which would be very expensive, and growers would need to buy a whole new fleet again. But in the long term, moving towards a world where tractors may not have people on them ever.

Tell me a bit about that.

[00:57:45] Ben Alfi: This is my personal view, okay, and tracking it also from what we've seen in manned and unmanned aerial vehicles. What I think we'll see in the next, I'd say three to five years, is more and more tractors coming out of the line that are more digital, going from mechanical to digital, and from digital to drive-by-wire. Then the next generation would be with sensors already integrated inside.

And then, what is beautiful about a tractor is that it's multi-mission. Okay, I can use it for so many things. And the growers are more innovative than whoever invented that tractor about what to do with it. So you still want that flexibility, this multi-mission vehicle. So I see the seat staying there, vacant, for quite a while. We will see some classic robots without a seat, just for certain things, I don't know, lawnmowers that we see near our houses, or all kinds of cleaning machines and things like that. Those are very simplified cases. The thing will be the balance on the cost of material, and these machines are huge. If now, because you don't have a chair over there, half of the season it just sits and you're not using it, that's not the correct way to do it. So I think until, I'm guessing, 2030, 2035 at least, we will still see tractors going out with both manual and autonomous.

Around 2030 we'll start to see them with the autonomous capabilities inside, and still, think about the transition and how long it takes to adopt, and things like that. So we're off to a hybrid aftermarket era for the next 10 to 15 years. Again, it's a huge investment for the growers to buy a tractor.

Whereas it's almost zero thought, in a way, once trust is established, to take an aftermarket kit, because you see the ROI. A small example: just having a cabin on a tractor, with air conditioning and anti-chemical capabilities because you are driving in a chemical environment, alone costs more than the setup fee of an autonomous vehicle,

[01:00:50] Audrow Nash: Yeah.

[01:00:51] Ben Alfi: so most of the tractors of ours that you will see around are without a cabin at all.

[01:00:58] Audrow Nash: Oh yeah. 'Cause then the person's just sitting out there, and for these really poisonous tasks, you just...

[01:01:05] Ben Alfi: you go autonomy. Yeah.

[01:01:07] Audrow Nash: Yeah, I like that. And it makes a lot of sense, what you were saying, where it gradually bleeds into having more of these features that enable autonomy.

I would think cars are similar, where you add ABS brakes, and then you add lane keeping, and it just gets more and more sophisticated. Now cars have ultrasonics on them and you're starting to see different sensors, some have cameras, and eventually it enables more and more autonomy.

[01:01:37] Ben Alfi: Yeah, you'll still need that autonomous operating system that you are used to. You still want the Bluewhite autonomous system that is able to operate as an operating system for any type of tractor, whether it's autonomy from the shop or aftermarket. And you'll still want the connectivity on the implements, to make sure that everything is running and going correctly.

You'll still want the ability to use the data in various areas. These are the things that keep the need for it evergreen. And what we also see is that we will see more and more Bluewhite algorithms and capabilities, matured over millions of hours, implemented inside the tractor companies' products.

[01:02:35] Audrow Nash: Yeah, that'll be so cool. 'Cause I would imagine, from your perspective, as that shift happens in 15 years or more, to where you don't need the seat for the operator, Bluewhite will be incredibly well positioned to make the jump to autonomy without an operator seat. And that will be awesome.

Yep.

[01:02:59] Ben Alfi: We want to be the leading company for data-driven farms: off-road, mining, and hopefully space. The first farming in space, and I'm guessing also the last, will be by robots and by autonomous vehicles.

[01:03:16] Audrow Nash: Ah ha.

[01:03:17] Ben Alfi: we see, ourselves as farmers, working with the farmers, wherever and whenever humankind is, needs.

This is where we are.

[01:03:31] Audrow Nash: Yeah. And so the big advantage, I guess, is if you solve this problem for tractors, you can also solve it for big mining vehicles. It's a fairly similar problem, I would imagine. Maybe it's more waypoint routing and this kind of thing, rather than back and forth, but it's still following a...

[01:03:49] Ben Alfi: Yeah,

[01:03:49] Ben Alfi: it's just too costly to mature it over there. Instead of doing one million hours in mining, I can do one million hours in agriculture while people are earning money.

[01:04:02] Audrow Nash: That is such a clever thing. I did not understand that. Yeah, you're right. Because these tractors are mowing, and if you can automate it, you can have them mowing around the clock, and then you're getting so many hours of operation. You're testing your systems...

[01:04:18] Ben Alfi: Maturing those algorithms, not really on the decision-making side yet, but on the maturity side. So you have a real-life laboratory that is running, and not just the simulation that we're also doing.

[01:04:37] Audrow Nash: And also providing value too, while it's learning. I really like that.

[01:04:41] Ben Alfi: Yeah.

[01:04:43] Audrow Nash: What's the space application? That sounds so interesting. What do you imagine for that? I have no idea what farming in space or mobility in space looks like. I guess if you're on Mars and we wanna drive things around from here to there, we probably want autonomous systems to do it.

Is that what you mean? Or what kind of thing?

[01:05:00] Ben Alfi: Yes. And I think these are the applications that will be sensor-based, before we put a global positioning system on each and every planet. The idea is, how do I navigate and monitor? And there will be those implements, space implements, that are needed

to do whatever is needed over there. So there will always be the relationship between different types of vehicles that are running, different types of implements that are running, and an operating system that can suit them all and integrate everything. It can be not just ground vehicles, but air and ground altogether under one operating system.

And the idea of the core infrastructure, which we have built and invested a lot in, is to have the ability to grow and to adapt to those areas, and not to be stuck and pinpointed on one certain vehicle, one certain implement, one certain environment.

[01:06:12] Audrow Nash: Yeah, that would be limiting for sure. And so having it be flexible for all these vehicles opens up a lot. And then you have such diverse applications, or domains: space to mining to farming. That's very cool. Let's see. So just one more space question, then we'll get back to more practical, or sooner, things, I suppose.

Do you imagine it'll be on the moon, or on Mars, or where do you think the first application would be?

[01:06:44] Ben Alfi: I think it will be in a place with an atmosphere. Whether it's an atmosphere made by humans, like a dome, or a place that has some kind of an atmosphere.

[01:06:58] Audrow Nash: Gotcha. Okay. Very cool. I can't wait until that future is here. I hope it's not too long.

[01:07:04] Ben Alfi: I hope, I'm guessing it will happen in your lifetime.

[01:07:11] Audrow Nash: Let's see.

[01:07:14] Data as a Service + Fellowship

[01:07:14] Audrow Nash: So, I wanted to talk about the data, 'cause we haven't really talked much about the data that you're getting and the uses for it. We've gone up the whole application stack: you have these actuators and sensors on tractors, and then you have a cloud that connects it to farmers.

And then you've mentioned the ability to use that data in flexible ways, to maybe, I don't know, help them with insurance, or help them assess crop health over time, or whatever it might be. Tell me a bit about the data, how you expose it, and how that whole process of adding value from the data that you're capturing works.

[01:07:55] Ben Alfi: First and foremost, we see ourselves as in charge of whatever is moving in the farm, and, while it's moving, of collecting data and distributing it to whoever needs it. As long as the grower approves it, and without infringing privacy. The idea is that I don't see Bluewhite as the company that needs to do agronomic evaluation.

But think of it: if I can send whatever an agronomist needs in order to evaluate what the next spray event should be, or whether it's time to start harvesting, or can I predict yield. It's through the cloud, through the data lake that we have, and we can process it to wherever, on demand, depending on the demands too.

Whatever third party is there. So we can go to the growers and say, hey, okay, who's doing your yield prediction? Okay, somebody on the farm, or a company that you're working with, and we can fill them up with the information. For those data companies, gathering the information is what really kills them, because they're spending so much effort, time, and money on the operational cost of collecting this data, and we can help. And the same data can help so many applications. The same data can help carbon; people are talking about carbon savings and water savings and all those things.

[01:09:35] Audrow Nash: That's really cool. I really like how you guys are drawing lines between what you'll do and what you'll let other people do. This is an opportunity, and maybe it becomes part of your vertical in a sense, where you start helping people with the analysis of the data, but that's a future decision to make.

For now, there's a line drawn and you say, it's not our core competence. We're gonna collect all of it, we know it's valuable, but other businesses can be founded around analyzing the data that you guys are generating. And it's clever to me how you are deciding where you're gonna focus and what you're gonna allow other people to scoop up for value in what you're already doing.

[01:10:26] Ben Alfi: First of all, I think it's very important, in general, not to bully and not to be so aggressive and think that we can rule the whole world. Second, we are a value-based company. Our values are fellowship, love of the land, and innovation. When we talk about fellowship, this is an exact example of what we're talking about.

We are not coming to the agriculture business, which has been working for more than a century in a certain way, and saying, okay, we don't need dealerships, we don't need growers, we don't need tractor companies, we don't need the data companies. No, we're blending in. We want to be enablers to all. We want everybody to say, hey, Bluewhite is helping me.

Whether it's on the operation cost or not, it's about the people, the people who work there: their healthcare, their safety, their ability to work without backaches or any other hazardous event. So this is how I see it. And if you are focused and, in a way, transparent about where you are, this is why we have so many great connections with data companies like TELUS and others that are working with us, and the communication companies that we talked about, from Israel or from the US. We work with Intel and others, with big growers, and also with OEMs and dealerships.

Maybe it's too authentic, maybe I'm too optimistic, but this is how I want to do it.

[01:12:06] Audrow Nash: It's a good way to do it. It strikes me that even if your position on this weren't ethical, it would be pragmatic. What I mean is that you can't do everything, and if you try to do all of these other areas to monopolize them, you'll spread the company thin. You'd need to raise even more money to do even less good across more areas. So I think it's right to focus on a core competence and get everything running, and then maybe in the future you can really work with these data companies. But for now, just saying, hey, you have access to this. It's all free, or maybe not free, they can even pay for it. But it's still cheaper than them flying drones over the farm to try to gather their data.

This kind of thing.

[01:12:57] Ben Alfi: Exactly. We need to be humble, yet assertive. Autonomous vehicles we understand; how to go from 0 to 100, or from 100 to 10,000, we understand in this area. We've done it before. Being in places with no GPS we understand; this is our happy place. But we need to know what we know and what we don't know.

[01:13:21] Blending into the agriculture culture

[01:13:21] Ben Alfi: I had no idea about agriculture, and my co-founder is amazing; he grew up on a farm and was with tractors all his life. So do what you know how to do, learn more from others, and decide your boundaries.

[01:13:39] Audrow Nash: Yeah, definitely. So tell me more about that. You said your co-founder has a lot of experience with tractors and with...

[01:13:48] Ben Alfi: He just grew up on the farm, and there's nothing better than that.

[01:13:53] Audrow Nash: Yeah, for sure. Is that a big part of it? Because what strikes me is that you guys are really trying to, and you said it earlier, you're trying to fit into the ecosystem.

You're not trying to just bully everyone, push them over, and say, we do it all differently now. You're trying to say, hey, I can make this one pain point easier, and everything else we work with too. Tell me a bit about that, making yourself fit well into the ecosystem, and maybe the role of your co-founder in fitting well into the ecosystem.

[01:14:29] Ben Alfi: First of all, again, it comes with the values and attitude. That's one. Second is the ability to be as transparent as possible. When you are transparent, when you are saying, okay, this is what I know how to do and this is what I don't know, it creates openness. And what I like about the agriculture ecosystem is that there is no logical reason to be in agriculture unless you are passionate about the mission. Unless you are passionate and you understand that you are part of a greater cause, a greater cause of making food available to the world. And you also understand that it's about trust, and trust is critical. These values resonate with the goals of myself and Yair, who founded the company with me, and together we understood that these values are what will enable us to work correctly.

[01:15:44] Audrow Nash: Yeah, it makes a lot of sense. And I think you could probably say that about robotics too, so it's doubly true for what you're doing: there's not a reason to be in it unless you're passionate about the mission.

[01:15:55] Ben Alfi: Oh,

[01:15:56] Audrow Nash: So, robotics and agriculture.

[01:15:58] Ben Alfi: Yeah. I've been doing it for, I'm guessing, almost 20 years, dealing with unmanned systems and robotics. I love it. I think it's part of the future. I love being part of the disruption. I love seeing the transformation, from talking about whether there will be adoption of robots to the discussion of, where the hell are the robots? I need them now.

[01:16:29] Audrow Nash: Yeah,

[01:16:30] Ben Alfi: And also the understanding that, okay, robots cannot do everything. It's not going to fly and give you coffee on the porch, and while it does that, also answer your telephone and send the email. The balance and understanding of what it can do and what it cannot do, and also, on the development side, where to start.

Start with very repetitive, known missions and tasks, and just go step by step. Do it with safety, don't alienate people, make it very simple to use, make it in a way that many types of people can run it, no matter how sophisticated they are. These are the things that really create the bottom line.

These are more important: the blending in, the business model, the usage of existing assets. This is much more important than another algorithm over there.

[01:17:50] Audrow Nash: For sure. And it's also just the pragmatic way to approach it, I think. So I really like that one.

[01:17:58] Making systems safe

[01:17:58] Audrow Nash: So you've mentioned safety a few times. Tell me about making your equipment safe for this. And I wonder, do you do safety certification work, or match some sort of standards, or how do you...

[01:18:12] Ben Alfi: Oh, yeah. We spend a lot of time on that one: on the system architecture, on the models, on the ability to record and learn. And because it's a new area, what we have done is integrate a few standards: from urban mobility, from a military standard, from agriculture standards, from machinery standards, all mixed together.

And in a way, we have internal safety commissions, and we have external safety analysts to look at it. This is critical, this is the backbone. We are doing debriefs, and we are collecting data on mishaps: not just things that happened, but things that might have happened. These are critical parts of how you go.

It starts from system engineering, it goes along with system architecture, redundancies, and the approach of staying inside the envelope, not breaking the envelope. You cannot throw a tractor into a ditch and say, oh, it was too much. There are so...

[01:19:38] Audrow Nash: Yeah, for sure. Okay. And I bet that safety is a big part of the trust: getting farmers' trust, getting people's trust in general. You need safe systems. They need to trust you with the tractor that they have, and they need to trust your system to treat it well and not drive it into a ditch, as you say.

[01:20:06] Ben Alfi: People are still talking about Tesla accidents, or Uber, or Cruise, or any of the others. And still it's much safer, but it's very hard to explain that it's safer, because it's a technology, and you're afraid, and you're a bit skeptical, and you feel that you don't have control over when it will happen.

So take it slowly. At the end of the day, it is statistics, okay? And accidents will happen, for sure. You need to see that and reduce it as much as possible. Did you expect it? Did you do the correct risk mitigation? How was the risk analysis for that? How big a surprise was it for you?

This is how you look at it. When it happened, did you try to cover it up, or were you open about it? Did you publish a debrief about it so everybody can learn? It's about that, okay? Transparency. Mistakes happen; we make lots of mistakes every day. The question is how we behave when they happen.

[01:21:19] Audrow Nash: Yeah, definitely. If you take ownership of it and say, okay, we understand it, we're trying to fix what's...

[01:21:24] Ben Alfi: Yeah. We had an assumption, the assumption was wrong, anything like that. I think it goes for everything in life, but when we're talking about robotics, integration, and dealing with adoption and maturity, safety must be at the top level.

[01:21:45] Audrow Nash: Yes, for sure.

[01:21:48] Recent Series C investment round

[01:21:48] Audrow Nash: Now I wanted to make sure we had a chance to talk about your Series C round of funding. So first, congrats. It was, if I remember correctly, $39 million for the C round?

[01:22:00] Ben Alfi: Yes, it was $39 million. We've raised up to now around $85 million. We are humbled that it's not only existing investors who believe in us, but also new investors coming in and joining the gang on this crazy journey. And we have amazing workers and amazing customers that we are working with.

They've also expressed their love and their appreciation of what we're doing, and it resonated with the investors too. I think everybody understands how big the task is of bringing this capability to the world. We have investors from Israel, from the U.S., from Mexico, from Canada, coming in and saying, hey, we want to be part of that journey.

So this is what is going on right now. We see this investment enabling us to take the next step of maturing the product in a way that it can be distributed worldwide, and to grow with more and more capabilities, as we talked about along this discussion, until the next round, or an IPO. This is where we're going, so stay tuned, I'm guessing.

[01:23:46] Audrow Nash: Yeah, for sure.

[01:23:48] Working with the community

[01:23:48] Ben Alfi: We are available to anyone. To be an entrepreneur in robotics, you need to be passionate and you need to be a dreamer, and I would love to help anyone who is in that state. Whether it's reaching out to me personally, or to Bluewhite people on LinkedIn, or any other medium you like.

And remember that values come first. It doesn't matter what you do; make sure that values always come first. That's what I have from my side.

[01:24:28] Audrow Nash: Hell yeah. It seems like you guys are a very value-driven company, which is really nice and very cool to see. I imagine you've been this way since the early days, and now, seeing you as a larger company in an exciting position to start scaling your work, the values are still driving you, which is very nice.

And like doing a debrief when something goes wrong. I really like that idea. I feel like a lot of companies might just try to sweep it under the rug when something goes wrong, but you're being transparent and, I don't know, trying to lift up the whole agriculture industry.

[01:25:08] Ben Alfi: we have responsibility. to take this around, if we fail, the next investor won't invest in the other robotic company, who might be the successful one, right? So we cannot fail. And we need to be very mindful of what we're doing. We have responsibility on the agriculture. We have responsibility. It's not just a Take the money and run.

This is not why, at this age, I've decided to spend all of my time on this, on what we're doing right now.

Responsibility, and that's about it.

[01:25:51] Audrow Nash: I like that a lot. Yeah, because you see a need and you're trying to help with it. And you're right that there are big implications for any startup that follows, because if a big robotics startup goes bust, if an investment in robotics goes bad, it makes all future investments a little more difficult. And it may discourage people, and you may burn some farmers, and if you burn some farmers, maybe they're hesitant to adopt robotics technology in the future. So yeah, it's interesting to think of it as an ecosystem.

[01:26:27] Ben Alfi: Robotics, if you're in robotics in the next decade, think of yourself as a crusader.

[01:26:37] Audrow Nash: Love it.

[01:26:38] Ben Alfi: This is how you... think of it that you are responsible for how fast the world will adopt those capabilities. And if you have a hunch that what you're trying to do is a wrong path, you are correct: go to a better one, and make it easier on yourself, make it easier on the ecosystem, develop what is needed.

But it's a warrior-type environment where you need to do it, and I believe... I see amazing companies here, and I also see amazing, successful companies that I wish we could be just like.

[01:27:24] Audrow Nash: Hell yeah. And I think the next years for robotics are going to be exciting and difficult, and I see it as very necessary. I've been focusing on the labor shortages for a while, and I really feel like robotics is a way that we live in the future, while labor shortages are happening and populations are aging and this kind of thing, and we maintain our quality of life.

I feel like robotics has an important role. It will be important for us to figure out how robots can get in and help.

[01:28:06] Ben Alfi: I think you're correct. It's labor shortage and labor transformation.

[01:28:11] Audrow Nash: Mm-Hmm

[01:28:12] Ben Alfi: We don't need to do this job anymore. Okay? We don't need to. There's no reason. If you want to work out, just work out. Don't take stones on your back and climb a hill just for work. I just wanted to say, really, first of all, thank you for having me.

And it's great that you have this show. I've been talking about robotics, from the technology, but not just the technology, how to implement it to the world. I listened to some of your podcasts before, and I learned a lot from them. I hope more and more people will come to the show and share not just the happiness, but also the struggles.

And there are a huge number of struggles in order to make it. And really, just to say thank you for having me here.

[01:29:12] Audrow Nash: Hell yeah. It's been absolutely wonderful speaking with you and learning more about Bluewhite.

[01:29:18] Ben Alfi: Cheers.

[01:29:20] Audrow Nash: Thank you. Alright, bye everyone.

[01:29:22] Ben Alfi: See you everybody. Bye bye.

[01:29:26] Episode outro

[01:29:26] Audrow Nash: You made it! What'd you think? Bluewhite is doing a great job, aren't they? What other robotics companies have you seen that are doing a good job fitting into an existing ecosystem, like agriculture? I'm curious to know in the comments or on X. If you like this interview, you'll probably like the weekly spaces I'm hosting on X.

I often have a guest, and we do a short interview where you get to ask your questions. They've been a lot of fun, and there's a great community on X. If you're interested, they're Thursdays at 9pm Eastern, 6pm Pacific. Just look for @Audrow on X. That's all for now. Happy building!

[00:00:00] Episode intro

[00:00:00] Audrow Nash: I've been talking to robotics companies both in and out of podcast interviews for about 10 years now. I hear a lot of things from them about working with venture capitalists and the mismatch in expectations that creates friction for founders and people in technology.

In this time, I haven't talked to many venture capitalists, and I have not had any on any podcast I've been involved with.

This interview is with Sanjay Aggarwal, who leads the robotics and automation efforts at the venture firm F-Prime Capital. This was an eyeopening discussion for me because I got to hear the other side of the story and it makes good sense to me.

My hope in doing and sharing this interview is that some current and aspiring robotics entrepreneurs and technologists get a better sense of what VCs want and are looking for, and that they and their companies are better off for it.

I also found our discussion on the macroeconomic conditions affecting the startup and investment community oddly calming, even though it seems like we're in an unprecedented time.

Lastly, we talk about F-Prime's State of Robotics report, which is the second yearly report that they've made on the robotics ecosystem. It looks at robotics investments between 2019 and 2023 and points out patterns and trends from their analysis.

There were interesting results around what VC money is going towards and how that has changed in the last few years. And it was interesting to put robotics in a larger context with respect to other VC investment categories, like enterprise software.

My main takeaway is that the popcorn pops of successful robotics startups are just beginning and that the future looks exciting.

This episode was launched at the same time as the State of Robotics report. The report is freely available online. You can find the link in the description below.

Without further ado, here's the interview.

[00:02:13] Introducing Sanjay and F-Prime

[00:02:13] Audrow Nash: Hi Sanjay, would you introduce yourself?

[00:02:16] Sanjay Aggarwal: Yeah, nice to meet you, Audrow, and thanks again for the opportunity. My name is Sanjay Aggarwal. I work with F-Prime Capital, an early stage venture fund based in Boston.

[00:02:26] Audrow Nash: Hell yeah, tell me about F-Prime.

[00:02:29] Sanjay Aggarwal: Yeah, as I said, we're an early stage fund. We invest across enterprise software, fintech, and then I lead our efforts looking at the robotics and automation space. We've been active for... actually, we have kind of a multi-decade history of making venture investments, and in the current version of the fund, we actually have a tech fund, which I'm part of. We also do a bunch of work in the life sciences space with another team at the firm. But we're quite active, I would say, in early stage, typically doing Series A opportunities as a primary focus area of ours.

[00:03:06] Audrow Nash: Gotcha. So seed and A, or just A, for this kind of

[00:03:10] Sanjay Aggarwal: Mostly A, but we'll do seed, and we'll do earlier and later. Yeah, exactly.

[00:03:16] Audrow Nash: Okay, awesome. Now, you were saying that you lead the robotics investment part of this. Tell me how that works.

[00:03:25] Sanjay Aggarwal: Yeah, over the last few years, there's obviously been a lot of hype around the robotics and automation space, starting really with the AV market a few years ago. And we started poking our head into the space and just getting up to speed.

And my personal background, actually: earlier in my career, I was working as a robotics engineer at a company that focused on manufacturing automation. So I had firsthand experience with, call it, the earlier generation of robotics and automation solutions. So I had kind of a natural affinity toward the space.

And I led the effort to get ourselves up to speed, and started really looking at the AV space, and had made a couple of smallish investments there, but,

[00:04:10] Audrow Nash: AV, that's autonomous vehicles?

[00:04:12] Sanjay Aggarwal: Sorry, autonomous vehicles. Yeah, autonomous vehicles, exactly. And we spent a lot of time there. I think as we dug deeper, we came to the conclusion that we preferred more industrial sorts of use cases, and we've spent most of the more recent history looking at logistics-, construction-, and agriculture-oriented use cases for robotics.

[00:04:36] Audrow Nash: Awesome. Okay. So I want to come back to this, but tell me how you,

[00:04:40] Sanjay's background

[00:04:40] Audrow Nash: So you mentioned you were working in the robotics space. Tell me what your path has been. How did you go from being involved in robotics, and where did you go from there?

[00:04:51] Sanjay Aggarwal: Yeah, so you know, as I said, I did a degree in mechanical engineering and majored in control systems. And so robotics was a natural starting point for my own career. In those days... robotics is not a new concept, obviously, it's been around for generations. And it was very standardized, repetitive tasks that you could automate in more sophisticated ways.

But it was very much just repetitive tasks that were the focus in those days. And we built all sorts of systems for semiconductor wafer handling and optical fiber manufacturing and these sorts of things. But after a while, I decided I just wanted to get more into the business side of things.

So I went to business school, worked in consulting for a while, and eventually ended up moving to India to start a company focused on the mobile messaging space, and spent a few years there, and after

[00:05:49] Audrow Nash: How did you make that decision to do that? It seems like a big leap. I'm curious what you were thinking at the time.

[00:05:55] Sanjay Aggarwal: Yeah, I would say that I guess I always had the itch, in some ways, to do something entrepreneurial. The company I worked for initially out of school was a startup, and the opportunity kind of presented itself with somebody I knew in India that was starting this business.

And so I just took a leap of faith and said, okay, let's give this a try, just to try something a little bit different than what I had been doing. Which turned out to be a fantastic experience and really rewarding, both for me and my team.

Personally, for sure. And so I did that and really got firsthand experience of what it meant to build and run a startup. And so when we exited that business and I came back to Boston, I hooked up with the team, I knew some of the folks already at F-Prime, and then started working on the investment side of things, hopefully leveraging some of my own experiences from the startup world.

[00:06:53] Audrow Nash: Why did you move over to investment? So, leveraging your experiences from the startup world, but what was the motivation there? What was the appeal of venture capital, as opposed to just starting another startup or something like this? How did you look at it?

[00:07:10] Sanjay Aggarwal: Yeah, I don't know. I certainly didn't set out to become an investor, I would say. It was a little more opportunistic, in the sense that I had worked with some of the folks, and got reacquainted with the local ecosystem. And I wasn't sure at the time if I was going to continue with investing or get back into a startup. And I think as I got more and more absorbed into investing, I just enjoyed it. It's a great opportunity to learn from very amazing entrepreneurs; the pace of change and the pace of new ideas is pretty rapid.

And I guess I grew to like it more and more as I spent more time. And as I said, it's quite inspiring to see what people are building out there. And hopefully I can provide some useful perspective from my own personal experiences as well.

[00:08:12] Audrow Nash: Hell yeah. Yeah. I feel like one of the other things, and I don't know what your perspective on it is, but it seems like a lot of people, if they do a successful startup, now have a lot of experience. And especially you, being a strong entrepreneur on your own, you can go and help the startups do that. But also, now your interest is in robotics.

You can also be an expert in that, or have relative expertise in that, which is probably very important for good investment decisions. That feels like a big benefit of you going into investment, as opposed to someone who has no experience in the area, or may not even be technical, say, making investment decisions.

[00:09:01] Sanjay Aggarwal: Yeah. I would say that venture capital used to be very much founders who became investors, and the reason for that is exactly what you described, which is that you understand, or at least you have experience of, what it takes to build a business.

I would say that has changed a little bit over time; you find more people that are professional investors, and that is what they've done. But I think the expertise that you gain is more about business building, as opposed to industries per se, right? In the sense that industries are evolving pretty rapidly, right?

And I always believe that the people that have the best ideas for businesses are the entrepreneurs that we're talking to and meeting and hopefully investing in, where we hopefully come in having some kind of overarching view of the industry as to where it's going.

But I'd say the nuanced view, which really makes all the difference, is coming from the company more than from the investor, by and large. And our focus is hopefully to provide them guidance on how to think about building their business. What are the right metrics to track? How do you hire the right people?

How do you focus on the right strategy to scale? Things like that, which are a little bit more generic, in a sense, as opposed to very specific to the industry. So we certainly try to build expertise, and we want to build expertise in industries, but I would say that, at the end of the day, it's really the entrepreneurs that have the most knowledge about the businesses that they're building.

[00:10:32] Audrow Nash: Yeah, very interesting point. So tell me, when you first got involved, were you doing autonomous vehicles? Was that your first foray into investing in robotics? Or how long have you been involved?

[00:10:49] Sanjay Aggarwal: Yeah, when I first joined F-Prime, we were not looking at autonomous vehicles, I would say. I think it's been maybe going on four or five years now that we've spent looking, just generally, at the robotics and autonomy space. Prior to that, I spent time looking at other areas of enterprise software.

So there are a number of companies I'm involved in, in martech, or this company in the insurance space, so a few different areas. But over time I gravitated towards this. Again, given my own personal background, I found it to be particularly exciting, and hopefully something that I have at least some unique expertise and experience in.

[00:11:31] Investing in Autonomous Vehicles then industrial robots

[00:11:31] Audrow Nash: Okay. And tell me about pivoting from autonomous vehicles toward industrial and manufacturing robotics.

[00:11:43] Sanjay Aggarwal: Yeah, I would say that the more we spent time in the autonomous vehicle space, the more we realized a couple of different things, and it informed our own investment philosophy in many ways. Number one, these are pretty hard problems to solve.

And obviously, that has been borne out over the last few years, as companies have shut down or faced a lot of challenges, or just missed projections on when they're going to release their products, et cetera. So the more we spent time, the more we realized, okay, these are unbounded problems, which makes predictability of hitting any kind of commercial milestones quite hard.

And related to that, the capital that you need starts to become really unbounded. And our goal was not to invest hundreds of millions of dollars in these companies. There are obviously some firms that are able to do that and have done that, but that was not our strategy, or even our capability, at the end of the day.

And so there was an early wave of lots and lots of startups, which I think quickly started to consolidate across a few market leaders. And as that was happening, we realized, okay, we don't have the capital capability to put that kind of money in, nor is it really strategically where we want to focus, just given the uncertainty about actually getting to market.

And we leveraged some of those skills and knowledge. Actually, the work that I had done in my early career was around manufacturing, so I had my own understanding or appreciation for what that took. And so then we thought, okay, why don't we leverage what we've learned about this newer generation of companies and start investigating other areas? Logistics is a notable example that we spent a bunch of time in, and we have an investment in a company called RightHand Robotics in that space. It's a good example of an industrial robotics company, in an area that we spent a lot of time starting to explore.

And then, within that, there are many different industries that you can target.

[00:13:56] Audrow Nash: How did you start deciding on which verticals or domains to invest in? Logistics, manufacturing, maybe agriculture, these kinds of things, they seem very promising to me, but how did you select from them, as opposed to anything else?

[00:14:17] Sanjay Aggarwal: Yeah, I would say, again, it was a little opportunistic, in the sense that in Boston there's a very good ecosystem of companies focusing on the logistics space. I think that has been born a little bit out of Kiva having been based in Boston, and then from Kiva there were various people that kind of spun out and launched companies.

In the logistics space, a little bit of it was, we're based in Boston, and so there were a lot of local companies focused in that area. So it was a natural starting place, combined with the fact that I think the market opportunity in logistics is quite appealing.

If you even look at investment numbers today, it's probably one of the biggest categories of robotics investment, just because of the sheer demand from e-commerce and the like that is driving a lot of robotic solutions, and also Amazon, being the bellwether for what other people are trying to emulate, being very aggressive in robotics.

So I think it just was a natural starting place, partly because we were in Boston and there were a lot of companies. And so we started there, and then over time started to expand into looking at other areas as well.

[00:15:29] F-Prime's portfolio of robotics companies

[00:15:29] Audrow Nash: Okay. I would love to hear about some of the robotics companies that you're invested in, your portfolio. Wanna give me just a short description of each of them, or of a few of them?

[00:15:43] Sanjay Aggarwal: Yeah, absolutely. So one of the first investments we made was from the time we were looking at the autonomous vehicle space, which is in a LiDAR company. The company used to be called Innovusion; they rebranded as Seyond. A really interesting company. The founder had been working on LiDAR systems at a variety of companies and had his own perspective on what the enabling technology should be, what the right technology strategy should be for LiDAR. Because one of the challenges in LiDAR is that there are many different approaches that people are taking, from the spinning LiDAR, which was the early days of Velodyne, to other people trying to do solid state and other approaches.

Yeah. And that was one of the first investments we made. One of the reasons we made the investment, actually, was that they had already been, in a sense, vetted by NIO in China. NIO had looked at the market and identified this as the company they thought they would ultimately partner with.

It took a little while for that partnership to take off. Today, although they're not super well known in the US, although they are based in California, they're actually standard on every single NIO car that is sold in China. So they've literally produced hundreds of thousands of LiDARs today, and have done really well.

But the LiDAR market is a pretty challenging market. There were a lot of companies that were launched. Many of them had promises of OEM contracts, et cetera; others never got there. It takes a lot of money to actually build an automotive-grade sensor. These guys, partly because of their NIO partnership, have been able to successfully get to that point. Again, as I said, as we got deeper into the market, we were really excited about our investment there.

But over time we just realized the capital needs for companies like that were such that we started focusing on logistics instead. We ended up investing in RightHand Robotics here in Boston. They make piece-picking solutions for e-commerce. In the logistics space, there are a bunch of different people making different kinds of robotic arm solutions for different use cases.

Some are doing parcel handling, for example; others are doing piece-picking. Even within piece-picking, there are different companies focused on, let's say, food or clothing or just general e-commerce goods. So RightHand is focused on the last. They, for example, recently announced a partnership with Staples, where they're going to

[00:18:24] Audrow Nash: Oh, cool.

[00:18:25] Audrow Nash: Good for them.

[00:18:26] Sanjay Aggarwal: Yeah, a really impressive accomplishment by them. Anyway, that was a good example of a company with really impressive technical founders. It's taken them a while to really perfect the solution, just given, again, it's a pretty hard problem to solve, because it's a problem you can't solve at the 80 or 90 percent level. You've got to solve it at like the 99 percent level,

[00:18:50] Audrow Nash: Or a few nines.

[00:18:52] Sanjay Aggarwal: Yeah, so getting to that last bit takes a little

[00:18:55] Audrow Nash: Way longer. Yeah.

[00:18:57] Sanjay Aggarwal: but yeah, they're a company that we've invested in the logistics space.

[00:19:04] Audrow Nash: Hell yeah. I really like them. Back in the Robohub days, I think I interviewed them quickly, and we hung out a lot at ICRA, the robotics conference, and they were a lot of fun people. I really like them as a company and hope to catch up in the near future.

[00:19:20] Sanjay Aggarwal: Yeah, they had the most amazing office. If you ever went to Boston, they used to occupy an abandoned UPS facility.

[00:19:29] Audrow Nash: Ah,

[00:19:30] Sanjay Aggarwal: It was quite, quite the cool office that they had.

[00:19:34] Audrow Nash: Hell yeah.

[00:19:35] Sanjay Aggarwal: Yeah, then we have a couple of other investments. One's a company called Burro in the agriculture space.

So they make a small autonomous vehicle to help move goods in either outdoor or indoor agricultural environments. And then the last company is a company called Teleo, which is a hybrid remote operation and autonomy solution focused on construction and mining use cases.

[00:19:59] Audrow Nash: Yeah, both very exciting. Burro looks really cool. I'm very excited about the agriculture space. Actually, the mining space is very interesting too. Those seem like very good tasks for robots to me, just helping out there. And Burro, it's funny, it's like donkey, right? And so

[00:20:18] Sanjay Aggarwal: Yeah, I'm sure that was the tongue-in-cheek inspiration for the name. But yeah, there's been a fair number of companies focused on agriculture, and we spent a lot of time meeting many of those companies. Many of them are focused on harvesting itself, so actually picking crops.

[00:20:36] Audrow Nash: Oh, that's super hard. Okay.

[00:20:38] Sanjay Aggarwal: But yeah, I think we concluded that's just hard. It was hard enough in RightHand Robotics' case, in an indoor environment.

You go to an outdoor environment,

[00:20:46] Audrow Nash: Sun and dust.

[00:20:47] Sanjay Aggarwal: And it's up and down a row, like really hard.

I think several companies have started to crack it, but it's taken a little bit of time. But I think we liked the simplicity of what Burro was building. And in some ways, I feel like it's emblematic of the new crop of robotics founders that we see today, which is that Charlie, the founder of Burro, was actually a farmer growing up, so he deeply understood the needs and could really identify with the potential customers.

And so he is very pragmatic, trying to build a solution that works, that could be quickly deployed, that delivered value in a very short period of time. And I think that kind of ethos was really what enabled them to get to market pretty quickly with a solution that's quite effective, and was quite unique in the market at the time.

Now there are starting to be some similar players out there, but they've done really well. They literally have hundreds of devices out in the field today.

[00:21:52] Audrow Nash: Mhmm. Yeah, that's so cool. The thing that struck my ears was "new crop of founders." What do you feel like they're doing differently?

[00:22:03] Sanjay Aggarwal: I think there are a couple of things that we see. It used to be that the prototypical robotics founder came out of a PhD program, had done robotics, was very technically talented, but they were a technology looking for a problem, in a sense. And I think that proved to be difficult in some ways, right? In the sense that it just takes a while to navigate yourself to, yeah, what is the right problem? Because now you have to not only identify the problem, but also understand the industry, understand how you sell to that industry.

There are many dimensions of what you have to solve. And I think the stereotype is that you're almost too in love with the technology, as opposed to solving a problem. And obviously that's not true in all cases, but oftentimes, I think, maybe there's a risk of a solution being over-engineered for the problem that you're trying to solve.

I think there are two things that we see differently today. Number one, you see a lot of people like Charlie at Burro who start from a deep understanding of the industry. That's their story.

And the technology is a means to an end as opposed to the, end in and of itself.

And they're very pragmatic, in terms of, let's build something that works, without trying to solve everything. And in some ways, maybe they don't build the most technically amazing solution, which is not to say it's not great.

It's technically strong, but they're not so in love with the technology that they want to make something incredible for its own sake. They're actually just in love with the problem, and they're trying to solve the problem. So that's one dimension.

And the second dimension, of which Teleo is actually more an example, is that there's a bunch of founders that cut their teeth in the autonomous vehicle industry. So they spent time there and understood, number one, how you productionize code, how you productionize solutions, in these industries.

And so that gives them much faster cycle times, but also, again, they're just more pragmatic, because they did see the challenges of the autonomous vehicle industry. And so I think they tend to just attack the problems in a more pragmatic way, without trying to solve everything.

Again, I think very much informed by the challenges of launching an autonomous vehicle for passenger roads.

[00:24:39] Audrow Nash: Definitely. I see it like, robotics is really hard, but it's becoming easier. And so now you're starting to see people that can come in with a little bit less technical knowledge, or no technical knowledge, and maybe get a good co-founder that has that technical knowledge. And then they can actually start to provide value, especially if they stick to deeply pragmatic solutions, which often have tons of value just because they weren't being done, or because they can free up some hands to do some other work, or anything like this.

[00:25:15] Sanjay Aggarwal: Yeah. And again, I think it has to be born from a deep understanding of the industry that you're serving, right? Only then can you envision, okay, what would be a solution that would be helpful here? And to your point, yeah, you can hire and you can find co-founders or engineers that can help you solve the problem, but it's got to start with an understanding of the problem itself.

[00:25:38] Audrow Nash: Yeah. It's interesting to me, going from autonomous vehicles, just autonomous cars on the street, to Burro, which is very practical and a much easier problem. I feel like it's a nice thing for the robotics industry, because we're going from something that's really flashy and seems very big,

and when we solve it, it'll be just amazing, but it's very hard. And then Burro, which is very cool, very practical, probably has strong ROI for the customers, and probably has less technical risk as a company than autonomous vehicles. So I like this trajectory in robotics.

[00:26:28] Sanjay Aggarwal: Yeah, it's the irony of robotics, I would say, which

[00:26:32] Audrow Nash: Tell me about it.

[00:26:36] Sanjay Aggarwal: As a consumer, as just your layman, so to speak, what you hear is, oh yeah, people making humanoid robots, or people making autonomous vehicles where you never have to drive again.

And those are fantastic visions. I

[00:26:51] Audrow Nash: Oh yeah, all

[00:26:52] Sanjay Aggarwal: I find it, like, it's hard to imagine that those things

[00:26:55] A new type of robotics founder

[00:26:55] Sanjay Aggarwal: won't exist eventually, but there's so much technical uncertainty that it's a little bit unclear how it fits into the venture capital market, right?

it's one thing

[00:27:10] Audrow Nash: Because it's unbounded, in a sense? Is that why, or what?

[00:27:13] Sanjay Aggarwal: Yeah, from a VC's perspective, like they're looking for,

[00:27:17] Audrow Nash: Return on investment.

[00:27:19] Sanjay Aggarwal: Yeah, they're looking for returns, which is driven by the standard kind of framework, I guess, of hitting commercial milestones at various points along the trajectory of a business.

Now, if you have to raise a hundred million dollars to get to any kind of material commercial proof point, okay, that is not the typical venture capital model. You have this kind of big vision around, okay, yeah, there's all this cool technology that you could build, but it may just be so challenging that, like, how do you marry that with the ability to actually hit some kind of market proof points in some way, in a

bounded time with a bounded amount of capital, right? And so I think the pragmatism of a lot of what you see today in robotics is driven by that, which is, hey, it may not match the visions that people have about what robotics is, but they're actually solving very real and very practical problems.

[00:28:21] Humanoids

[00:28:21] Audrow Nash: Now, you brought it up with the humanoids. What do you think of all the interest in humanoids, in humanoids in general?

[00:28:31] Sanjay Aggarwal: Yeah, again, I've met a couple of the companies, and it's quite amazing,

[00:28:37] Audrow Nash: Oh, definitely.

[00:28:38] Sanjay Aggarwal: what they're building. And I was very favorably impressed with what I heard and what I saw. Again, I think the question is, there are always investors that are ready to make investments in these kind of big, home run type of opportunities.

And you've seen, I think, Figure, for example, announced an investment from OpenAI. Or Tesla, investing in their robot. And with Tesla, okay, it's different. I think their math of, okay, is this a worthwhile investment? It's a little bit different than an investor's, because

[00:29:18] Audrow Nash: Oh.

[00:29:19] Sanjay Aggarwal: they're their own captive customer, for example, right?

They don't have an unlimited amount of capital, but their goal of ROI, so to speak, is not necessarily the same. They can think in a much longer term way, perhaps, than your typical investor can. And so when I look at humanoids, I think the question that I have is, it's hard not to imagine that they will exist at some point in the future.

What's less clear to me is, will they exist in any material way, material just meaning in terms of practical applications that are in the market, solving real problems, in five years, 10 years, or 50 years, right? And so as an investor, it becomes a sort of hard place to invest in, I believe, unless your time horizon is much longer or much more flexible.

And so again, you see these videos that people release, and they're pretty amazing, right? Like, hard to imagine what people have created. But making those systems do real world things, I think, is still a little bit of a ways off, and you're starting to hear some. Agility, for example, had announced some initial pilot at Amazon warehouses, for example.

And it may be closer than I think in some cases, but my guess is that those near term use cases will be narrowly defined, not the vision that one imagines of a humanoid, but actually fairly constrained use cases, to be able to achieve, again, some kind of commercial milestone.

[00:31:02] Audrow Nash: Yeah. Yeah. I talked to Melonee about their humanoid from Agility, and it sounds like they have pretty good market fit with that. They provide ROI in a reasonable timeframe, but you're probably right that it is narrow in terms of actual application, and it won't be just doing everything in your house super soon.

[00:31:26] Sanjay Aggarwal: Yeah, just as an example, they don't have an articulated hand, in Agility's case, right? I mean, they're not trying to articulate fingers, and so that's one way that they've significantly simplified the problem, right? I think what they are doing is they have a very specific use case that they do, which by itself is not easy, right?

But again, they're pursuing one use case, they're not trying to do a hundred use cases, right? And I think, again, the path to rollout is probably going to look more like that than what one might envision in their mind. But yeah, I think it's an interesting and exciting space, one where it's very hard to predict how it's going to play out.

[00:32:10] Audrow Nash: One thing I've heard, so I host Spaces and we've talked about humanoids many times, but one of the interesting ideas was the reason, because a lot of investors have really piled on for humanoids, from my perspective, maybe that's wrong,

but from that appearance, the perspective of someone else was that the reason is because with humanoids, with the promise that they can go do general tasks,

[00:32:41] Sanjay Aggarwal: Yep.

[00:32:42] Audrow Nash: there is a potential for a much bigger return on investment for this kind of thing.

Their point was that most robotics companies have pretty narrow markets, and therefore you don't see a hundred times return. You see a 10 times or 20 times return, especially if you invest a bit later in the company. But with this, the market is enormous, because it could just do all manual tasks, in a sense.

What do you think of that perspective? I guess it probably comes down to timeline, but what are your thoughts?

[00:33:17] Sanjay Aggarwal: Yeah, I think timeline and capital, right? The concept, what you said, is hard to debate. The question is, how much money does it take and how much time does it take to achieve it, right? Because, again, I'm no expert on Agility, but generally what I understand is they're moving totes from point A to point B, right?

[00:33:39] Audrow Nash: Yeah. That's

[00:33:40] Sanjay Aggarwal: I am sure that you can build a robotic arm solution or some other solution to do that specific task, probably in an equally efficient, probably less capital intensive way, but it would be a single use solution.

[00:33:53] Audrow Nash: Yeah.

[00:33:54] Sanjay Aggarwal: And so as you try to build generalizability into many different tasks, yeah, that's great.

But again, how long does it take and how much money does it take to get there? And do you just run out of steam before you get there? And again, in some ways, that's what happened to the autonomous vehicle space, which is that some of these companies have raised billions of dollars, literally, right?

You know, with the same premise, which is that, okay, if you can build a generalized autonomous vehicle, there's this massive TAM, but, you know, at some

[00:34:26] Audrow Nash: What's TAM?

[00:34:27] Sanjay Aggarwal: Like, just total addressable market, you know, is massive. But now you look at it and, okay, people missed their deadlines over and over again, there's all the negative press with Cruise and Waymo and so forth.

And it's, okay, you wonder, people may just run out of patience, right? And the money may run out, and so you never get there. And so I think that's the potential risk with humanoids. I don't know if that's what will happen there, but again, if you can build it, fantastic.

But can you actually get to the finish line is a question mark in my mind.

[00:35:04] F-Prime's State of Robotics report

[00:35:04] Audrow Nash: Gotcha. Now, segueing a little bit, you have worked, or F-Prime has worked, on a State of Robotics report. Would you tell me a bit about that?

[00:35:17] Sanjay Aggarwal: Yeah, absolutely. I would say that as we started spending more and more time on the industry, one of the things that we realized is that it's not a well covered industry, right? The data that exists for robotics as an industry is spotty at best. So it's hard to know what's actually happening, and as an investor, or even as an entrepreneur, for example, it's helpful to understand, okay, how much money is being raised into this space?

What are the hot sectors? Who are the investors? All sorts of these kinds of dynamics help inform your own thinking about how to approach the market.

And as we started digging in more and more, we realized that part of the problem is that the industry itself is not well defined. What is robotics, actually? And so we undertook a task, starting last year, to really do a bottoms up analysis. Literally, we went through thousands of deals to figure out what is robotics, what's not robotics, and then create our own taxonomy, in a sense, around different categories.

And then use that as the basis for actually creating a little bit more comprehensive and accurate view of the market. With that, we published the first one about a year ago. We're working on the second report now. And we created a taxonomy with what we call three different categories.

One is AV, autonomous vehicles, really focused on passenger, or I'm sorry, on public road use cases. So things like what Cruise is doing, but there's also trucking, for example, on public roads. The second is a term that we coined called vertical robotics, which is really industrial robots that are focused on very specific vertical use cases, so something like RightHand on logistics or Burro focused on agriculture, but they're really trying to solve a very particular use case. And the third is what we call enabling systems. So these are people building new sensors, or software tools for testing, or a lot of people are building development platforms for robotics. So they're not building an end to end system, but they are building tooling and systems that hopefully enable the next generation of robotics companies to be more productive, to do things more efficiently, et cetera.

So anyway, it was a really interesting process. I think we learned a lot. Hopefully the investor community and entrepreneur community found it useful. And again, we'll keep publishing our analysis just to help everyone understand what's really happening in the market.

[00:38:00] Audrow Nash: And what are you finding? What have been some of the larger takeaways from the report?

[00:38:06] Sanjay Aggarwal: Yeah, I would say one is, the market's a pretty decent size. If you look at the last, call it five years, there's been almost a hundred billion dollars invested in the broadly defined robotics categories, as I described.

[00:38:20] Audrow Nash: How much at each? Because I'm curious if most of that gets eaten up by autonomous vehicles, or

[00:38:25] Sanjay Aggarwal: So that was one of the big trends that's happened. That was, in some ways, the spark, if you will, for the excitement in the category. If you look back four or five years ago, I think it was like 70, 80 percent of the funds were going into autonomous vehicles.

And this is when the Cruises of the world were raising billion dollar rounds. There was quite a lot of money going in, but over the last few years, the autonomous vehicle market has started to really retrench, certainly from an investment perspective.

I think it peaked at something like 10 billion ish a year. Last year it was close to two, right? So a really strong pullback.

[00:39:04] Audrow Nash: That is

[00:39:06] Sanjay Aggarwal: And then in its place, this vertical robotics category I described has really started to take over, and now the majority of investment is in vertical robotics.

Again, logistics, and there's a lot going into defense and medical robotics. So that's been the shifting mix of where the investor dollars are going. And then there's just the macro headwind,

[00:39:29] Macro forces for declining investment cash

[00:39:29] Sanjay Aggarwal: I would say, in the venture community in general, which is that investment dollars are obviously declining pretty rapidly, certainly off of the highs from 2021.

Yeah.

[00:39:52] Audrow Nash: baby boomers are retiring. So over half of them are retired now and they were at their peak earnings. just before. And then once they retired they transitioned their wealth into safer things, bonds and t-bills and those sorts of things, and not risky funds so then the capitol goes away. How do you look at that macro trend, or what do you think is driving it and what do you think the future of it will be?

[00:40:18] Sanjay Aggarwal: Yeah, that might be, if that's true, I might describe that as a second or third order effect. I think the reality today is that funds raised massive amounts of money. So there's a ton of so called dry powder, meaning funds that were raised that have not yet been deployed into investments.

So there's no shortage of capital today. I think what happened was that there was a market euphoria, literally,

[00:40:50] Audrow Nash: A bubble, basically.

[00:40:51] Sanjay Aggarwal: Yeah, like late 2020, late 2021. And so what happened is the dollars went through the roof, and along with it, valuations really,

[00:41:02] Audrow Nash: Soared.

[00:41:04] Sanjay Aggarwal: And some of the traditional fundamentals that investors were looking for, around, again, market traction, et cetera, were put aside to some extent, as people were chasing the hot deal, so to speak. And now, as the public markets corrected, the private markets tend to correct as well.

And what you see generally in the venture markets is that when you look at very early stage deals, like seed and Series A, they've actually been pretty robust. They haven't really changed that much over the last couple of years, because at those stages, people aren't looking for revenue targets, revenue figures, or whatever.

They're investing in a concept, a team, a market, et cetera, combined with the fact that there is tons of capital out there to be deployed. And so that part of the market has been relatively less affected, and maybe even not affected, in a sense. But as you get into

[00:42:02] Audrow Nash: You said seed and A typically.

[00:42:05] Sanjay Aggarwal: yeah.

So those funds, yeah, the amount of money that's gone into those stages has been flat.

[00:42:11] Audrow Nash: Is that in robotics, you're saying, or in startups in general, or?

[00:42:15] Sanjay Aggarwal: It's definitely true in robotics, and in general, in the broader market. I don't have the exact numbers, but,

[00:42:20] Audrow Nash: Yeah. Just your feeling is interesting.

[00:42:22] Sanjay Aggarwal: Yeah, but as you get into later stages, like B, C, D and beyond, that's where you start to have challenges, which is that maybe the company was overvalued in a previous round, or at least overvalued relative to current valuation metrics. And so now there's a disconnect between what the investor is ready to pay and what the founder is wanting to raise at.

And so it just creates friction in terms of raising rounds. Maybe the metrics never were what they should have been, and they didn't really improve the metrics over the last couple of years, and it just becomes hard for companies to raise money at all. And so there's been a real slowdown in those kind of mid to late stage rounds, as I think price discovery has been an ongoing process.

Again, just meaning how much the investor wants to pay versus how much the founder wants to raise at. And then also, because of those dynamics, people were raising extension rounds, essentially, which is, the existing investors put in a few million dollars more so I can extend my runway before I have to raise again.

So that just created a delay, effectively, in when rounds were being raised.

[00:43:34] Audrow Nash: Interesting. One of the things related to what you're saying that I've heard about, and I'm not sure again if it's true, is that a lot of companies in this kind of peak 2021 era raised at ridiculous valuations. And so what they would have to do to get another round of investment is often do a down round.

So they would have to accept a smaller valuation than they got at the previous round. And no one likes that. It's bad for all the previous investors and looks bad for the company.

[00:44:08] Sanjay Aggarwal: So that, yeah, that's a perfect example. It just slows things down. Eventually the company needs money,

[00:44:14] Audrow Nash: yep.

[00:44:15] Sanjay Aggarwal: and in 2021, people raised very large rounds. And so the runway that they had was maybe long. Typically it would be like 18, 24 months. Maybe they had 36 months of runway or 48 months of runway, especially as they started to cut costs.

Just because the round sizes were so big, they were able to kick the can down the road, so to speak. But eventually, either you get to profitability or you have to raise, and then the reality of, I might have to raise a down round, is unavoidable at that point.

I think that's where the market is working its way through all of that.

[00:44:49] Audrow Nash: Interesting.

[00:44:50] Sanjay Aggarwal: I wouldn't be surprised, like late 2024, you'll see more of that happening, or a company will just go out of business, which you see

[00:44:58] Audrow Nash: That

[00:44:58] Sanjay Aggarwal: once in a while, and more and more of that today as well.

[00:45:01] Audrow Nash: Yeah, it feels like a lot of companies are being squeezed, maybe because of this. It's like too much fertilizer or something. It grows too fast, then it needs more, and then it can't get as much, so then it dies, this kind of thing. What do you think? It sounds like you're saying that this is a necessary correction, in a sense, in the markets or in the valuations, because they were in a bubble, effectively.

And so that bubble has to be worked out, in a sense, and then after, things will return to normal, we're assuming, or what kind of thing?

[00:45:40] Sanjay Aggarwal: Yeah, I'm not sure what normal will be, but there'll be a new normal. But yeah, unlike in public markets, where there's real time pricing happening, private markets aren't subject to that, right? And they just have a way of taking more time for that process to play out.

But it's happening. This is why people are doing layoffs, because people are trying to cut the burn, to extend the runway, to get to better metrics. It's hopefully a healthy thing for the market as a whole, as companies build more sustainable business models. But it can certainly be a painful process at the same

[00:46:20] Audrow Nash: Time, for individuals and maybe companies. Yeah, for sure. The one thing that's interesting, and I don't know if it's related, but I'm curious to see if you do think it's related.

[00:46:32] Why are companies not IPO'ing lately?

[00:46:32] Audrow Nash: I've been hearing that it's very hard to IPO now. It seems like companies are not IPO'ing. Is that shaped by similar forces? Because maybe their valuation is lower and they don't want to do the down round, or something with the valuation at the IPO. I don't know exactly how it works, but any thoughts on why IPOs may not be happening as frequently?

[00:47:04] Sanjay Aggarwal: Yeah, I think there's definitely the dynamic you described, which is that the valuation that you might get at the IPO may not be appealing, certainly relative to your last round or what your expectations were. So I think there's that part of it. I think the other part of it is the market itself, right?

Is the market ready to invest in these companies? One of the things that F-Prime does, actually, is spend a lot of time in the FinTech space. So we just published a FinTech report also, and there's a huge backlog of companies coming. Like, you take Stripe as the poster child of the FinTech era. They've been rumored to want to go public for quite a long time, but they haven't yet. They probably could go public at any time, but they probably have certain expectations about what valuation they want.

And so part of it is the market itself, like how much appetite is there from the market side to support these companies and to give them an attractive valuation. It seems like it's starting a little bit. There's been a few recently, if I remember. But yeah, I think, as a VC, your exit path ideally is either M&A or IPO.

And so if the IPO markets are slow, that obviously affects how you think about your exits and your timelines, and how much you can exit for. And M&A has also slowed quite a bit, as a lot of the large corporate buyers have similarly slowed their,

[00:48:35] Audrow Nash: They're subject to the same forces too.

[00:48:37] Sanjay Aggarwal: Yeah, exactly. Their own stock might be down, they have investor pressure to reduce costs, there are regulatory challenges. There are all sorts of headwinds, I would say, in the exit market more broadly, which is very market driven, I would say.

[00:48:51] High interest rates and how that affects the VC business model

[00:48:51] Audrow Nash: How do you think, and we'll go back into robotics, but I'm curious about your market, or economy, point of view. I'm curious about your economics takes. How do you think the high interest rates are affecting things? To me, it makes it so people are investing in more things that are lower risk and more likely to do better. I've heard a lot of things described as zero interest rate phenomena, because cash was so cheap that you could just throw it at anything, and now you need something valuable. And I feel like robotics has actually been doing okay, but I'm curious about your thoughts on the interest rates and how they've been affecting things.

[00:49:43] Sanjay Aggarwal: Yeah, I think at a simplistic level, like you said, the money used to be free, so to speak. There was no cost. Even if you wanted to borrow money, the interest rates on borrowing were quite low, or if you're taking equity money, et cetera.

[00:49:57] Audrow Nash: Or negative interest rates in some silly places occasionally,

[00:50:01] Sanjay Aggarwal: Yeah, exactly. But I think the simplistic way that it affects things is that people are looking for a faster path to profitability, right? Where you're not burning money, but you're actually making money, right? And so I think it just drives the behavior that investors are certainly expecting: more capital efficient businesses.

Lower burn, maybe being able to, on your last round, get to profitability if you had to. But just less focus on growth at all costs, so to speak, and more focus on capital efficient growth, and hopefully hitting some level of profitability, depending on which stage, obviously, but in the foreseeable future.

[00:50:48] Audrow Nash: Yeah. And growth at all costs, to me, implies looking for a bigger multiple. So they're looking for a hundred or a thousand times on their investment, or something like this. Whereas more profitable probably implies lower risk, which would make it so that maybe it's a lower return. So do you think that the venture capital model is going to be changing, or maybe the average goes higher, because all of the companies become not unicorns, but become 10x to 20x, instead of most of them failing and some of them being 200x, or something like that?

What are your thoughts around maybe a changing venture capital model or I don't know.

[00:51:35] Sanjay Aggarwal: Yeah, I don't know. I probably don't have a super informed perspective on that per se. But at the end of the day, I think in the last few years, you saw some really significant exits through very high valuation multiples. And so that drove some really great returns, I think, for a lot of investors, but that was

abnormal, right? That was not the way exits worked historically. There was a huge run up, a bubble, whatever you want to call it. So I think we're going back to what was normal, as opposed to what was abnormal a couple of years ago.

So I'm not sure that the companies are any more or less risky, per se, but rather, valuation multiples are back to what they used to be a few years ago, or getting closer to it. And correlated to that, companies didn't raise so much money to get there, right?

It doesn't mean you can't still get a great return, but you just do it in a more capital efficient way. Rather than raising 100 million, you raise 50 million, or whatever it is, to achieve an interesting outcome.

[00:52:54] Audrow Nash: Yeah, interesting. Do you think that this will adjust the timeline at all? So if you grow at any cost, to me that means maybe you scale quicker, but if you take 50 million, maybe you have to scale a little slower and a little more efficiently, and it probably takes longer. I've always heard that venture capitalists want, it's like a seven year timeline for funds or something. When they invest, and I'm sure it depends on the stage, they expect that there is a merger and acquisition or an IPO, or someone buys it, or whatever it might be. Do you think the timeline is going to start changing?

Is it going to go longer, or does it stay the same? I don't know. It just seems like something has to give, or it seems like there's a tradeoff in some way, but I'm not sure I fully understand. What do you think?

[00:53:51] Sanjay Aggarwal: But again, I would say that was always the norm, right? I think we're going back to what the market used to be, as opposed to this kind of abnormal period in the middle. And so I don't think it changes the model. I think people just have to reset their expectations, because if your expectation is based on what happened in the last four or five years, then that may not be right.

That's probably not the right expectation, and historically it was not the right expectation either. But I think the exit timeline ultimately is almost more a function of market dynamics, and the market is cyclical in a sense, right? So if M&As are plentiful, then yeah, your timeline to exit is

[00:54:34] Audrow Nash: Maybe shorter.

[00:54:35] Sanjay Aggarwal: Potentially faster. But if the M&A market dries up in the way that it has the last couple of years, then you're going to have to wait longer. So it is correlated to what you're describing in terms of how fast you're growing, but I think it's probably even more driven by the health of the exit market, whether it's M&A or IPO. And if the IPO market's closed, it's closed. There's nothing much you can do about it, no matter how fast or slow you're growing.

[00:55:01] Audrow Nash: Yeah, for sure.

[00:55:02] Challenges in raising B & C investment rounds

[00:55:02] Audrow Nash: So one thing that I've heard, which I thought was very interesting, is that you guys, F-Prime, are investing in seed and Series A typically, and it strikes me that there are quite a lot of investment firms that are investing in robotics companies at the seed and Series A levels.

And it seems like when you go B, C, it gets a bit harder, and maybe you go to, I don't know, Sequoia or some of these really large ones for these later rounds.

How do you think about this, and what do you think about the idea of Series B and beyond being very hard for robotics companies?

[00:55:42] Sanjay Aggarwal: Yeah, I would say that it's definitely the hardest stage to raise money in as a robotics company. You see that historically, and you see it particularly in the last,

[00:55:53] Audrow Nash: You're saying B and C?

[00:55:55] Sanjay Aggarwal: Yeah, absolutely. Because, again, as I said, in the seed and Series A, to a large extent, you're excited about the company, the use case, the team. You're betting on the future, in a sense, right?

And more and more investors are getting involved in those stages as well, because as you think about just the general tailwinds around the industry, whether it's AI or labor shortages or these kinds of things, robotics seems like a great place to capitalize on some of those tailwinds.

And so you see more and more companies making early stage investments where, again, you're betting on the future, what's possible in the future, more than anything. And then as you get to later stages, a D or E round, by that time you have a different set of investors that are just investing based on the performance of the business, like what's the revenue, what's the metrics, all those kinds of things.

And so it's analyzed, to some extent, just like any other category of company you'd invest in. You even see private equity firms starting to get involved at those stages. And again, the investment is underwritten and analyzed in the same way that you might analyze any other investment.

In the middle is the hard part, where you've started to get some traction, you have some metrics, but not that many metrics. Oftentimes, if you're doing well, you have a big backlog of contracts, but they haven't been deployed yet. Or you have some new product that's coming out: you started with a very niche product with a very small TAM, and you have a new product that's coming out that's going to expand the TAM. So you're starting to hit some kind of interesting commercial milestones, but you haven't figured out the whole story yet.

And I think that's where the market is really getting squeezed today, which is that many investors will come back and say, okay, great, we love the story, but come back when you've proven it, effectively, right? And so there's the need for commercial proof points, not just, hey, you're a talented team and you have a great technology and a great product, but I actually need to see real commercial traction, adoption, et cetera.

Those kind of B, and to some extent C, rounds are where you're just on the cusp of getting those, typically. And if you don't have them, then okay, you may not be able to raise at all. And you've actually seen some shutdowns of companies in those situations.

But if you have them, then again, you're in this sort of situation where the investor likes what you're doing, but they want to see more traction. And then what inevitably has happened in the market is companies will raise a Series A extension. So they'll go back to the Series A investors and say, hey, things are looking good, but we need 12 more months of runway to get to more commercial proof points. Will you support us to get there?

And so I think you're seeing a lot of that nowadays. And again, the challenge is really, how do you get enough commercial speed and validation that people can start to see the path of how you get from a few deployments to tens to hundreds of deployments over time.

[00:59:23] Audrow Nash: Yeah, very interesting. That B, C does seem like a big squeeze.

[00:59:29] Sanjay Aggarwal: And we see companies all the time that are trying to raise a B and they still have just pilot revenue, for example, right? That's a very challenging place to be. And some of those companies have ended up just shutting down, because there wasn't enough investor support to give them the runway to actually get to production deployments.

Whereas three or four years ago, you could probably raise on that story: hey, we have a great technology, we're in pilots with these great customers, will you invest? Now I think the bar on performance is much higher.

[01:00:07] Audrow Nash: Yeah, very interesting. So how do you imagine this going over time? One way that I could see it is: you guys at F-Prime, companies like Alley Corp, any other Series A and seed investors do well off of their robotics companies, maybe in five years' time, and then you grow into B and C funding and help make that squeeze a little bit easier. Or is it necessary to have that squeeze? I guess, what's the evolution that you imagine seeing of funding in the B and C space?

[01:00:51] Sanjay Aggarwal: Yeah, I think it'll probably be a little bit less of what you described, and I say that only because, given the size of our fund, we focus on a specific stage. There are other very large funds, they've raised a billion dollars for their fund, and they are explicitly multi-stage funds, right?

So you have a number of those that will do seed and Series A and Series B. And so a little bit of that dynamic may exist with those types of funds, and there's a lot of capital in those funds, but that's not the majority of the capital out there.

Yeah, with those funds, if they're investing in robotics, maybe they dip their toe in at seed and Series A, and if it goes well, they start getting more involved in later stages. But I think the real driver of excitement at the B and C stage is when you start to see more exits, essentially.

[01:01:45] Audrow Nash: Oh, okay.

[01:01:47] Sanjay Aggarwal: all flows downhill, right?

If people see, oh, wow, that was a great outcome, people made a ton of money on that company, and there's not one or two but 10 or 20 examples, then people can start to say, oh, okay, this is what success looks like. And if we rewind the clock, this is what that company that just sold for a billion dollars looked like at the Series B, and this new company that we're looking at seems to exhibit some of the same characteristics, right?

And so I think it'll come more from that angle, which is you have successful companies that have exits, they make a bunch of money for the founders and the investors, and then that paves the way for more and more such companies, because people see what success looks like, essentially, and what it looks like at each stage of an investment.

[01:02:36] Audrow Nash: Interesting. Yeah, I interviewed Bluewhite, and one thing that they were saying, which I found very interesting, is that they feel like they have a personal responsibility as a robotics company.

I believe they just got their, it was either B or C, funding,

[01:02:53] Sanjay Aggarwal: yeah,

[01:02:53] Audrow Nash: and they have a personal responsibility to keep going and do well because it ushers in more great robotics companies. It makes it easier for them to get funding.

[01:03:05] Sanjay Aggarwal: Yeah, one of my beliefs as to why logistics is such a hot area, and in the world of robotics, logistics is amongst the hottest areas, is because there have been successful exits. Kiva was bought for a few hundred million dollars, Six River Systems was bought for a few hundred million dollars. There have been exits, and people can see that, yeah, there is a path to a successful exit in those businesses.

If you look at medical robotics, there've been a number of very large outcomes for medical robotics companies, so again, people can see that there's a path. If you look at agriculture, for example, it's still early days. There've been a couple of acquisitions, John Deere made a couple of acquisitions.

But those were mid-sized, 250, 300 million dollar type acquisitions, which were great in many ways, but they weren't the home run type of outcome for the investors, necessarily. So they were good enough to get people interested, but probably not good enough for people to really pile into the category just yet,

[01:04:08] Audrow Nash: Oh,

[01:04:09] Sanjay Aggarwal: Again, like the path to success has not been proven completely.

[01:04:15] Audrow Nash: and there hasn't been like huge wins, even though there's been some pretty good wins. that's been a thing that I've been hearing too, where you're seeing robotics exits that are like hundreds of millions of dollars, not a billion or more for

[01:04:31] Why we're in the early days in robotics

[01:04:31] Sanjay Aggarwal: Yeah. And even that, we were actually just running the numbers: there have been 25 robotics exits greater than 250 million dollars,

[01:04:41] Audrow Nash: It's awesome.

[01:04:42] Sanjay Aggarwal: 25 may seem like a lot, but it's actually a drop in the bucket in the world of venture capital. If you looked at enterprise software, that 25 might be a thousand, for example, right?

And I don't know what the exact number is, but it's very small in relation to what the broader venture-backed community has produced, and so all it means is it's still early days. There are a bunch of companies out there that haven't exited yet. But I think if you start to see successful exits, in the category more generally as well as in specific verticals, that will spur more investment.

And it just becomes a virtuous cycle, essentially.

[01:05:23] Audrow Nash: Yeah, it's very interesting. Do you have any idea? I would love to see the distribution of the exits. I wonder if that's in the report, where you could see like this many at this much, and,

[01:05:38] Sanjay Aggarwal: Yeah, we actually, when the report comes out, you'll see it there.

[01:05:41] Audrow Nash: I'd love to see

[01:05:42] Sanjay Aggarwal: We had some version of it in last year's report, but yeah, we did something a little bit more detailed this time around. And yeah, it's good, but not yet great.

[01:05:52] Audrow Nash: It's still early. Yeah.

[01:05:54] Sanjay Aggarwal: Still early days, because most of these companies were funded in the last three, four, five years, right?

And back to your point that it may take you seven, ten years to get to an exit, right? We haven't gone through a full cycle, for the most part, to really know the outcome of the investments that were made three, four years ago. We haven't gotten to a full cycle to see how those turn out just yet.

[01:06:19] Audrow Nash: What do you think? So we're in early days, what do you imagine the timeline is for this? Is it one cycle, is it a few cycles, from your perspective? And I know this is wild speculation, but I'd love to still hear your thoughts. What do you imagine is the timeline for robotics companies? How does it look over the next five years, 10 years, 15, 20, or whatever you think would be significant?

[01:06:50] Sanjay Aggarwal: Yeah, so we're obviously bullish, and we spend a lot of time, and I personally spend a lot of time, in the category. So I'm bullish, and I'm hopeful that it will turn out better than not. And as I said, there are a number of companies that are starting to reach some interesting scale across many different sectors.

In logistics, you have companies like GreyOrange, which is doing well, there's a company called Locus Robotics, which is doing well, and there are others that actually have really significant commercial traction, in the hundreds of millions of dollars of revenue, right?

So you can certainly envision those companies having an IPO, let's say, if not an M&A outcome, that is not in the hundreds of millions but actually in the billions of dollars type of exit. And that could be within the next couple of years, frankly, right?

And maybe sooner, depending on the state of the markets. In defense, you have companies like Anduril, right? They've raised insane amounts of money; they were, I think, last valued at eight billion dollars. If they have a successful exit, I think that will really spur a lot of excitement.

And there's already a lot of money going into defense-related robotics companies. Again, I think it creates that virtuous cycle in some of these use cases. Others, like agriculture, as I said, the companies tend to be three, four, five years old, so I think we have another five years to go before you start to see what the results of those early stage investments are.

So I think different sectors are at different places. AV came and went, if you will, right? I think some of the early excitement in AV was because, okay, Cruise got acquired and Neutonomy got acquired, and there were really big outcomes, and I think that in some ways caused investors to pile in.

And unfortunately, it may not pan out ultimately. But as you go into other sectors like logistics, there've already been some initial exits, and there are other companies that are poised to exit soon, literally in the matter of a year or two. And then there are others, like agriculture, where the companies are still really young and it may take another five, six years to get there.

But I think if these things play out the way I hope, it will, again, create this virtuous cycle where people see that, okay, yeah, you can have big exits, you can make a ton of money as an investor in these categories, and as a founder, you can see that, hey, there's a great opportunity here. It'll just spur more activity.

[01:09:23] Will labor shortages affect robotics investments?

[01:09:23] Audrow Nash: So a thing that I think is very interesting is that, when talking to a lot of robotics companies, one thing that is highlighted very often is labor shortages, and that's a big motivation for them. And if you look in, I don't know, manufacturing, logistics, there are massive labor shortages, and they're only increasing, I believe.

[01:09:44] Sanjay Aggarwal: Yeah.

[01:09:45] Audrow Nash: How do you think that will affect investment and timelines for these different robotics companies? Do you think it's a big factor? How do you think about it, I suppose?

[01:10:02] Sanjay Aggarwal: Yeah, realistically, that's a pretty standard part of most robotics companies' pitch as to the why now, in a sense. But I think what we've realized is that alone is not enough to drive adoption, right? In the sense that the solution has to really be foolproof, in a way, right?

Because it can't be that it works 75 percent of the time and doesn't work 25 percent, because then they just won't use it at all, in a sense.

And so I think it's a necessary but not sufficient driver of adoption, ultimately. I think that, because of that, it causes customers to,

[01:10:41] Audrow Nash: Look for things.

[01:10:42] Sanjay Aggarwal: look for solutions, try solutions.

If the solution doesn't fit seamlessly into your workflow, if it doesn't work almost all the time so that there are very few exceptions, if the company is not supporting the solution properly, I think all those things necessarily have to be in place to drive adoption.

Because again, it's a new piece of equipment that's sitting there. Oftentimes, like in manufacturing, it's all or none: you have a robot that's doing the whole system, right? There is no alternative. In many of the newer generations of robotics, it's a mix, they're sprinkled in with humans.

And so if it doesn't work, take Burrow, for example: if Burrow's system doesn't work, there are also people driving around and walking around with wheelbarrows, right? They'll just say, okay, let me put this aside and let me go back to the old way of doing things.

And it doesn't matter what the labor shortages are, they've got to get the job done. So again, I think the labor shortages are a good impetus, but, and this goes back to entrepreneurs understanding use cases, how do you actually build a system that works and fits seamlessly into a workflow so that it works all the time, not some of the time? Because some of the time is as good as never.

[01:12:00] How reliable should systems be? + Human-in-the-loop

[01:12:00] Audrow Nash: Yeah. When does some of the time become enough? Is it 99 percent? Is it 95 percent? Is it 99.99 percent? And I guess it probably depends on the application, but

[01:12:15] Sanjay Aggarwal: I think it depends on your approach in some way. So I'll give you two examples. We're investors in a company called Taleo, and their explicit strategy.

[01:12:23] Audrow Nash: That's the mining one. Is that correct?

[01:12:25] Sanjay Aggarwal: Yes, yeah, exactly. They work on any piece of large construction or mining equipment. Think of an excavator: a massive, multi-ton, multi-hundred-thousand-dollar piece of equipment.

And their strategy is what they call supervised autonomy. So it's basically a human-in-the-loop approach where,

[01:12:45] Audrow Nash: It's a smart approach. Yeah.

[01:12:46] Sanjay Aggarwal: So they retrofit the machine with a kit that enables you to operate the machine remotely, essentially. And so what happens there is that you can automate part of the task.

So the typical task is, let's say you're digging some dirt, and then they have a process they call tramming, which is where you move the dirt from point A to point B and then you dump the dirt somewhere else. And this is done repeatedly over time.

Now for them, you can either do it completely remotely, with the human there doing it, or, if you choose, you can automate the tramming part of it over time. Now, in that case, if the automation doesn't work for some reason, maybe there's some obstacle, who knows what the reason might be, the human is there anyway, right?

Again, 50 percent is probably not good enough, but it doesn't have to be 99 percent on the autonomy part, because there's necessarily a human in the loop in the process. So it's more robust to potential failures. But on the flip side, if you take Burrow, for example, if their system isn't able to autonomously traverse the path it's supposed to traverse, then it's as good as dead, right?

There's no utility to the solution at that point in time, so that has to be much, much more robust. But even for them, it's a low-speed application, so worst comes to worst, it may stop because it found something it didn't expect, and the operator can just start it up again, and it'll just go on its way.

But if the system completely fails, okay, the motor went out and it's just inoperable, that's a problem, right? That is not acceptable.

[01:14:28] Audrow Nash: Yeah, that makes a lot of sense, but it makes me question: why not just always have a human in the loop for this kind of thing? Like in the Burrow case, if it becomes dead because it hits something unexpected, what would be the disadvantage? I guess it's more expensive, but why don't most robotics companies bake in a human that can intervene if it does something funny?

[01:14:57] Sanjay Aggarwal: I think you're seeing that more and more, particularly in the industrial setting, right? This is where the passenger vehicle use cases were hard, right? And I think people did create certain ways to have a human intervene if there's an obstacle that the system didn't understand or whatever.

But I think the stakes are very high in those kinds of cases, and the fault tolerance, even if you have a human remote operator, is just so low that it becomes hard to execute. I think in an industrial setting it's very different, and I think this is why, at least, we see more and more companies with some version of a human in the loop that can intervene, because

if the system stops for a couple of minutes, that's not catastrophic. It's a very constrained environment. If it stops for a few minutes, it's not the end of the world, and it's hopefully not a safety consideration, things like that.

So again, people have different approaches, from Taleo, where the human is always in the loop by design, to Burrow, where the human is in the loop on an exception basis. And I think you see lots and lots of these kinds of approaches, which is, again, why I think the industrial use cases have just proven to be more attractive.

[01:16:14] Audrow Nash: Yeah, I think so too. And it's interesting because even if, say, you are 50 percent efficient with a human operator, so it's failing to operate autonomously 50 percent of the time, now one person is manning two robots. And then you can use that as a way to get into the market, and then you have data and more experience, and you can start automating and keep improving that ratio.

Now it's 75 percent. Now you have one person watching four robots, with perfect math, and it keeps going.
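The "perfect math" here can be sketched in a few lines of Python. This is an idealized model, not anything from the conversation: the function name is made up, and it assumes interventions never overlap, so one operator can always be reassigned the moment a robot needs help.

```python
def robots_per_operator(autonomy_rate: float) -> float:
    """Idealized number of robots one operator can supervise.

    Assumes each robot needs human attention for the (1 - autonomy_rate)
    fraction of its runtime, and that those interventions can be
    scheduled back to back with no overlap (the 'perfect math' caveat).
    """
    if not 0.0 <= autonomy_rate < 1.0:
        raise ValueError("autonomy_rate must be in [0, 1)")
    # Each robot consumes (1 - autonomy_rate) of an operator's time,
    # so one operator covers 1 / (1 - autonomy_rate) robots.
    return 1.0 / (1.0 - autonomy_rate)

for rate in (0.5, 0.75, 0.9):
    print(f"{rate:.0%} autonomous -> {robots_per_operator(rate):g} robots per operator")
```

Under this model, 50 percent autonomy gives two robots per operator, 75 percent gives four, and 90 percent gives ten. In practice, failures cluster and operators context-switch, so real ratios come in lower, but the direction of the curve is the point being made here.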

[01:16:47] Sanjay Aggarwal: Yeah. And frankly, that's very much informed our own view of where to invest. For people that are trying to do a hundred percent end-to-end automation, I think there are many interesting solutions, but again, the bar for performance is really high and it oftentimes just ends up being hard to execute, versus if you figure out the right solution that has a little bit more fault tolerance, where you can have a human intervene.

I think that just makes the path to commercialization that much easier.

[01:17:18] Audrow Nash: Gotcha. Yeah, I agree.

[01:17:20] Common pitfalls to avoid for your robotics company

So speaking of that, what are some lessons we can learn, and pitfalls we can try to avoid, in building a good robotics company?

[01:17:30] Sanjay Aggarwal: The thing that I find most often is that there's a lot of customer excitement for these solutions, right? Which is great, that's a good place to start a business, obviously. But in the course of that excitement, the customer starts envisioning all sorts of other things that you could do, like, why can't you do this and this, right?

And so I think the danger can be that, okay, you start chasing too many different shiny objects, in a sense, right? So I'll give you a simple example. Burrow, as I said, is a good example. They make an autonomous vehicle, and some of their customers said, why can't you pick crops as well, right? Why don't you put an arm on top of the system and pick crops? Sounds like a logical thing from a customer perspective. Obviously, if you're the

[01:18:23] Audrow Nash: Yeah, you already have robots

[01:18:24] Sanjay Aggarwal: It's already out there. Like, why don't you just pick the crops too? Why do I need people to pick? Why don't you pick?

But again, the challenge is that's a very fundamentally different technology, right? An autonomous vehicle is very different than a picking robot. And so I think there's a lot of temptation for robotics companies to say, oh, okay, the customer really wants me to do this other thing, why don't I do that too? And in doing that, I think you can get a little bit derailed. You've got to be clever about constraining the problem in a way that's useful, where there's a large enough market, but that isn't going to take you down a bunch of technology tangents that are just too orthogonal to what you're currently building. So I think that's what I see most often from companies, which is, hey, you're just trying to do too much in the effort to satisfy customer demand, expand the market, things like that. That's by far the most common. And then the second, again, is almost over-engineering,

[01:19:29] Audrow Nash: Oh, definitely.

[01:19:30] Sanjay Aggarwal: Right. There are always ways to make the system better, right?

Better sensors, more sophisticated hardware, whatever it is. What's good enough is a tough call to make.

And being smart about that matters, because again, capital is not unlimited. And so you've got to make the right choices around doing something that's going to work well enough without making it perfect. So I think those are probably the two most common examples.

[01:20:06] Audrow Nash: Yeah, I see those too. And it's interesting, especially for the first one, where companies are trying to find product-market fit and the customer is telling them, we would like this, we would like this. And so it's, oh, I want to go find the good market fit, but it's a complete diversion from what they're working on.

[01:20:25] Sanjay Aggarwal: Yeah. Yeah, absolutely.

[01:20:27] How to make your company VERY attractive to investors

[01:20:27] Audrow Nash: Being that you are an investor, what kinds of things can a company do to make themselves very attractive to investors?

[01:20:40] Sanjay Aggarwal: again, it depends on the stage, obviously, is your

[01:20:43] Audrow Nash: So maybe seed and A.

[01:20:44] Sanjay Aggarwal: Again, having deep domain knowledge, having a strong technical team, all of those are prerequisites. You know, one of the reasons why logistics today is a little bit challenging is that there are a lot of solutions doing similar-ish things.

And so finding a unique use case or market segment that you're going after that is not super competitive, these are just the standard things that you would look for at any early stage company. Going back to the question of, okay, what makes for an attractive B round or C round, what I always feel is important is that it's very hard to pitch an investor on a bunch of contracts that have not been executed,

right? Because inevitably, it's not to say that the contracts aren't real. I think the problem is, are they going to get executed? Are they going to get renewed?

In three months or six months? Or not even that. I'm just saying that they've signed up to deploy a bunch of systems, but is it going to take three months?

Is it going to take six months? It's oftentimes unpredictable, because there are all sorts of things that happen when you're integrating into real-world systems. And they may have other priorities. They may have to upgrade their infrastructure.

They may have to change something out, maybe there's some integration with this other thing. So there are a lot of dependencies that you as the company cannot control. And I think, the contracts are important, but it's super important to have, number one, live deployments that are in production, meaning used every day. Not once in a while, but actually used day in and day out,

[01:22:26] Audrow Nash: Yeah.

[01:22:29] Sanjay Aggarwal: which have high utilization. Which is not, yeah, we did it.

We tried it out for a couple of days

[01:22:34] Audrow Nash: It worked once.

[01:22:35] Sanjay Aggarwal: And yeah, it's off on the side because, oh, we have this problem or whatever. I think that's not convincing. So really focusing on, not a hundred customers, but even just one or two customers that are deeply engaged, using the system day in and day out.

Ideally, they went from one system to two systems to 10 systems, or whatever the appropriate numbers are, and have demonstrated a willingness to really go all in on the product. For most of these companies, it's really a land and expand, where you start with an initial deployment, and the customer tests it, figures out whether it works or not.

And if they're excited, they start buying more. And so until you can prove that, it doesn't matter how cool your technology is. That's the ultimate proof that at least I, as an investor, look for when looking at businesses.

[01:23:27] Audrow Nash: It makes good sense. I would imagine most investors do that too. You want to see some buy-in, you want to see that it's working, these kinds of things.

[01:23:34] Sanjay Aggarwal: Yeah.

[01:23:36] Biggest challenges in robotics

[01:23:36] Audrow Nash: What, from your perspective, are some of the biggest challenges in robotics?

[01:23:44] Sanjay Aggarwal: There are a few. One is obviously that these are real physical systems, so a lot can go wrong, and with that, the cycle times are just inherently longer, right? There's usually a piece of hardware and a piece of software, and yeah, you can iterate on the software pretty quickly, but there may be hardware issues, right?

The sensor isn't quite working, or the motor that you picked isn't quite working, or the arm is out of tolerance, or whatever it is, right? So there are all these physical limitations that increase cycle times. I think that's one thing that you have to recognize and also figure out how to navigate, because startups are all about fast iteration.

That's the mantra of startups, and how do you do that in a hardware-oriented robotics company is a bit of a different challenge, but it still needs to be achieved in some way. And related to that is just the amount of capital that may be required, right?

You need a much broader set of people. It's not just a bunch of software engineers. You probably need people that understand how to build the infrastructure part of the software, who understand perception systems, who understand autonomy. Then you need mechanical engineers, systems integration people.

So there are a lot of different disciplines that you need to hire for, which, combined with longer cycle times, just means that your capital requirements might be more than a classical software company's. And so how do you navigate that capital requirement, not only in the early days but even as you scale? Because funding all of it with equity alone can become quite expensive.

And so, especially as you're starting to get into production and you have your supply chain up and running, can you find other opportunities, maybe some kind of debt funding mechanisms, to offset some of the capital needs that you have as you scale up?

[01:25:42] Audrow Nash: Interesting. Yeah, makes sense.

[01:25:44] Opportunities for new robotics companies

[01:25:44] Audrow Nash: What do you think are some of the biggest opportunities for new robotics companies? Like where, if you were an entrepreneur, where would you be looking to start a company?

[01:25:55] Sanjay Aggarwal: Yeah, and this is again a little bit informed by what we've been looking at. Areas like logistics have been, in some ways, the most popular area of robotics, but I'd say today, if you're an early stage founder, it's pretty hard to find a net new use case that nobody has thought of, because there are versions of almost everything, right?

There's always something new, but the bar is pretty high to find something that's different and better than what already exists in the market. That's certainly true in logistics and some of these other popular areas. But if you're going into agriculture, for example, it's still really early days; there's a lot of stuff that can potentially be automated.

There have been companies trying to do harvesting, but there's other stuff, whether it's weeding robots, or spraying pesticides, or what Burrow does, just moving crops around the field. I think it's still really early days; those companies are still pretty nascent in terms of what they're trying to do now.

And I imagine there are tons of additional use cases that haven't even been envisioned yet. So to me, part of it is finding industries that are probably underserved from a technology perspective today, agriculture being a good one, construction being another one, where it's still pretty early days.

There are a lot of companies trying to do different food-oriented robotics, around how do you automate food preparation, for example, or food assembly, things like that, which are still, I think, pretty greenfield opportunities. So part of it, in my view, is that for early stage entrepreneurs, those areas are just less competitive, shall we say.

And that, to me, is always a good place to start, if you can find the right use case.

[01:27:53] Audrow Nash: Yeah.

[01:27:53] How would you start a new robotics company?

[01:27:53] Audrow Nash: Now this may be too big and broad of a question, but okay, you pick a domain, and you're early stage, basically it's you and maybe one other person who wants to start a company with you. How would you go about starting a robotics company? What would be an efficient path to get growing and everything?

[01:28:24] Sanjay Aggarwal: Yeah, I think finding the right team is probably a good place to start, and the combination of technical expertise and domain expertise is the magic combination here, where you have somebody that does understand the market and can go and talk to potential customers and validate ideas and things like that, along with a technical co-founder who can build, or at least lead the creation of, the initial prototypes. Again, these in general are systems that just require more resources; it's not necessarily one person who can sit and code up the prototype.

But that being said, there's a lot more off-the-shelf stuff that you can buy: off-the-shelf robotic arms, off-the-shelf small vehicles, things like that you can at least build initial prototypes on top of. But I think the starting point is definitely to figure out what problem you're trying to solve, right?

And having somebody on the team who comes with a deep understanding of the industry and can go deep and talk to potential customers and validate ideas before getting too far in terms of building stuff, because you can waste a lot of time building stuff that nobody wants.

That can be a very expensive proposition.

[01:29:50] Audrow Nash: And then, I suppose some people or some small teams may need to seek funding early just to start working on something, but when do you think companies should go for a seed round or Series A funding? How do you know you're ready for either of those rounds?

[01:30:15] Sanjay Aggarwal: I think today, for Series A at least, you need to have initial customers, right? I think it's very hard if you don't. It doesn't have to be the final machine or device, but you need to have a device that's probably gone through a couple of iterations, is good enough for people to deploy, ideally in some initial production use cases, and start to use and validate that it works, that it delivers ROI, and that the customer is excited about it.

There's no revenue target or whatever, but you want at least one, ideally a couple of customers who are using it, happy to use it every day, and getting demonstrable value. I think that's the bar for a Series A, at least.

And seed, I think seed can be all over the map, right? There are plenty of seed investors that will invest in just a business plan: here's a PowerPoint presentation with our idea. That's typically for more proven entrepreneurs, so to speak, like maybe you've already built a business before, or you were an early employee at Kiva, and so you have some credibility around what you're trying to build. It ranges everything from that to, okay, yeah, we've built the initial prototype, we've done some proof of concept to demonstrate viability.

Things like that may be required. And there are all sorts of labels: friends and family rounds, pre-seed rounds, and so on. But I think it's very situational, if you will, depending on your own credibility as an entrepreneur, the team that you've built, the use case that you're going after, and, frankly, just finding an investor with whom all of those things resonate.

[01:32:12] How to pick a good investor for your Series A?

[01:32:12] Audrow Nash: Yeah, definitely. And then, for the person or the group starting a company, what should they look for in an investor? Do you just want anyone who will give you money? I'm sure you want some expertise.

[01:32:24] Sanjay Aggarwal: That's probably a good starting point to get somebody who's gonna give you money. But,

[01:32:29] Audrow Nash: Definitely, ha.

[01:32:30] Sanjay Aggarwal: Yeah, there are definitely more and more investors that are active in robotics. So I always tell people, you can go and try to meet every investor out there, right?

But I think the most productive path, at the end of the day, is going to be, number one, investors that have done deals in this space before, right? So they're not just saying, hey, I'm going to take a flyer on my first robotics company, but rather, oh yeah, we've done two or three, we understand what this looks like, right?

Because it has its own somewhat unique dynamic in terms of the path. And as I said, oftentimes for a Series A there's the Series A extension, and maybe multiple Series A extensions, right? So ideally you want a Series A investor who has the ability and the willingness to put more money in before a proper Series B, right?

So that may be a slightly larger fund. I think it's about getting investors that have experience in the category, that understand what it means and what this looks like, and who are able and willing to support the company through potentially a couple of different rounds before they raise their next major round.

If they have experience in related spaces, I think that can always be helpful. Obviously you don't want an investor who's invested in a competitor. But say they've done a couple of deals in logistics and I have a logistics startup: they understand, and they can actually be helpful. Even better, they can help you think about how to do go to market, because they have other companies that have sold into the logistics space.

And so they can be helpful for how you think about your own go to market strategy, for example. So I think these are the basic starting points. And as I said, there are just more and more such investors out there, so at least the opportunity set is getting wider, which is great.

And then it's just a matter of finding the right party, one who's excited about your vision.

[01:34:35] Audrow Nash: Hell yeah. Awesome. Let's see.

[01:34:39] Audrow Nash: Going back to the State of Robotics report, was there anything in it that you found particularly surprising?

[01:34:48] Sanjay Aggarwal: Yeah. When we did it last year, this shift from AV to vertical robotics, I think we intuitively thought it, but it was surprising how rapidly the shift was happening, in some ways. So I think that reinforced our own view and gave some further impetus for us to say, hey, yeah, this is probably the right place to be focusing our efforts.

I think the second thing was that the exit markets were still really early. There were probably fewer exits than I might've imagined, in a sense. And again, I think it comes back to the fact that we're still in the very early days of this market. It's only a few years old in some ways, at least from a venture capital perspective.

And just the number of exits is small. Hopefully it'll grow bigger, but it was probably a little smaller than I quite anticipated. Again, I think that is changing and will change pretty rapidly. But that was a little surprising.

And then the third thing was, again, what I call the squeeze in the Series B and C rounds. There is a lot of excitement in the early stage. Even Y Combinator, the bellwether of all early stage investing, has robotics as one of their core focus areas this time, right?

There's obviously a lot of early stage excitement, but the difficulty of raising the Series B and C was probably more pronounced than I fully appreciated. Again, I think it just puts a focus on getting to the right metrics, getting to the right proof points, in a way that wasn't as critical a few years ago.

[01:36:40] Audrow Nash: Did you see any evolution between last year's report and this year's report? Were there any vectors of how things are changing?

[01:36:50] Sanjay Aggarwal: 2021 was just an anomaly in general, and so I think everything was looking great then. 2023 was more of the same as 2022, but just a lot smaller, right? If you look at the market overall, I think 2022 itself was off like 30, 40 percent, and then 2023 was off yet another 40, 50 percent.

It's pretty marked how much it's fallen, although a lot of that's been driven by the drop in autonomous vehicle investing. If you look at vertical robotics, as we call it, that has definitely shrunk, but not as much as the rest of the market.

[01:37:31] Audrow Nash: Gotcha. What do you expect if you look out another year? Do you expect that trend to continue? Do you expect it to pick back up a little bit? Any thoughts there?

[01:37:39] Sanjay Aggarwal: I'm hopeful that 2023 will be the bottoming of the market, in a way, and that things will start to pick up again. Again, I think entrepreneurs have heard from the market as to what they need to focus on, and so I think they'll just be way more focused on delivering.

These are all super talented people, and I think they will adjust their own strategies to deliver on some of the expectations that people have, and so they'll just be better equipped to raise money as well. And as I said, the early stage activity is hopefully a precursor for later stage activity, right?

Just the sheer number of companies that are raising: they won't all be successful, but a lot of them will be. And there's been a lot of money that's gone into those companies in the last two, three years. So as those companies mature, I imagine there'll be a lot of high quality companies coming out, ready to raise their B and C and so forth.

So again, we're optimistic. I think last year in some ways was a wake up call for founders in terms of what they really should focus on from a business building perspective. And I would think and hope that in 2024 and beyond, things will start to pick up again.

[01:38:59] Audrow Nash: Gotcha. And what do you make of Y Combinator focusing on robotics? I think that's very exciting, but what are your thoughts on it?

[01:39:11] Sanjay Aggarwal: Yeah, I'm no expert on Y Combinator, but my general sense is that they've evolved over the last few years. In the early days it was all about kind of consumer software and then enterprise software.

I think they've always tried to be, hopefully, a step ahead of what the next trends are in the market. And so you've definitely seen a lot more focus for Y Combinator on emerging segments of venture, not the traditional what's your next consumer app or what's your next piece of enterprise software.

That's always there, and it always will be, but you see them talking a lot more about climate related startups, or robotics in this example, or, they were probably early to the crypto craze, all of that kind of stuff. Not that they get it right all the time, but I think they're just a leading indicator of what investors are going to be, or are, excited about going forward.

'Cause again, investment is ideally all about betting on the future, not the past, right? And so you want to find the next big thing, so to speak, as opposed to just replicating what was successful in the past.

[01:40:27] Audrow Nash: Definitely.

[01:40:28] Future of robotics

[01:40:28] Audrow Nash: And then, wrapping up, what do you imagine for the future? Like, where do you think robotics is going to be in 10 years, say?

[01:40:39] Sanjay Aggarwal: Really hard to predict. The technology is changing pretty rapidly, and so with the capability of what these systems can do, I think it's hard to envision exactly what that's going to look like. I just think that the range of use cases will start to expand significantly.

I think today, to some extent, the use cases are constrained by what the technology enables, right? And that will shift as the technology expands, because modern robotics is fundamentally all about dealing with unstructured environments, right?

Historically, the robotics that existed for the last 50 years was around highly structured environments doing highly structured things. It wasn't about perceiving the environment. It was about moving this motor from point A to point B, and just doing that over and over again.

It was very structured, very predefined. Now it's all about unstructured environments: how do you navigate in the real world? And I think that ability is going to continue to expand, right? The ability to perceive the world and deal with uncertainty is obviously the core of what modern AI is enabling us to do.

And so I think that opens up the range of use cases in an increasingly bigger way. So I think there will be two things. One is that these companies that have been funded today, not all of them, but a lot of them, will become really successful, and they will start to be out there.

You'll start to see them. Maybe not on your sidewalk, but certainly if you go to a factory, or a farm, or a construction site, I think they'll be much more ubiquitous than today. And then number two, I think you'll start to see all sorts of use cases that you could never have imagined a robot could do, because they've been enabled by much more sophisticated perception capabilities.

[01:42:35] Audrow Nash: Awesome. All right, Sanjay, thank you. Great speaking with you and hearing your perspective.

[01:42:42] Sanjay Aggarwal: Yeah, thank you very much for the time. It was, a fun chat.

[01:42:45] Audrow Nash: You made it!

What do you think? Are you as bullish on robotics as Sanjay? Isn't it surprising that there have only been 25 or so robotics exits above 250 million? What do you think the timeline is for a 10 times increase in this number? I bet it'll be faster than we think.

If you want to check out the State of Robotics report, the link will be in the description.

See you next time.

[00:00:00] Episode intro

[00:00:00] Audrow Nash: I'm sure you've heard about the Robot Operating System, or ROS. If you haven't, it's freely available open source software that makes it much easier to build robots.

What you may not know is that the Robot Operating System is one of the projects maintained by Open Robotics. Other projects include the Gazebo simulator and OpenRMF.

Open Robotics had a non profit part and a for profit part. At the end of 2022, the for profit part of Open Robotics was bought by Intrinsic AI, and we've been maintaining and investing in Open Robotics projects from Intrinsic, which you'll hear about more in this interview.

I say we because I was at Open Robotics as a software engineer and I came over to Intrinsic with the acquisition.

Anyways, it's been a little over a year since the acquisition and now the non profit part of Open Robotics is announcing a new governance structure called the Open Source Robotics Alliance, or OSRA, which is the main focus of this interview.

From my perspective, this is a great move for Open Robotics' projects, and it follows successful open source software projects like those of the Linux Foundation. I also think it will be better for companies that rely heavily on any of Open Robotics' projects.

And I believe it will be better for contributors to the community as OSRA formalizes the mentorship process and has a way for contributors to have their voices heard.

Also, a shout out to OSRA's founding sponsors.

At the Platinum level, we have Intrinsic, NVIDIA, and Qualcomm; at Gold, Apex and Zetascale; at Silver, Clearpath, Ekumen, eProsima, and Picnic.

And then Silicon Valley Robotics as an associate, and Canonical and Open Navigation as supporting organizations.

I think you'll like this interview if you're interested in the Open Robotics and Willow Garage lore, want to learn more about the Open Robotics projects after the Intrinsic Acquisition, or want to be part of the next stage of development for the Robot Operating System or any of Open Robotics' other projects.

I've included a link to learn more about OSRA in the description, and with that, I hope you enjoy the interview.

[00:02:27] Audrow Nash: Tully, would you introduce yourself?

[00:02:31] Tully Foote: Hi, I'm Tully Foote, the community advisor to OSRF and open source lead at Intrinsic.

[00:02:38] Audrow Nash: Awesome. And Geoff.

[00:02:40] Geoff Biggs: Hi, I'm Geoff Biggs. I am the CTO of the Open Source Robotics Foundation, otherwise known as Open Robotics.

[00:02:47] Willow Garage and Open Robotics Lore

[00:02:47] Audrow Nash: Awesome. And so I would like to go back to the beginning and talk about Open Robotics' beginning, starting with Willow Garage. So Tully, would you start by telling me about Willow Garage?

[00:03:02] Tully Foote: Yeah, Willow Garage was a robotics research company in Menlo Park. I joined it right out of school, and it was really a great experience. A lot of what we were driven by was mission and impact. And through that, at Willow, we started the ROS project in collaboration with Stanford and several other partners and universities.

And it was a great time. We were able to go out and build the infrastructure that we thought was the most important to be able to bring robotics to the greater community, in particular leveraging the open source ethos to help with sharing and communication, building a community around the world.

[00:03:49] Audrow Nash: Hell yeah. And, just to make it a bit more tangible, it was centered around the PR2 robot, that large robot with arms and the really iconic, I don't know, depth or stereo camera on the top.

So it was centered around the PR2 and

[00:04:08] Tully Foote: Yeah. So the PR2 was our main platform. We were really focused on having impact in the robotics research community, and we wanted to have a common platform for researchers around the world in robotics. And so we built the PR2 and were able to distribute it around the world. And with the PR2, we also built ROS to be the software that people could collaborate with.

Because one of the things is, if we both have robots and we run different software, it's very hard to collaborate. But by providing an open source reference design for the PR2 that everyone could pick up and build on, we actually had people that received PR2s who had been practicing in simulation using the open source software.

They extended it with their research and were able to have publishable research the week they received their robot, which was very different from the state of the art prior to that, where each major research institution had their own bespoke framework for doing their integration tasks. And if you wanted to reuse or try to reproduce the results of another university, you generally had to re-implement their full algorithm and try it out.

[00:05:23] Audrow Nash: And what timeframe was this? Like about when was Willow Garage active?

[00:05:30] Tully Foote: Willow Garage was active from late 2007 until about 2013.

[00:05:38] Audrow Nash: Awesome.

[00:05:38] Player and ROS lore

[00:05:38] Audrow Nash: And Tully, you've mentioned ROS, which is the Robot Operating System. But Geoff, you were involved with Player at the very beginning, so tell me what Player was, how that turned into ROS, and how you came to be involved too.

[00:05:54] Geoff Biggs: I will try to be brief. It's a long history. If you go right back to 1999, back then every robot had its own software from the maker, and they had very fixed ideas about how their robot should be used. It was pretty difficult, people trying to share code and so on. And then a bunch of guys at USC, the University of Southern California I think it was, one of them named Brian Gerkey, who you may have heard of,

they came up with this idea for a simulator, and then on top of that they built this thing called Player, which allowed you to basically write software for your robot and share it with other people. And they thought, hey, this is pretty cool, let's open source it and see what happens. And they did, and it exploded.

The whole world started using it. There was a point in about 2003, 2004, when if you went to a conference and said you weren't using Player, people would look at you like you were stupid. It was like ROS is now, back then, and this was 20 years ago, which is amazing to think about. During my PhD, back in about 2003 and 2004, one of my lab mates and I were looking for new software for our robot, because the existing software was pretty awful. We came across Player and tried it out, and it was pretty cool, but it had some problems: it couldn't handle shifting sensor frames, which was a bit annoying.

And so we decided, oh, we'll fix that. And as part of fixing it, my lab mate in particular, a guy called Toby Collett, pretty much rewrote the entirety of Player. And we sent it in to Brian and said, hey, we've made it better and fixed all these problems. And they said, you guys are awesome, you can be developers now.

So we ended up being developers on Player and contributing to it for a few years, which was pretty cool. And that's how I got to know Brian. A few years later, Brian sent me an email saying, hey, I'm starting this new company, Willow Garage, and we're making this thing called ROS.

It's going to be pretty awesome.

[00:07:51] Audrow Nash: Hell yeah. And just to describe it for those who are not familiar, Geoff, what was Player, and what task did it accomplish that made it take off?

[00:08:03] Geoff Biggs: Yeah, Player was like an early iteration of ROS in some respects. So ROS is very distributed: every node is usually its own process, though nowadays you can compose them. In Player, all of those were what we call drivers, and they were all loaded into the Player server.

So they were all built into one binary, and you had a configuration file that said what drivers to load and so on. And they talked to each other: drivers would publish information and could subscribe to information, so it had a predecessor to topics. It's a similar concept in terms of publish and subscribe, but a different way of achieving it. And then you had Player clients, which sat on your desktop or on another robot or whatever, and Player servers, which ran on the robot to control all the hardware based on the commands they received.

[00:08:53] Audrow Nash: Gotcha. Yeah, it's so cool to see the evolution of software becoming more and more reusable, so people don't have to keep reinventing the wheel. Just awesome.
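As an aside for readers, the in-process driver model Geoff describes can be sketched in a few lines. This is only an illustrative toy in the spirit of his description, not Player's actual API; all the names here are invented:

```python
# Toy sketch of the in-process model: all "drivers" are loaded into a
# single server process and talk via publish/subscribe.
# Invented names for illustration only; this is not Player's real API.

class Server:
    def __init__(self):
        self.subscribers = {}  # topic name -> list of callbacks

    def load_driver(self, driver):
        driver.attach(self)  # the driver lives inside the server process

    def publish(self, topic, message):
        for callback in self.subscribers.get(topic, []):
            callback(message)

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

class LaserDriver:
    def attach(self, server):
        self.server = server

    def read_hardware(self):
        # Pretend we read a scan from the sensor, then publish it.
        self.server.publish("laser", [1.0, 2.0, 3.0])

server = Server()
laser = LaserDriver()
server.load_driver(laser)

received = []
server.subscribe("laser", received.append)
laser.read_hardware()
print(received)  # [[1.0, 2.0, 3.0]]
```

The contrast with ROS is that here everything runs in one binary, whereas ROS runs each node as its own process and moves messages between them.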

[00:09:06] Willow Garage and the Birth of Open Robotics

[00:09:06] Audrow Nash: And then, back to Tully. Okay, so Willow Garage is operating. How does Willow Garage turn into Open Robotics?

[00:09:17] Tully Foote: Yeah, so at Willow Garage we spent a lot of time building ROS up and building the community. But there were a lot of people that were rightfully a little bit worried about what Willow Garage's intentions were. It's a for profit company, and we were doing open source work, proving a very good track record, building community, and getting people working with us.

But what we wanted to do was make sure that there was a community steward that could be a little bit more independent than Willow Garage. Willow Garage was building the PR2 and looking at other products. There were seven different companies that spun out of Willow Garage, and one of the ones we created was the Open Source Robotics Foundation.

And this was set up to be the hub and steward of the community as a non profit, such that everyone would know that it's a neutral party that's there, specifically focused on the mission of promoting open source software and robotics.

[00:10:24] Audrow Nash: And why is it important to have a steward? Why was this a risk to companies who might consider using it? Or why does the community need a steward? What is the role?

[00:10:40] Tully Foote: The steward is not necessary. There are a lot of good open source projects that are maintained and operated by for profit entities. But a neutral steward helps facilitate the community building. One thing you could potentially worry about is that if the project is controlled by a for profit entity, they can drive it directly in the direction they want.

For their purposes, which might be competing with you. So if you're contributing to a project that some other company, potentially a direct competitor, controls, you might be worried about whether your support of the project is actually helping them more than it's helping you.

[00:11:23] Audrow Nash: Okay.

[00:11:23] Geoff Biggs: If I can

[00:11:24] Audrow Nash: I would also think, oh yeah, go ahead.

[00:11:26] Geoff Biggs: There is also something I've learned in the past year, which I never thought about much before: the legal IP protection is actually surprisingly important for companies, like logos and trademarks and all that sort of stuff. Companies care about that, apparently, so we have to protect it.

And it's better to have that done by a neutral steward than by one company, which might suddenly decide on licensing fees or something like that.

[00:11:53] Audrow Nash: Oh, okay. So there's all that, and then the perspective of being maintainers and developers of the code. So OSRF, that was the nonprofit arm. When did OSRC come to be? I don't know, maybe Tully, do you know?

[00:12:15] Tully Foote: Yeah.

[00:12:16] Audrow Nash: The for profit side of Open Robotics.

[00:12:18] Tully Foote: The way that OSRF operated was that it did consulting services for companies to pay for the staff, and with the overhead we also did the open source maintenance. This is a familiar model for many open source projects. I think the best analogy is the Mozilla Corporation, which did a very similar thing.

It started off as a non profit, but your non profit status can be threatened if you're taking too much contracting and consulting work, which could be considered work not for the benefit of the public, but for the benefit of a specific company or individual. And so the consulting arm was spun out into a separate company.

That's where the contracts went through, and that's where all the staffers were employed.

[00:13:13] Audrow Nash: Awesome. Okay. So that's very clear. So what were you guys?

[00:13:17] Tully and Geoff's roles in Open Robotics

[00:13:17] Audrow Nash: So you, Tully, you joined right out of school. Geoff, you came along at Brian's invitation. What were you guys involved with at Open Robotics? What was your trajectory of roles? Maybe start with Geoff.

[00:13:33] Geoff Biggs: Mine's actually a bit different from Tully's, so I, I didn't join Willow Garage at all, and I didn't actually join Open Robotics until 2020, I think.

It was halfway through the pandemic. Yeah. so Brian invited me right back at the beginning, but at that point I was doing a postdoc in Japan and quite enjoying myself.

I had literally just started the postdoc three months earlier, and I felt I wanted to give it a go for a couple of years. And Brian said the offer's always open. I was working as a researcher in Japan and I was quite happy, and then I found a reason not to leave Japan, because I got a girlfriend and then got married.

So that kind of kept me glued here for a while. And then after a while I got sick of how the research world of robotics was going, just a bit too many pointless papers, I felt, rather than actual real world impact. I wanted to be more involved in what I saw as basically the robot revolution that was starting to happen, mostly thanks to ROS. So I got an offer from a startup in Japan and went to join that. And then after about a year and a half, I figured that this was not quite what I wanted to be doing. It was too focused on one specific thing.

It was too focused on one specific thing So I called Brian up and asked him if the offer was still open. He said yeah, and now we do remote as well And so I ended up working for Open Robotics Remote from Japan. Yeah, and then just before the acquisition happened, I was called up and said, hey, this is happening But we want you to go to the foundation and help build it up to be much stronger for the community It's a winding path really Yeah,

[00:15:21] Audrow Nash: Yeah. And we'll be talking about the acquisition lots later, so that's a teaser for what's coming. But Tully, tell me about your path, 'cause you've been involved right from the beginning, I think.

[00:15:33] Tully Foote: Yeah, I was involved with ROS before ROS had a name or the current code base; we had early prototypes. Indeed, we started off in Subversion, and part of our messaging early on was that we were going to be a proper open source project and do it on SourceForge, if you remember that from way back in the day. We were there for quite a while at Willow Garage, doing all our early prototypes.

[00:16:02] Geoff Biggs: If you look on SourceForge for, I think it's "personal robot" something or other, you'll probably find the old project still sitting there.

[00:16:29] Tully Foote: And once we got to, hey, we think this is the right one, I was asked to build the first test library, where we would be able to validate that ROS seemed to work as a structure and that somebody could build a library on top of it. From my history in the car space, keeping track of localization and everything, transforms are always the biggest challenge.

And so what I did was write the transform library as a proof of concept that we could have a library that leveraged the power of ROS in a distributed system. And

[00:17:08] Audrow Nash: So you wrote the original TF library?

[00:17:11] Tully Foote: Yeah.

[00:17:12] Audrow Nash: That's awesome. I feel

[00:17:13] Tully Foote: So from there I, sorry, go ahead.

[00:17:15] Audrow Nash: I've been involved in Open Robotics and now Intrinsic for a few years, and obviously I've worked very closely with Tully, and Geoff, we bump into each other pretty often too. But it's so cool to hear both of your paths.

'Cause I didn't know a lot of this about you, Tully. Sorry to interrupt, but keep going. It's just so cool.

[00:17:39] Tully Foote: Yeah. So I started off there writing the libraries, continued working, got into more release management, did some management, wore lots of hats at Open Robotics.

[00:17:52] Audrow Nash: One thing that'd be an interesting dimension is the hat you were wearing and roughly the number of people that were involved at Open Robotics. So when you started getting into releases, was it 10 people, 15, 20, who knows?

[00:18:09] Tully Foote: Oh, the release management started off back at Willow Garage. I was the ROS release manager. I think my first one was Groovy.

[00:18:21] Audrow Nash: What'd you say?

[00:18:22] Tully Foote: we

[00:18:22] Geoff Biggs: Willow Garage had about 60 full time engineers as I recall.

[00:18:27] Tully Foote: yeah,

[00:18:28] Audrow Nash: Did they all?

[00:18:29] Geoff Biggs: quite a big team.

[00:18:30] Tully Foote: About 60 full time engineers, and about 40 contractors and interns in the building at peak at any given time. So it was about a hundred people.

[00:18:36] Audrow Nash: Fun.

[00:18:37] Geoff Biggs: Plus the chef. Don't forget the chef.

[00:18:41] Audrow Nash: yeah, I

[00:18:42] Geoff Biggs: There was a full time chef on staff at Willow Garage.

[00:18:45] Audrow Nash: oh, a chef. Oh man. I wish

[00:18:48] Geoff Biggs: The food. It was really good food.

[00:18:50] Audrow Nash: I missed this. I was a little too late to everything and I missed the great Willow Garage days, so now I get to hear about them through these interviews nonstop. But it sounds amazing. And there was a chef, that's great. Okay, sixty people at Willow Garage, roughly, plus a bunch of interns and things like this.

And then how many people came to Open Robotics with Brian? Tully, I assume you were there. Was it just a handful, or was it a large part of the group, or how did that go?

[00:19:25] Tully Foote: It started moderately small. Just for clarification though, the ROS team at Willow Garage specifically was only about six or seven people. A large fraction of Willow Garage was actually really focused on the research side and building out a lot more of the high level tools, high level research, and PR2 specific things.

But the foundation got started with its first contract, which was with DARPA to support the DARPA Robotics Challenge, and the team started

[00:19:58] Audrow Nash: The vehicle one, correct?

[00:20:00] Tully Foote: This is the humanoid walking one.

[00:20:02] Audrow Nash: Oh, the humanoid walking one. Okay.

[00:20:03] Geoff Biggs: Yeah, this was after Fukushima, and so DARPA had a big thing about humanoids for disaster response.

[00:20:11] Tully Foote: That was the main project that kicked off the foundation, and it actually started as just Gazebo. The ROS team originally stayed at Willow Garage.

A little bit later, the foundation got funding and three of us transitioned over: myself, Dirk Thomas, and William Woodall.

[00:20:31] Audrow Nash: Okay, so cool to hear about this. Because, yeah, Dirk and William, it's so cool to see that they've been contributing the whole time.

[00:20:41] Tully Foote: Yep.

[00:20:42] Audrow Nash: Okay, so you got some initial funding, and a few people came over: you, Dirk, and William. And I suppose Brian was already there, so it was roughly you four, maybe a couple other folks?

[00:20:54] Tully Foote: I think at that time the Gazebo team was about six or seven already. They had started building up to support the Robotics Challenge. So we came in as a smaller portion of a slightly bigger organization, but in total, like 10.

[00:21:14] Audrow Nash: You were helping actively with the development of ROS, and then you wrote the TF library, proving that you could build libraries on top of it. And then how many years went by?

Or I guess, how many years did Open Robotics have the commercial component? 'Cause clearly OSRF is still going.

[00:21:44] Tully Foote: Yeah, the foundation got started in 2012. It's still going. I can't say off the top of my head when OSRC got incorporated and we restructured. I want to say that was only,

[00:21:58] Geoff Biggs: 2013, I think. I remember Brian telling me about it at ICRA in Anchorage, which I'm pretty sure it was 2013.

[00:22:08] Audrow Nash: Cool.

[00:22:09] Geoff Biggs: Yeah. It was a while. it was around for quite a while.

[00:22:13] Why make ROS 2

[00:22:13] Audrow Nash: 10 years or so before the acquisition. Okay. And then Tully, fast forwarding through the evolution of Open Robotics. what were some of the larger milestones between 2013 and 2023?

[00:22:32] Tully Foote: Especially for ROS, it was the release of ROS 2. Willow Garage and that legacy were very focused on ROS 1. And when we started at Open Robotics, one of the things that was really foundational was a workshop we ran in early 2013, which was called ROS for Products.

And we talked to a lot of people about how we could take ROS from the research field, which we'd been very focused on in ROS 1, to the commercial space. And there were a lot of things that we identified, such as cross platform support, embedded targets, installation targets, modularity, and reliability in terms of determinism for a startup, and things like that.

These were the things that we talked about with a bunch of product managers. We had people from NASA there, we had people from autonomous driving, lots of people using ROS who wanted to use ROS in different spaces. For example, our collaborators at Bosch prototyped an autonomous lawnmower with ROS at their Palo Alto research facility, and they got it green lit for being turned into a product, which you can now buy.

But the first thing that the product team did was throw out all of the research code written in ROS 1, because they weren't able to trust it to work. The research teams trusted it, but the product teams did not. And we wanted to build a new version of ROS which product teams could trust. And that was the impetus for ROS 2.

And so at Open Robotics, we basically took that from the ground and built it all the way up to ROS 2, which was released several years ago now. That's now what we consider mainline, and it's being used in products all over the world.

The, biggest deployment is all Roombas newer than i7, I believe.

[00:24:35] Audrow Nash: Hell yeah. I have one in my

[00:24:37] Tully Foote: millions of robots.

[00:24:39] Player and ROS 2

[00:24:39] Audrow Nash: It's so cool. It's so interesting. So we had ROS 1. ROS 1 came out of Player, which came out of Willow Garage. Was Player at Willow Garage?

[00:24:49] Geoff Biggs: There's no direct connection between Player and ROS 1. ROS 1 grew out of a project called Switchyard, which was done at Stanford. It was a way to try and improve on the concepts of Player, as part of its goals. But there's a lot of intellectual legacy, because a lot of the people who worked on Player also worked on ROS, and Brian was the leader of both of them and so on. It's all mixed up, but there's no direct code legacy that I'm aware of, anyway.

[00:25:17] Audrow Nash: Makes sense. okay, but we had ROS 1 and ROS 1 was heavily used in the research world, but then now Open Robotics gets up and running, it gets funded, it's getting contract work, and then ROS 2 comes along as a way of making it so that we can bridge the gap from research to production.

And that was a big step for the robotics community.

[00:25:42] Towards a robot revolution

[00:25:42] Audrow Nash: I think. This probably starts going towards the robot revolution that you were mentioning, Geoff. Would you tell me a bit about it and, I dunno, why this is exciting?

[00:25:56] Geoff Biggs: Back when I did my PhD, and then when I started as a researcher, service robots, robots outside of factories and outside of cages, were still very much a dream. It's like, maybe we can do it in really simple cases like the Roomba, but anything more complex than that, we don't have the computing power, we don't have the software, it's too hard to develop. And ROS really solved the too-hard-to-develop thing, pretty much. ROS offered three important things to people who wanted to build complex service robots. First of all, it offered really good developer tools. Something that all the actors before it didn't do, including Player.

They didn't offer developer tools. They didn't offer things like RViz or rosbag, these tools that just make your life easier. You can understand what the robot's seeing, for example. It also offered functionality libraries, the TF library being a very important one there. The navigation stack was very helpful for prototyping and getting things off the ground and so on.

And the third thing was it made it a lot easier for people to exchange software. even in Player it was hard to do that because everything you used with Player had to be built into Player. so it was very hard to add new stuff to Player, which was annoying. But with ROS, everyone could distribute a new package.

And it just worked with all the other stuff because all the interfaces were the same. And because of those three things, it suddenly became a lot faster for people to prototype and then iterate and improve the robots they were building to the point where you could actually have useful services. And so companies like Fetch, they took this and they built on it and actually built usable products and did the product engineering to make sure that underneath was ROS doing all the hard stuff and they had a nice interface on top for the end user to use.
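The developer tools Geoff mentions map onto today's ROS 2 command line roughly like this. This is a sketch, assuming a sourced ROS 2 environment; the topic name `/scan` and the bag name are illustrative:

```shell
# Introspection and logging tools that ship with a ROS 2 install.
ros2 topic list                     # see every topic on the graph
ros2 topic echo /scan               # watch what the robot is "seeing"
ros2 bag record -a -o my_session    # record all topics for later replay
ros2 bag play my_session            # replay the session, e.g. into RViz
rviz2                               # visualize sensor data and TF frames
```

Because every package uses the same interfaces, these tools work on any node you pull in from the community, not just your own.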

And suddenly everyone's doing robots everywhere. Just in the past 10 years, the number of robotics startups has really exploded. And now we see them on farms, we see them in warehouses, we see them in Antarctica, they're everywhere, right? So it's really quite amazing. It's just like exponential growth.

[00:28:00] Audrow Nash: For sure. One fact that Melanie brought up in our interview a few back: Amazon has 750,000 of their Kiva robots, the ones that bring the shelves to the pickers, to the people who pick from them. That's bonkers. That's so many robots. And then the number of Roombas that are out there, it's very exciting.

[00:28:23] Geoff Biggs: Yeah. And you think that 20 years ago, the first Roomba had only just been released and Kiva didn't exist.

And it's in two decades, we've just skyrocketed the number of robots being used in the world. It's incredible. And, and things like Dusty, with their construction robots, these things just weren't possible even 10 years ago.

And then, yeah, ROS just changed the equation, because it became economically possible to develop a complex robot application. And a lot of that is thanks to ROS.

[00:28:55] Audrow Nash: I think so too. Hell yeah. Okay, so we've talked a lot about ROS, but Open Robotics was maintaining or stewarding other products too. Tully, would you tell me about all of the main projects at Open Robotics? Or actually, if you guys want to alternate, that'd be cool too. other than ROS, what else did we have at Open Robotics?

[00:29:16] Gazebo for robot simulation

[00:29:16] Tully Foote: Sure. So the other one we've talked about is Gazebo. That was the first project that got kicked off within Open Robotics. This also has legacy back to the Player era. Brian and other collaborators developed Player, and then they realized that they really wanted to be able to test this.

And so some other colleagues developed Stage, which is a 2D simulator, to test the robot agents in.

[00:29:45] Geoff Biggs: It was actually the other way round. Stage was first. Richard Vaughan, who was a lab mate of Brian Gerkey, developed a simulator called Arena. And then they decided they wanted to run their software on the simulator and on the robot, which was the first impetus for Player. And because they called it Player, from the Shakespeare quote, all the world's a stage, and all the men and women merely players, or whatever it is, they renamed the simulator to Stage.

[00:30:14] Audrow Nash: I had no idea!

[00:30:15] Geoff Biggs: A couple of years later, Nate Koenig came along and he said, we could do a 3D one now. And what's a 3D stage? It's a gazebo. There's your name.

[00:30:26] Audrow Nash: Wonderful.

[00:30:27] Geoff Biggs: it all goes right back to the beginning. Yeah,

[00:30:30] Audrow Nash: Oh, yeah. Okay, so we have Gazebo

[00:30:32] Geoff Biggs: Stage is actually still around as well. You can still use it today with ROS

[00:30:35] Audrow Nash: Wow.

[00:30:36] Geoff Biggs: very

[00:30:37] Tully Foote: Going back, Gazebo was this 3D simulator that had this older, longer legacy, and we picked it up at Willow Garage and basically brought it in and made it the first class simulator for PR2 usage, and the tool that we wanted the ROS community to be able to use and leverage.

It's not the only simulator available, but this is the one that we've put the most effort into and have the strongest integrations with. Gazebo works independently of ROS, but is also tightly integrated through well developed plugins. And we've pushed it. It obviously had its beginnings back in the Player Stage era, where it was little ground differential drive robots running around, maybe some holonomic ones.

We pushed it up. It's done autonomous car simulation. For the DARPA Robotics Challenge, we built out its capabilities for doing simulated humanoids, and walking, to the level that we can run the same motion control primitives on the simulated humanoid robot as the live one, and it continues to walk and function.

With other projects, we've pushed Gazebo to support flying vehicles, both fixed wing and multi propeller ones, using first level physics approximations of lift and drag. We've then taken that below the ocean and done underwater vehicles, to support offshore research and testing, because it turns out underwater is very hard to test.

And so simulation is very important, because you can't communicate with your underwater survey device until it resurfaces. And you really want to make sure that you don't have a bug in your code where up is down and down is up, and you send your robot to the bottom of the ocean.

All of them have very fancy fail safe mechanisms to avoid that, but catching any single bug like that, which would cause a mission to fail, can save you weeks of work.

[00:32:45] Audrow Nash: And like hundreds of thousands of dollars too, I imagine. I interviewed

[00:32:50] Tully Foote: Ship time is very expensive.

[00:32:52] Audrow Nash: For sure. You're talking about MBARI? For those interested, I interviewed them on the Sense Think Act Podcast, Ben from MBARI, and that is cool because their robots are going to sea for weeks at a time.

[00:33:06] Geoff Biggs: Some of them go for months. Amazing.

[00:33:23] Tully Foote: ROS.

[00:33:24] Audrow Nash: Hell yeah. Let's see. And then, from another interview with Louise, who was leading the Gazebo team at Open Robotics for a while: she was saying one of the big advantages of Gazebo is that it's really modular. And so if you are simulating something simple but want it to run really fast, you can disable, or just not include, all of the parts that you're not using in your simulation. And because of that, you can run at super fast speeds, just because you're doing way less work. That's very hard with completely opaque, not open source simulators and that kind of thing. So I thought that was a big advantage of Gazebo.

[00:34:06] Tully Foote: Yeah, the physics engine is swappable, the rendering engine is swappable. So people have plugged in various hardware accelerated renderers and hardware accelerated physics engines for different research projects, and we can let that happen. And the super fast one is called the Trivial Physics Engine.

[00:34:26] Geoff Biggs: SimplePhysicsEngine. Yeah. it's very simple.

[00:34:29] Audrow Nash: similar. Yeah.

[00:34:30] Geoff Biggs: Yeah.

[00:34:31] Audrow Nash: That's funny

[00:34:32] Geoff Biggs: It is trivial. It's just very basic physics.

[00:34:36] Audrow Nash: Hell yeah. That's great. Sometimes it's all you need.

[00:34:39] Geoff Biggs: It is.

[00:34:39] OpenRMF project at Open Robotics

[00:34:39] Geoff Biggs: We actually used it heavily at Open Robotics for the OpenRMF project, because that was fleet management. We didn't really care if the robots were being accurately simulated in terms of avoiding obstacles and real grip and all that sort of stuff. We just wanted to know they were moving around on their paths properly.

[00:34:57] Audrow Nash: So actually that's a good segue, to Geoff, tell me about the OpenRMF project.

[00:35:02] Geoff Biggs: Yeah, so the OpenRMF project started back in 2018, I think it was. And this actually came in as a proposal from the government of Singapore, their research arm, and a hospital in Singapore. They came to Open Robotics and said, we've got this problem. Our hospital has got all these robots in it, but they come from different manufacturers, and the fleets don't talk to each other.

And you get two different robots from different manufacturers trying to use the elevator at the same time, and they get stuck, and someone has to go and sort it out. And so this was like a problem. They went, yeah, basically.

[00:35:34] Audrow Nash: Yeah.

[00:35:35] Geoff Biggs: They cause traffic jams, because they can't talk to each other at all.

And so this was the ask: can you come and help us figure out how to solve this? And so Open Robotics set up the Singapore office, initially, to work on that project. And that project turned into what's basically a fleet manager manager. The idea is that it works with fleet managers to coordinate the actions of robots that can't directly talk to each other. And then that grew even bigger, to the point where it became basically a sort of universal fleet manager. The actual fleet managers for each individual robot don't really do much thinking for themselves; they just control the robots, and OpenRMF is the one that's responsible for assigning tasks, figuring out which robot, in which fleet, is the best one to handle certain tasks, and so on.

And so this was built in Singapore, to work in the hospital, where it is being used in real life today to manage several fleets of robots on an industrial network in a very critical environment. It's a hospital. You've got to do things right, you've got to be careful. And these robots do things like delivering medicines, delivering food, cleaning the floors, all that sort of stuff.

And they take elevators, they do all this cool stuff. And OpenRMF controls all of that. It even does things like saying, okay, this robot wants to use the elevator, so it calls the elevator for the robot and makes sure it gets to the right floor, or it opens doors for the robot and so on. It coordinates all of that for you.

And so this has really become the third major project of Open Robotics, because as these robots out in the world have grown in number, something's got to manage them all to make sure they coordinate properly. And that's what OpenRMF does.

[00:37:21] Audrow Nash: Yeah, for sure. I imagine anywhere that we're going to have a lot of robots, which, if we see more of a service robots explosion, this kind of thing will be very valuable, I imagine. And it's hard

[00:37:33] Geoff Biggs: Yeah,

[00:37:34] Audrow Nash: a especially interfacing with all these different makes of robots that had

[00:37:37] Geoff Biggs: it's really hard.

[00:37:39] Audrow Nash: to each other. And then I guess, I don't know if it's worth mentioning, but Tully is

[00:37:48] Geoff Biggs: it's like some of them are proprietary.

[00:37:52] Audrow Nash: Yep.

[00:37:54] Geoff Biggs: Some of them are proprietary, some are open source, they have different interfaces to their fleet managers, the robots themselves have different interfaces. And there's so many environments where this is important, not just hospitals. Like you can think of a hotel.

They would have Relay Robotics come in and do the delivery, they'll have iRobot come in and do the cleaning robots, and all these things need to coordinate, and so on. And all these different environments, office buildings, out on the street, delivery robots. It's really important everywhere.

[00:38:19] Audrow Nash: For sure. Hell yeah.

[00:38:22] Important, but often forgotten: Open Robotics' infrastructure

[00:38:22] Audrow Nash: And then, maybe the last one to discuss as a significant project out of Open Robotics would be like the infrastructure, the build farm, things like

[00:38:32] Geoff Biggs: Absolutely.

[00:38:33] Audrow Nash: Tully, would you like to speak to this?

[00:38:35] Tully Foote: Yeah, so I think this is one of the things that often goes unseen and just gets taken for granted. The Open Robotics team has done an amazing job of providing infrastructure for the entire worldwide community to take advantage of, and to make the use of ROS easy and convenient. You can go out there and just sudo add the ROS sources and sudo apt-get install the packages on recent Ubuntu LTSes.

And it just works, and you can be up and running in minutes. We've got Docker containers that are pre built with the latest software. You can download tarballs and install them, and you can just expect them to work. And we've got a lot of web services that are up, the documentation services, that are there and available for the community to take advantage of.
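The apt flow Tully describes looks roughly like this, following the pattern in the official ROS 2 install docs. The distro name `humble` is just an example, and the exact key URL and keyring path may differ for your Ubuntu release:

```shell
# Sketch: add the ROS 2 apt sources and install from the build farm.
sudo apt update && sudo apt install curl -y
sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros/ros.key \
  -o /usr/share/keyrings/ros-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] \
  http://packages.ros.org/ros2/ubuntu $(lsb_release -cs) main" \
  | sudo tee /etc/apt/sources.list.d/ros2.list
sudo apt update
sudo apt install ros-humble-ros-base        # core ROS 2, up in minutes
# Community packages come from the same build farm, e.g.:
sudo apt install ros-humble-navigation2
```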

And having all these things there at your fingertips is something that a lot of people are just like, oh, it's just there, it just works. But there's actually a whole team behind them making sure it's there, works well, and has quick response time for the developer. So that when I make a pull request against some ROS package, I'll get a little response moderately quickly saying, hey, this passes the tests, this fails the tests.

And then we have deeper ones that run nightly, or that are manually triggered, that will run cross package, cross platform tests. So we can be confident that when that gets merged, it's going to work and you won't break other people. And as long as we can keep that infrastructure up and running, it significantly improves developer speed.

Because you don't have to worry that my change is actually breaking things that are in your pull request later. So that allows us to keep developer velocity up, both for the core team and for the greater community. The foundation provides a ton of resources for building and releasing packages for the entire community.

If you, as an individual developer, wanted to release a ROS package, and you had to manually go and build it and rebuild it every time your dependencies were updated, it would be much, much slower, because you'd have to poke a whole lot of other people. Whereas right now, Open Robotics provides all those services to the whole community, if they're releasing open source software.

[00:41:10] Audrow Nash: Yeah, for sure. I think the build farm is pretty amazing with all this. So people will create a ROS package, and then they submit it to us on the ROS distro website and point to the release repository. And then it builds it, checks it, manages all the dependencies. And then, as you were saying, you can apt-get install it, which is like other people's packages in the community. And so this lets you install big packages like MoveIt or Nav2 pretty easily, but even more esoteric ones are still just an apt-get install away, which is quite cool.

[00:41:53] Tully Foote: Yeah, and I think that this is something that modern developers somewhat take for granted. Back in the day, we had to check everything out and build from source, and if there was a bug, you had to stop, SVN check out the code, then start the build, and go to lunch and come back to try it again.

I remember when we first had the binary builds available moderately quickly, we actually got to the point that people were just like, I'm working on this project, there's a bug that's going to be fixed upstream, I'm going to wait the two days for the other maintainer to release the patch, fix, and build.

And then I'll try again, and I'll go do something else. It's a whole different way to think about things, when that release pipeline can be trusted and be part of your critical supply chain.

[00:42:46] Audrow Nash: That's awesome. Yeah. And one thing that I've really liked is there's a site called index.ros.org. It's an index of all the packages, and it shows you how they all relate to each other, which distributions of ROS they're available for, links to documentation, links to the repository.

It's a very cool way to see what all is out there. And also, you can see the relationships, what packages use what packages, and trace them back really deep. I don't know, it's a very cool, complex network that has so many contributors to it.

[00:43:24] Tully Foote: Yes, it's a, that's a, great website that the foundation provides and, highly recommend people. Take advantage and leverage it. That's one that, is less, less appreciated. I think a lot of the content historically has been on the wiki, but we are moving to use index instead for ROS 2, so that it's a little bit more scalable.

The wiki has reached its limits and we can't sustain it anymore due to the underlying infrastructure.

[00:43:57] Audrow Nash: Yeah, that was more of a ROS 1 thing, and I think

[00:44:00] Tully Foote: the MoinMoin wikis are designed for one to 2000 pages. For the web server and I believe we're at something like 10 to 20, 000 pages. I have scripts that go through and clean up empty pages and spam pages to clear them out. But we're still running at something like 10x the number of pages that it was designed for and

we've had to throw extra resources at just to keep it running.

[00:44:29] Geoff Biggs: Yeah, probably the only reason it still runs is because CPUs have gotten faster.

[00:44:34] Audrow Nash: That's a good point. Let's see. Okay. So those are the main areas of Open Robotics.

[00:44:42] Open Robotics before the Intrinsic Acquisition

[00:44:42] Audrow Nash: Where were we, say, in 2022? Like, where was Open Robotics? How were we functioning as a company? Just, where were we? Maybe Geoff?

[00:44:56] Geoff Biggs: I can speak to some of it. Tully was more involved in the management at that time. As a company, Open Robotics was actually very successful. It was technically a startup, it was only 10 years old, and it never had venture funding. It was entirely bootstrapped. So

[00:45:11] Audrow Nash: No debt at all, right?

[00:45:13] Geoff Biggs: Yeah, so for the company to grow, from just a few people at the start right up to about 50 or 60 by the time the acquisition happened, with no venture capital is pretty rare these days, especially in the technology sector.

And it was in general a pretty healthy company, and I think all three of us here would absolutely agree it was a great place to work. Very friendly, very good culture. It was a wonderful place to work. I'll hand over to Tully, though. I think he was more privy to what was going on at the management level in those days.

I was trying to be an engineer at that point.

[00:45:47] Tully Foote: Yeah, so I think that, we'd seen a lot of growth and we've seen, there's a lot of progress and things happening in the space and nearby spaces. There's a lot of innovations that are going on that are close to, in the robotic, space or close to the robotic space. You see all the funding for AI, big companies coming in and doing big investments.

And as Geoff mentioned, Open Robotics and OSRC had been bootstrapping all along. And with our business model, we couldn't make big investments. A lot of what we did was find projects that we would work on that would help push forward the state of the art. But we were doing it in a lot of small steps, and finding good projects for good customers

That would help us on our vision for how to improve ROS.

We also wanted to do more strategic investment, and push in bigger directions, take bigger steps, which is why we looked for alternative ways to structure the project.

[00:46:49] Audrow Nash: Hell yeah, so how I understand it is we had we were contracting and so from contracting you get like a little margin of profit and so with that profit we had to I guess, I guess people were paid through the, I don't know. So you, get a little profit extra on top of everything, in addition to your running expenses.

And with that profit, you can reinvest it into large infrastructure changes, big, big features, larger initiatives, basically. And so that's very hard to do on the meager.

Not meager, but like just the thin layer from contracting, of profits. And, so because of that we were very iterative in what we rolled out. You agree, Tully, with this?

[00:47:38] Big steps for the ROS community we can take because of Intrinsic (Zenoh)

[00:47:38] Tully Foote: I think that's, we were doing the work and then reinvesting into the open source. And we, it's not, we weren't able to take big steps, make big moves, invest in big projects.

[00:47:51] Audrow Nash: What, are some big projects that we'd like to, and we, what were some ones that we really wanted to invest in that would have required some big steps, versus very incremental change as we have?

[00:48:04] Tully Foote: so the, very concrete one is the RMW Zenoh.

[00:48:08] Audrow Nash: Ah, I'm so excited about that.

[00:48:10] Tully Foote: This is something that we have, we've seen potential in Zenoh, but we, and we'd done some proof of concepts, and we had some estimates of what it would take to do this sort of project. And, But there was no companies that were willing to step up and fund, hey, let's go develop that.

[00:48:32] Audrow Nash: Yep. Just to, for background with that, we were using DDS with ROS 2 as the primary middleware and it has pretty significant scaling issues, right? When we have a lot of nodes, because they have a lot of inter node traffic, is it correct?

And then Zenoh, it's another middleware, but it does things a little bit differently, in a way that scales significantly better. And it seems like it's built for robotics applications, or it's just very well suited for them, and so it will have a lot fewer middleware problems.

[00:49:08] Tully Foote: Yeah, so part of, what we did with ROS 2 is we very specifically designed it to have what we call the ROS middleware abstraction. And what this allows us to do is be able to switch, between our middleware implementation. we specifically did this because we were targeting the DDS standard, and there were multiple DDS vendors, and we've, over the years, we've had different default vendors.

We've had a whole selection of vendors available as the middleware implementation that we used.

But part of what we did was define that middleware abstraction, and the nice thing is we can actually switch to a potentially non DDS middleware if it can provide the same functionality.
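As a concrete sketch of that abstraction: a stock ROS 2 node can be pointed at a different RMW implementation through an environment variable, assuming the alternative implementation is installed. The package names below are the commonly published ones:

```shell
# Run the same demo node on a DDS-based RMW...
export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
ros2 run demo_nodes_cpp talker

# ...or on the newer Zenoh-based RMW discussed here, with no code changes.
export RMW_IMPLEMENTATION=rmw_zenoh_cpp
ros2 run demo_nodes_cpp talker
```

The node's code never touches the middleware directly, which is what makes a non-DDS backend like Zenoh possible at all.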

[00:49:51] Audrow Nash: I

[00:49:52] Geoff Biggs: Yes

again soon.

[00:49:55] Tully Foote: there's some

[00:49:55] Audrow Nash: middleware. there's some joke ones that still work because of abstraction

[00:50:00] Tully Foote: we, joked about an April Fool's one, which would be a, Airhorn based Morse code middleware. And we'd have two robots communicating via Airhorns across the office. never got around to that one, but I do have some like electrically controlled Airhorns in the closet if anyone wants to work with me on that.

[00:50:20] Audrow Nash: it.

[00:50:22] Geoff Biggs: care of

[00:50:27] Tully Foote: I from that one of the things we identified, we've done some projects and

[00:50:32] Geoff Biggs: that.

[00:50:36] Tully Foote: on Zenoh and there's, there are, definitely some potential benefits and we think it's worth investing in. We've done some projects with them that have worked out, using Zenoh that have worked out well and we want to, we, wanted to try it out and one of the things that's like at this scale.

We don't really know how it's going to work for everybody until we've actually built it. And so it's a moderately speculative thing to go and build this size of a project and see how it works. As you mentioned, we've run into trouble with a bit of an impedance mismatch, I think, between the DDS vision of how distributed systems work and the ROS vision.

ROS is very focused on nodes and lots of endpoints, and DDS would really rather do keyed topics, with filtering on shared topics. And so we're looking at Zenoh as a potential alternative to try out. We're not moving away from DDS, but we want to make the investment to see how it works with Zenoh.

[00:51:41] Rust for robotics

[00:51:41] Audrow Nash: Hell yeah, and one thing that's very exciting from my perspective, for Zenoh is built in Rust, the programming language, and Rust, I absolutely love Rust. I got to use it for one of my Open Robotics projects and just completely fell in love. So all my side projects are now in Rust. but so because of that, Rust is going to be more supported.

In the Open Robotics world, that's a big challenge, I think: to make it so that we can support Rust and its build process, which is fundamentally different from C++ in how it does versioning, and how we do operating systems and

[00:52:20] Geoff Biggs: Yeah,

[00:52:21] Audrow Nash: dependencies and

[00:52:22] Geoff Biggs: it's been interesting because, so Rust, and in particular Cargo and its way of building things, are very different from how C++ and Python and a lot of other languages we've used up till now work. And so figuring out how to integrate that into workspaces and the Buildfarm, and in particular, one of the big problems is that Rust users, they used to be able to just rust up the latest version of Rust, whereas the Buildfarm can only use what's available from the system package manager from the default repositories.

And so that makes it difficult to say what version of Rust you support, and so on. But these are problems that we are working on solving, and we do hope to basically have Rust support in the build farm at about the same time that we're trying to start getting Zenoh out there. And then things are just gonna be awesome.
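One common way Rust projects pin a toolchain, the kind of mechanism that could help reconcile rustup-style workflows with a build farm's need for fixed, reproducible versions, is a `rust-toolchain.toml` at the repo root (the version number here is purely illustrative):

```toml
# Sketch: pin the exact Rust toolchain a repo builds with,
# so every machine (developer laptop or build farm) agrees.
[toolchain]
channel = "1.75.0"
components = ["clippy", "rustfmt"]
```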

[00:53:12] Audrow Nash: Yes, totally. Yeah, it's funny that there's a complete mismatch in how they do dependencies for this. Completely different paradigms. And yeah, that's a hard square to circle, I think. But, okay. I'm excited about that. But,

[00:53:27] Intrinsic's acquisition of Open Robotics

[00:53:27] Audrow Nash: now we have: enter Intrinsic. I think, maybe Tully, tell me a bit about Intrinsic and the acquisition.

I don't know, just background on all this and what's occurred. And I know it was like a year ago,

[00:53:46] Tully Foote: Yep. Yeah. It's,

[00:53:47] Audrow Nash: Good to figure it out now.

[00:53:48] Tully Foote: bit over a year ago. we worked out the agreement to have the open source robotics corporation, the for profit subsidiary doing the consulting work for open robotics. to be acquired by Intrinsic. this was a, mutual agreement where we, believe that it is beneficial for all the parties involved.

One of the interesting things, when we had our discussions: Open Robotics obviously has the mission to promote open source robotics through education, outreach, and development of software. And there was a slide with a mission to democratize access to robots. And on the Open Robotics team, we're like, did we write that?

Or is that their slide? And it turns out it's their slide. and so this is where we, have a really good alignment on what, Open Robotics mission is and what Intrinsic is looking to do. And we're looking for, the ability to Intrinsic to make some of these bigger investments and to do things like the RMW Zenoh project, which is now underway.

We'd wanted to do that at Open Robotics for a long time. We had a proposal, we had an estimate, but we'd shopped it around and we're not finding people who are willing to invest in making that sort of effort to improve the core systems.

[00:55:13] Audrow Nash: Yeah, for sure.

I was the ROS boss for the Humble version. And we were talking about DDS, and whether Zenoh was possible and how much of a lift it would be, and if we could be doing that. And that was years before, probably two years before the acquisition.

[00:55:32] Geoff Biggs: Yeah, we actually started looking at Zenoh for Singapore, because in the hospital, DDS was having problems on their commercial locked-down network, things like discovery not working across subnets and so on. So we started looking at Zenoh way back in 2020.
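
For context on why swapping middlewares is even on the table: ROS 2 selects its RMW (middleware) layer at startup from an environment variable, so moving from DDS to Zenoh is meant to be a configuration change rather than a code change. Here's a minimal sketch of that selection mechanism, assuming the conventional `RMW_IMPLEMENTATION` variable and the usual default of `rmw_fastrtps_cpp`; this is illustrative, not actual ROS source code:

```python
# Illustrative sketch of how a ROS 2 process picks its middleware (RMW).
# Not real ROS code; it just mimics the environment-variable selection.

def select_rmw(env):
    """Return the RMW implementation a process would load, given its env."""
    # rmw_fastrtps_cpp is the usual ROS 2 default; rmw_zenoh_cpp is the
    # Zenoh-based RMW discussed in the interview.
    return env.get("RMW_IMPLEMENTATION", "rmw_fastrtps_cpp")

print(select_rmw({}))                                       # default DDS RMW
print(select_rmw({"RMW_IMPLEMENTATION": "rmw_zenoh_cpp"}))  # opt into Zenoh
```

The application code above the RMW layer stays the same either way, which is what makes an experiment like the Singapore one practical.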

[00:55:50] Audrow Nash: Gotcha. So then, Tully, would you tell me a bit more about Intrinsic? Who is this company? What are they doing? They wanna democratize robotics, but how do they wanna democratize robotics? These kinds of things.

[00:56:09] Tully Foote: Yeah, so Intrinsic, we're a startup coming out of the Alphabet, Google X moonshot factory. And so Intrinsic has been going for several years, incubating internally. The acquisition was actually before we announced what Intrinsic was doing. But the current product that we're talking about is automation of industrial work cells, so that you can bring more of modern robotics to this industrial workspace.

A lot of robotics has been very focused on very simple, repetitive tasks. And we want to be able to bring a more modern approach, including vision tools and dynamic planning, et cetera, to this environment.

[00:57:00] Audrow Nash: And we're doing that with Flowstate.

And Flowstate, it seems awesome from the public demos. It's like a behavior tree, a tree of actions that you put together, and it makes a lot of the really hard robotics problems in setting up behavior for robotic arms and work cells much easier.

And so that's the intention, so you can automate tasks.
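
For listeners unfamiliar with the idea, a "tree of actions" composes small steps with control logic like run-in-order. Here's a toy Python sketch of that concept; this is not Flowstate's actual API, and the node names and helpers are made up purely for illustration:

```python
# Toy behavior-tree sketch to illustrate "a tree of actions you put together."
# NOT Flowstate's API; a generic illustration of the concept.

def sequence(*children):
    """Composite node: succeed only if every child succeeds, in order.
    all() short-circuits, so later children don't run after a failure."""
    def run(state):
        return all(child(state) for child in children)
    return run

def action(name, fn):
    """Leaf node: run one step, record it, and report success/failure."""
    def run(state):
        ok = fn(state)
        state.setdefault("log", []).append((name, ok))
        return ok
    return run

# Hypothetical pick-and-place cell built from leaves.
tree = sequence(
    action("detect_part", lambda s: s.get("part_visible", False)),
    action("pick", lambda s: True),
    action("place", lambda s: True),
)

state = {"part_visible": True}
print(tree(state))                    # True: every step succeeded
print([n for n, ok in state["log"]])  # ['detect_part', 'pick', 'place']
```

The appeal is that each leaf stays small and testable, and the hard sequencing and failure-handling logic lives in reusable composite nodes rather than in one monolithic robot program.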

[00:57:29] Tully Foote: Exactly. We're focused on providing the ability to do deployment and configuration management, and bring a lot of the technologies from the cloud deployment space to the industrial robotics space, so that you can have your configuration management, your deployment control, introspection, logging, et cetera, from a web UI, instead of necessarily needing to go in with a teach pendant to train each robot to do a specific task.

[00:58:01] Audrow Nash: And then also with the teach pendant, if you have a KUKA arm or an ABB arm or whatever type of arm, it's a different way of doing it. You need someone who's skilled in that specific make.

[00:58:15] Tully Foote: Exactly. The skill, like the perception task, can work with any robot out there, as long as the robot is capable of doing what you're looking for it to do. Likewise, we can swap out sensors, we can swap out robots,

[00:58:31] OSRF since the Intrinsic acquisition of OSRC

[00:58:31] Audrow Nash: And so Intrinsic, at the end of... what year? So we're in 2024; it was around the beginning of 2023, the end of 2022. So then OSRC, the for-profit part of Open Robotics, was bought by Intrinsic, right? Okay. So Tully, you and I went over, and Geoff, you stayed at OSRF.

[00:59:04] Geoff Biggs: So I was asked to. So OSRF, the non-profit arm, effectively changed; it had to change its operation model. The board of directors already existed, had existed from the start, as a non-profit has to have a board. That was Brian Gerkey, Steve Cousins, and Ryan Gariepy, three people who have been involved with ROS all the way from the beginning.

So they needed to find a minimal staff to start operating the nonprofit. And so they chose Vanessa Orsi, who was previously the CFO at the OSRC. She became the CEO of Open Robotics, of OSRF. She's very capable; she's done like 20 nonprofits over her time in America.

And she's a really good CEO to have. And then they asked me to come along and basically direct the technical side of things, the project management, and figuring out what we need to make the projects work better as open source projects and so on. And so that was where I basically ended up switching to OSRF rather than OSRC.

[01:00:15] Audrow Nash: Let's see, with that transition, it's just you and Vanessa that are running OSRF at the moment?

[01:00:23] Geoff Biggs: Yeah. So the two of us are the only full-time staff that the OSRF currently has. We do have several contract engineers from a wonderful Argentinian company called Ekumen, who help out with the Buildfarm, maintenance, infrastructure development, and all that sort of stuff.

They're really talented people. We wouldn't survive without them, I don't think.

[01:00:49] Audrow Nash: They're great. They were working with us way...

[01:00:51] Geoff Biggs: Oh yeah, we've been using them for... yeah, they've been working with us since back in the Willow Garage days. They're a wonderful company, so talented.

[01:01:00] Audrow Nash: huh. So

[01:01:01] Geoff Biggs: And so they are contract engineers, and they work on the infrastructure and keep things going and develop new features and so on.

And that is the extent of the actual paid staff at the OSRF. Everything else is all volunteers, the whole community coming together. Obviously, a big chunk of that is at Intrinsic; former Open Robotics engineers from the OSRC are still heavily involved, and we're very happy about that, and long may it continue.

[01:01:33] How former OSRC folks contribute to OSRF now

[01:01:33] Audrow Nash: Tully, tell me about... so you and I, and then pretty much everyone else at Open Robotics, came along to Intrinsic. Now how are we still involved in Open Robotics, with ROS 2, Gazebo, OpenRMF, the infrastructure? And is it just like Intrinsic bought all the talent and now it's just a very small team?

Or how are we supporting Open Robotics to be successful in the future at Intrinsic?

[01:02:17] Tully Foote: Yeah, so at Intrinsic, we strongly believe in the value and the mission of Open Robotics and the various projects. We're investing to support them, supporting teams at Intrinsic doing core ROS, Gazebo, OpenRMF, and infrastructure development. As well, people in the broader team are also contributing back some of their overhead, like the small amount of time that's available on top of their other primary projects.

[01:02:56] Audrow Nash: It's like the maintenance time. It's actually very similar to the maintenance time we had while we were at Open Robotics.

[01:03:02] Tully Foote: Yeah, one of the analogies is that it's very similar to Open Robotics. We have effectively one big project that we're working for, instead of many little ones.

But Intrinsic values and is investing heavily in all these projects and wants to continue to see them be successful.

[01:03:23] Audrow Nash: Then, I'm not 100 percent sure what can be said, but from the community's perspective, if they don't know exactly how Intrinsic is using various things from Open Robotics, they may worry that Intrinsic cuts off part of it, and some part of the Open Robotics project suite dies because of lack of investment.

How did the different parts of Open Robotics, and is it fair to say that, or should I say OSRF for this? How did the different

[01:03:53] Geoff Biggs: Probably OSRF would be a bit clearer at this point, yeah.

[01:03:57] Audrow Nash: So how did the different parts of OSRF fit into Intrinsic, within what we can say?

[01:04:06] Tully Foote: So we're actively using all the major components of the projects from OSRF within Intrinsic. I can't say much more about exactly how we're using them, but I can reassure you that they're being used, and I look forward to being able to talk more about it.

[01:04:28] Announcing OSRA!

[01:04:28] Tully Foote: But I think the main thing that we want to do, and actually a lot of why we're here, is to make sure that the communities around these projects are bigger than one company. And that's why we've been pushing toward what we're just about to talk about now, the Open Source Robotics Alliance.

[01:04:50] Audrow Nash: Go ahead.

[01:04:52] Tully Foote: Geoff, you wanna take this?

[01:04:53] Geoff Biggs: Okay. Yeah, okay. That was a sudden change.

Yeah, the Open Source Robotics Alliance is a new initiative from OSRF, and in a nutshell, it is our plan for how we're going to improve the governance of all the projects at the OSRF going forward, and basically make the projects much more sustainable and long-term viable.

Because up till now, we've had the Open Source Robotics Corporation providing some funding and staff engineers to work on the projects. And it was effectively a benevolent-dictator-for-life situation. The community trusted the Open Robotics people to keep the projects going and do right by the community, and in return, the Open Robotics people had, you know, broad leeway to do what they thought was best for the projects, and the funding for that as well.

But without the OSRC anymore, the OSRF has no staff and no funding to keep all these things going. Obviously, the Open Source Robotics Foundation has not gone bankrupt all of a sudden; we've got enough of a nest egg. But we do need to have basically money and people to keep things going in the long term.

And from our point of view, we don't want to just do the OSRC again. That would just be repeating the cycle. We want something that's going to last much longer and be much more sustainable, and in particular, much more community-driven, community-involved than the OSRC was able to be, just because of its nature, the structure of it.

And so for that, we've spent about a year and a half now putting together the Open Source Robotics Alliance, and that is hopefully going to resolve all the issues we've had up till now with governance and processes and sustainability. The way I put it is that we want to move away from having just a few full-time engineers at one company supporting the projects, to where we have a hundred contributors, many of whom may be part-time, working on the projects.

It's much more sustainable to have a hundred people working half the time on the project than to have a couple of dozen working full-time on it. Because if one of the full-time people leaves, it's a huge loss of time and talent. If one of the part-time people leaves when you've got a hundred of them, it's much less of an impact.

And that really is the way that sustainable open source projects work. And so we're moving more towards that model.

[01:07:27] Motivation for OSRA

[01:07:27] Audrow Nash: Okay. What were some of the issues with the past way of doing things? So you mentioned the loss between, if you have just a few full-time people, or if you have many part-time people. So that's a clear example of an issue that you would wanna fix in constructing a new structure. But are there other good, clear examples of things that you are trying to change with this structure?

[01:07:57] Geoff Biggs: Another thing that we really want to change is how much we have in terms of resources to do things. So with Open Source Robotics Corporation's funding, we were able to maintain the projects and develop them, at a slower pace than we liked, but at least keep them going forward, and pay for the infrastructure, which I learned last year costs more than you can possibly imagine.

And we've been able to do that, but we want to move to a model where we've got much more resources to support these projects. And that basically means financial resources. We want to be able to expand the capabilities of the infrastructure to have even more support for projects, more CI time, stronger CI tools like static analysis, to make sure things are really production quality, things like that.

Documentation: in open source, very few people want to spend their time writing documentation. They want to write the cool features that people are going to use. But the documentation is still just as important. And in the past at Open Robotics, we actually had a documentation writer on staff. She was very good, but she was also just one person.

And there's very much a limit to what she was able to achieve because of that. There's so much documentation that needs to be written. And so if we have more financial resources, the OSRF would be able to hire more documentation writers, on an as-needed basis or full-time as appropriate, in order to really improve the documentation.

Another example: if you want the project leader to have full-time availability to commit to a project, we could do that at Open Source Robotics Corporation, because they worked for us. But now they work for other companies and they'll be volunteers. If you want to guarantee their time, you'd need to be able to pay them yourself. And infrastructure: not just the infrastructure costs, but also the cost of people to run and maintain the infrastructure. Developer advocates: Kat Scott is amazing, but again, she's just one person. We'd love to have half a dozen Kat Scotts just going around and telling people how great our software is and figuring out what the problems are that need to be fixed.

There's always

Go ahead, Tully.

[01:10:14] Tully Foote: I want to say, I think one of the things we're looking for with this is to grow this core community. Part of the challenge is that with the ad hoc nature of the governance of the project, mostly being driven through OSRC and the staff there, you really needed a moderately tight connection to be able to contribute.

Like, obviously, pull requests are welcome, et cetera, from outside, but we didn't have a good formalized method for people to come into the project and gain experience and trust and become more involved. Delegation was harder to do because we didn't have a formalized method.

[01:11:01] Geoff Biggs: Yep.

[01:11:01] Tully Foote: One of the things we did to encourage external contribution was we created the Technical Steering Committee, or the TSC, and this was explicitly done

as a way to bring in and provide a venue for corporations to come in and provide guidance to the project. We set this up early on, and it's a different model than most open source projects used, because we had the benefit of the Open Source Robotics Corporation team providing a lot of that guidance and core capabilities and overhead support.

We actually really wanted to make that the minimum bar for people to come in, and said, if you're going to be actively contributing, we'll give you a seat at the table. This was because we had the luxury of doing that with the OSRC staff there and available. You'll see a lot of the sort of boilerplate and background stuff and infrastructure was all provided by OSRC and not the TSC.

Whereas... Go ahead.

[01:12:05] Audrow Nash: And this is in the Open Robotics days, before the acquisition, we had the TSC. And yeah, so then companies could have a seat if they had a full-time employee working on ROS or Gazebo, one of our projects. And then, okay, so continuing from there.

[01:12:26] Tully Foote: So I think that from there, what we're looking to do with OSRA is open this up and provide more formal channels for collaboration across a broader community. You mentioned for the TSC, one of the requirements is to be able to provide a full-time engineer to the development, which is actually a really high threshold.

It's basically impossible for an individual, and for any small startup that's actually very hard to do as well. And so we wanted to have,

[01:12:59] Audrow Nash: A hundred thousand dollars from the company that's just going to go right to,

[01:13:05] Tully Foote: Exactly. A full-time engineer, you'd be able to have a whole staff member. And a lot of companies that want to contribute, maybe they're only a couple of people.

Dedicating a third or a quarter of your workforce specifically to open source development for the platform is not feasible at that stage, but we want to have paths for those companies to engage and have a voice in the system.

[01:13:32] Audrow Nash: Yeah. So what I'm hearing, going from the previous model to the new model, one of the things that you're really trying to do is allow more people to contribute by accepting fractional people, effectively. So if you can contribute a fraction of your time, there's a good structure for how you can contribute and how you can be a part of the community.

Whereas before it was full people, and that left out a lot of people who might have been happy to contribute, but couldn't give as much as was required. So that seems to be a big theme in this OSRA initiative to me.

[01:14:14] Geoff Biggs: Yeah. We're trying to move away from tying being involved in the governance as a member to how much you contribute, because we don't think that's fair; every contribution is very important. Even if it's just a tiny bug fix for us, it's going to be helping someone else, because someone else is going to have that bug too.

And the more of those we have, the better for the project, and the further forward it moves. And so we want to move away from that. But we also want to have a way to get the financial income that we need to support the projects. And so we are moving away from our TSC model, with its very loose, ambiguous processes and no real defined way of being more involved in a project than just sending in pull requests. And we're moving more to, I'm not sure traditional is the right word since open source hasn't been around that long, but it is more of a traditional approach to doing open source software funding and management.

[01:15:16] Audrow Nash: So is it something you can look at from, like, the Python community, or?

[01:15:20] Merit-based contributors

[01:15:20] Geoff Biggs: Yeah, so if you look at things like the Linux Foundation, or the Eclipse Foundation, or the Cloud Native Computing Foundation, there are so many of these out there these days. They all have a very similar model, where members pay a membership fee on an annual basis, and that gives them certain rights and benefits.

For example, they get the right to be on some kind of governing committee as one voice, and they get advertising benefits from being mentioned on the website and so on. And so we're going to have that in the OSRA. There will be membership available for organizations, and they can be involved at the high levels of technical governance of the OSRA and the OSRA's projects.

But we also don't want to just say, okay, if you've got a million dollars a year in income, give us a chunk of that and you can be making the decisions. That's not very fair to the community, which, for ROS and Gazebo and OpenRMF and our infrastructure, is just so important.

It's the community that has grown them to this point over 15 years. And so we want to make sure that the community still has involvement. And so we have spiced things up a bit from the traditional model. We've kept a lot of the meritocratic approach of foundations like the Eclipse Foundation and the Apache Foundation for the project governance.

So the day-to-day of the projects, making the design decisions, deciding if a PR should be merged, all that sort of stuff, it should be done by the engineers who are actually doing it, is what we feel. And so the contributors who are particularly dedicated to the project, they don't have to be a member of the OSRA at all.

They can apply to be committers in the project, and that gets them rights to merge pull requests and be involved in the day-to-day design decisions and all the other stuff that goes on every day in an open source project. And the process we have for that is based on the Debian process for gaining privileges.

You apply, and you have a mentor who mentors you for a certain period of time, and then the people who have already gone through that process and attained that committer status, they can take a decision on whether you become a committer or not. So it's all entirely meritocratic; there's no money involved at all. It's based entirely around: does this person contribute?

Do they make good contributions? Are they good for the community? Are they good for the project? And if you are, you can be in. It's free. And that is the entire lower layer of our OSRA governance structure: our project management committees, one for each project, are entirely meritocratic.

[01:18:06] OSRA Consortium

[01:18:06] Geoff Biggs: Above that is where we have more of the traditional consortium approach, where we have what we call the Technical Governance Committee.

The Technical Governance Committee takes more of a long-term view. They worry about the OSRF as a whole, its whole technical roadmap, not just the ROS roadmap, but the ROS and Gazebo and OpenRMF roadmaps and how they merge together into more of a long-term vision, and things like that.

And that is where the consortium members' voice gets heard, and so they take the long-term view. But even there, we don't want to just say, at the high level, pay to play; that doesn't work for us. And so we split the power: at the TGC level, we split the power between the paying members and the meritocratic members.

So representatives of the projects and representatives of the paying members come together, and they discuss and make decisions that will impact the longer-term health of the foundation and its projects. For example, how do we use the budget to make sure the projects are getting the resources they need? Do we need to hire more documentation writers?

Do we need to invest in more CI time? Things like that.

[01:19:19] Audrow Nash: I like that a lot. So it's almost like the US government or something where you have multiple branches of it. So you have the meritocratic part and you have the consortium based part. And those both make decisions and they have their own weights in some sense.

[01:19:37] Geoff Biggs: Yeah, so we're trying to strike a balance basically between community involvement and involvement of companies who are putting up big chunks of money. Basically because there are companies putting up money to help fund these projects and keep them going, and they do want to get something in return, and it's not just all marketing and having their logo on a website.

Some of them want to have engineering voices involved, and so that's why they get to have some kind of voice in the TGC, where they do technical governance as a whole and look at how all the projects are going and make sure things are going right, and just, you know, keep the general health of the foundation good for the benefit of the projects.

[01:20:22] Audrow Nash: I like that quite a lot.

[01:20:23] Why companies might want to join the OSRA consortium

[01:20:23] Audrow Nash: What, explicitly, are the benefits to joining as a consortium member?

[01:20:30] Geoff Biggs: So different companies look at the benefits in different ways; depending on what they want and what sort of company they are, they have different weights for what's important to them. So if you look at something like the Linux Foundation, the companies that pay huge chunks of money to be like the platinum sponsors of the Linux Foundation, they get a board seat, and so they get to be involved in the decision making of the foundation at a very high level.

But that's not really what they care about so much. So long as the Linux trademark keeps on going and the project keeps on going, they're happy, generally. Because it's their engineers who are using it and product managers who are making decisions about it and so on. But they do like the fact that they can just go around and say, hey, we're a big sponsor of this, and we've got our logo on the website.

And they get priority choice of sponsor booths at Linux conferences and things like that. So for a lot of companies, it's purely marketing benefits. They're just interested in the various different ways they can get marketing benefits out of it. And that's why you actually find that for a lot of members of these sorts of foundations, the funding for the membership fee actually comes from the marketing budget,

not the engineering budget.

[01:21:40] Audrow Nash: Oh, that's funny

[01:21:41] Geoff Biggs: Something interesting that I learned; it's not quite what I expected, but it's how they work. For other companies, they want to be involved in the technical side of things. And so in our case, they want to be on the TGC, be involved in those technical discussions, and help steer the whole foundation for the longer-term health of open source robotics.

[01:22:04] Tully Foote: Geoff, I can add

[01:22:04] Audrow Nash: I think, oh, go ahead.

[01:22:05] Tully Foote: one other thing, which is that one of the benefits to the companies is also that companies want to participate in this to invest in the project as a whole.

[01:22:15] Geoff Biggs: Yes.

[01:22:16] Tully Foote: If you think about a company that is building on and relying on ROS or Gazebo or something else, they're getting value from leveraging this open project

in building their products; their products are built on top of this and leverage this. And if it was to go away next year, that would be very bad for them, because they'd either have to purchase a replacement, build their own, or pick up maintenance themselves. And so it's in the interest of companies that are out there leveraging these tools and resources that are provided in the open source to make sure that the open source project stays healthy and robust.

And so this is an opportunity also for them to provide that support and help de-risk their own future, looking down the line.

[01:23:02] Geoff Biggs: Yes, you can look at it as saying they're paying their share of the maintenance costs, rather than paying for all of the maintenance costs themselves, and the dozens of engineers that would take; they just pay one small share. And everyone else pays one small share, and together everyone comes together.

[01:23:20] Audrow Nash: That's really nice. Oh, yeah. And yeah, I've heard many companies thinking of it, as Tully said, with the de-risking approach to keeping it going. And one small share for maintenance is really nice, because then you can aggregate, and you can have people who specialize in maintaining the software, rather than, oh, your team suddenly has to go figure out some deep-down bug in ROS 2, which is super complex,

and someone who's very familiar with it on the ROS 2 team would be like, oh yeah, I know exactly what's happening.

[01:23:52] Geoff Biggs: Exactly. Yeah.

[01:23:54] Audrow Nash: Okay. Hell yeah.

[01:23:55] OSRA membership types

[01:23:55] Audrow Nash: And one thing I believe is that the memberships are tiered, so there are probably different levels of support, but also there are different costs for different-size organizations and things like this.

Correct?

[01:24:13] Geoff Biggs: Yes, so we have several different types of membership. For organizations, the standard levels are platinum, gold, and silver, which give similar rights, but different variations of them. And the cost for those also varies based on the company headcount. We looked at various different models, like having flat fees, and flat fees don't work, because if you're a small company, it's a huge amount of money.

If you're Microsoft, it's a tiny amount of money, and things like that. So in the end, we came down to using the revenue of either the company as a whole or the business unit that's involved in robotics. Sorry, income, yeah. And that's a good way to balance it, because, for example, a small startup with 10 people is going to have much less income than a very large company.

And at the same time, a very new startup is not going to have much income at all, because they're still just kicking off, but they may have something to contribute even

[01:25:21] Tully Foote: Sorry, Geoff. We do headcount.

[01:25:21] Geoff Biggs: So that goes... wait, did we do headcount? Oh, sorry, yeah, things changed.

[01:25:28] Audrow Nash: No worries. Yeah. So it's slightly in flux, but it's proportional to the size, basically, in some way.

[01:25:34] Geoff Biggs: It's the size, yeah. So it is the size of the company or the business unit involved, because if you look at,

[01:25:43] Audrow Nash: Yeah.

[01:25:43] Geoff Biggs: something like Microsoft or Bosch or Amazon, it's not like the whole thing's doing robotics, right? So

[01:25:49] Audrow Nash: Yes, for sure.

[01:25:52] Geoff Biggs: Yeah. And so that's for organizations.

We also, of course, want academic institutions and non-profits to show their support, so we have a very minimal fee for that. And most interestingly, something that many other foundations don't necessarily do: we also have a way for individuals to be members of the OSRA. And

[01:26:14] Audrow Nash: At the consortium level or,

[01:26:16] Geoff Biggs: level, yes.

So individuals can pay a very small fee on an annual basis, and that gives them the right to be what we call end user representatives. So the Technical Governance Committee and each project management committee have end user representatives on them, and their job is to represent the community of users rather than developers.

So they're looking at it from the point of view of using our software, not just developing it. And so the people who are individual members of the OSRA are able to be elected to these positions, and the people who vote in the elections are, sorry, all the individual members of the OSRA.

And so they choose one amongst them to represent them on the TGC, for example, and that person's job is to take the community's voice to the TGC, or to the project management committee.

[01:27:15] Audrow Nash: Okay, that sounds really cool. And then also, it'll be like, okay, we have a decision about the direction, like for this next release, are we going to maybe make a bunch of features for quadrotors versus, and I'm just making stuff up, I don't know, industrial robots of some sort.

And so then the community can vote based on that, based on their seats in the consortium? Is that kind of the intention? And it's proportional to your level, in a sense, to your headcount.

[01:27:49] Geoff Biggs: Yeah, it won't necessarily be a vote, it depends on the decision being made and how the decision is made. if everyone says, hey, I'm good with that, there's no point holding an official vote, right? Just waste time. But yeah, so if it's like a very small, if it's for an individual project, we're making a decision this week about whether to merge a PR, then all people who are committers on the project plus the end user representative on that project committee will be able to, discuss and be involved in that decision.

If it's a decision like, this project has asked for 100k a year to pay for a certain resource that they say they need for the project, but if we do that, we can't give resources to these other projects, or something like that, the TGC will be involved in making that decision, and the end user representative on the TGC will be part of that discussion and part of that decision making.

[01:28:41] Audrow Nash: I see. That's cool. I like that you're including individual contributors.

[01:28:47] Geoff Biggs: Yeah, we think it's important. The community and the individual, they're very important to us from the history of ROS and also robotics. There are lots of hobbyists in robotics, for example, and there's no reason to ignore them. And yeah, we think that they should be involved and they should have a voice. Even if it's just one amongst many, they still need to be heard.

[01:29:06] Audrow Nash: Yes, for sure. okay, that sounds great. So we have the consortium based approach and then we have the meritocratic one where you're on specific projects.

[01:29:18] Meritocratic contributors + how to mentor and get involved

[01:29:18] Audrow Nash: I imagine that there are a lot of very interesting ideas of how to get this set up and rolling and this kind of thing.

Tully, do you think you'd be good to speak to this, or should we go to Geoff for talking about the meritocratic part? You guys know what I mean anyway.

[01:29:40] Geoff Biggs: Yeah, meritocratic.

[01:29:41] Audrow Nash: Yes. Meritocratic. go ahead Tully.

[01:29:47] Tully Foote: Yeah, I think what we'd really like to do is, we have a pretty good culture of this at the moment and we're trying to formalize it. We have project leads, and we've actually asked all of the existing project leads to continue on in the project lead role under OSRA for the first year, and at the end of the year, they'll be up for reelection within the PMC going forward. So we'd really like to basically formalize a little bit of what we're doing. Each project committee will have the opportunity to define their own process. Each of our different projects has a very different composition and makeup.

The ROS project has a lot of moderately federated packages and maintainership is distributed, whereas something like Gazebo is midscale with some federation, but a lot of overlap on core maintainers. And then the OpenRMF and infrastructure projects are actually much, much tighter knit communities with a smaller group.

And so the same project management structure is not gonna be correct for each of them, so we're gonna let each of those project committees set up their own structure. We've laid out some high level guides saying you need to provide structure along these lines, such as the mentorship and adoption process for new contributors to come in.

But we're going to leave that up to each committee to actually define: when you become a committer, what access do you get, et cetera. It depends on the project, it depends on the scope and how they want to do that.

[01:31:30] Audrow Nash: Gotcha. So we are coming to the end of the time that we had booked. Are you guys okay with running a little bit long, like maybe 15 minutes or something, or do you have to run,

[01:31:42] Geoff Biggs: Good.

[01:31:42] Tully Foote: Yeah, I could keep going.

[01:31:43] Audrow Nash: Okay, hell yeah.

I like the idea of it, because these different projects within OSRA are different in nature.

You're allowing the people who have been most involved, the leaders of each of them, to decide how to roll out these strategies. Will there be additional, cause one thing, like ROS 2, which is mostly what I've worked on, is enormous among our projects. It's probably a hundred repositories across the whole project, and that's just our core repositories.

[01:32:26] Mentorship process

[01:32:26] Audrow Nash: Where are we getting the initial mentors from? Is it mostly people that are now at Intrinsic or what do we imagine for, this, the onboarding steps for the meritocratic part?

[01:32:41] Geoff Biggs: So I'm actually working on this at the moment, as we're recording, we're recording this in advance of the announcement, but I'm working with the four project leads and we are drawing up the lists of the initial set of committers for each project, and what specific repositories they need to have access to.

And that's going to be our kickoff point for each PMC, and then from the operational date of the OSRA, from that point forward, anyone will be able to say, I want to be a committer, put their hand up, and then go through the mentorship process. And so we think that probably for the first year at least, the teams will look very similar to how they look now.

So for OpenRMF and Gazebo, it's quite internal to Intrinsic. For ROS, there's a bit of a mix, maybe not half and half, but a fairly good balance at the moment of ex Open Robotics people at Intrinsic and people from other companies such as Sony. But we think over time, as other people become more involved in the projects, become more interested in being involved in the day to day rather than just sending the occasional PR, we'll see more committers from outside the old Open Robotics team start becoming involved.

And that's really what we're aiming for. We want to have these project management committees be as diverse as possible, in all the different meanings of diversity. We want all of that in there so that they are long term sustainable. We don't want a situation where Bob from the ROS PMC left and he's not involved anymore and we don't know how to do half the project.

That's not a situation we want to be in anymore.

[01:34:24] Audrow Nash: For sure.

[01:34:25] The value of contributing to open source software

[01:34:25] Audrow Nash: Yeah, one thing, just thinking about my conversations on Spaces, on X, and things like this, with various people in the community, especially those that are trying to get involved in robotics, I often recommend they go make open source contributions because of the mentorship that you get.

Yeah, it's really amazing, because you make a pull request with the changes you think you should be adding, a new feature, fixing a bug, whatever it is, and then it gets reviewed by someone who has a lot of experience with the project, whether it's one of the Intrinsic folks, or one of the original Open Robotics people, or anyone in the community that has been doing it for a while.

They probably have great software developer chops, and they'll tell you, oh, you could approach this better, here's how I would frame it, here's some formatting or conventions that I would use that are good at reducing bugs, or whatever it might be. And I feel like that's so valuable. Like, in my time when I started at Open Robotics, it was just such fast learning because of the feedback of the pull request process.

and I think people will become much better software engineers from doing this kind of process.

[01:35:46] Geoff Biggs: I really strongly agree. I tell younger engineers, whenever I talk to them about why they should do open source, one of the things I always say is: the job I'm in today, and probably most of my career, is not because I got a PhD, it's because I did open source software. Since 2003 or 2004 I've been doing open source software for robotics, and that has introduced me to people, it's given me openings, but more importantly it's taught me good software engineering practices. Just working with so many different people, learning how they work and how they do things, and figuring out what works for them and how it will work for me, that's given me much better skills than I would have got just staying in one company and only doing that company's processes. And talking to other engineers, even in robotics, I started at Open Robotics in 2020 and I still learn lots. People like William Woodall, he's got incredible C++ skills, right?

He taught me so much about C++, and Michael Grey, who does OpenRMF, again, he taught me amazing things about C++ templates and now Rust. And so you're always learning in open source. It's great. It's a really good way to become and stay an amazing software engineer, to be involved in open source.

And I think the most important thing is never be too embarrassed to put in a pull request. Even if it's one line, you put in a pull request, people will see it, and they'll tell you how to make it better, and you will know next time around, and you will learn from that, and your software will get better.

And,

[01:37:25] Audrow Nash: And even if, so like sometimes I'll be working on some side project or something, and I'll be using a library, and something doesn't work how I expect. And so I'll make an issue or sometimes a pull request. And a lot of times it points out something that was either poorly communicated in their documentation, or misleading, or whatever.

And it's a good opportunity for them to improve their project. And a lot of times the response that I get, even if I think it's a silly question, they're very open to it and they're very helpful.

[01:37:59] Geoff Biggs: Yeah, you learn very quickly, you learn very quickly that open source, people who do open source software, are generally open minded, they want to talk to other people about the software, they want to learn from others about how they can make it better, where the problems are, and they want to help other people help their project.

If someone makes a pull request to my project that fixes something or adds a new feature, it's in my interest to help them brush it up and get it merged, because then I don't have to do it myself, and it takes less time to help them than it would to do it myself. And they benefit from that because they learn as well. We all benefit.

It's

[01:38:36] Audrow Nash: And they might get a feature they wanted.

[01:38:38] Geoff Biggs: Yeah, they get a feature they wanted, we get a new feature in the software, which is good for other users, it grows the software. This is why the Open Source Robotics Foundation exists, because this is the way we believe robotics has grown and is going to continue growing, and that's what we want to support, this whole,

[01:38:55] Tully Foote: And overall, the development velocity of open source is amazing, because

[01:38:59] Geoff Biggs: oh, incredible.

[01:39:00] Tully Foote: You have this large pool of people, each one contributes a little bit, and everyone moves forward at the rate of the sum of the contributions.

[01:39:09] Geoff Biggs: Yeah, and it never stops too, right? 24/7, it's always getting new stuff coming in if you've got a resource,

[01:39:17] Audrow Nash: For sure

[01:39:17] Geoff Biggs: always going forward.

[01:39:19] OSRA's badging system for contributors

[01:39:19] Audrow Nash: One thought that I have, and I don't know if you guys are planning to do this, but if not, maybe consider doing this. I think that one of the other big benefits of people contributing to open source, kind of as you've mentioned, Geoff, is that you start to meet people in the community, and that has given you opportunities and jobs and this kind of thing.

What I think would be a really cool thing is to somehow have a leaderboard, or, it doesn't need to be competitive per se, but some way of making it so that people are highlighted for their contributions, because that visibility will make it so other people know them a little bit better.

And maybe it's a better way to get a first job or something like this.

[01:40:12] Geoff Biggs: Oh, yeah,

[01:40:13] Tully Foote: we,

[01:40:14] Audrow Nash: I would love to see that. Go ahead, Tully.

[01:40:16] Tully Foote: I was going to say, we're actively working on that. We're working on a badging system, so that people can get acknowledgement for whatever their status is. They can show that off and advertise it. We're also going to work to try and integrate it with some of our various infrastructure.

See if we can get them badges on some of our sites and the forums, et cetera.

[01:40:36] Audrow Nash: Hell yeah. Oh, I love it. Go ahead, Geoff. Did you have anything to add?

[01:40:40] Geoff Biggs: Yeah, I was just going to say, apart from the badge system, this is actually something that we've been promoting to potential member companies as well, is that if you get involved in the OSRA, it's a really good way to find new talent that you can hire. Because the people who are actively involved at the meritocratic layer are guaranteed to be people who already know the software so they can hit the ground running when you hire them.

But they're also guaranteed to be good engineers who are open minded and willing to learn, and are already learning just by being involved. And so it's good for companies and it's good for the people who are contributing. Everyone learns, everyone benefits, and everyone gets in touch with each other.

You can see, I know who that person is, I've seen their contributions going into the software, I think they're great, I want to work with them.

[01:41:31] Audrow Nash: Hell yeah. Do you ever think, so related to that, it seems like a fairly logical next step to me to also have, almost, a jobs board for some of the consortium companies that may be hiring, like exposing and helping people that are looking to connect to possible job openings. Any plan to do that? That could be quite lucrative,

[01:42:02] Geoff Biggs: That's actually one of the membership benefits. We haven't got it set up yet, but there will be a jobs advertisement system on the website, and members will be able to advertise jobs there. And we've seen some of this on Discourse, we've seen people advertising internships and full time positions and so on.

We want to basically try and formalize it and make it better for people to use. And so for members, this will be a benefit: they can send their jobs to that job board, and then that will be one central place where engineers who know ROS or know Gazebo can go and find jobs that are going to use the skills they have, and know that they will have an advantage in applying for those jobs because they'll be able to promote their own involvement in the OSRA.

[01:42:49] Audrow Nash: I really like that.

[01:42:52] Educating more people to be roboticists

[01:42:52] Audrow Nash: Okay, now we're talking about getting people jobs, and what this makes me think of is skilling people up, educating them, this kind of thing, so that they are in a position to do this. And part of that is the mentorship perspective, but are there going to be other efforts around education?

Maybe so that we can increase the number of people that would be contributing. You mentioned better docs as a big initiative, but is there anything else that will go towards skilling up people in general?

[01:43:27] Geoff Biggs: The OSRF actually has, as one of its goals, education in robotics and open source robotics software, and so we do actually have some plans in this area. I think everyone knows the TurtleBot has been around for a dozen years now, I think. It started at Willow Garage first, I think. Yeah, Tully was the originator of that.

As I recall. And so that's a big thing for education, but we also want to do, better in that regard. Not just providing a robot that runs ROS, but we actually want to have, tutorials that work with it, and courses you can do with it, and so on. And so we do have ideas in there, but these are going to take a little bit longer to come to fruition.

[01:44:10] Audrow Nash: There's a lot that's already happening.

[01:44:12] Geoff Biggs: We're focusing on OSRA for the first year, and once that's up and running, then we'll have more time to focus on those as well. But yeah, watch this space. There's going to be fun stuff coming.

[01:44:21] Audrow Nash: Hell yeah. That seems awesome to me. Is there anything else that either of you want to mention regarding OSRA? And if not, I would love to hear where you think things are going, for the community and for this whole initiative.

[01:44:43] Geoff Biggs: I think, for the community as a whole, not just the initiative, obviously, I hope the OSRA succeeds, and I think it will. I think for the community as a whole, this is going to be a really beneficial thing, very much in the long term especially. In the short term, if you're a user of the software, you're not going to see much change.

If you're a developer of the software who just sends an occasional PR, then you're probably not going to see much change for a while. If you're really getting heavily involved, then maybe there will be change, you could become a committer, for example. But in the long term, I think this is really going to accelerate the development of all of the OSRF projects and make them better for the people who want to use them.

So if you're a hobbyist, it's going to be higher quality software, so there'll be fewer gotchas for you to worry about, fewer problems. And if you're a company, it's going to be much more production quality. There'll be much less trouble for you to ensure your robots work, because you will know that the underlying software is being developed using the resources of the whole robotics industry, pulled together to provide great software.

The same way that companies can rely on the Linux kernel for putting in their products, I think, really, we're going to see acceleration to that level, and also acceleration of new features. Just in the past year, the RMW Zenoh thing shows what's possible when we have a consortium approach rather than just one company doing the full development.

[01:46:06] Audrow Nash: That's a good point. Yeah. We are able to make a heavy lift now under Intrinsic

[01:46:13] Geoff Biggs: Yeah,

[01:46:13] Audrow Nash: Or Intrinsic funding a lot of development. that's a wonderful thing. Yeah. And so you're saying more of the same will continue.

[01:46:22] Geoff Biggs: I believe so, yes.

[01:46:23] Audrow Nash: And we'll be able to make larger shifts that improve things, and it might be documentation.

It might be education. It might be, I don't know, all sorts of other supporting things that make the community richer. Yeah. So that's all wonderful. Tully, what do you think?

[01:46:40] Tully Foote: Yeah, I really want to look to grow and diversify. We've been a small, tight knit community, and I'd like to see more people get involved in doing this. Setting up the slightly more formal structure, I think, will give us the ability to reach out and grow effectively.

We've reached the limit of the scale we can grow with direct connections, and I'm looking forward to seeing more people getting involved, more companies getting involved at more levels, more opportunities for engagement and representation, to get more voices involved in the community.

[01:47:21] Audrow Nash: Hell yeah. Love it. Let's see.

[01:47:26] Robotics in the next 5 years

[01:47:26] Audrow Nash: Now, starting to wrap up, what do you guys think for robotics? When you think about five years in the future, what do you think will be different? Just tell me where you think robotics will be in five years, and what opportunities may be on the way there.

Wanna start with Geoff?

[01:47:52] Geoff Biggs: Okay, I think we're probably going to see two major trends. One is we're going to see a lot more multiple robots in a single space, and I don't just mean a whole pile of Kivas, but multiple robots from multiple vendors. So like that whole thing with hotels, with delivery robots going to the rooms and cleaning robots and so on.

That sort of thing we're going to see much more of, I think. All these single use service robots are really going to be everywhere, inside and outside. And I think we're going to see a lot more of that. The other thing I think we're going to see is that robots start coming to market a lot faster.

And that will in part be because of software like ROS, and in part because software built on ROS will start becoming much more widely available, both open source and for-sale stuff. We already see a few companies doing this, but you'll be able to go and buy a robot chassis, and then from a different company buy a navigation system that you can put on it, for example. And so it'll become much more like that: robotics companies become companies who provide services using robots, and they just do a bit of integration work to pull this together.

I think that's the way we're going to see things going. And I really look forward to getting to that point, because that's when the promise of ROS, and Gazebo, and OpenRMF, and open source robotics in general really takes off: when people start not building the whole robot themselves, but trading parts with each other.

[01:49:30] Audrow Nash: Yeah, I think I had an interview, I think the last one published, or episode three, was with Polymath Robotics, and they're doing just this. And it's the start of this trend from my perspective, where they're taking ROS, building some nice things on top of it that make it very easy to access, and then you can just build applications on that, which is so cool.

It's starting. I think.

[01:49:53] Geoff Biggs: It's starting.

[01:49:53] Audrow Nash: Your observation is already starting, which is the coolest thing.

[01:49:56] Tully Foote: Yeah, it's fun to reflect on that, because actually one of the visions that we set out when we were designing ROS back in the '07, '08 era was, we wanted to be the LAMP stamp for robotics, where

[01:50:12] Audrow Nash: What does LAMP stamp mean? I don't know what that,

[01:50:14] Tully Foote: LAMP stack. The Linux, Apache, MySQL, PHP, Perl, Python.

[01:50:19] Audrow Nash: It was how people would build websites,

[01:50:21] Tully Foote: Exactly. This was one of the really early demonstrations of the power of open source with those open source tools as generic building blocks.

If you had an idea for a website that you wanted to stand up, you could use those tools, and with a little bit of configuration and coding on top of it, create a website very quickly. Literally, you could make prototypes overnight.

[01:50:48] Geoff Biggs: Yeah, you can look at it as, with those four tools you could build Amazon. That's the way it was, basically.

[01:50:55] Tully Foote: Yeah,

[01:50:56] Audrow Nash: Yeah.

[01:50:57] Tully Foote: And Amazon and Facebook, all of it, if you had an idea. And the nice thing is, with these open source tools, they were accessible to an individual, so that if I came along and had a great idea, I could make a prototype, and it would just be there and be ready to go. And the vision we had for ROS was to provide that LAMP stack for robotics, so that all the tools are there that you can bring together.

And if you have an idea for a robot application, you pick these parts off the shelf, the open source ones, you put them together in the configuration that will achieve your application, and you can test it out and deploy very quickly. And I think that is now in sight, where we can actually have this thing, where it may actually be robots that are getting deployed.

It's not just,

[01:51:51] Audrow Nash: Websites

[01:51:52] Tully Foote: Exactly

[01:51:54] Geoff Biggs: Yeah, we're right at the very edge of that. And that's going to be the next revolution for us. I think we've seen this sudden surge in robotics startups, and the next one is going to be when they start sharing stuff, selling stuff to each other, and that's going to be amazing.

[01:52:09] Audrow Nash: Hell yeah.

[01:52:10] Why open source?

[01:52:10] Audrow Nash: To maybe ask an obvious question that we all agree on, but it may not be clear, Tully, what's the value of the open source part of this, versus closed source, or things you can just buy and you don't know what's going on in them? Why open source?

[01:52:29] Tully Foote: So there are lots of components to open source that provide value. I think that for me, the most valuable thing is the development velocity. I mentioned that a little bit earlier, but basically, if there's 10 of us putting little incremental changes into one product for a tenth of our time, that's actually the equivalent of a full time engineer.

And so if you can get this community of small contributions that keep stepping the state of the art forward, you can actually do more than if you invested a full time person to develop it for yourself. And so by leveraging this broader community, the development velocity goes faster. In addition, every incremental person contributing and participating in the community accelerates everybody together, which is this really powerful incentive to join and collaborate.

Because if you break off and do your own little thing, you do that, but then you incur all of the development costs and all of the overhead and maintenance and,

[01:53:38] Audrow Nash: the tech debt.

[01:53:39] Tully Foote: You have to try to keep up with the distributed community development.

so you have to both invest in the velocity and the maintenance.

[01:53:50] Audrow Nash: Go ahead, Geoff.

[01:53:51] Geoff Biggs: The way I like to put it is that, for the cost of a little bit of engineering time to contribute to the project, you get a very large amount of free labor, basically.

That's the way I like to put it, because to companies, free labor sounds great compared to paying for their own engineers, right?

And that's what you get. You're getting all these other people effectively working for you for free. Sure, they're working for themselves and for everyone else as well, but they're working for you for free on something that you could never do yourself, because it just takes too much time to do and you'd have to pay dozens of engineers or more.

it's really a massive benefit.

[01:54:29] Audrow Nash: Yeah. And also the hardening of the code too, because you have so many people using it, and you're going to find a lot more of those flaky 1 percent or one in a thousand or one in 10,000 bugs.

[01:54:42] Geoff Biggs: That's actually a key point there. It's not just the number of people, it's the variety of viewpoints and thought processes and use cases.

[01:54:52] Audrow Nash: Great way to put it. Yeah, Tully.

[01:54:55] Tully Foote: I was going to say, the ability to understand that this has been used and demonstrated by a lot of people in a lot of places allows you to have more confidence in something like a heavily used project. The more people have tested it out, the more corner cases you've discovered, and the longer it's been there, the more opportunities to find those defects come up. And if we have good quality processes to make sure that we add regression tests for each corner case that anybody discovers, you can be more confident when you pick up the open source software.

[01:55:29] Audrow Nash: Definitely. Hell yeah, okay, that's really cool to hear all those thoughts on this.

[01:55:36] Final thoughts

[01:55:36] Audrow Nash: If you were to summarize everything we've talked about, or just the OSRA stuff, what would be the main takeaways that you hope people come away with? Start with Geoff,

[01:55:53] Geoff Biggs: I think my main takeaway for people is that the OSRA looks like a big change, but really what we're doing is taking what we've done so far and giving it much better hope for the future, a much stronger, much more sustainable future, to ensure that we keep on going for a very long time to come.

The OSRA is our route to achieving that.

[01:56:25] Audrow Nash: Hell yeah, and Tully.

[01:56:27] Tully Foote: I think my takeaway would be that I hope that anyone listening to this finds a way to get engaged with OSRA. We have opportunities for corporations. We have opportunities for individuals. I think it's powerful for any of you to get involved. We're looking to build pathways to make that available and the project will be better if you get involved.

So please come out, join us, and help build this project into a greater thing. We've already got over half a dozen members committed, and if you're at a company, please consider joining us.

[01:57:04] Audrow Nash: Hell yeah, and we'll include links to all the things that they might want to look at if they want to join. Awesome. So it'll all be online, I assume.

[01:57:16] Geoff Biggs: Yes. Yes.

[01:57:17] Audrow Nash: Hell yeah. Okay, thank you both. It was just awesome to hear more of the origin stories for Willow Garage and Open Robotics, a lot of stuff that I didn't know.

And I think OSRA seems like a very good initiative and probably very good for the community. And I see what you're saying with things accelerating toward the robot revolution, and I'm looking forward to it. So thank you both.

[01:57:44] Geoff Biggs: Thank you very much.

[01:57:44] Tully Foote: Thank you.

[01:57:46] Audrow Nash: Bye everyone.

You made it!

What do you think? Will you be joining OSRA? I'll put a link in the description if you want to check it out.

One more shout out to OSRA's founding sponsors: Intrinsic, NVIDIA, Qualcomm, Apex.AI, Zettascale, Clearpath, Ekumen, eProsima, PickNik, Silicon Valley Robotics, Canonical, and Open Navigation.

That's all for now. Happy building!

[00:00:00] Edward Mehr: These are, like, 70 year old planes, so they need to have the dies for them, the tooling for them, and the tooling also sometimes doesn't exist.

So for example, for a landing gear door of a certain aircraft, we're looking at a four year lead time and millions of dollars before you can get your part. So the aircraft needs to be down on the ground for four years. It hugely affects the fleet readiness for our military. So we're working with them, turning some of those four year lead times into days.

[00:00:29] Episode intro

[00:00:29] Audrow Nash: There's a big interest in America to reshore manufacturing. Now, to do this, do we just bring the jobs back to the U.S. and overlook that the reasons those jobs may have moved may still exist? Or do we invent new ways of doing things and leverage our strengths in technology and innovation?

This interview is with Edward Mehr, and he is firmly in the second camp. He wants to invent new ways to do manufacturing, and he's well positioned to help in this effort. He's the CEO and a co founder of Machina Labs, and they're reinventing metal bending with robotics and AI, in some cases taking something that literally takes years to make and doing it in just days.

It was an awesome conversation and got me excited about the potential of robots and AI to greatly improve how we do manufacturing.

You'll like this interview if you're curious about robotics and AI in manufacturing, interested in how great robotics companies are built, and if you're interested in AI for modeling complex phenomena like metal bending.

Without further ado, here's my conversation with Edward.

[00:01:51] Introducing Edward and Machina Labs

[00:01:51] Audrow Nash: Would you introduce yourself?

[00:01:53] Edward Mehr: Yes, my name is Edward Mehr. I am CEO and co founder at Machina Labs.

[00:01:58] Audrow Nash: Hell yeah. Would you tell me about Machina Labs?

[00:02:01] Edward Mehr: Yeah, so we are working on the next generation of manufacturing floors. The real problem we're trying to solve is that today, if you want to build a physical part, you pretty much have to build a factory that is very specifically built for that part. A lot of the tooling, a lot of the machinery that goes into factories, is very specifically designed for the geometry and the material you're trying to manufacture.

And that severely limits you, in terms of the amount of CapEx investment you have to make to build a part,

[00:02:36] Audrow Nash: So it's far more expensive because you have to invest in infrastructure to build it. Okay. Or equipment.

[00:02:42] Edward Mehr: What we're trying to do is, can we develop technologies that allow you to, make a part A today out of design A and material B, and then switch to design C and material D tomorrow without having to change your factory.

[00:02:56] Audrow Nash: I love it. Hell yeah. So how are you approaching that? Because that sounds very interesting, to be able to switch so quickly. And I've heard that we're going from high volume, low mix, meaning a lot of the same thing being made, to more of lower volume with high mix, and this sounds like it's working towards that. What do these manufacturing techniques look like?

[00:03:26] Edward Mehr: Yeah, it's funny you mentioned it going from high volume low mix to high mix low volume. I think actually the ideal combination is high mix, high volume, right? because always people think about it in terms of it's a compromise, right? Because that was

[00:03:42] Audrow Nash: a trade off

[00:03:43] Edward Mehr: was given as a trade off, right?

Like you need to have high volume and low mix, or low volume and high mix. But if we want to think about it fundamentally, what is the technology that needs to be there to enable both high and low volume and high and low mix? That's actually what we're thinking about. But to answer your question directly, look at the history of manufacturing. If you go maybe 200, 300 years ago, manufacturing first started as a craft, or not started, but it was mostly done as a craft.

We actually had a lot of flexibility. We had these people, we called them craftsmen or blacksmiths, and they had a very creative mind. They had a very limited set of tools, like hammers and chisels, maybe 10 different kinds. But then you could one day go to them and say, hey, I have this piece of rod, can you turn it into a sword?

And then they would hammer it into a sword, right? They would figure out with their mind how they're going to use the limited tool set they have to make a sword. And the next day you can go to them and be like, I have this sheet of metal, and I want to make a shield. And then they would use the exact same tools they used for the sword, but apply them differently, in a creative way, to make you a shield.

So they were actually very flexible. The challenge was that craftsmanship was very much a learned skill. You had to do mentorship. You had to go learn it from somebody else who had done it. So it would take a while before somebody would become expert in it. And then more importantly, humans have limitations in terms of throughput.

So you could maybe make one sword, two swords, three swords a day, but beyond that, you couldn't, right? So with the Industrial Revolution, a change happened where we were like, okay, craftsmen are great, flexible, but they can't match that throughput. So we built these machines that can make the same thing over and over again, right?

Because we were not creative enough, or didn't have enough technology, to develop the same thing that the craftsman does. We could make very constrained machines, hardware-constrained machines, that can do the same thing over and over again. But we couldn't get flexibility, because we couldn't replicate the intelligence that the craftsman has.

So for the past two centuries or more, up to even today, manufacturing morphed into make the same thing over and over again. Even today, if you're like, for example, an automotive business, a car business, the main way you can make money is create one car that everybody loves. And then make a lot of it.

That's how you can make margin, because manufacturing is very much dependent on the type of parts you're making, and you cannot easily change it. Look at Tesla, they're planning to make 5, 6 million of the Model Y a year at some point. And that means that the game is still the same as when the Industrial Revolution started, right?

You have to make the same thing over and over again to make it profitable. Now, today, with advancements in AI and robotics, we can replicate what a craftsman does at scale. You have robotics that has the same kinematic freedom as a craftsman, right? Maybe even more dexterity, maybe more precision, higher force that they can apply.

But then with AI, we also can replicate what happens in the mind of the craftsman. We can figure out how to use different tools creatively to replicate different types of processes. So I think now the two ingredients to make a flexible manufacturing system exist. So to answer your question directly, our systems use robotics and artificial intelligence to almost replicate what a craftsman does, but in a scalable way.

Because once you build one robotic system that can do what a craftsman does, I can just replicate it and have thousands and thousands of those systems that do the same thing. Now you get flexibility, but you also get scale through scaling these robotic systems.

[00:07:33] AI in manufacturing

[00:07:33] Audrow Nash: Okay. So I'm interested in what you are referring to with AI for this. What kinds of things can we use AI for, to come in and help with manufacturing? Maybe some simple reasoning about what to do next? How are you using AI?

[00:07:57] Edward Mehr: AI, the definition of AI has changed over time

[00:08:00] Audrow Nash: Oh, definitely.

[00:08:01] Edward Mehr: Since its inception. If you look at the 1960s, and I come from a computer science background, if you look at the 1960s, intelligent AI systems were actually rule-based systems. We would call rule-based systems, like regular programming, intelligent AI systems.

And then over time, as we developed new techniques, new empirical techniques, where we can find patterns in the data and use those patterns to do a task that a human does, more easily and more automated, the definition of AI started shaping toward more of the things that humans can do.

There are a whole array of tasks in manufacturing that fall within different categories. Up to today, most of the automation has been what I call classical AI, where you define the rules and the system just follows the rules. But look at the type of manufacturing we talked about, where a craftsman can figure out a sequence of events,

can look at a piece of metal and, as they're hammering it, figure out what are the next steps they need to do. That's the type of AI we're talking about. It's not following a set of rules, it's finding patterns to get a task done. And that task might be forming a flat sheet of metal into a precision part.

What are the set of process parameters? What is the sequence of tasks that needs to be done to get a flat sheet of metal all the way to the final product? You cannot easily define it. It's not rule-based. That's why it has been a craft. You have to learn it from somebody, a mentor.

But with AI, we can actually capture what is happening in the process, using different ways of capturing data. And then it starts identifying what are the patterns that the craftsman uses to get to the right part. And that's what we're talking about here. It's similar to some of the discussions you see around AI outside of manufacturing, how ChatGPT can do something similar to human reasoning. That's the type of AI we're talking about here. That being said, we also use a lot of earlier versions of AI, where, for example, as I'm forming a part, can I look at the data from the sensors and find anomalous behavior that might lead to a defect in the part?

So classical anomaly detection using empirical data. Those are things that we also use. But the main enabler is figuring out how a craftsman forms a part, or crafts a part. What are the set of rules and guidelines that we need to develop so that the robotic system can actually have the same performance as a craftsman?
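As an aside, the sensor-stream anomaly detection Edward mentions could be sketched, in minimal form, as a rolling z-score check on a forming-force trace. Everything here (function names, window size, threshold, the toy data) is invented for illustration, not Machina's actual implementation:

```python
import statistics

def detect_anomalies(signal, window=50, threshold=4.0):
    """Flag indices where a sample deviates strongly from the recent
    rolling mean -- a crude stand-in for the kind of empirical
    anomaly detection described in the conversation."""
    flagged = []
    for i in range(window, len(signal)):
        recent = signal[i - window:i]
        mu = statistics.fmean(recent)
        sigma = statistics.pstdev(recent)
        if sigma > 0 and abs(signal[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# A smooth, slowly rising forming-force trace with one injected spike.
trace = [100.0 + 0.1 * i for i in range(200)]
trace[150] = 500.0
print(detect_anomalies(trace))  # [150]
```

In practice you would run something like this per sensor channel (forces, motor currents) and tune the window and threshold to the process.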

[00:10:48] Audrow Nash: Okay. I would love if you can take me through a concrete example, because I, feel like I understand a bit from how you're describing it, but I want to see like in a metal forming or something like this, how do you, how are you using this to achieve a desired part?

[00:11:05] Edward Mehr: Yes. So for folks who might have seen our videos, you will see two robots, for example, forming a sheet of metal from a flat sheet. So you have a flat sheet, two giant robots on two sides, and, almost like a potter, two stylus-looking end effectors on the two robots, on the two sides of the sheet. They come in and pinch and deform the sheet of metal slowly into shape.

Now, when you look at it, it looks very much like a CNC process, right? Where you're almost doing waterlines of the part, and deforming the part in waterlines until you get the final part. But in reality, it's not just a CNC process. There are a lot of different changes in the process parameters, but also in the path the robot takes. Sometimes you might go to the center of the part and start forming the center, and then go back to the outside and form the outside of the part.

So why do we have to do that? Why doesn't a simple heuristic CNC path get you the right part? It's because, as you're forming the part, the sheet moves in unpredictable ways. And also, there's a phenomenon called springback, where you might form a sheet to a certain depth,

and after the robot moves away, the sheet jumps back into a different shape, right? You need to account for all these movements, all these springback phenomena we talked about, to form the right part. So the path that the robot ends up taking to form a part that's less than a millimeter off from the design is a very non-intuitive path.

You can, a human cannot just think of it. and

[00:12:39] Audrow Nash: you can't think of it all at once, but you can probably iterate to get there.

[00:12:43] Edward Mehr: You could iterate to get there, yes. And that's what we do, we actually generate the data. So right now, what happens is, you run maybe a heuristic path that you think, oh, this might work well. And then you scan it and say, okay, this area was five inches off, that area was two inches off, compared to the CAD.

So I can make slight adjustments to get it slightly closer, and then run another one, and then see the results, and run another one. But if I could have a model that would tell me, in order to get to this part, this is the non-intuitive path you can take, and you didn't have to do those trials, you can significantly improve the performance of the system. So what we're doing at Machina is starting with human intelligence and creating a very seamless interface so humans can iterate and build these parts, while we capture this data and use this data to build models that slowly improve the efficiency of the humans.

In the early days, we would do 25 trials to get a part done. Now we are down to five, six trials. And the goal is to get to the point that, right off the bat, you get

[00:13:45] Audrow Nash: Goes to the right thing.

[00:13:46] Edward Mehr: done. Yeah.
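The run-scan-adjust loop described above can be sketched as a toy feedback iteration. The `form` function below is a hypothetical stand-in for the real process (a fixed springback fraction, which real sheets don't have), and all names and numbers are invented:

```python
def form(commanded_depth):
    """Toy stand-in for the real process: the sheet springs back, so
    the achieved depth is only a fraction of what was commanded.
    (The real behavior is far more complex and pose-dependent.)"""
    return 0.82 * commanded_depth

def iterate_to_target(target, tolerance=0.01, max_trials=25):
    commanded = target  # start from the naive heuristic path
    for trial in range(1, max_trials + 1):
        measured = form(commanded)  # "run the part, then scan it"
        error = target - measured   # compare the scan to the CAD
        if abs(error) < tolerance:
            return trial, commanded
        commanded += error          # nudge the next trial's path
    return max_trials, commanded

trials, commanded = iterate_to_target(target=10.0)
print(trials, round(commanded, 3))  # 5 12.193
```

With this simplistic springback model the loop happens to converge in five trials; the point is only the structure: run, scan, compare to CAD, nudge the commanded path.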

[00:13:48] Audrow Nash: That's really cool. Hell yeah. So the way I'm understanding it is, in this metal forming task, you have the two robots, one on each side of the sheet, and they pinch so that they can push in to deform the metal in some way to make some complex part. You have a part that you want to make, and you are iterating in some sense: you'll pinch in some way and deform it, and you do that a few times, and you say, this is what I expected, this is what it actually looks like when you scan it or something to understand the state of the part. And then you keep taking steps until you converge on what you actually want the part to look like. Then you're also generating data so that you can have a better model of what happens when you apply an action to that sheet, so that you can be more efficient with your actions,

Then you're more likely to be reliable, and you're also faster, because you are better able to make the part in fewer actions. Is that right?

[00:15:19] Edward Mehr: No, you're exactly,

[00:15:19] Using data for better models than physics-based models

[00:15:19] Edward Mehr: you're basically building models that give you an understanding of the physics that is happening underneath, and using the data to do that.

[00:15:28] Audrow Nash: Rather than directly modeling it because it's very complex because of all the really complex contact forces

[00:15:35] Edward Mehr: Contact forces, friction. Traditionally, when people would do this, they would use physics-based modeling, right? People think of finite element analysis or computational fluid dynamics. Those are methods that allow you to simulate a physical phenomenon. The challenge with a lot of these processes is that, first of all, FEA is very slow,

[00:15:57] Audrow Nash: What was that?

[00:15:58] Edward Mehr: FEA or Finite Element Analysis, which is the physics based way of doing these things.

[00:16:02] Audrow Nash: Yeah, you simulate like a bunch of points and you see what happens and you do, it's a differential equation to solve

[00:16:08] Edward Mehr: You apply physical laws to figure out, okay, what is happening. One challenge with that approach is that it is slow. It requires a lot of computation to do it. In the early days of our company, we would form these parts that would take like 20 minutes to form. And if you wanted to simulate it using physics-based models, on a 27 core machine, it would take us a week. So we're like, okay,

[00:16:32] Audrow Nash: Not feasible. Yeah.

[00:16:33] Edward Mehr: Not feasible. I would rather just run the part,

[00:16:36] Audrow Nash: Yeah.

[00:16:36] Edward Mehr: Let's see the results in real life, as opposed to simulate. Simulate it with nature.

[00:16:40] Audrow Nash: It's like you could heat people's homes with computers that are running at full bore the whole time.

[00:16:45] Edward Mehr: So we changed the approach. We're like, okay, can we build these models empirically? And the other challenge with physics-based models is that even if they're fast, they're not accurate.

To your point, you might not be modeling everything that's in there. For example, there might be some miscalibration in a robot that you're not modeling. There might be some, friction forces that causes some issues. There might be some adhesion between the end effector and the material that you're not modeling, or you don't, you're

[00:17:11] Audrow Nash: The model just isn't good enough to do the task. One thing that really struck me in my electrical engineering education is that voltage equals resistance times current, V equals IR, but not at high frequencies. And then the model breaks down. So the model is correct, or at least pretty close to correct, in some environment.

And so if you're doing complex things on a part, I would imagine the physics-based models just don't capture all the phenomena very well.

[00:17:45] Edward Mehr: Yeah, you're exactly right. And that's why there's a need to build these empirical models. They're faster at inference, right? So you can get the result really fast once you build the model. And then, because they're based on physical data, you're almost taking into account everything that's happening in the real world.

The key though is, as you're building these robotic systems, you need to build a system that exposes the data you need to build these models. That's behind some of the architectural decisions we had to make: how can I capture the data that I need at every millisecond of this process?

So I can model it. And that affects your product development, what kind of robotics you want to use, and what kind of control loops you want to use.

[00:18:22] Audrow Nash: Okay. So I just, that is very interesting. I want to get more into that, but just so I understand at a high level, as I'm understanding things now, you have two parts, really. one is the iterative working towards a task using your existing model. And the other one is the, I've captured data. Let me learn a better model part. So you have those two things. Okay. so

[00:18:51] Capturing data + modeling

[00:18:51] Audrow Nash: How did you design your system so that you can efficiently capture the data you need to learn a better model? Because that's very interesting.

[00:19:02] Edward Mehr: Yeah, so the choice of your architecture, your hardware architecture, greatly affects your ability to get data. So you can imagine, for example, a stamping operation, which is a traditional way of forming sheet metal parts, where you have a giant press, and you have dies, a male and a female die, and you put a sheet in between and you stamp it.

Not a whole lot of opportunities to capture data, okay, we're going to

[00:19:28] Audrow Nash: it's like start and end

[00:19:29] Edward Mehr: It's a start and an end. You can capture some kind of input, whether it's a hydraulic press or a servo press, the current or hydraulic pressure or whatever, throughout the time. But you cannot really know what is happening to your part in a very granular way, because it's inside a very destructive stamping press. With our process, because we're incrementally forming it, we can capture a lot of little details: what are the forces, every once in a while scan it, what is the response on both robots, how much current we're putting through the motors, how much deflection we're seeing at each of the robots,

[00:20:08] Audrow Nash: Wow, so you're doing this on the full system. Okay, that's really interesting. What I had assumed is that you just have the sheet, and then you have a model that says, we applied this point force at these locations, and then we scanned it, and this was the result, and that would inform the model. But going down to the current on the motors on the arm is really awesome, because I guess they're non-linear in their response, most likely, and so modeling that actually probably makes a lot of sense. Sounds hard though.

[00:20:43] Edward Mehr: Yeah. So the robot itself, and you're in robotics, so you fully appreciate this: depending on the pose, you might get different deflection profiles, right? If a robot's completely extended out, it might deflect much more easily, with less force. So the K constant for it is much, much more flexible than when it is in a very stiff position.

So we actually need to take into account both the system itself, how it's responding to the process, and also the material deforming: what forces are required to deform the sheet, what response you're going to get, what kind of springback you're going to get. So you want to capture that at every step of the process.

And the more granular the data you can capture, the easier it is for you to make the model. But more importantly, the more data you have to build a model, because a lot of these processes require a lot of data. So if you just had a start-and-end model, you'd have to make a lot of parts.

But if I could get deflection at every point over a couple of hours, then

[00:21:46] Audrow Nash: You get a lot more data. Yeah

[00:21:48] Edward Mehr: I have a lot more data.

[00:21:49] Audrow Nash: It's a lot easier to converge to a good model with that I would imagine.

[00:21:55] Edward Mehr: Yes.

[00:21:55] Audrow Nash: One question, just for my understanding, because I don't know too much about metal forming. When you're metal forming, you constrain all the edges of the sheet of metal that you're going to be forming, right?

[00:22:11] Edward Mehr: Yes, you could. we do

[00:22:11] Audrow Nash: it's sitting in a picture frame almost, right?

[00:22:14] Edward Mehr: Yeah, you could do that. That's what we do today. And that's another parameter. You could modulate the amount of pressure you're putting on the boundary.

[00:22:22] Audrow Nash: So then when you do that, you're pushing into it and you're pinching part of the metal out. What I assume is that the metal gets a bit thinner where it's stretched out, and I would imagine that's complex. So in a finished part, how significantly is the thickness of the material changed when you push it out? Quite a bit? Because that would be another dimension in the modeling, I would imagine, the thickness of the metal part now.

[00:22:59] Edward Mehr: Yes, so it depends on what path you take to deform the material. So if I start from a flat sheet and just create a 60 degree wall angle right on my first layer as I'm forming the part, then the calculation is relatively easy. It's basically a volume preservation law, right?

Okay, I start from an X millimeter thickness sheet. I deform it to a 60 degree wall angle, so it's going to reduce to cosine of 60 degrees times the original thickness, right? So simple trigonometry will give you the answer. But if, for example, I did first a 45 degree wall angle and then pushed the 45 degrees into a 60 degree wall angle,

[00:23:47] Audrow Nash: Now it

[00:23:47] Edward Mehr: my thickness profile is completely different, right?

It's cosine of 45 times cosine of 60, which is a smaller number. So actually you get less thinning, right? So if you add thickness preservation, or set thickness targets for your final geometry, which you can, that also affects what order you want to form the part in, right?

You might want to form intermediate geometries and then push those intermediate geometries to the final geometry.
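Taking the per-stage cosine factors Edward describes at face value, the thickness arithmetic is a one-liner. The starting thickness here is hypothetical:

```python
import math

t0 = 2.0  # starting sheet thickness in mm (made-up value)

# Single pass straight to a 60-degree wall: thickness scales by cos(60).
single_pass = t0 * math.cos(math.radians(60))

# Two stages, 45 degrees then on to 60: each stage contributes a cosine factor.
two_stage = t0 * math.cos(math.radians(45)) * math.cos(math.radians(60))

print(round(single_pass, 3))  # 1.0
print(round(two_stage, 3))    # 0.707
```

Whether staged forming helps or hurts thinning in practice depends on the real strain path; this is just the back-of-the-envelope trigonometry from the conversation.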

[00:24:19] Audrow Nash: Gotcha. Very interesting. Okay, going back to the data that you're generating. If you are observing the robot state as well, I'm imagining your state space is blowing up, and this becomes a very complex equation, because you use two seven-degree-of-freedom robotic arms.

Is it right?

[00:24:46] Edward Mehr: Correct.

[00:24:47] Audrow Nash: Okay. And you're observing the, current at each joint in the seven degree of freedom robot arms.

[00:24:56] Edward Mehr: Yeah, you have access to the current at each joint, yes.

[00:25:01] Audrow Nash: I'm imagining it's a very challenging optimization, because it's very high dimensional. And maybe you use something like neural networks that are very expressive to fit this, but then you might get a big risk of overfitting, and it might be tough to generalize.

just, tell me about some of those challenges of working in a high dimensional space and how you guys deal with this.

[00:25:25] Edward Mehr: Yeah, so when you're in high dimensional space, you have two challenges. basically you have two routes of making models that are accurate. One is generate a lot of data,

[00:25:34] Audrow Nash: Which you're doing.

[00:25:36] Edward Mehr: which is something we are doing as well. Or, in the short term, in the earlier days when you don't have enough data, you can also try to break down the problem, basically do physics-informed modeling.

[00:25:48] Audrow Nash: Yeah,

[00:25:50] Edward Mehr: So you can start

[00:25:50] Audrow Nash: you reduce your space for

[00:25:52] Edward Mehr: You reduce your space.

[00:25:54] Audrow Nash: by leveraging things like physics, simulation. Okay, I see.

[00:25:58] Edward Mehr: For example, you can say, okay, if my robot is deflecting, I can either create a deflection model based on the currents and the joint angles, right? and that would be my space. Or I can just say most of the deflection happens in each joint and, then model each joint differently and then calculate the total deflection based on the configuration of the joints.

So now you're taking advantage of kinematics and physics to say, okay, I can simplify. It's almost feature engineering, as they call it in the data world. I am informing it. Maybe there's a better way: I can solve half of it for you, as long as you tell me how each joint deflects.

[00:26:41] Audrow Nash: Yeah,

[00:26:42] Edward Mehr: and then, so those are some of the things we did early on.
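The per-joint simplification Edward describes (fit a small deflection model per joint, then let kinematics compose them) might look something like this toy planar-arm sketch. The link lengths, stiffnesses, and torques are made up, and a real arm would use the full Jacobian rather than this planar shortcut:

```python
import math

LINK_LENGTHS = [1.2, 1.0]   # meters, toy planar 2-link arm
STIFFNESS = [8.0e5, 5.0e5]  # N*m per radian per joint, invented values

def forward_points(joint_angles):
    """Positions of each joint and the tip for a planar serial arm."""
    pts = [(0.0, 0.0)]
    x = y = theta = 0.0
    for length, q in zip(LINK_LENGTHS, joint_angles):
        theta += q
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        pts.append((x, y))
    return pts

def tip_sag(joint_angles, joint_torques):
    """Vertical tip deflection: each joint winds up by torque/stiffness,
    and a small rotation about joint i drops the tip by roughly
    (angle error) * (horizontal distance from joint i to the tip)."""
    pts = forward_points(joint_angles)
    tip = pts[-1]
    sag = 0.0
    for i, (tau, k) in enumerate(zip(joint_torques, STIFFNESS)):
        dtheta = tau / k          # linear spring model per joint
        arm = tip[0] - pts[i][0]  # horizontal moment arm to the tip
        sag += dtheta * arm
    return sag

# Same torques, two poses: the extended arm sags more than the tucked one.
extended = tip_sag([0.0, 0.0], [300.0, 150.0])
tucked = tip_sag([0.0, math.pi / 2], [300.0, 150.0])
print(extended > tucked)  # True
```

It also shows the pose dependence he mentions: the same torques sag the tip more when the arm is stretched out than when it's tucked in.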

[00:26:44] Audrow Nash: So you're compressing your dimensions, basically, so you have fewer dimensions, and it's more tractable with whatever data you have. How do you go from zero data to some data? Do you start with physics-based models, and you just poke it a few times and see what happens, and then start to bootstrap a model? Or how is that?

[00:27:03] Edward Mehr: That's exactly what we did, right? Do back of the envelope, not even physics-based, back of the envelope calculations initially. I remember our initial deflection compensation model wasn't even in joint space. It was in Cartesian space, which has nothing to do with the robot, but we're like, okay, maybe

[00:27:20] Audrow Nash: That's how I would have approached it too.

[00:27:22] Edward Mehr: deflections are different in the z direction as opposed to x and y, so I can do a Cartesian-based deflection model. That took us a long way, until we started seeing tears in very complex parts, and then we're like, okay, this model, this is pose-dependent, so we need to increase the complexity of the model.

But at that point, you have enough data to start looking into what that model should look like. Yeah, the key,

[00:27:48] Audrow Nash: Oh, go ahead.

[00:27:50] Edward Mehr: I think the key, basically, I was listening to this podcast with Andrej Karpathy, or no, actually it had to be with Ilya, about why ChatGPT exists but it's much harder to do foundational models for robots. The key is that you need to capture data, and in order to capture data, you need to operate a huge fleet of robots. And this means that you need to figure out a way to find an application where you can create value based on heuristic, simple models to begin with, but an application that, after the data, has the potential to significantly improve.

But you need to find that application. That's the tough part: find an application where, even today with heuristic models, you can provide benefits.

[00:28:39] Audrow Nash: I think Electric Sheep, who was the first interview on this podcast, is doing a great job of this. They are doing lawn mowing. With very small mowers. And I think that's a very strong application for this low risk, simple, just like they're doing localization and path planning and things. And they're learning a lot of that.

but I think there are probably a lot of other spaces that could do a similar approach. So what I would suspect is that realizing, okay, data, is really important. The more data you have, the more expressive your model can be and the better it will perform.

So I imagine, realizing that, you decided to grab all the data you could, and then you've slowly been increasing the number of dimensions that you're actually using in your optimization. Is that true? And then also, where are we now in terms of how much is in the models? How expressive are the models? How many dimensions are you including in the models? This kind of thing.

[00:29:45] Edward Mehr: So yes, you're right. In the early days, we basically built a lot of stuff based on heuristics or back of the envelope calculations, or

[00:29:53] Audrow Nash: Heuristics still.

[00:29:55] Edward Mehr: Yeah. And, and then we start forming parts, like I said, early days, we had 25 trials before we get a part, right. Now we're down to five or six.

And the initial modeling techniques we used were very simple. You could even start from regression, because once you break it down into a simple thing, you can think of, let's say, a joint deflection as a simple one-degree polynomial, right? Because it's mostly

[00:30:26] Audrow Nash: simple models for these things.

[00:30:28] Edward Mehr: right?

So we started from there. And to give you a little bit of context, last year there was actually a paper that we published with Northwestern around predicting the forces in our process using graph-based neural networks. So that's the state of the art from last year.

and it, starts allowing you to do cool things. for example, in that, paper, we use transfer learning.

[00:30:59] Audrow Nash: what? Transfer learning?

[00:31:00] Edward Mehr: Transfer learning. And the way it works is that, instead of trying to make a lot of assumptions about our system, as you can imagine, and you mentioned yourself, there are a lot of parameters.

What if your calibration is even wrong to begin with, from system to system? How does that affect the whole

[00:31:14] Audrow Nash: Or you could lump them into additional dimensions that you're doing,

[00:31:18] Edward Mehr: Exactly, you could lump them in an additional dimension, but then you need more data. So what we ended up doing to answer your question, in the short term, what can you do to actually improve the performance of the models?

We started using transfer learning. We said, okay, with every system, first form 10 layers, and then retrain the model that you have on just those 10 layers, on that machine, with those process parameters. And then predict what's going to happen for the rest of the part, and optimize for it based on that.

And then we got much better results, because now you're learning the extrinsics very quickly, the different specific features that each machine or each configuration might have. You learn those in the first few layers, apply it using transfer learning, and then you get more accurate.

So that was the state of the art last year. Now this year, we are going back again at that bigger prize. Can I go all the way from this is input geometry, what is the final design if I go through a certain set of parameters? So that's what we're working on. I think we have enough data now to capture that.

So stay tuned, I'll let you know how that model goes. And we're pretty open. We publish most of the stuff we do. Like I said, that paper is going to be presented at NAMRC.
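Their actual models are graph neural networks, but the calibrate-then-adapt idea can be shown with a deliberately tiny linear stand-in: fit on lots of "fleet" data, then refit only the machine-specific offset on ten local samples, analogous to forming a few calibration layers. All data and names here are synthetic and invented:

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

random.seed(0)

# "Fleet" model: lots of data from many machines (true relation y = 2x + 1).
fleet_x = [random.uniform(0, 10) for _ in range(500)]
fleet_y = [2.0 * x + 1.0 + random.gauss(0, 0.1) for x in fleet_x]
a0, b0 = fit_line(fleet_x, fleet_y)

# A new machine has a slightly different "extrinsic" (offset 1.6, not 1.0).
# Like forming 10 calibration layers: keep the fleet slope, refit only the
# offset on a handful of local samples.
calib_x = [random.uniform(0, 10) for _ in range(10)]
calib_y = [2.0 * x + 1.6 + random.gauss(0, 0.1) for x in calib_x]
b_local = sum(y - a0 * x for x, y in zip(calib_x, calib_y)) / len(calib_x)

# Prediction error on the new machine at x = 5, before and after adapting.
fleet_err = abs((a0 * 5.0 + b0) - (2.0 * 5.0 + 1.6))
adapted_err = abs((a0 * 5.0 + b_local) - (2.0 * 5.0 + 1.6))
print(adapted_err < fleet_err)  # True
```

Freezing what generalizes (the slope) and refitting what's machine-specific (the offset) is the same split transfer learning makes between shared and machine-specific structure.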

[00:32:30] Audrow Nash: Hell yeah.

[00:32:31] Edward Mehr: But that's, the, state of the art in terms of, the modeling today.

We're still using neural networks, but we're trying to basically go with the bigger scope of the, model and not do small feature engineering or, do things that are like physics informed.

[00:32:48] Audrow Nash: Very cool. Yeah. You're removing a lot of the heuristics and seeing if you can learn them and if that will be better. And then transfer learning is very clever, I think, because you can learn a good bit and you can make it general, but then you can learn the very fine details on each specific machine, or with a new material, or that kind of thing.

[00:33:11] Edward Mehr: It's funny, craftsmen do that too. I used to do sheet shaping myself. I used to work at a shop in Pomona, we would do panels for hot rods. And you form it under a power hammer. But a lot of times you get a material, you don't know what it is.

they say, okay, this is a mild steel, maybe. And then you hammer it a little bit and you get a feel of it. You were like, okay, this is how this material operates. And then you, estimate what you need to do and how long it's going to take you. and I think it's the same thing with transfer learning, get the robot to form a slight little part or like few layers of the first part, and then capture that and use that data to figure out and form the rest of the part.

And that's something that humans do. I had a, blacksmithing teacher, you learn all these little tricks where oh, if you go to a junkyard and you want to buy good metal, we used to carry these hand pocket grinders.

[00:34:04] Audrow Nash: Oh, wow.

[00:34:06] Edward Mehr: And then you grind it, and based on the amount of sparks it makes, you can tell how much carbon it has.

[00:34:11] Audrow Nash: Oh,

[00:34:12] Edward Mehr: And the more carbon it has, usually, yeah, the stronger the material. But these are all concepts where you try the material, you see the feedback and response of the material, and you know exactly what you're dealing with. So this is what craftsmen used to do, mental patterns they created over a long time.

I think now, with robotic and AI systems, we can actually replicate that, and maybe it's a faster way to scale these processes.

[00:34:33] Audrow Nash: Yeah, it's super cool. Hell yeah. Tell me a little bit. So you mentioned you're publishing research papers, which is very cool. I'm glad you're contributing to the collective knowledge of the whole robotics community.

[00:34:45] About Machina Labs

[00:34:45] Audrow Nash: Tell me a bit about your team, and where are you?

I know you're in LA, but tell me about your company in general. How many people are you? How large is your space? What kind of things are you working on? Is it still very early, researchy days, or do you have customers? All these details.

[00:35:09] Edward Mehr: Yeah, for sure. So we started in 2019, and we started in a facility whose landlord is also a defense contract manufacturer. We got the back portion of this facility, actually paying a little bit in shares. We paid the landlord rent in shares, and they've been very supportive. And since then we've been able to expand in the same facility.

We have 35,000 square feet in this facility. We've deployed 11 robotic cells downstairs, so 22 robots in total. And then we just acquired another space. It's 66,000 square feet, two miles away. So now we have around 100 to 110,000 square feet in total. We are slowly moving into that space as well.

But the goal of that space is not so much to house our robotic cells; it's to increase the rate of manufacturing.

[00:36:01] Audrow Nash: More data and everything else.

[00:36:02] Edward Mehr: Yes. So we are hoping to get to a point where we can manufacture a manufacturing cell in a month, so we can get 12 manufacturing cells a year,

[00:36:12] Audrow Nash: It's so meta. That's great.

[00:36:14] Edward Mehr: Yeah. So the company's right now close to 70 people. We're planning to get to around 130, 140 people by the end of next year.

[00:36:23] Audrow Nash: Wow. Damn.

[00:36:24] Edward Mehr: So we're growing rapidly. The cost, the product is

[00:36:28] Audrow Nash: by the end of next year? So in the next, what is it? I don't know, 18 months or something?

[00:36:34] Edward Mehr: A year and something, yes. Mid next year, I think. Yes.

And yeah, so the company has been almost doubling the staff every year. And it's already in customers' hands. The early days of our company were about making parts. One thing I learned from my previous job was: do not lock yourself in the lab, because by the time you're done building it, the product is already obsolete.

So even at the start, even when we didn't have robots, we were lucky enough to get customers and say, okay, what are some of the complex challenges you have on sheet metal? Send them to us, we're going to work on those. I think NASA and the Air Force were among our early customers who provided us with the challenges they had.

So since day one, we had customers paying for parts. Even in our first year of operation, I think we made $300,000 with just me and my co-founder. So it was a good amount of interaction very early on, and we've been able to double, or more than double, the revenue every year since then.

[00:37:42] Audrow Nash: Hell yeah.

[00:37:44] Edward Mehr: The system right now, we do manufacture parts in our facility, but that's usually with the goal of them eventually deploying the system in their, in their,

[00:37:54] Audrow Nash: Do you want to sell work cells effectively? Oh,

[00:37:57] Edward Mehr: Yeah, we've already deployed our cells into customer facilities. So for example, one of them is at Warner Robins Air Force Base, working on building components for sustainment and repair of aircraft. They have an interesting challenge in the military: they have hundreds of weapon systems and defense systems that are, some of them, 70, 80 years old.

Think about the B-52. These are sheet metal airplanes, right? And every time a component, let's say a landing gear door, gets damaged, they either have it in inventory or, in a lot of cases, they don't, because it's a 70-year-old plane. So they need to have the dies for it, the tooling for it, and the tooling also sometimes doesn't exist.

So for example, for a landing gear door of a certain aircraft, we're looking at a four-year lead time and millions of dollars before you can get your part. So the aircraft needs to be down on the ground for four years. It hugely affects fleet readiness for our military. So we're working with them on turning some of those four-year lead times into days.

[00:39:05] Audrow Nash: Hell yeah.

[00:39:06] Edward Mehr: Yeah, so that's one of our biggest customers. It's deployed there, and we're working with some of the other folks in automotive and aerospace as well.

[00:39:15] Machina Labs' funding

[00:39:15] Audrow Nash: Okay. 70 people, two places in LA, 22 robots. How are you guys funded? You mentioned you're already getting revenue, but I assume you've taken some investment rounds. How have you guys been funded up to this point, other than the revenue you're making?

[00:39:34] Edward Mehr: Yeah, so we're a venture-backed company, which has its own challenges, good things and bad things. To this date, we've raised about $45 million in venture funding. So we're very,

[00:39:49] Audrow Nash: What round have you

[00:39:50] Edward Mehr: We just raised our Series B last year

[00:39:53] Audrow Nash: Oh, congrats. Oh yeah,

[00:39:54] Edward Mehr: last year. Yeah.

[00:39:56] Audrow Nash: That's fantastic, because I hear that it gets much harder. Like, B and C are the two hardest funding rounds for robotics, I think for hardware in general.

[00:40:05] Edward Mehr: yes.

I think that especially last year was pretty tough, because once you get to B and C, you need to have commercial traction. You need to show you have good revenue, and good margins that are suitable for a venture-funded business. That's the one challenge with a venture-funded business: what is their alternative?

The alternative is software, or ChatGPT, where the margins are, I don't know, like 80, 90 percent, whatever it is today,

[00:40:33] Audrow Nash: I don't know.

[00:40:33] Edward Mehr: So we need to be able to compete with that, and it's much tougher for robotics companies with CapEx to get there. But I think we have been making a lot of interesting, and hopefully wise, choices that allow our margins to stay up so we can compete with those types of businesses. But I guess it's tougher, as you pointed out, in B and C, because economics is the main driver of raising a B or C round.

[00:41:00] Audrow Nash: There are a lot of good, I don't know what the word is, like headwinds or forces or something. Like, we're reshoring a lot of manufacturing, is what it seems. And so I would think our government is very supportive of what you guys are doing, which I think is wonderful. And then, I don't know, there's probably just a big interest. Seeing what feels like the zeitgeist on X, it's like manufacturing is becoming cool, which wasn't the case years ago. And I think it's a wonderful thing for our country in general.

[00:41:36] Edward Mehr: I agree. I agree. I think this year, especially, the leaf is turning. I think last year was still a little bit tougher; everybody was in holding mode. I think now there's a lot of excitement about manufacturing, but I think, at the same time, that puts the,

[00:41:52] Audrow Nash: High expectations

[00:41:53] Edward Mehr: Expectations on it, which, in a good way, is on robotics and hardware and manufacturing companies to really go about this in a smart way.

Yeah.

[00:42:02] Audrow Nash: Especially with cash being relatively so expensive now. Because then, yeah, you have to be pragmatic or you go out of business, that kind of thing.

[00:42:11] Edward Mehr: Yes. So, take this excitement and make it sustainable, right? Of all these companies that come in, obviously a lot of them will be successful, some won't. But I think it's new ground. We have to accept that our business models are not going to be the same as SaaS business models.

The way we're going to make margins is going to be different, but we need to pave those paths. It's interesting that even investors don't know exactly what the model will look like. But if the margins are not 60, 70 percent at the end of the day, it's tougher to compete with some of the alternatives they have. So we need to think about it extra hard and be very adaptable and agile here.

[00:42:50] Audrow Nash: Yeah, so what was your valuation?

[00:42:53] Edward Mehr: Yeah,

[00:42:54] Audrow Nash: I don't know if that's public, but

[00:42:56] Edward Mehr: Yeah, it's not public, but we raised with relatively good,

[00:43:03] Audrow Nash: Didn't trade that much equity for everything.

[00:43:05] Edward Mehr: Yeah, so every round, we have at least doubled, and sometimes tripled, our valuation from the beginning. I won't be able to give you the exact amount,

[00:43:18] Audrow Nash: Yeah, ballpark would be

[00:43:19] Edward Mehr: It has been increased.

Again, we

[00:43:20] Audrow Nash: "It has been increased." It was a good round, basically. Good, yeah, because a lot of companies have been seeing down rounds, I think, which, I don't know, cash is becoming tighter. I just had an interview

with a person in venture capital, and they were saying that in 2021, 2022, it was just like everyone was throwing money at everything.

And so people raised huge up rounds, and now the expectations are coming back down to earth. And that has created some friction for a lot of companies and a lot of investors.

[00:44:06] Edward Mehr: Yeah, no, you're absolutely right. I think we were lucky that every round is more than double the

[00:44:12] Audrow Nash: Yep. So you guys are doing great. Hell yeah.

[00:44:15] Edward Mehr: I think part of that is, as tempting as it is to get a very high valuation in the early days, it's very important to almost pace your progress and set milestones in a way where you don't set yourself up for a down round. I think a lot of folks come in and tell very amazing stories, which is great: I'll get the biggest valuation I can. But if the valuation is high, the expectation is high too. So if you don't meet the milestones you set up for yourself for the next round, the valuation drops, and that's not a good thing for your employees, who are stakeholders in the company.

It's obviously not a good thing for your investors either, but more importantly for the employees, who are leaving jobs at Microsoft and Google to join your company. I always want to show them that this is a better alternative than staying in those places. If you have a down round, that's not a good thing.

[00:45:11] Being in LA

[00:45:11] Audrow Nash: So another thing that's interesting to me: you guys are in LA. Tell me about LA. And actually, how close to the major part of LA are you? Far out east, or where are you in the LA region?

[00:45:32] Edward Mehr: Yeah, if you're familiar with it, I think we're in the Valley.

[00:45:35] Audrow Nash: Oh, hell yeah. So

[00:45:36] Edward Mehr: It is Los Angeles, the city. And you're probably half an hour away from downtown, if there's no traffic.

[00:45:44] Audrow Nash: If there's no traffic. Yeah.

[00:45:46] Edward Mehr: Yes. And we're like maybe 45 minutes away from SpaceX, for those people who anchor on it; there are a lot of hardware companies around SpaceX in South Bay.

So we are on the other side of LA. South Bay has traditionally been an aerospace manufacturing hub. We are, as a matter of fact, in the other manufacturing hub, Chatsworth, which is the north end of the LA area, in the Valley. There are a lot of aerospace companies and machine shops here as well.

So it's a big manufacturing hub. Basically, it's the other manufacturing hub,

[00:46:18] Audrow Nash: What's the other one from that?

[00:46:21] Edward Mehr: South Bay, like

[00:46:22] Audrow Nash: Oh, other one in LA. I see.

[00:46:23] Edward Mehr: Yeah, El Segundo, Torrance, Hawthorne, that region, which is like 45 minutes away from us, yeah.

[00:46:31] Audrow Nash: And so do you think it's a good thing for you guys to be relatively close to SpaceX and a lot of military bases? Is it a really good spot?

[00:46:43] Edward Mehr: I think so. I think we have a good ability to attract hardware talent here. Software is a little bit tougher, right? Because

[00:46:54] Audrow Nash: Oh, they're in SF or in the

[00:46:56] Edward Mehr: Area, or maybe East Coast a little bit, Boston,

[00:47:01] Audrow Nash: Oh,

[00:47:02] Edward Mehr: and New York. So it's a little bit tougher on the software side. But overall, I think the hardware talent is much more accessible here.

[00:47:11] Audrow Nash: Gotcha. Very cool. Let's see. So pivoting a little bit,

[00:47:16] Why make a company that is 10x better than alternatives

[00:47:16] Audrow Nash: why haven't we seen many big and successful robotics companies yet? So

[00:47:25] Edward Mehr: Yeah, I think nobody has really figured out what the right business model is for robotics companies. I think for a robotics company to be successful, for any company to be successful, especially a startup, you need to provide almost a 10x improvement. The reason for the 10x improvement is that, as a startup, the odds are mostly against your success. You might run out of cash, your talent might not come in, you might make a few mistakes that cause your company to die. So you're really prone to failure. So if the opportunity you're going after is not 10x better, where the outcome is going to be at least ten times better than the current outcome, then you have very limited room for mistakes. Because if you're 10x, the

[00:48:20] Audrow Nash: So you view it as margin in a sense,

[00:48:23] Edward Mehr: in

[00:48:24] Audrow Nash: the amount of times better. It's yeah. So if you, only do two times better, you have a very small margin

[00:48:29] Edward Mehr: Yeah, you make a few mistakes and now you're as good as the traditional technology, right? And then you're like, okay, is it really better? Do I want to pay more for robotics in this space?

And I think that's a problem a lot of companies had. There are a lot of incremental improvements, and I don't think you can build a company on incremental improvement, at least in this space. In finance, maybe you can, but in this space, you can't,

[00:48:51] Audrow Nash: And this space being robotics, you're saying, or technology?

[00:48:54] Edward Mehr: Robotics, slash hardware, I would say,

[00:48:56] Audrow Nash: Slash hardware. So

[00:48:58] Edward Mehr: You need to have that room for failures, and say, okay, despite all the failures, maybe I end up being 2x better. I started at 10x and ended up being 2x better, but that's still 2x better, right? So that has been, I think, the story with a lot of robotics

[00:49:16] Audrow Nash: a really strong concept initially that creates really great improvements.

[00:49:21] Edward Mehr: You almost have to be very ambitious initially and have a path to make a very ambitious company work.

And then while you're shooting for the stars, maybe you land on the moon, right? But I think that definitely applies in this space. And that's why we haven't seen a lot of successful companies.

[00:49:39] Audrow Nash: Interesting. What are some examples? Naming specific ones that haven't innovated 10 times might possibly not be good, but can you give me some cartoons of companies that have not been ambitious enough? And then I'd love to hear ones from your perspective that are

really pushing it, 10 times or more, to be better.

[00:50:08] Edward Mehr: It's a good question. The example I usually give: to some extent, I come from the 3D printing world, and I think that could potentially be one, if you consider 3D printing machines as robotic machines. They're automated

[00:50:22] Audrow Nash: Yeah, totally. They sense, they think, they act.

[00:50:24] Edward Mehr: I think that is one area where we initially thought it was going to be revolutionary, but I think the reach of the parts it can do is not as big as we thought.

That was the point. Even where it was providing advantages in some of the applications, it was not 10 times better. In some applications it was: if you're building a rocket engine, a heat exchanger, additive is really good. But what if you're making a bracket for some vehicle?

Yeah, you can argue your way one way or another, but it's not 10 times better.

So you invest a lot of money, there's a lot of room for failure, and at the end of the day, it's not 10 times better, so it doesn't drive adoption. I think that was one example, and that's what I learned, actually, when I was in 3D printing. Look at the market size of 3D printing right now: it's around $16 billion, everything, plastics, metal, everything,

[00:51:17] Audrow Nash: Yeah.

[00:51:19] Edward Mehr: and there's probably close to like somewhere from 6 to 10 billion dollars in investment that went into that technology.

[00:51:26] Audrow Nash: Not that much of a return.

[00:51:27] Edward Mehr: So not much of a return there, right?

[00:51:29] Audrow Nash: And there's a whole bunch of failures and everything too.

[00:51:31] Edward Mehr: There's a lot of failures, yep. Now, when we started this technology, for example, one of the things I really wanted was to go after a market segment that has significant room. So that's why we went after sheet forming as our first foray into manufacturing in this company.

And that's a $250 billion market, so lots of room for improvement. But then I also started thinking about, okay, what is the portion I'm replacing? I'm replacing the dies, and I want to go after a sector where the dies are very expensive, right? Or the lead times are very high.

[00:52:08] Audrow Nash: Military was a great thing for that then.

[00:52:11] Edward Mehr: Yes. So I think, yeah, 3D printing was one of those examples where, overall as an industry, the benefits were not as great as we thought. There were some applications, though. If you look within 3D printing, look at dental. They found an opportunity in an application where it could be 10 times better: dentures and aligners.

3D printing can customize these things in a way that's 10 times better than the traditional alternative, where you have to put a little mold in someone's mouth. Now you can scan it and you're going to get it, right? The mold is expensive, you have to manufacture it; now with 3D printing, you can get aligners or dentures really easily.

And it also looks much more realistic with 3D printing; you can do all kinds of colors and make it really look realistic. So even within 3D printing, the areas where the opportunity for improvement was 10 times, they were successful. But any area where there was just marginal improvement is dying out.

[00:53:14] Audrow Nash: So how did you find your area? You mentioned you wanted a big market, and then you looked for where there was a lot of pain already, in a sense. And from that you find your problem, and you look in the solution space for what could possibly solve it, and then you arrive at one that might be feasible as a ten-times solution? Or how do you think of it?

[00:53:39] Edward Mehr: For us it was a little bit less linear than you'd think, because I think there are three components. There needs to be a big market, there needs to be a solution, and there need to be enablers that you can provide to that solution, which give you that ten-times improvement, right? So for me, I was in the additive space.

We saw the challenges with additive. So for a couple of years, actually, I started thinking about, okay, if you want autonomous and agile manufacturing that is not product-specific, not part-specific, not material-specific, where would that area be? I looked at post-processing in manufacturing.

I looked at CNC machining. I looked at a lot of different processes, and sheet metal is the one we landed on, due to a combination of three things. One, it was a huge market. Two, there was a good amount of legacy work being done, so the technology was almost ready. And three, my background and my co-founder's background could elevate it to the next stage, which was using robotics and artificial intelligence to bring the cost of making this manufacturing cell down, but also make it accurate, which was the only shortcoming sheet forming had. You couldn't make accurate parts with the incremental forming that was done in academia. So for us it was a little bit of an exploratory process. I don't think there's a recipe that I can,

[00:55:03] Audrow Nash: Yeah. It's not simple.

[00:55:03] Edward Mehr: Advertise.

You just have to iterate through a bunch of processes and see where your skill set can provide an enabler where there's still a huge market. And the jury's out. We'll see. We think it's a big market we're going after, but we'll see how we do.

[00:55:19] Audrow Nash: You've made it this far, which, not to say that success is certain by any means, but it's a good indicator, I think. Especially because your valuation has been good, and you have, I feel, a lot of wind at your back already, with a lot of manufacturing coming to the US. So yeah, the jury's out, but it's looking good.

[00:55:42] Edward Mehr: Looking good so far, yeah, but it is tough. I think that's probably the toughest portion. Actually, I have an all-hands coming up where we're going to talk a little bit about Machina's master plan. I'm planning to actually put that on Twitter as well, or on X, so people can see.

But yeah, in the early days, you need to be ambitious, but not so overly ambitious in terms of tech that the technology is not feasible. You need to be as ambitious as you think the technology can take you, so you have enough room for failure,

[00:56:17] Audrow Nash: Yeah. A way that I've thought about it is: I've seen many companies that have ambitious plans, but their plans rely on four, five, however many technologies that are all at the cutting edge of being ready at the same time. And that kind of thing doesn't lead to terribly good things. There's too much risk in it, in my belief.

Whereas the companies I see that are a lot more successful almost always pick old technology, except for one area where they're going to innovate, and that's the one where they're going to really push hard. And they usually have some sort of great team that's all leaders in this area.

And they're going to push that one technology to the cusp of whatever it is that they are trying to

[00:57:16] Edward Mehr: Yep. Yep. It's two sides of the same thing: you need to have a big opportunity, which gives you a lot of room for failure. But then on the risk side, if you have five or six or seven different technology risks, you're going to exhaust your ability to fail, like you're going to exhaust your failure quota,

[00:57:35] Audrow Nash: Oh yeah.

[00:57:36] Edward Mehr: so you need to just make sure, yes, like physically you're, you, you can get there pretty fast. And this is actually feasible technology wise for sure. that's a big portion of it. Like I said, in our case it was like, robotics, can they make the price of the technology the system cheaper using robotics?

And that's why we use off the shelf robotics, as opposed to building our own gantry system or our own custom system, right?

[00:57:58] Audrow Nash: And also it's less to work on. It's one of those things: you're using old, established technology. You're not building an arm yourself.

[00:58:06] Edward Mehr: I don't need to build a tool changer for it. There are already tool changers for it. I don't need to build a calibration system for it.

There's already a calibration system out there. If I use robotics off the shelf, all I need to focus on is my intelligence, which is enabled by this model building, right? So now I have one big technical risk, but it's a good risk. It's a risk that, with all the developments in AI, is constantly being improved and de-risked.

So you're absolutely right. You don't want to build a company on 10 different risks and hope that all of them will align somehow.

[00:58:42] Audrow Nash: Definitely.

[00:58:43] What if you weren't venture funded?

[00:58:43] Audrow Nash: Now, not saying accepting venture capital money is bad or anything, but I would just be curious about your thoughts on this. Do you think you could do what you've done without venture capital dollars? What would the path be, or how much slower would it be?

What are your thoughts around this?

[00:59:13] Edward Mehr: Venture also ties into this conversation we just had really well. Because what does venture do? Venture goes after opportunities where, as they colloquially say, each check should be able to pay the fund back. So they're taking a huge risk, right? Especially in earlier stages. They're like, okay, this $1 million check I'm writing needs to pay this $100 million fund back.

That means there's an opportunity for that $1 million to turn into $100 million for every investment they make, because they probably have a one percent chance of success. So by default, and some founders sometimes complain about this and say, oh, why is venture investing in these crazy ideas? Because that's how their model works.

Their model relies on one success out of a hundred. But when you're relying on one success out of a hundred, that needs to have a very big upside, very,

[01:00:07] Audrow Nash: Lots of potential.

[01:00:08] Edward Mehr: Lots of potential, and a very opportunistic, very ambitious concept.
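
The fund arithmetic Edward walks through can be written out directly. The numbers below are the ones from the conversation; the all-or-nothing payoff is a simplification, not how a real fund models returns:

```python
# Back-of-envelope version of the portfolio math described above:
# a $100M fund writing $1M checks, expecting roughly 1 in 100
# investments to be the big winner.

fund_size = 100_000_000
check_size = 1_000_000
p_success = 0.01  # ~1% of investments become the winner

# Multiple the winning check must return just to pay the fund back:
required_multiple = fund_size / check_size  # 100x

# Expected value per check if the winner returns the fund
# and the rest go to zero:
ev_per_check = p_success * (check_size * required_multiple)
```

At a roughly one percent hit rate, a winner that merely returns the fund only breaks even (`ev_per_check` equals the check size), which is why venture looks for concepts with far bigger potential upside than that.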

So it ties into that conversation that we had as well. If you're building a robotics company that's incrementally improving something, venture wouldn't be interested in it, because it'd be like, okay, how much improvement, how much room is there?

So it ties very closely. I think if you do have that concept, and you've convinced yourself, very honestly, that the opportunity is big and you can build it, then venture is good dollars. Venture comes in with expertise, with a network. So it's not just about money; it's about having a network of people who can connect you to the right people.

For example, one of my investors connected me to a lot of folks at Stanford. I took a program at Stanford, an MBA-like program, which helps a lot with building the company and hiring the right people. So I do think venture is very helpful, but not all companies are venture-fundable.

If your company's not venture-fundable and you're going after venture dollars, you're in for a lot of pain. Because the expectations are going to be high. You're going to have really bad interactions with your board members.

So that's something to be avoided. A lot of times it might make sense to bootstrap your own company.

For a lot of incremental improvements, it makes sense to bootstrap. Try to see if you can get some private funders or use your own money, and slowly build it out. But if the concept requires a significant amount of capital and there's a huge opportunity, venture dollars can help a lot.

[01:01:44] Audrow Nash: Gotcha. So it's basically the upside. The potential upside is a big way to make this decision of whether venture capital might be appropriate. And also, I suppose, there's the kind of implicit thing that you need to have a good use for that money. But if it's incremental, as you say, then there's probably not that good of a use for it.

You can just do it yourself. And if it is going to be a hundred X or something, then maybe venture could be a very good fit for this.

[01:02:21] Edward Mehr: Yeah, and one point that you brought up is pretty important: raise the amount of dollars you need, because it bites you back afterwards, in terms of down rounds, right? So try to figure out how much money you need, and try to raise as much as you need.

Not necessarily more than that. Because that's another challenging thing that comes with venture: the higher the amount you raise, the higher the expectation is going to be. So make sure you're pacing your business and progress based on that.

[01:02:51] Audrow Nash: Yeah. Makes sense.

[01:02:53] Or growing quickly

[01:02:53] Audrow Nash: So you guys, you're 70 people, and you've been growing really fast, and you're expecting to grow a good amount more pretty quickly. How has that experience been, and what have some of your lessons or findings been?

[01:03:09] Edward Mehr: Yeah, I was chatting about this with some of the other team members here. I think the toughest part of building a startup, actually, still to this day, is not so much the technology, it's the people.

[01:03:22] Audrow Nash: Definitely.

[01:03:26] Edward Mehr: People are what the company is. It's not the technology.

Tomorrow, the technology can still be there, but if all those people go, we don't have much to build from. So I think the biggest lesson is: hire well. And what that means is, it needs to be people who are passionate, mission-driven, and excited about that technology. You don't want to just hire smart people. You want to hire people who have the drive, who are excited about the mission that you're after, the larger problem you're trying to solve, who can make light of themselves, and who, in the face of challenges and adverse situations, can still move on. So basically most of my advice is around this:

Choose very carefully who you bring onto your team, especially as you're scaling, because your company is the people you hire. So that's a condensed version of the advice, but there are a lot of branches there to dive into. What does it mean to be a good employee, a good team member?

[01:04:33] Audrow Nash: Do you want to speak to it just for a bit?

[01:04:36] Edward Mehr: Yeah, I think, if I want to condense it, there's probably two or three characteristics that I've found are very important. Number one is mission driven. If you are building a company that provides a 10x improvement, you want people who are excited about that 10x improvement, that vision the company is built on.

You don't wanna just bring experts in robotics or machine learning that have no connection to that. They don't care if the manufacturing world becomes better or worse, right? They're here to just provide their expertise. So mission drivenness is a big component of it, probably the most important portion. The second one is, obviously, you want people who are smart, and that doesn't necessarily mean they're an expert in that field. It's good to be an expert in that field, but more important than being an expert is being somebody who can learn their way in. One thing I learned throughout my career is that every few years you have to learn something new.

There's a new robotic framework, there's a new modeling technique, there's a new something. So yeah, you need to spend your weekend or whatever, take a course on Coursera and learn it and see what's going on. To this day, I still do.

[01:05:52] Audrow Nash: Oh, constantly. Yeah, I'm sure.

[01:05:54] Edward Mehr: Constantly. I was looking at some new changes in transformer architecture over this weekend.

I'm just like, okay, what's going on? I had to look at it and learn it. So that's constant. So smart, to me, means that they learn fast, and that's more important than being an expert. And the last piece is, I think, you want doers. A lot of experts and a lot of smart people would love to chat with you about the 10 million ways a thing cannot be done.

And you're like, great, I don't want to talk with you about 10 million ways that it cannot be done. What is the way that it can be done? And let's try it. And the problem is, in our space, there's a lot of unknown unknowns, so you can't even argue about it. You don't know. You might think you do, but until you actually have done it, you don't know.

So you want doers. You want people who have more of a bias toward, once I have an idea, I want to get it done as soon as possible to see the results, as opposed to being stuck in analysis paralysis and chatting about it for days. Those three things, if somebody has them, I think those are the core things that mean it's probably a good person for an early stage startup.

[01:07:06] Audrow Nash: That's funny. The doer one is very funny. Yeah, it's really interesting. It's like your craftsmanship approach, where you have the metal and you're gradually forming it, but you're looking and seeing what you do to try to get it to the overall shape that you want.

It's like you need to do that rather than design the perfect solution all at once, because that's very hard to come by, and it might just be wrong by the time you implement it. Yeah.

[01:07:40] Edward Mehr: I mean, on that specific concept, when we were first starting the company, I was talking to a lot of robotics experts, and we're like, oh, we're going to do this. And everybody was like, oh, the robot will deflect, it's not going to be accurate, it's not going to form a part. And these are experts in robotics, people who built segmented robots like the ones we're using.

And then it was funny. Six months later, we built the first system and we formed the part, and they're like, yeah, I guess it could be done. But the conversation before, until it was done, it was all about ways that it cannot be done. And then once it's done and it's shown, then people turn over a new leaf.

But those are the people you don't want on your team. You almost want people who are naive enough to try it, right? But they have enough expertise to back it up and build a good system, but they're not so smart that it prevents them from actually taking action.

[01:08:31] Audrow Nash: Yeah, it's funny that we say too smart to do the thing. I don't know. It's almost like if you're too smart, you see all the things, but then you can't do it. I don't think it is that, though. I would imagine it's more fear of trying and failing or something like this. I think some people just don't like uncertainty, and pushing into something and not knowing if it will work or not is terrifying in some sense.

[01:08:58] Edward Mehr: Yeah. I mean, I have a lot of smart friends, I would say very smart, and they can play that chess in their head. The reason I say they're too smart is that if they play chess with me, they will always beat me, because they see 10 moves ahead, right? And I'd almost say a person who's seeing 10 moves ahead might not be a good fit for your early stage startup, right?

Because the problem is, in chess you might see 10 moves ahead, and it's a very predictable environment, but in a startup there's a lot of unknown unknowns. So it's almost like you explore the map. It's one of those strategy games where you,

[01:09:35] Audrow Nash: Fog of War,

[01:09:36] Edward Mehr: open up, let's go in that direction.

And the map opens up a little bit, and then we can make a decision. That's what really happens in the real world. It's not really chess. So my very smart friends are like, this will happen, then this thing will happen, then that thing will happen, that's why it's not possible. And we're like, but the moment you take the first step,

there might be 10 options that open up that you couldn't think of today.

In chess, that's not the case, but in the real world, that's the case, because it's much more chaotic.

[01:10:03] Audrow Nash: Totally. Or if they were playing chess where it's like, okay, you have to pre-decide your next 10 moves or whatever, versus you get to pick what your move will be every single time, looking at the state of the board. You would have a pretty significant advantage.

Anyways, I wanted to get your thoughts on a bunch of like contemporary technology things.

[01:10:26] Humanoids: challenges + Elon Musk

[01:10:26] Audrow Nash: Let's start with humanoids. Tell me, about what you think, what some of the challenges will be, and then the future that you imagine.

[01:10:35] Edward Mehr: Yeah, I think humanoids are an interesting concept. I like it. I think humanoids, from the perspective of generating excitement about the robotics space, are as close as it gets, in terms of, oh, they look like humans, they can do everything we do.

Yeah. And there are environments where you certainly need humanoids. Inside homes, elderly care might be a good area. Even in that area, maybe there are other form factors that are better, but there are certainly areas where humanoids are very good.

And like I said, PR wise, it's a very good story, because it looks like a human walking around, reasoning, talking. So it's amazing in terms of excitement. There are some challenges; people are working on what the actuator actually needs to be in a humanoid. I have some friends working on that.

And there are exciting developments there. But I think we realized a while back that the human form is not necessarily the most optimized form for all tasks. In manufacturing, we already know robotic arms are much better. They can be much more precise, they can have much more stiffness, they can apply higher loads.

So the amount of flexibility you get, times the performance, is actually pretty big, and it's much better than the human form factor. So I would say in areas like manufacturing or logistics, a simpler form might be more effective, and that, mostly, I think, is the robotic arm, because you provide the same kinematic freedom as a human in a much simpler, easier to manufacture robotic system.

[01:12:25] Audrow Nash: Oh, yeah,

[01:12:28] Edward Mehr: So I think, eventually, hopefully, we're going to have a multitude of robot form factors, and it's going to be different for each application. Humanoids will have a space, but I'm still not convinced how big that space is. As much as people talk about it, I think it's still pretty limited, because long ago we figured out that the human form is not necessarily the best form for many applications.

[01:12:59] Audrow Nash: It's interesting. The thing that stuck out to me about your explanation is that robot arms might be the optimal form for a lot of what we're doing. That's a really interesting idea, and it makes good sense to me, being that it's really just the simplest way you can have some sort of end effector moving around doing something.

So if you need.

[01:13:26] Edward Mehr: degrees of freedom on the end effector, yeah. Yep.

[01:13:29] Audrow Nash: So it's the simplest way for that. And also, if you look at robotics at the moment, a lot of the things that are very valuable that we're doing are simple mobile robots, where you're just having a robot drive something around, moving from here to there, because we have a lot of those types of problems, and the state of the technology is very well suited for that type of problem at the moment. And what I would bet, connecting with your idea, is that the next 10 years or something might be where we see a lot of robotic arms doing useful things in more spaces.

And they'll probably be a more optimal form factor than a humanoid, which has smaller arms, not as much reach, not as much torque or stiffness, these kinds of things. What do you think of that idea?

[01:14:31] Edward Mehr: No, I think you're absolutely right. I think there are analogies in other industries as well. I think robotic arms first got deployed in the '40s and '30s, roughly around that time. I don't know exactly when the first robotic arm was made, but it was somewhere, it was used in Germany, in the '30s or '20s or something.

So it has been in the industry for a long time. There's a lot of legacy; it has been perfected for a long time. Now there are multiple vendors that are competing, so it's almost commoditized. So it's a very good interface. The parallel to this is, we see a lot of growth in NVIDIA today.

What was that company? It was a

[01:15:12] Audrow Nash: Just an enabler. Yeah.

[01:15:14] Edward Mehr: Gaming chip company, right?

[01:15:16] Audrow Nash: Uh-huh..

[01:15:17] Edward Mehr: There is something to be said about why NVIDIA became the cornerstone of computation for neural networks. It's because of the legacy. It's a chip that already existed, maybe for another application, but it's a very good fit, and it was perfected.

Now, maybe people are thinking about other types of chips. But the foundation was built on a legacy system that had other applications. Because all the bugs, all the issues have been ironed out, and it's easy to use it, and it's commoditized, and the price is low, and you can just use it.

I think the same analogy works for robotic arms, where these systems have been perfected.

We have a very good supply chain. Siemens makes a shit ton of electric motors, and Nabtesco makes drives, and they have perfected this supply chain. Why do we want to disrupt it for another form factor? There need to be very good reasons, and I think the established supply chain usually wins out if the improvements are not ten times better.

[01:16:24] Audrow Nash: Because they get a lot of efficiency from the amount of time they've been in the space and the amount of products that they've pushed out. Yeah, that's a very cool perspective. I'll have to think more about that.

I like that a lot. Let's see. So one thing that I think is especially interesting with humanoids is you have large valuations, like Figure raising a billion dollars or whatever it is, and they are their own company raising the money. I wonder if it will be like autonomous cars, where you see Cruise get a billion dollars and then it's very hard to deliver on the product in the long run. But the thing that's especially interesting to me is Tesla with their Tesla Bot, or Optimus, and the thing that's really cool with that is that they are their own captive market, in a sense.

Like it's a very farsighted initiative. They don't have to appease venture capitalists who want a return on investment within some timeframe of years. So I feel like it's an interesting dynamic, that they have been able to fund this research internally, and maybe on a longer time horizon it will be exciting, or maybe the research can just be applied to other areas. Like maybe the top half of a humanoid is very good for upholstery in car manufacturing, for example.

What are your thoughts on this?

[01:18:07] Edward Mehr: Yeah, it's a good point you brought up. I think all Elon companies should be seen in a specific light.

[01:18:16] Audrow Nash: I'm excited to hear what that light is. Yeah, go ahead.

[01:18:19] Edward Mehr: I think one portion of it is that he does things that are very exciting, right? Now he's in charge of this platform, X. And if you look at a lot of the companies, he goes after that PR angle, right?

Because I think that's what generates enough excitement for him to create a big problem space that he can solve. But the angle that people usually forget about Elon, I think, is that he does everything, in my opinion, and I may be wrong, but based on my experience working at SpaceX and interacting with a lot of people who work with Elon even today, Elon's number one goal has always been becoming multi-planetary, and I think you can see most of his companies in light of that.

Obviously SpaceX is at the core of it, with Starship and Falcon 9, finding a way we can get to Mars and other planets. Electric propulsion is another one; that's pretty much the only way you can travel in other places. You cannot necessarily use biofuels and things that we have been

[01:19:24] Audrow Nash: You run out or

[01:19:26] Edward Mehr: out. Yes, electric is probably the most available way of doing propulsion in those places. And then you look at the Boring Company. If we go to Mars, there's going to be radiation out there; we need to go down into the ground. So

[01:19:40] Audrow Nash: Aha, that's wild. I never connected all this. Okay.

[01:19:44] Edward Mehr: you need to go down and do that. Neuralink also, we're going to come back to it, but it's about a kind of high bandwidth control interface.

And then I think humanoids are also an easy interface for astronauts. If you build a system for humans, then you can replace them with humanoids for outer space applications. So a lot of the bend of Elon's companies is around, I think, building different components of that longer term goal that he has, which is becoming multi-planetary.

And he's being very creative about it, like finding business use cases on Earth that can propel this. But I do think he always had that bend toward multi-planetary, and that's why he starts from there, basically. He looks at what he needs on Mars and then

[01:20:32] Audrow Nash: aha.

builds backwards: what do you need to build here on Earth?

So that's that. Figure, I don't know, I don't know enough to know, but I think Elon's companies at least are unique in that perspective.

Yeah, that is a very interesting way to frame it. And it makes sense. I had never connected those dots. I love it. Hell yeah.

[01:20:54] Audrow Nash: And so now moving on to AI. When we started, you were saying you're using AI, and I think AI has become a good bit of a buzzword marketing thing.

Like everyone wants to say AI when they mean machine learning,

[01:21:09] Edward Mehr: Yeah.

[01:21:10] Audrow Nash: which is optimization from my perspective, like regression or things like this. And so, from my perspective, what you guys are doing, it's not ChatGPT; you're using neural networks. But I don't know, tell me about your thoughts on AI and then we'll go into more specifics.

[01:21:38] Edward Mehr: Yeah, it's interesting. The definition of AI has changed

[01:21:43] Audrow Nash: Oh yeah,

[01:21:44] Edward Mehr: over time, right? Like I said, if you look back at the '60s and '70s, rule based systems were considered artificial intelligence. Then when I was in school in 2004, we were talking about tree search as an AI system, where you basically roll out the future in multiple steps, you optimize, and then you get that.

And then with data, the definition moved toward ML. Basically, can I find the pattern in a set of data the same way a human finds a pattern? Like, you look at a picture and the pattern of colors and lines tells you it's a dog. And that became the state of the art for AI. Now, I think with transformers, that again changed: large language models became the state of the art of AI, because you could talk to it and it will talk to you like a human.

But the core of large language models, GPT or transformers, to me, is the same as a simpler neural network, and then

[01:22:52] Audrow Nash: It just

[01:22:52] Edward Mehr: is the same, and each node of those is basically a regression model.

[01:22:56] Audrow Nash: Yeah,

[01:22:58] Edward Mehr: So they're almost all maturities of the same system, all the way even to large language models. So I categorize them as the same technology, with AI being the term that we always apply to the most advanced, cutting edge portion of that. So yes, we do use machine learning. We are using graph neural networks. And then with this next project, we are using transformers to do a multimodal kind of

[01:23:31] Audrow Nash: So you can have a context remembered

[01:23:33] Edward Mehr: You can have a context, and also, we're talking about multimodal models now, where you can feed it an image of the CAD of a part, but you can also feed it an instruction of what accuracy you want, and then it comes up with the robot actions. So it's a multimodal model that we are looking at.
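A toy sketch of what "multimodal" means mechanically, assuming a patch-plus-token scheme like the one vision-language transformers commonly use. The image, patch size, and instruction below are all made up for illustration; this is not the actual pipeline discussed here.

```python
# Toy sketch (all names and data hypothetical): an image and a text
# instruction become one token sequence, so a single transformer can
# attend across both before predicting robot actions.

def patchify(image, patch_size):
    """Split a 2D grid of pixel values into flat patch 'tokens'."""
    rows, cols = len(image), len(image[0])
    patches = []
    for r in range(0, rows, patch_size):
        for c in range(0, cols, patch_size):
            patches.append([image[r + dr][c + dc]
                            for dr in range(patch_size)
                            for dc in range(patch_size)])
    return patches

def build_sequence(image, instruction, patch_size=2):
    """Interleave image-patch tokens and word tokens into one sequence,
    tagging each element with its modality."""
    tokens = [("image", p) for p in patchify(image, patch_size)]
    tokens += [("text", w) for w in instruction.lower().split()]
    return tokens

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [2, 2, 3, 3],
         [2, 2, 3, 3]]
seq = build_sequence(image, "form this CAD below 1 mm accuracy")
# 4 image-patch tokens + 7 word tokens = 11 elements in one sequence
```

The point is just that both modalities end up as elements of one sequence, so attention can flow from the accuracy instruction to the relevant region of the CAD image.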

[01:23:48] Audrow Nash: That's super cool. Okay, I got to learn more about that kind of thing.

[01:23:52] Edward Mehr: So that is closer to what is happening with ChatGPT. Now, I think the definition of AI will progress every time we beat the boundary. Now, is it the Terminator or Skynet? Not yet. In pop culture, that's what people think of as AI. But if you go into academic texts, the definition has changed over time.

So I don't know what to call it, actually. You can go with the pop culture reference; I don't think anybody's there yet. But if you go with LLMs, then you can talk about, okay, transformers are maybe AI, the building components of AI. We are modeling using transformers. So it's a question of how you define it.

[01:24:37] Audrow Nash: Yeah, for sure. I guess AGI, artificial general intelligence, is like the new thing everyone's trying to do, and that's the holy grail of all this AI stuff. But it's funny that it's, oh, artificial intelligence will never get this, and then it competes in chess and beats the top person, and then at Go, and then it's, oh, it's not AI yet. With every one of these we redefine AI. Yeah, I see what you mean, where we change the definition. What are your thoughts on ChatGPT, or large language models more specifically, and where they're useful, I suppose?

[01:25:27] Edward Mehr: Yeah. So, the way I think about large language models, and I'm not an expert in the field, I have very surface knowledge in that field, I think there's something special about them. Obviously they're language models, so they can just talk, right? But I think they're capturing the way we think, because I think of language as the way we encode human knowledge and thought process. Mastering that is as close as we have figured out, to this date, to thinking.

So then there are areas in manufacturing where that directly applies, right? And mostly it's around interface improvement. If I'm programming a robot to do certain things, can I just talk to it? Instead of coding up the forming of this geometry, can I just say, form this CAD, and then say, okay, scan it, and it will tell me what the metric is, and I say, can you improve it by 10 percent, and then it will do another round of forming? So the direct area there is removing friction in the interface with robotics. And I think that definitely is on our horizon as well.

We're looking into that. Maybe not the first order problem we want to solve, but it's definitely on our horizon.

[01:26:44] Audrow Nash: Eventually. Yeah.

[01:26:46] Edward Mehr: But with multimodal models, I think we are getting much closer to that being applied in manufacturing. I was following one of our great friends, Pieter Abbeel, at Covariant. They released RFM-1, which is a multimodal model.

You can just say, okay, here's a picture of a bin and here's an instruction: pick up a banana from it. And the model outputs robot actions on the other end that will pick up a banana from that image. So now we're getting not just text. We're using the same component, transformers, the same building block, but creating these multimodal, multi-interface, multi-medium models that can input image, still frame, and language, and then output the same.

[01:27:36] Audrow Nash: You just convert it to data and it will do something with the data.

[01:27:38] Edward Mehr: And they're combining image and natural language in a way that the brain does, and that's pretty exciting. I think that has huge potential to disrupt the robotics space.

[01:27:51] Audrow Nash: Gotcha. One of the things that was interesting to me, especially working with ChatGPT, and maybe this won't be as much of an issue for robotics companies because the task space could be smaller, is that there's a lot of weirdness around predictable results, where it'll just hallucinate something. And they are getting better. Specifically, in my experience lately, Claude 3 is way better, hallucinating far less than ChatGPT.

That has been really cool. It actually might be the first one that I can actually use, rather than just getting really early ideas about something or very basic coding help. I might actually be able to enlist it for podcast related tasks. But what do you think about the hallucinations of these models?

[01:28:48] Edward Mehr: Yeah, this is not a new thing, right? Coming from robotic systems, we've always worked with deterministic systems, where the same input gives you the same output every time. But we have been dealing with humans, and humans are very non-deterministic, right?

And they are a big part of the manufacturing process, right? Humans are a big part of manufacturing, and the way we got around that is through checks and balances. We say, okay, the output needs to be verified this way, and as long as it's passing these verifications, it's good enough. Because in reality, we could not rely on human output.

One day I'm sick and I make a mistake, and the next day I might outperform what I usually do, right? So we have already built systems that deal with ambiguity. So I'm not that concerned about it, honestly.

[01:29:39] Audrow Nash: Yeah, you just have your checks and verifications. I think that's great, because you're right, you don't need to give it total control and have it do crazy things. If you were having a robot choose actions from a language model, or maybe a language action model or a large action model or something like this, you could just run it through a path planner at a very high level with a very low fidelity simulation, really quickly, to make sure it's not going to collide with stuff.

And that would be a very simple check on the proposed actions, which is actually a very cool idea.
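The kind of cheap validity check described here could look something like this sketch, with hypothetical 2D axis-aligned obstacle boxes standing in for a real planner's world model, and made-up waypoints standing in for the model's proposed actions:

```python
# Hedged sketch: validate model-proposed waypoints with a cheap
# geometric check before execution. Obstacles and waypoints are
# illustrative, not from any real system.

def in_box(point, box):
    """Axis-aligned box containment test in 2D."""
    (xmin, ymin), (xmax, ymax) = box
    x, y = point
    return xmin <= x <= xmax and ymin <= y <= ymax

def validate_path(waypoints, obstacles):
    """Reject a proposed path if any waypoint falls inside an obstacle."""
    for i, wp in enumerate(waypoints):
        for box in obstacles:
            if in_box(wp, box):
                return False, f"waypoint {i} collides with obstacle {box}"
    return True, "ok"

obstacles = [((2.0, 2.0), (3.0, 3.0))]
good = [(0.0, 0.0), (1.0, 1.5), (4.0, 4.0)]
bad = [(0.0, 0.0), (2.5, 2.5), (4.0, 4.0)]
```

A real check would also test the segments between waypoints, but the structure is the same: a fast deterministic filter in front of a non-deterministic proposer.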

[01:30:11] Edward Mehr: No, you're absolutely right. And then the other piece, I think, at least the way we thought about it, is about when you want repeatability. You can use neural networks and machine learning and AI to develop something. In our case, for example, you can ask a model: what are the process parameters, what path does the robot need to take to form this part, this aircraft wing, below one millimeter accuracy?

And it will come up with that. And maybe it makes a mistake and you have to fine tune the model. But the moment you want to make 10 of them, close it. Don't ask your model every time how to make the same thing over and over,

[01:30:52] Audrow Nash: You already got

[01:30:53] Edward Mehr: close it.

So you can also create these almost-silos and be like, okay, in development I'll use the models, but in production I'll lock it down. At least that's how we were thinking about it. For our case, once we've figured out how to form this wing, we don't want to ask the AI anymore.

We just want to replicate the recipe it came up with.
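That develop-then-lock-down pattern could be sketched like this; the model stub and the recipe fields are hypothetical stand-ins for whatever process parameters the real system produces:

```python
# Sketch of "lock it down": during development a (stub) model proposes
# process parameters; once the part qualifies, the recipe is frozen and
# production replays it instead of re-querying the model.

import json

def propose_recipe(part_id):
    """Stand-in for an ML model suggesting process parameters."""
    return {"part": part_id, "tool_speed_mm_s": 40.0, "step_down_mm": 0.5}

class RecipeStore:
    def __init__(self):
        self.frozen = {}

    def develop(self, part_id):
        # Development mode: ask the model.
        return propose_recipe(part_id)

    def freeze(self, part_id, recipe):
        # Serialize so production runs are byte-for-byte repeatable.
        self.frozen[part_id] = json.dumps(recipe, sort_keys=True)

    def production(self, part_id):
        # Production mode: only qualified, frozen recipes are allowed.
        if part_id not in self.frozen:
            raise KeyError(f"no qualified recipe for {part_id}")
        return json.loads(self.frozen[part_id])

store = RecipeStore()
recipe = store.develop("wing-rib")
store.freeze("wing-rib", recipe)
```

The design choice is that non-determinism is quarantined to development; production reads only the frozen, serialized recipe.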

[01:31:15] Audrow Nash: Yeah, I like that a lot. Any thoughts on the form these checks and verifications for LLMs would take? I'm just thinking of unit and integration testing for this kind of thing, and maybe some high level safeguards, but any thoughts on this? Because it's a really cool idea. This is something I'm going to think a lot more about, I think.

[01:31:35] Edward Mehr: Yeah, I think you nailed it in most respects. As long as it's not going to destroy the system, I think you can let the robot do what it needs to do. As long as it's not destroying itself. That means, maybe, don't go above certain currents, don't run into yourself.

You just talked about collision.

[01:31:51] Audrow Nash: So you give it, like, safe parameters, a safe set of actions,

[01:31:57] Edward Mehr: yes, and that's a much bigger space than what you could think of, but at least the robot doesn't damage itself.

[01:32:03] Audrow Nash: Yeah.

[01:32:04] Edward Mehr: And I think as long as it's not damaging itself, I think you can probably let it explore.

[01:32:11] Audrow Nash: huh.

[01:32:12] Edward Mehr: At least that's how I'm thinking about it.

[01:32:14] Audrow Nash: Yeah, that makes a lot of sense to me. What do you think about the role of simulation in all of this, and sim2real and everything like this?

[01:32:22] Edward Mehr: Yeah, that's another interesting area, right? You can let these robots be in the simulation world. For us, it's a little bit tougher, because the main reason we went to AI based models is that physics simulation was complicated and tough to do. But that's another branch of work we're doing at the moment.

We actually did this proposal with Lawrence Livermore National Lab a while back, trying to create a simulation of our process that runs on GPUs and is faster. So I think there's opportunity there to create augmented data. As long as the simulation is fast, maybe it's not as accurate, but if it's fast, then I think it can be helpful, right?

Because you get roughly the right thing, and then you can do a lot of checks and balances, and it generates a lot of synthetic data to also improve the model. As long as, directionally, it's correct. If the model is directionally not correct, then it's going to be really tough.

[01:33:23] Audrow Nash: What do you mean by directionally? If it's doing completely the wrong thing?

[01:33:26] Edward Mehr: Yeah, completely wrong. Maybe it doesn't accurately figure out what's going to happen in the real world, but you know that if you increase the speed, the outcome also moves in the right direction, similar to what happens in the real world.

[01:33:42] Audrow Nash: Yeah, one work that I saw quite a while ago, and I'm sure you guys are aware of stuff like this, they were using a very high fidelity simulator, and they were using machine learning to try to link one frame to the next for complex objects. So they ran the simulator super slow, in super high resolution, with a very tiny timestep, all these things.

But they learned the transitions, and then, within the space of things it had seen, it was a very fast simulator, because you just had a pass through the feed forward network, effectively. And that kind of thing seems really cool. And if you apply a check on that, like a super low resolution version of the simulation that says, hey, let me make sure that things are not just completely wonky, everything shooting across the screen or whatever, then you could probably use that kind of thing for learning at a fast rate, which is quite cool.
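A tiny sketch of that learned-proxy idea, with a one-parameter least-squares fit standing in for the feed forward network, and an exponential decay standing in for the slow, high fidelity simulator (all values are made up):

```python
# Sketch of a learned "proxy" simulator: fit a simple model to
# (state, next_state) pairs generated once by an expensive simulator,
# then step the cheap model instead.

def slow_sim_step(x):
    """Pretend high-fidelity simulator: decay toward zero each step."""
    return 0.9 * x

# Collect transition data from the expensive simulator once.
data = [(x, slow_sim_step(x)) for x in [0.5, 1.0, 2.0, 4.0]]

def fit_linear(pairs):
    """One-parameter least-squares fit: next = a * current."""
    num = sum(x * y for x, y in pairs)
    den = sum(x * x for x, _ in pairs)
    return num / den

a = fit_linear(data)

def fast_sim_step(x):
    """Learned proxy: one cheap multiply instead of a full sim step."""
    return a * x
```

The directionality check discussed above would compare the proxy against ground truth: increasing the input should move the proxy's output the same way the real simulator's output moves, even if the magnitudes differ.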

[01:34:44] Edward Mehr: No, I agree. Building that kind of a proxy, you're almost building an empirical, machine learning based proxy model for the simulation using that technique.

Which is fantastic. I think that's a good way of doing it: if you cannot make a lot of data in the real world, use the simulation to augment it.

And as long as, like you said, it's directionally correct, if I move this to the right in the simulation, it's going to move right in the real world as well. As long as that directionality is there, I think you can generate a lot of data and learn a lot of patterns from it for your model, and then retrain it or do transfer learning in the real world to fine tune it based on what's happening in the real world.

[01:35:27] Audrow Nash: Oh, yeah. Changing topic a little bit.

[01:35:32] Thoughts on manufacturing

[01:35:32] Audrow Nash: Looking out at the manufacturing space, what are some things that are top of mind for you? Predictions, maybe, in the next five years or so? Where are you watching? What's interesting to you, looking out at the future and considering our world and manufacturing?

[01:35:51] Edward Mehr: Yeah, like we said, there's a lot of wind behind onshoring manufacturing. I think there's going to be a lot of policy, both government investment and private investment, in that space. But what I think is interesting is that there are two camps. There's a camp of: let's replicate what we did in the 1960s and 1940s and World War II.

Let's bring all that scale back. Let's create foundries, let's create machine shops, and let's make a lot of them, right? And there is, I think, a camp of: let's reinvent those technologies so that they can actually be economical in the United States.

[01:36:34] Audrow Nash: Totally.

[01:36:35] Edward Mehr: And I think it's interesting.

I think the first camp is easy to argue for. To me, it's a lazy idea, and there's a lot of traction for it, where we're just saying, let's do what we did in the 1940s and 1930s, let's just scale it. But there's a reason we lost it. There's a reason that, as wages and the standard of living increased in the United States, we offshored these things to other places.

That reason is not gone, right? So I think the future of manufacturing will not look like the 1940s. We're not going to be able to replicate what happens in China here.

[01:37:16] Audrow Nash: And

[01:37:16] Edward Mehr: you want to beat China,

[01:37:18] Audrow Nash: It doesn't make any sense here. Our population is looking like there are fewer younger people, and they all want to do jobs that are not manual. So it's

[01:37:30] Edward Mehr: But there's a lot of push. A lot of people are pushing towards, okay, let's bring these capabilities back. So we'll see if that pans out. I think the jury's still out. In my opinion, that's not going to happen, or there will be a lot of failed

[01:37:46] Audrow Nash: options where we just try to bring everything

[01:37:48] Edward Mehr: bring back and they're going to

[01:37:49] Audrow Nash: have the people for it. I think

[01:37:52] Edward Mehr: but they're going to push forward.

I think politicians will push forward. A lot of people will push forward: oh, let's make a thousand factories here with the same old technologies, right? I do think, though, we need to be a little bit smarter about it and take the harder route and create new manufacturing technologies that give us a competitive advantage versus the traditional ones our opponents are using, because that's how we're going to have a sustained advantage.

The same exact foundry will not work. A more advanced foundry here is going to work, one that's robot powered and intelligent. The same sheet forming and stamping as in China is not going to work here. We need to figure out how we can create technologies that form sheet metal in a different way, in a much more agile way.

That gives you a more significant advantage over the current technology, not just scaling it with dollars. So I think that's basically the direction I'm predicting. I think both will happen. One of them will likely fail and not be as fruitful, and the other, I think, is the route to go.

The latter is the route to go.

[01:39:02] Audrow Nash: I think with COVID, and maybe lockdowns in China or something, a lot of textiles were not coming to the US, and so I think we innovated quite a bit. I heard this somewhere, and I am not a hundred percent sure, but what I believe I heard was that textile manufacturing had a big renaissance here because we created new ways of doing the manufacturing so we could have the stuff we wanted. And if it's harder to get stuff from abroad, hopefully a lot of the things we can figure out how to make here in the US out of ingenuity, like that story about the fabrics and what you guys are doing: making manufacturing more agile, using our strengths, which I would say is technology.

[01:40:04] Edward Mehr: Yes, similar thinking also exists in the military, and to some extent some of it is good. For example, we were talking about, okay, China can have a higher production rate, so why can't we just manufacture drones that are maybe lower quality but at the same production rate? And that type of thinking moves you toward... maybe we need a little bit of it, but it moves you toward replicating what's happening in China, which I don't think works. At best we're going to become as good as China, but then we don't have the demographics, the workforce, to back it up.

So let's focus on a different way of making. Maybe we should make drones that have the same agility, the same production rate, but maybe last longer. They're better. And let's invest in those technologies. So anyway, I think there is an easy way out, which is replicating what we had in the past, which might

[01:40:58] Audrow Nash: But may not work for this kind of thing, as you're suggesting. Yeah, and then there's the other way, which is to make better ways of doing things. And that's probably the best way to do it. I forget who I was talking to, but someone was saying that if we wanted to build an iPhone in the US... and take all of this with a grain of salt, because I don't remember the specifics... but it's like there's a whole city in China with a million people who all work for the iPhone manufacturers. Imagine a full city that is a hundred percent around the fabrication of an iPhone. Which is bonkers to consider here. Like, I'm in San Antonio, Texas, and it is a big city, but we have so many industries here.

It's not like we all just work on an iPhone for this kind of thing, or

[01:41:52] Edward Mehr: No, you're absolutely right. You look at

[01:41:53] Audrow Nash: to live.

[01:41:55] Edward Mehr: it's machine operators running these machines over and over again, making phones and things like that. But yeah, I don't think it will work here. There's a reason it worked in the 1940s, but I don't think it works now.

So yes, I'm in total agreement with you.

[01:42:12] Audrow Nash: Hell yeah.

[01:42:15] Advice to get involved in manufacturing

[01:42:15] Audrow Nash: What advice do you have? So say someone is in the early part of their career and they want to get more involved in the manufacturing space. They have a background in technology. How would they do so? What advice could grease their path, make it a bit easier to get started and productively contributing?

[01:42:43] Edward Mehr: I think, obviously, there's like millions of ways to get involved, so whatever I'm suggesting is probably something that I anchored on

[01:42:51] Audrow Nash: Yeah, for sure.

[01:42:51] Edward Mehr: more relevant to me. We have a lot of smart people working on a lot of smart things, right? Folks in AI and software, electrical engineering, a lot of folks doing a lot of smart things. I have noticed that people think of manufacturing as out of reach, because as a society, we don't grow up around manufacturing anymore, right? We grow up with a lot of technology, but we don't see manufacturing day to day, and to some extent, that removed that agency and that removed that belief that, I can do this, right?

One of the things that happened to me when I went to SpaceX... I used to do a lot of manufacturing as a kid, but when I went to SpaceX, the large scale of the manufacturing operations we were running gave me and a lot of other people who were at SpaceX a lot of confidence. We thought, oh, it's totally fine, I can do this, right?

So what I would suggest to people who are smart and have been working in other disciplines that are much more theoretical: you're a smart person, just start making something.

Learn to weld, go learn to do carpentry, do some kind of fabrication, and maybe get involved with people who do fabrication.

But I think the biggest portion is, once you do it, you realize: first of all, it's really fun. Second of all, I can do it. Now I can apply this skill set that I had in robotics to it as well. When I was doing sheet shaping by hand, I was like, okay, I can do it sheet by sheet, I can make a door for a car, whatever.

But then I was like, wait, I've spent a lot of time in robotics and computer engineering. Maybe I can put a robot together and build this. I think creativity comes when you connect multiple disciplines. So go do those disciplines. Don't be scared of going out there and taking a welding class.

Don't be scared of taking a carpentry class. And then start thinking about how you can connect the dots between the multiple disciplines you might be involved in. And I think that's how we're going to build those next generation manufacturing technologies that you talked about, and not just replicate the past.

It's these smart people coming from the worlds of physics and electrical engineering and robotics and AI doing these tasks that sound mundane, but then you're like, oh no, I can significantly drive optimizations here and improve these techniques by 10x, 20x, 30x.

Does that make sense?

[01:45:19] Audrow Nash: Totally. Get involved, start building stuff, take a welding class, take a carpentry class. I think I'm gonna follow your advice with that, 'cause I am coding all day and I love it. But I would also love to do more with my hands, building stuff. I have a pretty decent background of building things, but I don't do much lately and I feel like I'm losing it; even simple robotics projects have been challenging.

[01:45:49] Edward Mehr: Yeah. And I think the good thing about it is, it's funny, even when you talk with craftsmen, you start having these conversations about materials and properties. It doesn't sound like a materials scientist, but there's so much deep knowledge that you gather. The physics of the world becomes more accessible to you, and you can reason with it in intuitive ways.

When you start building, versus before, when it's just a textbook: oh, elastic versus plastic, the stress-strain curve. You really get it. And then you start feeling with your hands what that means. And it allows you to have very intelligent conversations and develop very complex topics much more simply, as opposed to coming from a metallurgy school or whatever. Anyway, it's hard to describe, but I think once you start putting your hands on things and building things, somehow you feel much more empowered to do things.

[01:46:49] Audrow Nash: Hell yeah. Okay, we'll end there. Great conversation, Ed. Really appreciate you being on the podcast.

[01:46:57] Edward Mehr: Same. Thanks for the engaging conversation, and I hope other people enjoyed it as well. I definitely did.

[01:47:04] Audrow Nash: Bye, everyone.

That's it. You made it.

What did you think? Is Machina Labs onto something? Do you see more great uses for AI in robotics in manufacturing? Let me know in the comments or on X. See you next time.

[00:00:00] Introducing the Episode

I'm going to tell you about a company that might help you buy a home in the near future.

Housing is expensive. We all know that, and it seems to be getting less and less affordable. A big part of that expense is the labor cost, and the labor cost is probably going to get worse and worse, as more people are leaving construction than are going into it, at least in the US.

To me, it seems like this is a losing game, that is, unless we change how things are done. And that is what Cuby Technologies is set up to do. And they plan to deploy their factories all over the U.S. in the next 10 years.

In this episode, I talk with Oleg and Aleks, who are both co-founders and the CEO and COO of Cuby, respectively, to learn more about Cuby and how they plan to change things.

I think you'll like this interview if you're interested in the housing shortage and how robotics and automation can help, how a complex industry like construction can be simplified to drive down the costs, and how Cuby has de-risked a large technical problem with much less capital than you might think.

I thoroughly enjoyed this conversation and I hope it works out that in a few years when I want to build a house, I can go through Cuby.

With all that, I hope you enjoy this interview.

[00:01:27] Meet the Founders: Oleg and Aleks Introduce Themselves

[00:01:27] Audrow Nash: Oleg, would you introduce yourself?

[00:01:30] Oleg Kandrashou: Yeah, my name is Oleg. I'm an engineer. And I founded the company Cuby with Aleks. I'm an experienced businessman and have set up a bunch of manufacturing companies. I have a PhD in economics. I wrote a book about how to create a big engineering team and how to manage it. This is

[00:01:51] Audrow Nash: Hell yeah. Oh, awesome. And Aleks, would you introduce yourself?

[00:01:58] Aleks Gampel: Yep, Aleks Gampel here, one of the other co-founders of Cuby. My background has mostly been nonlinear: mostly private equity focused on real estate, but I've built operating businesses that sit on top of real estate, backed by venture. And now we've been hyper focused on building this construction technology to help solve some of the housing issues that exist in the U.S. and beyond.

[00:02:23] What is Cuby? An Overview of Mobile Microfactories

[00:02:23] Audrow Nash: So Aleks, you want to tell me at a high level about Cuby?

[00:02:28] Aleks Gampel: Yeah, I think the simplest way to define what we do is we design, develop, and deploy mobile microfactories. Those happen to be our product, and that product ends up manufacturing and assembling homes at mass scale. But really we're a vertically integrated, full stack hardware software technology business.

And the industry that we focus on happens to be the construction sector, one of the most vital and important sectors of any economy globally.

[00:02:59] Audrow Nash: Would you tell me about mobile microfactories? It sounds awesome, but what does it mean?

[00:03:07] Oleg Kandrashou: First of all, I want to explain to you about Cuby, and based on that I will explain what the microfactories are and how they look. Roughly five years ago, in my previous company... I had founded a company called Encata, a 200-plus engineer hardware outsourcing company that helps deep tech startups grow from idea to mass production.

And this company does projects in robotics, cyberspace, and different other areas. And five years ago, we made the decision to build our own headquarters, our own building, and began to communicate with the construction industry. And I understood that the industry is totally dysfunctional in comparison with manufacturing, because the same team can do the same projects and the results can be different.

And the same situation existed in the automotive industry at the beginning of the last century, when there were a lot of design bureaus. All of them produced different types of cars, the productive capacity was not so high, and the quality was very low. Until Henry Ford came up with the conveyor,

and right now we can get a good quality car at a reasonable price.

And we did almost the same, but in the more complicated construction industry. But in this industry, there are a lot of obstacles. And one of these obstacles is just the big volume you need to transport, the modules or something like that.

And we thought, why not come up with a factory and move it to the construction site, or as close as possible, use local material and local labor, and begin to produce something like Lego blocks. And with these Lego blocks, begin to produce the houses around the factory.

And that's why we came up with the technology itself, with the software to control and operate it, with the machines inside the factory, plus the factory itself, which can move from one place to another, be erected very fast, and begin to produce the houses around the factory.

[00:05:09] Aleks Gampel: Just to double click on that and give a visual: a mobile microfactory, being our product, is literally almost four dozen shipping containers that arrive anywhere in the world with a bunch of stations and machines embedded within the containers. Within a matter of weeks, it can be erected to form a factory, and these stations quick-deploy within the factory to either produce raw inputs that go into a home (think a window, or a helical pier for the foundation) or prep certain inputs that go into a house, not from raw materials but preparation: cutting, prepping, etc. And all those things then, in stages, get sent down the street in batches to be assembled into homes.

But what Oleg is describing, the mobile microfactory, is really a first principles approach. It helped unlock, A, logistics and, B, the ability to quickly set up factories that aren't gigafactories with hundreds of millions of dollars of capex, with very little fixed cost, because you don't need to rent some massive warehouse that you're competing with Amazon for, et cetera. And because you're proximate to where the construction is happening, you can now be the unskilled labor assembling the home as well, as opposed to passing off that end product to a third party who doesn't know how to assemble it. So we're an end to end solution, from manufacturing to assembling the homes, replacing the general contractor and the subcontractor altogether for home builders.

[00:06:43] Oleg Kandrashou: And that's why we are more of an engineering and robotics company than a construction company, because we had to come up with all of these machines, 'cause they don't exist in the market. We have right now more than 150 engineers in the company.

[00:06:58] Audrow Nash: It's awesome. Hell yeah. Now, that sounds super cool to me. I want to go back just a little bit. Like, you mentioned some of your background.

[00:07:07] Aleks' Journey: From Real Estate to PropTech to Cuby

[00:07:07] Audrow Nash: Aleks, I would love to hear how you came to Cuby and like what your professional path has been.

[00:07:14] Aleks Gampel: I think out of the two of us, Oleg is a lot more impressive, and he's what deep tech salivates over backing. My background is not linear. I haven't done deep tech in the past. Oleg has spent 20 years building businesses in hardware, and more specifically scaling them, and bleeding through scale-up, which is really hard in manufacturing.

And most folks underestimate that. And I think Elon will any day tell you that design is like one percent of the actual system behind manufacturing. But anyways, I come from a real estate family, so I've looked at buildings my entire life. I like the built world. It's the backdrop to everything we do.

We work, we live, we play in buildings. So it's something that's always been attractive to me. So investing in and developing real estate, and being in that ecosystem, is really my background and what I speak. But I also like technology. So luckily PropTech became a world and a word, where you have certain businesses that are operating businesses backed by venture. PropTech: property technology.

[00:08:16] Audrow Nash: I don't fully know what it means, property tech. I'm thinking Zillow, but I don't know if that's maybe an example.

[00:08:20] Aleks Gampel: It's anything that touches real estate and is a technology company. So it could be, for your audience, Zillow, it could be WeWork, and everything in between. But it's anything that's generally backed by venture, requires a certain cost of capital, let's call it R&D, research and development, but sits on top of the built world.

So I've built operating businesses that sit on top of the built world. So I've developed this niche of, hey, technology, and how can it integrate into the real world ecosystem? And I met Oleg because,

at the time I was helping a mentor of mine build a real estate business. And there was a lot of demand for the product that he was building, which happened to be lowercase-a affordable.

And there wasn't enough skilled labor to actually build that product. So I started looking for alternative methodologies to build. How do you build more effectively? How do you build more efficiently? And I started talking to my venture ecosystem about what technologies they were backing that I could be a customer of.

And those categories of technologies were everything from 3D printing homes, to volumetric modular (think giant rooms built off site and shipped to the construction site), to prefab, which is really the category we get placed into, although we're really not that. It's almost like a deconstructed home that's prefabricated.

I couldn't find a single solution that was cost effective, or that had been around long enough for me to trust its existence, meaning they weren't well run. There was regulatory risk around certain solutions. So there was a combination of reasons why I could not be a customer. A family friend introduced me to Oleg, who out of necessity was building his own headquarters for his previous business. And given what Oleg knows via Toyota's production system versus what he saw working with a general contractor and developers day and night...

So Oleg was like, holy crap, let me build in the space, because I know the system and I can apply it to construction. And I met Oleg at about 50,000 engineering hours or so... between fifty and a hundred thousand engineering hours. And I instantly saw

[00:10:38] Audrow Nash: What are you guys at now, for context?

[00:10:41] Aleks Gampel: Oleg, where do you place our engineering hours now? 350?

[00:10:45] Oleg Kandrashou: 400 now, roughly.

[00:10:48] Audrow Nash: 400. So you were at an eighth of where you are now. Okay. If you make it linear.

[00:10:56] Aleks Gampel: No, very, yeah.

[00:10:57] Audrow Nash: So you met around,

[00:10:58] Aleks Gampel: Yeah. We met at the early stages, and the business has evolved a lot since then, both commercially and... one of Oleg's philosophies in the business is that we have to iterate very quickly, which, in a second, he'll show you in the R&D center. That's why we have engineers sitting side by side with the fab, the manufacturing of the machines, so we can iterate.

So much of what we've built in the past has already been thrown in the trash and redone, et cetera. So

[00:11:24] Oleg Kandrashou: Roughly, to compare: just to come up with a fighter jet takes roughly 700,000 engineering hours. That's just to compare how big the project is.

[00:11:37] Audrow Nash: Wow. More than halfway through a fighter jet's creation, time-wise.

[00:11:42] Oleg Kandrashou: Not the fighter jet itself, but the project is the same size.

[00:11:48] Aleks Gampel: Audrow, part of the reason... so you understand the anchoring of the IP, it's really three things. It's the mobile microfactory itself and the things that go inside of it, the machines, plus even things as simple as containers. Once you create 40 custom containers, they're no longer certified.

You can't insure them when you ship them. So now you have to certify the containers and test them. So even simple things like this require a certain amount of thought. The second

[00:12:17] Audrow Nash: Engineering effort. Yeah.

[00:12:19] Aleks Gampel: layer is around the end product, the kit of parts that go into the homes, and the third layer is software. We have an entire operating system that was built in-house to power the factory,

the process behind the factory, the onsite assembly. None of it was taken from third parties. It was built in house.

[00:12:39] Audrow Nash: Okay. So that's awesome. We have a lot to talk about with all of that.

Oleg, do you have anything to add? I'd love to hear a bit of your path through this. You've mentioned you've done a lot of things, but do you want to give a more detailed explanation of your background and how it fits with what Aleks has said?

[00:12:56] Oleg Kandrashou: You know that I don't like to talk about my awards or something like that, because it's not good.

[00:13:02] Audrow Nash: Yeah,

[00:13:04] Oleg Kandrashou: and, yeah.

[00:13:05] Audrow Nash: But what you've worked on is cool to discuss

[00:13:09] Oleg Kandrashou: As I told you, we are doing something that does not exist in the market, just to solve the problem that we have. And based on that, we need to have a lot of engineers and we need to do a lot of iteration.

And based on my previous experience managing big engineering projects, we set up a process that helps us very quickly produce something and make mistakes.

[00:13:37] Inside the R&D Center: A Tour of Cuby's Innovations

[00:13:37] Oleg Kandrashou: I can, if you want, show you right now how it looks.

[00:13:40] Audrow Nash: Sure. Yeah. Happy for it.

[00:13:42] Oleg Kandrashou: I will do it, like that. Okay, this is my office. On the left side is manufacturing. On the right side is the mechanical engineering center. This is also one of our inventions.

This is an inflated building that we can move to any parking lot, and we use them also for our own facilities. The mechanical engineers sit there. Here are the technology guys who do the software for the CNC machines. This is the manufacturing area, where we produce the prototypes and check everything that comes up here.

And you can see that we're in just a parking lot. We put the cupola here and put in all of the machines.

[00:14:20] Audrow Nash: That's so cool.

[00:14:21] Oleg Kandrashou: to produce.

[00:14:22] Audrow Nash: love it. I love that you're in a parking

[00:14:24] Oleg Kandrashou: Yeah, and we begin to produce as fast as possible everything that we are doing here. We have almost all the manufacturing facilities there. We have a laser cutting machine, bending, milling machines, powder cutting, welding, and a lot of different stuff here.

And everything moves from one department to another. For example, this is the paint department, where we cover the steel with a powder coating, like that.

Each project goes from one department to another with such a device, like that. When someone finishes the job, he pushes the green card and shows the next guy that he has finished his job and needs to do another one.

And for sure, everything is controlled by the software, and we control the movement of all the devices around our manufacturing, and all of them go to the final department, assembly, where we assemble the prototypes and the engineers check everything. And all of the engineers also work here in manufacturing, because I don't want them to design air castles.

That's why they need to work on the floor before they begin to design something. And,

That's why they need to be group workers before they begin to design something. And,

[00:15:36] Audrow Nash: What was that? I missed the last part of what you said.

[00:15:42] Oleg Kandrashou: Before they begin to do design in the design center, the designer should work in the manufacturing for one or two months, to know how all the

[00:15:53] Audrow Nash: Oh, I love

[00:15:53] Oleg Kandrashou: and only after that do they begin to design. Because I don't want them to come up with something that the manufacturer can't produce.

And here is the department where we assemble different prototypes of the machines. For example, this is a toolbox for construction, where we put all of the tools. We came up with that, with the cameras and so on, that we use on each of our construction

[00:16:14] Audrow Nash: A toolbox. That's for the people that are assembling. You have that

[00:16:18] Oleg Kandrashou: Yeah, for the construction site.

For example, this is an automated robot that we will use in the factories, that will move the pallets from one department to another, controlled by our software. We also have this kind of development, like that. For example, this

[00:16:37] Audrow Nash: Okay.

[00:16:37] Oleg Kandrashou: is an air purification system. This is a system that we produced to clean the air from the laser cutting machine, because we can't

buy one on the market, because a machine of the size that fits into a shipping container does not exist. That's why we needed to come up with a new type of machine from scratch, put it in, and so on. And that's why we have thousands of such devices that we design and produce, like that.

This is how, you know, everything looks here. I can show you how I personally control the whole development process. During the design process, we have three touchpoints. The first is when the engineer has a task and has an idea of how to solve it.

The next one is when he does the 3D model and shows all the engineering work. The third one is when he prepares the documentation and the software for the CNC machine to produce the prototype. Three of them. I control all three of them. And if any engineer has a question, I'll show you: if he has a question, he puts up the flag, like that.

And I see,

[00:17:55] Audrow Nash: Ah ha, so you see it. That's great. I love your clear

[00:17:59] Oleg Kandrashou: Yeah.

[00:18:00] Audrow Nash: So you go over to it

[00:18:01] Oleg Kandrashou: flag. And when I have time, I just come here and solve all the problems. And that's why I can make decisions and, at the same time, manage hundreds of engineers and make a decision at each step. Because this project is very complicated, and you need, at the same time, to know all

the processes exactly: all the processes of the machines, of the technology, of the software, and

[00:18:26] Audrow Nash: In your head. Yep. You have the big understanding. And so then you can go over to the engineer, you can see if it fits in the larger context, and

[00:18:35] Oleg Kandrashou: Yeah,

[00:18:36] Audrow Nash: make your decision very quickly.

[00:18:37] Aleks Gampel: Oleg is the best delegator I've ever met or worked with. And, Oleg, you're humble about it, but talk about your book, because I think a lot of our operating philosophy is in that book. So maybe you can share that with Audrow.

[00:18:53] Oleg Kandrashou: it's interesting stuff that

[00:18:54] Audrow Nash: Yeah. Feel free.

[00:18:55] Oleg Kandrashou: When I wrote the book, I forgot about it. I sent it to Elon, and after a couple of years, he began to tweet it: how to manage engineers. It's a very funny story, yes.

[00:19:07] Audrow Nash: Oh, wow.

[00:19:07] Oleg Kandrashou: Yeah, but for me, it's just the methodology of how to hire and fire engineers and how to manage them.

If you want, I can quickly explain to you what it looks like and what this book is about.

[00:19:20] Audrow Nash: Yeah, I would love that. Yeah, and I would also love a copy of the book, just

[00:19:23] Oleg Kandrashou: yeah,

[00:19:25] Audrow Nash: It sounds awesome.

[00:19:26] Oleg Kandrashou: I'll show you how it looks.

[00:19:28] The Vector Theory: Oleg's Unique Management Philosophy

[00:19:28] Oleg Kandrashou: I'm a physicist. I have a physics background, and all my life I have tried to find physics analogs for business or economic processes. And if I find these physics analogs, I can predict how the system will develop.

For example, five years ago, we found that any employee can be compared with a vector, where the length of the vector is his personal skill, and the angle is his loyalty to the philosophy of the company. And the effectiveness of the company is just the total vector sum of all its employees.

We took the book of vector algebra and came up with rules: rules for hiring and firing, for the HR department, for the leaders of departments, how to manage people, and what you need to do. And this is a simple method, vector mathematics, for managing people. This is the book,

this is my book. That's the main principle of what is written there.

[00:20:31] Audrow Nash: What a wild thing. That's so interesting, to try to distill it into vectors that you can sum, and you can keep them probably in some

[00:20:42] Oleg Kandrashou: Exactly. And that's why, for example,

[00:20:44] Audrow Nash: some

[00:20:44] Oleg Kandrashou: Yeah, for example, you have a very good guy, a good professional, but his philosophy is the opposite of yours. And that's why he will damage your company more than the value he gives.

[00:20:59] Audrow Nash: Yeah. That seems very interesting, because you're right: someone might be incredibly technically talented, for example, but bad to work with, and they slow down the company, for this kind of thing. They could actually be a net negative on the company.

So if they have a little vector this way and a bigger vector in the other direction, then they actually subtract a little bit. That's very interesting. Okay. Hell yeah.
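To make the vector analogy concrete: here's a minimal sketch in Python of the model as Oleg describes it. The numbers and the exact angle convention are my own illustration, not from his book; it only shows the core idea that a skilled but misaligned hire can reduce the total.

```python
import math

def employee_vector(skill, loyalty_angle_deg):
    """Model an employee as a 2D vector: length = personal skill,
    angle = alignment with the company's philosophy (0 deg = fully aligned).
    These inputs are illustrative, not taken from Oleg's book."""
    theta = math.radians(loyalty_angle_deg)
    return (skill * math.cos(theta), skill * math.sin(theta))

def company_effectiveness(employees):
    """Company effectiveness = magnitude of the vector sum of all employees."""
    x = sum(e[0] for e in employees)
    y = sum(e[1] for e in employees)
    return math.hypot(x, y)

# A small, well-aligned team (angles near 0 degrees).
aligned_team = [employee_vector(5, 10), employee_vector(3, 0)]

# Add a highly skilled hire whose philosophy points the other way
# (angle > 90 degrees): his projection onto the company's direction
# is negative, so the total shrinks despite his skill.
with_misaligned_star = aligned_team + [employee_vector(8, 170)]

print(company_effectiveness(aligned_team))          # strong, aligned team
print(company_effectiveness(with_misaligned_star))  # skilled hire, net drag
```

Run it and the second number comes out smaller than the first, which is exactly the point Audrow and Oleg land on: a big vector pointed the wrong way subtracts from the sum.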

And then, just touring around your building, there are a lot of really cool things. First, the building you're in, the inflatable building: that's what you use for your microfactories, correct? Or is it very similar?

[00:21:37] Aleks Gampel: Can I explain? Oh, okay. You explain. Go ahead.

[00:21:42] Oleg Kandrashou: This type of building we produce ourselves. This is our technology that we came up with, and we use the same type of facility for our own R&D center, laboratories, factories, and so on. But the mobile microfactory, the building of the mobile factory, is almost the same, except we put it on shipping containers.

And in the shipping containers, all the machines are already pre-installed. That's why it will look the same but different, because this type of building would take me more time to move than the factories, our mobile micro factories.

[00:22:17] Audrow Nash: I see.

Okay.

Aleks, did you want to add

[00:22:21] Aleks Gampel: I think I can share my screen here, right? Yeah.

I just want to show your audience so they have a visual. Our mobile microfactory is containerized, so these are all containers that line up the base and the perimeter of the mobile microfactory, and the structure that Oleg showed earlier is this pneumatic inflatable dome. But the inside of the factory is essentially containers with a lot of our machines that get stuffed inside, and then later, when they're on site, they get deployed into stations, essentially.

[00:22:54] Audrow Nash: Very cool. And one thing that was interesting from seeing that is you see a lot of the machinery on pallets, because I guess you just move it, and that makes it easier to move out from the containers and organize on the floor. So what strikes me is that what you've done is a huge, complex problem that you're working to solve. And I'd like to go through the big parts of the micro factories and the containerization of it, then the kit of parts that you're actually sending, and then the software, or the kit of parts that you're using for assembly. I think that will be the best way to have a good understanding of what your business is doing, and then from there we can see more. Oh, go ahead.

[00:23:42] Addressing the Housing Crisis: The Need for Innovation

[00:23:42] Aleks Gampel: Just a tiny step back to give folks context on why we're doing what we're doing. So let's just create the,

[00:23:48] Audrow Nash: Oh yeah. I'd love that.

[00:23:49] Aleks Gampel: So if you look at first world countries today, everyone's very familiar with the housing crisis, meaning there are not enough homes. And because there are not enough homes, they're all brutally expensive. And because they're brutally expensive,

you have things like late household formation and risk of population collapse. So really big things that are pivotal to a thriving civilization, if you will, or a thriving country, etc. And the reason why we're not building enough, the reason why there are not enough homes, is that not enough young people want to be construction workers.

If you pick the US, for example, 40 percent of the workforce in construction is due to retire in the next 10 years. That's really devastating

[00:24:33] Audrow Nash: Yeah. It's

[00:24:33] Aleks Gampel: For every seven people that retire in construction, only one replaces them. So we're running into this issue. We're like, we need more homes.

There's not enough people to build them. And construction happens to be one of those really skilled professions that takes a long time to replenish. So we exist. Oh, sorry, go ahead.

[00:24:54] Audrow Nash: I would love it if you have some additional context on other countries. I think a lot of the rich world... I think the US actually is demographically healthier than almost all of Europe, for example. So do you have any idea of how the ratio changes for different countries?

[00:25:16] Aleks Gampel: Any first world country is experiencing similar things in construction. The way we quantify it, or what we look at, is: of the dollar that it costs to build a single metric of output of a building, how much of it is skewed toward a 60 to 70 percent ratio of labor related cost versus materials and other?

In the U.S., it's about 70 percent labor cost. We exist to reduce that by a major fraction, because we're applying lean manufacturing and Toyota's production system to construction, reducing the skilled labor and the skilled labor hours required to build buildings. We essentially tackle the supply and demand challenges in housing by reducing the skilled labor hours, and skilled labor altogether, required to build homes and, later, all types of buildings.

[00:26:07] Audrow Nash: Gotcha, hell yeah. Starting with homes for now and then eventually all buildings, that's what I hear. I guess labor counts for many things. It probably counts for transportation of the parts, and probably for some of the manufacturing of the components that you're going to use for the house.

You're taking a whole big process and moving it super close. You're sending materials that can be turned into parts, parts that you'll use for construction. How much of that 70 percent do you think you're able to eat away? Maybe Oleg?

[00:26:48] Oleg Kandrashou: For sure. This is a typical operation, a typical exercise that I have done in different industries before.

[00:26:55] Audrow Nash: I'm sure.

[00:26:56] Oleg Kandrashou: And when I began to use the lean manufacturing approach and a lot of automation, I reduced the labor force by more than 10 times. This is a typical operation.

[00:27:06] Audrow Nash: Oh my god, so it goes to 7%?

[00:27:08] Oleg Kandrashou: 10%,

[00:27:09] Audrow Nash: Oh my gosh, that's amazing. Okay, so the goal is to make it

[00:27:15] Oleg Kandrashou: There are some operations where we reduce it 2 or 3 times, but there are some operations where we reduce it more than 10 times.

[00:27:21] Aleks Gampel: It's not linear,

What do you mean, it's not linear? I expect there are a few things you can knock out that are very high value, and then it just keeps decreasing in value. Okay, but so what I'm hearing with this is, new homes could cost 40 percent of what they cost now, something like

[00:27:40] Oleg Kandrashou: Something like

[00:27:41] Audrow Nash: because you're 30 percent plus 10 percent for

[00:27:43] Aleks Gampel: I'll tell you this right now. The average cost to build a home in the U.S. today, on paper, I think from 2021, is $156 a foot. In reality, that number takes into consideration the top 20 home builders, who have massive volume and build really efficiently. But if you're Bob Smith the home builder, you're at like 200 bucks a foot, 250.

You're at higher numbers, especially in the Northeast. We today feel pretty confident building a home for around $100 to $110 a foot, all in.
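The arithmetic implied by these figures can be checked quickly. A rough sketch, using the round numbers quoted in the conversation and range midpoints I chose myself; none of this is official Cuby data:

```python
# Labor share math from the "70 percent labor" and "reduce more than 10
# times" figures quoted above (the 10x is Oleg's typical case, not all ops).
labor_share = 0.70
materials_share = 0.30
labor_reduction = 10

new_cost_fraction = materials_share + labor_share / labor_reduction
print(f"{new_cost_fraction:.0%} of the original cost")  # 37% of the original cost

# Per-square-foot comparison, using midpoints of the quoted ranges.
paper_average = 156   # $/sq ft, the on-paper 2021 US average
small_builder = 225   # midpoint of the $200-250 "Bob Smith" range
cuby_claim = 105      # midpoint of the $100-110 claim

print(f"{cuby_claim / small_builder:.0%} of a small builder's cost")  # 47%
```

So the "40 percent of what they cost now" guess lines up with a 10x labor reduction on a 70/30 split, and the per-foot numbers land in the same ballpark when compared against a small builder rather than the on-paper average.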

[00:28:15] Audrow Nash: That's amazing. Okay, and that is so cool. So you're getting this huge reduction. That's the motivation: labor shortages drive up costs. And I would bet... so we're losing seven people and gaining one.

What's the time unit for that?

[00:28:40] Aleks Gampel: That's already today.

We're short almost a million

[00:28:44] Audrow Nash: I expect that will get worse in the future. And then I expect that, because there will be fewer workers, that will drive the 70 percent labor share up. Do you imagine similar things? Like it'll go from 70 percent to 80 or 90 percent, something like that?

Is that a reasonable forecast?

[00:29:02] Aleks Gampel: I'd say it less dramatically. This is the right time to be building what we're building, because it's a necessity, not a nice to have. It's a necessity, and if you're Bob Smith the home builder, who happens to be our customer, there's no way you're competing with what we call the primes. If any of your audience has bought a house from the top 10 home builders, think D.R. Horton, Lennar, Toll Brothers, some names that folks might be familiar with, there's no way that Bob Smith the home builder is competing with that guy on cost to build at the moment.

And so this is an essential need, not a want to have.

[00:29:44] Audrow Nash: Let's see. Yes, I see. Oleg, anything to add?

[00:29:47] Oleg Kandrashou: No,

[00:29:49] Audrow Nash: Okay, so the motivation to me seems really strong. I would love

[00:29:54] Oleg Kandrashou: one motivation I can add.

[00:29:56] Audrow Nash: oh yeah, go ahead.

[00:29:58] Oleg Kandrashou: Right now, when you build a lot of buildings, one of the main problems for you is not only the cost. It's quality control, repeatable quality, so that you can generate money. For example, if you build a hundred homes at the same time, the probability that you will do all of them with a quality high enough to sell decreases as you increase the number of houses that you build at the same time. That's why, with the system that we provide, with the manufacturing conveyor approach, all the quality is controlled by the conveyor process itself. That's why it's not construction anymore. It's manufacturing from scratch, from A to Z.

And that's why everything is controlled step by step on the conveyor belt. And this is the second motivation, especially in those countries where salaries are not so high, so our reduction in labor doesn't affect the price so dramatically. One of the main motivations for them is repeatable quality control.

[00:31:06] Audrow Nash: Gotcha. Yeah, that's a big win, I imagine, because you can make it so that you are producing the parts and it's more about the assembly. And once you have simple assembly, it's a lot less likely you have variation in quality. And that drives down the labor cost, because it doesn't need to be as specialized.

Okay,

[00:31:27] Aleks Gampel: also to give your audience a bit of context.

[00:31:29] Audrow Nash: Yeah, go ahead

[00:31:30] The Construction Process: Challenges and Solutions

[00:31:30] Aleks Gampel: What is typical construction? What is the typical process to build a home? 'Cause I think a lot of folks assume that it's the same as turning on a light. It just happens. But,

[00:31:41] Audrow Nash: Okay. Yeah get into it.

[00:31:42] Aleks Gampel: So you have a developer or a sponsor or someone that's actually executing on putting the project together of building a home or a set of homes.

And that in itself is a pretty disjointed task. You have to get through planning and design, right? You've got to choose a site. You've got to design the type of home you want to build. You've got to get the financing for the home. Then you need permits and legalities, right? Because you can't just innovate in this space.

You can't just build. You have regulatory bodies that allow you to build certain things and don't allow you to build other things. Then you have to actually construct, right? When you're constructing, you have to prepare the site. You have to lay the foundation. You have to frame the house. You have to install the systems,

things like electrical, plumbing, HVAC, HVAC being heating, ventilation, air conditioning systems. You have insulation and drywall, you have exterior and interior finishes, and you have these regulatory bodies, inspectors, showing up throughout the build and inspecting certain stages of it.

And then you have final inspection, to TCO, to physically be allowed to sell or rent that home to an end consumer. Then you have final touches and landscaping, and then that's it. But there are a lot of steps in between, and what's more interesting is that there are all these different parties involved in these steps.

It's not a harmonious process. It's all these misaligned, disjointed, fragmented third parties. You have the general contractor, who hires subcontractors, and then there are specialty contractors. So you have all these different bodies involved. And today a home gets built with, say, 10 to 14 people over the course of, say, 9 to 12 months.

If you're really good, you're at 7 months. We're trying to be that entire unfragmented solution, and we're trying to build homes in about 1 to 2 months, with only four unskilled people in two shifts. So it's quite a big difference. But the reason why construction is so challenging is that every project is a one-off.

That's the crazy part. Audrow, the home you're in today, right now: it could be the same bank financing it, the same general contractor, the same developer, building the same exact home next door, and it will come out different. That's the crazy part. So that's how far construction is from typical manufacturing, which can create a repeatable product with very little deviation.

[00:34:09] Audrow Nash: Yeah. Actually, thank you for going into that to explain the scope of it. And so, are you doing things like foundation work, and how does all that work?

Maybe Oleg?

[00:34:23] Oleg Kandrashou: I think it would be better if Aleks explains, because I talk a lot.

[00:34:26] Audrow Nash: Ah, go ahead.

Ah,

[00:34:29] Aleks Gampel: yeah, so

[00:34:30] Foundation to Finish: Building with Cuby's Technology

[00:34:30] Aleks Gampel: We touch everything in a home, from the foundation all the way through interior finishes. And there are elements that we literally make from raw input. Meaning, we get raw glass into the mobile microfactory, one SQU of glass, and we turn it into a double pane window from scratch.

There are things like the foundation that we literally make, and we choose a particular type of foundation. We like what's called a helical pier foundation, or screw pile. For your audience, these are literally giant screws that get screwed really deep into the ground, and the frame of the home sits on top of them.

So we do from foundation to interior finish.

[00:35:07] Audrow Nash: That's really cool. I wonder... so I'm in Texas, and we either have, I think, concrete foundations, or people have homes sitting on blocks of wood or pallets or something like that, typically smaller homes. But so this helical one: you screw it in, you get some stability from going so far down. It's similar to the wood one, but you have the benefit of being far into the earth and the stability that comes from that.

Is that

[00:35:43] Aleks Gampel: Yeah, it's actually a really great foundation system for potential flooding, hurricanes, tornadoes, etc. It's a great foundation system because it gives way, whereas concrete cracks. And then, just so your audience gets context: a typical home is made up of lumber. Like the home you're in right now, it's framed with lumber, which in different parts of the world is actually perceived as a very cheap solution that doesn't have a lot of longevity and durability. Because we reduce the skilled labor coefficient by such a significant margin,

we can afford to use higher end materials. Therefore, we use steel for our framing. We use steel beams.

[00:36:23] Audrow Nash: Oh, that's so cool.

[00:36:23] Aleks Gampel: And then the walls are a take on SIP panels, which is essentially a non structural wall: two steel coils, uncoiled, forming a thin interior and exterior skin of steel with pure foam in the middle, an insulation foam, and that forms the non structural walls.

So it's a slightly different take, but all still regulatory compliant, and it makes for a higher quality, more efficient, more durable home.

[00:36:53] Audrow Nash: We bought this house a few years ago, and we're thinking about our next house. I've been looking into different things, and I'm not that advanced in looking, but what you're describing is about what I've been wanting. So I hope you guys come to San Antonio, Texas, and I can have a house built out of what you're doing.

Let's see. So I'd like to go back a little bit now and talk more about your mobile micro factories: what's being sent with them, how you're shipping them, the containerization, and all that. Why don't we start with the actual machines that you're sending. Unfortunately, I feel like there's so much to talk about with this, but I'd love a high level understanding of what machines you're sending, what you had to build yourself, and why, to some level, you had to do it yourself. Then we can go into the containers and go from there. But Oleg, what machines are you sending, what are their roles, and what did you have to make yourself?

[00:38:02] Why Shipping Containers? The Efficiency of Factory in a Box

[00:38:02] Oleg Kandrashou: Okay, first of all, I will explain why we use shipping containers. For example, you need to open a factory, and you have a big warehouse. And, for example, you have all the machines close to this warehouse, everything that you need. It will take you no less than seven months just to take all of the machines,

put them onto the floor plan, connect them, put in all the tools, set up the process, and so on. It takes no less than seven months.

The lead time of the factory is very big, very high. You pay the money, the salaries, the rent, and so on. Our goal is to open 270 factories in the next 10 years, only in the U.S. And that's why we don't have these seven months for each factory that we plan to open. It would take all our life. Based on that, we began to come up with a factory in a box. That's why we have a Papa Factory, where we produce the factories. We take shipping containers and put all the machines into these shipping containers, ready to work: all the wires plugged in, all the tools in the toolboxes, and so on.

Everything is ready. And each container is like a device, a separate device. We stack these containers one by one in a line, connect them, and they begin to create the conveyor belt. In this case, our lead time, the time until we start manufacturing, is one month. And only with this approach can we distribute our technology very fast.

[00:39:54] Challenges and Competitors

[00:39:54] Oleg Kandrashou: Because in our industry, there are a lot of competitors. All of them are focused on the technology, but the market is so huge that none of them can really affect the market. Because if you want to produce everything with very high production capacity, you need to distribute and copy-paste your factories and your technologies. That's why, with this approach, when we put all the machines into the shipping containers, there are a lot of restrictions, the height, the weight, and so on, and to produce

[00:40:31] Audrow Nash: So I'm imagining those shipping containers, the large ones. What is it, like 10 foot by 10 foot by 40?

[00:40:40] Oleg Kandrashou: yeah,

[00:40:41] Audrow Nash: Gotcha. Okay.

[00:40:44] Customizing Shipping Containers

[00:40:44] Oleg Kandrashou: That's why, to produce everything in house, screw piles, windows, glass, paints, walls, finishes, and so on, we need to have the machines. Half of these machines we can adapt to the shipping containers. Some of them we can't, and we need to reinvent those machines.

And some of the machines don't exist in the market, and that's why we need to design and produce them ourselves.

[00:41:11] Engineering and Production Innovations

[00:41:11] Oleg Kandrashou: For example, we took this line that produces the peer panels. But in this line, at the same time, on the conveyor, we need to drill the holes in the walls while the walls are moving on the conveyor belt.

That's why we have a CNC machine that drills the holes while moving along with the line, and this kind of machine doesn't exist in the market. Or, for example, the wrapping machine that wraps all the walls at the same time, but at a much larger size, and so on.

[00:41:47] Audrow Nash: Okay, with all these containers, you have your forty foot shipping containers. You said that there's something custom about them and that you had them certified. What about them is custom? So I suppose they fit in the volume of a 40

[00:42:07] Oleg Kandrashou: One of the main things is that the walls of the shipping container need to be removed when we set everything up inside. And when you change something in a container, you need to pass through certification for transportation again. If you just drill holes in the container, you can't use it on a ship. You need to pass through all of the certification, you need to make your design, and you need to do all the work so that it will not affect the other shipping containers during the shipping.

[00:42:45] Audrow Nash: Yeah, and so do you have, so how many shipping containers do you send to set up a micro factory?

How many was it?

[00:42:54] Aleks Gampel: So

[00:42:55] Oleg Kandrashou: depend, on

[00:42:55] Aleks Gampel: So this is also important, because containers go on site as well, on each construction site. So it's all the equipment for on site and all the equipment for off site. It's both.

I was just going to simplify it for your audience and answer your original question: about 50 percent of the machines are proprietary to us. Some things are simple, like shelving systems; some things are complicated. The other 50 percent are existing machines and systems we get from the market, but we customize them to our needs.

And they'll even come white labeled with our logos on them in many cases.

[00:43:36] Audrow Nash: Oh, did you have something you wanted to say?

[00:43:39] Oleg Kandrashou: Yeah, the shipping containers on the construction site. To assemble the building, it's not enough only to produce a kit of parts. You need to have all the tools and the machines and the robots that help the construction assembly workers to assemble the building. And for this, we also have a lot of tools and machines that we came up with to help them.

For example, we have a grid in our building, 10 feet by 20 feet, with columns on this grid. And we have a special robot that we fix on these columns, which we use like a CNC machine to put the semi-dry concrete slab on the first floor, and it does it automatically. Because we need to save the labor hours of the workers.

After that, it begins to draw the floor plan, all of the walls, and so

[00:44:32] Audrow Nash: so you just know exactly how to set

[00:44:34] Oleg Kandrashou: Yeah, and that's why we also had to come up with machines for those guys. And that's why we send two shipping containers to each construction site, small ones, 20 foot. One of them has a shower, lockers, everything, where the people can spend their time and change their clothes. The other one has all the tools, machines, robots, electric tractors, and so on, that help them do their job.

[00:45:05] Audrow Nash: That's awesome. So this seems to be a Herculean effort, just absolutely massive. And I'm amazed. The details and the engineering, it's huge to go into it. Maybe I would expect that it would be a larger effort

than a fighter jet. But tell me a bit about getting here, because I think you're doing a good bit of your engineering in Europe,

[00:45:41] Aleks Gampel: Eastern Europe,

Yes.

[00:45:42] Cost Efficiency and Global Talent

[00:45:42] Aleks Gampel: So one thing we've done that's probably not traditional, but that we would maybe advise more deep tech companies to do: deep tech inherently is pretty hard, deep as in complicated technology that doesn't fit into SaaS, let's put it that way. It's a massive engineering effort, and it requires a lot of engineering hours, which are very expensive in the US, which requires a lot of upfront funding and high risk, high octane funding.

So what we've done is we've been able to arbitrage engineering hours and engineering talent in Eastern Europe. That's how we got through so many engineering hours, raising likely, you know, a tenth of the cost and capital that competitors in the space may have raised. So we'll be able to get to commercialization having raised less capital, getting through all the technical de-risking with less capital.

So our ratio of technical de-risking per dollar has been really efficient.

[00:46:40] Oleg Kandrashou: Yeah, it would be very difficult to hire 150 engineers in the United States and do a project like this.

[00:46:50] Audrow Nash: So you're saying... why would it be difficult? Is it just the cost, or is it actually hiring?

[00:46:55] Oleg Kandrashou: It's the cost, plus we would need to compete with big players like Tesla or something like that.

[00:47:03] Audrow Nash: Where in Eastern Europe?

[00:47:05] Oleg Kandrashou: We are in Minsk, in Belarus.

[00:47:06] Audrow Nash: What's the ratio that you think? one engineer in Belarus versus one engineer in the US, how much are you saving?

[00:47:17] Aleks Gampel: One tenth.

[00:47:19] Audrow Nash: Is it is one 10th? Wow.

[00:47:21] Oleg Kandrashou: Roughly,

roughly, maybe 1/5, something like that.

[00:47:26] Audrow Nash: Somewhere in between. And then, Oleg, because of your management style, this works really well with the quality checks you have in place. Do you compromise on quality with less expensive engineers, or do you get about the same thing, or do you have better processes to manage quality? How do you think of it, Oleg?

[00:47:47] Comparing Engineering Schools

[00:47:47] Oleg Kandrashou: Do you mean to compare the engineers from Belarus with the engineers in the United States?

[00:47:53] Audrow Nash: Yes.

[00:47:54] Oleg Kandrashou: this is a different

[00:47:55] Audrow Nash: Because you're saying it's a tenth of the

[00:47:56] Oleg Kandrashou: I can explain it. It's a different school.

In the US... I had a couple of engineers in the United States, and they have a little bit different thinking.

They try to solve the problem directly. They want to use a robot. They want to use a lot of gadgets, machines, and so on. But in the school of engineering from this part of the world, there is TRIZ, the theory of inventive problem solving. And this theory helps you think about engineering a little bit differently.

I will explain with a case. For example, you have a goal, you have a task: you have a room, and inside the room you have a lot of pipes, and in these pipes you have to find some leaks. You need to find the hole in a pipe in a room where there are a lot of pipes. How will an engineer from the US solve the problem? He will come up with robots that will inspect each pipe, using a lot of CNC software, AI machine systems, and so on, and will solve the problem. But an engineer from this part of the world will solve this problem a little bit differently.

He will switch off the light, put some luminescent liquid inside the pipes, and find the leak. This is the difference in thinking.

[00:49:36] Audrow Nash: Ah.

[00:49:36] Oleg Kandrashou: And based on that, in some solutions, US engineers will be 100 percent better at doing that. But in

[00:49:45] Audrow Nash: but in

[00:49:45] Oleg Kandrashou: In other stuff, when you need to solve the problem while reducing the cost and so on, this part of the world is better. These are different schools. That's why I explained it.

[00:49:58] Aleks Gampel: Audrow, I have something to add to that. So I get your question. You're asking,

with such a reduction in cost, is there a compromise in quality, essentially? We don't think there is, but not everyone can employ the system we've employed. Because, A, you need a founder that's top down innovation, which is Oleg.

And you need someone from that part of the world. I was born in Russia but moved to the US when I was eight. Oleg was born in Belarus and has lived there for a big portion of his life, and now obviously splits time. And unless you culturally understand how to manage that culture, it becomes very hard.

But we haven't seen a reduction in quality, mainly because we live in an era now in the U.S. where so much of the world has gone toward software. You're a software engineer; the last decade has been all software, and very few skilled engineers have gone into hardware.

We're now getting that back, but it's early days. These are young kids coming out of school. They're not gray haired engineers who've been through the trenches of building complicated manufacturing systems. We're getting back to that, but it's hard, amid this boom and renaissance in deep tech, to compete for the good engineers that do exist in the US.

So we found this alternative, which is not for everyone, to be able to get through what ultimately is just engineering risk.

[00:51:21] Audrow Nash: Yeah, I think so too. And another thing, Oleg, as you're saying, is that you can solve the problem differently. I've definitely seen this, where, say, companies in Australia will come up with a very clever solution because they don't have the same venture capital markets that you

[00:51:39] Oleg Kandrashou: You're right.

[00:51:41] Audrow Nash: So I think, I imagine it's quite similar. and that's really cool that you're able to de risk a huge technical problem.

[00:51:48] Oleg Kandrashou: Yeah, and that's why I explained to you about TRIZ. Have you heard about this theory?

There was one guy whose name was Altshuller. This guy came up with a theory and a methodology for how to solve engineering problems. And there is an exact way you need to solve them: there is a matrix of problems and solutions, and based on this matrix, you can easily find the way to solve this or that problem. And this theory helps engineers in this part of the world solve such kinds of problems very easily and very fast. A lot of U.S. based corporations have begun to use TRIZ as a method to solve their corporate problems. It's a theory of thinking.

[00:52:41] Audrow Nash: Let's see.

Yeah. I want to explore both your book and this TRIZ idea afterwards, 'cause these seem like really interesting ideas that I don't know much about. And they seem well principled, which is nice. I always like a good theory that is simple. My favorite things are simple systems with big effects.

I like those, and this sounds like one of those. Hell yeah, that sounds great. One question I had: you said everything comes all wired up. So I'm imagining you have this shipping container, you take off the walls, you have a whole bunch of stuff inside that's all connected, and you distribute it in a way that has been planned in your larger building.

How is it to pack these systems back up? Say I'm done: we've made a micro factory, we've used it, and now we want to move it somewhere else. How does that work? Is that a really challenging thing?

[00:53:40] Aleks Gampel: So, a clarification, because "mobile micro factory" can be misleading. A lot of folks assume that to go build a house, you send over an entire factory to build that house, but that's not really how it works. We go where there's demand,

[00:53:54] Audrow Nash: you have one for a bunch.

[00:53:56] Aleks Gampel: Where there's demand, we put up a mobile microfactory, and for the next several years it's servicing this 150 to 200 mile radius in a recurring manner. Maybe eventually, once it's paid itself back... generally, each of these factories can stand for 20 years. We're missing six and a half million homes in the U.S. You pick any dot on the map, and that's thousands of homes that need to be built. But to what you're saying, it can be picked up. There's a cost to it, about a couple hundred thousand dollars. You pick it up and you can technically go reuse it elsewhere.

[00:54:31] Audrow Nash: so the idea is the it's more about deployment of factories close to the end use than it is about picking them up and moving them wherever there's demand. I see.

[00:54:44] Aleks Gampel: yes.

[00:54:44] Audrow Nash: Okay, and you can technically pick them up but you want them to be there for many years because there's demand and this kind of thing.

Earlier we mentioned it's a product. So, like you were saying, there are the smaller mom and pop contractors who are doing home assembly and can't compete with the larger companies. How is this a product for them? I was thinking you'd do some sort of rental model or something, but it sounds like it's something you purchase outright.

[00:55:15] Productizing Factories

[00:55:15] Aleks Gampel: So it's a product for us. The way we think about Kube is that we don't love using this analogy, but I think Elon Musk has been very public about talking about the system. The reason why, for example, Tesla's More relevant than other, competitors is because it's not the model three, like the model three, it's an okay car, right?

You can definitely get comparable cars. What makes Tesla incredibly unique is their high margins. Why do they have high margins? Because their product is not really a car, it's the factory itself. So we think about the system. From day one, we were not thinking about building homes; we were thinking about how do you scale the system, which happens to be a mobile micro factory.

How do you copy paste that hundreds of times?

[00:55:59] Audrow Nash: very cool.

[00:56:00] Aleks Gampel: Now to get to commercialization, which is your part two of the question. Because we need to get, 200 plus mobile micro factories out into the market over the next 10 years, that's really our ambition and we think it's doable. The way we're doing it is very much tied to cost of capital. first venture and then how do you wean off venture and get into it? Things like project financing, equipment leasing, potentially, big credit lines from banks. So we first are white labeling our factories. The analogy there is, McDonald's is a good example. I could be the crappiest cook in the world, but tomorrow I can run the most P and L efficient F& B Outlet, F& B being food and beverage, very profitable outpost and it's because McDonald's has invested also hundreds and thousands of hours into their hardware, software, system, SOPs, supply chain, hiring standards, whatever.

We've essentially done the same thing. We're able to go find local partners in different regions to put up factories on our behalf that they finance, yet we take a license fee or operating fee on. With time though, we will eventually own our own, but our customer for now happens to be anyone we launch a factory with.

The factory's customer, so the customer of the by-product, which is the home, is mom and pop home builders within a 150 mile radius. I know it's a bit complicated, but I hope that explains it.

[00:57:34] Audrow Nash: The mom and pop part, I don't quite understand. Cause it sounds like it's a big capital expenditure to get these 40 containers to set up this factory. so someone else who has more capital would purchase this, I imagine. And then they would contract to the mom and pop shops.

To build things, is that the way you're thinking? I just don't quite understand the mom and pop part. Why would one of these big five or whatever companies not just buy your thing and make themselves way more efficient, especially because they probably have the capital? How do you think about this?

[00:58:11] Aleks Gampel: Yeah. So again, it's all tied to cost of capital. So right now we're very focused on making sure that the mobile microfactory itself is a very profitable standalone venture. So we resonate a lot with ultra high net worth families, larger developers slash general contractors. Private equity groups that understand manufacturing and industrials.

So they're just buying into

the

[00:58:35] Audrow Nash: like a real estate private equity group could be a great

[00:58:40] Aleks Gampel: a factory private equity group or an ultra high net worth family that comes from legacy manufacturing and industrials. So anyone that sees value in launching a manufacturing business, that's incredibly profitable. So we're essentially finding someone else to fund it and taking a licensed operating fee on it.

But the groups that benefit locally from the system are home builders, and generally it's smaller home builders that need to find better solutions because they can't compete with the really big guy that's in their market, the Lennar, for example. Maybe with time we start working with the top 20 home builders, but we believe right now it's very tough to work with some of these big corporates just given the space.

[00:59:23] Audrow Nash: Oh, the speed of them adopting anything. Yeah. Okay. So you could work with the big ones, maybe eventually when they see that part of their market share is being eaten up by smaller people that are doing these jobs, but they're hard to get into and they're hard to move and all this kind of thing. Is that kind of the

[00:59:42] Aleks Gampel: exactly.

And we have this three tier master plan over the next 10, 20 years. And the way we

[00:59:48] Audrow Nash: Love master

[00:59:49] Aleks Gampel: the way we think about it now is like most cost effective way to put out as many factories as possible and service B2B. But with time, as our cost of capital goes down, and we can just put out our own factories, control their destiny.

I know this is really Oleg's vision even more so than mine, but eventually the idea is: someone like you that's looking to build a home would go on our website, we would be the lead, you would design the home, and that home would get pushed to one of the nearby factories to get put into production and eventual assembly.

So like you bring the land, You design the home, we build the home. that's the ultimate goal, eventually, is to somehow become the developer as well, in a way.

[01:00:36] Audrow Nash: Ah, yes, that would be really cool. But it's, you can start with less and the system's already very complex. and so you can do this. Okay. That's really cool. thank you for clarifying that just for, so for our. Listeners,

[01:00:56] Should You Invest? Expected Cashflow from a Mobile Micro Factory

[01:00:56] Audrow Nash: if we have anyone who like say it is an ultra high worth individual or anything.

I think the two numbers that would be important are a rough ballpark of the cost of one of these, and the throughput in number of houses that you can expect from one of these factories. Any ballparks for

[01:01:18] Aleks Gampel: Yeah, for sure. And I think what's interesting we generally get a really wide gamut of folks that reach out to us because I think we're some of the first folks in the world to productize a factory, right? Because that's what we're offering is this

[01:01:30] Audrow Nash: It's super

[01:01:32] Aleks Gampel: what's interesting is part of our, our innovation is the ability to launch factories for about 10 million bucks. That's not a lot of money for manufacturing world at all.

If you look at the 90 infrastructure projects that Biden is essentially activating, those are multi billion dollar factories. We're talking about 10 million bucks for this mighty tiny mobile factory, right? It really is mighty, because we're able to spit out about 200 single family homes a year out of it.

That's the general output capacity. so these things become very profitable annual ventures.

[01:02:11] Audrow Nash: What would you, I'm sure you can do the math based on average housing, and your expected margins. do you have those numbers on hand? I think that would be really cool. 200

[01:02:21] Aleks Gampel: if you

[01:02:22] Audrow Nash: The cash flow to be

[01:02:24] Aleks Gampel: if all goes you're essentially. The capex to put up a factory is about 10 million bucks. And if you're successful in filling up the factory to full capacity, you're looking at 50 to 70 million in top line per year with about 10 to 20 percent EBITDA margins.

[01:02:45] Audrow Nash: 20 percent margin.

Okay, that's super cool.

I hope that one of these comes near me,

[01:02:51] Aleks Gampel: We're looking in Texas, actually. We're looking in

between you and Austin right now.

[01:02:57] Audrow Nash: yep, it's such a hot area, there's so many people moving here and it's just lovely, it's crazy because I've been here two years and the traffic has already increased, we're noticing it, but, okay, that's super cool, thank you for providing those numbers, I think that's a, good, thing to think about for people who might actually be interested in setting one up and i hope you set up one let me know if you set up one in San Antonio Austin area because maybe in a few years we're going to look to buy build a

house

and that would just be awesome

[01:03:33] Aleks Gampel: could always tech, we technically could always do you a favor. So like our first mobile micro factory, the first of a kind is set up in Eastern Europe. We don't like doing this, but we could in theory pack all the different kit of

[01:03:45] Audrow Nash: ship all the

[01:03:46] Aleks Gampel: because it's in about

37 stages. That's the max stages. We can pack it into several containers and just send it to you, and it will still come out cheaper than the way you're building in Texas today.

[01:03:58] Oleg Kandrashou: And in order, we're getting, robotic perspective of Cuby.

[01:04:05] Assembling Houses with Humanoids

[01:04:05] Oleg Kandrashou: We have a goal in maybe five or ten years to, to use humanoids just to do everything in our factories. And I, I, will explain to you why. We have a special software that controls and gives the tasks, this sort of complicated stuff that we did last three years.

The software works like this. The house itself has a finite number of kit-of-parts. When we design the house in our software, it's automatically split into all of these parts, and for each of them we know exactly how many hours and how much material we need to spend, plus the instructions and the sequence for where and when to produce it.

After that, the house goes to the factory and to construction. The worker in the factory pushes a button, and the system gives him the exact task he needs to do right now, based on the pipeline of houses and his personal skills. That's why this is some kind of Uber for the workers in the factory and on the construction site.

This is a non-stop task distribution manager that controls the tasks and gives them out with full instructions. After that, since we have a 3D model of all the houses, we have an Unreal Engine department that creates video instructions for us automatically, but it is not a generic video of how to do this or that.

It is a video based on the exact house that we are assembling now, generated stage by stage, plus we use augmented reality, because we have all of the 3D models to show where and what we need to install. Based on that, all of these tasks

[01:06:00] Audrow Nash: it's like custom IKEA with videos. That's

[01:06:02] Oleg Kandrashou: And based on this step, we exactly know the coordinates and the vector coordinates of each part that we need to install.

And based on that, for us it's easy to give the humanoids the tasks they need to do. Because we automatically

have all,

[01:06:18] Audrow Nash: following instructions. And you already have it in simulation.

[01:06:21] Oleg Kandrashou: And this is what, how the software is look like.

[01:06:25] Aleks Gampel: I was just going to add, given this is a robotics podcast, Audrow,

So we have a controversial take, right? A lot of folks trying to industrialize construction, or certain aspects of construction, automatically jump to robotics. But compare our industry to, like, Tesla. Tesla had almost 50 years of Toyota's production system to work off of, and what Ford had done in terms of the conveyor belt. Construction has never had even the slightest improvement in efficiency from a human productivity perspective. So we're like that first iteration. Before you deploy any robotics in the wrong direction, at negative ROI,

[01:07:05] Audrow Nash: Oh,

[01:07:05] Aleks Gampel: You first have to make humans more efficient. very little of our stations are actually fully roboticized and automated.

But with time, they all will be, when we reach that point of figuring out the right direction for efficiency, and once we cap that efficiency.

[01:07:20] Oleg Kandrashou: In lean manufacturing approach, it's called Kaizen. Kaizen is the process of non

[01:07:25] Audrow Nash: Ah, I've heard it.

[01:07:26] Oleg Kandrashou: That's why you need to start with labor, and after that, by the Kaizen, you need to change the man by the robot, but you need to do it only in that case, when everything is going very good.

[01:07:38] Audrow Nash: Very well. Yep. Yeah, you basically, it's like the programmer thing where premature optimization is evil. Kind of thing. You try, you basically, you're figuring it out. You're getting your process nailed down. When the process is really humming along, then you can automate the things that are the most time consuming for this kind of thing.

Something like that.

[01:07:58] Oleg Kandrashou: Exactly.

[01:08:00] Audrow Nash: That's a very cool approach. And yeah, I think what you've done seems to lend itself very well to being more automated, and it's really cool. I feel like a thing that's a very clever and probably a huge advantage for you guys is the nuance that it sounds like your scheduler has where you have the full build of the house and all the components and all the steps and you're using Unreal Engine to display things and provide the videos of exactly what to do which makes the job easier.

And yeah, I think it is like an Uber of work, because you can just do the next task; a person doesn't have to be involved in any of the planning. They get a specific task, they do it, they get paid for it. That's super cool. Of everything, I think I'm most excited about your scheduling setup and the whole infrastructure required to

make that work. That's super cool. Yeah, go ahead

[01:08:55] Aleks Gampel: Oleg, maybe you can cover some of the software because what Audrow is getting excited about is like one small subset. this software, I just got the first demo of Audrow, two days ago. I was mind blown, A, the amount of work that's been done, but also, like, how comprehensive it is.

[01:09:13] Cuby's Software Setup

[01:09:13] Oleg Kandrashou: Okay, I will explain it. We have four blocks in the software. Big blocks. The first of them, we call it CubyFactory. This is a software where we come up with the machines, all the function of them, plus all the process between the machines, where we need to put all the parts, all the materials, all the instruction, and so on.

Based on this, like a creator or something like that, we get the ability to give out the tasks in the factory. The second piece of software we call Cuby Construction. It's the same as the software for the factory, but for construction, because we have four guys in two shifts working across the 35 stages of the factory, and it controls all their tasks on the construction site. We have Cuby Logistics; this gives the drivers tasks for what they need to take and where to move it, because it's non-stop task distribution for the drivers and deliveries, and we have software for it. There's Cuby Pro, the software where we do all the registration and all the personal accounts that the workers can see:

their track record, their learning, how they're learning, and so on, because we use unskilled people. And the last piece that we have is built on Unreal Engine. Before a construction worker comes to work at the factory or the construction site, we have a game,

where he plays on the computer, doing the same job that he will do in the next month. After playing the game for a month, he knows exactly what he needs to do in the next couple of months, and he gets some small trainings based on that.

[01:11:05] Audrow Nash: that's so cool.

[01:11:05] Oleg Kandrashou: And the last stuff that we have, this is the software that we are working on right now.

We have augmented reality, where we take all the models from Unreal Engine and put them at the coordinates where they should be, to show where to install. In the next year and a half, roughly, we'll finish the software for the consumer, where the consumer themselves, like playing The Sims,

[01:11:32] Audrow Nash: you can walk through and see the space.

[01:11:35] Oleg Kandrashou: exactly, we have it right now already, we have it. right now, but my play, my goal is like that. The small family, the young family, sitting near the computer, playing the Sims game, Construct with very simple instruments, construct their house, and they see exactly the budget right now. He can change something, the budget will change, he push the button, and the system give the task to the factory to produce it, take the loan, and in two months they will have their house on their plot.

This is the goal that we're working toward as fast as possible.

Yeah,
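The instant-budget idea Oleg describes works because every part of the house has a known cost, so any design change can be repriced immediately. A minimal sketch of that, with all part names and prices invented for illustration:

```python
# Toy house configurator: the budget updates as parts are added or removed.
# All part names and prices are invented for illustration.
PART_PRICES = {
    "base_module": 80_000,
    "window": 1_200,
    "door": 900,
    "bathroom_module": 15_000,
}

def price(config: dict[str, int]) -> int:
    """Total price of a configuration mapping part name -> quantity."""
    return sum(PART_PRICES[part] * qty for part, qty in config.items())

house = {"base_module": 1, "window": 6, "door": 4, "bathroom_module": 1}
print(price(house))            # current budget

house["window"] += 1           # the family adds a window...
print(price(house))            # ...and the budget updates instantly
```

Because the house is a fixed kit of parts, pricing is a pure function of the configuration, which is what makes the "see the budget change as you click" experience possible.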

[01:12:10] Audrow Nash: I love that too, that the pricing is straightforward. Because you're like, you add a window here and it's oh,

[01:12:16] Oleg Kandrashou: not easy.

[01:12:18] Audrow Nash: yeah, it makes sense. But, oh my goodness, how nice that is. Because everything is so hand wavy now, it feels so having that where you see instant feedback for what you expect is really

[01:12:30] Aleks Gampel: It doesn't exist in construction today. In fact, actually, the way you get quoted on pricing,

between the general contractor and the developer who's hiring the general contractor. I'm sure you've heard it: real estate and construction in general is never on time and never on budget.

[01:12:49] Audrow Nash: definitely. Yeah. So this, the idea is very, nice. I feel like it's, like Tesla in their pricing where you just like, instead of going and negotiating a car, whatever, you just agree right when you select it on the website kind of thing.

[01:13:02] Oleg Kandrashou: It's the same, because each house that the factory built has a serial number. It's like a product.

[01:13:09] Audrow Nash: Yeah. Yeah.

[01:13:10] Aleks Gampel: makes our system more complex actually is Tesla, they only have, what is it now, four car options.

[01:13:16] Audrow Nash: A few permutations. Yeah,

[01:13:18] Aleks Gampel: We have hundreds of permutations. different size of a home, layouts, the combinations create different finishes. So it literally is in the hundreds of permutations. Now imagine trying to control that.

That's why the software is so important here.

[01:13:33] Audrow Nash: Definitely. Yeah, very cool. And with the nature of your system, it's going to be a bit more modular in building a home. And so then you can cost all that up and that's really cool. But we're getting to the time when I have a hard cut off.

[01:13:48] Scaling and Future Plans

[01:13:48] Audrow Nash: but so I would love to see what you guys think of so you say in the next 10 years, you want to build 220 of these micro factories in the U S what, does the future look like for you guys?

So you're going to be working towards this. What are the challenges? What are the new things you think you're going to be getting into? And actually, you currently have one in Belarus, is that correct?

[01:14:17] Aleks Gampel: Yes, it's our test signed our first commercial large scale contract for a factory that will go to Las Vegas. And then we have several more. Thank you. We have several more that we're trying to deploy, but we're going to cap for the next one. And we're going to cap for the next year or two and just perfecting the first three we're going to launch.

And then we can go really fast. And what we've built now allows us to go mass produce these already today, but we can't do it until we've perfected everything and have created the right cost of capital.

[01:14:51] Audrow Nash: Make sense. So are your next few years head down working on these, making your first three go and then from your first three you'll do 15 and whatever until you scale up and can do the larger because I imagine as the 10 years go you do more and more deployments, for this but I guess what does, it sounds like you've done so much interesting technology.

Is it really just scaling, getting the first ones out and then scaling?

Is that your outlook on

[01:15:22] Aleks Gampel: so I'll even tell you how we speak to investors is, we've done most of the technical de risking, our TRL, Oleg, would you say 7 going to 8 or 8 going to 9 at this point?

[01:15:33] Oleg Kandrashou: know, we, are doing right now TRL 8, and TRL 9 will be when we'll finish the first two factories as a commercial purpose.

[01:15:44] Audrow Nash: I don't know what it means

[01:15:44] Aleks Gampel: Dan Golden from NASA coined this term TRL. It has to do with like, how ready is the technology to be commercially viable? Yeah, so we've done most of the technical de risking, now we're switching over to risk around operations, execution, and scaling. That's really where most of the risk now lies.

[01:16:07] Audrow Nash: Yes. Okay. So that is the thing for the next 10 years, say for this kind of thing where you're going to be scaling, just for the TRL one. How, many levels are there for this? I'm not familiar with the, oh, so you're at the end and that's why you're ready for scaling. You're

[01:16:27] Oleg Kandrashou: and when you have the, the product ready to mass production and TRL 9 when you take the product from the conveyor belt.

[01:16:36] Audrow Nash: That's awesome. Okay. If there is anything that you hope our listeners and watchers, get from this interview, what, are your main takeaways? we'll start with Oleg.

[01:16:52] Oleg Kandrashou: I can say that don't forget, don't afraid to do the big projects. That's why you need to do non stop and if anybody will tell you that you're doing very, complicated stuff and you will not be at the end, don't listen to them.

Continue doing that.

[01:17:11] Audrow Nash: Hell yeah. And

[01:17:12] Aleks Gampel: I think I'll be less inspirational And more vain than I'll like, but we're always like good investors, we like folks that want to build factories that want to build homes. So anyone that wants to chat with us about how we can commercialize together, we're always ears.

[01:17:29] Audrow Nash: Hell yeah. Any best ways to reach out to you guys?

[01:17:32] Aleks Gampel: I think our website could be technologies. com and there's a direct submission form if anyone wants to talk to us. yeah, that goes directly to me and Oleg.

[01:17:43] Audrow Nash: Hell yeah. Awesome. Thank you for the time and this has been awesome to learn about what you guys are doing and I hope I can get one of your houses in a few years.

[01:17:52] Aleks Gampel: You're, you're, you're now a friend, so we'll, we'll try to be as helpful as possible.

[01:17:55] Audrow Nash: That's it. What'd you think? Do you think Cuby's approach is promising? Would you like to spring for buying one of the micro factories with me? I'm just kidding. I have nowhere near enough cash. Although it does seem like a good idea.

That's all for now. See you next time.

[00:00:00] Episode Introduction

[00:00:00] Audrow Nash: How can we make it easier to build robots? Good developer tooling is a great start, and one of the biggest parts of tooling in my mind is visualization. In this interview, I talk with Adrian Macneil, who is a co founder and the CEO of Foxglove.

If you're a robotics developer, you probably know them and use their visualizer. If not, you probably want to check it out.

What you may not know is Foxglove has a whole data infrastructure from recording data to getting it off the robot and selecting which data you want in the cloud. And of course, their specialty, visualizing it, which works for live data and recorded data.

In this interview, we talk about all the parts of Foxglove and how they fit together, opportunities for startups to build robot tooling, Foxglove's 2.0 release and their decision to move away from open sourcing their visualizer, and Foxglove's upcoming one day conference, Actuate.

Actuate looks like a blast. Its goal is to have conversations where robotics companies can share their learnings, kind of like what's done at academic conferences, but with more of an industry focus.

If you do decide to go, you can use the code Audrow, A U D R O W, to get 20 percent off. It's not a referral; I don't get anything if you sign up, but I'm flattered to have a promo code. Anyway, I hope to see you there. With that, here's the interview.

[00:01:28] Introduction to Adrian Macneil and Foxglove

[00:01:28] Audrow Nash: Hey Adrian, would you introduce yourself?

[00:01:30] Adrian Macneil: Hi Audrow, my name is Adrian Macneil, I'm the co founder and CEO at Foxglove.

[00:01:37] Audrow Nash: Awesome. would you tell me about Foxglove and how it came to be?

[00:01:42] Adrian Macneil: Yeah, absolutely.

[00:01:43] The Genesis of Foxglove

[00:01:43] Adrian Macneil: So Foxglove we think of as a visualization and observability platform for robotics. We've been around for about three years now as a company. Prior to Foxglove, I spent five years working in the self driving industry at Cruise in San Francisco. Worked on a lot of the internal infrastructure, internal tools, developer tooling, simulation infrastructure, a lot of those pieces at Cruise.

And I really came out of Cruise feeling like, number one, there was this enormous opportunity for automation, autonomy, and robotics in the world. Seeing that we can put a car on the streets of San Francisco, how many other tasks are done by humans today that are dangerous, repetitive, or difficult, and that are honestly much easier than driving around San Francisco?

If we have self driving cars, how come we don't have self driving tractors in an empty field, or self driving forklifts in a factory yet? So I came out with this huge excitement about what was possible with autonomy, but at the same time really feeling that the industry was being held back by a lack of tools, infrastructure, and frameworks,

having seen everything we had to build from scratch at Cruise. So the genesis of Foxglove was: how can we push forward the robotics industry? How can we build tools and infrastructure that really accelerate the industry? And we started with this piece around visualization.

We started with the Foxglove visualizer, putting that out there, making it easy for folks to bring in different types of data, for example from ROS. Very quickly, early on, we started adding support for other frameworks and other data formats like Protobuf, letting people bring this multimodal data into kind of a visualization framework. That was really where we started.

And this is a problem that's pretty unique to robotics and a lot of these ML workflows. If you think back to traditional observability, you think about tools like Datadog or Grafana: they're really built around time series data, metrics, lots of plots, but with no ability to show you 3D data, no ability to show maps, point clouds, or anything like that.

So we started with the visualizer, and then over time we've expanded into this broader observability: everything from how you're logging on a robot, how you're offloading data from robots, how you get it to the cloud, how you store and index it, and then how you can visualize and analyze that data.

[00:04:14] Audrow Nash: Yeah. So you're carving out a very nice vertical with this all around. All things data

[00:04:19] Adrian Macneil: Basically, yeah, we see the whole, data life cycle of, from your recording on the robot. because these are, things that are just, they work quite differently from a traditional Infrastructure space, right? Like I, I came to Cruise, I wanna say nine years ago or something at this point, from an infrastructure background, not from a robotics background.

And I saw how much robotics engineers struggled with a lot of these things. But the traditional tools that had built up around web servers and the web over the past couple of decades did not immediately apply to robotics.

[00:04:55] Challenges in Robotics Data Logging

[00:04:55] Adrian Macneil: So you think about some of these things like, what types of data are being logged, I mentioned, right?

With websites, the logs coming out of a web server are typically pretty lightweight in terms of data: you're generating a lot of text logs, and then maybe some time series metrics, things like your server response speed, memory use, and CPU on the server.

It's pretty lightweight telemetry in the scheme of things. And then this is deployed in a data center, so you've got effectively infinite bandwidth and very lightweight telemetry coming off web servers.

And then you go to robotics, and it's the complete opposite on both of those, right? Instead of very lightweight telemetry, you've got incredibly dense data: sensor data, video, point clouds, all of this kind of data. It's very multimodal and quite often very heavyweight.

And then, almost universally, terrible internet, or some,

[00:05:55] Audrow Nash: Yeah.

[00:05:56] Adrian Macneil: not a data center, let's put it that way, you've, there's, you're either, you're in a factory, you've got Wi Fi, you've got shitty Wi Fi, you've got, or even if you, even if it's a fixed position robot and it's connected with Ethernet, it's still in a factory.

You're maybe lucky if you've got a 100 megabit uplink or something. Or it's deployed in a field where you've got maybe some spotty 3G service. You're getting into maybe Starlink if you're lucky, but you're not going to be streaming multiple HD video feeds over that.

So you invert on both of these spectrums, and because of that, you can't just go and reach for an existing off-the-shelf observability stack. Something like Datadog is just completely non-applicable to the robotics space. And yeah, we see this as something where the needs of robotics engineers and robotics companies are unique,

and a specialized tool for this is important.
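To make the inversion Adrian describes concrete, here's a rough back-of-envelope comparison of data generated versus uplink available. All rates are illustrative assumptions, not figures from the conversation.

```python
# Rough sketch: sensor data generated vs. uplink available.
# All numbers are illustrative assumptions, not measured figures.

# A modest robot sensor suite, in megabits per second.
sensors_mbps = {
    "4x HD cameras (compressed)": 4 * 8,  # ~8 Mbps per compressed HD stream
    "lidar point clouds": 70,             # a spinning lidar, roughly
    "telemetry + logs": 1,
}
generated = sum(sensors_mbps.values())    # total data produced on-robot

uplinks_mbps = {"factory Wi-Fi": 50, "4G": 10, "spotty 3G": 1}

for name, up in uplinks_mbps.items():
    ratio = generated / up
    print(f"{name}: producing {ratio:.0f}x more data than you can upload")
```

Even under generous assumptions, the robot produces an order of magnitude more data than its link can carry, which is why recording locally and selectively offloading, rather than streaming everything, is the workable architecture.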

[00:06:54] Audrow Nash: Now I would love to go, I would love to go into each of the parts. of the you mentioned so you have the visualization logging transport like tell me about these

[00:07:05] Adrian Macneil: Yeah, absolutely. walking through an order of how we see the, data journey, the data life cycle. on a robot. recording is one of the, the first ones that we think about, right?

[00:07:20] MCAP: A New Standard for Robotics Logging

[00:07:20] Adrian Macneil: one of our open source projects at Foxglover is called MCAP. MCAP is a standardized kind of recording and logging format.

If you think about recording in a robotics context, the thing that a lot of people would be familiar with is rosbags. The rosbag file format was designed along with ROS 1, probably 15 years ago or more at this point, but it was one of the first formats that natively supports recording multimodal streams of data into a single file. We think that's a very important property: being able to take all of these different streams of sensor data and put them into a single file. We've seen the alternative at companies that were created pre-MCAP, or that didn't use ROS early on.

the alternative typically ends up being a junk drawer approach to logging, right? So there's just a folder on the robot. One piece of the stack is logging some CSV data. One piece of the stack's logging an MP4 file. One piece of the stack is dumping some random binary data to disk.

And then there's some YAML files thrown in for good measure. So this is the junk drawer approach to logging. Now, great, we get these logs off the robot. It's just like a folder full of crap. We zip it up and who knows how you go and interact with that later.

But we think that this nice property that ROS bags originally created, that everything should go through a single format, even if it's completely different types of data, even if some of it's structured data, some of it's unstructured, some of it's multimodal video and point clouds, some of it's time series,

you want to put this all through a single log format and make sure everything is timestamped consistently. That was a great property of ROS bags. But the robotics ecosystem is broader than ROS. We saw a lot of companies struggling where they were not using a ROS stack.

And therefore some of them weirdly converted data to ROS just for logging. Some of them did the junk drawer approach. Quite a few companies we spoke to had just invented their own sort of bespoke binary logging format: we've just created something internally, we just dump this binary data to disk.

Good luck interacting with any other services, or good luck even reading your own files from a year ago, because who knows how the format evolves. There's no kind of spec to it. So this was a problem that we set out to solve quite early on, back in 2021, with Foxglove.

We saw this as a key problem in the industry, that there were no open standards outside of ROS bag that people were using for logging, so we created the MCAP project. That actually got accepted, I think maybe a year or a year and a half ago, as the default logging format in ROS 2, which is great.

We're seeing a lot of adoption of MCAP within the ROS community, but also a lot of adoption of MCAP outside of ROS 2. There are a number of logos of big companies that you'll see on the MCAP website. I don't want to name ones that I'm not allowed to, so I'll just have a look at what's actually listed publicly on the website.

Yeah, there's some ones here like Anduril, Dexterity, Coco, Apex AI, Tangram, Waabi. So there's a few of these really large companies, both within and outside of the robotics ecosystem, that have adopted MCAP. So that's how we think about the logging piece, right?

And because the logging piece is on your robot, that needs to be open source, that needs to be a completely open standard, open framework. MCAP has libraries in about six different languages. There's C++ and Python and Rust and JavaScript and Go, anything that people are using on robots.

[00:11:05] Audrow Nash: Does it,

[00:11:06] Adrian Macneil: And,

[00:11:07] Audrow Nash: So how would you describe, at a high level, how does MCAP work? How does it unify things, and why is it better than ROS bag?

[00:11:14] Adrian Macneil: Yeah, you can think of MCAP as a successor to ROS 1 bags. So a ROS 1 bag was a file format that can take basically all of the channels of ROS data and save them into a single file. Every message that goes in is timestamped. There's an index at the end of the file in the ROS bag that helps you if you need to seek around within the file.

But there were a few challenges. The ROS 1 bag format was not bad, but there were a few challenges that people had with it. One is that it was tied to the ROS ecosystem, in two ways. One is that you could only log ROS data to it. So if you had other data, you've got some protobuf data or some other sort of generic data you wanted to save,

you were back to the junk drawer approach, now you've got to save that separately. Another way it was tied to the ROS ecosystem is that the rosbag libraries to interact with it are also only available through the ROS repos. You can't just pip install a general purpose package that would let you interact with it.

And this becomes a problem further on in your robotics development. You've got ML pipelines, you've got ETL pipelines, you want to do post processing on the data. You don't necessarily want to install all of ROS, or set up the ROS repos or any of this kind of stuff.

You've just got a standard Python app that you've written and you want to just pip install something to interact with it. That was not possible in the past with ROS bags. So you had a couple of these challenges, and then in the early days of ROS 2, there was a switch to a SQLite based logging format, where some of the design decisions and the history of how that decision came to be have been lost, I think, to the sands of time.

It appears to be a decision that was made in the early days of ROS 2, and I think the intent behind the SQLite logging format was, as I mentioned, ROS bags were hard to interact with outside of the ROS environment. The intent was like, hey, let's use an existing open standard, SQLite.

There's plenty of readers out there that would let you unpack that. Unfortunately, the downside is that SQLite is a relational database, so it's not designed for high performance, append only stream logging or anything like this. A SQL database is hyper optimized for jumping around and inserting rows arbitrarily and updating indexes, all of these things that are not at all necessary when you've got a stream of log data that you just want to flush to disk as fast as possible.

So people were seeing a lot of problems with that. Notably, it would basically just drop a lot of messages. If the CPU was being too taxed, it would just start dropping messages, and then they would not make it to your log file. So there were some challenges being faced there. And then, yeah, with MCAP, the design of the MCAP file format is quite similar to a ROS 1 bag.

The other analogy I would use for an MCAP file is that it's similar to an MP4 file. People think of an MP4 as a video file, but it's not, actually. An MP4 is just a container. You can put anything inside it, and inside the MP4 file you're putting streams of typically something like H.264 encoded video, plus some AAC encoded audio, or

MP3 encoded audio, or any of these kinds of things, right? So MP4 is a container format, just like Matroska, MKV, is also a container format. These are containers, and then inside them you have channels of audio and video. MCAP is similar to that, except inside it you can have channels of anything.

You can have channels of ROS messages, you can have channels of CDR messages from ROS 2 and DDS, you can have channels of protobufs, you can have channels of flatbuffers, you can have channels of JSON, whatever you like. So think about it as just a container format. Everything that goes in the container is timestamped so that it fits into the stream that you're writing.

And yeah, then you can go and unpack that later. Like I said, the libraries are available independent from the ROS ecosystem. Even if you're using it within ROS, that's a nice property.

[00:15:40] Audrow Nash: Yeah. Okay. That sounds really nice. So you have this container where you can store all these different channels, and you can pull them out, really nice. And then you also have support for different languages that doesn't require you to install the ROS ecosystem, on Ubuntu or Windows or

whatever it's supporting, which is a heavy dependency. And so that's very nice. And that probably makes it a lot nicer for, as you were saying, ML pipelines, people who are doing data analysis. They don't need to maintain a working ROS distribution on their machines.

[00:16:19] Adrian Macneil: Yeah, even within the Foxglove platform, some of our backend services that ingest MCAP files or stream MCAP files are written in Go, for example, and they have no dependency on ROS. We're not running on Ubuntu 20.04 or an outdated version of Ubuntu tied to a specific ROS distribution or anything.

It's just, hey, we've installed the MCAP Go package and that has everything you need to interact with it. So you have this level of independence that I think is,

[00:16:45] Audrow Nash: Very desirable.

[00:16:47] Adrian Macneil: and then of course you still have the, tight ROS integration for people that are using ROS.

[00:16:52] Audrow Nash: Definitely. Okay. And so that's the recording part. So you're putting it in a format that you can use. So MCAP, does it refer to anything, or is it an acronym?

[00:17:04] Adrian Macneil: Unofficially, it's Message Capture, which is a play on PCAP, which is Packet Capture. We don't expand it, anyway, we call it MCAP. We had a sort of internal vote on about 50 different names, I think. I don't even remember what the other suggestions were, this was three years ago now. But yeah, I think MCAP stuck out as being philosophically similar to PCAP files, which are a network capture format that allows you to capture all of the packets going through your TCP/IP stack. You can capture all of the packets going back and forth, and then you can go and inspect that later.

This is a higher abstraction, right? It's not at the level of individual packets, it's at the level of messages, ROS messages or other messages in a PubSub system.

[00:18:00] Audrow Nash: Nice. I like that. And it also makes sense because maybe people familiar with PCAP can map over their understanding, and it's just a different

[00:18:10] Adrian Macneil: Yeah, it's conceptually similar, although mostly it's just a catchy name. And then at some point, one of the team members threw in the hat logo, and it caught on. That was a little bit of a dad joke that made it in, yeah.

[00:18:31] Audrow Nash: Okay. So we have MCAP for recording. Where do we go from there?

[00:18:35] Data Upload and Ingestion in Robotics

[00:18:35] Adrian Macneil: got to get the data off the robots. So this is how we think about as, upload and ingestion. As I said, within the robotics space, usually you are logging very, dense data and then you've almost universally got a, pretty limited internet connection. bandwidth constraint is just a universal problem in robotics.

So how do we get the data off the robot? During early R&D, this is not really as much of a problem. People just log on the robot, probably the robot's right in front of them, they hook up an Ethernet cable and SSH in and copy some files off, or they just plug a portable hard drive into the robot and off they go.

As you start getting robots in the field, the robots are not deployed where your engineers are, where your team is. So when you want to start debugging data, you're not going to have your entire engineering team on site at wherever the actual robots are being deployed, even for the first few. And certainly once you get to a scale of thousands or tens of thousands of robots, you're going to have them deployed all across the country, all across the world, and your engineering team is possibly going to be in one place or possibly distributed, but probably not where your robots are.

So how do we get the data off? First of all, it depends what data you want to get off. We see a lot of people eventually move to some kind of rolling record type setup. This might involve having a day, or maybe max a couple of days', worth of data logged on the robot. You've got enough storage space on there for, say, 24 hours of log data.

But you're just overwriting it, unless something goes wrong, unless there's something notable.

[00:20:18] Audrow Nash: like a home security system or

[00:20:20] Adrian Macneil: Yeah, exactly. It's like, if someone breaks into your house, you've got to go quickly pull the tapes out. Or you're like a black box recorder in an aircraft or something like that.

But you do want to capture the data if, for example, something went wrong. Obviously, like the robot crashed into some packaged goods or fell down the stairs or whatever the robot was doing wrong. Those are obvious ones. The other areas where you might want to capture data are things going well.

You want to just capture a distribution of data for your ML training, things like that. You want to get a nice distribution of different events. But we usually see people moving towards some kind of rolling record type setup. The exception to that would be companies that are operating in public, so self driving cars, but also things like sidewalk delivery robots and things like this. There is usually a desire with those types of robots to save more than just what you could fit locally, probably more than a day, right?

You want to keep most of the data around for a month or a few months or something like this, in case someone calls up and says, hey, you cut me off on my bicycle, I'm suing you, or something like this. It may be weeks later that you find out about a potential incident.

[00:21:49] Audrow Nash: they generate so like autonomous cars at least so much data. So I imagine they need to offload it because they can't store it because

[00:21:58] Adrian Macneil: Yeah, so in the AV industry, it's a maturity curve. You get to the point that sort of, Waymo is at today, and you're absolutely going to see a rolling record and not saving everything that's happening. But until you reach that level of confidence, for all of the early years, it's usually something that looks like logging to hot swap hard drives on the car.

And then when the car is coming back to charge or to fill up or be cleaned or whatever service, we're just popping those hard drives out and switching them out for fresh ones. From there, of course, you've still got this just ridiculous quantity of data, because a self driving car can be logging multiple terabytes an hour easily.

So you've got to get all of that data off the car and up to the cloud, and that requires a pretty extreme internet connection. But yeah, the AV use case is a little bit unique in that world, one where you're operating in public and you're a lot more sensitive to things that potentially might have gone wrong that you might not find out about.

You don't see that as much in a warehouse AMR or something like that. Usually the amount of damage that an AMR can do driving at three miles an hour in a warehouse is fairly limited. And also a lot of the time, these warehouse AMRs have a lot of people around, and they have buttons on them where you can report incidents and things like this.

So you might be perfectly happy just saving a minute or two of data every time there's something notable and then just throwing the rest away.

[00:23:36] Audrow Nash: Okay. So back to this data upload problem, what do you guys have in place? So I guess what people have done is they often record the data in place, but if you want it for machine learning, and you want to train on that data, you want to get that data off.

And connectivity is, not very bad generally, right, but often can be very bad. So how are you guys solving this, or what's your approach to getting data off the robot?

[00:24:13] Adrian Macneil: Yeah, exactly. And so we offer, and I think there's a lot more to build out here over time, but we offer an agent, we call it the robot agent, that you can drop on a robot. And then it'll watch a directory of log files, of MCAP files or bag files. So let's say you're running a ROS robot or a ROS like robot.

It's recording a lot of data, to MCAP files or to ROS bags. Those files typically get rotated every 60 seconds to a few minutes. Quite often people are splitting the files up every few minutes just to make them easier to work with. We have an agent that'll sit there and just watch that directory,

keep an eye on new files that are coming in, and it notifies the server in the cloud about what files are available. And then we have an understanding that, hey, you've got this fleet of robots out there, and this data is available for import from the edge. And then you can trigger an upload in one of two ways.

You can either do it from the robot, so you can send a message to the robot agent to say, flag this particular file, or, can you queue that for upload? Or you can do it as a pull based mechanism from the server, from the cloud. So I can go to the web portal, I can click on that robot, I can see the timeline of data that's available.

I can't view it yet. I know that there's data there, but I don't know what that data is. The push based workflow, where the robot initiates it, is great when the robot identifies something wrong, because the robot identifies that there's been a collision, or the robot identifies some other kind of error that it wants to report.

It can trigger an upload. The pull based workflow, where the user is going to the cloud and requesting it, is good when they get a call from the operations team saying, hey, at 2pm this afternoon there was this incident with the robot, can you guys take a look at it? You might want to go in and just grab 10 minutes of data around about when the problem was claimed to happen, pull it from the robot, and have that sent to the cloud so that you can then go investigate.

[00:26:24] Audrow Nash: Okay. So you're not doing, it's not towards like dashboards

[00:26:29] Adrian Macneil: Yeah, no, we we've yeah, exactly. We've, stayed away from that. there are a lot of tools that do that already. you've got, Foment is probably one of the most known ones, but, in all, but there's, there's a lot of these tools that are creating, more of a live overview.

I call it like a fleet management platform. We don't do anything in the fleet management space. It's a different problem category, I think, to what we're solving. We're focusing on the needs of robotics engineers, robotics developers, and getting what is usually async access to raw, detailed logs about incidents, versus a fleet overview, which is more of an operations workflow of just, hey, I want to see where my robots are.

I want to click on this one and send this one home. I want to see how many tasks per hour are being completed by different robots. This kind of stuff is a different kind of product category and problem space. It's adjacent to what we're doing, but it's not something that we have focused on, and we're unlikely to be focusing on it in the near future.

[00:27:40] Audrow Nash: Okay. So that sounds cool. So basically you have your agent watching a folder, and it's letting you know when there's something there, and you can pull it from wherever, whatever computer you're on when you're not close to the robot, or it can push it periodically.

And it's really just, you're going to upload the whole thing when it's pushed or pulled, to wherever it is that it should be sent?

[00:28:08] Adrian Macneil: Yeah, and snippets of data. There are different ways to configure this. Another pattern we see sometimes is people will actually log two separate sets of MCAP files on their robot. They'll be logging one that's all the lightweight telemetry on their robot,

so things like just the robot pose, maybe GPS position, system state, just stuff that's very lightweight, and then a separate file that's logging all of the heavyweight sensor data, the video, the point clouds, anything like this. The beauty of that setup is that you can actually just always upload the lightweight stuff.

So you always get some amount of detail, not immediately, because it's still an async batch upload workflow, but at some point within maybe five to ten minutes of real time, you're seeing enough data that you can figure out where a robot was at any point in time.

That might help you narrow down on, say again, someone calls you up and says there was a problem at 2 p.m. That will help you narrow down on exactly when that problem was and roughly where the robot was. Then you can select a time range and go and fetch the more detailed HD logs.

So that's quite a nice pattern that we see,
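The two-file pattern described above can be sketched in outline. The topic names here are assumptions for illustration; the point is just that messages are routed by topic into a cheap, always-uploaded stream and an expensive, on-demand stream:

```python
# Assumed topic names, purely for illustration.
LIGHTWEIGHT_TOPICS = {"/pose", "/gps", "/system_state"}

def split_streams(messages):
    """messages: iterable of (timestamp, topic, payload) tuples.

    Returns two streams; in practice each would be its own MCAP file:
    'telemetry' is tiny and uploaded continuously, 'sensors' holds the
    heavyweight data (video, point clouds) fetched only on demand."""
    streams = {"telemetry": [], "sensors": []}
    for ts, topic, payload in messages:
        key = "telemetry" if topic in LIGHTWEIGHT_TOPICS else "sensors"
        streams[key].append((ts, topic, payload))
    return streams
```

Because both streams share timestamps, the always-available telemetry acts as an index into the heavy file: find the 2 p.m. incident in the cheap stream, then fetch only that slice of the expensive one.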

[00:29:23] Audrow Nash: that is a really cool one.

[00:29:26] Adrian Macneil: But yeah, you do all of that, and then your data gets to the cloud, right? And again, one of the philosophies of how we build Foxglove is trying to make this really modular, because not everyone wants all of these pieces, right?

Some people have already figured out their logging, some people have already figured out their upload workflows, they don't need help with that. but, others haven't. We just say, pick and choose the components that make sense for

[00:29:49] Audrow Nash: Yeah. So you support the full vertical of, like, I record my data, I do analysis on my data, or whatever it might be, I use it for training. But you can use it for only specific parts. So if you just want to upload the data, or if you just want to visualize the data, or you

[00:30:09] Adrian Macneil: Yeah, if you just care about uploading and organizing your data in the cloud, that's a thing you can do. Typically we find, once you get to the last part we'll get to, around visualization and analysis, those workflows are pretty common across everyone.

But yeah, absolutely, a lot of people have come to us and initially wanted to focus on the data workflows, and then they get into the visualization. Sometimes it's, start with the visualization: I'm happy with how I get data off the robot, I just want to visualize it. Great. That's fine.

We can start at that end. but yeah, as we're getting all of this, we're triggering these uploads, we're getting logs coming off the robot.

[00:30:46] Cloud Indexing and Data Management

[00:30:46] Adrian Macneil: then, you get to the data in the cloud and again, We see a lot of, robotics companies, at least pre Foxglove, was just, there's an S3 bucket, we've got an Amazon S3 bucket, we've dumped a bunch of bag files in it, or something like that.

So if you know the file name you're looking for, great, but if you don't, good luck to you. So the problem we set out to solve with the cloud indexing was: okay, as these files are coming in, let's do an ingestion step where we look at metadata on the file.

We look at what time range is covered by this file. We look at what robot it came off, what topics are inside the file. We go through this ingestion step, and we store all of that in a nicely organized, indexed bucket, either Foxglove hosted, or it can be hosted in your own VPC with any cloud provider.

But we go through this ingestion step, we organize it into a nice folder structure within your bucket, and then we also give you a web UI so that you can search across it, right? You can just go in and type in and search metadata. You can look at a timeline view. You can do a few more of these things to get a more visual overview.

If you know which robot you're looking for and you know the time range, but you don't know the name of the file, it's much easier to find the particular logs that you're looking for.
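In outline, that ingestion-plus-search step amounts to building a small metadata index over the uploaded files. This is a simplified sketch, not Foxglove's actual implementation; the field names are invented for illustration:

```python
def ingest(file_metadata):
    """file_metadata: one dict per uploaded file, e.g.
    {'name': ..., 'robot': ..., 'start': ..., 'end': ..., 'topics': [...]},
    as extracted from each file during the ingestion step."""
    return sorted(file_metadata, key=lambda f: (f["robot"], f["start"]))

def search(index, robot=None, t0=None, t1=None, topic=None):
    """Find files by robot, time window, or topic instead of by file name."""
    hits = index
    if robot is not None:
        hits = [f for f in hits if f["robot"] == robot]
    if t0 is not None:
        hits = [f for f in hits if f["end"] >= t0]    # file ends after window opens
    if t1 is not None:
        hits = [f for f in hits if f["start"] <= t1]  # file starts before window closes
    if topic is not None:
        hits = [f for f in hits if topic in f["topics"]]
    return [f["name"] for f in hits]
```

The payoff is exactly the workflow described: "robot X, around 2 p.m." resolves to concrete files without anyone remembering a naming convention.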

[00:32:08] Audrow Nash: Yeah, I feel like this is something that's so simple and so valuable

[00:32:13] Adrian Macneil: yeah, and everyone has, and, I don't claim a lot of the stuff that we do, it's not rocket science, it's just really undifferentiated, because you see every company build a web UI for looking at ROS bags or something like this, and you're like, this is just silly, everyone shouldn't need to build this.

[00:32:31] Audrow Nash: Yeah, for

[00:32:33] Adrian Macneil: yeah,

[00:32:34] Audrow Nash: And then a UI for filtering through the data. yeah, I'm seeing a lot of, analog to like home security

with this, 'cause you can look for this camera at this

[00:32:46] Adrian Macneil: right, pull

[00:32:47] Audrow Nash: look for this robot from this time window. And you might be able to filter by a specific event or something that occurred. So you're pulling out metadata,

by very simple processing of what has been uploaded. You get the start time and end time. Maybe you key or tag it with things like, this is this robot in this warehouse, this

and

[00:33:09] Adrian Macneil: Yeah, can, yeah, tag all of that assign it to what we call a device, which is your robot, and then you're able to view all of that data on a timeline, or you'd be able to pull up just the logs for particular. just for a particular robot, or if if you are tagging them with metadata, such as which factory or, which site it was operating at, tag it by that kind of metadata as well.

But you've got this kind of quick search across all of the data.

[00:33:37] Audrow Nash: it. Okay. I think that's really nice. and that feels like something where companies might plot along in their like terrible naming convention. Like the file

[00:33:47] Adrian Macneil: It's got the robot

[00:33:49] Audrow Nash: where It's this is the robot. Yeah. And then, that is probably, it works, but it's super painful.

especially as you get a lot of robots and, a lot of data. And then this makes it so much easier. And then you have the UI, which is really nice for filtering and things okay.

[00:34:07] Adrian Macneil: a lot more that we can do there too, today we let you search by metadata that you've pre added, some of these things like you said by robot or by, other metadata you had like factory and things, but there's a lot of other ways that you could imagine exploring this data, one we would like to add in the future would be a sort of a geospatial search over,

Can I see all of my data on a map? You'd just see like a heat map, and then say, okay, I want to pull all of the logs, all of the times that we've been past, whether it's an indoor map or an outdoor map, it's a similar kind of concept. I want to pull all of the times that robots have been to this corner of the factory, or they've been through this intersection, or things like that. Those types of searches are things that we would like to add in the future to make it really easy to pin down the particular data that you're looking for.
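A geospatial query like that reduces to filtering log files by the poses recorded in their lightweight telemetry. Since this is described as future work, the following is purely a hypothetical sketch of the concept, with made-up data shapes:

```python
def logs_in_region(pose_index, x_min, y_min, x_max, y_max):
    """pose_index: {file_name: [(x, y), ...]}, poses pulled from each
    file's lightweight telemetry. Returns the files whose robot passed
    through the axis-aligned box, whether that box is a factory corner
    on an indoor map or an intersection on an outdoor one."""
    return [
        name
        for name, poses in pose_index.items()
        if any(x_min <= x <= x_max and y_min <= y <= y_max for x, y in poses)
    ]
```

At fleet scale you'd back this with a real spatial index rather than a linear scan, but the query shape (region in, log files out) is the same.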

[00:34:59] Audrow Nash: Yeah, I feel like there's a lot. You could do a lot of analysis, a lot of displaying stuff. The sky's the limit for what you could do to expose different data things there, but that seems very valuable. Spatial seems super valuable, specific events could be valuable, having them write their

[00:35:19] Adrian Macneil: Yeah, we do have an event concept. So whether those are being created through the API from things that are happening on the robot, or whether you just go into the UI and annotate your own events manually, you're tagging points in time where notable things are happening.

And

[00:35:39] Audrow Nash: the super cool. Okay. So we have this cloud indexing, so you're able to get your data very effectively. where do we go from there?

[00:35:50] Adrian Macneil: Analysis and, in particular visualization.

[00:35:53] Visualization in Robotics

[00:35:53] Adrian Macneil: So visualization is, like I said, where we started, the end of the problem that we started with.

[00:35:58] Audrow Nash: Yeah, I think in our last interview, which might have been two

[00:36:02] Adrian Macneil: It was a while ago. Yeah.

[00:36:03] Audrow Nash: At that point, most of what you had was the visualization, and you probably were working on the cloud part, and it was pre MCAP,

if I remember correctly. So, okay, yeah, time flies.

[00:36:16] Adrian Macneil: Yeah, it does. Yeah, a lot has been built in the past few years. But yeah, so we started with this visualization. And again, another difference between observability with servers and traditional infrastructure versus robotics: with traditional infrastructure, you think of a tool like Datadog or Grafana or something like this, and you're creating a lot of nice pretty plots and dashboards and things, but usually only with data that's already in the cloud.

This is why the arc of data goes from logging, to getting it to the cloud, to indexing, to visualizing it. But in robotics, visualizing cloud data is only one of, I would say, three common workflows. So one is visualizing cloud data. Another is visualizing local files:

I've just got some files that I've copied or SCP'd off a robot, or not uploaded, or have lying around locally. Another is live visualization. So this could be, I'm connected to a robot over Wi-Fi or Ethernet, I'm literally just plugged into a robot and I want to see the data coming in live, and I want to check that the calibrations are correct and that the data makes sense. Or another option with the live one is if you're running some kind of re-simulation, or some kind of simulation on your local workstation, your local laptop or desktop. So you've got a robot stack running, you might not have a full hardware in the loop setup or anything, it's just running a stack locally, but you still want to be able to see the output of what your stack thinks is happening, what your stack thinks it's seeing.

[00:37:57] Audrow Nash: i'm thinking very similar to RViz this

for

[00:38:01] Adrian Macneil: yeah. And Rviz, really only supports the, live visualization workflow. So when you want to visualize things with Rviz, you can connect to a live running ROS. You can see that in real time. But if you want to go and replay, if you want to go and explore a BAG file or MCAP file, You have to separate, you have to go start up ROS in the background.

You have to run rosbag play. So you've got five different terminal windows in the background with your ROS stack running and your playback running. And then if you want to pause and rewind something, you've got to do that in the terminal,

[00:38:34] Audrow Nash: it's a pain.

[00:38:36] Adrian Macneil: which is so there's no, with something like Rviz, you have no native support for a kind of, for a file based or a cloud based, replay workflow.

You only have this kind of live visualization.

[00:38:47] Audrow Nash: Now, one, one thing with RViz is it's running on your local machine. I know Foxglove can be but running through Electron or however you're displaying it these days. but so RViz might be more performant, would you think?

[00:39:05] Performance and Platform Considerations

[00:39:05] Audrow Nash: So if data is coming in really fast and you want to display, or does it not matter because it's human perception

[00:39:10] Adrian Macneil: Yeah, it really depends, yeah,

[00:39:13] Audrow Nash: how do you think about this?

[00:39:16] Adrian Macneil: It depends a lot, right? In terms of the rendering, our 3D rendering and things like that are all running through WebGL. A lot of the things are actually already hardware accelerated. People think about a browser environment, but the browser has evolved a lot in the past 10 years, right?

You've got companies like Figma that are really pushing the boundaries of what's possible. You've got services like we're using now, services like Google Meet, where they're doing live video and streaming in the browser. Most of that code that's running is C code, right?

Most of it is running in the background of the browser stack that's implemented in C, and you're just triggering it from JavaScript. The same is true with a lot of this 3D rendering, right? So a lot of the stuff that's happening in Foxglove when you're looking at a 3D view, like you would see in RViz,

that's running through WebGL, which is then, in most cases, hardware accelerated. Same when we do things like H.264 video decoding. These days there are actually browser APIs that will let you use hardware-accelerated decoding for video and things like that.

So we leverage a lot of that.

[00:40:28] Audrow Nash: That's awesome. Yeah, it's so cool. It's like a rising tide lifts all boats. You get to benefit from the massive investments going on into the browser.

[00:40:37] Adrian Macneil: Yeah, and it's a great platform, right? The browser is very extensible, it's very easy to develop with, it's pluggable, right? We can let people create plugins and things very easily, or create little forms and things like that that they want. And then also, the other thing with RViz is, it mostly just gives you the 3D panel, right?

If you want to do things like plots, you're popping out to a separate RQt or your PlotJuggler or things like that. Or maps and things like this. You can't bring it all into one place.

[00:41:11] Audrow Nash: a bunch of windows open for

[00:41:14] Adrian Macneil: But the performance thing is not a major thing that comes up.

It really depends on what people's data and workflows look like. And there's such a variety within the robotics space that it's hard to say concretely that X works better than Y without looking at the type of data that people are trying to use it with.

[00:41:38] Audrow Nash: Very cool. Was this the case before, say, two or three years ago when we talked? Because I remember, or maybe it's just the general perception, that because it's a web app, it can't have good performance. It's surprising to me and very nice to hear that you're getting wonderful performance and it's hardware accelerated.

[00:41:58] Adrian Macneil: Yeah. I will say there's a lot of stigma out there around Electron apps and things like that. But I think in some cases Electron is a victim of its own success, right? It's created a platform that makes it so easy to create

cross-platform applications without having to go into some of these more esoteric problems for specific platforms. There are actually some other alternatives that have come out more recently. There's one called Tauri, I'm

[00:42:39] Audrow Nash: looks

[00:42:39] Adrian Macneil: Tauri.

So this is a middle ground, right? It's still letting you use the web stack to create an application, but it's using the OS-specific built-in web view components. The nice thing about that is it reduces the file size, since you're not downloading and shipping a specific version of Chromium with the build.

That is also its biggest downfall, right? Because the nice thing about the way that we package and distribute Foxglove is that we can guarantee it doesn't matter whether you're on Ubuntu 18 or 20, Mac, or Windows 10 or Windows 11. We're not trying to guess at what a native web view is doing.

We're not dealing with problems like Safari being really behind on a lot of cutting-edge web features, for example. We're not dealing with, hey, this problem only seems to happen on Mac, very occasionally.

[00:43:36] Audrow Nash: It's better for you guys as developers too. You have far less maintenance burden. And with ROS, just as a developer on the Robot Operating System, every time there's a problem on Windows, no one wants to deal with it.

[00:43:53] Adrian Macneil: No, yeah. And then also, good luck if you're on a Mac, it's basically unsupported. They call it tier three or something, but, yeah.

[00:44:01] Audrow Nash: You can build from source, but we don't

[00:44:03] Adrian Macneil: Exactly. Yeah. So these are huge benefits, especially when you're a small team trying to focus on building a great product.

I will say performance issues come up from time to time, and customers bring them to us, and it's usually a case of looking at specifically what's different about their data and how we can improve that. It is a thing that we spend ongoing effort on, and if folks do have problems, we encourage them to bring them forward, because like I said, sometimes it's hard to know where exactly an issue is arising from.

[00:44:39] Audrow Nash: Definitely. Okay, going back to visualizations. And it is an Electron app, which you

[00:44:47] Adrian Macneil: Correct. Yeah, we built it as a web application. Most of our users are on desktop, actually. I think, from memory, a majority of people are using the desktop app more so than actually using it in the

[00:45:02] Audrow Nash: through the browser,

[00:45:04] Adrian Macneil: The nice thing about the desktop app is you can just install it and it works on any platform.

but

[00:45:09] Audrow Nash: cool.

[00:45:11] Adrian Macneil: So, yeah, we have that. And I guess we keep saying this word visualization, which I assume to most roboticists is a familiar concept, but it's surprisingly unfamiliar outside of robotics, right? You go to an average VC and say we make robotics visualization tools,

you'll get a blank stare, right? This is not a category that exists for them.

[00:45:34] Audrow Nash: what do you need to visualize?

[00:45:36] Adrian Macneil: Right, what do you mean? Who would use that? You get all these questions, like, why would I use that? If you're not a robotics developer, I don't even know how to explain it.

[00:45:47] Audrow Nash: to realize how much in our

bubble we are the idea that's not clear is, surprising to me too. And I suppose you talking to VCs, this is something that's really come up they're out of

[00:46:00] Adrian Macneil: Yeah. Because every industry has its uniquenesses, but especially as you look at investors and things, a lot of the time industries are a bit more mature and a bit more mapped out, and they can go and see these little logo clouds and understand how all the

pieces fit in. They go, oh, you do a thing in XYZ category. And then you pop up in a category that's not represented on their little logo cloud, and you have to explain from first principles. But I do think, and if anyone listening is not a robotics developer, visualization is something that is critical to robotics development from day one of your very first robotics hobby project.

Whether we're working on autonomous robots in space, or autonomous vehicles, or autonomous trucks in production, right? You put together your first toy robot and you stick a camera on it, and the first thing you're going to want to do is, all right, can I see the data coming out of the camera?

Can I see where this robot thinks it is in its surroundings? You're doing some SLAM, or you're doing a 2D lidar, or, even with the most basic robot, you immediately run into this problem of how do I visualize what the robot is seeing and thinking?

And

[00:47:19] Audrow Nash: Yeah. How do I, check where I am or where the robot

[00:47:23] Adrian Macneil: And it's an inherently multimodal problem. It's not something that you can just look at text logs or plots to understand. You need to be able to see across all of this data. You need to see, in 3D space, a model of the robot and what it thinks its surroundings are.

And things like lidars and radars that are giving you a sense of depth and space. And then you also want to see video data coming out of cameras. You also want to see GPS data on a map, if it's got a GPS. And you do have a lot of plot data.

You want to plot things like joint positions or motor torque and velocity, things like this. You do have text logs as well, but you have all of this, right? You have plots, you have text logs, you have maps, you have 3D, you have cameras, and so you need a tool that is inherently a multimodal visualizer, and that's what Foxglove does.

[00:48:17] Audrow Nash: Yeah. So it makes it so you can understand the state of the robot, so you can understand what it's doing, so you can develop things, so you can understand what might have gone wrong or see if there are any errors or anything. Lots of uses.

[00:48:37] Adrian Macneil: You could say that it helps you see how the robot senses, thinks, and acts.

[00:48:45] Audrow Nash: Yeah, very

[00:48:46] Adrian Macneil: Yeah, it's: what is the robot seeing? What is the robot thinking? Which is usually, what is the robot's mental model of the world, and what are the actions it is planning on taking?

Yeah.

[00:48:59] Audrow Nash: Cool. Okay, so thinking about visualizations, you've mentioned several, but what are the big classes of visualizations? I'm thinking you have your plots, you have your 3D one where you have the point clouds and maybe your robot model, something like RViz, and maybe then you just have logs, say messages that you want to introspect. How do you think of the big classes?

[00:49:33] Adrian Macneil: Yeah. 3D and images are definitely one of the larger buckets. All of these things are critical, right? But I would say 3D and images. So we have a panel for visualizing 3D data, and we have a panel for visualizing 2D images. They're actually the same panel under the hood, which is a fun fact.

So the 3D scene can render anything from a 3D point cloud, to 3D bounding boxes of objects, to, like you said, a 3D robot model and the robot's pose, so where it's holding its arms or joints, things like this. Also in the 3D panel, you can render a 2D image into that scene, right?

So if you've got a camera mounted on the robot, you can project that camera image into 3D space to see what it is seeing from that perspective. Conversely, in the image panel, you can start with a base layer of the image that we're seeing, so this is just 2D pixel coordinate space,

but you can project into that image anything from your 3D scene as well. So you can project point clouds over it, as long as you've got transforms, right? As long as we understand the frame of the camera and the relationship between your camera and your lidar or things like this, we can project point clouds over the camera image.

We can project bounding boxes over the camera image. So you can actually see anything from the 3D scene in the 2D scene, and anything from the 2D scene in the 3D scene, and even though the UI for them is slightly different, they're actually the same piece of code under the hood.
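That projection between the 3D scene and the image comes down to a standard pinhole-camera transform. Here is a minimal sketch of the math, with made-up intrinsics; it assumes the point has already been transformed into the camera's optical frame, which is what the transform tree provides:

```typescript
// Minimal pinhole-camera projection: camera-frame point -> pixel coordinates.
// Assumes the point is already in the camera's optical frame
// (x right, y down, z forward), courtesy of the transform tree.

interface Point3 { x: number; y: number; z: number }
interface Pixel { u: number; v: number }

// Camera intrinsics: focal lengths and principal point, in pixels.
interface Intrinsics { fx: number; fy: number; cx: number; cy: number }

function projectPoint(p: Point3, k: Intrinsics): Pixel | undefined {
  if (p.z <= 0) return undefined; // behind the camera, not visible
  return {
    u: k.fx * (p.x / p.z) + k.cx,
    v: k.fy * (p.y / p.z) + k.cy,
  };
}

// Example: a lidar point 2 m ahead and 0.5 m to the right of the camera.
const k: Intrinsics = { fx: 600, fy: 600, cx: 320, cy: 240 };
const px = projectPoint({ x: 0.5, y: 0, z: 2 }, k);
// px -> { u: 470, v: 240 }
```

This is why the transforms matter: without knowing the camera-to-lidar relationship, there is no camera-frame point to project in the first place.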

So that's one really big bucket, I would say. And then the other big one is just heavy use of plots. Plots are very important, obviously, for time series data. There are lots of different types of things that people are trying to look at there.

This can be things like encoders, motors, joints. Any numeric time series data that you've got, you can plot there. Or not time series: you want to plot things like planned velocity in the future, or you want to plot movement in 2D space. All sorts of things you can do with plots.

And then, yeah, there are other ones that get used.

[00:51:54] Extensibility and Customization

[00:51:54] Adrian Macneil: We have about 20 different types of visualizations that you can do in Foxglove, and an extension API so that you can create your own visualization panels. But the raw message one, I think you called out, so just being able to inspect a specific message. Again, in the ROS world, you can do this by opening yet another terminal window and doing rostopic echo.

Look at an average ROS developer workspace. There's always like five terminal windows, plus RViz and RQt, there's just 18 windows arranged everywhere.

[00:52:24] Audrow Nash: they all have their

[00:52:26] Adrian Macneil: We want to eliminate this.

Yeah, so we have what we call the raw message panel. That's basically like a rostopic echo. It just lets you inspect individual messages and pause them. We have a GPS map, an outdoor map, that's one that's quite helpful if you're

[00:52:48] Audrow Nash: Or outdoor.

[00:52:49] Adrian Macneil: working on outdoor types of robots. You just want to display that on the map, and sometimes there might not be a coordinate mapping from the GPS data to your 3D data, so you can't necessarily just drop a street map, like an OpenStreetMap type thing, into your 3D panel, but you do want to see where the robot is in space.

And, yeah, a bunch of other things like that. Diagnostics, text logs, a lot of these kinds of things.

[00:53:20] Audrow Nash: Hell yeah. Does Foxglove visualization fully replace RViz and RQt, or is there anything missing?

[00:53:33] Adrian Macneil: Yeah, great question. I would say it's probably impossible to fully replace RViz for 100 percent of people's workflows. I guess there are a couple of buckets of things I'll call out. One is people that have created custom RViz plugins. We don't have any way to automatically import those, or

[00:53:54] Audrow Nash: Because it's Qt under everything, and you guys are

[00:53:55] Adrian Macneil: A Qt codebase, and we've got a web stack. We do have extension APIs, an extension concept. You can publish extensions either publicly or internally to your Foxglove organization. But you're not going to be

[00:54:09] Audrow Nash: And with no custom plugins on RViz, is it at fair parity, or are there gaps?

[00:54:16] Adrian Macneil: I would say if you're running no plugins, it's pretty good. The other bucket besides custom plugins I would pull up is the MoveIt, and I think Nav2,

[00:54:26] Audrow Nash: All those integrations.

[00:54:27] Adrian Macneil: Yeah, we have some of those features, but there's definitely a handful of things where I would say we're not at parity with RViz plus the MoveIt or the Nav2 plugins.

[00:54:39] Audrow Nash: 'Cause they've just been developed for RViz?

[00:54:43] Adrian Macneil: right, exactly,

[00:54:44] Audrow Nash: That makes sense. Yeah, those projects are huge, so it'd be a lot of work.

[00:54:50] Adrian Macneil: And those are things that we would definitely like to support. We've had plenty of chats with those teams about things we would like to support in the future, and feature requests come up reasonably frequently, but like you say, there's just a lot of surface area in visualization.

[00:55:05] Audrow Nash: Very true. Because it turns into a browser or something. It is a browser, but the amount of visualization and display, and eventually maybe you want a UI, all sorts of things. It just becomes like a sandbox for making other apps, is what it sounds like.

[00:55:29] Adrian Macneil: Yeah, it is very open ended. Our goal is not to solve 100 percent of anyone's problems. We want to solve 90 percent of the problems, but every company is different, there are different types of visualizations. You need to be extensible and let people get the last 10 percent of the way themselves, but on the flip side, not have to start from zero.

There are a lot of companies, especially outside of the ROS ecosystem, that are using a completely in-house visualizer that they have built over the years, right? And again, this is something that maybe takes up half an engineer's time within the company, depending on the company size.

Maybe it's someone contributing once a week, or maybe it's a full-time team of a couple of people working on it, but it's a significant investment, with 90 percent of that being undifferentiated, right? 90 percent of that being the exact same shit that every robotics company visualizes.

[00:56:27] Audrow Nash: Yep, for sure. And also I'm sure that their in-house one would not be as good as something where that's the only thing you've got. You guys have several parts to this project, but it's

[00:56:37] Adrian Macneil: our is yeah, we got the whole,

[00:56:39] Audrow Nash: a robotics company.

[00:56:41] Adrian Macneil: Exactly. You leverage a whole team. We've got ten engineers right now, I think, focused on it.

[00:56:49] Audrow Nash: for sure.

[00:56:50] The Evolution of Foxglove Visualizer

[00:56:50] Audrow Nash: And you guys have been working on it for years, and it came out of Cruise, which I don't think we said explicitly.

[00:56:56] Adrian Macneil: Yeah, and a couple of our engineers have been working on this project for seven, eight years now.

Yeah, the visualizer did actually start at Cruise, and a very small portion of our code traces its way back to that. We created this visualizer at Cruise, we partially open sourced it at Cruise, and it got a little bit of traction, but it was also too tightly coupled to a bunch of

Cruise assumptions and self-driving assumptions, and was not as useful for other types of robots, like manipulators or mobile manipulators or AMRs and things.

It had a lot of built-in assumptions, around you're probably going to be outdoors, you're probably not going to be able to move the camera in certain ways, a self-driving car doesn't really have joint states or anything like this.

So we partially open sourced it at Cruise, and it did get some adoption. With Foxglove, it was like, all right, let's take that as a foundation and move it on. I think now, we looked at it, it's 15 percent or something of our code that dates back to the Cruise days.

It's a pretty

[00:58:04] Audrow Nash: Yeah. And it's

[00:58:06] Adrian Macneil: Yeah, a small and decreasing fraction, but it is that lineage of the project that we've been working on, and some of our engineers have been on it for a long

[00:58:14] Audrow Nash: For sure.

[00:58:15] Adrian Macneil: time now.

[00:58:16] Audrow Nash: Hell yeah.

[00:58:17] Customizing Visualizations with Plugins and Extensions

[00:58:17] Audrow Nash: Now tell me about the plugins and extension API.

[00:58:21] Adrian Macneil: Yeah. So we have a bunch of different ways that you can customize the visualizations.

Like I say, there are a lot of different things people want to do with visualizations. And in addition to that, one of our founding beliefs is that robotics tooling and infrastructure should be framework agnostic. So we started with ROS support, but quite early on we tried to, as much as possible, decouple ourselves from ROS while still supporting ROS really well, and make it so that nothing in Foxglove requires ROS.

Anything can be done in a general-purpose way. So when we think about extensibility, the first thing doesn't even require an extension. It's just, how do you render arbitrary things, right? So we have a bunch of primitives that you can render. You can publish messages either directly from your robot, or with what we call a user script, which is a little script

running inline in the browser that you write, that does transformations. You can take your robot data and turn it into things like, we have 3D primitives, like a cube primitive or a line primitive or a point primitive, or a model primitive if you want to render a mesh or something like that.

So we have these building blocks. You want a stop sign in your 3D panel? Okay, just bring a mesh, an STL file or something like that, wrap it up inside a model primitive message, and publish that to Foxglove.

So first of all you have these building blocks of, how do I visualize things, as well as more common higher-level abstractions, like a point cloud, for example. Great, we can just take a point cloud message and visualize that. You don't have to write out 30,

[01:00:09] Audrow Nash: Yeah, it's already there.

[01:00:11] Adrian Macneil: or anything crazy like that.

And so you have all of these drawing primitives, and that's the first place that you would start. And then, like I said, you get into extensions.
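To make the stop-sign example concrete, a scene update carrying a model primitive might be built like this. The shape loosely follows the published Foxglove schemas, but treat the field names and the asset URL as illustrative rather than exact:

```typescript
// Sketch of publishing a mesh via a model-primitive message, loosely
// following the shape of the foxglove.SceneUpdate schema. Field names
// and the STL URL here are illustrative; check the published schemas.

interface Vector3 { x: number; y: number; z: number }
interface Quaternion { x: number; y: number; z: number; w: number }
interface Pose { position: Vector3; orientation: Quaternion }

interface ModelPrimitive {
  pose: Pose;
  scale: Vector3;
  url: string;        // where the mesh asset lives (STL, glTF, ...)
  media_type: string; // tells the renderer how to parse the asset
}

interface SceneEntity { frame_id: string; id: string; models: ModelPrimitive[] }
interface SceneUpdate { entities: SceneEntity[] }

// A stop-sign mesh placed 5 m ahead of the robot in its map frame.
function makeStopSign(): SceneUpdate {
  return {
    entities: [{
      frame_id: "map",       // assumed coordinate frame
      id: "stop-sign-0",     // stable id so later updates replace this entity
      models: [{
        pose: { position: { x: 5, y: 0, z: 0 },
                orientation: { x: 0, y: 0, z: 0, w: 1 } },
        scale: { x: 1, y: 1, z: 1 },
        url: "https://example.com/stop_sign.stl", // hypothetical asset
        media_type: "model/stl",
      }],
    }],
  };
}
```

Publishing a message like this on a topic the 3D panel subscribes to is all it takes; there is no plugin code involved for the common cases.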

[01:00:23] Advanced Visualization Techniques and Extensions

[01:00:23] Adrian Macneil: We have a thing called user scripts that I mentioned before, where you basically subscribe to topic data coming from your robot, which might be a custom message that you've written.

Let's say you've got a custom message that's like tracked objects or something, right? You want to render that, so you can write a user script that just says: read the tracked objects and output an array of cubes, something like cube primitives. So that's just a really quick way that you can turn arbitrary data into something visual.
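A user script of that tracked-objects-to-cubes shape could look roughly like the sketch below. The `TrackedObject` message, topic names, and frame are hypothetical; the inputs/output/default-function convention mirrors the user-script editor:

```typescript
// Sketch of a Foxglove user script: read a hypothetical tracked-objects
// message and emit cube primitives a 3D panel could draw. Message shapes
// and topic names here are illustrative assumptions.

interface TrackedObject { x: number; y: number; width: number; length: number }
interface TrackedObjects { objects: TrackedObject[] }
interface Input<T> { topic: string; message: T }

export const inputs = ["/tracked_objects"]; // topic to subscribe to
export const output = "/script/markers";    // topic the script publishes

export default function script(event: Input<TrackedObjects>) {
  // Turn each tracked object into a unit-height cube at its 2D position.
  return {
    entities: [{
      frame_id: "base_link", // assumed frame for the tracked objects
      id: "tracked",
      cubes: event.message.objects.map((o) => ({
        pose: { position: { x: o.x, y: o.y, z: 0.5 },
                orientation: { x: 0, y: 0, z: 0, w: 1 } },
        size: { x: o.length, y: o.width, z: 1 },
        color: { r: 1, g: 0.5, b: 0, a: 0.8 },
      })),
    }],
  };
}
```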

And then we have extension versions of that. So the user script is lightweight, it's happening fully in the browser. You're editing it and there's a full-on text editor just sitting there, ready to go.

[01:01:09] Audrow Nash: nice. That's so

[01:01:12] Adrian Macneil: Yeah, it's just a lot easier to play around with things, basically.

You don't have

[01:01:16] Audrow Nash: You can do something very quickly. And if you have a very tiny transform that you want to apply to some data so you can display it, that's very handy. Okay,

[01:01:25] Adrian Macneil: Yeah.

[01:01:25] Audrow Nash: that's only for lightweight things, because I would assume, I like my code editor.

[01:01:31] Adrian Macneil: right,

[01:01:32] Audrow Nash: So then if you have larger projects, now you

[01:01:35] Adrian Macneil: Yeah. So that's when you would leap to a full-on, what we call extension. With extensions, there are a handful of different extension types. You can do those kinds of transforms. So for example, we have, I think it's called a message converter extension, where you can say: anytime we see a topic of type foo, here is a known transform to apply that turns it into a Foxglove schema, something that we know how to visualize.

So maybe you've mucked around with your user script to go from your tracked objects to something visual. You're like, okay, that's nice, Audrow's figured this out, but I've got a team of 50 engineers who all want to do the same thing. You can bring that into your code editor, bundle it up as an extension, and deploy it to your company: this topic matches this particular internal data type, and this is how you would turn it into something that Foxglove knows how to visualize.
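A message-converter extension along those lines might look like this sketch. `registerMessageConverter` is part of the Foxglove extension API, but the custom GPS message, schema names, and the minimal context interface here are illustrative stand-ins:

```typescript
// Sketch of a Foxglove message-converter extension: registered in activate(),
// it teaches the app to display a hypothetical custom GPS message by
// converting it to a schema the map panel already understands. The message
// shapes and names here are assumptions, not the exact SDK types.

interface MyGps { lat_deg: number; lon_deg: number; alt_m: number }

interface MessageConverterArgs {
  fromSchemaName: string;
  toSchemaName: string;
  converter: (msg: unknown) => unknown;
}

// Minimal stand-in for the extension context the SDK passes to activate().
interface ExtensionContext {
  registerMessageConverter(args: MessageConverterArgs): void;
}

function activate(ctx: ExtensionContext): void {
  ctx.registerMessageConverter({
    // Matched by message type, not topic name.
    fromSchemaName: "my_msgs/msg/Gps",     // hypothetical custom type
    toSchemaName: "foxglove.LocationFix",  // schema the map panel can draw
    converter: (msg) => {
      const gps = msg as MyGps;
      return { latitude: gps.lat_deg, longitude: gps.lon_deg, altitude: gps.alt_m };
    },
  });
}
```

Once deployed, any message of the matching type gets converted on the fly, regardless of which topic it arrives on.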

[01:02:30] Audrow Nash: So with that, anytime the topic matches, do you code it in the extension that I'm looking for a topic named foobar joints or whatever?

[01:02:43] Adrian Macneil: or with with that

[01:02:45] Audrow Nash: is that

[01:02:46] Adrian Macneil: With that example, in the extension, you match on a type, like a data type,

[01:02:53] Audrow Nash: Ah, cool. Way better.

[01:02:56] Adrian Macneil: slash, tracked objects or something, that's kind of custom data type that you've created, so you would just match on that, it doesn't necessarily need to be the topic name, it's just the, any, message that matches type, and then how and then this is something, again, you, would write this in your code editor.

[01:03:16] Audrow Nash: Yeah.

[01:03:17] Adrian Macneil: Yeah, we have both. The user scripts are topic based, and the extension ones work on a type basis, but,

[01:03:30] Audrow Nash: Cool. Okay. Yeah, I was just thinking about how to make it not as brittle.

[01:03:36] Adrian Macneil: Yeah, you don't want it to be.

[01:03:38] Audrow Nash: And maybe I'm coming in with ROS 2 ideas, so maybe the language is slightly different.

[01:03:44] Adrian Macneil: Yeah, the way we think about the type thing is, it's really just teaching Foxglove how to visualize your custom message, and it doesn't matter what topic it's coming in on. Because we know how to visualize a ROS point cloud, or we know how to visualize a ROS marker message or something.

We don't know how to visualize an Audrow tracked object or whatever, so you can teach us how to visualize that by creating one of these little converter extensions, and then you can deploy that, like I say, either publicly or

Just

[01:04:21] Audrow Nash: Internally or

[01:04:23] Adrian Macneil: yeah.

[01:04:24] Audrow Nash: That's really cool. yeah.

Hell

[01:04:27] Adrian Macneil: And then there are three or four different extension types, but the other key one that people make a lot of use of is that you can actually create an entirely custom panel, right? So each panel in Foxglove is just its own separate little HTML JavaScript app.

And the simplest possible panel you could create would just be, exactly, hello world, or just a button or something, right? In fact, the button example is quite cool, because you can use the API to create a little HTML page with a button on it, subscribe to the button clicks, and actually publish a message back to the robot, for example.

And this is a cool way, we have a built-in button panel, but people want a lot of different things out of buttons, so you may as well just make your own little form. You can display whatever you want, you can handle button clicks however you want, and you can publish messages back to the robot as well.
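The button-panel idea can be sketched like this. The extension API does provide panel registration and a publish call, but the topic, schema name, and wiring here are hypothetical, and the DOM part is left as a comment so the publish logic stands on its own:

```typescript
// Sketch of a custom button panel: a plain HTML button whose clicks publish
// a message back to the robot. The PanelContext interface below is a minimal
// stand-in for what the extension SDK provides; topic and schema names are
// assumptions for illustration.

interface PanelContext {
  advertise(topic: string, schemaName: string): void;
  publish(topic: string, message: unknown): void;
}

// Returns the click handler a <button> element in the panel would call.
function makeStopHandler(ctx: PanelContext): () => void {
  ctx.advertise("/emergency_stop", "std_msgs/msg/Bool"); // assumed topic/type
  return () => ctx.publish("/emergency_stop", { data: true });
}

// In a real extension, activate() would register the panel and attach this
// handler to a button created inside the panel's root element.
```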

[01:05:19] Audrow Nash: Now, I've tried to make some Foxglove extensions in the past.

[01:05:28] Adrian Macneil: Oh yeah, cool.

[01:05:29] Audrow Nash: So with this, you're running Electron and your UI is TypeScript with React, right? If you want to add these buttons and things, are you using TypeScript and React?

[01:05:44] Adrian Macneil: We use TypeScript and React, but there's no requirement on that for an extension.

[01:05:51] Audrow Nash: If you wanted to do your custom one, is there a requirement?

[01:05:55] Adrian Macneil: No. Because we use TypeScript, I think a lot of what we generate uses it, but there's no requirement. TypeScript is just compiled to JavaScript under the hood.

So you can use plain JavaScript if you want. And as far as React, there's actually no dependency on React within the panels themselves. So for each panel, if you want to add React, you can. That's fine. But yeah, you can just stick a very basic hello world HTML page, like you would have written in

that kind of thing, in there if you want.

[01:06:32] Audrow Nash: It's very flexible. You basically say load this web page, or something like this, and it just does it. Okay, so you get all that flexibility in there.

[01:06:40] Adrian Macneil: And then, yeah, it's up to you what you want to do. I've seen some people do crazy things with extensions. You can use that to load some WebAssembly, or do anything you want, really.

[01:06:51] Audrow Nash: Oh, that's really cool. Yeah. WebAssembly is so exciting

[01:06:57] WebAssembly and Performance Enhancements

[01:06:57] Adrian Macneil: Yeah, there are very interesting things being done with WebAssembly. It's still an earlier-stage technology, and there are rough edges around how you move back and forth between WebAssembly things and browser things, but it's coming along really nicely. And we do use WebAssembly internally for some performance-critical things too, like when you get into, for example, decoding bags and you're doing decompression, LZ4 decompression, Zstandard, things like this.

What you're actually getting in Foxglove is C++ code that's been compiled to WebAssembly to do a lot of that decompression and things.
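The mechanism is worth seeing once: compiled bytes are instantiated, and the exported functions are then called from JavaScript like any other. This minimal example hand-assembles a module exporting an `add` function, standing in for something like an LZ4 decoder compiled from C++:

```typescript
// Minimal demonstration of calling WebAssembly from JS/TS: a hand-assembled
// module exporting `add(i32, i32) -> i32`. A real decoder would be compiled
// from C/C++ (e.g. via Emscripten) and loaded the same way.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // \0asm magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add
]);

// Synchronous instantiation is fine for a module this small; real apps use
// WebAssembly.instantiateStreaming over the network.
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
const add = instance.exports.add as (a: number, b: number) => number;
// add(2, 3) -> 5
```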

[01:07:37] Audrow Nash: So it's nice and fast. That's great. What a cool thing. Yeah, I'm excited about Wasm, WebAssembly, just in general. So, we've already mentioned you can make UI, which is cool. You can make a button, it can publish to a topic, and that can be used to control your robot or do something.

Or maybe even pop up different parts of the UI or do things within the Foxglove program, I would imagine. Would you ever imagine this being a front end for a simulation? Like, we have gz server, the Gazebo server, and then we have gz client or whatever it is, the renderer. Would you ever imagine using Foxglove as a way of displaying simulation, or, I don't know, connecting it

to

[01:08:35] Adrian Macneil: Yeah.

[01:08:36] Foxglove's Role in Robotics Simulation

[01:08:36] Adrian Macneil: We do see people, like today, people use Foxglove a lot to visualize the output of a simulation. Because typically with simulation, there is the perspective of the simulator, and you can think of it as, okay, here's this third-person view of what the simulator is trying to create.

But then it's creating synthetic sensor data, and that's being fed into your robot stack. You do want to visualize the other end of it, which is, what does the robot think that it's doing? So that is very common. People visualizing the output of simulations is very common.

the, sort

[01:09:13] Audrow Nash: you mean the result of what the

[01:09:15] Adrian Macneil: Yeah, exactly. Like the recording. If a robot is driving around in the real world or whatever, moving around in the real world, you have a recording of what it saw.

If a robot is moving around in simulation, you have a recording of what things it saw and what the robot thinks it was doing. So that's, like you say, the RViz-style output, the recording of what came out of the simulation, and that is necessary. I guess the question then becomes, could you have one UI which is solving both the simulation need and the robot need? I think it could be done.

I think it would be a ton of work. We don't have any plans to do that. I think that, logically, it would conceptually make sense that you could create all of the buttons that exist in Gazebo. You could create a Foxglove plugin, or it could be a first party supported thing, and it could just talk to, like you said, the Gazebo server.

But you would have to recreate all of that UI, and it would be a ton of work. Which, number one, is a lot of work that has to be recreated. And then you've got this problem that there are a lot of simulation frameworks and engines in the robotics space, right?

And now you've done all of that work, and now you just support Gazebo, but you don't support Isaac Sim, you don't support any number of other simulators that people are using. Yeah, and so you just end up in this kind of,

[01:10:54] Audrow Nash: you've overcommitted to one,

[01:10:57] Adrian Macneil: We've got enough things to do in the visualization space. I think if maybe that was like a third party plugin or something, and someone was dedicated to maintaining it, it could maybe be done. But yeah, it sounds like a lot of

[01:11:12] Audrow Nash: it's not directly interesting. Alright, you're overcommitting a bunch of resources to something that kind of works anyways for this. And I guess it does require a lot; I don't know about the build story of Gazebo on different platforms, okay,

[01:11:29] Adrian Macneil: You come down to, you've done all that work and now you only support one simulator, and then other people in the industry are using different things.

[01:11:39] Audrow Nash: Yeah, but Gazebo, for example, you can swap out the physics engine, and that could still be a cool thing.

[01:11:49] Adrian Macneil: Yeah, it would be interesting, and I think that statement would make more sense in the ROS world, right? Like within the ROS ecosystem, it might make more sense to have RViz and Gazebo as a single UI, because that is part of just one ecosystem. But for us, with our stance on being framework agnostic and things, I don't want to have a situation where we're only really useful if you're using XYZ simulator and not, a

[01:12:20] Audrow Nash: Not one. Yeah. Okay.

[01:12:23] Adrian Macneil: so a little bit of an unsolved problem there.

[01:12:28] Audrow Nash: For sure.

[01:12:30] Challenges and Future Directions in Robotics

[01:12:30] Audrow Nash: is there anything else to mention about the visualizer or plugins? Or is this the full extent of the stack, or are there

[01:12:38] Adrian Macneil: This is largely how we think about it, right? We talked about recording, we talked about getting data off the robot, we talked about indexing data in the cloud, we talked about search and discovery of data, we talked about visualization. Some of the things that, when you think a little bit down the road, things that we would want to do to round that out are around better indexing of data, better search and query across data,

[01:13:02] Audrow Nash: Yeah, I imagine making things easier for like machine learning training

[01:13:09] Adrian Macneil: Yeah,

[01:13:09] Audrow Nash: Because you have your indexing. That to me seems like, especially given the zeitgeist in robotics, it feels like everything is about all the AI, all the machine

[01:13:19] Adrian Macneil: Absolutely,

[01:13:20] Audrow Nash: so

[01:13:21] Adrian Macneil: Yeah, and it's becoming a much more important part of robotics development. Historically, I think a lot of robotics machine learning was limited to just the perception side of things, just basic image detection models like YOLO, classification models and things.

But now we're seeing people move to much more end to end learning, training based on multimodal data, not just images. And yeah, like you alluded to, you've got all of this data in your cloud; how does that feed into your pipeline? And I think one of the more interesting pieces there that we're thinking a lot about right now is this concept of dataset curation.

so you've got all these

[01:14:07] Audrow Nash: don't Securation is we said

curation.

set curate. Oh curating

[01:14:13] Adrian Macneil: Yeah, sorry, that's my Kiwi accent. Dataset, yeah. So you've got a dataset, right? You want to build a dataset, a training dataset, and you've got all of these kind of examples.

You've collected all these samples of robots doing good things and completing tasks successfully, and you've got a bunch of examples of robots doing bad things and dropping objects or backing into objects or whatever they're doing. You want to add a lot of metadata to these.

You want to build up a dataset, and you want to tag and organize them. You want to know, do I have a good distribution of data across the different tasks that I'm trying to complete? And all of these types of things. So we need that metadata, and then we also need full versioning history of all of that metadata. Because if I go train an AI model, I need lineage back to the dataset I used to train that model: all of the episodes, all of these kind of recordings that we used to train it.
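The curation-and-lineage workflow Adrian describes could be sketched roughly like this (a hypothetical illustration; the episode IDs, tags, and manifest format are assumptions, not Foxglove's actual API):

```python
import hashlib
import json

# Hypothetical episode metadata: each recording gets tags (task, outcome)
# so a training set can be curated from the full collection.
episodes = [
    {"id": "rec_001", "task": "pick", "outcome": "success"},
    {"id": "rec_002", "task": "pick", "outcome": "dropped_object"},
    {"id": "rec_003", "task": "place", "outcome": "success"},
]

# Curate a training set: here, keep only the successful episodes.
dataset = [e for e in episodes if e["outcome"] == "success"]

# Check the distribution of data across tasks before training.
distribution = {}
for e in dataset:
    distribution[e["task"]] = distribution.get(e["task"], 0) + 1

# Lineage: hash the exact episode list into a manifest version, so a trained
# model can later be traced back to the recordings it was trained on.
manifest = json.dumps(sorted(e["id"] for e in dataset))
version = hashlib.sha256(manifest.encode()).hexdigest()[:12]
print(distribution, version)
```

Because the version is a hash of the exact episode list, any change to the curated set produces a new version, which is what makes the model-to-data lineage auditable.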

[01:15:18] Audrow Nash: Yeah, you want all of that

[01:15:22] Adrian Macneil: So that becomes interesting. And then, usually, you're not taking a robot recording and directly feeding it into just like PyTorch or something, right? There's usually some transformation happening there. So again, these are things that are on our mind as we think about this life cycle of data, from logging and recording to cloud.

Absolutely things that we would like to help out with in the future, but we also try to make sure that we do our core things well before we get too distracted with everything else. Yeah,

[01:15:56] Audrow Nash: Yeah, it seems like there's been the joke that I've seen on X quite a bit, where it's like, everyone's digging for gold, so all the AI... and so NVIDIA is selling shovels, that kind of thing. See,

[01:16:09] Adrian Macneil: They're doing well. Yeah.

[01:16:11] Audrow Nash: but so I feel like you guys are well positioned to help with data annotation, curation, these kinds of things.

cause you're already, you have this top notch visualizer for seeing data and then you could make it so that it's very easy to annotate and export to your

[01:16:30] Adrian Macneil: Yeah. And where possible, you think about like image annotation or whatever, right? There are lots of existing tools out there for this. Like we,

[01:16:41] Audrow Nash: Robotics

[01:16:43] Adrian Macneil: Yeah. Multimodal, there are definitely fewer tools. There are ones out there, I think, to the extent possible.

We can't build everything, right? We can't be the tool for like all of robotics development or something. The

[01:16:57] Audrow Nash: got to pragmatically

[01:16:58] Adrian Macneil: My belief is that there needs to be way more companies building robotics developer tools, right? And we would like to say, hey, look, we've got the core strengths that we're good at here.

I'd love to have more integrations with other tools out there. And we talked about some of these things: the more operational workflows, like fleet management and things, right? It doesn't make sense for us to build everything at once.

I'd rather be good at some really core features.

[01:17:29] Audrow Nash: Yeah. I think that makes a lot of sense.

[01:17:32] Adrian Macneil: Yeah, labeling is a whole kind of can of worms. I think it's,

[01:17:36] Audrow Nash: it's true, a million types of data, all the

[01:17:39] Adrian Macneil: Yeah, at a high level, I think a much more sensible thing to start with is just more of the metadata labeling. So it's like you're going through and you're wanting to tag particular episodes: which ones are good or bad, or which ones are matching different criteria. Yeah.

Some companies are even looking to do English captioning, right? It's just, this is a person picking up an apple or something. Because they're feeding this into models now, right? They take an existing vision language model off the shelf and feed in a bunch of robot data: the text captions, plus the robot state and pose and things like this, plus the sensor data. Feed that in, and they can build what's called a world model, which has a concept of tying these movements and things back to action, which is really cool.
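A single training record of the kind Adrian describes, an English caption paired with robot state and sensor data, might look like this (a hypothetical sketch; the field names are assumptions, not any particular company's schema):

```python
from dataclasses import dataclass

# Hypothetical multimodal training sample: a language caption tied to robot
# state, pose, and sensor data, the kind of record fed into
# world-model-style training pipelines.
@dataclass
class Sample:
    caption: str           # human-written description of the episode
    joint_positions: list  # robot state at this timestep
    ee_pose: list          # end-effector pose (x, y, z, qx, qy, qz, qw)
    camera_frame: bytes    # encoded image from a camera stream

sample = Sample(
    caption="a person picking up an apple",
    joint_positions=[0.0, 0.5, -1.2, 0.0, 1.1, 0.3],
    ee_pose=[0.4, 0.1, 0.2, 0.0, 0.0, 0.0, 1.0],
    camera_frame=b"",  # placeholder; a real pipeline would store image bytes
)
print(sample.caption)
```

The point is simply that the caption, the state, and the sensor stream travel together as one sample, so the model can learn to tie language and motion to action.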

[01:18:28] Audrow Nash: Okay. Yeah, that is really cool. We don't have that much time, but you'd love to see way more tools in robotics. You mentioned some, but could you rattle off a few that you think are good opportunities?

[01:18:48] Adrian Macneil: good. Yeah.

[01:18:48] Opportunities in Robotics Tooling

[01:18:48] Adrian Macneil: If you're thinking about starting a robotics tooling startup today, I would say, just go work at any robotics company for two years, and you'll suddenly have 50 ideas for startups that need to exist.

The areas that seem underserved: deployment to robots, for example. There's a huge industry of CI/CD for servers, right? There's almost no one doing this for robots; there was one company doing this, I don't know if they're still around. But just, how do you deploy and update robots in the field?

You've got to think about deploying them without breaking them. That's a huge thing that's interesting. Configuration management for robots out in the field: how do you keep track of the fact that you've got potentially different robot models, different SKUs, different sort of service history?

Everything that every company ends up building internally, if it applies to multiple robotics companies, should probably exist off the shelf. So there's a lot of stuff around the operations of just deployment, updates, configuration management. Simulation is another one that I think is quite underserved.

There are some of these open source things like Gazebo, but I think there's a lot missing in there: frameworks to let you run simulations at scale, tools to let you manage and create simulation scenarios. How do you manage testing a new release?

You need, like, a whole catalog of different scenarios that you're ready to go simulate, and you need to be able to run different aspects of that. You need to be able to score the output of simulations and say, is this better or worse? It's not just, hey, run this test and fail if we crashed or something.

It's, through the metrics, how well are we doing? Are we better or worse at completing all of these tasks in aggregate? There's just endless... I could go on for hours about all of these things. And like I say, you just work at any robotics company, look around, and pretty quickly,
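The scoring idea Adrian describes, grading a release against a baseline in aggregate rather than pass/fail, could be sketched like this (hypothetical scenario names and metrics, purely illustrative):

```python
# Hypothetical scenario scoring: each simulated scenario reports a metric,
# and a candidate release is compared to a baseline in aggregate, instead
# of a binary "did it crash" check.
def aggregate_score(runs):
    """Average task-completion score across a catalog of scenario runs."""
    return sum(r["completion"] for r in runs) / len(runs)

baseline = [
    {"scenario": "narrow_aisle", "completion": 0.90},
    {"scenario": "cluttered_dock", "completion": 0.70},
]
candidate = [
    {"scenario": "narrow_aisle", "completion": 0.95},
    {"scenario": "cluttered_dock", "completion": 0.80},
]

# "Are we better or worse at completing all of these tasks in aggregate?"
improved = aggregate_score(candidate) > aggregate_score(baseline)
print(improved)
```

A real harness would weight scenarios and track many metrics, but the shape is the same: a catalog of scenarios, a score per run, and an aggregate comparison between releases.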

[01:20:50] Audrow Nash: You

[01:20:51] Adrian Macneil: you'll see what's being built in house that really is not unique to that company.

[01:20:56] Audrow Nash: Yeah. I think that's a very good way to think about it. Just, whatever's being built in house is probably something that you could hire someone to do better and then just use, so the company doesn't have to maintain it and do undifferentiated work.

[01:21:11] Adrian Macneil: And the end state of this is that even a lot of the autonomy that's being built should not be. A lot of robotics companies today see data as a key sort of value driver for them, and they see their autonomy stack as being really core to their mission. It's, okay, we're solving autonomy. But the reality is that, when you're creating a business, the value that you're creating for a customer is the problem that you're solving for them, and not how you implemented a solution to that problem.

and a lot of these autonomy problems are like, picking and placing or navigating around warehouses. how many different

Or outdoor

a lot of these kind of just and that's why things like NAP exist, right? But it's like, navigating around autonomously is not It's not a core kind of value driver for your company.

What is valuable is the problem that you're solving for your customers. And quite often it's the distribution, how you're getting to the customers, and how you're wrapping it up with service and support. And you think about the SaaS, the vertical SaaS industry today. There are millions of startups building

[01:22:24] Audrow Nash: Very

[01:22:25] Adrian Macneil: specific solutions and making tons of money.

Tons of money. They go out there, they talk to customers, they listen to a business problem, and then they glue five things together, random open source things. Sure, there's a bit of work, but they're not solving a technology problem, they're not taking on technology risk; they're taking on product market fit risk. So they listen to customers, they build a thing as quickly as possible by gluing together as many off the shelf components as possible, and then they go and sell it, and work on it in the market, and bundle it up with support, and they solve a problem for people, and they get paid very well for doing that.

The robotics industry today is the complete opposite of that, where you have a lot of companies that are solving blindingly obvious problems. It's, wow, if you could get a robot to do that, that would be fantastic, that'd be amazing. But they're taking on this massive technology risk of, now I've got to get the robot to do that.

It's completely backwards. And I think when robotics has succeeded, we will see that flip. We'll see that invert, right? Where a robotics startup will mostly be talking to customers, listening to their problem, figuring out how you're going to connect with those customers, how you're going to market to them, how you're going to sell to them, how you're going to support them.

And the technology will, to the extent possible, be gluing together a lot of off the shelf hardware, a lot of off the shelf software, a lot of off the shelf infrastructure, a lot of off the shelf autonomy components. Probably just grabbing some off the shelf, open source foundation models.

Fine tuning them a little bit, and solving a customer problem, solving a customer need, solving a business problem. Not all of this technology risk that people are taking on today.

[01:23:56] Audrow Nash: Definitely. A hundred percent. This is part of the reason, to me, companies like... do you know Polymath? They were one of my,

[01:24:03] Adrian Macneil: Similar kind of thing, right? It's, yeah, you don't need to be building the autonomy stack.

[01:24:07] Audrow Nash: yep. That's, I think they are. early on this trend and I think we're going to see a lot more of that is my guess where they're solving navigation for those who don't know and outdoor navigation like driving slow in fields something this and you just call it from apis like that's that seems like the way to go for a lot of

[01:24:32] Adrian Macneil: I

[01:24:32] Audrow Nash: you don't have to do that

[01:24:34] Adrian Macneil: I think we'll get to a point where those similar companies, maybe it's Polymath, maybe it's others, but similar things exist across the board, for

[01:24:42] Audrow Nash: domain

[01:24:44] Adrian Macneil: for every domain, you're right.

[01:24:47] Audrow Nash: We're on the same page.

[01:24:50] Adrian Macneil: Yeah, so I think, yeah, we will, definitely see a lot more of that.

[01:24:54] Audrow Nash: hell yeah.

[01:24:55] The Shift from Open Source to Closed Source and Foxglove 2.0

[01:24:55] Audrow Nash: So talk to me about the Foxglove 2.0, and then the decision to make it closed source. I know some things are open source, but some things are now closed. So tell me about all that.

[01:25:11] Adrian Macneil: Yeah, absolutely. So I think there were two kind of arcs that happened. Like I said, Foxglove is about three and a half years old now. There are a couple of things that we noticed over the first few years of Foxglove. The first one was that we started out with the Foxglove visualizer, and then we added on this idea of, we're going to build a data platform kind of thing.

And we thought of them as very separate components. And they are separate components, to an extent. Like I said, the visualization is useful not just with cloud data; you can also use it with local data or live connections. And the data is independently useful. But we took that too far, and we built them as literally completely separate code bases, with separate repos and UIs.

And then we started tripping over ourselves with how sign on and sign out work across them, and how, if you're exploring some data to find the right log file and then you open it in a visualizer, you've lost all reference to how you got here and what the metadata associated with this recording was.

And if you want to just be clicking through a list of a bunch of different examples of events or things like that, you want to just be able to quickly click through them. You don't want a whole separate tab that's just the visualizer, and a whole separate tab that's just the browsing interface.

So we came to this realization that, even though they are separate product features, it was a mistake to build them as completely separate apps and completely separate UIs. And so we came to this decision that we needed to merge these together, and that was Foxglove 2.0: hey, we're going from these completely separate, literally separate code bases, separate tools, to one kind of observability platform. Like I said, it's modular, it's got a bunch of features, you don't have to use all of them, but it's one UI, you sign into it, it's one kind of app.

And you can do things like... with 1.0, you could not do the data browsing in the desktop app. There were weird inconsistencies, where on the web you could browse the data and visualize it, but in the desktop app, you could only visualize it, you couldn't browse it. Now all of that's gone. It's one app. You can use it on web and desktop.

All the features are available on both platforms. And so that was the first key part of 2.0. And then the second piece of it was, like you said, around open source. Our initial kind of plan for the Foxglove company was that we would build this open source visualizer, and then we would have a paid service that is like the data management service.

yeah, the, kind

[01:27:48] Audrow Nash: So you can get it with

[01:27:51] Adrian Macneil: Yeah, with teams. And the idea was that there would be some sort of team visualization features, but most of the team features would revolve around data management and, like I said, uploading data, organizing data in the cloud and

[01:28:05] Audrow Nash: Yeah. That's what I understood from our last interview

[01:28:07] Adrian Macneil: Which was the plan that we set out with, and we tried really hard to make that work. I think the challenge that we ran into is that visualization itself is just a really difficult problem, a bottomless pit of feature requests.

And even today, like 80 percent of our effort goes into the visualization, right? So we ended up with a sort of challenge where we're trying to sell a product, but some people don't care about the particular cloud data features, right?

They're like, we figured out the cloud data piece ourselves; we really just want the visualization piece. And again, we tried to build a modular platform, so that's nice, you can use just the visualization piece. But that's a thing that we can't make money on, so it's just an endless stream of incoming feature requests and work.

Work that we do for people, but no way to charge for it. The only thing that we're charging for is the cloud data management, which some customers didn't want. So there was a bit of a mismatch between what we were spending most of our time building and the thing that we were charging for.

And that causes a problem as a company, because we want to make the visualization great, but it's just not possible to do that if you're spending most of your time building the free thing. Another thing I would say: I'm a developer by trade, I spend a lot of time using a lot of open source projects, and we cared a lot about open source.

And we still care a lot about open source; we support MCAP. I think MCAP is an example of a great open source project for a company, right? It's something that you put out there, and it makes a lot of sense to be an open standard. It's a library. It's not like an app, so it doesn't have buttons and UI and things like that. People are much more likely to contribute to it.

We get a lot of community contributions with MCAP. By contrast, we get very... it almost rounds to zero community contributions for a web based JavaScript visualizer tool, right?

And then also, MCAP doesn't take a lot of our focus, right? Maybe it's like a 10 percent focus for us, but not an 80 percent focus. We maintain the project and keep it updated, and there's ongoing development on it, but it's not the primary focus of us as a company.

And I think you need to think really hard, if anyone is founding an open source company, about how much of your overall development resources the open source project is going to take, and make sure that you have the capacity to also invest in whatever the thing is that you're selling.

In some senses, and this is a little bit of a tangent, but when people ask me about creating an open source company, I think one of the biggest challenges with open source is that you actually have two product market fit problems. Usually every open source company has an open source thing, and then they have a paid thing that sort of supplements it, right?

And even though the open source thing is free, there's still a product market fit problem. You still have to make sure you're creating a thing people want. You still have to spend a lot of time on marketing it, posting on social media, or word of mouth or whatever. You still have to spread growth of that thing.

And then that entire thing repeats itself with the paid product that you create, right? You have to create a thing people want. It has to make sense. It has to be enough of a value add on top of the free thing. And it's the same thing: you have to figure out how you're going to market it, how you're going to sell it.

And so you've created twice as much work for yourselves. But yeah, we just really found that we were not able to invest the time and effort we needed in the visualization thing. We weren't really getting the sort of community contributions that we would get with something like MCAP.

And frankly, most of the people we talked to about this, most customers even, and potential customers, were like, why aren't you just charging us for the visualizer? That's the thing we want; why aren't you just charging us for

[01:32:26] Audrow Nash: Ah ha,

[01:32:30] Adrian Macneil: But we still have these principles, that I think were well placed when we started, which is that we want it to be easy for hobbyists to use.

We want it to be easy for academics to use. And we realized, again, that most of the people that were using Foxglove were not taking advantage of the fact that it was open source. They liked the fact that there was a free version; they can download it, they can get started. Especially, like I said, hobbyists, academics, or the first few people at a company where they're just trying it out, they like the fact that it's easy to use.

They weren't creating a custom fork of it, or a custom build, or deploying a custom build, or anything like that, right? It was mostly just the fact that there is a free version that's easy to get started with, and I think that was well placed. So we came out with this announcement around 2.0, I think about three months ago, saying, hey, we're not continuing any further development on the open source version of 1.x. For example, when you go and download the Foxglove application, that always included some closed source components.

So our distribution always included the open source and some of the other components. We just said, hey, the open source release, we're not continuing to develop that. The 1.x code is still there if you need it, but going forward, we're focusing on just keeping all of this within the closed source build that we have been releasing.

Honestly, as far as announcements go, it got pretty little attention. And I think, like I said, it didn't really change much for most people. Most people had downloaded our desktop app, and that was already a closed source build, because building things like a macOS app with signatures and all of these things is not particularly easy to do anyway. That already wasn't the open source build they were using.

The update to 2.0 looks pretty similar. We've added some new features and things, and we've spent a lot of time on performance lately, but there was no disruption to workflows, especially for the broader community: for individuals using it, for academics using it, for hobbyists using it.

There were some companies that were using it in a company setting. Again, we already had some of these things like prompts: if you're using it with more than X number of people, you're already seeing, hey, over three people, you've got to pay.

So there wasn't really any significant change in the experience there for people; the only people affected were a small number of companies. On the other hand, another piece of advice that I give to people building a similar sort of open source based company now, especially having reflected on a few years of this at Foxglove, is that you need to have in your mind a mental model of what percentage of people that use your free thing are going to upgrade to your paid thing.

It's not even really to do with like open source, right? But open source makes this problem

[01:35:41] Audrow Nash: Yeah. just a conversion rate for keeping your

[01:35:45] Adrian Macneil: And typically that is going to be pretty low. So you think about, there are a lot of open source databases out there, for example, right? There's something like InfluxDB, a time series database, and then you can use the hosted version. Or there's MongoDB, or there's Elasticsearch. A lot of these are open source, or sometimes slightly more creative licenses more recently, Redis and things. But they have an open source database, and then they have a hosted version.

But those are extremely generally applicable things. Elasticsearch is a full text search database; Redis, you can use it for just about anything. If a fraction of 1 percent of people that use that move on to the paid service, that's plenty; it's still an enormous market that you're talking about.

Robotics, unfortunately, is not like that. We can't afford for only, like, 1 percent of the big companies that have raised a lot of money, or, in some cases, big multinational car companies and things like that, that have enormous global presence, enormous revenue, to convert.

Every one of those that decided to just use the open source version, because that was sufficient for them, becomes a lost revenue opportunity, right? And again, if you're operating in an enormous market with tens of thousands, or hundreds of thousands, of potential customers and companies, that's fine.

If you're working in something a bit more niche, like the robotics industry, you can't afford to have just such a tiny conversion rate.
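A back-of-envelope version of that point (made-up numbers, not Foxglove's actual figures):

```python
# The same 1% free-to-paid conversion rate that works fine in a huge
# market yields almost no customers in a niche one.
def paying_customers(total_companies, conversion_rate):
    return int(total_companies * conversion_rate)

web_scale = paying_customers(200_000, 0.01)  # "every company needs a database"
robotics = paying_customers(2_000, 0.01)     # a niche industry

print(web_scale, robotics)  # 2000 paying customers versus just 20
```

At web scale, 1 percent conversion is still a large business; in a market of a couple thousand potential customers, the same rate leaves too few to sustain a company.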

[01:37:19] Audrow Nash: Yeah. That makes good sense to me. Because we're a much smaller community than, like, the web developers, say.

[01:37:27] Adrian Macneil: or,

[01:37:28] Audrow Nash: Or databases, which yeah, are everything.

[01:37:30] Adrian Macneil: It's like every company that needs a database. Yeah. But it's been good; honestly, the response to that was super positive. I would say maybe there were one or two messages that popped up on ROS Discourse or places like that that were a bit upset about it.

But, yeah. On the whole, pretty much everyone was like, yeah, that makes sense. It's totally fine.

[01:37:54] Audrow Nash: Good.

[01:37:55] Adrian Macneil: The main concerns that came up, maybe we miscommunicated a couple of things, but some people came up and thought we were getting rid of the desktop app, which sort of caught me off guard, cause I didn't

[01:38:05] Audrow Nash: That's not what you're doing at all. Yeah.

[01:38:07] Adrian Macneil: In fact, we're adding more features to the desktop app. But that popped up with a few people, and we had to correct that misconception. And the other one was just, like I said, making sure there's still a free version. So it's not really any different: if you go to our website and hit the download button today, it's the same experience that you were getting three months ago.

We've still got the extension APIs. That was another one; people were concerned that the extension APIs were going away. But these things are largely the same experience. It's just that we have clearer licensing now, where if you're using it with up to three people, in a company or in a team, that's free.

And then over that, you've got to start paying us.

[01:38:48] Audrow Nash: Makes sense. You guys need to eat.

[01:38:50] Adrian Macneil: We do. Yeah. Like I said, we want to create this really great software, and we want to keep investing in it. And,

[01:38:57] Audrow Nash: You have to exist in order to do that. So, okay, that makes good sense. I think, to me, even just the first point, that it's easier for you to maintain, is huge, because you guys have limited time. And I've been involved with the ROS project for a long time; you need the staffing to do that, or you can sour relationships too.

Someone puts out a feature request or something, and if you don't get to it after a while, there's a little bit of bad blood that's created there. And if it's like endless visualization features, I can understand kind of turning that off, because the expectation is, if it's open source, that you're going to be triaging everything.

[01:39:48] Adrian Macneil: People come up with the reports and things, and yeah, we do want to get to those. And honestly, like I said, the moving fast thing is important too, right? Internally, we just have a single app now; like I said, we merged it all, between the sort of data and the visualization.

So we're moving faster, and it's a better product experience. We're not juggling an open source repo that we're then trying to incorporate into another product and things. It's just, we're building this one app. People sign up for it, they use it. And it's a concept people are pretty familiar with.

[01:40:20] Audrow Nash: For sure. Yeah, I guess the moving from open to closed saddens me a little bit, but I understand it was for very pragmatic reasons. I think it was probably a very good decision, and I bet it was a very hard decision.

[01:40:37] Adrian Macneil: I

[01:40:37] Audrow Nash: Good on you for making the decision, and I think it'll be better for you guys.

[01:40:41] Adrian Macneil: It was, probably. But with every decision as a company, in hindsight you always think, wow, we could have made that decision a year or two years ago. I think, like I said, coming from an engineering background and using a lot of open source software, I was probably the most saddened out of everyone, right?

People pop up and they're like, oh, we're sad that you're moving, and I'm like, look, I am too, but we had to do it.

[01:41:09] Audrow Nash: I think so.

[01:41:10] Adrian Macneil: it made sense, and I think when you talk through it like that you say, look, these are the constraints that we're working with,

[01:41:17] Audrow Nash: Totally. Yeah, you guys have to stick around. If you devote your resources to something that's not helping you stick around... If you guys went out of business, that would be a much bigger loss.

[01:41:26] Adrian Macneil: Exactly. It's not a choice between us continuing to develop it and people paying, right? It's, look, we want to exist as a company.

[01:41:35] Audrow Nash: for

[01:41:35] Adrian Macneil: we're

[01:41:37] Audrow Nash: It's such an exciting time, so you definitely want to exist. Super exciting time.

But,

[01:41:42] Adrian Macneil: time to be in robotics.

[01:41:45] Audrow Nash: So we're coming to the end of the time. One thing I wanna make sure we talk about: tell me about the Actuate Conference, and why, and what you're showing, and everything.

[01:41:59] Actuate Conference: Bringing Robotics Developers Together

[01:41:59] Adrian Macneil: Yeah, so Actuate is a one day event that we're putting on in San Francisco. It's going to be on September 18th. We've got a really nice venue that we've booked in the Dogpatch in San Francisco. And this is a conference for robotics developers.

Robotics is a very cross functional domain. There's a lot of people working on a lot of different things, and we want to bring together engineers across perception, planning, AI, simulation, frameworks, even hardware. We want to bring together a lot of these different disciplines.

It's a one day event. We've got some really great speakers lined up, and we've got some really great attendees lined up. The goal of this is: how can we bring together a lot of the community in the Bay Area?

In the position that we are in at Foxglove, I get to talk to a lot of different companies and hear a lot about how they're building their robots, but none of our customers talk to each other. There's a real gap in the robotics industry of deep technical content on how people are building robots in production.

There's a lot of content out there from the academic perspective, right? There's a lot of papers, and there's a lot of robotics conferences you can go to and listen to the academic perspective, but not so much from industry. Of the stuff that exists in industry today, in terms of talks and conferences, ROSCon is at a similar level, it has the technical depth, but it's centered around ROS. ROS is one framework in the industry, but we see a lot of our customers not using ROS, and we'd like to have a broader perspective there. And then you see a lot of robotics trade shows, like Automate and things like that, where you've got the sales teams, they've got their robots, they're showing off their robots, but it's not how they built the robot, right?

It's just, look at the robot, look what the robot can do. So we felt there was a real gap for high quality technical content in the robotics industry. Talks, and not just listening to the talks, but getting together and networking with other people in the industry.

And I think, because it's a one day event, it's going to be a mostly Bay Area audience. I do know some folks are flying in for it, but for the first year that we're doing this, we're expecting mostly Bay Area attendees. We would look to go a bit larger next year, probably.

And then in terms of speakers, we've got a handful of really great speakers lined up. We've got Sergey Levine. He's a professor at Berkeley and co-founder of the company Physical Intelligence. We've got Chris Lalancette. He's the technical lead of ROS. Steve Macenski, he's the technical lead of the Nav2 project. Kat Scott from Open Robotics, I think you'll be very familiar with her. Vijay from Wayve, he's the VP of AI at Wayve. So we've got a whole lot of good speakers already, and we currently have a call for proposals open right now, and a number of super interesting proposals have come in.

Like I said, we want to find a great balance of content across people working on AI, people working on perception and planning, people working on simulation, to cover the whole spectrum of different things that are needed in robotics, and encourage people to share. Have some really great speakers, but also really great attendees and high quality conversations.

[01:45:40] Audrow Nash: Is there much planned for the evening after?

[01:45:45] Adrian Macneil: Yeah.

[01:45:45] Audrow Nash: I find the unstructured time to just talk with everyone, to me, that's my favorite part of the whole thing.

[01:45:53] Adrian Macneil: Yeah. There's a structured lunchtime, and there's a cocktail hour, I think it's from four to six or four to seven or something like that. So we've got a happy hour at the end as well, so there will be plenty of unstructured time for chatting. Also, a single track event is just a really fun thing. I think a lot of these larger conferences get to a point where there's all these multiple talks and rooms and presentations, there's a hall with all of the vendors and things, and you spend most of the conference wandering around.

But I think the beauty of a single track event is that there'll be nice punchy talks. You come in, hey, we're having this session, everyone listens, and then everyone goes out and we'll have some time to chat and get to know people. So yeah, the networking time is good, but I also think the beauty of a single track event is that you don't have any of these questions of, oh, which talk should I go to?

Or, maybe I should be networking instead of listening to the talk. Yeah, it'll be fun. It'll be awesome. We'd love to have you there, Audrow, and we'll make that happen.

[01:47:06] Audrow Nash: Hell yeah. Yeah, I would love to go. if it works out, I think I'll be there. I think you had some code with my

[01:47:15] Adrian Macneil: Absolutely. Yeah, so for folks listening that have made it this far in the podcast, a couple of things. Like I said, the event's on September 18th. It's a single day event in San Francisco, from about 9 or 9:30 a.m. to 4, or something like that. We have early bird ticket sales going on right now. I think the first hundred and fifty tickets that we sell are 150 dollars, or 149, something like that. So get in quick before those sell out. And then on top of that, we have an additional 20 percent discount for listeners of the podcast.

Just put the code Audrow, A-U-D-R-O-W, in at checkout. That's the secret code, which will get you an additional 20 percent off: 20 percent off the early bird price if you make it in time, or 20 percent off the full price if you're a little bit slow. So get in quick.

[01:48:08] Audrow Nash: Hell yeah. Awesome. So with that, I think we shall wrap up. Great talking with you, Adrian. So cool, Foxglove and all the great work you guys are doing.

[01:48:21] Adrian Macneil: Absolutely, yeah,

[01:48:22] Audrow Nash: See you at Actuate.

[01:48:22] Adrian Macneil: As always, thanks, Audrow.

[01:48:25] Audrow Nash: All right. Bye everyone.

That's it! You made it!

What has your experience been with Foxglove? Or if you haven't used it, do you think you'll give it a try?

If you like this interview, I'd appreciate a like or a review. It helps other people find the podcast and pleases the algorithms. And if you'd like to make sure you hear about new episodes, make sure to follow or subscribe.

Our next episode is with a very strong robotics company in the food space. They're doing great work with robotics and machine learning, and they've already done over 5 million food servings in production.

That's all for now. See you next time.