
Intersect: The Pursuit of Cheap Robot Technology With Breeze Automation

Some people fear robots; others give them a welcome embrace. Either way, one thing is for sure: The robots are coming.

In this latest Redshift video series, “Intersect,” host Paul Sohi, Autodesk Fusion 360 evangelist and self-proclaimed design nerd, chats with entrepreneurs and innovators doing new and incredible things with advanced technology. Here, Sohi and Gui Cavalcanti, CEO and co-founder of soft-robotics start-up Breeze Automation, discuss everything from the possibilities of integrating machine learning and robotics to the challenges of developing inflatable (pneumatic) robots. Breeze Automation is a resident at the Autodesk San Francisco Technology Center, the setting of today’s bayside chat.

[Video Transcript]

Paul Sohi, Autodesk Fusion 360 Evangelist: My name is Paul Sohi. I’m an industrial designer for Autodesk. I like to build things; take on design challenges; and collaborate with designers, engineers, and fabricators from all over the world to build better things. They have some pretty great stories, and these are some of them. Welcome to “Intersect,” presented by Redshift. 

Welcome to Intersect. Today I’m joined by Gui [Cavalcanti], who previously worked at Boston Dynamics and Megabots and is now at Otherlab and Breeze Automation. Gui, welcome to the show, man.

Gui Cavalcanti, CEO and Co-Founder, Breeze Automation: Thank you. Nice to be here.

Sohi: So you’ve had this really incredible career. You worked at a bunch of different places. What was the driving force to start Breeze, and can you tell us more about what Breeze does?

Cavalcanti: Toward the end of my career at Megabots, I said, “Look, these robots are just too expensive, right? Nobody can afford to buy them.” Otherlab already had a program that had spent the past six years developing pneumatic robots, right? Air-powered robots—a different kind of fluid, but the fundamentals are basically the same. The math is a lot harder.

Sohi: Fluid dynamics suck. 

Cavalcanti: Yeah, exactly. What Otherlab had was this technological base of really, really cheap components to make pneumatic robots possible, and they needed to keep developing the hardware, right? So the previous team had spun off into a company—they still had all the hardware—and I looked at it, and I said, “Look, this is the opportunity I’ve been waiting for to really make the cheapest possible robots I can imagine.” That’s the mission: take all of the advantages of fluid power systems that I’ve been working on for a decade, right? Ability to be outside, ability to withstand whatever environment you throw at it, ability to generate really high forces for very little weight: Take all of that, and now make it cheap.

Sohi: What I love about it, and what I find so fascinating as you’ve been talking, is that a lot of what you’ve discussed and a lot of the things that you’ve built wouldn’t be possible without the technological advancements that have happened as you’ve grown up. So with that in mind, what are the things that are coming soon, or technologies being developed right now, that you’ve got your eye on and are really excited about, where you see applications?

Cavalcanti: I’m really excited about the possibilities of actually integrating machine learning and robotics. And I think that’s been in the lab for a very, very long time. There were some pretty hilarious missteps early on, where people said things like, “I don’t need any of this classical education on how systems actually move in the real world; I’ll learn it all from first principles.” And then you have a machine sitting there learning from first principles for decades, and now people are finally saying, “Oh, hey, maybe all of those classical controls that make robots work, that have been worked out for the past 80 years, might be the launching point to actually do the learning we want.” So now you’re finally seeing the field get grounded in the reality of how robots work and in the reality of what machine learning is capable of. You’re starting to see that mix, and you’re starting to see some really, really impressive results.
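One common pattern behind that mix, sometimes called residual learning, keeps the classical controller as the baseline and lets a trained model add a small correction on top. Here is a minimal sketch; the gains are made up, and `learned_policy` stands in for any trained model:

```python
import numpy as np

def pd_torque(q, q_dot, q_target, kp=40.0, kd=4.0):
    """Classical PD control: decades of worked-out control theory in two gains."""
    return kp * (q_target - q) - kd * q_dot

def control(q, q_dot, q_target, learned_policy=None):
    """Use the classical law as the baseline; let a learned model add a correction.

    `learned_policy` is hypothetical here and stands in for any trained model.
    """
    torque = pd_torque(q, q_dot, q_target)
    if learned_policy is not None:
        # The model learns only what the PD law misses (friction, payload,
        # contact), rather than rediscovering physics from first principles.
        torque += float(learned_policy(np.array([q, q_dot, q_target])))
    return torque
```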

Sohi: You talked a little bit about the challenges, primarily from a financial perspective, and how that limits what you do. But beyond that, what are the other kinds of challenges that you’ve faced with Breeze?

Cavalcanti: The robots that we’re working on are inflatable. They’re pneumatic and hydraulic robots where the actuation isn’t from a hydraulic cylinder with a rod that comes out and pushes really hard on something else. It’s actually from these chambers of fabric. In the interest of pursuing very cheap things, you inflate a chamber of fabric, and it moves your robot.

And that means two big things: One is that it turns out actually inflating in only the direction you want is really hard. When you inflate something, it wants to go everywhere. It wants to be a sphere. Spheres aren’t that useful. So trying to harness inflation as an actuation source is really absurdly hard, both from a theoretical perspective and in the reality of like, “Oh my God, it’s just a balloon. Why don’t you do what I want?” Right? So that’s one.
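To see why inflation is still worth harnessing, a back-of-envelope force calculation helps. The sketch below uses illustrative numbers, not Breeze’s actual specs:

```python
import math

# Back-of-envelope force from a pressurized fabric chamber: F = P * A.
# All numbers are illustrative; they are not Breeze Automation's specs.
GAUGE_PRESSURE_PA = 200_000      # about 2 bar of ordinary shop air
CHAMBER_DIAMETER_M = 0.05        # 5 cm effective diameter

area_m2 = math.pi * (CHAMBER_DIAMETER_M / 2) ** 2
force_n = GAUGE_PRESSURE_PA * area_m2    # about 393 N, roughly 40 kg-force

print(f"A 5 cm chamber at 2 bar pushes with ~{force_n:.0f} N")
# A few grams of fabric delivering ~40 kg of force is the "high force,
# very little weight" advantage; the chamber's urge to become a sphere
# is why pointing that force where you want it is the hard part.
```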

But the other part is that we’ve, as a society, built this entire technological tool chain for, basically, things that look like milling machines in the ’50s. Milling machines in the ’50s wanted to remove thousandths of an inch of material quickly and precisely. And so the gear motor setup, a rotary encoder on the back of a motor with a gearbox in front, driving something attached very, very rigidly to your system, has been the modality for all industrial robots since those machines came out.

So you have this system. It’s designed to be really, really rigid at all times. And then it turns out animals and humans are, like, orthogonal to that, right? We don’t care about precision at all. We care about gentle contact forces. We care about balance. And if you take a machine that’s been designed entirely for precision, take it outside, and say, “Pick me that apple off a tree that’s swaying in the wind, plus or minus a foot. And do it now. What are you waiting for?” then what you’ll end up with, if you’re lucky, is a gripper that has speared the apple and is coming back to you. Most likely, you’ve ripped the branch off the tree in the process of getting your industrial arm all the way up there. You certainly haven’t been tracking the way the tree’s been moving, unless you have a very good sensor system that costs far more than the robot arm in the first place.

So it turns out that when you have a system made for precision, it just doesn’t do the things humans and animals do. If you’re going to try to do the things that humans and animals do, if you’re going to try to operate outside, if you’re going to try to do work in unstructured environments, you’ve got to invent everything yourself. Because nobody until recently—in the last 10 or 15 years—has really cared about operating in unstructured environments, operating outside the pristine factory-floor environment. And then, suddenly, you go outside and try to do anything, and your sensors are blind, and your robot damages both itself and the world if it touches something.

Sohi: That’s really fascinating. It’s actually helped me a lot to understand the driving force behind soft robotics. When I looked at it for the first time, it felt like a tangential progression, and I didn’t really understand what it’s used for. But that makes sense. You touched on a lot of stuff there, and given the zeitgeist of mechanical knowledge as you’ve described it—which I would say is really accurate—to some level, you can argue that we built the factory and its machines the way we did because those are really easy to understand, and we want to put order on stuff. And so we’ve always thought of everything else as squishy and chaotic. Now we’re advanced enough to understand that there’s a lot of order and precision to those other things, too. They’re just messy and unbalanced in ways that are not familiar to us.

Cavalcanti: I can give you a visceral demonstration for your viewers.

Sohi: Sure, yeah.

Cavalcanti: So let’s say I’m a precision robot arm, like an industrial robot arm. Tell me how I, as a precision robot arm, would approach picking up this cup.

Sohi: I’ve played with some industrial robots, so I would probably write some really basic code, run through it step by step, and keep adjusting at every step until I got it right. But that would only work if the cup was in the exact same place every single time.
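That teach-and-replay approach looks something like the sketch below; the `arm` interface and the coordinates are hypothetical, but the brittleness is real:

```python
# A sketch of the teach-and-replay approach: hardcoded waypoints, no sensing.
# The `arm` interface and coordinates are invented for illustration.

CUP_POSITION = (0.45, 0.10, 0.02)   # hardcoded x, y, z in meters

def pick_up_cup(arm):
    x, y, z = CUP_POSITION
    arm.move_to(x, y, z + 0.15)     # hover above where the cup should be
    arm.open_gripper()
    arm.move_to(x, y, z)            # descend blindly
    arm.close_gripper()             # hope the cup is exactly there
    arm.move_to(x, y, z + 0.15)     # lift

# Move the cup two centimeters and every line above is wrong:
# with no sensing, there is nothing the program can adjust.
```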

Cavalcanti: If the cup wasn’t in the same—like if it was just, “Hey, you’ve got a table here”—what would the system need to look like to be able to pick that cup up?

Sohi: So complex. It would have to be a person, basically. It needs vision. It needs to have some sense of feedback when it’s grabbing things. And then you have to write a fair amount of complex code. It’s funny you bring that up. I use this as an example sometimes to help people get their heads around robotics: when you move your arm, it’s a completely automatic process, and you don’t even think about it. But now you have to translate all of that, every last element of it, into code to make an inanimate object do the same thing.

Cavalcanti: Right.

Sohi: So what is the answer?

Cavalcanti: So if I were a precision robot arm, I would probably have multiple layers of sensors to figure out where the cup is. Just to locate the cup relative to myself, I’d have a system of encoders—pretty standard on an industrial arm—and some sort of contact sensing. And I would sit here and come up with a trajectory so that nothing touched before I could grab. So I would approach very slowly with my gripper open, and I grab. And if I have feedback, great; I know when to stop grabbing. If I don’t have feedback, then maybe I know I’ve grabbed it when it shatters into a couple of pieces. Or maybe, more likely, I don’t grab it, and then I do this. And all the humans in the room groan. Does that sound familiar, like a couple of robot demos you’ve seen?

Sohi: Yes. I recently tried to make a robot that flips pancakes, and that was a nightmare. We went through a lot of batter.

Cavalcanti: So, comparatively, if you have a soft robot whose job is to understand the forces it imparts on the world and the forces the world imparts back, that’s the basis, and precision is the next step. The problem is a lot easier: you can solve it the same way a human would. If I were blind and I knew there was a cup in front of me, I’d probably do something where I had my hand out and said: “Oh, I’ve touched it. I’m going to grab it.” A soft robot can do the same thing. It’s called a haptic search, where I say: “First, I’m going to make contact with the table. All right, I know where the table is. Now I’m going to keep my gripper open, and I’m just going to go until I’ve made contact with that gripper. Now I know the cup is there, and I’m going to grasp it, making sure I put enough force on the cup that it doesn’t fall, and I’m going to pick it up.”

The cup could be anywhere on the surface, and that algorithm could work—especially if you can say: “Oh, hey, whoops. I know I contacted there, because I have force data from everything, and I didn’t see it in my gripper, so I’m going to back up, and now I have a better idea. Oh, there it is.” When you change that kind of paradigm from, “I have to know where the hell this is to grab it,” to, “I’m just going to bump around, and I know I won’t cause damage,” everything changes about how you can interact with the world. 
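That haptic search translates almost line for line into control logic. Here is a minimal sketch, assuming a hypothetical soft arm whose methods (`contact_force`, `sweep_forward`, and so on) are invented for illustration:

```python
# A minimal sketch of the haptic search Cavalcanti describes, assuming a
# hypothetical soft arm whose sensing and motion methods are invented here.

CONTACT_N = 0.5      # force that counts as "I've touched something"
GRASP_N = 2.0        # just enough force that the cup doesn't fall

def haptic_search(arm):
    # 1. Find the table: descend until contact. Touching is information,
    #    not a fault, because a compliant arm can't do damage.
    while arm.contact_force() < CONTACT_N:
        arm.move_down(0.01)

    # 2. Sweep across the surface with the gripper open until something
    #    pushes back inside the gripper: that's the cup.
    arm.open_gripper()
    while arm.gripper_contact_force() < CONTACT_N:
        if arm.contact_force() > CONTACT_N and not arm.contact_in_gripper():
            arm.back_up(0.02)    # touched it outside the gripper: adjust
        arm.sweep_forward(0.01)

    # 3. Grasp by force, not position: squeeze until the cup won't slip.
    arm.close_gripper_until(GRASP_N)
    arm.lift(0.10)
```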

Sohi: With that in mind and what you’re developing at Breeze, if you’re looking back on this in maybe 10 years, maybe it’s 20 years, whatever it is, what does the vision of success look like to you guys? What is the, like, “Yeah, we did that”? 

Cavalcanti: Cheap robots out in the world doing really hard tasks that current robots can’t do. So, for example, there is no small underwater robot arm right now. If you want robot arms underwater, you have a 5,000-pound minivan-size robot with its own boat. That boat costs $200,000 a day to go somewhere, drop the minivan in the water, and use excavator arms to try to do something. And we have operators who’ve told us, “Yeah, it takes us an hour to see a rock on the sea floor, get the arm over, pick it up, and then put it in a box.” An hour.

Sohi: Seems exhausting.

Cavalcanti: Yeah. It is.

Sohi: Playing the worst video game ever.

Cavalcanti: Right. So human-scale robot arms that can do work underwater just don’t exist. Because when you scale your vehicle down to a size that could easily be tossed overboard from a raft or something like that, the arms are small enough that they snap off if they’re rigid. The underwater environment doesn’t care how you want to behave. If you are rigid, you break, and that’s just it.

Sohi: So Autodesk has been connected to you for a while in different capacities, whether it was with Megabots or with Breeze Automation, and now you’re here at the San Francisco Tech Center. What has that been like? How has this space helped with what you guys are doing? Or has it hindered it?

Cavalcanti: No, it’s been amazing to be here at the Technology Center. We estimate we’ve saved $40,000 to $50,000 in prototyping costs. Because what we’re doing is at the bleeding edge of hardware—like building our own custom valving and putting it into our own manifolds that, oh hey, happen to be structural parts of our own robots, which requires super-weirdo shapes and mechanisms to work. And we can only prototype mass-manufacturable processes, which is the ultimate goal, with real tooling; it’s pretty hard to make a mass-manufacturable thing cheaply with low-cost processes. So being able to use a real milling machine, use a real CNC lathe, use all of these printers that print materials at strengths realistic to what you’d get out of an injection-molding process: I think that’s what’s really letting us do enough hardware iterations to nail down the stuff that already works in theory so that, when you reduce it to practice, it also works in practice. And the more iterations you get, the better your final product is.

Sohi: All right, Gui. This has been awesome. Thanks for being on the show.

Cavalcanti: Thanks for having me.

Sohi: Thank you.

About the Author

The Autodesk Video team creates compelling customer stories and thought-leadership videos.
