AI+HI Project

Yes, And: Building Collaborative AI Through Improv

Episode Summary

Explore how the principles of improvisation — flexibility, collaboration, and adaptability — can inspire the design of human-centered AI systems. Just as improv thrives on the “yes, and” mindset, successful AI must work in harmony with human creativity, adapting to our needs while amplifying our potential. Mickey McManus, senior advisor and leadership coach at BCG, shares his perspective on the parallels between improv and AI design: “Great improv isn’t about stealing the spotlight; it’s about listening, adapting, and building something greater together. AI should do the same — supporting humans, not overshadowing them.”

Episode Notes

Explore how the principles of improvisation — flexibility, collaboration, and adaptability — can inspire the design of human-centered AI systems. Just as improv thrives on the “yes, and” mindset, successful AI must work in harmony with human creativity, adapting to our needs while amplifying our potential. Mickey McManus, senior advisor and leadership coach at BCG, shares his perspective on the parallels between improv and AI design: “Great improv isn’t about stealing the spotlight; it’s about listening, adapting, and building something greater together. AI should do the same — supporting humans, not overshadowing them.”

Subscribe to The AI+HI Project to get the latest episodes, expert insights, and additional resources delivered straight to your inbox: https://shrm.co/voegyz

---

Explore SHRM’s all-new flagships. Content curated by experts. Created for you weekly. Each content journey features engaging podcasts, video, articles, and groundbreaking newsletters tailored to meet your unique needs in your organization and career. Learn More: https://shrm.co/coy63r

Episode Transcription

[00:00:00]

Nichol: Hello, I'm Nichol Bradford, SHRM's executive in residence for AI+HI. For this episode, we're coming to you live from the AI+HI Project 2025 in San Francisco. Thanks for joining us. Today we're diving into how leaders can choose the right AI initiatives and build strategies that are both future-focused and human-centered.

Our guest this week is Mickey McManus, senior advisor and leadership coach at BCG and one of the speakers at our event this year. Welcome to the AI+HI Project, Mickey.

Mickey: Thanks for having me, Nichol.

Nichol: Okay. I like to start with a little background. Can you tell us about your journey in leadership and innovation, and what inspired you to explore the intersection of systems thinking, design, and AI?

Mickey: So originally I was trained as a designer, a product [00:01:00] designer.

So if you make one of something, it's called craft, but if you make a million of something, you actually have to understand how to make it: how the machinery works, how the factory works. But design is kind of an interesting thing. A lot of people just think of it as aesthetics, or some place on the art spectrum. But really, if it's between people and people, we call that social sciences. If it's between machines and machines, or things and things, we call that engineering: how the gears go together, the software. Design, though, is how people and things work together, and how you create a sort of prosthetic to help people see further, reach further; how to create a sense of wonder, whether that's an experience or a thing, a product. And you almost want the product to disappear. So that was my entry point: designing things. It might be mobile devices; I've worked on cars, I've worked on different vehicles. But then, in

2001, I joined this [00:02:00] crazy lab called Maya that came out of Carnegie Mellon. Maya stands for "most advanced, yet acceptable." They were just super systems thinkers, looking at how you find the most advanced technology yet acceptable for normal humans. They had labs, and we had cognitive psychologists and electrical engineers and designers and architects and game designers.

And we were all looking for how to tame complexity. To do that, you have to look at complex systems, so you have to think in systems. That was my entry point, I think, to both human-centered design and to understanding that systems are weird. Complicated systems are one thing; complex systems, you push and they push back in a totally different way.

So we learned kind of secret little rules and levers: how to change a system, or how to shape a system.

Nichol: Well, you know, I love that origin story. I wonder, would you define for our audience what exactly [00:03:00] human-centered design is?

We hear about it a lot, but I don't know if we, or even I, always know what it is. And hearing the story of where it came from helps.

Mickey: Yeah. Well, Maya has been around since 1989; we just celebrated, I think, 30-some years. It was later acquired by Boston Consulting Group, so now it's part of BCG X. At the time, the idea was that some of the complexity could be hidden in software.

You could offload the things that humans aren't good at. Some of the complexity could be hidden in aesthetics, or in form and function. Some of it could be hidden in the way we do things, our mental models and our norms, so we could take advantage of the way the brain works. So we were asking: how do we hide complexity?

Because sometimes you can't get rid of the inherent complexity. If you want the power, you get the complexity for free, whether you like it or not. So how do we balance power and ease of [00:04:00] use? It's very curious. You know, there are great video games like Legend of Zelda. Do you know what the purpose of Legend of Zelda is?

Like, what's the goal in Legend of Zelda? Any idea?

Nichol: Delight and adventure?

Mickey: Broadly true. It's to save the princess. So if it were designed by a usability designer, somebody who tries to make things easy, there'd be a big red button that said "Save Princess." It'd be the most boring game ever, and nobody would play it.

But instead you have to go on adventures. You have to build a team; you have to fight the boss levels along the way. So you're actually learning as you go. And we would have game designers, because if you want to teach somebody something, actually making it fun and playful, and pushing people into that desirable-difficulty space, is powerful.

And if you want to make it really easy, just get rid of half the features and stop doing engineering for engineering's sake. So human-centered design came out of that [00:05:00] concept: that we could actually fit the way humans think, and extend them, and co-create with people. That's a really important part.

So instead of me designing it in, you know, a black turtleneck, whatever, and handing it off to somebody, we go: okay, let's build some rapid prototypes and let's actually have people try stuff. It's a lot about successive approximation: having people build stuff and then talk about it, think aloud, explain what their mental model is.

Does it make sense? There are a bunch of mechanisms for it, but it's all about putting people first.

Nichol: And how does the concept of human-centered design translate now to designing AI-human collaboration? Where do you see people thinking about it in the right way, and what are some of the things you find concerning about how people are thinking about humans and AI?

Mickey: It's really interesting. One of the things I have studied and done [00:06:00] research on for a long time is the nature of technology. How do you nurture it, and what is its nature? What can you always say about technology?

A lot of people who are technologists never actually explore what we mean when we say "technology." But it's the harnessing of some phenomenon in the universe for use. That's the broadest definition. So: lightning strikes, it creates a fire; I pick up one of these logs, and we have a fire, and that keeps us warm.

So we harnessed fire, a natural phenomenon, for human use. And technology always complexifies. Everyone remembers the Cambrian explosion from high school: there were animals with sixteen legs and five legs and so on. That's because when you unlock new capabilities, new phenomena of the universe, you suddenly get this churn, this explosion.

And I think AI is one of those Cambrian moments. It [00:07:00] is an explosion of a new capacity. So we have to play with it. We have to figure out how to use it. We have to explore how to nurture it so that it can grow into something useful. But it's also super powerful. So, this intersection of humans and AI:

We don't need AGI; we don't need artificial general intelligence or superintelligence to do really bad things. It turns out you can have really dumb AI and take down the stock market. The flash crash in 2010 was an example: it was basically algorithms fighting, and they took a huge chunk out of the Dow in minutes.

So we don't need all that to do something bad. But generally, people are kind of amazing and good, and if we can give them capacity and help them feel a sense of ownership and agency, like, "I can do this," it's going to unlock amazing things. I think that's where we're at. And maybe my [00:08:00] last comment on that: even if OpenAI went out of business tomorrow, if Microsoft stopped investing tomorrow, if Google gave up on Gemini, for the next 30 or 40 years we would still see an explosion of new things no matter what, because humans can't help themselves. We're going to invent and we're going to play. So how do we put some good guardrails in place to make sure the bad actors aren't there, but also enlist a lot more good actors, and really help grow something we could never do before?

Nichol: I love the way you sometimes describe AI-human collaboration as a sort of jazz improv. Can you talk about that, and how that metaphor might help leaders think differently about integrating AI into their roles?

Mickey: It's really interesting. There are some amazing cognitive science studies about how jazz performers

learn how to be great improvisational players. You can see the same in acting with improv; look at Tina Fey and The Second City, also legendary for this. But they've done a ton of study on jazz musicians: how they learn by repeating the greats, you know, "Take the A Train." First they copy, and then they start leaping into new spaces. And they found that when people are collaborating, the drummer doing this, the violin doing this, the bass guitar doing this, you have a solo moment where one of them gets to try something out that night. They're going to do something cool, like Questlove doing some crazy drumming.

They found in these brain studies that as people were collaborating, they turned off the fact-checker, and they found this very reliably. There's a fact-checker that's turned off when you're asleep; it's a way for you to simulate. I almost got hit by a bike today: when I sleep, my brain tries an elephant, it tries a car, it tries to see if I could fly.

And we th we see those as dreams, but in the morning, it consolidates to the things that give me a new, a new [00:10:00] survival skill. And they found that this fact checker turned off during jazz, that you hear somebody going in some weird direction. It's not hitting the caco, the, the drum beats that you expected or the.

It's kind of sounding caco a little bit. Um, but you go with it because it's collaboration. You gotta take whatever they offer and then you gotta play with it. And so they play with it. And you play with it one night and maybe you're just like, okay, that was weird. You know, and they turn the fact checker back on, they find a beat and they keep going, but sometimes they sleep on it and they're like, oh, what, what Bono was doing there was really interesting, or what Wynton was doing there was really crazy.

I want to play with it. And that's how we actually grow our brains: we collaborate. And I think AI is giving us a bunch of offers; everything is an offer in improv. It's like, if they're doing that thing and I'm not a drummer, I might be a singer, I might be a bass guitarist or something, and I've got to play with that.

And so I think if we think of AI as this improvisational space, [00:11:00] it's giving us these offers. We don't have to take them. But if we say "but" all the time, if we're like, nah, that's okay, we won't grow. Whereas if I go, "Nichol, amazing, you have four extra arms today. What happened?" and you say, "yes, and..."

Nichol: They sprouted last night.

Mickey: Right?

Man, it's going to be amazing when you wear that angel costume tomorrow. So we start building and we start to have fun. And we're starting to actually see this. There's a great new paper out that Ethan Mollick at UPenn and Karim Lakhani at Harvard did with Procter & Gamble, this thing called the cybernetic teammate, where they gave people no AI for a task, a real task inside Procter & Gamble.

Then they took a team with no AI and said, take this task. Then they did a team with AI, and an individual with AI, so they could compare all four conditions. What they found was that the individual with AI outperformed the individual without. The team [00:12:00] without AI did worse than everybody, because coordinating in a new team across different jobs and disciplines is kind of complicated.

And then they found that the team that used AI outperformed everyone, and they had more fun. They actually measured joy in the process, and it was because the AI actually helped them learn about different things. They could have the AI emulate what the operations person would worry about, so they could learn from each other. But they played.

So I think there's a lot more study of this we have yet to do.

Nichol: I think we're just really at the very beginning of how AI can support teams of humans in discovering new things. Let's go to basics, to people getting started, because our audience is pretty broad. Many of them are at different parts of their journey, and many of them are being asked about their AI strategy for HR.

So [00:13:00] when someone is asked about AI strategy, whether it's for HR or for their entire organization, how do they begin to identify the right projects to pursue, especially when there are so many possibilities, and so many vendors telling them what their priorities should be?

Mickey: You know, I think there is this confusion that you need to have a tech strategy or an AI strategy. I think you need to have a strategy, full stop: what is your strategy? We have this cartoon at BCG showing that strategy has to be at the center. Then there might be outcomes where you go: oh, if this is my strategy, my value-creation or value-capture strategy, what's the outcome I could achieve with AI that I couldn't before?

So from an HR team, I would say: what are you really aiming towards? Where are you going? What would be wild success that you never could have imagined? Could one person handle 5,000 people? What would a [00:14:00] swarm of Nichols do if they were all enabled?

So what's your strategy? Do you want to serve a million more people? Do you want to enter a new market? Do you want to handle an aging workforce? Whatever it is, that's your strategy, broadly, and how to move the needle. And we say: you've got to move the needle. Don't waver. Don't take on a project that maybe helps reach more people and maybe lowers the cost a little bit, so the needle just vibrates.

You're really asking: how could I dramatically increase this? Because it forces you to be revolutionary, a little bit, instead of evolutionary. If you just say "move it a little bit," people will do the same thing; they'll just deploy what they're doing and try to add AI. But if you say, no, I want to go from 22 days to 22 seconds?

Oh, we know we can't get there from here using normal ways. That's a really big needle move. And that's a great example: Prudential did this. They went from 22 days to write a piece of insurance to 22 seconds, [00:15:00] building a data factory and thinking through how to use classic AI and generative AI, but having that bold strategy first, and then saying: okay, how do we test and learn quickly? What would be our rapid prototype for this? And how do we start with a small team? I think that's the second part. Don't try to boil the ocean; don't say we're going to do it everywhere. Get a small team and ask, how could we do this?

Have conversations and co-create with people. Have them help you solve it, because people are amazing. I think that's part of it. Once you've done that, and you've got a small team getting it, then you can go: okay, we've built some stuff, we've learned some rules of the road.

We're getting muscle memory; we're getting better. We're starting to learn how to play together as a jazz band; we're figuring it out. Now when you say "I want to extend it," you've got some evangelists. You've built a sense of "we can do this; we did this." It's actually part of your culture. And cultures are always becoming, you know; they're not fixed.

[00:16:00] It's like: yeah, Bob did that. I can't believe Bob did that. Well, okay, at least it wasn't somebody outside. So part of it is that we are always becoming, and we're always faking it a little bit until we make it. We're always inventing ourselves and our cultures, and I'd encourage people to do that.

Yeah.

Nichol: Yeah.

One of the things you said sparked a question for me, about going from 22 days to 22 seconds. Is there value in adding a constraint like that when we're trying to do moonshot thinking? I'll give you an example. One of my coaches does a thing around 10x-ing whatever you want to do, and it's not pure moonshot 10x-ing, Google X style; this is Strategic Coach. The idea is that if you make, say, $20 for [00:17:00] something, you would need to make $20,000 for that same thing. It forces you, because you can't make the $20,000 from here.

So how do you see constraints like that as they relate to moonshots?

Mickey: It helps. I think you have to force yourself. They might be artificial constraints, but they force you to think differently about it.

Another thing I've seen work really well, and these constraints help a lot: Charles and Ray Eames, the designers from the 20th century who made most of the furniture you've sat on and all sorts of things. Somebody asked them, how can you work under so many constraints? And they said: how can you work without them?

Hmm. Like, we need them to actually force us to be creative. So I think that's part of it. But another mechanism that works really well is if I say: hey, in five years, 50% of the C-suite is a [00:18:00] swarm of AI. How did we get there?

Nichol: Go on. I mean, I think that's a good framing. If 50% of the C-suite is a swarm, which I think we would have to explain to people, would you define what a swarm is?

Mickey: Yeah.

Well, a lot of people right now have been hearing about agentic AI, agent-based AI. That's saying: I want to spawn an agent to go wander around and do some things, and come back and implement some things. I think people still keep thinking of it as an agent.

Like, I'm going to give my assistant something to do, and they're going to come back and do it; or I'm going to task somebody with it. But when we think of AI, I love Kate Compton's definition of generative AI. She's the head of the future of play at Lego, and she said AI is somewhere between a hammer, the ocean, and a swarm of bees.

[00:19:00] It's not what you think it is. So when people say "agentic" or "a swarm," it might really be more like a swarm. It might be a million Nichols all exploring. It might be 50 Mickey-Nichol hybrids that, because we've worked together so well, have become a kind of amalgam that goes out and does things.

So I think there's this opportunity, and none of the user experiences are built for this yet. Everyone's still doing a chat interface. It's sort of like seeing the very first iPod and thinking that, by itself, will be the future. There's so much to do. But if we really start thinking about this, it's almost like I could have a crowd of one, just me being a crowd.

Because I could spawn lots of different agents to try things, even different models. So that's what I mean by that. But go back to that question: in five years, 50% of the C-suite is a swarm of AI. I don't know which members of the C-suite are a swarm of AI, but if that were true, [00:20:00] if I say that happened, Nichol, in, you know, 2027 or 2030.

How did it happen? If I ask you how it happened, it turns on a different part of your brain, this storytelling part, and you're like: well, maybe first P&G did this, because they doubled down on that cybernetic worker and found that it could work, and then people built some better tools. Whereas if I instead said:

And then, and people built some better tools. And this, if I instead said. Tell me how we would reduce the number of human C-Suite members. You know, looking forward, you wouldn't come up with as many ideas. There's this thing called, um, prospective hindsight where you run into the future and then you kind of have hindsight.

It triggers a totally different part of the brain; it triggers storytelling. Like if I said, oh, my son just fell in love with this woman from Mars, you would immediately be [00:21:00] like: okay, how did she find him? What happened? What do you mean, a woman from Mars?

And you'd start to think about it, because we're just social animals, and we come up with stories all the time.

Nichol: So, going with that a little bit: starting today, how should organizations begin preparing for a hybrid leadership model where AI is more integrated into decision-making? What should we do today?

Mickey: Well, I do think it's kind of interesting. I think one of the other speakers today mentioned that the CEO of Shopify put out a note this week saying, you know, before I even give you one more headcount, I want to see how much brainstorming you've done with an AI, things like that.

And you think: basically, this is table stakes. You've got to be using it. I'm using it every day. I think a lot of the modeling has to come from the leaders. If the leader isn't actually engaging and playing with it themselves, it doesn't work. Just play with it and learn from [00:22:00] it. If you're just saying, "go do it because I said so," do what I say, not what I do, I don't think it works.

Nichol: Yeah. Microsoft's research, even last year, their 2024 research rather than the 2025 research, showed that organizations that knew the C-suite was using AI themselves had higher uptake than ones that didn't know. If the C-suite was using it but not telling anyone, and it wasn't part of the communication strategy, that was not as effective as, you know, Jamie Dimon saying, "yes, I use it."

Mickey: Or Reid making an AI of himself and then playing with it.

Nichol: Right.

Mickey: Right? So if you are playing with it and learning, then you're actually engaging in the jazz.

I also think, you know, back in the companies I used to run, I insisted that all the leaders, all the directors, had to go take improv classes. [00:23:00] Carnegie Mellon has one of the best theater degree programs, so we would bring in theater improv folks.

And at Stanford there's an amazing one-person team who teaches improv. When I was at Autodesk, we'd bring them in and have them actually help us, so that leaders were learning how to play, how to build on other people's ideas, and how to kind of say "yes, and."

Everything's an offer. And I think actually doing that would be useful even with AI, because when you start learning those tricks, you start applying them everywhere.

Nichol: Yeah. Do you have any great examples from your work with BCG of leaders and AI systems complementing one another? How are organizations getting it right or wrong when they combine them?

Mickey: Well, there's a lot there. Among organizations, we're seeing laggards, people who are just sort of trying to figure it out. Some of [00:24:00] them are saying, "don't use it at work," and then you get shadow AI; people are still going to use it, because it's built into everything.

The people who are just deploying to start with are saying, oh, Microsoft's got this Copilot, or Google's got this Gemini thing, and they're just rolling it out. They're not really getting the value, because this is a new world and there's new science to be done; it's a new frontier. Then there are groups that are re-imagining the workflow. That's the middle group, and I think they're the ones making real progress. They're saying: wait, how would we do the workflow differently? How would this team work with that team? If it's a fast-moving consumer product, how would we build something different to reach our partners, the retailers? How would we play with it? And then there are the leaders, and they're not always digital natives; these are some classic companies. They're basically saying: we're going to invent our way into this. We've got so much data that nobody even knew about, data that's [00:25:00] not in any of the training models because it's not on the internet. We've got a hundred years of information and all this tacit knowledge in our people. And they're inventing entirely new streams.

They're inventing entirely new offerings. So I'm seeing that happen. But this idea that it's a new space means: do your own science. I think we have to bring an actual scientific method to this, because we can test and learn; we can do A/B tests. Like the cybernetic teammate study I just mentioned, the one Ethan and Karim did with P&G: they experimented on themselves. BCG did that too. We took 700 consultants and said, we're going to have 17 tasks. Some of the tasks are classic for BCG analysts and consultants: some are creative, and some are root-cause-analysis business tasks.

So we took those 17 [00:26:00] tasks and 700 BCGers, and we recruited real scientists, social scientists from UPenn, MIT, and Harvard. We had them structure a good protocol for the science, so we could control some variables. One group of people, we didn't give access to AI at all; we just had them do the tasks.

That was our control. Another group, we gave an hour of training on using a GPT-4 model. And then we took a third group: we didn't give them any training, but we did give them access to the AI. So we had these three groups, and what we saw was that the most junior people, once we had baselined all the analysts, were lifted up by 43%, to be almost as good as the rock stars. So it became a leveler; it was like, okay, wow, we can level people up. We also found that individual creativity was better, but [00:27:00] group creativity was worse. And we believe that's because we only gave them one model, just like if I only had one other teammate and they went to this one school with this one mindset and these norms.

Nichol: Right. Whatever the cultural norms of any model are.

Mickey: Yeah, just like a person. So we know that hybrid vigor, having people with different cultures and backgrounds and ages, actually leads to outperformance over and over again; there are tons of studies about it. Hybrid vigor comes from nature.

You know, if you're a monoculture and the world changes, the species dies, right? So that's part of it, we believe.

Nichol: It seems that many of the organizations having the easiest transition to AI are innovative organizations.

They've just picked this up; it's a new thing, and they have a hybrid-vigor way of doing things. They have a belief in expansion; they grow from expansion, and they tend to be growth-centered [00:28:00] organizations. Those are cultural traits often found in innovative companies.

Do you think this age of AI is going to force the non-innovators to start behaving that way? Will the competitive market force them to change in order to stay competitive?

Mickey: Well, I do think it is. It's a big, complicated market, and there's both serendipity and luck, whether we like it or not, and then there's skill. You can say you have a strategy, but when you're a CEO, you can set the culture, you can set the tone, and if you do it well you set the vision, so that everyone's aligned.

And then you give people lots of autonomy, because you don't know how to get there. You say, "I want to get across the river"; you don't say, "build me a bridge." That would be...

Nichol: Sometimes they say, "build me a bridge."

Mickey: But if I say we need to get across the river, one teammate might go, let's build a [00:29:00] trebuchet and throw people across the river.

Let's build a boat; let's build a bridge. They have the ground truth. So sometimes a good leader can't shape that much. But I know when I do coaching for senior leaders, I'll hear all the things they can't do. They'll say, well, I don't really have control over that. And I'm like, well, tell me what you do have control over.

Because you have more control than you think. I think we almost abdicate control, even as leaders; it's hard not to. So we have to always be reinventing ourselves. It might be that the companies that haven't already built the muscle you're talking about, which is really a vitality muscle, how do we stay vital, are going to have to build it now. They might also just get lucky, you know?

And they might be in a fine location and the world works with them, 'cause I don't think we can discount luck either. What I like about using AI and using hybrid vigor and these kinds of things is you increase your luck by actually being able to explore a bigger space.

Nichol: [00:30:00] Yeah. I was just about to ask you about increasing that. And for our last question, um, we like to end with practical advice.

So in this age of AI, how does a leader increase their surface area of luck? Uh, what are the practical steps they should take with AI to increase their AI luck surface area?

Mickey: Yeah. Oh boy. I don't know.

You know, I think, um, sometimes, and you're doing it even here at this conference, you're getting people to, like, mash up with people that don't know each other. You know, they're maybe from a different industry, they're doing different things. We can learn from that. So I think a part of it is shortening the connections.

And then a part of it is doing really rapid tests. But if you don't do it with some scientific method, you don't know if it was luck, you know? And so if you start to see an output that looks like something's happening, then double down. But, I think we need to, uh, it's weird that I notice [00:31:00] that everyone kind of forgot how scientific protocols work from high school.

You know, we all learned it. Um, but I think we've gotta actually become more, it's weird that I'm saying more scientific to increase luck, and also more improvisational, but I think we need to do both of those. And I would encourage people to look for shortening those connections. At BCG, one of the things we had was a Slack channel called Creative Technology Club, and this was before GPT came out.

Um, and so only like 12 people were in it, but they were, like, exploring spatial computing and this and that. And, uh, one of the guys, um, was playing around with this thing called Runway ML, but he was feeding it West Wing episodes and having it rewrite West Wing. It was not even visual. He was just using it as a tool.

Um, and this Creative Technology Club was just the Slack channel. And, uh, then GPT hits, and of course the folks that were already exploring this were doing experiments that weekend. And then we got the head of AI [00:32:00] ethics to join the channel. And so every time we had some crazy new thing, we were like, hey, could you get us an enterprise version so that we could have a sandbox to play in?

'Cause of course, you know, we've got very privileged data, and we don't wanna be training somebody else's program on it. And certainly we can't with our clients' data. So suddenly that ballooned, and it's like thousands of people. And as we're doing it, we're getting sandboxes and we're experimenting.

And then we've had hackathons, and what's happening is this person that was playing with Runway ML, he's a filmmaker. He was from Maya, um, and just a really fascinating person, but we had him help us tell stories with, uh, with movies and stuff. He's now, like, a rockstar, known around the world at BCG, but he was hidden in Pittsburgh in this little role as a filmmaker. But because he was so excited, he grabbed ahold.

And so I think the other thing is we need to create ways to find weak signals within our organization, even our partners. Like, [00:33:00] if we can find a partner who's actually playing with this stuff, let's grab ahold and do it with them.

So I think that's a part of it. When I was at Autodesk, um, we were looking at generative design for Honda, and we went to Japan, and we were, like, playing with what happens if you could just define goals and constraints and spawn thousands of agents to come up with a better engine and a better cooling system. And because they were motivated, 'cause they're trying to reduce weight, they're trying to figure out how to do this, we were motivated together, we pulled each other. 'Cause sometimes you lose momentum internally, and so it helps having sort of some external pressure.

Nichol: So the practical advice is really get a crew together and play. Like, the number one thing you can do: get a crew together and play. Other things you've said: really understand that you have more control than you think you do, and also, you know, don't limit your thinking to believing that you'll only have [00:34:00] one agent or there's only one way to use AI, that actually there's many that are available to you.

And by getting a crew and playing, you'll find...

Mickey: And that sensor network, you know, a system is only as good as its organs. So how do I discover that rockstar that's hidden, because they're three levels deep and they're over in the corner, but they're excited? Let's double down on them, because they might be the future of our company.

And so I think that's important. Um,

Nichol: Great. Well, thank you so much, Mickey. I so appreciate you. And that's going to do it for this week's episode. A big thank you to Mickey for sharing his expertise and deep insights with us, and to our audience, we hope you'll join us at the AI+HI Project in 2026. Till then, sign up for our weekly newsletter at SHRM.org/AIHI, and see you next week.

[00:35:00]