In the rapidly advancing world of AI, designing systems with equity at their core is essential for driving meaningful and inclusive change. Lisa Gelobter, founder and CEO of tEQuitable, emphasizes that “it’s about thinking expansively — ensuring that communities aren’t left on the outside.” Explore how organizations can create AI systems that prioritize fairness, inclusivity, and representation, while addressing systemic biases to foster workplace cultures that truly work for everyone.
Subscribe to The AI+HI Project to get the latest episodes, expert insights, and additional resources delivered straight to your inbox: https://shrm.co/voegyz
---
Explore SHRM’s all-new flagships. Content curated by experts. Created for you weekly. Each content journey features engaging podcasts, video, articles, and groundbreaking newsletters tailored to meet your unique needs in your organization and career. Learn More: https://shrm.co/coy63r
This episode is sponsored by Brightmine.
Ad Read: [00:00:00] Navigating constant legal and regulatory changes is challenging. Brightmine HR and Compliance Center provides trusted, proactive updates, leading practices and tools to help you reduce risk and strengthen your HR strategies.
Using AI Assist simplifies research and decision-making,
empowering HR teams to act with confidence and reduce the time spent on manual searches. Request a quote at brightmine.com.
Nichol: Welcome to The AI+HI Project. I'm Nichol Bradford, SHRM's Executive in Residence for AI+HI. Thanks for joining us. This week we're exploring AI for inclusion and ethical innovation in the workplace. Our guest is Lisa Gelobter, CEO of tEQuitable and a pioneer in both tech development and workplace transformation. From building the tech that made the internet move at Macromedia with Shockwave, [00:01:00] to helping launch Hulu, to reshaping workplace conflict resolution with AI, Lisa's career sits right at the intersection of systems change and human empowerment.
Lisa, welcome to The AI+HI Project.
Lisa: Hi. Thank you. Thank you. Thank you.
Nichol: I'm so thrilled to have you. I've been looking forward to this. Um, I, I'd love to start with background. So let's start with your journey in computer science. What inspired you to concentrate on AI machine learning so early, and how did that shape the way you think about using technology for impact?
Lisa: Yeah. You know, it's really interesting. I didn't have AI and ML on my radar as I was going through my studies, uh, in college. There was actually, uh, the only woman professor in computer science at the time, and I ended up essentially getting a TAship and a summer internship with her on an NSF grant.
And it was the first time she was gonna teach a class [00:02:00] on, uh, Lego robots. So basically it was artificial intelligence, but based on a robot that we made out of Lego. And so I did a bunch of research with her, and there was just a moment. It was like four o'clock in the morning, 'cause you know, we're just up trying to get stuff done.
And I was trying to teach the robot to seek light, right? To figure out whether something was darker or lighter, and just self-correct and learn. Uh, and I was just really frustrated, and I was like, somebody has to figure this out. Uh, and my aha moment was: oh wait, no, nobody has. Like, I could invent something.
And that was the really cool part of it, realizing we were at the cutting edge back then of what's possible, and there are no definitive answers. And so that was a really, really incredible moment for me, of like, oh, that's cool.
Nichol: And it also speaks to why [00:03:00] exposure is so important. You know, you took that class and you had that moment, and that shaped your career, the direction that you went in, and you've been an exemplar for so many people. So, um, you've gone from helping lead Hulu to tEQuitable; your work crosses between media and government and social impact.
Could you explain for our audience the through line that connects these roles in your mind, and how that helps with workplace transformation?
Lisa: Absolutely. I do wanna say, just in terms of, uh, I don't know, you didn't use the word role model, but that idea of representation, um, it sounds like my progress was smooth throughout the course of my education. Uh, but it turns out it actually took me 24 years to finally graduate from college.
So, uh, so yes, [00:04:00] I studied computer science early on, artificial intelligence, machine learning early on. But, uh, let me tell you, it was a journey to get it all the way wrapped up. Um, but in terms of how I've ended up here and what my career has looked like, in a lot of ways it does seem like an intentional progression.
Uh, but it wasn't. Um, and it's one of those things where at every new opportunity you think, well, what am I gonna learn from it? Right? There was a lot of decision-making that went into it: well, what's that opportunity? What am I gonna learn? Is it gonna be exciting? Is it gonna be interesting?
Will I make enough money to support myself? 'Cause obviously, 24 years to graduate from college, uh, finances were a factor. Uh, support myself, support my family. But look, I've been fortunate enough to work on some very transformative technologies, right? So Shockwave literally made the web move and introduced animation, multimedia, interactivity to the web.
Uh, I helped launch [00:05:00] Hulu. Um, I ran digital at BET, Black Entertainment Television, under Viacom, uh, sister network of MTV. Um, and then I went to work at the White House under President Obama, uh, where I served as the Chief Digital Service Officer for the US Department of Education. And for me it was really there that I finally internalized and fully grokked how much we really could harness technology to solve what had previously been thought of as intractable problems.
Right. How do you make systemic-level change? How do you make societal-level change? So ultimately, when I was at Ed, one of the things I was lucky enough to work on, uh, was a project called the College Scorecard, which in just over three years was credited with improving college graduation rates in the US by a point and a half, right? So that's the dream. How do you have that kind of significant impact at scale?
Um, and so as I was leaving the [00:06:00] administration, right, trying to decide what I wanted to be when I grew up, because it is a journey, I was like, look, if we can send a Tesla Roadster into outer space and create space debris, right, maybe we can use some of those same best practices, innovative strategies, product development approaches right here on our home planet to solve some of the issues for the underrepresented, underestimated, marginalized communities, but really more importantly for anybody in the workplace, right? In terms of equity, in terms of fairness in the workplace, right?
So not necessarily about diversity, but really, how do you get everybody to succeed? Right? Actually, the tagline for tEQuitable is "work culture that works for everyone," right? And so that idea, I mean, there's so much there, especially as you think about the tech industry and the opportunity to, uh, close wealth gaps, to actually have people set up for success in their jobs, [00:07:00] that can make a huge difference. Not just locally, not just to the person, not just to the company, but frankly to the GDP. 'Cause at some point, right, we are running out of people to do the jobs that need to get done. And so if people can be really set up for success in them, that was the mission.
Nichol: Mm-hmm. Well, there are three really amazing things that I just wanna circle back on. One, you know, my path hasn't been a straight shot either, and I think that everyone's path is not a straight shot. You know, you see those cartoons, a little drawing of what you think it looks like versus what it's really like, and it's a curlicue.
And I think, interestingly, what we're learning in the age of AI is that so much success has to do with adaptability, the ability to adapt. Also, a big part of what this future of work is going to look like [00:08:00] is, um, you know, what we're passionate about, and that's also aligned with what's changing.
And so I just wanted to reflect that back. The other thing that you talked about that's so important is culture, and I love the way you're thinking about culture, because what we're seeing is that this AI change is not a Windows 95 upgrade. It's a social, cultural, behavioral change.
And the companies that are succeeding are the ones working on that. And then that last part too, that I love: I'm very excited about the age of AI because I feel like we finally have the tools to actually support societal-level change and, um, you know, support for many different types of people.
Um, so I love all that and I just wanted to reflect it, but I do wanna talk about tEQuitable. So what does it mean to [00:09:00] design AI not just for efficiency, but for equity? Like, how do we see these systems driving better outcomes for employees and organizations alike?
Lisa: Yeah. You know, I think it's a fine line and a precarious road that we gotta follow. I think the problem is, right, garbage in, garbage out. Right? So as AI is learning, or being trained on, you know, large data sets, or even learning from how people are using it, um, it can start leaning in a certain direction.
And so I feel like there are a few things in terms of making sure that AI is grounded in equity and really inclusive of all kinds of communities, right? So I think that, um, thinking about your sources, thinking about making sure that [00:10:00] there aren't, again, communities that are on the outside, right?
So one example was, I dunno if it's still true, but when you used to Google, um, like a wedding dress, it would provide you with a bunch of white wedding dresses. Except in Chinese culture, actually, they're red, right? So it's that kind of thing of really thinking expansively about the scope.
And the other thing that I think is really important is, as we think about using AI within HR, within people teams, thinking about how do you compensate for what might be bias, right? Historical bias. So for example, right, if I'm using it to look at all the folks who've been promoted in the last 10 years, uh, and what are their commonalities, what skills did they have, what made them look alike,
so that I can then look for more people with that [00:11:00] pattern and, you know, figure out who is likely to be promoted now. If you only promoted a certain type of people over the last 10 years, right? If it's all white men with MBAs, um, that's not necessarily gonna be skills-based, right?
It's gonna be patterns-based, uh, which is maybe not really representative of what you should be looking for, if that makes sense.
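Lisa's promotion example, a model pattern-matching on a historically skewed record, can be sketched with toy numbers. Everything below (the group labels, the rates, the 1,000-row synthetic history) is invented purely for illustration; it is not tEQuitable's or any vendor's actual model:

```python
import random

random.seed(0)

# Synthetic "last 10 years" of promotion history (hypothetical data).
# In this made-up world, promotions went overwhelmingly to one group,
# regardless of measured skill.
history = []
for _ in range(1000):
    group_a = random.random() < 0.5           # demographic proxy, not a skill
    skill = random.random()                    # the actual job-relevant signal
    # Biased historical process: group membership dominates the outcome.
    promoted = (random.random() < 0.85) if group_a else (random.random() < 0.10)
    history.append((group_a, skill, promoted))

def promotion_rate(rows, predicate):
    """Fraction promoted among rows matching the predicate."""
    matching = [r for r in rows if predicate(r)]
    return sum(1 for r in matching if r[2]) / len(matching)

# A naive pattern-matcher "trained" on this history learns the proxy:
rate_group_a = promotion_rate(history, lambda r: r[0])
rate_group_b = promotion_rate(history, lambda r: not r[0])
rate_high_skill = promotion_rate(history, lambda r: r[1] > 0.7)
rate_low_skill = promotion_rate(history, lambda r: r[1] <= 0.3)

print(f"promoted | group A:    {rate_group_a:.2f}")
print(f"promoted | group B:    {rate_group_b:.2f}")
print(f"promoted | high skill: {rate_high_skill:.2f}")
print(f"promoted | low skill:  {rate_low_skill:.2f}")
# The group gap dwarfs the skill gap, so any model fit to this history
# will rank future candidates by the proxy, not by skill.
```

The point of the sketch: nothing in the pipeline is malicious; the model simply reproduces whatever pattern is strongest in the data it was handed.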
Nichol: Yeah. It reminds me of several years ago, when we were first starting to see some of the early problems with these systems: there was a job board that did not show CEO roles to women, because of exactly what you described.
Lisa: Yeah.
Nichol: Yeah.
Lisa: And unfortunately, you also know that Workday is now getting, um, sued for, I guess, discrimination in how they're actually sorting out interviews. I don't know anything about it, so I probably shouldn't actually talk about it. But, [00:12:00] um, yeah, the tools, unless we're being really thoughtful and conscientious, it's gonna result in a lot of people getting left behind, just like the digital divide.
Nichol: What I'd like to move on to, and it's something that, you know, you and I have talked about before and we really share, is that AI should empower people and not replace them. So how does that principle show up in tEQuitable, and what does human-centered AI look like in practice when you're addressing workplace bias or conflict?
Lisa: Yeah. Well, first I actually wanna touch on the empowerment that comes with it. So for me, I think there is an incredible opportunity for communities that haven't typically had access to resources to really be able to harness AI to create things they wouldn't have otherwise been able to create.
I think there is an incredible opportunity for, again, for communities, for folks who are entrepreneurs, right? There's Black communities, [00:13:00] certainly, in terms of, right, they see a problem and they solve it, and now that they might have tools, they could actually think bigger picture and get more stuff done. I think that's an incredible opportunity.
And so I would love to make sure that, across the board, people aren't afraid of AI. Um, and I think, you know, younger folks, they're gonna come up with it as part of their education, as part of their growing process. And so making sure that folks aren't scared of it, and really teaching them how to harness it for things they wanna do, I think could be incredible. So, for example, one of the ways that tEQuitable uses AI: first of all, tEQuitable is a third-party, tech-enabled organizational ombuds platform, right? So we provide conflict resolution to employees, and then, using the data that we get, which is anonymized and aggregated,
we try to identify, right, systemic [00:14:00] issues, issues that might be camouflaged, and surface them to the organization, right? Because what we're really trying to do is work on both sides of the equation. We wanna empower and support employees, but we also wanna help organizations get in front of and address issues before they escalate, right?
We're trying to create this virtuous cycle, right? And it spans the gamut, right? It doesn't have to be this big, scary, terrible thing. It can be: my boss made a sexist crack, or I have a coworker who's always taking credit for my work, or somebody in a meeting spoke over me and I didn't feel heard at all.
Right? So those are the kinds of things which, again, are not the totality of the other person. I'm never gonna go to HR for that, right? Because that feels like the nuclear option. I'm not trying to get them fired. But I would like the behaviors to change. And so that's what we're trying to do: again, set employees up for success, and then through that, get access to data that nobody else has had access to before.
Right? Because it's not an employee engagement [00:15:00] survey. People are coming to us, they're not shouting into the wind; they're coming to us because they're getting advice. They're getting a real-time, oh my gosh, what do I do in this moment? And so that's why people are coming. And then we're getting to use that to, again, illuminate, shine a spotlight on some of the issues within an organization.
Um, and so one of the key things for us is that, given the sensitivity of the materials we're working with, that we're engaging with visitors on, we will never pretend that our AI is a human. Right? So much of the work that we do is based on EQ, is based on empathy.
We can't afford hallucinations. We can't afford to not be really, really thoughtful around their particular situation. And again, you know, the thing that people talk about from an AI perspective is, look, as long as there's something out there that already exists, that has done something like this, AI [00:16:00] can be pretty good at it.
But the whole point is, so many of these situations are in fact unique, and coming up with actionable, right, immediately usable suggestions or possibilities is something that requires a bunch of creativity that, again, doesn't follow patterns that have already existed. And that's one of the things we really pride ourselves on, right?
That's our expertise, uh, being able to do that. Um, and look, not for nothing, I have in fact done some trial runs, some tests, to see what would happen if we used AI. And the truth is, they're like 101, super basic, and honestly not great at giving those kinds of suggestions. And so for us, the way we do use AI, however, is we have a library of content, um, that experts have written.
Uh, and so if a visitor comes to the platform and they enter a story, we [00:17:00] use AI to evaluate that story and recommend appropriate pieces of content that might be really good matches for them, from within our library, right? So we're not counting on the AI to give them advice.
We're counting on the AI to do some data analysis and direction, if that makes sense. So trying to figure out how to strike that balance between being a utility and really streamlining things, but then also making sure that you are serving the needs of the user.
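The retrieval step Lisa describes, matching a visitor's story to expert-written content rather than generating advice, can be sketched with a simple bag-of-words similarity. The library entries and scoring method below are hypothetical stand-ins, not tEQuitable's implementation; a production system would more likely use learned embeddings:

```python
import math
from collections import Counter

# Hypothetical content library: title -> short keyword description.
library = {
    "Responding to credit-taking coworkers": "coworker taking credit for my work ideas",
    "Being heard in meetings": "spoken over interrupted in meetings not heard",
    "Raising concerns about a manager's comments": "boss manager sexist comment remark",
}

def bag_of_words(text):
    """Lowercase word counts; a crude stand-in for an embedding."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(story, k=2):
    """Return the k library titles most similar to the visitor's story."""
    story_vec = bag_of_words(story)
    scored = [(cosine(story_vec, bag_of_words(desc)), title)
              for title, desc in library.items()]
    return [title for score, title in sorted(scored, reverse=True)[:k]]

print(recommend("a coworker keeps taking credit for my work"))
```

The design choice mirrors what Lisa says: the AI only routes people to vetted, human-written material, so a bad match surfaces an irrelevant article rather than hallucinated advice.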
Nichol: Yeah, so a couple of things. One, AI+HI is one of our platforms, but one of our other tent poles is civility, because of the amount of conflict that actually happens in the workplace. A lot of it is political. Um, but there's, you know, there's more than that, because we spend so much of our time at work that, you know, just by location,
um, many of our conflicts also happen there. So,
Lisa: [00:18:00] I think there's a study that talks about how employees spend 2.8 hours per week on conflict.
So,
Nichol: Yeah.
Lisa: it's, it's super time consuming.
Nichol: It's super time consuming, and to your point, it's not something that you want a hallucination around, and to be able to be served a library to support one in doing that. And, um, you know, then I also suppose a bit of role-playing, uh, could be done where people get to practice.
Um, but it's really critical. Well, so I'd love to hear your thoughts on younger generations, 'cause we've spoken about that a little bit before. So, you know, younger generations, they're coming into the workforce, and, you know, [00:19:00] they're having to learn, uh, how to work.
Uh, we see AI potentially breaking down, um, the early part of the career ladder. And so, you know, what have you learned from the young people you've served about how we can better help them approach AI, and also what's happening in the workforce? And how can we be responsible with that?
Lisa: Yeah, I think it's a great question, and I think a lot of people are struggling with it, right? Because so many people are saying, but if, you know, our kids use AI, when will they learn to write? When will they learn to do this skill or that skill? Now, the fact of the matter is, I gotta tell you, my father didn't want me to use a calculator for a really long time,
'cause the whole point was for me to know math. Um, and now nobody even thinks twice about math, right? They have a calculator in their pocket, [00:20:00] and everything is electronic at the grocery store. And so, um, I think that there are new tools. I mean, this is a paradigm shift. But I think the other thing that's really important to think about is how, so for example, I have actually been doing a bunch of, um, interviews around using AI in software development.
Um, and the fact of the matter is, my main key takeaways are: it's not so much whether AI is gonna work for you or not; you actually have to really change the model by which you work, right? It's not saying, oh, AI, go do this. Rather than thinking, oh, I'm a programmer, and so the AI should just be able to do that the same way that I do,
no, it's a very different model. It's thinking, all right, where will AI be successful? What are my biggest pain points that I think could actually be really streamlined, given the time and energy? [00:21:00] Um, once we implement AI, all right, how do you break down the problem in a way that it's going to be able to decipher and respond to without going off the deep end, which can happen?
Uh, and so really, it's transforming how you operate, what your skill is as an engineer. It's not sitting there writing code; it's actually taking a step up and thinking more strategically: what kind of systems architecture do you need? Um, and giving enough direction so that pieces can be put into play.
And it's just a different skillset, to be able to harness it and utilize it. And so that's what I would like to start us thinking about for the youth: really teaching them how to make the most of it, right? Not just morph how we've always taught things onto these new tools, but actually reframe our approach to things.
And some of it's about, you know, [00:22:00] critical thinking skills, right? So you're not just believing whatever an AI tells you. I think there's a lot of different skills and components that go into it, but I think that's also my biggest realization through this research: it's not, do the same thing we've always done with a new tool.
It's change how we operate and how we execute against things.
Nichol: Great. And so, um, I wanna talk a little bit more about your research, and then also examples for our audience. You know, they're HR leaders, and they are first responders to this AI change. I was sharing with someone the other day, um, you know, when the Shopify CEO made the mandate that you can't hire
unless you can prove that AI can't do it. When the CEO came out with that, what many people who are not in HR don't know is that the [00:23:00] execution of that, from workforce planning to, you know, uh, task analysis, the execution of that mandate is almost entirely by HR, even though the CEO is talking to the managers and saying, you can't have another headcount unless you can prove that it can't be done by AI.
HR is the one that's on the ground, you know, uh, executing and manifesting that directive. Um, and as such, you know, HR leaders are first responders. We love examples and cases. So can you share some examples, whether through tEQuitable or elsewhere, where you've seen people find success with these complex areas like, you know, mediation and feedback or [00:24:00] conflict?
Um, some examples from your platform that you'd like to share?
Lisa: So in terms of our platform, to your point around practice, exercising, and stuff like that, I think that's a really, um, concrete use of it. But actually, taking a step back and thinking bigger picture: HR is boots on the ground in terms of trying to figure out transformation.
And so I think, you know, for them, the balance between the people impact and the business impact is so critical to keep in mind. And then I would also go back to this: it is gonna be about redefining skillsets, right? What is going to make a marketing person or a salesperson successful?
What can a human do that an AI can't? And you have to change the job description, you have to change the interview [00:25:00] process and the rubric against which you measure stuff. So I'm trying to think about concrete examples. I mean, one example: uh, again, we're out there interviewing and talking to a lot of CHROs or chief people officers, and
one of the big issues that people have is, if they are a global company, uh, it turns out their headquarters are in the US, so their language of business is English, but for many of the employees, their first language isn't English. And so the nuance that comes across when you're writing an email, or you're in Slack,
can be lost. And so there's a ton of miscommunication that happens between those different offices and those different people and those different cultures. Uh, and so I will say that there's, you know, a possibility to use AI to actually help bridge that gap and to [00:26:00] really, um, help people communicate better.
So that's a concrete example where you might use AI.
Nichol: Yeah. And what sort of principles, you know, we've talked lightly about some of them, but what are your top principles that you use, or that you would suggest, for people who are building AI or bringing AI into their organizations, and how they approach these sort of sensitive human issues?
Lisa: Yeah, I think there's still a lot of learning to do; let me just be really upfront about that. Um, I think one of the things that everybody should be thinking about is what's gonna happen to PII, personally identifiable information, right? So if you are in HR and you wanna use AI tools to better manage the [00:27:00] workforce, how much access
are you okay with a public AI having to your employees' personal information, right? Do you wanna create your own, right? Do you wanna have your own LLM? Do you wanna actually branch off and make sure that it's all data that you control? So I think that's one of the things that's really important to think about: where do you wanna draw that line?
I think, similarly, figuring out your tolerance for hallucination, um, but also, what's the word, non-determinism, right? And so understanding that if you ask an AI a question once, twice, and three times, you might get three different answers, and figuring out, right, what is your default tolerance around either straight-up [00:28:00] hallucination or
how you wanna communicate, and what's actually okay to say. Um, and then I think the key thing really is, it's all about training. It takes a lot of time and a lot of investment. And so basically, rather than just saying, hey, go do my job, you actually have to teach it, right? As
you might somebody who's just entering into HR, right? What would you say to them? What would you point them towards? What kind of documents, what kind of policies, what kind of principles? Right, the AI is naive, and so you have to teach it and inform it with the principles and values that you live by and that the organization lives by, right?
Because, you know, move fast and break things is one set of values. Uh, another might be, you know, user first, right? Making sure you're super-serving your customers. Those can be two very [00:29:00] different models, and you would wanna train the AI differently to execute against either one of those, if that makes sense.
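The non-determinism Lisa flags comes from how language models sample their next token. A toy sampler makes the knob concrete: at temperature 0 the same question always yields the same answer, while above 0, repeated asks can diverge. The logits here are made-up numbers purely for illustration:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Toy token sampler: temperature 0 -> deterministic argmax;
    higher temperature -> more varied (non-deterministic) picks."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax over temperature-scaled logits (shifted by the max for stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(range(len(logits)), weights=probs)[0]

# A model's (invented) preference over three candidate answers.
logits = [2.0, 1.5, 0.5]

rng = random.Random(42)
greedy = {sample_with_temperature(logits, 0, rng) for _ in range(100)}
sampled = {sample_with_temperature(logits, 1.0, rng) for _ in range(100)}
print(greedy)   # always index 0, the highest-logit answer
print(sampled)  # several different answers across the 100 asks
```

This is why "ask it three times, get three answers" is expected behavior, not a bug: the tolerance question is really about where you set (or whether you can set) that sampling knob for a given HR use case.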
Nichol: Yeah. Okay. So, final question, and it's always our closing question. Our audience has heard this conversation and they're interested in using AI responsibly in their organizations. What do they do on Monday? What do you recommend that they do on Monday?
Lisa: Yeah, I love that. I mean, for me, making sure that everything is actionable is so critical to everything that we do. Um, so I love that this is your last question. I think for me, it's going back to thinking about a new model of the way you operate. So what I personally would do depends on where you are in the spectrum, right?
How far you've already embraced it and how much you've already experimented. But from the basics, it's looking at your pain points, [00:30:00] picking one, and then evolving from there. And then really, again, try it, train the AI, try it, train the AI, right? Actually building up this muscle, right? Rinse and repeat.
And I think if you start really small, you'll start to understand how you can make it work for you. And so that's what I would suggest: pick one thing and keep it really tight.
Nichol: Mm-hmm. Well, you know, one of the things that I'm thinking after this conversation: my framework is that the top two survival skills in the age of AI, one is adaptation; the second one is ideation, which is iteration with creativity. Um, and then the top two leadership skills, and leadership is not just from the top,
are empowering change and empowering learning. And you know, as the [00:31:00] line rises of the types of tasks that AI can do, we're really gonna get into the people part. And the people who will really succeed are the ones who can navigate that conflict, you know, that just happens when people are around a goal, but also help the organization
Lisa: Yeah.
Nichol: navigate the conflict so they can decide what they're doing.
And do it. That's really gonna make the difference. And so I love what you're working on, and how you're thinking about and helping people find the right information to help them navigate the conflict that they're working through. And I also love the realization to not rely on models for that.
Um, and so I love that too. [00:32:00] That's it for this week's episode. A big thank-you to Lisa for sharing her experiences and insights with us. Before we wrap up, I encourage you to follow The AI+HI Project wherever you enjoy your podcasts.
And if you enjoyed today's episode, please take a moment to comment, leave a review, and help spread the word. Finally, you can find all of our episodes on our website at shrm.org/ai-hi. Thanks for joining the conversation, and we'll catch you next time.
Ad Read: Brightmine introduces AI Assist, a new gen AI-powered tool in the HR and Compliance Center, providing HR professionals with swift and reliable answers to complex compliance questions.
Request a quote at brightmine.com.
[00:33:00]