On his third day in office, President Trump issued an EO titled “Removing Barriers to American Leadership in AI,” replacing former President Biden’s 2023 EO on AI. So, what do these changes mean for the future of HR and business? Host Alex Alonso dives into the details with Rachel See, senior counsel at Seyfarth Shaw, and Ben Eubanks, chief research officer at Lighthouse Research. Together, they explore the intersection of competitive innovation and workforce protections. The conversation also sheds light on the administration’s stance, which argues its AI policies will empower American industry and result in job creation.
Subscribe to The AI+HI Project to get the latest episodes, expert insights, and additional resources delivered straight to your inbox: https://shrm.co/vvpyid
---
Explore SHRM’s all-new flagships. Content curated by experts. Created for you weekly. Each content journey features engaging podcasts, video, articles, and groundbreaking newsletters tailored to meet your unique needs in your organization and career. Learn More: https://shrm.co/coy63r
[00:00:00]
[00:00:05] Alex: Welcome to The AI+HI Project. I'm Alex Alonso, SHRM's Chief Data and Analytics Officer. Thanks for joining us today. This week we're discussing President Trump's recent executive orders regarding AI and what these changes mean for businesses and HR leaders. Our guests this week are Rachel See, Senior Counsel at Seyfarth Shaw and SHRM's
[00:00:28] policy partner, and Ben Eubanks, Chief Research Officer at Lighthouse Research and Advisory. Welcome to The AI+HI Project, Rachel and Ben.
[00:00:39] Ben: Glad to be here.
[00:00:40] Rachel: Thanks,
[00:00:41] Alex: Good to see you both. Uh, I, I want to give you a moment to kind of introduce yourselves and talk a little bit about what you do in your current roles.
[00:00:48] Uh, Rachel, why don't we start with you.
[00:00:50] Rachel: Sure. Um, hi, I'm Rachel See. I'm Senior Counsel, uh, at Seyfarth, and my practice here, uh, focuses on AI risk management, uh, [00:01:00] focusing on the, uh, labor and employment space. Before, uh, coming back to Seyfarth in 2023, yeah, 2023, uh, I was at the Equal Employment Opportunity Commission, uh, in a bunch of roles, from enforcement, uh, to management, and also working on AI policy for Commissioner Keith Sonderling. I, I am a former AI regulator, uh, helping employers, uh, navigate through regulations and, uh, all of the technical and, and social and legal issues with using AI for HR.
[00:01:31] Alex: Ben, how about you? Tell us about you.
[00:01:33] Ben: Absolutely. So I'm the Chief Research Officer for Lighthouse Research and Advisory, and before that I was actually an HR practitioner in the trenches. So I love bringing the practical side and the data side and trying to tell some stories about how I help all of our friends in HR do their work better.
[00:01:47] I've written several books. The one that's probably most relevant to this conversation is called Artificial Intelligence for HR. It was really the first book in the world written on that topic, bringing those two things together and trying to make it easier for employers to understand the [00:02:00] far-reaching promise and the far-reaching potential concerns around AI and how that fits into that bigger picture.
[00:02:06] Alex: And I can say that, having worked with both these individuals, uh, independently, their work is incredibly meaningful and really is pushing the conversation around AI, uh, especially as it relates to the world of employers. Uh, now I want to go ahead and pivot and let's get to the meat of what we're talking about here today.
[00:02:24] Uh, before we dive into President Trump's AI executive order, I, I want to take a moment to rewind and think back to October of 2023, when former President, uh, Biden introduced the first executive order ever focused on artificial intelligence. That EO specifically required the tech industry, uh, to develop safety and security measures and standards, but then also introduced consumer and worker protections and gave federal agencies a to-do list to oversee the progress of the kinds of technologies that are related to AI.
[00:02:57] My question for you, when thinking about [00:03:00] this: can you share why it was such a, a critical moment or directive at the time, and why was it needed? And I'll start first with you, Ben. What do you think?
[00:03:10] Ben: So at that time, if you can rewind back that far in your head, some of us have a hard time doing that, but, um, we were less than a year from the availability of ChatGPT in the market, and love it or hate it, that AI conversation suddenly got a tremendous shove forward. I've been speaking and sharing on this topic to HR audiences since 2018, and for many of those early years it was sort of theoretical, out there, futuristic, and suddenly that made it a today thing for many of them. They could go and try these things out, and not just HR, but any profession, any person, any individual could start doing that same thing. And I believe that's what really drove this: the government started seeing, like many employers were seeing, the, the opportunity for this tool to be used for good or for ill. And they started thinking about what sort of protections are we gonna need for the workers and the work that they're doing? [00:04:00] What are we gonna need around preventing bias in the workplace and, and other decisions that can be made with this?
[00:04:04] And so they started thinking about that bigger picture and really leaning in on how can we make sure we're doing the right things: advancing technology, but also protecting the people who are using it.
[00:04:13] Alex: I appreciate that perspective. Uh, Rachel, I know that this was a particularly personal, uh, kind of experience for you with this executive order, having been part of various administrations and focusing on AI. I remember we were actually together the day that it was launched, or, or it was signed, I should say.
[00:04:31] And, uh, I, I remember hearing about your interpretation of all the various pieces, 'cause I also remember it being one of the longest executive orders I've ever read in my entire life. Rachel, why was it so critical at that point in time?
[00:04:44] Rachel: Well, I, you know, I, I think the, the first thing that it accomplished was it got people talking and it got people thinking about both the benefits and the risks associated with AI. You know, Alex, can I, well, actually, go back to your, your [00:05:00] first question here, because President Trump in his first administration did have an executive order on AI. There, there were some, you know, foundational things that, that executive order
[00:05:12] started the federal government doing, started, you know, getting agencies, requiring agencies to think about how they were using AI, how they wanted to use AI, and there were several themes that are picked up in what we see here in the second Trump administration,
[00:05:26] Alex: Mm-hmm.
[00:05:27] Rachel: and what President Biden's executive order did, you know, again, building off of that, we have the government building off of
[00:05:34] prior work, until it doesn't. Um, but when, you know, President Biden issued his executive order, you know, I think a lot of us were calling it, well, it, it's a whole-of-government order, uh, and that's why it was so long, because it, it has all of these little tidbits for all of the departments, you know, as many departments as you can think of, even, even some that you wouldn't expect would have a lot to do with AI. And you know, I, I think there was [00:06:00] this real charge, or this perception that it was a real charge, to all of these enforcement agencies to say, well, think about how we can use our existing authority, uh, or proposals for using our existing authority, to regulate AI, to, to address some of these perceived harms that, that the president was ordering. On the other hand, while doing all of that risk-based work, all of that regulatory work, it was also saying to the federal government, we wanna embrace AI and we wanna, you know, help America be better at AI and make all of these investments in AI. So slow down, watch out, here's some regulation, but speed up also.
[00:06:46] And let's take advantage of, of all of the positives of AI. It, it's trying to do both. And that, that's a complicated thing to do.
[00:06:55] Alex: You know, I, I appreciate that, and that's a healthy revision, 'cause I do recall, you're absolutely right, [00:07:00] President Trump did in fact have the first executive order, even with the exploration of AI and exploring what it is; that, that is absolutely critical. Uh, and more importantly, I do see the guardrails that you've kind of, uh, put around this conversation in, in a critical way.
[00:07:14] I, I think it's important to, to remember that it's both exploration and, at the same time, ensuring that we're being mindful of how we use it. Uh, so, uh, one thing I want to jump to next is present day in particular, and I want to think a little bit about what it's like, uh, as we look at what it is today and what the executive orders are doing, and think about this:
[00:07:35] We know that on his first day in office, uh, President Trump rescinded the initial Biden executive order on AI, and then he issued his own that was really focused primarily on, uh, thinking about what are some of the key tenets, the objectives, and more importantly, the key components around what it is that AI will be used for and/or explored for, and how we build an infrastructure around [00:08:00] that.
[00:08:00] Uh, I, I'm gonna switch things up here a little bit and ask specifically, how do you interpret the recent AI orders that were put forth by President Trump? And what, what do you take away from them? Rachel, I'm gonna start with you.
[00:08:13] Rachel: So the, the day one executive order fulfills President Trump's campaign promise to revoke President Biden's order. So, so that order is gone. All of the underlying agency actions, and, and some of, you know, some of the things that the federal government is doing with its own use of AI, that's being reconsidered.
[00:08:38] There are, there are deadlines in that first executive order. Um, and, and we have an end-of-March, uh, deadline that, that's coming up, um, for the, for the federal government, for OMB, uh, to take a look at some of these Biden policies and see what explicitly needs to be revoked, uh, maintained, or, or [00:09:00] revised. So, so there's, you know, the, the fundamental policy declarations that President Biden made, um, those are gone. We have that replaced, uh, with President Trump's vision, not just in the executive order but in, you know, what else we've heard from the administration, and we'll see how that translates into policy. We've had OSTP, the Office of Science and Technology Policy, requesting input, and thousands of stakeholders have submitted their input on where, you know, they're suggesting, where we are suggesting, uh, the administration should be focusing, um, its, its AI strategy, its AI priorities. Um, there's been things that agencies have quickly revoked, uh, that were done under President Biden's EO. Um, and, and there's other things that, that quietly remain because they're, they're foundational. They're, they're foundational to how the government itself is trying to use AI or [00:10:00] purchase AI. And we'll see that tweaked. We'll, we'll absolutely see that tweaked. Uh, but I don't think we'll see all of it go away, because it can't. It's, it's part of how government works and has to work, I think.
[00:10:13] Alex: Ben, same question to you.
[00:10:15] Ben: Yeah. It is no surprise to anyone that works in the world of HR that a Democratic administration's gonna put things in place that are pro-worker, right? Trying to think about the workforce, protecting them, preventing bias, all those sorts of things. And so on the shift to, to the new executive order, Rachel covered some really great things in that. The, the only thing I'll say is, Rachel mentioned that, that first executive order; if you actually go look at some of the wording in that, it has words in there that might surprise you to hear, given how brash this shift has felt with the new administration, words like responsible, transparent, um, accountable, respectful. Those are words that were in the very first Trump executive order around AI. And those things, I believe, are still [00:11:00] part of what this, this administration wants that to be. But they've really shifted that a little bit away from that pro-worker piece and, uh, the protective piece to more of a how can we be innovative?
[00:11:10] How can we move faster, how can we move quicker than other countries can, because this can be, this needs to be something we're focusing on as a, as a country, as a set of industries. And so that's where they're really leaning. And it's no surprise to anyone, again, because of Trump's pro-business background.
[00:11:26] That's his focus. That's where he came from. That's where he spent a lot of his career. And so he's now trying to empower those companies. The, the challenge that I see with that is that we often are overly optimistic in what we expect companies to do for the best interest of everyone. And what you find is sometimes they do that and sometimes they do things that are in their own best interests.
[00:11:45] And we're seeing some of those destructive behaviors already from the, the companies that are out there right now in the market developing ai. Some of those destructive things are already happening.
[00:11:54] Alex: So it's fascinating you raised that, because I, I do want to kind of do a little bit of a, a follow-up there and talk a [00:12:00] little bit about how you see that actually shaping or changing HR strategy, right? You, you hit upon the notion that, uh, obviously, liberals versus conservatives, you know, liberals tend to be pro-worker,
[00:12:12] conservatives tend to be pro-employer, but at the end of the day we, we see shifts in, in HR strategy that kind of materialize from, uh, some of what we're talking about. How do you anticipate that the big vision for what we're seeing around AI and AI development actually changes HR strategies, whether it be hiring for AI-related roles, whether it be creating new AI roles that we've never even thought about? What, what do you see coming from that, Ben?
[00:12:38] Ben: So I'm actually curious to hear what Rachel says around some of that, because I know there's been a conversation around what policies you need, things like that to back that up and protect that from the employer perspective. I will say that the HR professionals and leaders that I talk to are excited about the possibilities.
[00:12:53] They're, they're looking at ways to use these tools. They're trying to be smarter at that. But at the same time, the thing that concerns me the most, frankly, [00:13:00] is the story that isn't being told nearly as much about some of the new data coming out around these tools affecting people's ability to be creative, their ability to be innovative, data from Microsoft, the University of Toronto, and other organizations that's starting to show those things.
[00:13:12] And so we have this picture in our head of AI being this positive outcome, this, this journey towards something better where we're spending less time doing that mundane stuff that none of us wanna do anyway, and more time doing the things that bring us to life and bring us joy. But the, the concern that I have is the, the research is showing that, that offloading those things is hurting us in ways we don't yet fully understand.
[00:13:35] And we're not even sure what that means long term yet. It's just hard to, to wrap our heads around that. From an HR perspective specifically, the types of jobs and roles that I'm seeing start to emerge there are things like people really leaning in on how do we create better experiences? Let's focus on the experience design and not just on processing paperwork.
[00:13:53] How can we think about where tools can be used safely, where we can automate some of the work our people are doing, and how do we make sure that we're [00:14:00] protecting the jobs of the people who are doing some of these other things, right? There are companies right now, as we speak, that are replacing different types of roles, and there's a lot of data coming out around entry-level jobs that are steadily being consumed by some of the new generative AI tools. And so any HR leader that hasn't started thinking about those sorts of things yet needs to begin, needs to start having those conversations at an executive level, because those decisions are being made in many organizations right now. And if you're not in the room, who's gonna advocate for the people?
[00:14:28] Rachel: I, I think it, it's a mistake to, to think or to believe that, that the Trump administration's AI policies are not pro-worker. Vice President Vance and President Trump, um, have, have very explicitly said they want to be pro-worker.
[00:14:49] Um, and on AI, Vice President Vance has, has very much said the Trump administration will maintain a pro-worker [00:15:00] growth path for AI. Their vision on AI policy and regulation is that they're doing things, that their AI policy is pro-worker, because their vision is that this is about job creation, about empowering American industry, um, about making America, you know, leaders in, in AI, and, you know, to use the vice president's own words, um, to make us more productive, more prosperous, and more free. So, so, as with all things political, uh, you know, I think there can be agreement that yes, this is the goal, um, and there may be, there may be debates, there may be differing visions about how we get to that goal. Um, you know, and, and I think that if you look at some of the labor policies and, and some of the goals behind deregulation, or the, you know, the goals behind, you know, these differing regulatory visions, [00:16:00] um, I think, I think the Trump administration is trying to be pro-worker in its AI policies.
[00:16:05] Alex: You know, I, I really appreciate that perspective, Rachel, and, and Ben, uh, yours too. I, I think about our own thought leadership here at SHRM, and I see the, the two sides of the coin, right? We see in particular that there are advantages that we're, we're experiencing as a result of AI, and, and some of which deal specifically with better experiences for workers, new job growth.
[00:16:26] Uh, Ben, you referenced the, the notion of, uh, experiences as a whole, and I'm seeing that the, the number one AI-based job that's growing is people who are creating experiences, either for customers or employees, take your pick, in, in the, in the vein of HR. And Rachel, I, I think specifically to some of the things that you were alluding to around being pro-worker and how that really does put an, an onus on AI use to foster personal growth, learning and development opportunities, foster different productivity, different impact in people's lives, and, and changing [00:17:00] career growth.
[00:17:01] I guess my question to you both as a follow-up is: how might some of these varying kinds of advantages shift or change as we see the executive order kind of, uh, materialize and, and become more operational in the way that we, we look at it, both within the government and beyond? And, uh, I'll, I'll start with you, Rachel, since I started with Ben last time.
[00:17:26] Rachel: Fair. Um, you know, Alex, I, I think this is less about what the executive order says, because the, you know, the executive order is laying out, um, a vision for policy, and, and more about what the government is going to do and, and what their regulatory, uh, perspective is going to be. You know, I think the Trump administration perspective here is that AI is a, a critical issue of global competitiveness. And, [00:18:00] and so if, if we're trying to, you know, make America first in AI, that's looking at global competition with China and, and seeing it as a national security issue against China and, and whoever other global adversaries may be. It's also specifically looking at, you know, the European model of regulation and, and seeing, you know, the, the sharp criticisms that, that the vice president made, you know, of the European model of AI regulation and what that means for hampering R&D, hampering adoption, slowing down adoption. So, you know, when, when you look at the vice president's remarks in Paris in February, I think that's really laying out the vision of the administration for how they're going to get to the results they want of empowering American industry. And [00:19:00] that's hands-off regulation. That's, we want to embrace technology, and we are rejecting the European model of regulation here. What does that mean for HR professionals?
[00:19:13] Does that mean that you can just do whatever? Well, I don't think so. I think that, understanding that if you are moving to a self-regulatory environment, you still want AI that is safe, reliable, and trustworthy. You, you're just not going to have the government giving you the, these very long documents that may or may not spell out what you have to do to comply, that might not get you fully to safe, trustworthy, and reliable AI. You, you still have, you know, all of these obligations, and if you want it to work, you're gonna have to, to wrestle with these issues also. So, so it doesn't end the AI safety conversation, it, it just shifts the safety conversation away from here's what the government is [00:20:00] requiring me to do, to, well, well, it, it's now on industry.
[00:20:04] It's now clearly, more clearly on me, and the government's not gonna save us from this. The government's not gonna tell us exactly what to do via regulation.
[00:20:13] Alex: So, you know, uh, it's funny, here I was hoping that there was a world one day where HR would get to willy-nilly do whatever they wanted. Uh, clearly that's not gonna happen anytime soon, though. Ben, uh, how about you? What do you, what do you see as sort of the, the advantages, the things that are, that are popping up that are really,
[00:20:31] uh, kind of unlocked as part of the potential behind some of these, these recent policy shifts?
[00:20:37] Ben: So really, it's interesting that we're talking about these things, but a lot of the growth in the AI industry has happened in the last few years, prior to even the Trump administration and some of these changes with
[00:20:48] Alex: Mm-hmm.
[00:20:49] Ben: the new executive order and the focus of that. So even with, as we talked about a minute ago, some of those policies and things, the industry is, is growing rapidly, and the capabilities, the AI technologies, are honestly [00:21:00] mind-blowing in some cases compared to what they were even a few years ago. And so that innovation, the expectation is it's gonna continue. Rachel used the word deregulation a minute ago, and if you go back and you look at the history of deregulation in the U.S., you see a couple things happening when the government says, we're gonna step back a little bit, we're gonna remove some of the, some of the, the oversight here, the protections here.
[00:21:21] We're gonna remove some of the, the overhead or red tape. You see innovation spike. Usually you see things like, uh, better competition in the space. You see those sorts of outcomes there. You see industry growth. Those are all things we're hoping for, and that's what you sort of see in the, the aspirations of how the administration's talking about this. At the same time, I would be remiss if I didn't say we also see things we don't expect. They're called unintended consequences because we didn't intend them. So you see things like, um, when energy was deregulated, it created the opportunity for a company with, with unethical leadership like Enron to step in and take advantage of how things were done in the new, in the [00:22:00] new model with that deregulation and make millions of dollars in profits, basically gouging people. So those sorts of things are potential outcomes, and I sound like a, a negative raincloud, doom cloud kind of person. I do have a lot of optimism here. I do believe that AI has the opportunity to change work for the better, to change lives for the better, to help HR leaders reclaim a little bit of that, that sanity, 'cause they're, they're facing burnout and stress at higher levels, and that's across every single worker. I hope for that. I'm also trying to be realistic about the, the motives and the motivations of people who are out there that are developing these tools. As I, I mentioned earlier, they're doing some destructive things.
[00:22:37] I'll give you an example of that. If you are a person who creates graphic art, for example, and you have a unique style that sets you apart, that helps you to, to earn a living, to feed your family, AI tools are trained on copyrighted material all the time, and your art can be used to train a model, and someone can create a cheap AI knockoff of the things that you used to create with your hands, and you get no royalty, you get no benefit from that. And those companies are making so much money off of selling that software that if you try to take them to court to fight back, you don't have enough money to fight them off, 'cause they're making so much on the other end. And so, like, those sorts of things are real right now and are happening. And that's a, an example of how these companies, the, the profit motive drives 'em to do things that may even be harmful or, or, uh, damaging to the individuals who are doing the work.
[00:23:23] And I wanna make sure those things are, are not lost in this. Again, I don't wanna stray too far from the HR conversation, but just to give an example of what's happening out there right now and what's possible to, to be used, uh, negatively when it comes to AI, not just bias in employment and all the other things we think about.
[00:23:38] Alex: Yeah. And, and, you know, you raise a great point. I think about it: we have a, all three of us have a mutual colleague, I suspect, from Fordham Law School, uh, uh, Professor Shlomit Yanisky-Ravid, right? And she talks about how, in most cases, you see the first three or four headlines related to any kind of issue around AI, especially in an employment setting.
[00:23:57] But there's really 29 major issues [00:24:00] that we have to deal with, right? You even referenced, for instance, cognitive decline, the death of STEM skills, the growth of other parts of our, our neurocognition, and how that changes and things that we have to think about as, as, as humans, not just employers, but also as individual workers and future generations.
[00:24:17] She, uh, she always tells the fantastic, uh, story about how she was trying to ask some, some younger students what it was like to, to experience a payphone for the first time. And many of them said, I don't even understand what a payphone is. I wouldn't know what to do with it if I, if I walked up to it.
[00:24:33] And it's the same thing that you might experience here when looking at some of the unintended consequences you laid out, uh, when thinking about the, the onslaught of AI as, as we look at it today, right? That, that's a fair point. Uh, I, I kind of want to jump ahead a little bit and start talking about impacts, both relatively recent and then those that are actually, uh, more, uh, longer term in your minds.
[00:24:56] Uh, one of the things that strikes me is that in the first [00:25:00] eight weeks here, since we've had the, uh, the issuance of, of President Trump's, uh, executive order, we're seeing employers react, whether it's through immediate impact or through, uh, just general analysis, if you will.
[00:25:15] Rachel, what are you seeing from employers when it comes to that? What are, what are you seeing employers actually do, just in general reaction in the immediate term?
[00:25:23] Rachel: Well, you know, I, I think there's all of these executive orders that, that have, you know, gotten employers, you know, really struggling to understand what's going on, and the, the pace of change and the, and the pace of, you know, legal challenges and sorting all of that out, um, has been really intense. And we haven't gotten a lot of federal AI, uh, policy yet. You know, we have the executive order, we have the vice president's speech, we, you know, we have some early, you know, indications from the government, you know, stay tuned. I, I, I think we'll see stuff coming outta the [00:26:00] administration, you know, soon. Um, where, you know, where I think employers are, are really trying to understand what's going on, you know, with AI regulation is understanding the interaction between whatever the federal government does or does not do, and states, and, and, you know, whether it's California, Colorado, Texas, Virginia, you know, all of these state legislatures and, and regulatory agencies trying to say, well, if the federal government's not gonna do it, we are gonna enforce state law.
[00:26:30] We're gonna pass new state laws, governing specifically, putting in requirements, putting the brakes on AI in HR, addressing some of these risks, trying to address some of these risks, you know, as well as the international regulatory environment. So, so that complexity, you know, I, you know, I think whatever the, the deregulatory philosophy of the Trump administration may be, may eventually be, or communicated through policy [00:27:00] and, and other things, you know, I think state and international, uh, you know, really makes employers think and, and pause and, and try to understand how all of this applies.
[00:27:10] Alex: Ben, how about you? What are you seeing as far as a big impact so far?
[00:27:14] Ben: I, in my head, I picture the person running around with their hair on fire trying to figure out what to
[00:27:18] Alex: Hmm.
[00:27:18] Ben: do next, sort of a little bit. Um, as Rachel said, a lot of, a lot of the different companies that I'm talking to are saying, well, if it's not coming here, then the state piece is a critical component. I'm glad you touched on that, Rachel,
[00:27:29] 'cause that was one of the things I, I had meant to mention earlier: even when the government was a little bit unsure about how they were gonna make these things go for different, different areas or protect certain rights of the workforce, different states, you know, New York City passed a law, things like that, to try to put their own stamp on that. If they were tired of waiting, they were gonna take action on behalf of the people in those, in those regions. Um, overall, as Rachel said a minute ago, it's a real, really great callout that it's still a bit of a waiting game. We're getting little dribs and drabs, little snippets of what's, what's possible, their vision of what [00:28:00] the future looks like, but until they really put down exactly what they're expecting and we start hearing back about how things are gonna start shifting,
[00:28:07] it's sort of a waiting game. And so if I was leading HR for an employer right now, and I was listening to this conversation today, I would be having conversations with my leadership about what sort of things may be coming down the line. I would be talking to my leadership about what sort of AI tools we should or should not be using.
[00:28:24] Are we having people use these right now? What does that mean for their jobs? Are we thinking about the future of that? I would just start having some exploratory discussions around that and not wait until something happens, because by the time it happens, it's really too late in some cases.
[00:28:38] So I would start encouraging them just to really start leaning into that. I was reading a piece just this morning from McKinsey, and it was, look, it's, uh, quantifying what percentage of HR work, even if we're gonna get really specific and focused on this profession, the data show that up to 75% of the work that an HR person does can be automated right now with AI tools. And so we [00:29:00] need to be having these discussions, 'cause if not, we're gonna end up in a place where we don't want to be ultimately, and I want us to do this intentionally, with, with real focus, with, with effort, not, not happenstance, because we know that that tends towards chaos.
[00:29:12] Alex: So I'm gonna pose one question, and it's completely unplanned, but I'm gonna ask you to do it anyway. Uh, that's my calling card. I have an unplanned question every time. One of the things that I'll ask each of you is, in one word, how would you describe the posture that a CHRO should be taking towards AI in their organization?
[00:29:30] In one word. And Ben, uh, I, I know you were just talking, but I'm gonna throw it to you first.
[00:29:34] Ben: First one word. Curious.
[00:29:36] Alex: All right, Rachel.
[00:29:38] Rachel: Uh, aware.
[00:29:40] Alex: All right. I like it. I really appreciate that. Believe it or not, folks, I am sorry to say that our time here today is done. We've actually reached the end of our program, and that's gonna do it today for this week's episode.
[00:29:53] A big thank you to Rachel and Ben for sharing their expertise and deep insights with us. Before we say goodbye, I [00:30:00] encourage you to follow The AI+HI Project wherever you enjoy your podcasts. If you enjoyed today's episode, please take a moment to comment, leave a review, et cetera, anything that indicates how much you enjoyed the episode.
[00:30:12] Finally, you can find all our episodes on our website at SHRM dot org slash aihi. Thanks for joining the conversation and we'll see you next time.