Explore the intersection of AI, deepfakes, and workplace dynamics with host Nichol Bradford and machine learning expert, business leader, and entrepreneur Tim Hwang. Senior leaders and HR executives can learn how to prepare for the challenges of AI-driven disinformation, build trust in an era of synthetic media, and develop agile response strategies. Discover why the future of workplace integrity may depend less on technology detection and more on fostering human connections and open communication across your organization.
Rate/review AI+HI on Apple Podcasts and Spotify.
Nichol Bradford:
Welcome to the AI+HI Project, a SHRM podcast. I'm your host, Nichol Bradford, SHRM's Executive-in-Residence. Thanks for joining us. Each week we'll sit down with experts to provide strategic insights, practical tips, and actionable strategies to innovate a future that blends human ingenuity with advanced technology.
This week we're talking about deepfakes and the future of disinformation. You may have recently seen headlines about deepfakes: incredibly realistic videos, images, and audio made possible by some of the recent breakthroughs in the field of artificial intelligence. These technologies have raised worries that the problems of fake news and online disinformation will fuel workplace discord by eroding our collective ability to distinguish fact from fiction.
How much should HR leaders worry about the impact of AI-driven disinformation on the state of workplace civility? How should executives approach addressing the problems presented by these cutting-edge technologies? We dive into how modern artificial intelligence works, talk about the impact of misinformation on workplace civility, and discuss how leaders can keep deepfakes from becoming a distraction in the workplace. Joining us today, we have machine learning expert, business leader, and entrepreneur Tim Hwang to talk about this subject in depth. Welcome to the AI+HI Project, Tim.
Tim Hwang:
Yeah, thanks for having me on the show, Nichol.
Nichol Bradford:
So what is the headline that people should know about deepfakes?
Tim Hwang:
Yeah, absolutely. So deepfakes, this new technology, is really made possible by these major advances in artificial intelligence. And I think one of the most interesting things about it is that we now have the ability to very cheaply create very, very high-realism simulations of people's voices, their faces, how they talk, video, audio, text, you name it. In the past few years, this has been mostly a technology in the lab, but most recently we've seen some major changes in how expensive it is to make these deepfakes and how good, how high-quality, they are. And so I think it's only a matter of time before these types of deepfakes start entering the workplace in a way that they really haven't in the past.
Nichol Bradford:
So I'm really curious, what made you interested in this subject? Out of all of the things that you could do with your time and interest, what is it about misinformation and deepfakes that really captures your passion?
Tim Hwang:
Yeah, for sure. One of the things that's really of interest to me about this topic is that I like to see it in the broad scope of the history of technology. People often forget that when photography first started becoming a thing, people were like, "Well, we have no idea what's going to be real and what's not anymore, right? Because you can just manipulate a photograph, and then what happens?" So we've been here before. The deepfakes of today are the photographs of yesteryear. And so I'm interested in how society, businesses, and business leaders come to deal with new types of media, and how these types of things become either normalized or not. We live in a world right now where I think people know that photographs can be manipulated; that took a long time. And so the sociological process by which this happens is very compelling to me, and I'm very interested in it.
Nichol Bradford:
Oh, that's interesting. So are you imagining that over time we'll become socialized to AI deepfakes as well?
Tim Hwang:
Yeah, a little bit. I always joke about the first time a billboard was put up somewhere, and I imagine those billboards worked for a period of time. I can only imagine that someone driving past a billboard in the 1940s and '50s was like, "I should really start smoking cigarettes, because there's that big head on that board telling me to smoke cigarettes." So I think society has to metabolize forms of persuasion. Over time, billboards have just become something you see on the highway without even thinking about them. I've seen probably three or four today; I don't remember any of them. And I think the same thing is going to happen in the deepfake case. The question is just how long it's going to take, and what's going to happen as society tries to figure out what to do about something where most people still aren't fully aware that the technology has come so far, so quickly.
Nichol Bradford:
How did you first become aware of the potential impact of AI-driven disinformation, specifically deepfakes, on workplace civility? And should HR leaders be particularly concerned about these implications? How? Why? What should they be thinking?
Tim Hwang:
Yeah, so the example I always give is cell phones, right? If you've seen the movie Wall Street from the 1980s, there's Gordon Gekko and he's got the cell phone, this big chunky thing. When they put it into that movie, it was because cell phones were a totally expensive, unique kind of technology, where you're like, "Oh, you have to be a rich private-equity guy to have a cell phone." Then the cost came down, the quality came up, and now we live in a world where, I was watching the Taylor Swift concert film recently, and everybody is there with their cell phone. It's just a commodity thing. And I think we can think about deepfakes as going through exactly the same transition.
What I mean by that is, if you look at the period of 2017, 2018, 2019, pulling off a good deepfake was expensive. You required specific kinds of talent. It was an expensive thing to do. And so you mostly saw it in the context of state-run propaganda attempting to manipulate a national election. It was very, very specific. That's the Gordon Gekko era. And I think what's most interesting, and one of the reasons I'm excited to talk about this with you, is that now that the costs have come down so much, you can expect that, almost like the cell phone, everybody's just got one, they know how to use it, they know how to take photos with it. Deepfakes are going to be something that everybody uses, for both good and for ill.
And this is what you see happening. There are all these technologies now where you go on and say, "I want to generate an image." You just type in what you want and the image appears. And so for an HR leader in the workplace, you have to believe that the same dynamics we saw in national politics and disinformation five or six years ago are going to emerge in the workplace. It'll be a microcosm of what we saw happening at the national level, now happening in a small mid-sized business that has a hundred people. You will see deepfakes play a role in that place.
Nichol Bradford:
And so for the modern workplace, what do you think will be the scale and scope? With civility, one of the things that we've seen in our research is that many of the uncivil experiences people have happen at work. So for deepfakes, what do you think will be the scale and scope in the modern workplace? And are there any industries or sectors that you think are going to be more susceptible to this being prevalent at work?
Tim Hwang:
Yeah, definitely. So there are a lot of aspects to that problem, and I think we can unpack it in a few different directions. But there are two things that I've been thinking a lot about. The first is the question you have to ask about deepfakes: okay, well, people can just be uncivil to one another in a workplace. People can just lie in a workplace; it happens all the time. So what's so new about deepfakes? What I'd offer is that deepfakes are video. It's a visual medium, which can be really striking. So it's the difference between "Oh, George was super rowdy and very unprofessional at the holiday party" and "I have a video that appears to depict that." And so I think the impact on civility is going to be a lot greater. It's a force multiplier on the kind of incivility we've been seeing in the workplace in general. And in part, it's because video, as a visual medium, is just so striking. It's almost impossible to eliminate the impression a piece of media leaves, even if you know that it's fake.
The second thing I'll point out is that we can think a little bit about the impact of age and generational differences in all this as well. We also live in a world where people use Instagram filters, and we're increasingly okay with a world where a lot of the media we see online is totally generated. And so I actually wonder whether workplaces that have this difference between older and younger generations in a very striking way will feel a bigger impact from deepfakes, in part just because the norms and expectations around media are so uneven that it will play out in a much more dramatic way.
Nichol Bradford:
Given that we have many people in our audience who have multiple generations in their workplaces, can you explain in layman's terms how modern artificial intelligence and deepfakes really create this false content?
Tim Hwang:
Sure.
Nichol Bradford:
We've talked a little bit about how we think differently or how we're so impacted by video, but in layman's terms, how does it actually work?
Tim Hwang:
For sure. And I think this is such a good question, because it's incredibly important that people understand what's going on under the hood of these technologies. It's easy to read the headlines and be like, "Oh my God, the world is ending." But I think we can make much better decisions if we have a working knowledge of how modern machine learning works. And you're probably saying, "Okay, well, Tim, how are you going to explain that?" It turns out there's a lot of jargon in this space, but machine learning is actually incredibly simple to understand. The way to think about machine learning is that it's how a machine learns through examples. Back in the day, if you wanted to, for example, program a computer to tell the difference between a photo of a cat and a photo of a dog, you'd get a really smart person and they'd program a bunch of rules into the computer.
In the modern era of machine learning, you basically show the computer a bunch of examples of what you want it to do. So rather than having someone go in and say, "I think dogs have pointy ears and cats have round ears; I'm going to program that into the system," what you do is show the computer a lot of pictures and ask, "Computer, do you think this is a picture of a cat or a dog?" And the computer will say, "I think it's a picture of a cat." And you can say, "That's correct," or, "That's incorrect." It turns out that if you do that enough times, giving the computer reinforcement, it learns how to distinguish between, say, photos of cats and dogs.
Now, one of the most intriguing things is that once you've taught a computer how to do that, it has an understanding of what a dog looks like and what a cat looks like. It needs to, in order to be able to classify these types of images. And so what we're now seeing in the era of generative artificial intelligence is that we're almost running the process in reverse. Now that the system has learned what a cat looks like, you can say, "Okay, computer, give me lots and lots of photos of cats that have never existed before." And that's what you're seeing in a deepfake: it's almost a natural artifact of what we've been teaching computers to do for the last few years.
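To make "learning through examples" concrete, here is a minimal, purely illustrative sketch in PyTorch, not something from the episode: a toy cat-vs-dog classifier that is shown labeled photos, told whether each guess was right, and nudged accordingly. The image size, network shape, and random stand-in data are all assumptions for illustration.

```python
# A toy "learning through examples" sketch in PyTorch (illustrative only).
# The 64x64 image size, tiny network, and random stand-in data are
# assumptions for demonstration, not anything discussed in the episode.
import torch
import torch.nn as nn

# A tiny classifier: flatten a 64x64 RGB image, predict cat vs. dog.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64 * 3, 128),
    nn.ReLU(),
    nn.Linear(128, 2),  # two outputs: "cat" and "dog"
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def training_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """Show the model examples, tell it right/wrong, nudge its weights."""
    optimizer.zero_grad()
    logits = model(images)          # the model's guesses
    loss = loss_fn(logits, labels)  # the "correct"/"incorrect" feedback
    loss.backward()                 # work out how to adjust the weights
    optimizer.step()                # adjust them a little
    return loss.item()

# One step on random tensors standing in for labeled photos; repeating
# this over many real examples is the whole trick: no hand-written rules
# about pointy ears, just feedback.
photos = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))  # 0 = cat, 1 = dog
print(training_step(photos, labels))
```

Generative systems like the ones behind deepfakes effectively run this learned understanding in reverse, sampling new images rather than classifying existing ones.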
Nichol Bradford:
That's a great description of what's happening with generative AI. And what are some of the early warning signs or indicators that an organization should use to identify a disinformation campaign happening on-site, in their workplace? Is there any way to know in advance?
Tim Hwang:
Yeah, totally. I think this is actually one of the most interesting areas of HR at the moment. The reason is that, as I mentioned a little bit earlier, people have already been battling deepfakes for the last five or six years, but they've been doing it very much in the context of a journalistic newsroom trying to protect election integrity. What's very interesting is that those people have learned a lot about how you fight disinformation campaigns, how you detect deepfakes, and how you operate effectively in a world where your media environment may be fundamentally distorted by these technologies. And so one of the things I've been thinking about is that HR leaders can actually learn a lot from what journalists have learned on the front lines, fighting this in the election setting, for instance, over the last few years.
I'll just throw out a couple of things that I think journalists have learned really well. The first one is the degree to which you cannot fight this stuff completely by hand. There's in some ways just too much disinformation, and it can be really, really hard to tell, just by looking at an image, whether or not it's real. And so what you've seen with journalists is that they've increasingly had to build a tech stack to deal with and identify media manipulation: how do we quickly tell that an image has been tampered with? I think those types of technologies are going to increasingly have to come into the workplace as employees effectively start running their own disinformation campaigns out of the box. It's almost like we used to be dealing with Russian disinformation, and now we're dealing with George's disinformation in the workplace.
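As a hedged illustration of one small layer in such a tech stack, not a tool journalists actually deploy and emphatically not a deepfake detector, here is a Python sketch using Pillow to surface weak metadata signals for human triage. The flagging heuristics and file name are assumptions, and missing or edited EXIF data proves nothing on its own.

```python
# A hedged sketch of one small layer in a manipulation-triage tech stack.
# The heuristics and file name below are illustrative assumptions; absent
# or edited EXIF metadata is a weak signal for review, never proof.
from PIL import Image

SOFTWARE_TAG = 0x0131  # standard EXIF/TIFF tag recording editing software

def triage_image(path: str) -> list[str]:
    """Return human-readable flags suggesting an image deserves review."""
    flags = []
    exif = Image.open(path).getexif()
    if len(exif) == 0:
        flags.append("no EXIF metadata (often stripped by editing tools)")
    software = exif.get(SOFTWARE_TAG)
    if software:
        flags.append(f"last processed with: {software}")
    return flags

# Example (hypothetical file): flags feed a human reviewer, not a verdict.
# print(triage_image("holiday_party_frame.jpg"))
```

A real newsroom stack would layer far stronger signals on top of a basic check like this; the sketch only shows the triage pattern of automating the first pass and escalating to humans.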
The second thing I'll point out, which is really important, is that there's been a tendency for platforms and journalistic groups to basically say, "Okay, well, maybe the way we fight disinformation is to censor it as much as possible." This is almost a supply-side attempt to fight disinformation, and it has not worked at all. Famously, there's this thing called the Streisand effect: if you say, "Please, please, please don't share this thing," everybody goes, "Oh, what's that? I really want to share it." I think those dynamics are going to play out in the workplace as well. And one bit of advice I'd offer from the front lines is that HR leaders need to think about fighting this as a conversation more than they think about fighting it through censorship, because censorship, if anything, is going to exacerbate a lot of these issues.
Nichol Bradford:
That's a really good point about conversation, because one of the things that we think a lot about is trust in the workplace. So what have you seen with deepfakes in other areas, like journalism and the broader public sphere, that can teach us how deepfakes will affect trust in the workplace? And are these conversations a way to counter that?
Tim Hwang:
Totally, yeah. I mean, the way I see it, deepfakes aren't necessarily the problem; they're a symptom of a much larger problem. One way I diagnose it is that overall, if you look at the polling, trust in institutions has declined pretty steeply over the last few decades. And I think that has a huge impact on our ability to deal with not just deepfakes but disinformation in general. It's very hard now for a professional fact-finding institution like journalism to say, "Actually, we've done the work and this is not true," and for everybody to respond, "I respect that decision. You are a very credible institution; we're going to stop." I think the same thing is playing out in the workplace, which is that we have to think about what trust in HR is overall, and whether that is increasing or decreasing over time. That really controls whether or not HR is able to deal with these problems effectively on its own.
I would submit to you that, just like trust in every other institution in society, trust in HR is actually declining over time. And so one of the things that journalists and election integrity people have found is that fighting this stuff relies a lot more on the influence and trust that people have in people than in institutions. One way of thinking about that in an HR context is that combating this stuff is not going to be HR issuing a statement that says, "We did the investigation and this is not true." It's going to be thinking about who in the workplace can be a strong advocate for pushing back against certain types of disinformation, and how those people can be recruited and brought into the effort of restoring civility in the workplace.
Nichol Bradford:
What do you think would be the cause of a deepfake campaign within an organization? Is it a feeling that there's no path to address conflict? What are some of the things that might spark a deepfake or misinformation campaign at work?
Tim Hwang:
There's actually been an evolution here, I think, outside of the workplace, in the context of disinformation in media at large. For a period of time, people were like, "Oh, disinformation, it's going to be this evil genius who sits in their office and says, we're going to launch a disinformation campaign that's going to destroy America." And it turns out that, sure, that's happening, but it represents a much smaller slice of the overall disinformation problem. In fact, I think disinformation has become almost a bit of a gauche term in this world, because now we think a little more about misinformation: false facts where there's no campaign behind them, but which are instead driven by emergent social behavior.
And you can think about this in the workplace as well. It's very easy for us to imagine the deranged employee who's going to launch this campaign, or a bunch of hackers trying to destroy the camaraderie within a workplace. That's going to happen, of course, but I think it's going to represent a much smaller slice of the problem. The kinds of deepfake issues you will see in the workplace are more emergent in nature: someone going home angry and just wanting to create a video of a colleague they don't like behaving in a completely inappropriate way, and feeling so angry that they want to distribute it through the company's Slack or something like that. Those are the real kinds of problems, less campaigns and more people being people.
Nichol Bradford:
And so for an HR leader who's working with people managers around teams and communication, what would be the right way to roll out communication about what a deepfake would mean?
Tim Hwang:
Totally, yeah. So one major implication of this is that everybody, if you're an HR leader or a business leader, wants to imagine they're President Biden in the Situation Room, or Jack Bauer: "We're in the command center, and oh, we're observing this campaign that's about to start, and then we're going to do a sting operation and nail them before it begins." One implication of what I just talked about is that misinformation in the workplace is going to be fundamentally unpredictable. You will wake up one morning, the problem's going to emerge at 7:00 AM, and you're going to be dealing with it by 7:30. This is not something that you can predict, and I don't think it's something you can surveil your way into succeeding against.
The bigger problem is going to be how you respond very quickly to the disinformation, and whether your HR operations are set up in a way that can actually move fast. I think a lot of big companies don't have HR operations that can move very fast. They can't move at the speed of misinformation, and so they're going to lose; they're not nimble enough to deal with this problem. The second thing that some people have tried in the national media context is what they call pre-debunking. The notion is basically that if you know deepfakes are a thing, you may just be less likely to be taken in, and you'll treat things that look like deepfakes much more skeptically. This is the equivalent of the billboard example: if at first we thought billboards were really legit and over time we learned to tune them out, the question is how we accelerate that process. So I do think workplace education that says, "Hey, media can be manipulated in this way," is a really valuable thing to seed in a workplace preemptively.
Nichol Bradford:
One of the things you said earlier reminded me of this: I think that by the end of next year, pretty much everything online will be generated in whole or in part.
Tim Hwang:
Sure.
Nichol Bradford:
And so I think that the waterline of synthetic content is going to rise very quickly.
Tim Hwang:
Totally, yeah.
Nichol Bradford:
I think we're going to get used to those billboards pretty fast, along with the assumption of, "Oh, that's probably not real."
Tim Hwang:
Totally.
Nichol Bradford:
And I think one of the other things we might see is this: today we're worried about people putting out things that aren't real and other people believing them. I think it will also flip to the other side, where something really happened and no one actually believes it.
Tim Hwang:
For sure, right.
Nichol Bradford:
So I think it'll be both ways.
Tim Hwang:
And I think that's incredibly wise. I was talking to a researcher recently who's been working in what they call media forensics for a very long time. This is the academic art of asking, did someone Photoshop this image? And he made a really interesting observation. He was basically like, "Look, I spent my whole career trying to figure out whether or not a piece of media has been manipulated. But now I live in a world where every single piece of media is processed in some way, even completely innocuous types of media." And as we talked, I said, "Oh, yeah, it's so interesting. For a really long time we were focused on whether or not the media was tampered with, because if it was tampered with, odds are something suspicious was going on."
We live in a world right now where tampering almost doesn't make any sense as a category, because everything, like you're saying, is now transformed. It's edited, it's improved, it's changed in some way. So the bigger things we need to be asking are: what is the message being conveyed by this media? Is the intent of that messaging malicious or not? Is it destructive or not? Do people trust it or not? We're now getting to the point where the four corners of the image don't really tell us whether we need to be concerned about something. It's really more about the messaging.
Nichol Bradford:
And in the end, I think it comes down to this: will people really trust our people? So what I hear you saying is that we need fast-moving HR response teams, and we need leaders, HR leaders, and people managers who have the relationships with their teams, so that when something emerges, they're able to get into it very quickly.
Tim Hwang:
Yeah.
Nichol Bradford:
And then also, some of the pre-built trust that gives that disgruntled person, who just feels really passionate about something, an outlet, so they can talk to someone instead of making a deepfake.
Tim Hwang:
Yeah. And I think it's very interesting, because in some ways you might even challenge the definition of HR as a defined category. What I mean by that is, part of the problem you see is that HR says, "We're our own department, away from the workplace. We reach out to employees; we're a separate function." But it's those types of barriers that may actually make it difficult to deal with problems like this, because you don't necessarily have the folks who work every day with a team and have its trust. Some of that may just come with being a coworker. So that dynamic, I think, is a really interesting one.
Nichol Bradford:
It's interesting, because the way that I think about HR is that it's very much the circulatory system of a company.
Tim Hwang:
Yeah.
Nichol Bradford:
The HR people really do know what's happening vertically and horizontally. And what I think is also interesting about that is, as we move from what it takes to have a successful machine learning implementation to the experimentation you need for a successful generative AI implementation, all of it really comes back to having people-centered change.
Tim Hwang:
Yeah.
Nichol Bradford:
And having HR distributed across the organization is a way that leaders can help increase the probability of success for their machine learning or their generative AI implementation.
Tim Hwang:
That's right. Yeah.
Nichol Bradford:
So, one final question. Actually, I'm going to ask you two questions.
Tim Hwang:
Sure. Great.
Nichol Bradford:
Could you tell us a couple more unexpected challenges that people might experience? We've talked about the emergent overnight one, like the anger-driven fake.
Tim Hwang:
Yeah.
Nichol Bradford:
Is there anything else that our fast-moving HR teams should be aware of?
Tim Hwang:
Certainly. So part of the problem is, like you mentioned, that HR is the circulatory system. And part of the idea of a circulatory system is that I've got veins and arteries all over my body. But we also live in a world where communication between employees increasingly happens in channels that HR can't even observe well, right? It's not like everything takes place on the workplace Slack; people are texting each other, people are posting on social media about the workplace. There are all these other places and channels where disinformation can emerge.
And I think that really limits HR's ability to act effectively on this, because there are all these outside influences and channels that they don't immediately have access to. So that's one interesting thing: how much disinformation is emerging on a private WhatsApp group that HR doesn't even know about? And how do they deal with that problem? I don't know if I have any smart answers there. It's a problem even in the national media and journalism context as well. So that's one I'll flag.
The second one, maybe just to draw out a comment you made a little bit earlier, is what researchers in this space call the liar's dividend: the notion that even real stuff is now going to be considered fake. I think that is a very similar and interesting parallel problem that we're going to deal with. Again, no smart answers, but it's another unexpected effect of this deepfake technology.
Nichol Bradford:
And so where are we 10 years from now? What does the future of misinformation look like?
Tim Hwang:
Yeah, I mean, I like your bold thesis, which I think you've been hinting at throughout the episode, and which I think is very real: in 10 years we're going to be talking about something else. It's very, very possible that what happened with billboards and what happened with photography is exactly what's going to happen with deepfakes, right? In 10 years we're still going to be dealing with the problem of disinformation in the sense of false facts, false beliefs, and the problems they cause. But I wouldn't get so tied up on deepfakes as a technology; that's going to come and go. The bigger question HR people need to be asking is not necessarily how to get a deepfake solution. You can imagine solving the deepfake problem and still losing the disinformation war. Keeping an eye on the disinformation problem is the really key thing.
Nichol Bradford:
The other thing I'd add for 10 years from now: the HR team that is able to be a rapid and effective responder to whatever comes is the HR team that's prepared for the future. The capacity that gets developed around this particular problem is a capacity with broad utility for flexible and agile organizations.
Tim Hwang:
Yeah, I think that's critical, for sure. Maybe what we confront in deepfakes is not just a single technology but a class of issues. And if HR teams can deal with it effectively, I agree with you: I think they will be very well positioned to deal with many other problems that are likely to come through the workplace.
Nichol Bradford:
Great. Fantastic. Thank you so much for your time. That was very insightful. And so that's going to do it for this week's episode of the AI+HI Project. And a big thank you for sharing your wisdom.
Tim Hwang:
Yeah, thanks for having me.
Nichol Bradford:
Before I say goodbye, I would encourage everyone to follow the AI+HI Project wherever you enjoy your podcasts. Also, audience reviews make a big difference. So thank you.