The Business Storytelling Show

What Are The Ethical Considerations For Generative AI Use in Business, with Christoph Trappe of The Business of Storytelling Podcast - Featuring Lately CEO Kate Bradley Chernis


Speaker 1: (00:01)

This is the Business Storytelling Show, a top global marketing podcast listened to in more than 100 countries, live streamed on social media and broadcast on DBTV. Christoph Trappe chats with industry leaders to help your company tell better business stories. Here's today's episode.

Speaker 2: (00:24)

Episode 666. Oh my goodness, are we talking to the devil today? Nope, absolutely not. One of the OGs in AI, and everybody's talking about AI today, Kate Bradley Chernis will join me in a second here from Try Lately. That's their Twitter handle; the company is Lately AI. I wanna throw it up here at the bottom if you're watching on any of the live streams. But before we get there, I wanna show you something really quickly. You might have noticed a new angle of my face. I'm so much closer to you. Let me pull it up. This is actually the Road Warrior Pro V2 PlexiCam. They sent it to me, and I really appreciate it. It's kind of cool, honestly. It sits right in front of your face, so if you do a lot of stuff on the monitor while you're talking, maybe it's not the best setup, but the angle is kind of nice. We're gonna ask Kate what she thinks of the angle, whether it's better or worse. I'm really interested in her opinion. But anyway, we're gonna talk about ethical considerations when it comes to AI. People are just doing whatever they wanna do, and it's kind of the wild west out there. And who better to talk to than one of the OGs in AI to find out what people should do. Hey, welcome back to the show.

Speaker 3: (01:39)

Hang in there. It's not about the angle, Christoph, by the way.

Speaker 2: (01:45)

It's not about the angle.

Speaker 3: (01:48)

No, it's, no, yeah, it's about the angle.

Speaker 2: (01:51)

It's always about the angle. I kinda like it, I don't know. I mean, we'll see. It's kind of in your way, but you get used to it. And I guess if I'm looking into your digital eyeballs, we are good to go.

Speaker 3: (02:04)

It's very Ferris Bueller actually. That's how I feel.

Speaker 2: (02:07)

Yeah, yeah.

Speaker 3: (02:08)

You know, fourth wall.

Speaker 2: (02:10)

Alright, well, I'm not skipping today's episode, and we're talking about ethical considerations. So first of all, why do we have to talk about this? And you know what, there are a lot of people searching for the answer to this question. Why do we have to make it so difficult? What are the problems people are creating that are not necessarily ethical today?

Speaker 3: (02:33)

Well, the problem is the premise. Hollywood misdefined what artificial intelligence is, and so our expectations of what it is are based on fear. Of course, we've all seen The Terminator, for example, and that has caused, certainly, panic, but also a real lack of understanding of what artificial intelligence is. And is the AI that we even have now actually true AI, or is it mostly automation? I'm going to argue that it's mostly automation in many cases. Or there's a misunderstanding of what machine learning really is. We expect magic to be real. And I'm the biggest Harry Potter fan. I read those books every summer, every year, and I want magic to be real. And I sure am working on it, but it's not real, you know? So R2-D2 doesn't exist. I'm sorry about that.

Speaker 3: (03:31)

So that, I think, is the biggest problem. And because we have that kind of Hollywood-based fear, we're transferring it onto what exists now without a true understanding of what artificial intelligence really is. And we can sort of get into some basics around that. But what I like to tell people to remember is that we're still in an if-this-then-that kind of matrix scenario, right? What's beautiful about artificial intelligence is that it's able to process all that data faster than we can as humans. And so it seems like magic, but it's still, you know, that matrix.

Speaker 2: (04:14)

You know what's interesting to me when you talk about that: AI presents everything like, this is it, like they're the authority, it's correct. So when people complain, Christoph, you say everything like it's correct, even when you're not, I say, when you complain about me, don't even start looking at AI, because it's even worse. But I think people look for the simple answer, to be honest. You know what I mean? Like when I talk to people about how to use AI in content creation. I use AI all day long, different tools here and there, do this and do that and the other thing. But I never, ever go to any tools, they shall remain nameless, Jasper, for example, and say, write me an article on the five ways of being ethical when it comes to AI. Total garbage.

Speaker 2: (05:06)

Right? Like, what does it do? It just respins something. I mean, it's like you guys, right? The first question you ask is, hey, can we get the video from this podcast episode? Right? And you're gonna use this podcast footage, run it through your tool, and then use it however and wherever you wanna use it. You don't just go, hey, let me find some random B-roll and split it up. Do you know what I mean? It's the same concept. So is that part of it? I don't wanna say people are dumb, but they just don't get

Speaker 3: (05:38)

It. I mean, people are lazy, number one. That's what you're keying into. And we are too. Everybody wants the easiest path to whatever. Of course we do; that's natural, because we have better things to do, right? I get that. But Harvard Business Review actually wrote an article recently featuring both Jasper and Lately as leaders in what they call collaborative AI. Collaborative AI is when humans are part of the process, training the AI, nudging it along the way. And the reason this is so valuable: there's the ethical component, but more importantly, there's the ROI. Their study compared the ROI of AI alone versus humans collaborating with AI, and the humans collaborating beat it two to seven X every time, right? So there's more than an ethical reason for this kind of collaboration to happen.

Speaker 3: (06:32)

And I'm gonna define collaboration real quick. You need to be able to analyze the results that the AI spits out to you, for the reasons you just said, 'cause the AI is dumb. It needs a human to train it. And I want to double-click here on that perspective so people can really understand. If you think of AI as a human for a moment, it's about three months old on the human lifespan. And if you imagine a three-month-old: a three-month-old is 100% reliant upon other humans to survive and thrive. It can't feed itself, defend itself. It can't even sit up straight, right? So AI is the same way. Now, the fascinating thing to me, Christoph, is that this ability to analyze what results come from AI and then course-correct them happens to be, in this period of our lives, the number one skillset that we as humans lack. 'Cause we've been taught over the last three decades not to identify problems but to bring people solutions. And so employers are complaining about this. It's a global phenomenon where they can't find employees who have really solid analytic skills. Think about just even Google. We were talking about teenage daughters earlier, my friend's kids. They know they can ask AI, they know they can Google anything, but they don't know what to ask.

Speaker 2: (08:00)

Well, first of all, I don't agree that kids can Google anything, because I'm plenty of times in the vehicle with my kids, and they'll ask me something, and while I'm driving, I have to think about it. It's crazy, right? I'm like, just Google it or use Perplexity AI. And by the way, maybe I've misheard it, but I've heard that thing, and I'm a young Gen Xer, so to everybody who takes me to task that I have the millennial pause when I do a TikTok: thank you for saying I look younger, but I am a young Gen Xer, barely. So you said companies want people to bring them solutions. And then you mentioned earlier, totally not connected, that we're all lazy. And we are. I actually think bosses who run around and say, bring me a solution, don't bring me a problem, are kind of lazy, because they don't wanna collaborate, they don't wanna talk about it. I'm like, I don't have time to talk, and I don't have time for stuff either, but do you know what I mean? Why are people bringing the problem to the boss? Maybe it is to throw it over the wall for some, but a lot of times it's like, hey, I'm stuck with this, can we collaborate? And they're like, well, don't bring me a problem. They're just lazy.

Speaker 3: (09:16)

They are. This is a rule that we have in my company, which we call the three-quarters rule: get me three-quarters of the way there. So if you identify a problem, it's your job really to fix it, or get it fixed, to deliver, right? And to do it in a way that, like you said, doesn't throw it over the wall to me. If I have to put my eyeball on it in any way, make it really easy for me to do that. So you're doing most of the work, and then I'm just giving you a couple tips to move on with. That's also about asking forgiveness versus asking for permission. And it's something we touch on. I'll have to remind the staff every once in a while. It's part of being a team player too, because you don't wanna just, like you said, throw your neighbor a bag of junk. That's not any way to incur favor. And because we all work from home now, it's more important than ever to behave in this way.

Speaker 2: (10:17)

Yeah, you have to. But, interesting topic, we could probably do a whole other episode on that. Let's talk about some of the ethical things we have to address. One thing I wrote about way back when is that you have to disclaim when you use AI. And I was thinking about that topic again, and when I wrote that article, I was actually thinking about some of those tools where you literally just create from scratch, without any source content. But you guys don't do that. I don't do that. I use other tools for this show, and I give them source content. So I don't know, do I really need to disclose that I used Lately AI to get video clips from my podcast? Who cares whether I did it myself or the software did it? There's different levels, right?

Speaker 3: (11:06)

To think about. I mean, I think it's like this. If you come over to my house for dinner and I made a cake from cake in a box, do I have to tell you that I didn't make it from scratch? No, of course I don't. That is totally silly. You know, with us, we're not a large language model. You're using our math to put on your data. So there's a different set of ethical rules that would apply to us or not apply to us. Because most people, like we said, are lazy, God bless us, and are using AI to cheat, which is what's happening for the most part. That's where I think the ethical question comes in. They're trying to pass off somebody else's work as their own.

Speaker 3: (11:53)

This is very much like CliffsNotes. Okay, I'm Gen X too, for the Tracys in the room. CliffsNotes were these little books that would summarize all the classic great books of the world, like Moby Dick, et cetera. And instead of reading all of Moby Dick, you would read the CliffsNotes, 'cause it would be shorter and faster and summarize it. And it would include all these essays, most likely the essays the teacher would ask you to write about any given topic. And so people would copy them, for the most part. But the thing is, the teacher has the CliffsNotes too. So they would immediately know. And this is very much the case for generative AI now, right? If both you and I typed the same prompt into ChatGPT or Jasper or anywhere else, we'll get the same results, because there's no possible way it could understand my data, my audience.

Speaker 3: (12:44)

There's no access to that information, so it's gotta give you that kind of generic response. So I think the AI question comes here: if you're trying to pass it off as your own, then there is an ethical response. But I think it's pretty easy. We could put some kind of speedometer on it, or, you know, when you buy food, there's the whole FDA list of what's in the product. I don't think there's anything wrong with that. If we could have that for AI: what percentage of this was written by a human, or AI, or collaborative humans, or what was quoted from a U.S. News & World Report article, that kind of thing.

Speaker 2: (13:27)

Yeah, I'm not a big fan of quoting anything anyways. Just tell your own story. The time of curation is over, to be quite frank, in my opinion. I actually have, wait,

Speaker 3: (13:37)

What do you mean by that? What do you mean by that?

Speaker 2: (13:39)

You know, like the Drudge Report. I don't even know if they still exist, but they just have links to other stuff. I'm like, if that's your model, especially as a company, just linking to other people's stuff, good luck. You're not a thought leader, you're a link collector. You know, I link to other people too, but it's very specific, and it's not my main bread and butter. When I see a good podcast, sometimes I'll link to it, but very rarely, if you think about it. Now you've thrown me off what I was gonna say. First of all, if you invite me over and you tell me you're gonna bake a cake, if that's what you say, I do wanna know that you've made it and not bought it. So

Speaker 3: (14:21)

You do.

Speaker 2: (14:23)

So the devil is in the details, right? Like, talk to

Speaker 3: (14:26)

The devil.

Speaker 2: (14:28)

Well, you're right. 666, maybe we are talking to the devil? No. Haha, just kidding. Just kidding.

Speaker 3: (14:34)

Wait, I've got a story for you about cake real quick. Yeah. I think this is so fascinating. When Betty Crocker released cake in a box, they were the first to do it; it was the fifties. They were marketing to housewives, and it didn't sell very well, 'cause the housewives were like, I didn't bake this cake, this is totally weird and bizarre. So what they did was take the powdered eggs out, and their slogan became just add an egg, and then it sold like hotcakes, because the wives felt like they actually baked something, that they had a role, right? And I parallel this idea with AI, because this goes back to that collaborative AI we were talking about. When humans have a role in automation, in technology in general, that fear we were talking about tends to go away, number one. But also, as we said, that collaboration is when you see the high performance, the same way with cake in a box as with collaborative AI.

Speaker 2: (15:39)

Sorry, I was on the wrong screen, because I was pulling something up. That's a good example.

Speaker 3: (15:43)

The devil. So, were you pulling up the cake? Was that

Speaker 2: (15:46)

Pulling up the cake? Yeah, yeah.

Speaker 3: (15:49)

That's my favorite.

Speaker 2: (15:50)

So the reason I was doing that is, Christian Alma, he was on the show, and he talks about that example specifically in his book, Start With the Story. And he talks about the narrative economy. So it's absolutely correct. Now, I still don't remember the original question I was gonna ask you, but how do you get companies... It's very interesting to me, because companies are all over the place, right? Like when social media first came out, they were like, you can't talk about these 59 things, and then people just stopped using it altogether. And now you've got some companies who say you can't use AI. But saying you can't use AI is kinda like saying you can't use a pen, right? I mean, how many

Speaker 3: (16:36)

That's silly

Speaker 2: (16:36)

Versions of a pen are there? So how do you, as a company and with your teams, move forward and make sure there are some guidelines, some rules? Oh, now here's the story. Before you answer that, here's my story. You can actually tell what's written by AI, because I read this book over the weekend, and it had delve in it. Delve, delve, delve, delve, delve. And even though that might be a very common word in British English, I'd never seen that word until AI came around, and now it's all over the place. Interesting. And the other thing is, if you don't edit AI, and if you don't especially interrogate it, which I do very well, like, no, do this better, this sucks, or whatever, it backs into sentences, right? It says, because the sky is blue, Christoph likes to be outside, instead of, Christoph likes to be outside because he enjoys the blue sky. It's a total giveaway. And the other thing is, if you don't watch out, AI loves bulleted lists. So if it's list after list, that's AI, a hundred percent, even if you have source content. So you have to edit. It's true. But my question still: how do you... Go ahead.

Speaker 3: (17:51)

Yeah. So you can smell it a mile away, absolutely. And the people who can't are not too bright. And what's sad about that, and that's, again, a remark on society, sort of going way back here: the fact that generative AI has been the boom that it has been is a real spotlight on how poor we are at communicating. This is basic writing skills, right? How to get people to do what you want them to do, which is all communication. People are so bad at it that they would rather just type a couple of prompts and have it all done for them, which is a shame, frankly. And this is where I think about a calculator, for example. I'm not good at math. I was a fiction writing major, and I was sort of tortured by my algebra teachers, but I know how to do basic division, obviously, and long division, and I have to go to the internet to help me with percent calculations, stuff like that.

Speaker 3: (18:59)

But I remember my teacher saying, you can't know if the calculator is right if you don't have the basics in front of you, because of human error especially, right? So when you're typing something into a calculator and you get the sum or result of whatever it is, I'm quickly doing a check in my own mind, like, oh, okay, yeah, that's right. And that's the skill you need to have with AI, with writing, with anytime you're relying on technology to do something with you. You still have to have this ability to check whether it's right. This is why, by the way, self-driving cars aren't really a thing: there are so many variables the AI would have to understand and make a judgment on that only a human still has that capability, because there's not only not enough data for the AI to draw on from the past, but not enough patterns of the same variables, 'cause there's an endless number of variables, right?

Speaker 2: (20:05)

You know, I liken it to Citi Lain, who's in the market research space, and we can give him a shout-out here on LinkedIn. By the way, if you wanna leave comments, you've got a couple minutes to do it live on the show, and after that you can leave them as a comment and we can respond later on. But Citi said that a lot of people are now using AI in market research, and he says that's fine and great, but if you don't know what good market research looks like, how do you know that whatever AI just gave you is good market research? So it's the same example as what you said with the calculator. You still have to have that base knowledge. But how are people gonna get the base knowledge in whatever it is they're doing, whether it's running a business or anything else? I mean, you can use AI for anything. How do you get that knowledge if you think you don't have to, because you can just use AI?

Speaker 3: (21:03)

Yeah, well, that's that analytic skillset that's missing. But there is a shift now of understanding: hey, saving time is great, everybody wants to save time, but making money is better. That's why I love the Salesforce ads: ask more of your AI. We should all be asking more of the AI. Is saving time enough? Social media is a great example. If you're just pushing out content for the sake of pushing out content and it's not actually doing anything for you, what are you doing? And that is the case for most social media managers. They wanna go home and grab some lunch; they don't care. But their boss is on the line for higher performance. And so it's up to the boss, the CMO, whoever they are, to make sure that the AI managers, the digital and social media managers, or whoever they are, are actually creating content that's effective. So effectiveness is this new wave. People are starting to cotton on: okay, AI, you're not exactly what we thought you were. We still love you. Life is easier, thank you so much. But now we want more.

Speaker 2: (22:11)

What's interesting about the whole effectiveness comment, too, is that sometimes when people say that, they think it means to slow down. But when you look at the people who actually made a career out of content and driving performance with content, they're not always a sweatshop of podcast episodes or blog posts, for example, but they always keep going. Even if you look at some of the experts who say that same thing, they have a new book out every year and a half, and the reason is they know it works for them. People buy it, it keeps establishing them as a leader, whatever it might be. Always great to have you on the show. I think it was the third time; it's been maybe a couple hundred episodes since we last saw each other. In the last 60 seconds or so, tell us who should reach out, who should connect, who should use your platform. And thanks for joining me. Nice to see you.

Speaker 3: (23:03)

I love you. Thank you so much. Anybody who wants to learn exactly the words that will make people do what they want them to do should reach out to me. 'cause that's the business I'm in and I do it really well.

Speaker 1: (23:19)

Thanks for tuning in. Please rate and review the Business Storytelling Show on your favorite podcast platform and subscribe so you don't miss the next episode.
