The Friedman Group

Unlocking The Secrets Of Collaborative AI In 2024, with Brad Friedman of The Digital Slice Podcast - Featuring Lately CEO Kate Bradley Chernis


Speaker 1: (00:03)

Hey, Kate, welcome to the Digital Slice Podcast. How are you today?

Speaker 2: (00:07)

Hey, Brad. I'm doing, you know, pretty good. I could use a little more sunshine in my life, but other than that, I'm surrounded by good people and, um, good friends like you. So hard to complain.

Speaker 1: (00:18)

Oh, good. Well, this is gonna be fun, so maybe this will brighten your day a bit.

Speaker 2: (00:24)

I like it. I'm ready. I just started like doubling down on my vitamin D because, you know, it's January and, and, uh, January is really good for one thing, which is my birthday. But other than that, you know,

Speaker 1: (00:38)

Drag. And you, uh, you, you celebrate your birthday for like a week, right?

Speaker 2: (00:43)

Well, only because it was a, I turned 50, so this was a very big birthday and, and, um, I had some special things lined up just to, you know, it's a milestone. You only turn 50 once. Right?

Speaker 1: (00:55)

Yeah. That's a big milestone. That's great. Well, happy birthday.

Speaker 2: (00:59)

Thank you. Thank you so much.

Speaker 1: (00:59)

We'll drag it on into today too. We'll celebrate today.

Speaker 1: (01:04)

Happy birthday.

Speaker 2: (01:06)

I'm trying to close all my sales, you know, telling them all, listen, it's my birthday.

Speaker 1: (01:10)

There you go. I think that's a, oh, that's a great idea. I might try that. So, Kate, you have, uh, been on the Digital Slice podcast before, which I appreciate very much, but in case there's anybody out there who missed it the last time you were on, um, take a minute and tell us a little bit about yourself.

Speaker 2: (01:32)

Yeah, well, the, the funnest secret is, um, and I'm just admiring your microphone right there, is that I, I used to be a rock and roll DJ, and it's sort of a shame that I don't have mine set up. I do have a really good mic and it's sitting in a box over there, and I just never have hooked it up, you know? But, um, my last gig was broadcasting to 20 million listeners a day for XM Satellite Radio. And really, before XM I was on live radio all the time, which, you know, that's unheard of now. But we were into it and we loved the Theater of the Mind, and we really, you know, this is before you could look people up on the internet and see anything about them. And so we would really lean into it and kind of just make up these skits and stories and, you know, we're just messing around on the radio, and it was really, really fun. Uh, I miss it, but it was an art, you know? Totally. Yeah. Yeah. So that's, that's the juiciest nugget, I guess.

Speaker 1: (02:31)

Those days are long gone.

Speaker 2: (02:33)

Yes, they really are. It's so sad. Like, there's a couple of stations here that I listen to, I mean, I still listen to the radio, believe it or not. Mm-Hmm. And I have the CD player in my car, um, because I'm curious, I'm just curious what's going on. And also I like to yell at the radio and be like, oh, stop talking, you idiot. You know? Yeah. I like to

Speaker 1: (02:55)

Complain. Yeah. I listen to the radio, um, quite a bit when I'm in the car. And, um, it's just frustrating because everything is on, like, a repeat cycle. There's millions and millions of songs in the world, and it seems like what I'm listening to on my way to my lunch meeting, I hear again on my way home an hour and a half later, because it's just cycling, it's true, over and over again. And that's my biggest frustration with radio right now.

Speaker 2: (03:29)

Yeah. They're following charts and they're not making actual human decisions, is what's happening there. Yeah. You know, and then, you know, also, people, self included, were idiots. And so, like, we used to play the same song 300 times in a week for you to hopefully hear it once, you know. Wow. That's a lot. Right?

Speaker 1: (03:50)

Yeah. Well, we need to have radio stations engage AI to be picking our music throughout the day, and I bet we'd get some variety that way.

Speaker 2: (04:01)

Yeah, I mean, I think, I don't know, Spotify is doing an okay job. Like, sometimes we'll be, you know, listening to some great Ella Swings channel and they'll try to throw a totally random thing in the mix, and we're like, we see what you're doing, you know? Skip it.

Speaker 1: (04:18)

Or whatever.

Speaker 2: (04:19)

Right, right. But like I do, I do really, I mean, this is an important thing, 'cause we're talking about curation, and marketing is that way as well. Right. The reason you pay attention to Ann Handley's newsletter, for example, you know, or, um, Bryan Kramer on Instagram, I see your H2H pillow back there. Like, oh yeah, because those guys are, are really great at curating the information that they're going to present to us. And you trust them, right? Yes. We trust you. You know? And, um, when you have a human back there or an algorithm that you trust, then you're more likely to take, take those risks and, and take those leaps. I like to be surprised, which is kind of what you're getting at.

Speaker 1: (05:03)

Right. And

Speaker 2: (05:05)

I find that humans do that. Algorithms don't, you know. I've never, I've never heard a song on an algorithm and been like, oh, yes. 'Cause I expect it, I expect whatever it is, you know.

Speaker 1: (05:16)

Yeah. Yeah. That's true. That's true. So I was looking it up and the last time you were here was in October of 2022. Wow. So it's been a minute. Mm-Hmm. And I, I'm wondering if, um, anything about artificial intelligence has changed since then.

Speaker 2: (05:37)

I love you.

Speaker 1: (05:39)

Yeah, lemme throw that out. I, I mean, I don't know. I've been living under a rock since then, so I was just curious,

Speaker 2: (05:48)

Boy, what a ride. I mean, you know, in the fall of last year, around November, December, we were all like, finally, finally, you know. And for a long time, in the finally phase, we were like, okay, how are we gonna embrace this kind of wave? And then we found ourselves, um, having to educate people on AI again, because suddenly everybody thought that they knew what AI was, but they didn't. And they certainly didn't know, like, what generative AI could be, because they knew one kind, they knew ChatGPT. And we are a totally different kind. We are not a large language model. Right. We, we take our math and we put it on your data, and that's how it works. We can talk in details if people want to be bored by that, but, um, so it's been a reeducation process.

Speaker 2: (06:46)

You know, Brad, we're still selling against magic, which is another kind of phase here. Yeah. Because people have a huge misunderstanding of the definition of AI. They think it's Hollywood, you know, and so their expectations are, in fact, magic. And we have to disappoint them a little bit, you know? Um, so that's very, very interesting to me to watch. Um, this year will be the year of collaborative AI. People are just cottoning onto this. It's been something we built into the platform from the beginning. And, and just so folks know, that's when, um, humans are woven into the process of training the AI by analyzing and, and course correcting it. And Harvard Business Review actually released a recent article citing Lately as a leader, and they did some studies that showed that collaborative AI, versus AI alone, um, performs 2x to 7x greater, you know, all the time.

Speaker 2: (07:49)

So there's a reason, kind of, you know, we still are kings of the, of the, um, the animal kingdom, I guess you might say. Right? We're still kings of the mountain, or queens, whatever we wanna be. Yeah. So it's, it's, um, I'm, I'm curious to, to just see what will come after that. But like the finally, I'm seeing people embrace this understanding of, oh, humans, not only because it's ethical, right? And we all agree on that, but, but it's not even possible for the AI to replace, um, to replace the, the essence of us. It's not even anywhere close.

Speaker 1: (08:32)

Right. So, jumping back for a second, because my brain is simple, as you know, um, when you initially mentioned the, a quick difference between, um, Lately and ChatGPT, um, when somebody puts a query into ChatGPT, ChatGPT is basically looking all over the internet for an answer. Whereas, when you described Lately, Lately is using the data that I've already put into Lately, with your math, instead of scraping the whole internet. That's right. It's just looking at my data to come up with a variety of social media posts. Is that right?

Speaker 2: (09:23)

Yes. And it also will look at my data and help you out. So when I write social media posts, I do them by hand, not assisted by AI, and I'll get like 86,000 views on LinkedIn. And so we taught Lately a series of a hundred kind of best practices to help you out if you need help. Um, and then in addition to that, Brad, we taught Lately to look at the best practices of our customers at large, never sharing the information, but it can see the patterns of generally what works well, and, um, nudge you in those directions as well.

Speaker 1: (10:06)

So it might ask me to add a hashtag and add an emoji and those kinds of things, because that's a best practice.

Speaker 2: (10:16)

Yeah. And it's specific for that particular post. So it's, it's surfacing you an idea, and the idea is tied to your analytics. So there's a, what we call, this is, this is little, again, hopefully everyone is not tuning out, but we have a continuous performance learning loop built into the product, which means that every generated response we give to you looks at your analytics specifically, and it's designed to get you, Brad, the highest possible engagement with your target audience. Right. Um, whereas with ChatGPT, kind of to your point, which is great, we love ChatGPT, all those answers are pulled out of thin air. They have no analytics to reference, so there's no understanding, excuse me, of, of you and what you might want, really. I mean, as far as, like, how you sound and everything, how your voice would sound, how you would write, and then what your target audience would like to read, should you prompt it to read something or give you, uh, some text.
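[Editor's note: the "continuous performance learning loop" described above, where every generated post is scored against the user's own analytics, could be sketched roughly like this. This is a hypothetical illustration only; the function names and the word-level scoring rule are invented for the sketch and are not Lately's actual implementation.]

```python
# Hypothetical sketch of a performance-aware generation loop:
# candidate posts are ranked against the user's own engagement history.
# None of these names come from Lately's real API.

def engagement_score(post: str, history: dict[str, float]) -> float:
    """Average the historical engagement of the words a post uses."""
    words = [w.lower().strip("#") for w in post.split()]
    known = [history[w] for w in words if w in history]
    return sum(known) / len(known) if known else 0.0

def pick_best(candidates: list[str], history: dict[str, float]) -> str:
    """Surface the candidate predicted to engage this audience best."""
    return max(candidates, key=lambda p: engagement_score(p, history))

# Per-word engagement rates mined from this user's past analytics (made up).
history = {"ai": 0.9, "marketing": 0.7, "synergy": 0.1}

posts = ["Leveraging synergy today", "How AI changes marketing"]
best = pick_best(posts, history)
```

The point of the sketch is the contrast Kate draws: a generic model has no `history` to consult, so every candidate scores the same for every user.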

Speaker 1: (11:20)

Right? Yeah. So ChatGPT, um, is never going to, uh, um, really become human.

Speaker 2: (11:34)

Well, I don't, I mean, I don't know how it's possible. So, like, let's talk about that for one second. So I think what people misunderstand, and I hope this is where you're going, is you have to have not only all the data, you have to have a large amount of data to work with, but you also have to have patterns within the data. And then you have to have patterns of the variance within the data as well, right? So, like a self-driving car, for example, the reason they're not working so great is because how do they know if this thing that just flew by the front window is a bird or a dog or a deer? And if it's a dog, like, it has to reference every kind of dog possible to make sure. Is it a pit bull?

Speaker 2: (12:18)

Is it a poodle? Is it a big poodle, a toy poodle? Like, there's all these references, right? So if you've watched the TV show from HBO called Silicon Valley, you know what Not Hotdog means, right? Mm-Hmm. So that's when the character Jian-Yang creates an app on his phone, and it's designed to, um, be like the Shazam for food. And you could hold it up to some food and it would tell you what kind of food it is. However, he's only had time to feed the app, his AI, pictures of hot dogs, right? Right. So if you hold it up to a piece of pizza, it says not hotdog, 'cause it doesn't know what a pizza is, right? Right. So I think that's kind of one reference. But then the next thing, there, there's a lot to it. I find metaphors are very helpful for me.
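[Editor's note: the Not Hotdog gag makes a real point about training data, which is that a model can only label what it has seen. A toy nearest-neighbor sketch, with entirely invented feature vectors and threshold, makes that concrete.]

```python
# Toy illustration of the "Not Hotdog" idea: a classifier trained only on
# hot-dog examples can say "hotdog" or "not hotdog", and nothing finer.
# The features (length, width, redness) and threshold are invented.

import math

# Pretend feature vectors for the hot dogs the app was trained on.
HOTDOG_EXAMPLES = [(6.0, 1.0, 0.8), (5.5, 0.9, 0.7), (7.0, 1.1, 0.9)]

def classify(features: tuple[float, float, float], threshold: float = 1.0) -> str:
    """Label an item by its distance to the nearest known hot-dog example."""
    nearest = min(math.dist(features, ex) for ex in HOTDOG_EXAMPLES)
    # Anything far from every hot dog it has ever seen falls into one bucket:
    return "hotdog" if nearest < threshold else "not hotdog"

close_item = classify((6.2, 1.0, 0.8))   # near the training data
far_item = classify((10.0, 10.0, 0.2))   # a pizza, a poodle, anything else
```

A pizza, a pit bull, and a toy poodle all collapse into "not hotdog" because no training example ever distinguished them, which is exactly the self-driving-car problem described above.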

Speaker 2: (13:10)

So if you think about, um, in The Mandalorian, the, um, um, killer robot who they turn into a nurse, right? They flip a switch, and there's this scene where it's pouring tea, and it's trying to grapple with first picking up the cup, you know, with its not really opposable thumb, and not squishing the cup and breaking it, and then getting the, you know, kettle just in the right place and actually pouring it. And, and this is fiction. This is fiction, but even in fiction, they're getting that this struggle is so hard, and it's something that you and me and every human, we know this stuff, we take it for granted, right? Right. And the reason, the reason is, is because your executive function is, in large part in your brain, responsible for this every millisecond, nanosecond, whatever.

Speaker 2: (14:07)

It's taking in all the information that's happening here, and it allows you, given this information, then this is the next thing you should do, or the smartest thing you should do, or the wisest thing you should do. And your executive function works like magic behind the scenes all day long. You couldn't possibly spreadsheet out all the little decisions it has to make. Right? So you think about that and, and how powerful that is, and how it can reference not only the history of experience to make those decisions, but it's also referencing, you know, all the times you've watched other people make the decision. Plus there's instinct, which is very, very powerful. Um, then there's, like, I don't know what the term is called, like with a pack of wolves, where they have, like, you know, the reason birds know to fly in a triangle, like all these, um, sort of historical, over generations, animal-instinct kinds of ideas. So if you can understand the power of what we do every day as humans, and then try to figure out how AI could possibly replicate that, you know, it blows the mind, right?

Speaker 1: (15:25)

It's mind boggling.

Speaker 2: (15:26)

It's mind boggling. Yeah. We haven't figured it out. So it's, it's still very much an if-this-then-this scenario, right? That's what it is, automation essentially.

Speaker 1: (15:38)

Yeah. I just, I, I just don't see, I mean, I just don't understand why everybody's thinking that, you know, this is such a terrible thing and it's gonna take over the world. And yeah, I'm just, uh, I'm just not there. Um,

Speaker 2: (15:58)

There certainly are worse things to worry about happening in the world, you know, that

Speaker 1: (16:03)

Yeah. It seems like

Speaker 2: (16:04)

Attention, yeah. I mean, you know, when the microwave came out, people freaked out.

Speaker 1: (16:14)

Well, I mean, even today, I'm, I read just not long ago about the, uh, cell phone, um, causing men to become sterile.

Speaker 2: (16:28)

Oh yeah, sure.

Speaker 1: (16:29)

I mean, which is related a little bit to the microwave and

Speaker 1: (16:33)

People freaking out.

Speaker 2: (16:34)

Stand back! I remember.

Speaker 1: (16:36)

Yeah. Like, don't get too close to the TV.

Speaker 2: (16:40)

Definitely. Yeah. I mean, we're evolving as humans thanks to technology. And this is a huge leap in, in our, you know, phase of, of growth, you know, all the eras from

Speaker 2: (16:57)

The wheel and beyond. Um, I saw, I forgot who it was, I think Mike Granade, who's this really amazing, um, AI professor who teaches at, like, every Ivy League in the world. He had put together this video and was showing, like, there was a, oh, it must have been from a Gartner report. Gartner had a graph that was showing, like, the big, um, technological improvements over the years and how far advanced each one was, like the, the, um, intensity of the impact. And obviously the internet, you know, was a big one. But this recent advance of AI was, like, two or three times the size on the graph of everything else, because of how far it's moving us forward. Very interesting. Right? Yeah. And, um, like, you can see, I mean, obviously I focus on writing. This is the AI that I know about. I don't know about AI at large, generally, but it is amazing to me how painful this skill is. This is basic communication. Humans have been able to write something down since we, since we became

Speaker 1: (18:14)

Since hieroglyphics

Speaker 2: (18:16)

Humans. Yes. I mean, so, Jesus, what, what's wrong with us? We can't write. This is a problem.

Speaker 1: (18:24)

Yeah. And you would think, since we can't write that we would be embracing AI to, uh, to help us rather than being afraid of it.

Speaker 2: (18:33)

I mean, I feel like, well, everything, the pendulum always swings backwards. Right. You know, and I, I've been thinking about emojis, and then, like, how my friends' kids are teenagers, and when they talk to you, they're literally also signing emojis, like the cry emoji, as they're talking to you. Right. That's so cool. And what are we gonna, are we going back to grunting? I mean, like, what's happening here? Like, how, how hard is it?

Speaker 1: (19:05)

Yeah. So where's AI going to go from here? What's next?

Speaker 2: (19:13)

Yeah. Well, certainly this collaborative AI thing is, is that's the next wave this year. I, I believe that 2024 will be the year of understanding. And, and the reason it's so important is because there's a symbiotic relationship between AI and humans. They both need each other to succeed. Now we know this. Um, there just happens to be, coincidentally, a massive lack of analytic skills in the workforce across the globe. And so the understanding, the ability to actually analyze anything, in software or anywhere else, is not a human skill that's been kept sharp. It's been quite dulled. And part of it is because we've become so solutions oriented. Present me solutions, don't show me problems. You hear this in offices all over. Right? And so, um, again, I was noticing with my, my friends' kids, they were saying that they know that they can find an answer on Google, but they don't know what to Google.

Speaker 2: (20:16)

They don't know how to verbalize or, or communicate the problem. So, so identifying problems, this is why the skills of analysts, of analytics, are, are tanking, is something that we either don't want to do or can't do anymore very well. Right. Knowing that there is a problem, because that's the beginning. How can you fix it if you don't actually know? So it'll be an interesting year to see how people are gonna solve for collaborative AI. Um, and, you know, I think the other kind of component will be around, obviously, the legal stuff. That part is only gonna get more and more intense. We've seen George Carlin just has a lawsuit now. Like, there'll be more and more lawsuits, probably from high profile individuals, you know?

Speaker 1: (21:12)

Yeah, I saw that George Carlin. And then this morning I was reading about, um, the, uh, AI deep fakes of, uh, Taylor Swift.

Speaker 2: (21:24)

Oh, yeah. And,

Speaker 1: (21:26)

And Twitter changing the algorithms so that your searches are being blocked, so you're not getting Taylor Swift pornography. Oh God. Which, uh, it's a little disturbing. Um, really. So in a collaborative AI model, how long does it take to sort of train the AI to recognize that you're collaborating with it and learn from you?

Speaker 2: (21:58)

Hmm, that's a good question. And obviously different, um, models will have different, you know, times. For us, it'll start learning right away, but really, if you're, if you're doing what we tell you to do, it should get to know you pretty quickly. Um, but you'll see it start to really replicate your voice about one to three months in. And, um, I'll give you an example. So, um, as you know, Brad, like, I swear like a sailor normally. Dirty, very dirty mouth.

Speaker 1: (22:37)

I've just been waiting to push that button.

Speaker 2: (22:41)

I'm trying to be good. Hey, I am at 50, I'm trying to turn on a new leaf, you know?

Speaker 1: (22:45)

Oh God. I hate to hear that , that saddens me more than you'll know.

Speaker 2: (22:50)

Oh, well. No, I'm just kidding.

Speaker 1: (22:52)

Good. Thank you. Thank you. Finally, I can, there you go.

Speaker 2: (22:58)

Now you know, it's the real me. Um, so, uh, Lately, so I make up words, I'll make up hyperbole, like holy hot pickled jalapeno peppers, you know, stuff like that. And Lately will start to replicate that content. It'll start to insert that into my posts because it knows that's how I roll. Right. Um, it'll do things like, if instead of want to, you say wanna a lot, it'll start to replicate that, but you have to teach it. It's not gonna guess out of nowhere. Right. So, and you

Speaker 1: (23:34)

Teach that, teach it by editing the posts.

Speaker 2: (23:37)

Yeah. So there's this if-this-then-that scenario. So you are getting the post raw, as we think it should be. And then if you make any edits, the AI is going, ooh, ooh, okay, great. And then if you actually publish the post that you, um, edited, it's like, okay, yes. And then if that post does well, it's got, like, a ton of information, and it's gonna come back and, like, again, either go to you, nudge you along to do those things, or, um, it'll start making the changes on its own as well.
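[Editor's note: the chain of signals described above, a draft getting edited, then published, then performing well, could be sketched as accumulating evidence for a style preference. The signal names, weights, and threshold here are illustrative assumptions, not Lately's actual mechanism.]

```python
# Hypothetical sketch of the edit -> publish -> performance feedback chain.
# Signal names and weights are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class StyleLearner:
    # How strongly each observed phrasing change is believed to be "the user".
    confidence: dict[str, float] = field(default_factory=dict)

    def observe(self, change: str, signal: str) -> None:
        """Accumulate evidence; stronger signals add more confidence."""
        weight = {"edited": 1.0, "published": 2.0, "performed_well": 4.0}[signal]
        self.confidence[change] = self.confidence.get(change, 0.0) + weight

    def should_auto_apply(self, change: str, threshold: float = 5.0) -> bool:
        """Only start making the change unprompted once evidence is strong."""
        return self.confidence.get(change, 0.0) >= threshold

learner = StyleLearner()
learner.observe("want to -> wanna", "edited")          # user edited the draft
learner.observe("want to -> wanna", "published")       # ...and published it
ready_early = learner.should_auto_apply("want to -> wanna")  # not yet
learner.observe("want to -> wanna", "performed_well")  # ...and it did well
ready_now = learner.should_auto_apply("want to -> wanna")    # now it acts
```

The threshold mirrors the point made a few turns later: the system nudges and asks before it "makes crazy moves" on its own.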

Speaker 1: (24:13)

Okay. And if the post does not do well, then it's gonna think that the human that edited it doesn't really know what it's doing. And it, it'll go back to doing what it was doing before.

Speaker 2: (24:27)

It'll give you opportunities to check it before it makes crazy moves, you know? So, um, and I think, I don't know which product you're in, but if you're in the enterprise product, you actually see, um, word clouds of the ideas and words that are resonating with your customers. And so you have a second opportunity to train the AI there. And literally like right or left, click on the words and be like, yes, no, yes, no. Oh yes. No. Cool. So it's, um, you know, kind of more, and also with enterprise, you can actually feed it tons and tons of, um, information. So if you have a, a key messaging, um, scenario, which a lot of larger companies do, you can literally just ingest that into the brain and it'll go like, oh, I know. You know? Right,

Speaker 1: (25:16)

Right, right. So I have, I think I have an interesting example along those lines, of AI not really being able to take over as being human, looking specifically at Lately, at the, at the product. When you upload a video for the AI to listen to and transcribe, and, um, one of the people in the video speaking has an accent,

Speaker 1: (25:52)

It seems like it's a little tricky sometimes to pick up some of those words, because of the accent, that a human listening has absolutely zero problem with. But

Speaker 1: (26:07)

It, it, it might be, I mean, I had a person on not long ago who, um, spoke Spanish. Mm-Hmm. And I will say that I was leaning forward and very focused, sure, listening to him speak, because several things early on I didn't really catch. And I knew that the transcript was gonna be weird when I was creating my captions. Yeah. And I said, oh, you better start listening so that you can make those corrections. So the AI, um, had a similar problem.

Speaker 2: (26:49)

Well, I'm gonna throw, I'm gonna throw Google under the bus because that's Google translate that we use. It's not actually really,

Speaker 1: (26:56)

But it was English. Yeah. He was speaking English, just with an accent.

Speaker 2: (27:02)

We use Google on your product. And then on the enterprise product we use Temi.

Speaker 1: (27:06)

Oh, interesting. Yep,

Speaker 2: (27:08)

Yep. So, um, there's secrets from behind the curtain. Yeah.

Speaker 2: (27:13)

Yeah. Um, and that is really frustrating. I mean, you know, another kind of interesting thing there is we have customers who want to use Lately in other languages, and it does work in other languages, but you have to have a whole separate account, because the words and the patterns are different in another language. Right? Yeah. Even if you're talking about the same thing, even if it's translated word for word. Right. It has to be another account. Um, and that's hard for people to understand, um, because they don't know how it works, you know? And, um, that, I think, is something that won't change at all, by the way, is that understanding how AI works will be a question. People are very curious, which is great. I mean, they should be. Yeah. And I think that, I don't know what the larger companies are doing, but I find that the more we explain how, how our stuff works, the better adoption it is, the better understanding, the better acceptance it is. Um, and I had to learn, as you can tell, to talk about other AI a little bit also, and educate myself on, like, the history of generative AI and all these things that, like, I don't really care that much about, to be honest with you. But, like, people expect me to know a little bit about it so that I can kind of connect the dots from how we got from there to here, you know?

Speaker 1: (28:40)

Right. And so that you can make comparisons.

Speaker 2: (28:44)

Yeah. I mean, that's the

Speaker 1: (28:45)

'Cause if all I know is ChatGPT, and I'm looking at Lately, you're gonna have to tell me how they compare,

Speaker 2: (28:54)

Yeah. Or

Speaker 1: (28:55)

Something along those lines.

Speaker 2: (28:56)

We actually had to put ChatGPT as an option into Lately, because people wanted to be able to also prompt content. And so, I mean, we were in the closed beta of ChatGPT, so it took us, like, I think it took Brian an hour to do that. Um, instead of, like, fighting it, instead of just always saying, well, that's not what we do, we're like, well, we can also do that. It's fine.

Speaker 1: (29:19)

Right? Yeah. It's fine. Right, right.

Speaker 2: (29:21)

Um, so

Speaker 1: (29:22)

All right, so before we uh, run too long, tell me what's next for lately? Where are you going? What's on the roadmap?

Speaker 2: (29:31)

Yeah. Well, I'm really looking at, um, not sentiment, but like the why behind the words, why do some words resonate versus others? And we've been talking to our friend David Allison, who I, I'm not sure if you know, you might, you know David from around around the spot. Yeah.

Speaker 1: (29:49)

David was actually on the Digital Slice podcast. Oh,

Speaker 2: (29:52)

Great. So you know him well. He's wonderful. He is wonderful. Um, so we're, you know, so Valuegraphics are very interesting to me. Why do people do what they do? And this is what David has identified, and he consults the United Nations, um, in a way that understands how large groups of people are divvied up, not by demographics, but by motivation. And maybe it's wanting to belong to a community, or saving the environment, or making more money. That's a great motivation. But if you can understand why people make those decisions, then you can market to them according to those needs. Right. This is, right, important stuff. And so we've been talking to David and trying to figure out how to, um, integrate that data with, with our product. Because, and no one is thinking about this, by the way, like, the reason I started thinking about it, um, Brad, is because I kept asking, I've, I've continued to ask customers and other people, like, what, what's the biggest value from generative AI for you? Like, what do you want out of it? And they say, save time. They don't say make money.

Speaker 1: (31:05)


Speaker 2: (31:06)

Right. They don't say be more effective, which is so interesting to me. Um, that's gonna come this year, you know, because save time is only so great after a while. Right, right. And, and saving time isn't making money. Just think about that, because the person using it is usually, like, in our, in our world, like a digital marketer or a very junior level person. So, great, they're saving time. But the CEO is not really interested in that, you know,

Speaker 1: (31:38)

Unless it's making money.

Speaker 2: (31:39)

Right. Unless it's making money. Exactly. Right. Um, so my idea was that a partnership with, or, or some kind of way to take David's data sets and integrate them into the product, can help people understand that why, and, um, kind of give them better insights. I mean, the problem is, how do I know what to write, and how do I know if it's working? That's the problem, constantly.

Speaker 1: (32:12)

Right. And that leads to the problem of, do we get to the point where we trust the AI? Do I look at a suggested post for X? I pause because I always wanna call it, yeah.

Speaker 2: (32:28)

What is it?

Speaker 1: (32:29)

Twitter? I have to think, what am I calling it today? If I look at a post that lately wrote for X and I go into to change it, do I need to pause for a moment and think, well, maybe it knows better than me and I should just leave it.

Speaker 2: (32:51)

I think you should always go with your human gut, unless you're a complete moron. But, like, I think, that's my caveat.

Speaker 1: (32:59)

I've been accused of that before. Have you met my wife?

Speaker 1: (33:06)

You're so funny. I'm not joking. The good news is she never listens to the podcast. So I am safe with whatever.

Speaker 2: (33:16)

That's great. That's great. Right? It'll be our secret then.

Speaker 1: (33:20)

Oh my God.

Speaker 2: (33:22)

So funny. Yeah. I, I would always go with your gut. And, and the reason is, is because you have information that the AI still can never have, on our side. Right. You always will. Mm-Hmm. Um, let me give you an example. When I was working with Walmart, we were promoting, um, free tax prep for this software that they were involved with. Super boring. Um,

Speaker 1: (33:52)

It's that time of year.

Speaker 2: (33:53)

At that time of year, it is true. And one of the acronyms that we were using was called VITA, Virtual Income Tax Assistance, I think that was it, because this was new. It was a while ago, when you could do your taxes online. It was just happening. Okay. And another, uh, coincidental kind of thing was rising in the market, and it was this vegetable mixer called a Vitamix. Hmm. So there was VITA and Vita, and we could see our hashtag of VITA trending really high, and we had to drop it because it wasn't related to, I mean, you know, tax prep was not trending. Okay. Right. Imagine that. Not as much as, you know, you might even this time of year. So that's a thing that the human figures out. Right,

Speaker 1: (34:44)

Right. Interesting. Yeah, that's a great example. Yeah. Yeah. I mean, and I guess as humans, we could also be looking at our analytics and thinking, well, I changed that and it didn't perform very well. Yeah. Maybe next time I won't change it and see how it performs, or I don't know.

Speaker 2: (35:10)

Here's a secret for you, which is, across the board in social media platforms, the least used feature is analytics.

Speaker 2: (35:22)

That's how much people care about saving time and not getting results. I mean, really? Yeah. Yep. We're doing something about that this year. I've got ideas,

Speaker 1: (35:32)

I wish I ever had a client that didn't care about the analytics, because they're always wondering, you know, well, what's my ROI?

Speaker 2: (35:40)

Yeah. I think it's because they, well, it's 'cause they can't put two and two together. People can see the analytics, but they don't know how to translate them into a to-do, into an action item. Another thing I'm working on solving. Um, and these are people at very large companies and very small companies. It has nothing to do with, um, intelligence, to be honest with you. Like, or money. It's just something that people don't automatically put together. Right. In a way. Yeah. It's my name. That's my gift. I know how to do this. Right. And so, um, that's my crutch word. Right. I have to stop saying that. That's the DJ in me catching myself.

Speaker 1: (36:21)

What's your crutch word?

Speaker 2: (36:22)

Always confirming, confirming at the end of a sentence or a statement with the word "right."

Speaker 1: (36:28)

Oh, okay.

Speaker 2: (36:28)

If I were, if I was gonna, I'm always into crutches, and it's important. Yes, a lot of people say, you know, "like," um, "like,"

Speaker 1: (36:39)


Speaker 2: (36:40)

I did it again. Jesus

Speaker 1: (36:42)


Speaker 2: (36:44)

If I had a transcript, I would really see it, and then I'd be, I'll

Speaker 1: (36:47)

Edit it out of the whole thing. Okay, good. Yeah. I'll, I'll delete that word instead.

Speaker 2: (36:50)

I'll, I'll start to say wrong.

Speaker 1: (36:52)

Kidding.

Speaker 2: (36:54)

I'm kidding.

Speaker 1: (36:58)

All right, Kate. Give me one or two takeaways you want people to take away from our discussion today.

Speaker 2: (37:05)

Takeaways. Um, I will give you one takeaway, which I love. We were talking about metaphors and the value of humans and acceptance and AI and fear. Um, Betty Crocker, who we all know. Cake in a box, which I love; I used to buy a lot of that in college, and devil's food cake is my favorite one. When they invented cake in a box, they were marketing it to housewives at the time, and it didn't sell at all, because the housewives felt this was totally weird. All you did was add water, and they had no ownership. They didn't feel like they made or baked anything. And then Betty Crocker pulled out the powdered eggs, and now you had to add an egg. And the slogan was "Just add an egg." And suddenly the sales skyrocketed.

Speaker 1: (37:59)


Speaker 2: (38:00)

Mm-Hmm.

Speaker 1: (38:02)


Speaker 2: (38:03)

Yes. So that, under the idea of ownership being part of the process, collaborative AI, humans in the loop, is, uh, again, about making money. Right?

Speaker 1: (38:16)

Yeah. Yeah. Very cool. So for those who don't know, give us a brief, um, summary of Lately, what Lately does and who it does it for.

Speaker 2: (38:30)

Thank you so much. Lately learns your voice by studying your analytics on social media. And it also learns the patterns of when you write well, what goes into that, really thinking about your unique target audience and what will make them specifically click, like, comment, and share. And once it has this model for you and you alone, or your brand, we're able to then transform long-form content into high-performing social posts. So that could be text, like blogs, press releases, et cetera, or videos, like this show, podcasts, or audio files. And what it's doing is it's digesting that long-form content with your model, and it's gonna atomize it into dozens of promotional pieces, with the highest-performing, um, sort of content within, within, um, the posts that it's surfacing for you. And we can fully publish it for you. It's a publishing, scheduling platform, et cetera.

Speaker 2: (39:31)

And we work with two types of clients. We have a self-service product for our SMB clients, and we have a very powerful, complicated, fun product for our enterprise clients. It's actually pretty easy, but, you know, there's a lot to it. So, um, it's really been a ride, Brad. I mean, I was writing on LinkedIn when I turned 50 that I turned 40 when we started Lately. So it's been a decade. Mm-Hmm. A decade of this. Mm-Hmm. And we started Lately in 2014, which was the year that generative AI was born, by the way. Just, you know. And, um, it's been quite a ride, and I'm just sort of astonished that we made it here to

Speaker 1: (40:21)

This moment. Yeah. 10 years is quite a feat.

Speaker 2: (40:24)

Yes. We've been waiting for this particular moment to, to be ours. So, um, we're about to close a funding round, which is very exciting. And, uh, 2024 is gonna be a really, um, momentous year for us.

Speaker 1: (40:38)

Well I am, uh, I am pleased to be on the ride with you.

Speaker 2: (40:42)

Thank you so much. I love you.

Speaker 1: (40:43)

Yeah, I'm so happy. Um, where's the best place for people to find you online?

Speaker 2: (40:50)

They can get me on LinkedIn, that's the best. So Kate Bradley, I think I'm just Kate Bradley on LinkedIn. Um, but of course Instagram. I'm on all the places you are; we're friends everywhere. And, yeah, Lately is dub dub dub dot... and we're nothing if not friendly. Right. I know. You know that for

Speaker 1: (41:08)

Sure. Absolutely. And I'll put all those links in the show notes. And thank you, thank you so much for joining me again today. I can't tell you how much I appreciate it.

Speaker 2: (41:19)

Thank you. It's really good to lay eyes on you. And I've missed you, and I've missed a lot of my friends. It was a hard year, and I backed away from doing this a lot. And, um, I'm looking forward to it, because it means something to me, you know? This is,

Speaker 1: (41:34)


Speaker 2: (41:34)

My fuel.

Speaker 1: (41:36)

Well, welcome back.

Speaker 2: (41:37)

Thanks my friend. I love you so much. Take care. Take care. Cheers.
