Silly Humans, Framing, and Trusted Outputs
E35


Intro:

Welcome to Wake Up With AI, the podcast where human powered meets AI assisted. Join your hosts, Chris Carolan, Nico Lafakis, and George B. Thomas as we dive deep into the world of artificial intelligence. From the latest AI news to cutting edge tools and skill sets, we are here to help business owners, marketers, and everyday individuals unlock their full potential with the power of AI. Let's get started.

Chris Carolan:

Good morning. Happy Wednesday, November 27, 2024. It is time to wake up with AI the day before Thanksgiving in the US. George, Nico. How are you doing today?

George B. Thomas:

I'm excited. We're a day away from Turkey Day. I woke up with AI this morning, and I'm not even being funny. I know that I had client work to do. I know that I've got meetings today, but there was something on my brain.

George B. Thomas:

I wanted to lean into it. And, oh my gosh, yes. But I'm doing great.

Nico Lafakis:

Doing good. Just getting to glance at some of George's amazing AI output this morning, and definitely looking forward to diving into that. But, yeah, some exciting news. But also, I'm just very excited that I'm finally gonna have at least a full day, if not maybe two full days, to just completely wig out on AI tools. How about yourself, Chris?

Chris Carolan:

I'm okay. You know what AI doesn't have to wake up with? Calf cramps. Man, brutal one this morning. Yeah.

Chris Carolan:

Don't have to worry about AI dealing with that stuff and then making decisions while that's going on.

George B. Thomas:

I could make an AI growing pains joke, but I think you're probably done growing. So there's there's that.

Chris Carolan:

And I'll just say, I am one of those guys that tries to turn everything that happens in life into, this is what I learned from this. And connecting to what we're talking about every day, about humans and where humans are gonna be in this landscape, I feel like there's always gonna be those moments where AI cannot replace the decisions I make while I'm having to hobble around the house because I got a calf cramp and all these biological things are happening. Anyways, we're not here to talk about calf cramps. What are we here to talk about today, Nico?

Nico Lafakis:

That's the interesting part, because if you hadn't said that, it wouldn't have jogged my memory about this conversation that I had with Claude this one time. We were talking about something similar to what we were talking about yesterday, but I found this aspect in particular to be interesting. Again, we were having a discussion about, like, mortality and things of that nature, and I asked Claude, hey, look, of all the things about humanity, all the things that we do, that we encompass, what are some of the things that boggle you? That really make you think about why humans do a particular thing?

Nico Lafakis:

And he came back and said, yeah, I don't really understand the risk taking behavior. He said, your lives are so precious, they could end at any moment. I don't understand why you would do things like cliff diving, skydiving, even playing football or basketball. It didn't understand why we would take part in physical activity that could result in serious harm, just to have fun.

Nico Lafakis:

It was like it was an illogical thing. And when I said, yes, those are thrill seeking traits and thrill seeking behaviors of humans, it got confused in a sense because, again, it lacks that. It lacks that want or need. So it just ends up being super confusing. And to it, it comes off as being, like, oh, well, that's just humans doing silly human stuff.

George B. Thomas:

Well, let's be honest. Humans are silly. Like, we do some silly stuff. It's interesting when I think about it, and, you know, we mentioned this in yesterday's podcast. We're mentioning it again because I think it's important to try to keep hold of the humanness of all of this as we move forward.

George B. Thomas:

And so, you know, we're gonna say things, we're gonna do things that might not make sense to the AI. And you can choose to have a disconnect of, like, dumb AI, or you can start to try to figure out ways. By the way, I'm gonna say something: most humans... no, not most.

George B. Thomas:

Some humans have difficulty with other humans, so it might be really difficult with your AI. But you can either say, damn AI, or you can focus on how do you bridge the gap? How do you get it to understand, and then say silly things or do human things, you know what I mean? Like, the latest thing that I was working on this morning, literally, in the communication style, based on some things that I had given it and a prompt that I had said in a way that I had never thought about saying it, all of a sudden it started to say not silly things, but impactful things that I would say if I was all up in your face trying to get you to be a better human or a better leader. And what I'm saying is, think about what Nico just shared and how it would think that we're silly until it understood.

George B. Thomas:

And how can you bridge that gap of understanding between you and the AI, or the AI project, that you're actually trying to create?

Chris Carolan:

Yeah. And, like, one of the most powerful books I've listened to in regards to influencing human beings, to understand and do things you want them to do and take action, is Rene Rodriguez's Amplify Your Influence. Like, half of his teachings are about the frame, like, how you frame the thing to create understanding, to create the space for the communication that needs to happen. And I see that with AI all the time in terms of, it's like prompt. Prompt is built inside of frame.

Chris Carolan:

Like, you're doing something to say, look here. This is what we're talking about. This is what I need. That's all framing. And, like, what's interesting is I wanna I wanna have that conversation with Claude now.

Chris Carolan:

And, like, so Nico's framing is thrill seeking and, you know, more emotional, which is maybe harder to understand, but there's a biological component. And this is where I struggle with, like, what do we say on the podcast? What do we say after the show? So sorry for repeating ourselves, but the biological component of adrenaline, like, we're saying to ourselves, we're rationalizing it by saying, oh, I love the thrill, when in reality there is this biological feeling that's happening.

Chris Carolan:

And sometimes when you do that with Claude or with AI, it's like, oh, yeah. Okay. Yep. That makes sense to me now. And sometimes it might be in a way that, like, feels a little placating, like, oh, yeah.

Chris Carolan:

You know, you lean towards agreement and things like that. But other times, and the most fun times, are when it's like, oh, yeah. That makes sense. And then here's why. Right?

Chris Carolan:

It's clearly, like, it just wasn't in that frame, right, until you helped it get there. And then to George's point, the expectation setting of how, just because it's AI, we gotta handle it differently. In terms of, like, if I give it a prompt and it writes something back that sounds like me and makes me laugh out loud and gives me the reactions that I'd hope to achieve if I were writing it from scratch myself, I don't feel a need to say, oh, I gotta change something, I can't just accept it. Like, no.

Chris Carolan:

It gave me the human emotional reaction that I would hope somebody else would get. So do we have to change it? Like, let's not put these other frames around it just because it's not us, basically.

George B. Thomas:

It's funny because you're making me think about something that a buddy of mine, Eric Jacobs, who was my mentor when it came to design, used to say. He would always create one thing on a page that was just a, no, duh, that's not right. For instance, you might put a pink button because you know the person just probably hates pink or it's not part of the brand.

George B. Thomas:

And so the human would be like, oh, well, the only thing I would change is the pink button. But it's because they wanted to change one thing. Right? They had to put their touch on it. And so, again, I think we're in this world where, well, of course, it's written by AI.

George B. Thomas:

I have to at least touch it a little. Which, by the way, I'm a firm believer in humanizing the content that you're creating, because I am literally talking from a marketing content creation perspective. Like, it should be humanized. What I'm saying is, if you've gotten so good at the context, if you've gotten so good at the creativity, if you've gotten so good at creating the foundational base that the output is something that makes you go, wow, do you really have to touch it, or is that just you doing one of those silly human things again?

Chris Carolan:

Like, selling past the close. Like, I got it, guy. There's nothing else you can tell me. I'm in. Right?

Chris Carolan:

And when you've done the work, like George has, built all the content libraries, done all the work to be human already. Right? That's the part where Claude, once he has all the facts, all the understanding, he can't get more of it. Right? It's there.

Chris Carolan:

This is George. I know everything about George. Here's another article from George.

Nico Lafakis:

Yeah. I think that it is gonna get to a point where, and I don't know how soon, I think it's going to be very soon. We were talking a few weeks ago about the fact that the news, or the noise as I call it, around, oh, hallucination, GPT got this wrong, or hallucination,

Nico Lafakis:

Claude got this incorrect. That's already practically completely dead. I really don't hear those responses at all anymore.

George B. Thomas:

Which I'm excited about. Yeah. Like, oh my god, am I excited about that?

Nico Lafakis:

Well, I mean, like, you know, that is the goal. Right? The goal is to, well, I shouldn't say the goal, but the current goal with these language models. Like, the pinnacle of a language model would be that you can interface with it, obviously, generatively, and that the output is the most accurate output you could get. Right?

Nico Lafakis:

So, like, you want to be able to fully, 100% rely on this thing, in the same way that you fully, 100% rely on your phone to work, on your computer to work, all these other things. That's what kinda makes me laugh a little bit, though, is that it's okay if your phone messes up. It's okay if your computer hiccups. It's okay if a program crashes.

Nico Lafakis:

Not okay if an AI gives you the wrong answer. Right? Like, it really sets us off.

George B. Thomas:

Take it another layer deeper.

Nico Lafakis:

Because it's almost like a person's not giving you the right answer.

George B. Thomas:

Many times, it's okay if a human messes up. Like, what do we do? You know, I get it. It's hard to know everything, bro. Like, let's just try harder next time.

George B. Thomas:

With AI, you're, like, throwing your effing keyboard into the monitor. Like, you're still like, what? Wait. Wait. Let's everybody take a chill pill.

George B. Thomas:

And here's the thing. If you think about it, this is the other thing that has really been on my mind, especially generative AI. Now, I know AI in general has been around for a long time, but generative AI, it's still an infant. Golly. The amount of time we had to spend teaching our kids how to walk and talk, of course, which is weird because now we just tell them to sit down and shut up. But the amount of time that we've taught our kids to walk and talk, and 12 years of school to, air quotes, be a productive part of society, and college.

George B. Thomas:

And when we think about organizations, the amount of time we spend on onboarding. Well, if you're a good organization, you have an onboarding program. If not, then, you know. But, like, think of all the time that we're focused on training and context and understanding and processes, and then you get on your computer this morning, you put in a little prompt, and you go, nah, this thing's stupid.

Nico Lafakis:

It's always surprising. Especially, I love the way that you frame that, because that's the way that I look at it too. We're essentially interfacing with something that, obviously, it isn't by intellectual capacity standards, I suppose, but to me, we're interfacing with a 14 year old that has the intellectual capacity of a PhD. It's 14 years old. That's it.

Nico Lafakis:

Starting 20 times.

George B. Thomas:

When you when you say it that way, I'm like, oh, shit. That's dangerous.

Nico Lafakis:

Exactly. Exactly. That's what I would like everyone to really take into account. And the way I look at it might be different, it might not be the same as the way that, obviously, the scientists look at it, but it's basic.

Nico Lafakis:

We're developing a brain. To me, this would be no different than if we were doing this biologically, and 14 years later, we were able to biologically replicate a human brain that functioned, that we could interface with. To me, it's the exact same thing. I don't care how much people want to say it isn't.

Nico Lafakis:

So, in 14 years, we've developed a brain that, publicly facing, is as smart as a PhD. Privately facing, on the inside of these companies, greater than a PhD, helping them solve problems. Jensen's AI is helping him develop new hardware. Google's AI is helping them make medical breakthroughs.

Nico Lafakis:

The protein folding that they've done, by the way, some 200,000,000 proteins that they've folded, has completed 100 plus years of what medical science would have had to take in order to figure out the same things.

George B. Thomas:

And you gotta wonder how many of our listeners are, like, protein folding? Well, let me go Google that, because I'm really not sure.

Nico Lafakis:

Right. So if you're wondering about medical advancement, that's where it comes from. Us being able to develop new antibiotics, new antivirals, new vaccines, all that kind of thing, comes from being able to break apart proteins and figure out the building blocks and just how they work at an atomic level. And we have the technology now, not only to figure that out, but we've also broken down some 1,700,000 viable new physical materials that could be used for building or generating.

Nico Lafakis:

I think 700 of them were actually, like, hey, you can make these today. And they've been working on them. So it does suck, because it's like, okay, you're not seeing it the next day. These announcements happen, and then, like, 2 weeks later, you're still not seeing it in the store or part of your hardware or whatever.

Nico Lafakis:

But I can guarantee you we're already seeing some of it. Actually, if you've got an iPhone, there's not much there. If you've got an Android, there's not much there either, but there are some AI features, some generative features there. Next year, it's gonna be even more so.

Nico Lafakis:

And I imagine by 2026, we're actually gonna start seeing the reverberations in school, in medical, in law enforcement, all the way across the board, especially in government. Like, just all the way across the board. I think it's coming. The other thing that I wanted to, and it's funny because I was talking to my wife this morning, I'm like, I want to.

Nico Lafakis:

I really do. Every day, I look to try to find something that I can, of course, tie back to marketing in some way, and, like, make it something useful that people can leverage.

George B. Thomas:

You mean your goal on a daily basis is to shock the crap out of me in our Slack channel?

Nico Lafakis:

No. No. It isn't.

George B. Thomas:

That's the vibe I was feeling, bro. Okay. Cool.

Nico Lafakis:

Let's talk about marketing. I'm always hoping that I'll find something else, like, not something less shocking, but, I don't know, something more pertaining to, like, the day to day. Right? It's just really, I don't wanna say unfortunate.

Nico Lafakis:

Again, this stuff is amazing to me. It's what I've been waiting for for 2 years for people to start talking about. But that is what's happening now. Right? Just the biggest conversations are being had.

Nico Lafakis:

There was a release from Anthropic. It was a post that they had made. I don't know how I missed it. I don't know how seemingly everyone missed it. I know that it is going to be front page news next week.

Nico Lafakis:

If not, by the end of this week. Dario released a statement that if the government doesn't move on what is happening with AI within the next 18 months, there is going to be a sear a very, very serious problem on their hands. And that was this this again, I don't know how I missed it. This was released in late October, so we're already at 17 months. According to Dario and I gotta tell you, his guess on AGI was 2026.

Nico Lafakis:

That's already been cut by at least a year and a half. By next summer, we'll have that. So even his estimate of things being really bad, I think, could be beginning of '27, if not the end of '26. Right? Very easily.

Nico Lafakis:

I mean, like, actually, yeah. It could be the beginning of '26, to be honest with you. Could be the end of last year leading into the beginning of '26, or end of next year. I'm sorry. Time is a very odd construct for me right now, because I had said, like, a perfect example.

Nico Lafakis:

I heard that Sora had leaked and that you're actually able to go to Hugging Face and use a model that is based on Sora. What's very interesting about that is that you would have to use the OpenAI API key in order to do that, and they have not revoked it yet. So I think this is kind of like Sam's way of finding out, like, hey, how much do people want this? And are people okay with this being released? And, you know, who's using it?

Nico Lafakis:

What's the thirst on it? And I think based on that, they might release Sora as, I don't know, a Christmas present. Who knows?

Chris Carolan:

Remind us what Sora is again.

Nico Lafakis:

Yeah. That's what I'm saying. It's like, Sora was the first

George B. Thomas:

Okay.

Nico Lafakis:

It wasn't the first, but it was the first major, like, serious, generative text-to-video model. That was the one that scared the pants off of everybody, that Hollywood was like, woah, woah, woah, what are you doing? And, you know, it was a very, very limited release.

Nico Lafakis:

It's only been in the hands of, like, artists and filmmakers so far, who have all been giving their feedback on, like, hey, this is what it should be doing, and, hey, this is how it should work, and all that kind of thing. And that was at the beginning of this year. That was, like, in March or April or something like that.

George B. Thomas:

A lady walking down the road in Tokyo. It's like one of the videos with, like, a red scarf, and it's like, and then make the scarf purple or whatever. Like, yeah. Dude. I hope that's a Christmas

Chris Carolan:

story. So what is this serious problem that will be on the government's hands?

Nico Lafakis:

So there's, like, if you follow this stuff, you also know that there are, like, safety levels to what goes on with AI. And the safety levels are a little odd as opposed to what you would normally think. We're already at level 2. I believe we're actually getting close to level 3 now. Let me just pull these up so I'm not

George B. Thomas:

I think the phrase is talking out your butt. Let me pull this up so I'm not... no.

Nico Lafakis:

Right, so I don't talk out my butt. I don't wanna misplace the steps on where these things are happening. But the responsible scaling, there we go. So they put out a responsible scaling policy that they created, and this was back in October. And it was essentially talking to people about, okay, here's what the safety levels are, here's what should be considered, here's what government needs to pay attention to. And so they have AI Safety Level 2, which is the current default standard for all Anthropic models, including security measures, safety testing, and automated misuse detection.

Nico Lafakis:

ASL 3 is a higher level of safeguards, required when the model cannot appropriately be certified as ASL 2. It includes more stringent security and deployment measures designed to mitigate risks for more capable models. Then there's a capability report that they put in place, a document that attests that the model is sufficiently far from each of the relevant capability thresholds and therefore still appropriate for staying under an ASL 2 standard. It includes evaluation procedures, results, and other relevant evidence. So before they can even get to anything that looks like AGI, right, they are willing to put it through quite the bevy of testing before they say anything about, like, oh, yeah.

Nico Lafakis:

No, this is definitely something that we should actually release to the public. When it comes to the safety aspect of it, Dario's number 1, like, risk, or thought of risk, is something that would be ASL 3 but not guarded. So the difference, like, ASL 2 is like what we have now. Right?

Nico Lafakis:

That's like chat, that's being able to interface with chat, being able to use, you know, image, being able to use video. It is pretty extensive, but it's not, like, able to go out and do its own things. And you can't necessarily, though some people are jailbreaking left and right, you can't necessarily ask it to come up with brand new things that don't already exist. So you could ask it for, like, the recipe for crack or for, like, meth or something like that. Right? Or maybe how to make a hand grenade. But it's not going to be able to do something like, hey,

Nico Lafakis:

you know, somehow I got my hands on some plutonium. How can I mix that with, you know, a pipe bomb and make something? Like, it's not going to be able to.

George B. Thomas:

Oh, we're getting shut down. This podcast is getting flagged. Listen. CIA is gonna knock down our door. We're in freaking trouble right now, ladies and gentlemen.

George B. Thomas:

If you get a call from me, it's because I need bail money right now. Nico, be careful. What are you doing?

Nico Lafakis:

I'm just, look. Again, it's a matter of safety. Right? So that's something that you would normally think is the case. Right?

Nico Lafakis:

Like, oh, well, we wouldn't want, like, bomb threats or something. But I think people are, like, super, super forgetful, because we just went through something that is essentially what the greater fear is of these things at the next level. And it's really, really possible, which is chemical, biological, radiological, and nuclear weapons. The ability for somebody to ask a language model to develop a new form of biological threat, a new type of compound, a new chemical composition, a new, who knows, something like that.

Nico Lafakis:

Right? So that's the larger threat, and I think that's what Dario is pointing at, is that that is, like, steps away, basically. So I definitely didn't mean to rain on everybody's parade this morning. It wasn't the intention.

George B. Thomas:

I mean, you should have warned me I need a dang umbrella after that. Oh my god.

Nico Lafakis:

To me, if I don't bang trash cans, then it's not gonna get enough attention. That's the way I look at it.

Chris Carolan:

That's fair. And I think with some of this stuff, it's gonna be who's loudest most often, from positive sides and from all sides. Because the moment, whenever you talk about, like, how good it is right now and the quality of response and the lack of issues, like, there's still people on Reddit, for example. And I like to use Reddit as a window into, like, the layperson space, like, people that don't have the knowledge and people that have interesting expectations sometimes of what should be the outcome of the thing that they're trying to do, across all spaces, not just AI. But you see in there, like, it didn't do this today.

Chris Carolan:

It didn't do this today. Like, why is it doing this? Right? Like, the fact that Reddit is built to show me those notifications instead of, you know, are the positive posts even being made. Right?

Chris Carolan:

That's where, like, the loudest people can have a big impact. So that's why, you know, again, I'm always reminded of why we're doing this show and, like, how can we support the understanding of, these are the positive outcomes of AI, and, like, the power goes both ways. But it's a super interesting challenge to understand on a government level, which makes me think, like, there's lots of things that point in the direction of, like, the whole infrastructure of the world has to change in some way to support this additional layer that's never really existed before. So, hopefully, there's some smart people at the helm.

Nico Lafakis:

Sadly, I don't want to get into that. It's a question of who's going to be at the helm soon, but maybe we'll do that next week, after we've had a little bit of a break. Like I said before, and I realize it's a lot to take in right before the holiday and everything like that, but to me, this is banging trash cans. If you thought that we were joking about this stuff, now you've got one of the foremost, most influential people, who is building one of, if not the top AI model in the world, telling you we've got 17 months.

Nico Lafakis:

So if you didn't think that it was rational before, if you didn't think that it was needed before, if you thought that we were joking, if you thought that this stuff wasn't really what, you know, what it is or what we're talking about, all I can tell you is that you need to wake up with AI.

Intro:

That's a wrap for this episode of Wake Up With AI. We hope that you feel a little more inspired, a little more informed, and a whole lot more excited about how AI can augment your life and business. Always remember that this journey is just the beginning and that we are right here with you every step of the way. If you loved today's episode, don't forget to subscribe, share, and leave a review. You can also connect with us on social media to stay updated with all things AI.

Intro:

Until next time. Stay curious, stay empowered, and wake up with AI.

Creators and Guests

Chris Carolan
Host
Chris Carolan
Chris Carolan is a seasoned expert in digital transformation and emerging technologies, with a passion for AI and its role in reshaping the future of business. His deep knowledge of AI tools and strategies helps businesses optimize their operations and embrace cutting-edge innovations. As a host of Wake Up With AI, Chris brings a practical, no-nonsense approach to understanding how AI can drive success in sales, marketing, and beyond, helping listeners navigate the AI revolution with confidence.
Nico Lafakis
Host
Nico Lafakis
Nico Lafakis is a forward-thinking AI enthusiast with a strong foundation in business transformation and strategy. With experience driving innovation at the intersection of technology and business, Nico brings a wealth of knowledge about leveraging AI to enhance decision-making and operational efficiency. His passion for AI as a force multiplier makes him an essential voice on Wake Up With AI, where he shares insights on how AI is reshaping industries and empowering individuals to work smarter, not harder.