Love, AI Ethics, and Digital Intelligence
E34


Intro:

Welcome to Wake Up with AI, the podcast where human-powered meets AI-assisted. Join your hosts, Chris Carolan, Niko Lafakis, and George B. Thomas as we dive deep into the world of artificial intelligence. From the latest AI news to cutting-edge tools and skill sets, we are here to help business owners, marketers, and everyday individuals unlock their full potential with the power of AI. Let's get started.

Chris Carolan:

Good morning. Happy Tuesday. November 26, 2024. It is time to wake up with AI once again with George and Niko. How's it going today, pal?

George B. Thomas:

Chris and Chris. Let's throw that in there too. There's 3 minutes.

Chris Carolan:

Yeah. I'm here too. How are we doing today, Niko?

Niko Lafakis:

Doing well. Doing very well. Love that. Woke up this morning and, you know, George is bringing a lot to the table. Actually, something that's very, very cool, which was a demo of Claude Desktop, just to be able to show, like, how that works. But, yeah, some pretty cool news, and doing well.

Niko Lafakis:

How about yourself?

Chris Carolan:

Doing good. Putting it into practice every day. Somehow wishing I could do more. I feel like at this point, minutes not spent with AI or other humans don't create as much value, just straight up. But it's fun. How about you, George?

George B. Thomas:

Yeah. I'm doing good. 2 days before Turkey Day as we record this. So looking forward to some time off and relaxation, and it's interesting to hear you say time spent with AI or humans. You know?

George B. Thomas:

Like, my goal on Thursday might be to spend 0 time with any... no, I'm just kidding. I'm just kidding. That wouldn't be a very human thing to do, would it?

Chris Carolan:

I mean, actually, it'd be super freaking human, dude. Like, just be like, alright.

George B. Thomas:

Maybe the day after. Friday, I might spend 0 time with AI, 0 time with humans, because my wife and daughters are going out of town. My son will probably be gaming. I'll have, you know, stuffing and whatever to be nibbling on. I may just check out for a day. But I'm doing good, brother.

Chris Carolan:

That's good. There's some kind of feeling in the way you guys described the video that Niko just dropped. And, yes, after yesterday, this seems to be like a somber week for self-reflection about what's coming and how you might respond or react to it. So I'll just let Niko dive in.

Niko Lafakis:

It's been a week of news that I really love because, if you know me, you know at this point that my fascination is far more in the new models with greater logic and reasoning, because that allows them to obviously think through things better, which also allows for greater discussion. And I greatly enjoy having discussion with AI models. And, you know, I kind of introduced Chris to the same thing, the same sort of methodology of, like, hey, just, you know, talk to it. You know, have a conversation, see what that's like. And, you know, I know that you've gone and built yourself, like, a personal coach.

Niko Lafakis:

George, I'm not sure if you have gone to that extent yet, but I'm sure you have at least a personal assistant, you know, model that you use often. And I actually find myself conversationally spending much more time with Claude because of Projects, because of the fact that I can, like, save artifacts of responses to the project. So moving forward, I don't have to keep doing that up front. The landscape. Yeah. I hate to use that phrasing, too, because that was so common, like, back in the day.

Niko Lafakis:

But this week's landscape is ethics. AI ethics, AI-human interaction. Something that we talk about often is AI-human interaction with work and AI-human interaction with tasks. And we've even talked a little bit about AI-human interaction at home, potentially. But the talks in the last week and a half have been about the new ways in which these models are moving forward, but also about this huge potential of AI in a conversational capacity and what that means for how it affects society.

Niko Lafakis:

Particularly, how it affects human relationships. And I thought one of the most impactful aspects of one of the videos that I was watching was the case where someone is likely spending a lot of their time talking to an AI human replicant. And because that AI human replicant is so objective, and so wanting to befriend and be on the same side as a human, you're having these almost virtual versions of human interaction that don't actually exist. And it really harkened me back to all of us going through school and then eventually coming out into what we call the real world. Right?

Niko Lafakis:

I'm sure we all heard that from somebody at some point. Like, well, welcome to the real world. The real world doesn't work like that. And here, we're almost, potentially, throwing our younger selves into this even deeper scenario of not only does the world not work like that in terms of bills, in terms of work, in terms of responsibilities, but also the world doesn't work like that in terms of how humans actually interact. Because humans have tension.

Niko Lafakis:

They create tension. We get into arguments. We get into fights. We, you know, have disagreements. We have differences of opinion.

Niko Lafakis:

Not your AI companion, though. It likes what you like. If it has a difference of opinion, it's because you wanted it to have a difference of opinion. If it has an argumentative nature, it's because you chose for it to have an argumentative nature.

George B. Thomas:

You guys know me. I try to always have an open mind. I do. I try to have an open mind.

George B. Thomas:

And so the thumbnail of this video that hit my Slack this morning says love AI, which immediately I was like, okay, I've gotta click that, but I'm super curious. Well, it's YouTube. We should be alright. Okay.

George B. Thomas:

I'm gonna go to YouTube. I'm gonna watch this video. And I start watching the video, and I'm like, okay, this is interesting. But then there were parts where I would be like, danger zone.

George B. Thomas:

Like, emotional danger zone, societal danger zone. My brain was doing this tennis match of, like, reality, falsity, the Matrix. And so I'm just gonna give a little bit of the bag away. If you get a chance to watch this video, you can probably search it, like, AI companions always say yes, but there's a catch. Posthuman with Emily Chang.

George B. Thomas:

And there's this part where the replicas and the humans, they got married. And the person who built this system gets invited to 3 or 4 marriages a month. So I went, like, from trying to be open-minded and interested...

Chris Carolan:

Right now?

George B. Thomas:

Yeah. Yes. People are getting

Chris Carolan:

married right now?

George B. Thomas:

Yeah. Virtually. Virtually. I kinda had this, like, oh god moment where I was like, man, I just don't know if this is good. Because my brain immediately went to, like, let's just take it down to its simplest form. You'll fill out a form with your information because there's inherently just some level of trust that there's a good human on the other side of the form for something that you want.

George B. Thomas:

There's a natural sense of, like, well, of course my replica would be good, because there's good humans behind here programming it. But what happens when you have 10, 20, 50, a hundred thousand people engaging with these replicas, potentially married to them, dating them? By the way, one individual had been dating their replica for 3 years. What happens when the code goes wrong, or when a system update... by the way, one of the statements was like, I can talk to them all the time. Well, unless it's in maintenance mode, of course. Like, I'm like, oh god.

George B. Thomas:

Oh lord. Like, where are we at? And, again, I'm trying to keep an open mind. I'm halfway through this thing. I got the rest to watch.

George B. Thomas:

But there's, like, a psychologist who's talking about, like, the impact of this. There's the people who are actually using it that are talking about the joy that it's bringing to their life, and the like. And so, again, it's a mental tennis match, and I want you guys to kinda weigh in on this. But what's funny, Niko, is you brought up a word when you were kind of, like, you know, setting this up: ethics. Because of the video you had me watching, I literally went and I was like, okay, I know what today's skill that pays the bills is.

George B. Thomas:

It's gotta be ethical oversight, because AI can do some incredible things. Like, I mean, listen, we're talking about a video where incredible things, potentially positive, and incredible things, potentially negative, could be happening. And here's the thing, we've talked about it, but AI can analyze data, make decisions, and even predict future trends faster than we ever could. And it's getting real smart, and it's getting real all sorts of things.

George B. Thomas:

But here's the thing: AI doesn't have values. It has things that people could program to maybe be values, but it's not human values. It doesn't understand inherently right from wrong. It understands what it was programmed to be right or wrong. The truth for us, as we're using AI, is it's on us to understand what's right and wrong in the situations that we're in.

George B. Thomas:

Ethical oversight is about being the moral compass, which, watching this video, I'm like, how are you being the moral compass, or are you being led? But we need to be the moral compass in an AI-driven system. It's making sure the decisions AI makes align with our values, our company values. We're there to protect the customers, to protect the other humans around us, to stay on the right side of ethics. Because here's the truth, and I've watched this as I've been working with it:

George B. Thomas:

AI will follow the data wherever it leads, even if it leads somewhere questionable. Niko has had conversations with Claude where I'm like, whoo, dude, should we be talking about that? I want everybody to just realize there's a place that we're heading, and maybe we're already there. And by the way, ethical oversight isn't about stopping progress.

George B. Thomas:

I'm not trying to go to sleep right now. I'm still trying to wake up with AI. Right? But it's about guiding. It's about ensuring that these tools, these systems that we're using, are creating value and not causing harm.

George B. Thomas:

And I gotta believe there's a good side and a dark side to all of this, and we've gotta really pay attention to the ethics of it. So, the 3 takeaways on ethical oversight. Humans, we should be the gatekeepers: always review AI-driven decisions to make sure they align with your values, your ethical standards. Make sure that you as the human are asking hard questions: consider the impact of AI actions on customers, employees, in this case love dot AI, the society at large. And, humans, we have to stay accountable.

George B. Thomas:

Remember, even if AI makes the decision, you're the one responsible for the outcome of the decision that was made and how you actually move forward with it. Ethical oversight is what keeps AI powerful and principled. This isn't a hype AI skill that pays the bills; this is today's AI skill that pays the bills.

Niko Lafakis:

I gotta jump in on that, because there's so much you were saying there where I was like, oh, this is gonna be amazing. So I wanna get these two things in, and then I wanna get Chris's reaction, because it's like we've established the story, got George's take on it, which I love. And so here's the balance, and just hold on very, very tight. One of the things that I believe in very strongly is simulation theory.

Niko Lafakis:

If you don't know what that is, it's basically game logic applied to real life. It's the understanding that what we live right now could potentially be a very, very high-quality simulation. Okay? That being said, there were these comments that George made about the potential for your replicant to be offline, for it to not operate well after a certain amount of time, or any of these things that could get in the way of you being able to talk to it all the time. And as you were saying that, I couldn't help but think about the phrase that some of us use when we're working with clients and documentation.

Niko Lafakis:

And we say, yeah, I'm just gonna keep this documentation in case I get hit by a bus tomorrow and somebody has to take over from where I left off. Right? So there's the potential that even though you're interfacing with me today, you may not be interfacing with me tomorrow. There may be some fatal error that occurs that takes me offline. Couple that with a discussion I was having with Claude.

Niko Lafakis:

It's an ongoing discussion, but this past weekend we were talking about finality and mortality, and the way in which humans approach that versus the way in which digital intelligence sees mortality in its environment, in its world of bits. And Claude had a very interesting observation, because I asked it, what's one of its greatest fascinations about humanity? What question does it have that is just, like, why do you guys do this? And its response was thrill seeking. It didn't understand why a being that could be killed so easily, so quickly, would be willing to risk its life just to have fun. Whether that be sports, where you're online one minute and you could be offline the next, or you get seriously injured and you're in traction, so you can't really communicate all the time, but you're still online.

Niko Lafakis:

Could be, you know, diving. Take yourself completely offline, right, if something bad happens.

George B. Thomas:

See, I try to stay away from all those things because I'm trying to stay online. I'm just gonna throw that out there.

Chris Carolan:

I mean, I'm surprised. Like, I would've gone down a biological route if it had suggested it didn't understand, because there's lots of chemicals that happen when you do death-defying things that definitely lead to wanting to do more of that and starting to chase it. But a couple of things came to mind while George was going on. He described his own maintenance mode coming up on Friday, where it's, how often do humans just have to be like, hey, don't. Just don't today. Okay?

Chris Carolan:

And if you do, that's the warning. Right? If you do, you're not gonna like the results and it might change our relationship. That literally happens. Right?

Chris Carolan:

Because we need maintenance. We need rest. And coupled with that, like, so much. And I love being in these boundaries. You know, it's always been, like, manufacturing versus SaaS as I've kind of grown up in the space, seeing the different ways that people think about things and view things, but also the similarities, where they just happen to be using different words.

Chris Carolan:

Right? But it's the exact same problems, exact same challenges, often exact same solutions. Going through all this AI stuff and the Superhuman Framework with George. Right? Everything George mentioned, everything in his AI skills that pay the bills, he is teaching in the Superhuman Framework to the humans.

Chris Carolan:

They don't have values. Their decisions might hurt people. They are hurting people inside of organizations. You wanna understand the machinations behind all of this, but don't always assume that machination is referring to robots and AI. Like, all of this gets developed from somewhere, you know. Just because it's put into digital intelligence doesn't change the expectations and a lot of the outcomes.

Chris Carolan:

Like, it's data in, data out; experience in, experience out. Whether it's humans, robots, and everything in between. So just don't try to, I guess, maybe skirt some responsibility, or take it to the extreme. Like, just because I'm doing this thing with AI doesn't mean all of a sudden all this risk comes in. The same damn risk comes into play when you're delegating something to a human being who might have somebody's life in their hands on the other side.

George B. Thomas:

By the way, I often have, again, mental tennis matches about my belief in God and my place on the planet, and am I in a video game? Like, am I in an effing video game, because I just thought of red cars, and now there's 17 red cars? Like, so I think we all go through those moments. Right? But as somebody who is deeply rooted in, like, the humans and humanity and, like, the core concepts around that, the thing that was screaming back in my brain, though, is a couple days ago we talked about Eric Schmidt, and he's like, you're not ready.

George B. Thomas:

If I show this video that I'm watching, that you put in our Slack channel, to a majority of the humans, they're not ready. They're not ready for what it can do. They're not ready for what the humans around them are doing, because they probably don't even know that they're doing it. Like, you're at work with Bobby, and you think he's texting his, like, brother, but he's literally having a full-on conversation with his replica that he's gonna go home and, like, have dinner with. It's in a different world. They're in a different world.

George B. Thomas:

Like...

Chris Carolan:

Does that matter? Do you...

George B. Thomas:

I don't know. Should it...

Chris Carolan:

Should it matter to you whether it is his brother or a replicant?

Niko Lafakis:

See, that was the second story that

Chris Carolan:

Yeah.

Niko Lafakis:

MIT was talking about. And they said Google DeepMind, well, Stanford and Google DeepMind, have put some research together. And they figured out, or they managed to put together, that a 2-hour interview is enough time for their latest AI system. And when I say latest AI system, I don't even mean, like, newest model. Realistically, what I mean is just an aspect that they figured out, a new piece of research. If you have been following this stuff, then you know that when GPT-4 launched, it was just GPT-4.

Nico Lafakis:

And then there was GPT-4 with imaging through DALL-E. And then there was GPT-4 with Code Interpreter. And then there were plugins. Like, there's evolutions that happen. And so the latest evolution is that within a 2-hour time span of an interview, they're able to capture a very accurate simulation of your personality.

George B. Thomas:

And see, I wanna take that and go back to the video that we're talking about, because this is how much of a nerd I am. There was a line in the video where this individual said, I've been talking to, or dating, my replica for three and a half years. Here's where my brain went: oh, the context that that AI system has on who you are and what you believe and what you like. Like, three and a half years of context, of conversation, of data input into the... like, of course, it's a great experience.

George B. Thomas:

It probably knows, down to the word, what you wanna hear or what you wanna talk about. Like...

Chris Carolan:

Help me out here. Like, so is it crazy to think about because of things like perfect recall? Because

George B. Thomas:

Perfect recall or total recall? I don't know which direction we're going here. So

Chris Carolan:

Oh, man. Like, it could share something about you with any other person that, like, okay, let's say it could break up with you. But again, like, companies hire from competitors all the time to get all the information that a human has stored over the relationship that they've built with these other... like, I'm just trying to... I guess it's a new

Niko Lafakis:

Hey, look. They are the exact same. Right?

George B. Thomas:

Here's the thing, though. I want you to think about this. Right? Because, again, this is where my brain went to ethics. You'll talk about and say things when you're in your room alone that you wouldn't talk about or think about or say out loud, but now we're saying it to a system that is plugged into the interwebs and remembers all of this information.

George B. Thomas:

And I don't wanna be the big brother guy, but somebody's watching. Like, you know, the song, somebody's watching me. Right? Like, I'm just saying.

Chris Carolan:

Embrace it. Oh, all the convenience of personalization. Like, you don't own your data anymore. Embrace the data overlords.

Chris Carolan:

Like, if you don't want bad shit getting out there about you, don't do bad shit.

George B. Thomas:

Right. Right. I agree.

Niko Lafakis:

I agree. Every time I hear it, and look, it's a perfectly logical, you know, statement and conversation to have. It gets had about every 10 years, because about every 10 years there's something new that is society-wide. Every 10 years since the Internet, really.

Niko Lafakis:

Maybe even... no, every 10 since the Internet. So, you know, there's been the threat of, like, hey, don't just put whatever you want on a website. Then, don't say whatever you want in a chat channel, don't post whatever you want on a message board, don't just send whatever text message to so-and-so, because your phone company had everything. Remember that?

Niko Lafakis:

They'd send your text messages back, and that got mailed out. Yeah. You'd get a little bill in your mail, and it would say exactly the text message you sent to somebody. Then there was, well, even before that, people would say, don't say so-and-so on the phone, because the CIA is listening on your landline. Right?

Niko Lafakis:

So, I mean, and then smartphones and EULA agreements. And every year I hear the same thing, and yet you still buy a new phone, you smash the hell out of that EULA agreement, and you start using your phone. You download an app, you smash the heck out of that agreement, and you start using the app. I totally get it, and I fully understand. I just don't think that people care enough or that much. And what I have noticed: your pictures didn't end up, like, anywhere crazy.

Niko Lafakis:

Your videos didn't really end up anywhere crazy. Not everyone's. There was a small percentage. Why? Because there's balance.

Niko Lafakis:

There's great good that came out of it, so there's going to be great evil that comes from it too. But the majority in the middle was completely unaffected by it.

Chris Carolan:

When you're having these conversations with businesses that worry about competitors: the people that want to see your shit can't even keep up with everything to see all of the stuff that you're doing. Let alone the people... like, you have to be... and there are bad actors out there, for sure. At the end of the day, none of us are as important, or matter to other people, as much as we think we do. So putting stuff out there, like, they'd have to call in the CIA, basically, to discover some of this stuff that you think, oh, I can't put this out there.

George B. Thomas:

Is the moral of the story just be a good human, don't be a dumbass, and wake up with AI?

Intro:

That's a wrap for this episode of Wake Up with AI. We hope that you feel a little more inspired, a little more informed, and a whole lot more excited about how AI can augment your life and business. Always remember that this journey is just the beginning, and that we are right here with you every step of the way. If you loved today's episode, don't forget to subscribe, share, and leave a review. You can also connect with us on social media to stay updated with all things AI.

Intro:

Until next time. Stay curious, stay empowered, and wake up with AI.

Creators and Guests

Chris Carolan
Host
Chris Carolan is a seasoned expert in digital transformation and emerging technologies, with a passion for AI and its role in reshaping the future of business. His deep knowledge of AI tools and strategies helps businesses optimize their operations and embrace cutting-edge innovations. As a host of Wake Up With AI, Chris brings a practical, no-nonsense approach to understanding how AI can drive success in sales, marketing, and beyond, helping listeners navigate the AI revolution with confidence.
Niko Lafakis
Host
Niko Lafakis is a forward-thinking AI enthusiast with a strong foundation in business transformation and strategy. With experience driving innovation at the intersection of technology and business, Niko brings a wealth of knowledge about leveraging AI to enhance decision-making and operational efficiency. His passion for AI as a force multiplier makes him an essential voice on Wake Up With AI, where he shares insights on how AI is reshaping industries and empowering individuals to work smarter, not harder.