Episode 8 – Artificial Intelligence Q&A: Deep Fake Nudes, Music, Dr. Visits, Jobs and more…

Show Notes:

AI and cybersecurity expert Chris Roberts returns to the show to answer some intriguing listener questions on the topic of AI. Chris was my guest on Episode #2, and his appearance certainly stirred up some emotional responses and interesting inquiries.

Tune in to hear Chris’s take on AI and its impact on various aspects of our daily lives. We explore the ethical concerns surrounding deep fakes like the Taylor Swift debacle, open-source vs. closed-source AI development, and the concerns of creatives – musicians, artists, writers, and performers. Chris speaks about the potential environmental impact of AI, the benefits of AI in medicine, and whether or not AI is going to replace you at your job. We’d love to hear your thoughts on the show – leave your comments and feedback below!

In this episode:

  • Taylor Swift and AI, the dark web, and deep fakes
  • Open source vs. closed source AI
  • Concerns expressed by artists, writers, musicians
  • Detecting AI content on social media
  • Interview with OpenAI CTO
  • Questions around AI and the environment
  • Nvidia – what is it, how does it work, should I invest?
  • Medical professionals – how are they using AI to help you?
  • Karen’s experience with AI at her doctor’s office
  • Jobs and AI – Will it replace you?
  • Automation has already changed our lives over the years
  • Self-checkouts – Amazon Fresh didn’t work
  • The game – “You Know You’re Old When…”
  • Send me your comments and feedback on the show!

ThirtyFiveSixtyFour is your weekly dose of inspiration for navigating the exciting, unpredictable, and undeniably transformative journey of midlife. Hosted by Karen Stones, founder of 13 Jacks Marketing Agency, this podcast avoids the tired cliches of crisis and stagnation and celebrates the power of play, discovery, and possibility that comes with this unique chapter in life. Join us every week as we delve into the real stories, challenges, and triumphs of midlife. We’ll explore fresh perspectives, practical tips, and inspiring experiences that will help you thrive, not just survive, during this pivotal time. Ready to rewrite your midlife narrative? Head over to thirtyfivesixtyfour.com and be a part of the adventure!


ChatGPT from OpenAI
Gemini AI from Google
Claude AI
What exactly happened with Taylor Swift? What is a Deep Fake?
Is my doctor using AI?
How Doctors are using ChatGPT
What is Nvidia?
What is happening with all these musicians and this petition everyone is signing?
And screenwriters?

Why do I keep hearing AI is bad for the earth?
AI and the Environment
AI and Jobs
OpenAI Interview

Show Transcript:

[00:00:00] Chris Roberts: Publishers, specifically music labels, record labels. Hollywood studios can use AI generated content to score movies, to write scripts, and to even create the movies themselves. So a music artist could come out with a video that’s entirely created by AI. Instead of hiring a crew to have someone do the camera work, the editing work, directing work, they can all be generated by AI. And what they’re simply saying is, we don’t want to be replaced by AI.

[00:00:42] Karen Stones: Welcome to episode eight.

I have the great pleasure of welcoming back Chris Roberts to the show.

Chris has worked at Microsoft, Dell, and other leading technology companies as an artificial intelligence and cybersecurity expert. He’s got the unique gift of translating technical jargon into everyday, understandable terms. He’s a proud alum of the University of Maryland and Andrews University. He’s a dedicated father, a jazz vinyl collector, a poet, a writer, and he’s also passionate about aeronautics, art, and antiques. What does this man not do? Is the question. I am so lucky to call him a dear friend. Welcome back to the show, Chris. It’s so good to have you.

[00:01:41] Chris Roberts: Great to be back.

[00:01:42] Karen Stones: We had so many questions come in after our episode on AI, which was really a 101 basic. What is AI? How is it being used in the world today? And lots of listener questions have been coming in over the last few weeks, and I thought, hey, before we dive into a deeper episode, let’s answer some of these questions, because I have them, listeners have them, and I think they’re quite common.

[00:02:14] Chris Roberts: Oh, I still have questions myself.

[00:02:17] Karen Stones: Okay, are you ready?

[00:02:19] Chris Roberts: Absolutely. Let’s roll.

[00:02:20] Karen Stones: Okay, so the first question, this came in from Katie, and it looks like she’s in Cincinnati, and she wrote, what exactly happened with Taylor Swift? I keep on hearing stuff about Taylor Swift and AI, and I don’t exactly get it. So can you give us a little bit of background on what exactly happened there?

[00:02:47] Chris Roberts: Sure. So what happened to Taylor Swift happens to a lot of celebrities, both male and female. Whether you are a movie star, a musician, a politician, if you are a person of celebrity, you may be the victim of this type of grift. I’ll just say it that way. And this has been done since the Internet. Let me go further back. This has been done as long as we’ve had photocopiers and people could cut and paste. And that is to create images that were not authorized, or images that are fake, to represent somebody in a compromising position. That’s what happened to Taylor Swift, and it has happened to pretty much every Hollywood personality that people are interested in. If your name is in the press and people are interested in seeing you, someone may do this to you. And what this involves is either manually creating an image that is compromising to the individual, or, as in this case with Taylor Swift, using AI. It was so convincing and so alarming because the group or person used AI to generate an image that looked indistinguishable from Taylor Swift in real life. Now, I should point out that these images were not generally available. You could not do a Google search and find these images. These images were buried in, let’s just say, the dark web. So unless you were a person who likes spending time in the dark web, chances are you did not come across these images. They came over to the mainstream because somebody went to that world, brought those images back, and posted them somewhere, and that’s when the story got legs, because people were actually seeing them. So this comes up in the whole conversation on deepfakes. And a deepfake is when AI is used to generate an image or a video or audio. So we’ve seen deepfake videos of people saying things where you go, wait a minute, I know they didn’t say that.
And someone attributed a quote to somebody like Denzel Washington. I was like, there’s no way Denzel talks like that or would say those words in that order. I just know that from watching him both live and in movies. Same thing for politicians. I believe there were a bunch of robocalls during the last election cycle of politicians calling their constituents, quote unquote, except it was not them. So if you hear something that’s really far fetched in either an audio or video clip of a celebrity, take it with a grain of salt, consider the source, and do your homework. Unfortunately, compromising images are going to be around for some time. Now, I should distinguish that to generate those images, you have to use something called open source AI. So there’s open source AI and closed source AI. Open source means I can download a model from the Internet. Meta makes Llama 2, for instance; it’s an open source model I can download, and there are many of these models out there that are open source. And if I have the compute capability, and we’re going to talk about that later, I think there’s another question around this, if I have the compute capability to run that model on my machine and train it, I can make it do anything I want it to do. A closed source system is what we get from OpenAI. I know it’s called OpenAI, but it’s really closed source AI. So OpenAI is closed source. Google Gemini is closed source. Claude from Anthropic is closed source. If you’re not familiar with that, just Google Anthropic. Both Amazon and Google have invested in Anthropic, for instance. The reason why I bring up closed source is that it is very difficult to make those models do nefarious things. So I cannot get one to write computer malware or viruses, I can’t have it create deepfakes of known personalities, and I can’t even ask it things like, can you help me make a pipe bomb?
So there are guardrails that they put in place so you cannot do certain things. And deepfakes are one of the things you’re not allowed to do on those platforms.

[00:06:38] Karen Stones: I see. So publicly available AI tools will not typically allow you to do a nude deep fake, for instance. Which is what happened to Taylor.

[00:06:51] Chris Roberts: Exactly. So you may hear the words Midjourney, Stable Diffusion. Those are some of the most popular AI generation tools for graphics. And then the most impressive piece of software that got released: OpenAI showcased a piece of software called Sora, which generated lifelike video from a prompt, no images required. Show me a dog walking through a field of buttercups, and it would create a video of that for you. Now, that is not publicly available; it is invite only. They have not said the public can use this product. It is so good that they’re not going to release it widely yet, and it also takes a lot of processing power. Opening it up to millions of users would create such a bottleneck in their data centers that I don’t think it’s something they want to do right now. But the technology is going to get better and better, and guardrails and governance around these technologies are going to be very important. The White House, I believe, already put out a memo about what constitutes safe and responsible AI. So there’s going to be regulation, but it doesn’t stop someone from doing something nefarious. So buyer beware. Yeah.

[00:08:08] Karen Stones: So are you telling me, even though there’s guardrails, to expect a lot of interesting things, especially around well known people.

[00:08:17] Chris Roberts: In AI, it’s not just deepfakes around well known people and celebrities. Just keep in mind that none of these providers, whether it’s OpenAI, Google, Microsoft, Meta, or Amazon, they are still learning what these systems are capable of doing. If you go to use these systems and try to read the manual, quote unquote, there is no real manual. They can tell you how to use the system, but they cannot tell you what’s going to come out the other end. Because the way I ask a system something may be different from the way you ask it. And given the prompt you provide the system, they have no way of telling what’s going to come out the other end. I think in the last episode we talked about what generative AI really means, and that GPT means generative pre-trained transformer. It literally predicts, based on a mathematical model, what the next letter, word, sentence, or paragraph will be. So even the system doesn’t know what’s going to come out the other end until it’s actually prompted. There’s no master list of all the possible responses, because they are practically infinite.
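Chris’s description of GPT as next-word prediction can be sketched in miniature. This is a toy bigram counter, nothing like a real transformer, and the tiny corpus and function names here are made up purely for illustration:

```python
from collections import Counter, defaultdict

# Toy training corpus; a real model trains on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a simple bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

A real model predicts over a huge vocabulary with learned probabilities rather than raw counts, which is why, as Chris says, even its makers cannot enumerate every possible output in advance.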

[00:09:21] Karen Stones: Let’s move on here. This is another question from Lila, and it came in just last week. She says, I just saw this open letter from over 200 famous artists regarding the predatory use of AI. They all signed this letter. Some of the folks include Stevie Wonder, Bon Jovi, and the estate of Bob Marley. What exactly are these artists asking for?

[00:09:52] Chris Roberts: This one’s a little deep to unpack, but we have to start with the first premise, and that is that AI gets its capability by learning from us. It can predict and provide you answers because it has read everything about us. And when I say AI, I’m using it generically here, whether it’s Google, OpenAI, Microsoft, Meta, whoever it may be. They have literally scraped the Internet for information. I think Reddit just went public, and they inked a deal with OpenAI where OpenAI will get access to Reddit and subreddit content from their users. That is very important. AI cannot do what it does without the input of human beings, and I’ll stress that over and over again. This is not some magical black box that came into existence without human content. It is all trained on human content, whether that’s the spoken word, the written word, images, or video. It is all from us. Now, take that in context with the musicians and artists and writers, et cetera. They are concerned that because AI has now been trained on their writing styles and their musical skills, an AI can generate a chorus or a symphony, because it has listened to Mozart and read sheet music from Beethoven, and it can recreate these things. So when an artist, say Stevie Wonder, or a prolific writer like Taylor Swift, or one of my favorite artists, let’s say Sade. I do not want AI generated music by Sade. I want the real person to sing and write that music. And what they’re concerned about is that publishers, specifically music labels, record labels, and Hollywood studios, can use AI generated content to score movies, to write scripts, and to even create the movies themselves. So a music artist could come out with a video that’s entirely created by AI. Instead of hiring a crew to do the camera work, the editing work, the directing work, it can all be generated by AI.
And what they’re simply saying is, we don’t want to be replaced by AI, and they’re trying to build that into their contracts. The Writers Guild just went on strike to make sure there are provisions that AI will not be writing scripts for television. Not that the scripts would be that good, but they want to make sure that their creative rights and privileges are protected under the contracting and what they call binding arbitration, so to speak.

[00:12:28] Karen Stones: So you’re telling me that some sort of AI tool could generate a total complete commercial with voices, music, audio, video?

[00:12:39] Chris Roberts: Yes.

[00:12:40] Karen Stones: Wow.

[00:12:40] Chris Roberts: I mean, you and I both know this, because we do content, and sometimes you want a script. I can just say, you know what, ChatGPT, I’m going to do a talk on photosynthesis in the southern hemisphere. Can you give me a ten minute, two person script for an interview that discusses the nuances of photosynthesis and how it impacts chlorophyll production in equatorial plant life? And it will come up with a series of questions, with the answers, that you can use. And then you can also automate the speaking of the question by the computer, and the computer can answer the questions in someone else’s voice. You can script the whole thing, record the whole thing, and now with tools from not just OpenAI but from other companies as well, you can have it create a video for the whole thing. So if you look at Instagram, TikTok, any platform today with short form video, if you look closely, a lot of that video is AI generated content. A lot of people aren’t actually acting out these videos. So the professional artists are coming back and saying, I don’t want that in the music world, I don’t want it in Hollywood. I’d like to protect my livelihood.

[00:13:54] Karen Stones: Interesting you mentioned social media and AI generated images. Even when we as a podcast team upload short form video, it always asks us, is this AI generated? It’s a newer button.

[00:14:11] Chris Roberts: It’s deeper than you think, because AI companies, OpenAI, Google, Microsoft, Anthropic, and the list goes on and on, are all hungry for data. There is a very telling interview with the CTO of OpenAI, and we can provide a link to it in the show notes, where she was being asked, where are you getting data from? And she was very coy about certain things when asked specifically about user generated content. So YouTube came up, and come to find out they were not just scraping video, but translating, through the closed captions, what was being said in the video, and then using that closed caption content as training data. So AI can scrape a video, interpret what was being said, and use that information to train the model. And that’s because the YouTube API allows you to ingest that video. Now, could Google actually restrict that? Absolutely. Can anyone restrict their platform? Sure. Most publications today, like the New York Times, which is suing OpenAI along with Microsoft for copyright infringement because models were trained on some New York Times content, can put up a paywall, and a lot of sites do. It’s not very effective; there are ways around that. But more and more content providers are becoming acutely aware of just how valuable that information is, and they’re trying to protect it, because they don’t want it gobbled up by AI systems.

[00:15:45] Karen Stones: Yeah, that’s really interesting. Okay, here is another question from Joey in Florida, and he writes, why do I keep on hearing AI is bad for the environment?

[00:16:00] Chris Roberts: It goes both ways. When compute was first established in the early days, it wasn’t a big deal. We had computers that took up the size of a room, for instance, the original IBMs, the Unisys mainframes. And if I’m speaking French to some people, I’m sorry, but computers weren’t always in our hands. Computers used to take up entire rooms and entire buildings, and keeping those things running took a lot of power, and they only ran certain things at certain times of the day. So a company would do their books at the end of the month, and there’d be a significant jump in processing. When processing spikes, power consumption spikes. Two things happen when a computer runs: it uses electricity, which has to be generated somewhere, and it gives off heat, because processors run warm. If you’ve ever played a game on your mobile phone, the phone gets warm. Why? Because the processor is generating heat, because it’s using more energy. That’s just simple physics. AI has a problem in that it consumes a lot of compute. It doesn’t just take a data center to do AI, it takes multiple data centers and very advanced chip technology. So when you provide a prompt to AI and hit enter, that query goes back to a data center, and then ChatGPT or Anthropic or Gemini has to compute what the response needs to be. And every time that happens, a processor has to run, and those cycles in the processor consume electricity and expend heat. So you’ve got to cool the data center and you’ve got to power the data center. All of that is hard on the environment, because every time I want a prompt answered, I’m using energy and expending heat. We know that global warming, or climate change, is an issue. The more heat that’s expended and the more energy we consume, the more problems we’re going to have.
So it’s now up to AI companies to figure out the most cost efficient, but also energy efficient or climate friendly, way to generate the output that’s required by their end users. And that is an ongoing debate. It’s been going on since we’ve been building data centers. And if you notice, data centers are not built in the desert in Arizona. They’re built in the Pacific Northwest; they are built in cooler climates. Companies have even submerged data centers. If you submerge a data center inside an airtight container, and they’ve done this and still do it, it’s cooler fifty feet down in a lake than on land, so the cooling costs are much cheaper. Part of the question will always come back to, well, what about bitcoin as well? A lot of the bitcoin miners operate in Iceland, because Iceland provides very cheap geothermal energy to run a data center. That’s where a lot of the bitcoin miners will go to do their processing, because the energy is actually cheaper there. And there’s no real carbon penalty for using geothermal energy, because you’re not burning fossil fuels, natural gas or oil or coal, to generate that electricity.
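The energy point can be made concrete with a back-of-envelope calculation. The figures below are purely hypothetical placeholders, since real per-query consumption is not public and varies widely, but the arithmetic shows why query volume is what makes AI’s footprint add up:

```python
# Hypothetical illustrative figures only -- not measured values.
energy_per_query_wh = 3.0         # assumed watt-hours consumed per AI query
queries_per_day = 100_000_000     # assumed daily query volume

# Total daily consumption in kilowatt-hours.
daily_kwh = energy_per_query_wh * queries_per_day / 1000
print(f"{daily_kwh:,.0f} kWh per day under these assumptions")
```

Even with modest per-query numbers, hundreds of millions of prompts a day scale into a power bill (and a cooling load) the size of a small city’s, which is why data center siting matters so much.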

[00:19:10] Karen Stones: Today’s episode is brought to you by Dana Creith Lighting, where artisanal craftsmanship meets innovative design. Are you searching for lighting that stands out from the rest? You’ve got to check out Dana Creith Lighting. Handcrafted in Southern California, each piece exudes attention to detail and commitment to quality. Say goodbye to replacements, and hello to long lasting beauty. Visit danacreith.com to view their stunning collections, or stop by their showroom at 1822 Newport Boulevard in Costa Mesa, California. Dana Creith Lighting, where elegance meets innovation.

So I keep on hearing about Nvidia. Does that have to do with all this processing power and what’s happening on the backend?

[00:20:14] Chris Roberts: Yes. Nvidia creates the processors that are responsible for most of the AI compute in the world today. So a little history. Nvidia was started to create a processor that could do parallel processing more efficiently than the typical CPU architecture. The early computers, our PCs, friends, used a CPU, the central processing unit, which is shipped by Intel or AMD. There are so many different manufacturers of CPUs: AMD, Arm, Intel; IBM made processors at one point, and so did Xerox. A host of companies made processors, but they were all CPUs. That means they had a very efficient way of multitasking processes on a chip, so it looked like the chip was doing a lot of things at the same time. In essence, it was just providing a slice of the CPU, an interval, or it would interrupt the CPU to inject into the processor to get something accomplished. It was very efficient at that, and it was very fast. However, doing graphical operations means you have to be very good at mathematical operations. That is, you had to be good at math, and math is hard. Doing that on the CPU just took a lot longer. So Nvidia was started to do mathematical operations a lot faster than a CPU, and that meant parallel processing. That is, I can compute one equation at the same time as another equation, so I get through the work a lot faster. That became very important for graphical processing, that is, for games. If you’re a gamer, you know all about Nvidia, because you have an Nvidia card in your computer today. So there are Nvidia cards, there are AMD Radeon cards, there are lots of GPUs out there. Nvidia just happens to be the one that’s the most popular and the most recognized by most developers. They’re also now the number one chip manufacturer when it comes to providing GPUs, graphics processing units, to companies like OpenAI and Google.
And GPUs are now the foundation of the modern AI economy, so to speak. So if you’re wondering whether it’s a good idea to buy Nvidia stock, you might be a couple of years too late. But they’re not going to go out of business anytime soon. I think they’re worth two trillion dollars now, or something like that. It’s crazy. But they are very important to the infrastructure that supports the AI economy as we know it today.
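The parallel-versus-sequential idea Chris describes can be illustrated in miniature. Each elementwise result below is independent of the others, which is exactly the property that lets a GPU hand every index to its own core; plain Python still runs both versions one step at a time, so this is only a conceptual sketch with made-up numbers:

```python
# The kind of operation a GPU parallelizes: elementwise math where
# every result is independent, so all of them could run at once.
a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]

# CPU-style sequential loop: one multiply-add per step, in order.
out_sequential = []
for x, y in zip(a, b):
    out_sequential.append(x * y + 1.0)

# Conceptually parallel: no element depends on another, so a GPU
# could assign each index to its own core and compute them together.
out_parallel = [x * y + 1.0 for x, y in zip(a, b)]

print(out_parallel)  # [11.0, 41.0, 91.0, 161.0]
```

Graphics and neural networks are built almost entirely out of operations with this shape, which is why a chip with thousands of small math cores beats a few fast general-purpose ones for AI workloads.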

[00:22:49] Karen Stones: Interesting. I’ve heard so many people talking about investing there and I wasn’t quite sure the reason.

[00:22:57] Chris Roberts: So we’re not going to do investment advice, but you want to look at all the companies that are creating GPUs. That’s Nvidia, the top dog. There is also Intel, and they’re not going away either, and also AMD. Those three are all making GPUs now. Not to be outdone, Google is also making chips. They’ve been making something called a TPU, a tensor processing unit, for some time that’s specific to the way they do parallel processing in the Google environment. I’ll just say it like that to keep it simple. So they make a GPU-like processor as well. This is not something that is going to be relegated to just Nvidia; Amazon also has a chip that they are producing. And they’re doing this because an Nvidia chip for AI, one chip, costs a hundred thousand dollars. It is very expensive, and you’re going to build out a data center full of hundred thousand dollar chips. If you had to buy a laptop and the chip inside cost a hundred grand, there’s no way you’re buying it. But anyway, it wouldn’t fit in a laptop. It’s very large; it looks like a toaster oven in size. So when it goes into the data center, it’s a very large form factor. They are expensive, and they need specialized hardware and specialized software. It’s not something you and I would buy, but it is a very important piece of the AI food chain.

[00:24:18] Karen Stones: Wow, that’s new information.

I was thinking that they would be tiny little chips just like the ones I’m used to.

[00:24:25] Chris Roberts: Well, tiny is relative. They’re about the size of a waffle. A square waffle.

[00:24:30] Karen Stones: Yeah, but still not the processor in our cell phone or whatever.

[00:24:36] Chris Roberts: Exactly. So, for instance, look at your computer. Your laptop is not all processor; it needs all that extra circuitry, what we call a system on a chip. We don’t have time to go over all these things, but there are a lot of other electrical components that support a GPU. So when you put all those things together, it’s a pretty sizable piece of hardware. And that’s why they run in data centers and not our laptops.

[00:24:59] Karen Stones: Yeah. Okay, that’s. That’s interesting. Okay, next question.

[00:25:03] Chris Roberts: This is.

[00:25:04] Karen Stones: I’m inserting my own question here. Is my doctor using AI? And how would they be using it? Because I already have Doctor Google at my fingertips.

[00:25:17] Chris Roberts: Oh, man. Where do we even begin with this one? All right, so we’re both parents. If our kids sneezed, we’d go right to the Internet: WebMD, the Mayo Clinic. My kid just sneezed, what’s going on? We’ve been using forms of, not AI, but electronic information sources to diagnose ourselves, or self diagnose, forever. If I got an ache or pain or something, I’d wonder, well, how do I treat this? I mean, I was rock climbing a couple weeks ago and fell down a bunch of rocks. Fortunately, I didn’t hit my head. But I’m like, hey, do I need to go to the hospital? How do I make sure I don’t have a fracture without going to get an x-ray? And it gave me pointers on how to twist, how to check motion and movement, and see exactly where the pain was. So, yeah, lots of ibuprofen, ice, and rest. But if I called my insurance provider, sometimes they provide a virtual assistant or a virtual nurse, right? Now, I believe it was actually Nvidia, them again, not OpenAI, that created a demo, what we call a proof of concept, of what a medical assistant would look like. You can talk and say, here are my symptoms, this is what I’m experiencing, and it would give you feedback and recommendations on how to treat or diagnose. Now, does this replace a doctor or nurse? No, not in any way, shape, or form. But once again, it’s a way for a professional, in this case a medical professional, to augment their skills and training with something that has the sum total of medical knowledge at its fingertips. Case in point: if I had trouble breathing and I went to the hospital and they took an x-ray, an x-ray technician or radiologist, someone, has to look at the x-ray and say, okay, is there inflammation in the lungs? What is this over here? So the image that comes up is subject to interpretation.
Now, as an x-ray technician or radiologist, that person may have seen, in the sum total of their career, maybe a thousand x-rays. Let’s say they’ve been doing it a long time, say five thousand x-rays. So they’re pretty good at pinpointing what may be wrong based on the x-ray. A piece of AI software, and IBM Watson has proven this many times, not just with x-rays but also with retina scans and ophthalmological examinations, can now predict with 98 percent accuracy in a lot of cases what’s wrong with the patient based on that x-ray. So do I want it to replace my doctor? No, but it is a very good source to augment what my doctor’s doing in my treatment plan. Do I want a cart to come into my hospital room, I almost said hotel room, and say, good afternoon, Mister Roberts, your treatment plan today is X? And the thing is, we make fun, but it won’t even sound like a robot. It would just sound like my doctor. It may even have a video of my doctor on the cart itself. If you’ve been in a hospital, they have these carts they wheel around with a monitor on them. So there’s no reason why you can’t have your doctor video conferencing on one, but also using AI at the same time. Will people be freaked out about this? Yeah. But in the back office, when you go home after your doctor visit, can they use AI to help interpret your test results? Yes. Why? Because I’ve done it, after my last exam. You go to your patient portal with your doctor and it has all your results: your blood pressure, your A1C, all those little numbers and things like that. Am I going to google every one of those things? No. I just screen scraped the entire thing, dropped it into ChatGPT, and said, hey, this person just came back from the doctor, and these were the numbers from most of the tests. Can you interpret them and give me a likely diagnosis or treatment plan based on the information here?
And sure enough, it came back with a lot of good information. I’ll tell you this, your doctor will hate you if you do that, because then you’ll call them and say, Doctor, it says I’m gonna die in six months. I’m kidding. But they don’t like it any more than you do. And I think it’s gonna take some time for all of us to get used to that level of service and care that incorporates AI. But rest assured, in the back office of your hospital or your doctor’s office, they are already using it to help in diagnosis and treatment plans. Right now, I know hospitals use it to better perform medical records evaluation and billing coding with your insurance company. So it’s already in the system. Don’t fight it; get used to it. We’ll talk about this on another episode as well. It’s coming, so you might as well figure out how to get along with it and how to incorporate it and make the best use of it in your life.
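For the curious, the screen-scrape-and-paste step Chris describes can be tidied with a small helper that formats lab values into a readable prompt before pasting it into a chatbot. The function name, field names, and wording here are all hypothetical, and, as in the conversation above, nothing about this is medical advice:

```python
def build_lab_prompt(results: dict) -> str:
    """Format lab values into a plain-language prompt for a chatbot.
    Illustrative sketch only -- not medical advice, not any official API."""
    lines = [f"- {name}: {value}" for name, value in results.items()]
    return (
        "These lab results came back from an annual physical:\n"
        + "\n".join(lines)
        + "\nIn plain language, which values are outside typical ranges, "
        "and what questions should I ask my doctor?"
    )

prompt = build_lab_prompt({"A1C": "5.9%", "LDL cholesterol": "140 mg/dL"})
print(prompt)
```

Ending the prompt with "what questions should I ask my doctor" keeps the chatbot in the augmenting role Chris recommends, rather than treating it as the diagnosis itself.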

[00:29:54] Karen Stones: Wow. Okay, that’s. That’s really interesting. Thank you. That was my personal question.

I’ve asked my personal doctor to come on and chat with us. He’s a little shy, but he actually took out a mini laptop at my last well visit, you know, an annual visit. And he did crunch numbers right in front of me, because I noticed that my cholesterol had gone up, and I said, oh my goodness, am I going to have to start cholesterol medication? When do people do that? He put in some calculations based on my age, weight, and all these other factors, and he said, you have a 99 percent probability of not having an issue with this. So, no, this is not an intervention. So I saw it happen right in front of me, actually.

[00:30:47] Chris Roberts: I think the thing I’m most cautious about with AI in healthcare is the misapplication of the technology. So, for instance, I’ll say this: I did not go to med school. I don’t know what drug is good for me or bad for me. I don’t know everything about how to treat a specific ailment or disease, especially if it’s a chronic disease. That’s why you have physicians. That’s why people go to med school, that’s why they go on rounds, that’s where they learn there’s more to medicine. Medicine is more of an art than a science. And I have a bunch of doctors in my family. They always say, yeah, there’s book medicine, and then there’s practical medicine, and the practical side is more of an art. So when I look at healthcare, I really want those systems to be at the fingertips of a professional. Not me, a professional, because it’s those individuals that are gonna create the best system of care for me based on my circumstance, not just because it has book knowledge. Book knowledge is dangerous, because book knowledge can tell you how to make a nuclear bomb. That doesn’t mean it’s a good thing.

[00:31:53] Karen Stones: Well, that leads me into, I think, my last question. It’s the one that I think everyone keeps asking themselves: is AI going to take my job? And I just read, and I will drop this in the show notes for you, an article that estimates AI will eliminate 85 million jobs. However, it will create 97 million new ones. So that’s a net gain of 12 million new jobs. What is your comment on that?

[00:32:27] Chris Roberts: I think we touched on this a little bit last time, but not in depth. And if I said this before, it’s worth repeating. You are more likely to be replaced by a human who knows how to use AI than by AI itself, because the people that are able to use the tools at our disposal more effectively are the ones that usually are more productive. And the ones that are more productive get the eye of management.

They provide better customer service. They are, you know, more efficient in ways that people without the tools will not be. All right, let’s say you hire someone to redo your backyard landscape, and they show up with a shovel and a pick. Someone else shows up with a bobcat and a crew of people and all the plants from Home Depot or Lowe’s, and they’re ready to go. Who’s going to finish first? Well, that’s what AI does for someone in their career: it helps you be more efficient at what you’re doing. Now, does this come with caveats? Yes. And to those that think AI is the doomsayer of humanity, there’s a 50-50 chance that it will be. And I think if you ask Sam Altman, Musk, Gates, or anyone who is in the know around technology, they’ll say, yeah, there’s a chance AI could be the end of humanity. There’s always that possibility. We can also get hit by an asteroid. So there’s always a possibility humanity can end. Pick your poison. What I will say is this: automation has been happening in humanity forever. Whether it was the discovery of fire, the wheel, the printing press, the industrial revolution, all those things changed the way we work and interact with our environment, our societies, our economies. They always change. Do we want to go back and make things the old-fashioned way? Does anyone churn butter? I mean, if you go back and look at how butter and cream used to be made, it was actually this bin with a stick. You went up and down and churned the butter, literally, and it was hard work. I want to go to the grocery store, buy a pack of butter, bring it home, and put it in my fridge. I don’t want to churn butter anymore. So automation always changes our lives. Now, do we need people to churn butter anymore? No. Okay. But there are other parts of the dairy industry that have been automated. There are things that now are much better and easier to use because of automation.
And the people that used to do those things do something else. They didn’t just go, oh no, my job’s gone, I’m going to lie down and die now.

[00:35:03] Karen Stones: That’s not how it works.

[00:35:04] Chris Roberts: This is humanity.

We are constantly evolving and becoming our better selves. There’s no need to think that AI is going to somehow take everything away from us. It can’t. It depends on us. Think of it as a symbiotic relationship. It needs us just as much as we need it right now. And for us to say that AI will take our job is to say that, you know, my dog was going to replace me in the house, like, my spouse is going to like the dog more than me, and I won’t have a place to be anymore. That’s nonsense.

[00:35:33] Karen Stones: That could happen. That could happen.

[00:35:36] Chris Roberts: It could happen. Now, on the flip side of this, the best example I can think of is a bank. ATMs were introduced in the late sixties, early seventies, became really popular, and everyone freaked out. Every teller in the bank went, oh no, I’m going to lose my job. Now, do we have 15 tellers in a bank anymore? No, we probably have three or four, depending on the size of your bank. We still have tellers, and we have lots of ATMs everywhere. It just changed the way we interact with the financial system. And there are so many more examples of this. Now, does this mean that a corporate entity or some nefarious management group will say, you know what, we can replace all these people with machines and computers? Yeah. There will always be someone who will try that. My grocery store tries it right now with self-checkout. No, I go to Trader Joe’s, Aldi, anywhere that has cashiers, because I’m not going to put everything in my cart, then take it out of my cart, scan it, bag it myself, and let you make a margin off of that. I find it offensive. So there are parts of automation that they will attempt and will fail at. Amazon Fresh tried this with walk-in, walk-out shopping, where it would scan everything automatically. Well, they stopped that. You know why? The technology didn’t work. They were using outsourced labor overseas to go through the video, figure out what you put in your cart, and then bill you for it. That’s why, if you were ever using Amazon Fresh, it sometimes took 24 hours for the credit card to be charged with what you walked out of the store with, because they had humans trying to read the video to see what you put in your cart. That is the most inaccurate use of technology. So automation for automation’s sake is not the best thing. Will there be people who keep trying it?
Yes, but just remember, humanity has always found a way to make our lives easier, to make them better, and it is going to continue. And if you freak out about AI taking your job, just remember. Butter. Do you want to churn your own butter?

[00:37:36] Karen Stones: Well, okay, that’s a perfect segue into our closing game. You know you’re old when.

[00:37:42] Chris Roberts: What?

[00:37:43] Karen Stones: Okay, here you go, Chris. You know you’re old when you refuse to use e-deposit, so you still stand in line at the bank.

[00:37:56] Chris Roberts: You know what, that one I will agree with. Oh, no, no, wait a minute. You know you’re old if you’re still writing checks.

[00:38:05] Karen Stones: Oh, yeah, there’s always that. Oh, yeah, they still pop up.

[00:38:11] Chris Roberts: If there is a checkbook in your pocket, in your purse, in your back pocket, sorry, you’re old. Yeah, I can’t say the last time I’ve used a check. I don’t know.

[00:38:22] Karen Stones: I still use one very occasionally, you know, to pay a bill. But I’m not, like, writing a check at the grocery store. That is super old school.

[00:38:34] Chris Roberts: All right, so I know I probably just hit a nerve for someone who’s listening right now, because they’re like, but I like my checkbook. I have a checkbook and I have my register and I see exactly what my balances are. Yeah, okay, but if you’re paying bills with checks: with your bank’s online banking, when you go online, there is an option for you to create a payee. And even if that person is not in the system, your bank will print a check and mail it to them for free. So there’s no need for a checkbook.

[00:39:09] Karen Stones: Yeah, I suppose I could look into that, because I still do have a checkbook.

I do, so maybe I just proved my age here. I do know folks who are very concerned about things like Venmo and Zelle, in particular people beyond the age of, I’m going to say, 60. They don’t trust it. Have you come across that before?

[00:39:34] Chris Roberts: That is actually a legitimate concern.

Financial transactions are the number one area where fraudsters and criminals look to exploit consumers, plain and simple, because it’s the exchange of money. You know, why do they rob banks? Well, that’s where the money is. Why do they try to scam Venmo or Cash App or PayPal or any of these online payment systems? Because that’s where the money is now. And all those systems have one thing in common: they tie to a real physical payment method, whether it’s a credit card or your bank account with your routing number. So there’s a rich amount of financial data there. So yeah, you’re right to be cautious about those platforms. But I will say this: you have to have the right amount of security around those platforms. I would never use any of those payment applications if they did not have two-factor or multi-factor authentication. If you don’t know what that is: when you go to log into your bank and it sends you a text message, that’s two-factor authentication. If it also asks you to use an authenticator app on your phone, that’s multi-factor authentication. If it does all those things and asks you to plug an encryption key into your laptop, that is even more multi-factor authentication. So I would not use a financial application unless it had at least two levels of authentication.

[00:40:59] Karen Stones: Well, Chris, we didn’t get to all the questions today. Keep them coming. If you have them, drop them in our socials anywhere, DMs, we pick them up and we’ll bring them back into an episode. You are our technology expert here, Chris, so thanks for sharing your knowledge with us today.

[00:41:19] Chris Roberts: Anytime. Okay, let’s wrap this up, cause I gotta go make some butter.

[00:41:26] Karen Stones: Thanks again, Chris.

[00:41:28] Chris Roberts: My pleasure.

[00:41:29] Karen Stones: And that brings us to the end of another episode. I hope you enjoyed the conversation as much as I did. Okay, so if you haven’t already, make sure to hit that subscribe button so you never miss another episode. If you’re loving what you hear, I would be incredibly grateful if you took just a moment to rate and review this show on your favorite podcast platform. It helps others discover us, and it’s a great place to share your thoughts, suggestions and ideas for future episodes. For even more exclusive content and detailed show notes, check out our website at thirtyfivesixtyfour.com, and that’s 3564 spelled out. As always, a huge, huge thank you for spending time with me today during this episode. I appreciate that you tuned in. I’m going to leave you the same way I do every episode. Remember, it’s not too late, you’re not too old, and you’re definitely not dead. Okay, until next time.

  • Karen Stones

    Show host Karen Stones is the creative heart of ThirtyFiveSixtyFour. Born in 1979, Karen is a child of Generation X. As a podcast enthusiast, she noticed a major void in content catering to listeners her age. Karen found existing productions were either niche, evangelized negative perspectives on aging, or hosted by a well-meaning young adult who lacked the wisdom and life experience to provide meaningful insight. Thus, ThirtyFiveSixtyFour was born. The philosophy behind ThirtyFiveSixtyFour stands in stark contrast to the conventional midlife crisis narrative, advocating instead for midlife to be seen as a time of confidence, reinvention, growth, reflection, exploration and renewal.

    Karen has over twenty years of mass communication and marketing expertise. Her journey in media started early, as she interned for notable figures like Larry Morgan and Ryan Seacrest at the Los Angeles FM radio station STAR 98.7. During her university years Karen served as a disc jockey for the on-campus, student-run radio station. Following a successful career in the corporate world, she took the entrepreneurial plunge, founding 13 Jacks Marketing Agency in 2014. The agency currently oversees multimillion-dollar projects including global product launches, international events, specialized social media and advertising campaigns. Beyond her agency pursuits, Karen extends her expertise to coaching executives seeking to enhance their business strategies and personal growth. Based in Orange County, California, Karen is a dedicated mother to three and an outdoor enthusiast.

