Matt Scheurer | Nov 22, 2024

November 22, 2024 00:42:05

Hosted By

Ari Block

Show Notes

In this conversation, Matt Scheurer shares his journey into cybersecurity, discussing the evolution of security practices, the importance of data protection, and the challenges posed by social engineering. He emphasizes the need for user education alongside technical controls and explores the future of cybersecurity in an increasingly digital world. The discussion also touches on the balance between security and privacy, particularly in the context of user authentication and data management.


Episode Transcript

[00:00:00] Speaker A: Matt, welcome aboard to the show. So happy to have you on today.

[00:00:04] Speaker B: Hi everyone. Thanks so much for having me. It's a great pleasure to be here.

[00:00:09] Speaker A: Matt, what got you into cybersecurity? What is the beginning of your journey?

[00:00:14] Speaker B: Ah, yes, the old origin story. So growing up as a kid, microcomputers were hitting the mainstream and there was a big buzz about them, the way you hear AI everywhere today. These were back in the 8-bit days, and I was just enthralled because it was something new and exciting. As maybe a bit of a socially awkward kid, gravitating to technology sort of spoke to me. But what really enamored me were the early hackers, the people who were able to do things with this technology that the original designers didn't anticipate and could never have predicted. People would find ulterior uses, or bend these things to their will to do some amazing things. So I was regaled with tales of hacker lore back in the early days, and then the movie WarGames comes out, and wow, did that ever speak to me. I could identify with the David Lightman character played by Matthew Broderick, and so that was also very inspirational to young me. And then I can remember, in the mid-90s, there was an InformationWeek magazine article on Dan Farmer, who had authored this tool called SATAN for short. This was a multiple vulnerability scanner, and he'd open sourced it, and there was this raging debate about what he did and whether or not it was a good thing. We didn't really have a notion of offensive security to harden your stuff back then. But what caught my attention was that he was the very first person I'd heard of who was doing computer security for a living, at Sun Microsystems. And I just thought, wow, computer security as your core job focus.
That just spoke to me: that's gotta be the coolest job I've ever heard of. And it took a really long time. I would never have imagined back then that I would get there. But finally I did get there, and I've just never really looked back ever since.

[00:02:31] Speaker A: You know, it's funny that you talk about that. A big part of the argument was security through obscurity. What does that even mean? Does anybody believe in that anymore? And what are the pros and cons?

[00:02:45] Speaker B: Sure. So the idea behind security through obscurity is you have something that is not widely deployed or well known, or something that became antiquated. I joke sometimes, when something is so old and it's improperly secured, that I refer to it as security through antiquity. But truthfully, security through obscurity is just kind of hoping that people won't be able to figure this thing out. Some of the things that we read about in terms of vulnerabilities are widely deployed, widely used platforms; because they are so prevalent, they often get targeted. And once one vulnerability comes out for a thing, whatever that thing is, other security researchers and threat actors alike will find it interesting, so they start looking at other potential vulnerabilities that this particular piece of software, or site, or service might have. With security through obscurity, again, you've got something that maybe isn't super prevalent, and you're kind of hoping that people won't figure out that it's vulnerable. And I can cite you a specific example. Once upon a time, way back, Motorola had these mobile data terminals that would exist in emergency medical and police vehicles, for example, and these were the mobile terminals that would communicate over the airwaves, through radio towers and radio signals.
And these things would send signals that would get picked up by the police cruisers and display on the terminal inside the police cruiser, for example, or in an ambulance, the same kind of thing. These things were not encrypted. So if you were smart enough to know how to take a radio or a scanner and bypass, or tap, the discriminator, which would filter out all the clicks and the data part of the signal, if you will, you could get the full feed from that radio. And I believe it was a gentleman by the name of Bill Cheek who did a lot of reverse engineering work on this stuff and figured out how to decode these particular protocols. All of a sudden, within hacker and ham radio communities and things like that, people would spin up applications based on the source code that was released on how to decode these signals. So what that meant was an enterprising hacker could set up a scanner radio, feed it into their software, decode the signals, and read out the same things that were displayed in these ambulances and police cruisers. That's a perfect example of security through obscurity, because they never anticipated people would figure this out.

[00:05:45] Speaker A: That's absolutely crazy. Was there any malicious usage of that? Were reporters ambulance chasing or anything like that because of this? Or did it never come to that?

[00:05:56] Speaker B: I wouldn't be surprised if it happened. I don't have a specific story that I know about.

[00:06:02] Speaker A: Right. I love that story. I want to bring us into today's landscape for a second and stay on that same topic: when we make architectural decisions. And I want your lens as a hacker. On one side I want to say, okay, I'm on AWS, I want to use Lambda predominantly. I want to try and stay away from servers because I don't need to deal with them. I don't need to do the patches, I don't need to do anything like that.
And Amazon is just handling it for me. But on the other hand I'm thinking, well, I'm on the most popular platform, right? Amazon or Azure, Google, whatever. And then I'm a main target. Everybody is trying to hack this. So the second a patch comes out and it's not fast enough, all organized crime is after me and everybody else. Is there a better architecture choice today that companies can make, or is it just pure luck?

[00:07:04] Speaker B: I think the biggest thing is, again, when you pick a platform, regardless of the platform you pick, it's really about making smart choices within that particular platform, as much as you can. And the thing is, for the most part there's not native support, for example, to capture packets and be able to do decrypted packet analysis for things that are in the cloud. You can bolt in solutions to do that kind of thing, but they're not just natively there. So really you are at the mercy of your monitoring, your logging and alerting, and really paying attention to those things. When it comes to these platforms, there is a bit of a double-edged sword with security through obscurity; we talked about that. But when you are on a super popular platform that is heavily targeted, you have to keep up to date with what those changes are. Now, the good thing is that a lot of these larger vendors have a serious interest in fixing the problems that arise, and they typically have a lot of resources they can throw at something; if they prioritize something as a top priority, you can anticipate that it's going to be fixed relatively quickly. I think the biggest things are really thinking about how data gets into a platform or an application or a system, how the data gets out, and then what's happening to the data when it's at rest, when it's not coming in or going out.
Because really those are the attack points that you need to be concerned about, and it doesn't matter what you have or which platform you're using; those are the things to be concerned with. And again, do what you can to ensure that you have adequate logging. As a seasoned incident responder, I can't tell you the number of times that we've wanted to investigate something, and if we don't have good logging, it becomes much, much harder to figure out what happened. We're trying to figure out who, what, where, why, when and how, and answer those critical questions. If we don't have the information, then we really have to start getting into deeper forensics, and that gets really, really hard.

[00:09:42] Speaker A: I would add on to that. You actually said it, you said the who. Wherever you're doing the logging, however you manage your identity, make sure it ties back to individual people. I see these logs and it's just like, who? Because you can then go and ask, well, you know, Mr. X, did you actually do this? And then you figure out, oh, somebody was just testing the database. It's like, okay, I get it. As opposed to spending hours trying to figure out if this is real or not. So I really appreciate that you talk about... go ahead.

[00:10:11] Speaker B: I'd say, more to that point, you also have to worry about shared secrets getting leaked.

[00:10:17] Speaker A: Yeah.

[00:10:17] Speaker B: So just because it shows as attributable to a particular individual, well, if their credentials or their keys or their tokens were leaked, that also becomes a challenge.

[00:10:30] Speaker A: So you talked about at rest. I don't think there's a trivial understanding of why the at-rest data is so important.

[00:10:41] Speaker B: Yeah, I think when you get to at-rest data, what I've found, and the alarming thing for me personally, is that sometimes people don't really realize what's in their data.
And what I mean by that is: we had no idea we were retaining Social Security numbers, for example, here in the United States, or we had no idea we had all this personally identifiable information that maybe we didn't really need for the application. The other thing that people tend to forget sometimes is that if somebody has an opportunity to input data, there's not necessarily a guarantee they're only putting in the data they should. For example, I've done an investigation where I saw people including full credit card details on things like receipts that should never have that information. But are we training the users, and are we putting in the safeguards, so they understand: be really careful about what you upload? It's great that you trust our platform and our system, but sometimes we have to protect users from themselves. It only takes one misconfiguration or one oops moment for things to go from nice and secured to all of a sudden leaked. And those are really scary things. So I think we owe our users a bit of training and explanation and just caution, something we tend to forget as technical folks, because we know how something should work, but it's open to interpretation when it gets to users. And when we run things through QA cycles for quality assurance, we're testing use cases, not misuse cases.

[00:12:28] Speaker A: And I'll mention that there are pretty easy-to-use tools that can scan your data and see if you have, and you don't even know, SSNs, credit cards; these things have patterns that are easily recognizable. But nobody is telling the user, you know. And even if you did tell the user, hey, please don't put in this kind of data, who says they're going to follow it? And what I've seen is that a lot of times this is actually a derivative of the support process, and it wasn't necessarily intended by the software developers, the security officers or the technology officers.
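To make the pattern-scanning point concrete, here is a minimal sketch of the kind of tool Ari describes: it flags US-SSN-shaped strings and candidate card numbers in free-form text, validating the latter with the Luhn checksum to cut false positives. The regexes and function names are illustrative, not taken from any particular product, and real scanners use many more patterns and context checks.

```python
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")          # 123-45-6789 shape
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")         # 13-16 digits, optional separators

def luhn_ok(number: str) -> bool:
    """Luhn checksum: weeds out digit runs that merely look like card numbers."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan_for_pii(text: str) -> dict:
    """Return suspected SSNs and card numbers found in free-form text."""
    cards = [m.group() for m in CARD_RE.finditer(text) if luhn_ok(m.group())]
    return {"ssn": SSN_RE.findall(text), "card": cards}
```

Running this over uploaded receipts or support tickets would surface exactly the "oops, we have that" data the conversation describes, before an auditor or attacker finds it first.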
It's kind of like, oh, oops, we have that? Really? So that's such an important point; I appreciate it. Let me ask you this: when it comes to securing your data at rest, is it enough just to turn on your S3 and database encryption, or is it more than that?

[00:13:19] Speaker B: So again, it kind of goes back to visibility and logging, because when data is accessible, you really have to watch it. And there's also a bit of governance around identity here, really making sure that only the right, limited number of sources have access to the data. It's that old need-to-know principle, the principle of least privilege, where we have to be really careful. I have seen, for example, instances where, doing an investigation and wondering, okay, how did this data get leaked, all of a sudden you start looking at it and go, well, look at all these people that had access to this thing, and then you discover that maybe it's been shared with a third-party partner. And at that point all bets are off. So yeah, it could still maybe be a malicious insider situation, but probably it's this trusted partner that perhaps shared this information with somebody they know, or sort of went off on their own internally doing these things. So I think good identity and access management controls are another key component when it comes to data at rest, because if anybody has access to a particular set of data and their credentials get compromised, or their endpoint gets compromised, now people can pivot their attacks as that person, and they are in scope for a security investigation, particularly when it comes to a thing like a data breach.

[00:15:16] Speaker A: I mean, at the end of the day you can have, you know, Fort Knox data with pipes made out of platinum or whatever the strongest metal is.
But if the secure data in the secure pipe goes to a bad actor, they're getting it all very securely; it's just going to the wrong person. This really brings us back to the topic of endpoint security, right? The endpoint device, the browser, it's so open. I've just made a decision not to install almost any browser extensions; it seemed like that is just a nightmare. The stuff that we install on our PCs also seems to be a huge risk. But then even the way that we authenticate identity: I don't see a lot of people doing this kind of thing, right, the physical tokens. People are putting in passwords, and it's the same password for 20 websites. They're frustrated, honestly, from needing to change their passwords every three months in some cases. There's gotta be something better happening. What do you think the future of individual user security needs to look like?

[00:16:29] Speaker B: Yeah, so I think there has been a big push to try to get more to passwordless, which has some merit to it. Not that anything is infallible, but I really think it's going to prove to be, one, difficult, but two, a step in the right direction. And some of the things that have been out there work a little better than others. For example, biometrics: I don't know what it is, but my fingers tend to not print very well. So trying to do a fingerprint or thumbprint scanner for me, maybe it's because I'm a germaphobe and I wash my hands entirely too often, I'm not sure what it is, but boy, these things are just a nightmare for me. Some of the other things that have been out there, for example, Microsoft Windows has what they call Hello, which seems to be pretty good. Again, these pseudo-credentials or credential substitutes live in databases somewhere, so that information has to be protected, and you always have to be a little bit careful with that.
But I think that we've gotten good there. I can't stand the marketing term zero trust. I can't stand that term. I can't begin to tell you how much I dislike that term. Really, it's enhanced access control, but that's okay.

[00:17:57] Speaker A: So this requires a little bit of explanation. What does that mean? And what's your beef?

[00:18:03] Speaker B: Yeah, absolutely. So what vendors love to market as zero trust is really kind of enhanced access controls. And these are looking at things like, for example, geofencing on IP address. I'm here in Cincinnati, Ohio. If I'm traveling, maybe I would try logging in from somewhere in California, for example, on the other side of the country. But that's unusual, so that should be something that gets weighted to determine, all right, what's the risk here? Because this is unusual. We get into these user behavior analytics. But what are the other things, right? Am I coming at it from the same operating system, the same browser I normally use? If these things start to not line up all of a sudden, again, when we're weighting these things out, that should start to look more and more suspicious. And then also time of day: for example, if I'm typically logged into a system between the hours of 8am and 5pm in whatever time zone I'm in, and now all of a sudden I'm logging into something at 1am in that local time zone, again, we put these things together and we start to weight them, and then it becomes: maybe we need to do some extra verification that this really is Matt. And maybe we need to kind of isolate him, give him access to far fewer resources than he would normally have. That's when we get into what's called microsegmentation, which kind of, for lack of a better word, pigeonholes me into certain access to certain systems, those kinds of things.
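The signal weighting Matt describes, location, browser, time of day, each contributing to a combined suspicion score, can be sketched as a toy risk scorer. Everything here is invented for illustration: the baseline, the weights, and the thresholds would in practice be learned per user from login history, not hard-coded.

```python
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    country: str
    browser: str
    hour: int  # local hour of day, 0-23

# Hypothetical baseline learned from this user's history.
BASELINE = {"country": "US", "browser": "Firefox", "work_hours": range(8, 18)}

# Hypothetical weights per anomalous signal.
WEIGHTS = {"country": 0.5, "browser": 0.2, "hour": 0.3}

def risk_score(attempt: LoginAttempt) -> float:
    """Sum the weights of every signal that deviates from the baseline."""
    score = 0.0
    if attempt.country != BASELINE["country"]:
        score += WEIGHTS["country"]
    if attempt.browser != BASELINE["browser"]:
        score += WEIGHTS["browser"]
    if attempt.hour not in BASELINE["work_hours"]:
        score += WEIGHTS["hour"]
    return score

def decide(attempt: LoginAttempt) -> str:
    """Step up friction as the combined score grows."""
    score = risk_score(attempt)
    if score >= 0.7:
        return "block"          # multiple signals off at once
    if score >= 0.3:
        return "step-up MFA"    # extra verification that this really is Matt
    return "allow"
```

The point of the weighting is exactly what Matt says: no single odd signal blocks a login, but a 1am attempt from an unfamiliar country on an unfamiliar browser crosses the line from "verify" to "deny".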
And then if it really looks suspicious enough, for example, I don't even have a passport, so if it shows me traveling internationally, it should just outright block that connection and not allow me access in the first place unless that situation changes. So these are what get marketed as zero trust, and again, I prefer to think of them as enhanced access controls, because that's genuinely what they are and a much more apt description.

[00:20:17] Speaker A: Yeah, there's definitely an important aspect of security which is really what I've called a behavioral anomaly, right? If you're a consultant and that's what you do for a living, and every month, day, week, whatever, you're in a different country, you're going to see that happening over the last three, six months, whatever, so it's not going to be a huge surprise. But if for a year you've been in your office at the same IP, or whatever, and that's an oversimplification, I apologize, that's what you're going to see. And if it's different, then sure, why not do 2FA authentication or whatever. I think it makes a lot of sense. But I have that same kind of frustration around, like, SaaS. There's no such thing as SaaS; we had that a long time ago. The marketing people came and they were like, oh, let's give it a great name.

[00:21:04] Speaker B: And.

[00:21:04] Speaker A: And then all the executives kind of... So, like, I've come to peace with what marketing people have done to technology over the years. It's fine. You want to give it a flashy name? Okay, I'll get on board with that.

[00:21:17] Speaker B: Yeah. And more to, like, geographic locations: it shouldn't be an impossible travel situation. For example, I shouldn't be in Cincinnati right now, and then 20 minutes from now in Dallas, Texas, or in a European country, for example.
Like, that impossible travel situation should also factor in, because again, it would be impossible even if I hopped on a plane right away; the flight would be at least two hours, let alone getting to where I could plug in and use technology and so forth. So, yeah, that should factor into these things when we get into user and behavioral type situations. But you're exactly right, it needs to be tailored, so that what's usual for me might not be usual for you and what your proper logins should look like. Things are going to be much more normal for each person individually.

[00:22:22] Speaker A: That's right. And I would say that, at least to me, that's one of the key elements around social engineering as well, because you as a human being should be thinking in that same way. Has my boss ever communicated to me in this way? Is this a weird hour for my boss? Is this a weird day for my boss? Has my boss ever sent me an SMS before telling me to buy, you know, whatever, Best Buy vouchers? Is this normal or is this weird? And if you get any sense of something being weird, then in the same way that the algorithm should do 2FA, you should call your boss, not on the number in the text, but from the number that's in your saved phone book.

[00:23:05] Speaker B: Yeah, great point. And something I like to do is, if my radar goes up a little bit, I will ask a question that that person would genuinely know the answer to, and that, for example, would not be a quick turnaround. I can cite you an example. I actually had a friend request on a social media platform from somebody I went to elementary school with ages ago, and we were already connected, but the profile just didn't look right. And so I think I asked them, like, hey, what was the name of that teacher we had back in first grade?
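The impossible-travel rule from the exchange above reduces to simple geometry: compute the great-circle distance between two consecutive logins and the speed implied by the time between them. This is a minimal sketch under stated assumptions; the 900 km/h cap (roughly commercial airliner speed) and the function names are invented for illustration.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0
MAX_PLAUSIBLE_KMH = 900.0  # assumed cap: roughly airliner cruising speed

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def impossible_travel(prev, curr):
    """prev/curr are (lat, lon, unix_seconds); True if the implied speed is superhuman."""
    dist_km = haversine_km(prev[0], prev[1], curr[0], curr[1])
    hours = max((curr[2] - prev[2]) / 3600.0, 1e-9)  # avoid division by zero
    return dist_km / hours > MAX_PLAUSIBLE_KMH
```

Matt's own example checks out: Cincinnati to Dallas is roughly 1,300 km, so a login 20 minutes after the previous one implies a speed of several thousand km/h, well past any plausible travel.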
If this is really you, they should be able to name that person, or name the school, name these things that wouldn't be super easy to find online on a quick turnaround. So if you're ever suspicious, I would encourage any of the listeners: start asking questions that you think only that person and you would know or have in common, so that when you do those things, you can identify a fraud from what's real.

[00:24:11] Speaker A: I mean, even something as simple as, oh, are we still on for tonight, for whatever, for the bar? And if they say, yeah, sure, see you there, and that is completely invented... Like, you can even invent a lie and say, hey, are we still on? If the person says "what?", then you know it's okay, to a certain degree.

[00:24:30] Speaker B: Or, hey, did our travel get approved to go to Paris, France next week?

[00:24:35] Speaker A: Yep, exactly right.

[00:24:36] Speaker B: And try to set up these things. You're exactly right: you can socially engineer a social engineer right back, and put the challenge and the onus on them to prove that they really are the legitimate person they purport to be.

[00:24:54] Speaker A: Yeah, I saw this article about how they basically created an AI voice bot that was wasting time for these social engineers, wherever they are. And I was wondering about the day when the social engineers are not going to be humans; it's just going to be an AI, and then you're just going to have two bots talking to each other. And really, is this what we paid for? To have a conversation between a bot protecting and a bot attacking? Is this where our electricity is going?

[00:25:28] Speaker B: Yeah, absolutely, we're entering a stage now where we're going to have AIs battling each other, one trying to basically commit cybercrime, the other one trying to do cyber defense.
And these systems are essentially going to go to battle with one another. And yeah, we'll have to keep an eye on these things as time progresses and the solutions get better on both sides of that aisle.

[00:25:55] Speaker A: And I mean, it kind of opens your mind to: is there a day coming when hackers are not going to be hacking data and systems, but hacking AI? Feeding AI false data, trying to convince it to do things, give away secrets, share other users' secrets. It just seems like there's a new generation of hacking in front of us.

[00:26:16] Speaker B: Yeah, I always tell people, no matter what it is, anything that you put online, just assume that it's eventually going to get leaked out to the entire rest of the world. And if it can be used against you or for somebody else's financial gain, that's probably going to happen. And since it's on the Internet, it's probably just going to live out there forever and ever. So I always caution people: be careful with anything you post online, regardless of the platform, regardless of anything else. Before you upload, for example, photos on social media platforms, stop to think what happens if this gets outside of this tight circle and leaked other places. And the same thing with AI, right? If it's on the Internet, it's getting attacked. I can tell you, as somebody who's looked at a lot of log files, whatever you put online is being attacked almost instantly the moment it comes online. And those attacks happen all day, every day. They do not stop. It's just a constant state of attack.

[00:27:24] Speaker A: I think that the wider notion is that anything that is written down, in any form, you have no control over. You sent an email, you sent a letter, you wrote a text. Anything that is written down, you think you're sending it just to Joe or whoever; you have no idea where it's going to go.
The amount of times that I've sent an email to one person, and obviously it's the emails where I was frustrated about something, and then I get that email back with 50 threads, right, with the whole company on CC: "this thing that Ari said". I'm like, oh my God, I'm never doing that again. And if you just think about the fact that once you put something out into the world, you have no control, no matter what it is. I think both from a personal and a security practice standpoint, that's such an important point. I appreciate that, Matt.

[00:28:14] Speaker B: Absolutely.

[00:28:15] Speaker A: I wanted to ask you: social engineering is becoming incredibly scary. We kind of started to go down this path. I'm hearing stories about government executives getting on Zooms and it taking them five to ten minutes to figure out that they're talking to an avatar, basically. This is getting to a level... I got an attack yesterday, a phishing attack. It was an actual email, actually from PayPal, and that's what caught me, as in: wait, hold on, is this real or not? And what they did was pretty brilliant. They actually used the ask-for-payment feature as the attack mechanism. And then their phishing was actually two things, which was also brilliant. One is, in the message they said, and here's another brilliant aspect, they said: if this doesn't look right to you, call this number. So they were using the psychology of the attack against me, and they were putting in a phishing number, obviously, which would connect me to their call center. But then the second element of the attack is there was a "pay now" button. So either I paid what they wanted, and I fell for the scam that way, or I was like, oh no, this doesn't look right, I'm gonna call the number. And then I thought, you know what? It took me five minutes to figure that out. Sure, I looked at it and I was like, hold on.
But still, I couldn't put my finger on it. And if I, self-proclaimed king of the lemurs, consider myself to know something about security, and it took me a minute, I mean, what hope does anybody else have? So if it's becoming more and more difficult for human beings to actually figure it out, do these security trainings really have any merit anymore? Or are the technology and the methods just so sophisticated that we have to come up with a technology solution?

[00:30:07] Speaker B: I think it's a combination of both. I think, one, we need to continue to educate people and sort of implant a healthy dose of skepticism within folks who are used to being trusting: saying there's something about this that just doesn't seem right, and starting to ask those questions and trying to figure out, is this legitimate or is it a scam? And like you talked about, reaching out to the person through known good communication channels that you've used with that individual before is pretty important. I think we also need to do some things with technical controls. For example, some web platforms are very good at identifying things like: if you're getting sent to a website address and that website has existed for seven days or less, the chances that it is a malicious address are much higher than for, say, an established site that's been online for two years and has a good online reputation. Now, it's not a perfect thing, though, because threat actors will monitor for recently expired domains that have a long history, register those, and then use them in their social engineering attempts. So it's not perfect, but it's sort of a step in the right direction when we start looking at these things and classifications and so forth.
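The domain-age heuristic Matt describes can be sketched in a few lines. In practice the registration date would come from a WHOIS or RDAP lookup; here it is passed in directly to keep the sketch self-contained, and the seven-day and two-year cutoffs follow the numbers in the conversation rather than any standard.

```python
from datetime import datetime, timedelta

def domain_risk(registered: datetime, now: datetime) -> str:
    """Classify a domain purely by how long ago it was registered."""
    age = now - registered
    if age < timedelta(days=7):
        return "high"      # brand-new domains are disproportionately malicious
    if age > timedelta(days=730):
        return "low"       # long history, presumably some reputation
    return "medium"
```

As Matt points out, this alone is weak evidence, since attackers deliberately re-register aged, recently expired domains; age should only ever be one weighted signal among many, alongside reputation feeds and content classification.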
And I think some of the technology providers have tried to step up. For example, some of the web browsers now, if you try to go to a known malicious site, will actually halt you and throw up a red page saying: hey, something doesn't look quite right here, you might want to go back, but if you really want to proceed, click here. For anybody listening who sees something like that, that should be a huge red flag, because typically you shouldn't see that. That should be the kind of thing that prompts going to these other communication channels and doing that outreach. You should be highly suspicious of anything out of the ordinary like that. That's not to say, like you said, that the PayPal one didn't look perfectly normal, but there's that thing in the back of your head that says, this just doesn't feel quite right, because I don't remember ordering something. So again, I think it's that combination where we're gonna have to get a little bit better at our technical controls, and building them into more things, plus helping people understand what scams are and how they happen. And what I love about the podcast named Story Samurai is that by sharing stories of how people were unfortunately scammed, the things that they lost, and the struggles they had to get their life back in order and disprove things, for example if somebody's taking out a bunch of lines of credit against their name, people learn to stop those things. And as for resources, Brian Krebs does a really good job of explaining a lot of these scams and so forth, so just a quick plug for Krebs on Security. And Graham Cluley, I think, is another one who does a great job of helping people know what is out there. There are some other great sources as well; those are just a couple right off the top of my head. And those individuals have done a really good job trying to educate the public.
But share those articles with your friends and family when you see something and think: my friends and family should know about this particular scam, so that they aren't taken in by it. So I think, again, getting back to education, and also doing what we can on the technical controls, it's really going to take a good combination of those two things to combat the problems that we face today with social engineering.

[00:34:21] Speaker A: That's such an amazing point. You know, one of my pet peeves around banking is that the text that they send you about a transaction can take 10, 20, 30 minutes to arrive, in some cases hours. And my point is, if they just made sure that that text arrived the second you did the transaction, that would be revolutionary. And I'll tell you why. My way of operating is, I always turn on these texts, and there's a certain bank I use, and I want to mention that they actually do these texts in real time as opposed to others. If I get the text and I didn't just use my credit card, I know there's a problem, and I can immediately call my bank and say that was fraud. It's as simple as that. Such a simple thing as having this live notification.

[00:35:17] Speaker B: I was just going to add: the other thing is, with some of these technical controls, we need a user education piece to go along with them. So you're talking about MFA. If you get a pop-up on your phone that says, hey, you're trying to do this, do you approve or disapprove, and this wouldn't be through text, but a specific application, people need to understand: if you see that and you aren't actually trying to log into something, even if it comes up a thousand straight times, keep clicking no, because somebody is trying to breach one of your accounts, whether it's a banking account or a work login or whatever that thing is. So we need that user education piece to go along with these technical controls.
And that's a great example where those two things absolutely have to go hand in hand. Because if a user doesn't know any better and they're just like, I'm tired of getting these pop-ups, yeah, that's me, and it lets a fraudster or a threat actor into an account or an environment, something like that, that can be a real problem. Sorry, I just wanted to add that real quick. [00:36:20] Speaker A: No, I appreciate that. And that's kind of like, when you look at military and intelligence, that is a common way of attacking, and what we call that is desensitizing the target. Right. So you're trying to create a behavior where the user is like, okay, I understand, I need to do this a lot. And then when it's actually malicious and it comes across, you're like, okay, this is a bug, I'm just going to click there. That's a very common intelligence and military strategy. And I would like to say that, you know, we're smart enough to know better, but I can tell you examples in recent history, I don't want to get political, where that has worked. So, you know, this is a real threat that we should all be thinking about. So I really appreciate that comment. [00:37:02] Speaker B: Yes. And sometimes, I've heard it called MFA fatigue. [00:37:06] Speaker A: Yes. [00:37:06] Speaker B: Where you just get tired of the pop-ups, so you just hit yes so they quit coming up, not thinking about the ramifications of what you just allowed. [00:37:14] Speaker A: Yeah, exactly. Right. Okay, so here's the thing. We're at the end of our time, so I'm gonna just drop one bomb and we're gonna see where that goes. I am slowly coming to think that any platform on the Internet should authenticate their users at the Social Security slash driver's license level. So have TSA-level authentication for any user on any platform. Right.
So really this idea of know your customer, extend that to everything, including Facebook, LinkedIn, Twitter, especially Snapchat, because that's a huge evil when it comes to, you know, child exploitation and all kinds of really bad things. Pros and cons. Why yes, why no? [00:38:05] Speaker B: So I think the why yes is because attribution is hard; that is, understanding who actually is doing a thing is very hard to do on the modern Internet. There are any number of VPN services that will let you come out of an exit node from just about any country, whether it's considered friendly or hostile to wherever you live, and people can sort of hide behind that. I've seen other legitimate services abused by threat actors just to hide where they're doing their stuff from. And I think that's the case for these enhanced verifications, as you've talked about. And some platforms will encourage it. One social media platform keeps badgering me: hey, verify you're you. And I'm like, but you want my legal first name and not the first name I go by, because I've got a very long last name with entirely too many letters in it. So I haven't done that verification yet. The scary part of that is that you're sharing your verification details with somebody, and those could end up leaked; we talked about leaks earlier. So that would be one strike against it. And I think the other argument against it is just privacy concerns. Do you have a right to be able to express yourself in a way that you can't be identified, to share thoughts and ideas? I will say it's a tragedy that people abuse that and use it for nefarious purposes, because there are some things, for example, that might not be popular, but you have maybe a point of view that you would like to express. And if somebody catches wind of that who disagrees, you know, some people we can agree to disagree with and move on. There are other people that will hold real grudges. [00:40:10] Speaker A: Yes.
[00:40:11] Speaker B: Against somebody that doesn't think just like they do. And I think it's a difficult balance. Right. I wish I had a great answer for you. So I'm glad you just asked pros and cons and not where I fall on it, because I can understand all sides of this argument. When it comes to legitimate services like financial services and those kinds of things, I think absolutely, we need to be a little more careful about doing those verifications and ensuring that people are who they purport to be, versus, you know, maybe another platform that people want to use for free speech purposes, that kind of thing. It's a tricky balancing act. [00:41:00] Speaker A: Yeah, that's it. Right. It's the exact 1984 argument, George Orwell. That's where I got stuck in my argument with myself. I'm like, sure, you can have know your customer; it doesn't mean that your identity needs to be public. But then, oh, if you get hacked and these databases come out, then it becomes an issue. Right. Do I want people knowing my healthcare, my political views, my, you know, whatnots? And freedom of speech really is a pivotal core of our indirect republican democracy. So there is no good answer, and I don't have a good answer, but I feel like we could do more. I'm just not sure exactly how. [00:41:45] Speaker B: Yeah, I think everybody's going to feel just a little bit conflicted about that if they're really thinking critically about both sides of where that falls out. [00:41:54] Speaker A: Yeah. Matt, thank you so much. I appreciate your time today. This has been absolutely delightful. [00:42:00] Speaker B: Yeah, it's been great fun. I appreciate you having me. Thank you so much.
