Are Turtles War Machines? AI-Enabled Cybersecurity Has an Answer

Episode Summary

AI-enabled security can process data faster and more accurately than humans, but can it tell the difference between turtles and rifles? We answer this question and more as we cover AI-enabled cybersecurity for network defense, insider threat, and user privacy, including considering whether AI ethics are simply business ethics. We also discuss asymmetric uses for nation-states in both offensive and defensive postures, plus AI-enabled malware and social engineering. Dani concludes with a deep dive into "Fog Reveal," a law enforcement cellphone tracking tool that'll make you squirm.


Episode Transcription

Grace: [00:00:00]

Hello, and welcome to R, a podcast by Harvard Kennedy School students. My name is Grace. I'm joined today by Sophie, Beth, Winnona, and Dani. Today we'll be asking the question: will AI actually improve the cybersecurity of network systems, or is it all a pipe dream?

I want to start with some background first. A recent Microsoft study shows that 60% of attacks in 2018 lasted less than an hour and relied on new forms of malware. So we know that the offensive side of malware attacks is on the rise, and they're getting faster and more unique.

And in the past, I think cybersecurity solutions as we knew them were pretty reactive. Something would go down, a user would complain, then the researchers would conduct a scan or an audit, do some sort of manual detection technique until they discovered a new malware sample to analyze and add to a malware list.

I think the way that AI has been promised for use in cybersecurity is to improve the three Rs: robustness, response, resilience. Robustness: it's [00:01:00] promised to self-test and self-heal. Response is purported to be very fast, and AI can process massive amounts of data; systems that generate decoys and honeypots are already on the market and available.

And then third is resilience: a system can improve its resilience by facilitating threat detection. And I know that's something that some of us have talked about before. So with that sort of background, I'm sure there are some ruffled feathers, which is why we love discussing stuff on this podcast. So I wanna talk to you about the efficacy of these broad promises. Do we think that AI will actually improve the three Rs of a system?

Winnona: I'm happy to take this, Grace. So I spend a lot of time in the threat vendor space. I also did a lot of threat intelligence analysis and, you know, detection and response. And when we're talking about AI systems, there has been a lot of hype in the last half-decade or so.

Right. And I wanna be very clear that some of the solutions AI has attached itself to in the detection and [00:02:00] response field, automation can already do pretty well, and manual solutions can also do pretty well. Take the YARA rule, for example. An analyst finds a new strain of malware

and signatures it. Ideally there are certain strings, functionalities, or bits of bytecode that can't be easily changed, and they write a signature and deploy it across their system. Right. And so that's not just going to find one file or one type of malware strain.

Ideally it'll actually catch different types of irregular or malicious functionality. And so there are things you can do that don't require AI that do the job almost as well. And maybe this is me putting my most spicy foot forward as a huge AI skeptic.

Grace: Please. I, I implore you.


Winnona: But there are a lot of false positives that AI brings up, because the data underlying some of these threats is not clean. It's not a simple yes-this-is-bad, no-this-is-[00:03:00]-not-bad. It's all of these different behaviors together that are concerning. For example, there's plenty of legitimate software that also acts like malware and sometimes gets caught up in AI-enabled solutions. And so regardless, you still need someone, a human in the loop, like a SOC analyst, going through these AI alerts saying, oh, that's a false positive, that's a false positive, that's a false positive, which clogs up the queue.

And sometimes analysts spend more time debugging their AI solutions than actually using them to hunt for threats or help their workload. Dani, do you feel like you wanna jump in?

Dani: I do. So I don't disagree with that, the risk of false positives. To me, the value of AI in network defense can be evaluated along two dimensions when it comes to false positives. One is: what are the risks to fundamental rights if those false positives occur? So if you're talking about network defense in the public sector, and we're talking about false positives that are [00:04:00] lighting up, you know, private citizens as potential threats, then I have a real concern, because we're talking about privacy rights.

We're talking about the Fourth Amendment. If we're talking about corporate defense systems, I have way fewer concerns, because as a function of using company equipment, you've ceded all rights to privacy already. And so to me, the risk that somebody's irregular behavior on their company device gets lit up and the SOC comes over and says, hey, what are you doing?

That to me is actually a good outcome of intense security-mindedness. So one dimension is: are we talking about a corporate system, or a private citizen accessing networks?

Winnona: Just as a clarifier, because I'm thinking of AI-in-cybersecurity solutions as, you know, helping your security operations center, your SOC, triage potential malware hitting your system, which I think is different from the insider-threat-type software that you're talking about.

Dani: Yeah, I'm talking about insider threat. I'm talking about any system in which we're using AI to sort [00:05:00] through suspicious behavior, so it could be insider threat or external threat. So that's one dimension: are we talking about a company deploying this on a network, or are we talking about systems that a private citizen might get caught up in?

The second dimension is: is the rate of false positives likely to have a dampening effect on people's overall security posture? If you get so many unusable alerts, is it going to, over time, create a culture in which people learn not to value the pings, because it's, you know, the boy who cried wolf? And so that is a real risk.

I think I, you know, couldn't begin to say, on average, across every company that's deployed AI as a network defense, what the net tilt is toward more security-mindedness versus less. That would be a pretty hard assessment to give. I will say, just back-of-the-envelope-wise, I am bearish on people's organic willingness to be security-minded.

So given the degree to which we have [00:06:00] enforced pushes toward being more security-minded, I'm always gonna be in favor of systems that encourage security awareness. I think we've got a long runway before we start seeing that negative effect of people saying, oh, I'm so tired of security, just because our natural inclination is to not be willing to be security-minded.

Grace: I feel like, as a cybersecurity professional, I kinda laugh when you talk about the dampening effect on the cybersecurity posture of whatever entity. I mean, that's assuming there is a cybersecurity posture to dampen in the first place.

Sophie, you have a point.

Sophie: Yeah. So the other thing that I think AI has a role in, and I totally hear the concern about poor data in, poor data out, but I wonder if there isn't an important role for, one, automating some of the common cybersecurity tasks, like vulnerability management, to an extent. And then also identifying patterns in larger data sets [00:07:00] that might be missed by manual analysis.

Even if that data is poor, maybe there are patterns to be learned that would be missed by manual review.

Winnona: I mean, you're right, Sophie. Maybe I came in a little too strong out of the gate, but there's definitely something to be said about pattern recognition. So one of the Rs that Grace mentioned previously is response, right? And so if you are going to work as an analyst with an AI system, like with any sort of machine learning process, you have to trust that the result of that program is going to produce more true positives than false positives in order to respond.

And so pattern recognition is great. But say I, as an analyst, am working with an AI, or with any sort of program, that gives me 51% false positives, and I have to go manually triage them as false positives. And then there's one [00:08:00] event that does piece some really interesting pattern recognition things together and is actually a malicious event.

The likelihood that I'm gonna look at that and understand it might not be as high as people would think, especially given the previous false positives. I'd be more willing, especially in a big old triage queue, to be like, mm, I don't know. Next.
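[Winnona's triage math can be sketched back-of-the-envelope. The alert count and false-positive share below are illustrative, not figures from any real SOC:]

```python
# Back-of-the-envelope sketch of the alert-queue problem: at a 51%
# false-positive rate, an analyst wades through more dead ends than
# real events, which is how the one subtle, genuinely novel detection
# gets dismissed along the way.

def alert_quality(n_alerts: int, false_positive_share: float) -> dict:
    fp = round(n_alerts * false_positive_share)
    tp = n_alerts - fp
    return {"true_positives": tp,
            "false_positives": fp,
            "precision": tp / n_alerts}

stats = alert_quality(200, 0.51)
print(stats)
# {'true_positives': 98, 'false_positives': 102, 'precision': 0.49}
```

Precision below one half means every alert is, on the numbers alone, more likely noise than signal, exactly the condition that trains analysts to click "next."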

Bethan: Yeah, Winnona, I think that point on whether the human in the loop trusts the software, or trusts the alerts, is super important. And I think we actually see that also in some consumer financial protection AI, right? Credit score trackers or other types of things that will ping you if there's an alert or an issue.

I know I have one, and most of the time it pings me, it's not an actual malicious attack on my credit score or my identity, et cetera, et cetera. But I think another interesting application that adds a bit of benefit to this broader debate is detecting threats at a high operating tempo.

And this is [00:09:00] a point that the DoD has really pushed in its integration and adoption of AI, in terms of the fog-of-war or warfighting dynamic. There are often times where humans do have an upper limit on how fast we can monitor things, like monitoring a network for threats, and AI can supplement the physical and mental abilities of a human operator.

I think that's very specific to the warfighting context. So I think that's an important point, and I see it very much in the DoD's language and rhetoric, and not just the DoD, other nation-states and their militaries too, in terms of supplementing existing human capacity. But I think that is one facet of the debate, and I'm not sure it makes up for a lot of the shortcomings.

Dani: That's a really helpful fleshing-out of the landscape of what we could be talking about, Beth. And so maybe just to ground us for the rest of this discussion, I'd love to take a second to frame some of the examples and the scope of what we're talking [00:10:00] about.

So we can start by dividing AI-enabled cybersecurity into offensive and defensive, and we haven't really gotten into the offensive scope yet. When we're talking about defensive, we're talking primarily, so far, about network defense. And so a lot of this discussion has focused on the perspective of a SOC, and many of our listeners might just be, you know, average users of technology in their daily jobs, not necessarily in a security-specific job. And as a user of your company's or your organization's technology, you're going to encounter this technology anyway.

You just may not notice it. So some common tools are things like an automated threat detection tool, which monitors network traffic. For example, when your computer sends a data packet to a printer on your network, there's a tool monitoring that flow of packets. And if suddenly an unusual flow of packets develops, say the printer sending lots of packets to a computer outside of [00:11:00] the network, there's an automated response that says, hey, this is unusual.

And so the mapping of what is unusual, and the ability to catch that in real time and flag it to a human in the loop, is a piece of that AI-enabled threat detection. So when we're talking about network defense and monitoring, that's one place where people might encounter it as an average user.
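[The printer example can be sketched as a crude flow-anomaly detector. Real network detection products model far richer features; the flow names, baselines, and threshold here are invented for illustration:]

```python
# Minimal sketch of flow monitoring: learn a per-flow baseline of packet
# counts, then flag flows that deviate sharply from it or were never seen.
from statistics import mean, stdev

def find_anomalies(baseline: dict[str, list[int]],
                   current: dict[str, int],
                   z_threshold: float = 3.0) -> list[str]:
    """Flag flows whose current packet count is far outside the baseline."""
    flagged = []
    for flow, count in current.items():
        history = baseline.get(flow)
        if not history or len(history) < 2:
            flagged.append(flow)          # never-seen flow: surface to a human
            continue
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            sigma = 1.0                   # avoid dividing by zero on flat history
        if abs(count - mu) / sigma > z_threshold:
            flagged.append(flow)
    return flagged

baseline = {"workstation->printer": [40, 35, 42, 38, 41]}
current = {"workstation->printer": 39,
           "printer->external-host": 5000}   # printer talking to the internet
print(find_anomalies(baseline, current))     # ['printer->external-host']
```

The hard part, as the discussion notes, is the baseline itself: everything hinges on how well "usual" was learned in the first place.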

I don't know if other folks have other examples they wanna speak to, and whether they think those examples are useful applications of the technology or not.

Grace: Yeah, Dani, I appreciate the delineation. I think the categories you split them into are really helpful. So one of the examples that I brought forward is a data infrastructure AI company that recently won a huge contract with the DoD, and honestly, the information that I was able to find about their technical solutions was pretty sparse, perhaps with good reason.

But they have dubbed their technique System for Insider Threat Hindrance, or SITH, like from Star Wars. Just if you [00:12:00] thought that this industry couldn't get nerdier. So they track insider threat risks by looking through both public and private data sets to detect insider threats, which is, I mean, clearly pointed toward finding people who are insider threats.

And I think this gets at one of the bigger issues of AI: we're not at a point, technically, where we can put input into an AI, get an output, and understand what its thinking was on the inside. That opacity in the middle, especially when it can affect people's lives and careers, is really where I'm skeptical of this usage at this time. Until we have that transparency in AI, my personal stance is that it can't be used for things

that really affect people's livelihoods and people's lives. Winnona, you had a response.

Winnona: Yeah, Grace, I love that you brought up Torch.AI, and not just because of the issue with opacity. There's a lot of interesting debate in regulatory circles about whether or [00:13:00] not AI should be auditable, and auditable AI is kind of an interesting next step in opening up that black box.

I think that goes back to, and I'm gonna rehash the data thing over and over again in this podcast episode, apologies in advance, but insider threat is one of the most difficult places for an AI to operate, because humans don't have consistent data; we're irrational beings. Think about how messy even just regular network traffic is, and how hard determining what is anomalous there is.

So much of AI-related security technology has to do with anomaly detection: figuring out what the baseline good is, then trying to find what's weird. And so how do you determine what is generally acceptable among employees, and then what is [00:14:00] anomalous? And what does that even say about how it's going to regulate your behavior in the office, if you know that you have AI-enabled insider threat detection? Are you going to email your family from your work email, for example? Is that anomalous? I don't know.

Is it within the rules of the corporation? How does this technology work? How does this specific insider threat platform, and not just to call out Torch.AI, but AI-enabled insider threat systems in general, how much are you relying on what the corporation says is okay,

versus the data that is actually indicative of employee behavior? And how well are you actually detecting anomalies? Those are really key questions that, with proprietary technology like this, who knows.

Grace: Winnona, I'm so glad you brought that up. I feel like Dani and I probably have some words for you on the insider threat front. We had to do, well, I don't anymore, but Dani still has to do annual training on insider threat stuff. And it's so true, especially the part where you're talking about

the warning signs of insider threat being, at times, opposites. If someone works too hard, it might be because they're trying to get into your good graces and get into your [00:15:00] classified files. But if someone doesn't work that hard, it might also be because they have a second employer.

It could also be, if someone's really friendly, are they too friendly because they're trying to get into the systems? Or if they're kind of standoffish, again, is it because they're actually the adversary? Anyway, I'm really glad you brought that up.

Sophie, what do you think?

Sophie: Yeah, it just seems to me a big problem here is: how do you regulate this kind of technology at scale? But it also seems to me that many of the issues with AI ethics are actually just perennial problems with business ethics.

Grace: I think Dani had something in response.

I can see you vibrating over there.

Dani: I just wanna give that a plus-one-million sign. I think Sophie has summed it up; she couldn't have described my feelings on this better or more succinctly. So now I'll describe them less succinctly. It pertains to what Grace and Winnona were saying about the unpredictability of human behavior and the [00:16:00] risk that poses for surveilling non-harmful behavior at work, and the question you asked about what that does to employees who know they're being surveilled and monitored and aren't doing anything inherently harmful. I think we really have to accept that we're in a world now where all of our corporate behavior is surveilled, and there are well-documented repressive effects of being surveilled. It's one of the big reasons I'm against, you know, deploying facial recognition in public surveillance: because even if no action is taken against you, the very knowledge that you're being surveilled changes the way you exist in the world. That said, I think it's an inevitability for corporate life. We're here. It's a necessity given the number of cyberattacks companies face.

We're here. It's a necessity given the number of cyber attacks company. And I actually think it's a good thing in that it forces people to be way more hygienic about the behavior they engage on in their corporate devices versus their private devices. So I know plenty of [00:17:00] people that will check their Gmail on their work computer they'll they don't wanna bring two computers when traveling, so they just bring their work computer and they'll, search whatever they're doing, best tourist sites and the place that they're in on their work computer harmless.

But the willingness to blur the lines between your personal behavior and technology use and your corporate devices is, to me, really troubling, because it suggests that people don't understand that everything on that device is subject to company monitoring. And so the problem Sophie called out, that AI ethics is really about business ethics, is true. And yet I don't think the solution is to change business ethics. I think the solution is for technology users to understand that the current state of play, and the necessary state of play, is that companies will engage in really aggressive security practices. And therefore you need to engage in really aggressive, actually not aggressive, really basic, hygiene and privacy practices, and use your [00:18:00] own devices.

Stop using your company devices for anything personal. Now I will get off my soapbox.

Sophie: That's true. And one of my main concerns in this area is that while the pace of progress in this tech is really exciting,

it's also concerning that it's somewhat outpacing the government's capacity to deal with the pace of that change. And so, in addition to Dani's points, it seems to me there needs to be a better pipeline for the people who actually understand the technology, engineers, to talk directly to policymakers.

Winnona: Plus 1000 agree.

Grace: Yeah, it looks like Beth has a follow-up. What do you think, Beth?

Bethan: I think this comes to the idea of, and advocacy for, explainable and auditable AI principles. I think Sophie's point is that AI, and the applications and uses of it in corporate and other areas, is getting so far [00:19:00] ahead of where policymakers are. And this has come up multiple times in many of our podcasts; for our loyal listeners,

you've heard us discuss this before. But there really needs to be more transparency. And I realize there's a trade-off: obviously you don't want the AI you're using to protect your networks to be fully transparent, right? But what does it mean to provide the right amount of color or insight to policymakers and regulators, in a way that data privacy protections and other fundamental human rights guarantees can be met,

while still utilizing this technology in a way that is critical and important? So I think the whole debate about making sure AI is, yes, auditable and explainable is a really interesting point to bring into this conversation.

Grace: Yeah, I think that transparency portion is really important, especially as we delve into more policy options in that part of the conversation. My personal interest when it comes to privacy is the privacy of the training data sets that AI [00:20:00] systems are trained on before they get deployed, because if the privacy of those data sets isn't protected, then the data can easily be poisoned.

And there are studies showing that even just an 8% addition of poisoned data could have an 80% impact in terms of false positives, et cetera.
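[The poisoning mechanism Grace describes can be shown on a toy model. The classifier, the numbers, and the single "feature" below are invented purely to illustrate how flipping a slice of labels shifts a decision boundary; the 8%/80% figures come from the studies she mentions, not from this sketch:]

```python
# Toy illustration of training-data poisoning via label flipping on a
# nearest-centroid "malware detector".

def centroid(values):
    return sum(values) / len(values)

def train(samples):
    """samples: list of (feature, label) pairs -> (benign, malicious) centroids."""
    benign = centroid([x for x, y in samples if y == "benign"])
    malicious = centroid([x for x, y in samples if y == "malicious"])
    return benign, malicious

def predict(model, x):
    c_benign, c_malicious = model
    return "benign" if abs(x - c_benign) <= abs(x - c_malicious) else "malicious"

# Clean training set: benign samples centered at 0, malicious at 5.
clean = [(i / 100, "benign") for i in range(-200, 201)] + \
        [(5 + i / 100, "malicious") for i in range(-200, 201)]

# The attacker flips labels on a slice of boundary-adjacent benign samples
# (~12% of the data), dragging the malicious centroid toward benign territory.
poisoned = [(x, "malicious") if y == "benign" and x > 1.0 else (x, y)
            for x, y in clean]

clean_model, poisoned_model = train(clean), train(poisoned)
probe = 2.2  # a slightly unusual but perfectly legitimate sample
print(predict(clean_model, probe))     # benign
print(predict(poisoned_model, probe))  # malicious: a brand-new false positive
```

The same legitimate sample flips from "benign" to "malicious" after poisoning, which is exactly how a corrupted training set inflates false positives downstream.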

Winnona: I'm happy to jump in there, Grace. Going back to Beth's point about explainable AI, there is a business case you can also make, not just a regulatory, privacy, or ethics one, but a real business case, for some of these AI systems to be auditable. If I, as a SOC analyst, know that the AI has found a new malicious pattern of behavior, I can double-check it and be like, oh wow, I've never seen this before. That is a learning process for both the machine and the human,

versus the human acting as just a verifier. I think that's one of AI's most promising selling points, and I don't think it gets utilized nearly enough.

Sophie: I mean, obviously companies have to be factored in here [00:21:00] somewhere, because the scale that's required to effectively use these different applications of AI requires capital-intensive resources that, realistically, only companies have.

My point is just that most of these issues are not specific to AI. They're sort of general business ethics questions, and to me, the immediate problems here in the AI policy space are related to that distribution of power under a capitalist system, in which, you know, economies of scale require that companies are the ones kind of leading the charge on this.

Grace: I think maybe I don't understand explicitly what you mean by business ethics. Can you explain that just a little bit more?


Grace: Beth, I feel like you had a comment, but then we didn't get to it. I just wanted to see where you're at.

Bethan: Yeah, so actually, I believe there's not one definition of business ethics, because it depends on country context and sector, so I don't wanna define business ethics. But what I was gonna mention is similar to the point Sophie made about this being about economies of scale: the people or institutions that have control of AI are the powerful, you know, the large companies that have the capital to invest in

what is still, arguably, a frontier technology for many people to implement. I think the same can be said for state development of AI. So we're seeing Russia, China, and the US all putting a ton of emphasis on AI as kind of the future of warfighting [00:23:00] and the future of global hegemony.

And that's really concerning. Again, that goes to this point: yes, large businesses with a ton of capital who are investing in AI have the ability to shape it and control it as a technology, and the same can be said for countries, whether they are democratic or not. A quote that I know a lot of AI experts and advocates in the government love to cite is Putin's declaration that whoever becomes the leader in AI will become the ruler of the world.

Putin said that in 2017, and I think it really holds true; it has been maybe not explicit, but certainly implicit, in a lot of other governments' race to harness AI, whether it's on the battlefield, in insider threat detection, in offensive cyber, I mean, the list goes on and on.

And another interesting stat that I read in preparation for this episode, too, is that over 30 countries have published national AI strategies. But it goes to the point that Sophie made: those who are controlling [00:24:00] and shaping AI are the largest and most powerful entities, whether in the private or public sector. Sophie, I wanna kick it over to you; I realize I just ran with one comment you made.

Sophie: No, I love everything you said, Beth, and I think that makes sense. And, you mentioned China and Russia; I think basic AI has already really begun to change the nature of military conflict. We've seen that as recently as the Ukraine conflict, where troops are using drones that cost $10,000 or $20,000, with 3D-printed components, that drop World War II-style grenades, using trained vision models to detect soldiers in camo.

And that has taken out multimillion-dollar Russian tanks. So if my $20,000 drone can take out your four-and-a-half-million-dollar tank, that's a pretty significant asymmetric [00:25:00] capability that we just introduced. And so for certain types of conflict, AI is really changing outcomes on the battlefield.

Winnona: Yeah, Sophie, that is such a good point. And going from the comparison between defensive insider threat and EDR, the corporate solutions, to the more generic offense-defense balance here: AI really does benefit offense. We talked a lot about problems on the defensive side in terms of anomaly detection.

But the technical problem from an offensive perspective, in the vision case, is simply detection: making sure that your detection of camouflage is highly accurate, regardless of whatever noise is in the background of that image. And that's a fundamentally different AI or machine learning problem than anomaly detection.

I think you can also see that sort of work done on [00:26:00] the offensive side when it comes to the corporate world. GPT-3, the OpenAI text generation project, is really good at crafting spearphishing emails, phenomenally good. And that goes back to the fact that it's just mimicking what makes an effective phishing email, versus the defensive side, where they need to

distinguish phishing emails from regular emails. And I think this plays into the more geopolitical offense-defense balance and where AI and these technologies fit.
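[The defender's side of this asymmetry can be sketched with a deliberately naive filter. The phrase list and both emails below are invented; the point is that a fluent, model-generated spearphish simply avoids the telltale phrases a crude filter looks for:]

```python
# Crude keyword-based phishing score, the kind of heuristic that fluent,
# generated text sails straight past.

SUSPICIOUS = {"verify your account", "urgent action", "click here",
              "password expired", "wire transfer"}

def phishing_score(email: str) -> int:
    """Count how many telltale phishing phrases appear in the email."""
    text = email.lower()
    return sum(phrase in text for phrase in SUSPICIOUS)

clumsy = "URGENT ACTION required!! Click here to verify your account."
fluent = ("Hi Sam, following up on Thursday's vendor review, could you "
          "take a look at the revised invoice before our 2pm sync?")

print(phishing_score(clumsy), phishing_score(fluent))  # 3 0
```

The generator only has to produce one convincing email; the defender has to separate it from millions of legitimate ones that look exactly like it.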

Grace: No, Winnona, that was such an awesome application of GPT-3 that I truly didn't even think of.

Winnona: Hot takes only hot takes only.

Grace: Beth, it seemed like you had something to say. Do you have any response?

Bethan: Yeah, I was also gonna say, going to the training of AI for warfare contexts, and building on Sophie's point about AI being trained to identify camo: I think Shield AI is [00:27:00] a really interesting company, as they're trying to create an autonomous AI pilot, called Hivemind, and they use synthetic environments with human-machine tactics and behaviors to test and build it.

And I think it's super interesting that they have the capacity to build this really realistic synthetic environment to train their autonomous pilot, with a ton of resources and support from the DoD and other allied countries.

And then you contrast that with the application of AI on the battlefield in Ukraine, where you have a very rapid test-and-adjust cycle.

Sophie: That was a good point.

Grace: Beth, I think that actually goes to the point I wanted to make on asymmetry. Sophie's example is really good in saying this changes the nature of warfare. But I also wanna make a counterpoint:

with AI-powered machines, the nature of warfare has [00:28:00] changed, but that doesn't only favor the offensive side, because AI is so new and unknown that it also has a ton of vulnerabilities that we're all experiencing now in real time. One example I can give: one of Google's AI vision systems

was really easily tricked, with just one 3D-printed turtle, into categorizing turtles as rifles. And obviously in a warfare context, you definitely don't want your drone strike capabilities to be like, well, let me just kill all turtles.


Winnona: Turtles are actually war machines, I think, might be the hottest take on this podcast episode.

Grace: I know we're in a competition for the hottest take, and I think I just won. So, yeah, I think the asymmetry is on both sides, both offensive and defensive. However, I think those types of vulnerabilities can be pretty, I won't say easy to fix, because I don't want to belittle the work of fixing them,

but there are fixes for that type of vulnerability. Dani, over to you.
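[The turtle-to-rifle trick Grace describes is a classic adversarial example. A toy sketch of the idea follows, using an invented linear "classifier" rather than a real vision model; the weights, "features," and labels are all made up. The real attack perturbed image pixels against a deep network, but the gradient-sign step is the same in spirit:]

```python
# Toy fast-gradient-sign attack: nudge each input feature in the direction
# that raises the classifier's score, flipping its decision.

def sign(v: float) -> float:
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def score(w, b, x):
    """Linear decision score: > 0 means 'rifle', <= 0 means 'turtle'."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm(w, x, eps):
    """One gradient-sign step: for a linear model, the gradient of the
    score with respect to the input is just the weight vector."""
    return [xi + eps * sign(wi) for wi, xi in zip(w, x)]

w, b = [0.9, -0.4, 0.7], -1.0            # invented model weights
turtle = [0.5, 0.8, 0.6]                 # pretend image features
print(score(w, b, turtle) > 0)           # False: classified as 'turtle'

adversarial = fgsm(w, turtle, eps=0.5)
print(score(w, b, adversarial) > 0)      # True: small nudge, now 'rifle'
```

The fix Grace alludes to exists too: adversarial training, certifiably robust models, and input sanitization all push back on this, though none is a free lunch.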

Dani: You almost threw me off track with the turtles there. I've [00:29:00] got a lot of thoughts on turtles as war machines, but I'll try to bring it back. Your point that asymmetry occurs on both the offensive and defensive side is really good. So that's one clear "force multiplier," as my employer is fond of saying, of AI. But the other is the point that Winnona started to make, which I'll put under the header of action at scale.

So if GPT-3 can be generating all these really effective spearphishing emails, we don't just have to worry about, you know, a hundred employees getting spearphished; now we're worried about a hundred thousand, with marginal cost added to the sender. And that's true across a lot of forms of warfare, or call it political maneuvering, that we actually come into contact with every day.

So we don't just have to worry about whether drones can suddenly target at scale; it's things like, are we subject to influence operations and Twitter bots that can generate mis- and disinformation at scale now? You don't need an actual human, [00:30:00] you know, manning a bunch of systems putting out crazy information that's then going to influence a US election.

You just need one text-generating algorithm, and suddenly you have the ability to influence a nation-state's political leanings. And so to me, the really scary part is not just asymmetry when it comes to kinetic warfare, but at-scale actions when it comes to soft power.

Bethan: Dani, I'm really glad you brought up the point on soft power, because the implications of AI as an offensive tool on the mis- and disinformation side are super compelling. When you have, you know, open-source-powered AI like GPT-3, which can mimic the speech patterns of the person it's

talking to, right? You have these AIs, the Facebook one that went sideways, the Microsoft one that also went sideways. As the AI is talking to someone, it learns the speech patterns and the shorthand and the tone of the conversation they're having. And then [00:31:00] the human they're talking to starts seeing and feeling similarities, wants to continue the conversation, and feels like it's someone they know or someone similar to them.

I think that's really dangerous for spreading mis- and disinformation. And I know that is something that Russia, or Russia-affiliated actors, have employed in terms of spreading mass dis- and misinformation in the US. I think it's a massive problem and very scary.

Dani: Yeah, I think Winnona probably has a response to that, and I just want to acknowledge that the last example you gave, Beth, we saw during the 2016 election, in the run-up to it, and subsequently. There's been lots of investigative reporting on how much the presence of Russian actors online fomented this really rancorous online debate. There was a really famous example of actors staging a Facebook protest.

I think it was an anti-Black Lives Matter protest, and then Black Lives [00:32:00] Matter people showed up in person. And it turns out that at the start of this there was no actual US political activist; it was just Russian agents stirring up really egregious sentiments, which actual US people then responded to, resulting in an IRL presence.

So that was a live Russian actor, but now take that and put it at scale, and in every city you have, you know, somebody responding in comment threads who's not actually somebody, it's an algorithm, with the ability to generate real-world action. It's really, really scary.

Winnona: Yeah, even hopping on that point, Dani, I would argue that there is AI in that particular operation, but on the side of Facebook and what it likes to promote in terms of inflammatory content; it's not an AI application of cybersecurity per se. Going back to Beth's point about the Twitter AI bots,

I don't really remember what [00:33:00] Microsoft and Twitter called those particular bots, but that type of data poisoning, where you throw a bunch of racist remarks at a Twitter bot and it becomes a little bit more racist. You also see that on the GPT-3 side, where people will hook bots up to GPT-3 and realize that the way those bots are coded

makes them susceptible to cross-site scripting, which was a fun thing you saw on Twitter lately as well. So the point I'm trying to make is that AI is a "new" technology, new in quotes because it's been around for a while, but the efficiency of computing and the amount of computing power we have now make it easier to produce a decent AI.
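The data-poisoning failure mode Winnona describes, a bot that blindly folds hostile user input back into its training data, can be sketched with a toy example. The model and data below are invented purely for illustration (a naive running-average text scorer, not any real Microsoft or Twitter system):

```python
# Toy illustration of data poisoning in an online-learning text model.
# The model keeps a running score per word and updates it from user
# feedback it blindly trusts -- the flaw poisoning attacks exploit.
from collections import defaultdict

class NaiveOnlineClassifier:
    def __init__(self):
        self.scores = defaultdict(float)  # word -> learned score
        self.counts = defaultdict(int)    # word -> times seen

    def update(self, text, label):
        # Incorporates every interaction as training data, no vetting.
        for word in text.lower().split():
            self.counts[word] += 1
            # Running mean of the labels seen alongside this word.
            self.scores[word] += (label - self.scores[word]) / self.counts[word]

    def score(self, text):
        words = text.lower().split()
        return sum(self.scores[w] for w in words) / max(len(words), 1)

model = NaiveOnlineClassifier()
model.update("have a great day", 0.0)      # one benign interaction
print(round(model.score("great day"), 2))  # → 0.0

# An attacker floods the feedback channel with mislabeled input.
for _ in range(50):
    model.update("great day", 1.0)         # poisoned label

print(round(model.score("great day"), 2))  # → 0.98, the model has drifted
```

The point is the absence of any filter between raw user input and the training signal; fifty repetitions are enough to flip the learned scores.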

Even though the AI system itself may have cybersecurity issues, the question I have and want to pose to y'all is: when we're talking about the offense-defense balance, it's easy to pull in an actual hot war. And there are a lot of people who say, oh, well, defense will just [00:34:00] get better.

We're going to continue training these AI models, these machine learning models, so that we can protect our users better at scale. But there are people getting hurt in the process. There are people watching deepfakes without realizing it. You know, Twitter's content moderation won't pick up every bot.

So how do we help the people who fall through the cracks in the meantime? That's really the question I still have and have not yet found an answer to.

Grace: I think that's interesting, because you've mentioned what I think would be the answer, which is that even though we deploy AI systems, we still have to have human beings auditing them and remaining the decision-making end. And I think that's the transition piece we need. Beth, do you have a response to that?

Bethan: So, to Winnona's point about people getting hurt by watching deepfakes, not realizing they're lies because Twitter's moderation AI doesn't pick them up: I think [00:35:00] one of the ways we can combat that is by employing, training, and hiring diverse moderators.

We need diverse experts in AI. We need diverse cybersecurity talent, because those different perspectives will help us prepare for and protect the vulnerable people, or people who just don't have the context to identify deepfakes or other malicious AI tactics that spread mis- and disinformation.

And I think something we all on this podcast really care about is the diversity of talent and perspectives in cybersecurity and all of these technology areas. You know, that's why we created cyber.RAR in the first place.

Winnona: Yeah, Beth, I agree. In terms of actually getting more people to audit the AI and be the human in the loop, that's definitely a valid point. You want to make sure you have different people, with different logic and lived experiences, looking at an AI and saying, oh, this is probably why it made that decision about this [00:36:00] being anomalous behavior or not.

I think, now that we've had this conversation, taking the offense-defense balance into consideration and simply providing more investment in the harder problems that will pay dividends for user safety, like anomaly detection, versus the big, sexy, potentially privacy-harming solutions like facial recognition,

or the more military applications, even though those do have a lot of big uses, could be something to consider as well. Where investment is going when it comes to AI, beyond just the military solutions, could be one way to think about it too.

Grace: Wow, Winnona, A-plus. I mean, 1 billion I think is where we're at now, to that point. Yeah, I think we talked about a lot of different policy options here, whether that be human-AI teaming, or encouraging more privacy within data systems and on the human user side.

I think we honestly covered a gauntlet today, on both public and private networks, [00:37:00] and offense and defense in AI systems. It's given me a lot to think about. Thank you all for your extremely hot takes today. I hope we've given the listeners some things to mull over.


Grace: So after that awesome conversation, we've got Dani on deck for a cyber show-and-tell. Take it away.

airar_recording-3_2022-09-18--t02-32-36pm--guest835181--dani: grace. So today we, you heard me get a little bit into my privacy soapbox, and just in case you thought I was done with it this week I have the cyber show and tell, so I'm gonna go deep, deep into it. And the story I wanna talk about today is about a company called fog reveal. The associated press along with the electronic frontier foundation issued a really great report a couple weeks ago about fog Reveal's business.

Fog Reveal is software for tracking cell phones, and it was revealed by the AP and the EFF that it has been sold in about 40 contracts to nearly two dozen law enforcement agencies. So how does the software work?

Your phone has an advertising ID, and Fog Reveal [00:38:00] aggregates the movement of that advertising ID via data brokers, basically building a massive dataset suggesting where that advertising ID, i.e. your phone, and therefore you, has been. Law enforcement agencies are obviously very interested in that.
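The aggregation Dani describes can be sketched in a few lines. The records and field names below are invented for illustration, not Fog Reveal's actual data schema; the point is just that pings keyed by one advertising ID collapse into a single movement trail:

```python
# Toy sketch: location pings keyed by an advertising ID aggregate
# into a per-device movement history. All data is made up.
from collections import defaultdict

# Each record a broker might sell: (advertising_id, timestamp, lat, lon)
pings = [
    ("ad-123", "2022-09-01T08:05", 42.374, -71.117),
    ("ad-123", "2022-09-01T12:30", 42.360, -71.058),
    ("ad-999", "2022-09-01T09:00", 40.748, -73.985),
    ("ad-123", "2022-09-02T08:10", 42.374, -71.117),
]

def movement_history(records):
    # Group pings by advertising ID and sort each trail in time:
    # one ID, one reconstructable movement pattern.
    history = defaultdict(list)
    for ad_id, ts, lat, lon in records:
        history[ad_id].append((ts, lat, lon))
    for trail in history.values():
        trail.sort()  # ISO-8601 timestamps sort lexicographically
    return history

trails = movement_history(pings)
print(len(trails["ad-123"]))  # → 3 pings for a single device
```

Nothing here names a person; the trail is keyed only to the advertising ID, which is why the cross-referencing step that follows matters.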

And I call out in particular Latanya Sweeney's research, which shows that de-anonymizing data is extremely easy.

You just need to cross-reference different datasets. Fog Reveal works with a company called Venntel to offer identification of the phones and of the advertising IDs, and Democratic lawmakers have called for an investigation into Venntel.
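A toy sketch of the cross-referencing Sweeney's research describes: neither dataset alone names anyone, but joining them on a shared attribute does. All the data and names here are invented for illustration:

```python
# Toy Sweeney-style re-identification: an "anonymous" ad-ID dataset
# joined with a separately purchased, name-keyed dataset on a shared
# quasi-identifier (here, the location where the device sleeps).

# "Anonymous" dataset: where each advertising ID spends its nights.
device_home = {
    "ad-123": "42.374,-71.117",
    "ad-999": "40.748,-73.985",
}

# Separate dataset: named residents keyed by address coordinates.
residents = {
    "42.374,-71.117": "A. Smith",
    "40.748,-73.985": "B. Jones",
}

def reidentify(device_home, residents):
    # Cross-reference: match each anonymous ID's home point
    # against the named records.
    return {ad_id: residents.get(home, "unknown")
            for ad_id, home in device_home.items()}

print(reidentify(device_home, residents)["ad-123"])  # → A. Smith
```

The join itself is trivial; the privacy protection in the "anonymous" dataset evaporates the moment a second dataset shares any linking attribute.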

US Customs and Border Protection was using Venntel to track people without a search warrant in 2020. So the big question: is this legal? The government's use of location data is in a developing place. In 2018, the Supreme Court ruled that police need a search warrant to look at records of where cell phone users have been, and in general there's this ongoing requirement of a search warrant for [00:39:00] police to secure data.

Now, the gray space is that that doesn't necessarily apply to data people have already offered up via apps. So Fog Reveal, as well as other software, is a way for law enforcement agencies to purchase data as opposed to demand it. Arkansas prosecutor Metcalf had a really interesting quote in, I think it was the AP article, saying people are going to have to make a decision on whether they want all this free technology.

We want all this free stuff, we want all the selfies; we can't have that and at the same time say, I'm a private person, so you can't look at any of it. So, you know, there it is. The way law enforcement is viewing our data is: if you've used it with a free app, then it's public and they can purchase it.

So the question is, why should I care? You may be listening to this and think, I'm a law-abiding citizen, it doesn't matter for me, feel free to look at where I've been. But history tells us that abuse, both accidental and intentional, is rampant. Already with Fog Reveal there have been [00:40:00] significant investigations and identifications of people who didn't actually turn out to be

the perpetrator of a crime. This occurred with the murder of a rare-animal breeder: it turned out the wife had done it, but the babysitter was the suspect, and so her whereabouts were tracked for a very long time. The AP also reported on abuse of personal information databases by law enforcement, in which there were over 500 instances between 2013 and 2015 of law enforcement pulling personal data.

These are law enforcement databases, not private ones: pulling information to stalk, harass, and track, all kinds of violations, against people they were just personally interested in, people who had no relation to a case. So I think it's reasonable to say we have evidence that

humans don't behave responsibly when given unlimited access to information, and therefore limits are necessary. So the question is, what can you do about this? When it comes to data privacy, one of the biggest questions [00:41:00] is: should there be a federal law, versus state-by-state rules?

Winnona: So, this is really fascinating, Dani. The one thing I'd like to point out in terms of nuance, at least from a policy perspective, is that the 2018 case, Carpenter, doesn't actually cover cell phone data generally; it covers only cell phone tower pings as an unconstitutional search. It's a Supreme Court decision that kind of contradicts, or doesn't really contradict, different decisions that came up in the early and mid-1900s. And this will be my first of hopefully many legal rants that, as I do more law school, I will be able to comment on. The Supreme Court decision is so narrow that it only says that cell phone tower pings obtained without a warrant

constitute an unlawful search. And so now, every time someone wants to use this court case, they're going to have to ask the Supremes or another court about the data that a third party has, because a majority of the time [00:42:00] third parties are treated like informants: if they have your data and they give it to the government, that's not a search.

Is this case more like an informant? Is this data more like what an informant might give, or is it more like a cell phone tower ping? That's something the Supremes haven't yet answered for all the myriad types of data you were talking about. So thanks for sharing.

Dani: Yeah, that's a really great point, and the fact that it's narrow is not unique to that Supreme Court case; it's a quality of a lot of the legislation and rulemaking going on around data privacy. So last month, the FTC initiated a rulemaking process to address commercial surveillance and data security.

But the notice on that rulemaking excludes, or rather doesn't specifically call out, law enforcement partnerships. It talks about the military, talks about data purchased by the government, but it doesn't call out this extremely common use, and you wouldn't notice that if you weren't reading really closely between the lines. So the restrictions [00:43:00] against the collection and sale of data don't include these companies, and Fog Reveal and other companies would potentially be excluded from the only federal bill we could have, which, by the way, who knows if it's going to pass; the clock is ticking.

So, yeah, one of the challenges is that where there are state and federal attempts being made to restrict the collection and sale of data, there are often really narrow limitations on them, and those are limitations you wouldn't catch if you were sort of a casual voter or reader.

And so it's extremely challenging for everyday citizens to track the ways in which this legislation and these rules impact them. More to come later in this space, and I highly encourage everybody to read the AP coverage and Gizmodo's subsequent coverage; we'll link to it in the show notes. Really good tracking by both of those outlets.

Grace: Awesome. Thank you, Dani, for bringing us that story. I think we just learned so much from you, and stay tuned for more tidbits like this. Thanks for listening to cyber.RAR, a podcast by Harvard Kennedy School [00:44:00] students. Given that this is a student-led program, this podcast doesn't represent the views of any institution, school, or even ourselves.

We finished recording this episode on September 18th, 2022. We're just students learning every day,

just trying to navigate the murky waters of cybersecurity. Stay tuned for more episodes and discussions.

Grace: What did you actually say?

Winnona: Sorry!