
EPISODE 14

09-APRIL-2024

AI’s dual role in cybersecurity threats and defenses

Dive into the evolving world of cybersecurity with Brian Drake, Federal Chief Technology Officer at Accrete AI. In this episode, Brian & TJ explore the challenges and opportunities AI and advanced technologies present to cybersecurity. Brian sheds light on how AI strengthens cyber defenses, the future of cybersecurity enhanced by AI and machine learning, and the critical role of ethics in privacy and AI’s advancement.
Don’t miss out on these expert insights into safeguarding our digital future!


Key takeaways

The new wave of digital threats [04:29 – 06:54]
Fortifying cybersecurity with AI [07:52 – 08:54]
The future of cybersecurity [10:55 – 13:08]

Meet the guest expert

Guest
Brian Drake
Federal Chief Technology Officer at Accrete AI
Brian brings over two decades of expertise in management consulting across the defense, intelligence, and security sectors, skillfully bridging the gap between technical and non-technical groups. His strengths extend to strategic planning, business development, AI, cybersecurity, and more, driving technology enhancement and growth. He is the President of the Defense Intelligence Memorial Foundation, supporting families of fallen officers, and has a rich history with the Defense Intelligence Agency. His work has been recognized with multiple honors, including the 2021 Federal 100 Award, and he served as a judge for the 2022 DASA Awards.

Transcript

Intro – 00:00:03: Generative AI takes center stage. But is your enterprise still watching from the sidelines? Come on in. Let’s fix that. This is Not Another Bot: The Generative AI Show, where we unpack and help you understand the rapidly evolving space of conversational experiences and the technology behind it all. Here is your host, TJ.

TJ – 00:00:26: Hello, and welcome to Not Another Bot: The Generative AI Show. I’m your host, TJ. Joining us today is Brian Drake, the Federal Chief Technology Officer for Accrete.AI Government. With over two decades of impressive experience spanning defense, intelligence, cybersecurity, and technology consulting, Brian has cultivated an exceptional ability to bridge the divide between technical and non-technical stakeholders. His efforts in spearheading Accrete’s expansion into the public sector and establishing key AI performance and ethical standards are a testament to his innovative mindset. Prior to Accrete, Brian held various significant roles at the Defense Intelligence Agency, including Director of Artificial Intelligence, and his contributions have been recognized with numerous honors and awards. His deep understanding of the AI landscape, coupled with his commitment to driving meaningful change, makes him an authoritative voice in this field. Welcome, Brian; we are absolutely thrilled to have you here and very excited.

Brian – 00:01:24: Well, thank you, TJ. It’s my pleasure to be here.

TJ – 00:01:26: Brian, the way we generally go about these podcasts is to start by learning more about you, so that the audience gets to know you too. You’re pretty well known, but for the ICPs and personas who will be listening in, it would be great to hear about you in your own words. So the first question for you is, Brian, can you start by sharing a little bit about your professional journey and what drew you to the intersection of AI and cybersecurity in both the public and private sectors?

Brian – 00:01:49: Yeah, so I’ve been in the technology space for most of my career, but that intersection between national security and technology is what I’ve found the most interesting. I think like a lot of folks in the AI space, my experience has been, I’ll say, starting as an amateur. I think we’re all amateurs in this space. We’re learning every day because the technology is evolving so much and so quickly. So I started seriously getting into the artificial intelligence space when I was at the Defense Intelligence Agency, working in the command element. We were trying to figure out how to organize and understand enterprise-wide data, mostly back-office-type information: contracts, spend, contractors, talent. And we were doing that for the director’s dashboard so that he could look across the Defense Intelligence Enterprise and say, here’s kind of where my people are, how much we’re spending on contracts, that kind of basic stuff. Some very rudimentary knowledge graph capabilities were available to us, and we were using those for that dashboard. Now, that was around 2016, and the technology has vastly improved since then, so we were able to use it for more complex missions. When I transitioned out of that role into a new role in counter-narcotics, we were looking at the opioid crisis in a very different way, trying to understand where opioids are coming from, who the producers are, and what some of these criminal networks look like. We applied the power of knowledge graphs to that problem set, and in just a year or so it opened up a whole bunch of new opportunities for defense assets, new interdictions, partnerships with law enforcement, and that sort of thing. So I ran several different programs related to the counter-narcotics mission. That was parlayed into a larger program across the agency, and I was appointed as the Director of Artificial Intelligence for the Defense Intelligence Agency around 2018. I did that for another two years, running about a $20 million portfolio of projects across all sorts of areas of the Defense Intelligence Agency’s mission. From there, I transitioned to a different role running the briefing team in the Pentagon for the Secretary of Defense and some other E-Ring principals. And then I decided that I really preferred to be in the AI space, so I transitioned out of government and joined the private sector.
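
To make the knowledge-graph approach concrete, here is a minimal sketch of how typed entities and relationships like the ones Brian describes might be linked and queried. The entities, the relationships, and the use of the networkx library are illustrative assumptions, not details of any actual DIA system.

```python
# A hedged illustration of the knowledge-graph approach described above.
# Entities and relationships are invented for the example; networkx stands
# in for whatever graph store was actually used.
import networkx as nx

g = nx.DiGraph()
# Nodes are typed entities; edges are the relationships an analyst would chase.
g.add_edge("Lab A", "Precursor Supplier X", relation="buys_from")
g.add_edge("Lab A", "Shipping Front Co", relation="ships_via")
g.add_edge("Shipping Front Co", "Port of Entry Y", relation="routes_through")
g.add_edge("Distributor Z", "Shipping Front Co", relation="receives_from")

# The payoff: multi-hop questions that are painful over flat tables become
# one-line graph queries, e.g. "what connects this lab to this distributor?"
path = nx.shortest_path(g.to_undirected(), "Lab A", "Distributor Z")
print(" -> ".join(path))  # Lab A -> Shipping Front Co -> Distributor Z
```

The design point is the one Brian makes: once the back-office or mission data is modeled as a graph, questions about hidden connections become traversals rather than ad hoc joins.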

TJ – 00:04:10: Just brilliant. It’s exciting to even hear that journey, and I’m very intrigued by those experiences. Well, that brings me to the first question for the show. How do you perceive the current cybersecurity landscape, especially considering the proliferation of AI and advanced technologies? What are some of the most pressing cybersecurity threats that have emerged in recent years? And if you could describe some recent cybersecurity incidents that might have reshaped the understanding of, or approach to, security on your side of the business, it would be great to hear.

Brian – 00:04:42: It’s a really good question. The cybersecurity space specifically is probably one of the fastest-growing areas for automation and automated threats. And it shouldn’t surprise us, right? When we start thinking about how much information is transiting just your own host network, it’s a lot. And adversaries gain an advantage by looking at the complexity of some of our networks and finding weaknesses in the chain mail, so to speak. So you’re looking at a full spectrum of different threats, not just on-network but off-network. You’re looking for insider threats. You’re looking for things that might just bog down performance but don’t actually do anything to harm your infrastructure or steal your data. The cyber threat environment has gotten considerably more diverse and complex. And into the future, I would argue, it’s probably going to be more automated. We’re likely to see generative AI being used in ways that are not good, primarily because it is already approaching a point where, if you are not a very savvy person but you know enough to open ChatGPT and you have an idea of what you want to do, like a particular exploit you want to build, you can ask ChatGPT to help you correct code. It’s actually quite good at that. So if you had a particular exploit you were trying to deliver, and as you were running through testing it wasn’t quite performing the way you wanted it to, you can use ChatGPT today to provide corrections to your code and then submit it. You don’t need to know how to code in Python. You don’t need to know how to code in Java. It’s there and available to you, and it will work. And that’s just today. Into the future, I think we could expect criminal organizations, or hackers on their own, to create automated arrays at sizes, scales, and volumes that today would be reserved for state-based actors. It can probably be done in someone’s garage and with a very low footprint. And with that, it’s going to be complicated for anybody who’s policing the cyber environment, be those national militaries or police forces. It’s going to be very difficult.

“When we start thinking about how much information is transiting just your own host network, it’s a lot. And adversaries gain an advantage by looking at the complexity of some of our networks and finding weaknesses in the chain mail, so to speak. You’re looking at a full spectrum of different threats, not just on-network, but off-network.”
Brian Drake
Federal Chief Technology Officer at Accrete AI
Not Another Bot

TJ – 00:06:54: One thing you touched on is knowledge graphs, right? Before joining Yellow, I worked for Neo4j, where we built a lot of graph databases and knowledge graphs, and cybersecurity was such a massive use case, given how easy it was to look through the context of the data and map out the entities clearly so as to understand the different iterations of those cyber attacks. And I’m glad you called out how generative AI could bring the broader automation the industry needs today to handle these threats in a much more modern fashion, because beyond generating alerts, this is automation that also helps find solutions. That brings me to the next question: how can AI be exploited by malicious actors today, and how can we guard against such threats as we bring more AI and generative AI into cyberspace?

Brian – 00:07:52: Ironically, automation will become more malicious, but it also can help us. Today, I see a lot of opportunities for surveillance capabilities on the network to look for threats that are coming at us right now, with fingers on keyboards directing them. The old model that we have been using in SOCs across the country is very human-driven, right? You have a staff of folks sitting in a room watching audit logs or watching network activity, making policy decisions about whether or not a particular activity violates policy, adjusting those policy decisions, granting access, denying access, interrogating things, across three tiers of operation. That’s just a rule-based set. We’re at a point, on just the machine learning side of things, where you don’t have to do that anymore. You can automate much of that activity and really elevate the skills of those folks sitting in the Security Operations Center. By doing that, you’re going to be able to focus on things that might be considered more strategic threats, things that might be evolving toward your company’s or your organization’s equities in a way you need to pay attention to. So, for example, say you are sitting in a Security Operations Center and you have a tier-three investigation looking at what seems like a lot of activity coming from, I’m just going to pick a country, like Iran. You see a lot of cyber activity coming from Iran, and it seems to be oriented around Oracle tech. Well, why is that? Why are they going after that? What do they know that we don’t know, right? There’s an investigative element that comes behind that to then say, oh, there was a zero-day exploit pushed yesterday off of this dark web forum. And now you start to understand: ah, got it. They know that we have that, and now they’re going after that vulnerability. Now you can start to figure out: okay, can I just patch it? Do I have to shut off traffic? Or do I need to make a procurement decision, right? Do I need to move off of that Oracle stack onto something else? Those are all very valid questions, and usually those types of inquiries take months. But with the kind of data you can collect on-network and off-network to inform that decision, and with automation assisting you in the forensic piece of it, you can shrink that down to a couple of days. And when you do that, you can empower your decision makers, the folks that have control over your budget and staff, much more quickly, and close gaps much more quickly. Now, I say all that to then say that’s the first step. There’s going to be another evolution beyond that. Once we get to the point where we are automating those processes and keeping pace, there will have to be a leapfrog beyond that: what are you going to do to prevent an attack from coming at you before it ever reaches your network? And my company, Accrete AI, is actually working on solutions that do that sort of stuff.
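
As a minimal sketch of the rule-based triage Brian says can now be automated, consider the toy policy engine below. The event fields, rules, and thresholds are invented for illustration; a real SOC would drive these from its own policies and, increasingly, from learned models.

```python
# Hypothetical sketch of rule-based SOC triage: the kind of tier-one
# decision-making that is a first candidate for automation.
from dataclasses import dataclass

@dataclass
class NetworkEvent:
    source_ip: str
    country: str
    target_service: str
    failed_logins: int
    bytes_out: int

def triage(event: NetworkEvent) -> str:
    """Return an action for an event, mimicking a tier-one analyst's rulebook."""
    if event.failed_logins > 10:
        return "deny-and-escalate"    # likely brute force: block, send to tier two
    if event.bytes_out > 50_000_000:
        return "interrogate"          # large egress: possible exfiltration
    if event.country in {"IR", "KP"} and event.target_service == "oracle-db":
        return "escalate-tier-three"  # matches a strategic pattern worth investigating
    return "allow"                    # no rule fired: normal traffic

events = [
    NetworkEvent("203.0.113.7", "IR", "oracle-db", 2, 1_200),
    NetworkEvent("198.51.100.4", "US", "web", 37, 900),
]
for e in events:
    print(e.source_ip, "->", triage(e))
```

Automating this rulebook is what frees the humans in the SOC for the investigative, strategic work Brian describes next.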

TJ – 00:10:32: Amazing. Really, really thoughtful answer, again. And how effective are the current cybersecurity policies and regulations that are in place in ensuring the security of sensitive data and systems? From what we have learned, at least the basics and to an extent the intermediate aspects of cybersecurity, it’s been good. But are there any specific changes you foresee? And also, what role do you see generative AI playing in shaping the future of cybersecurity policies?

Brian – 00:10:59: So when we talk about the policies and the legal authorities that we have in place, I would say that we have a long way to go. And the reason is because in America, we think about different segments of our economy and our government as independent activities, right? We don’t think about how institutions of higher education are vulnerable to a cyber attack that might impact the government. We don’t think about how industry may interact with that university as well. It’s all very stovepiped, so to speak, from a policy perspective. Our adversaries do not think that way. Our adversaries think in such a way that they see opportunities to attack our nation and our economic interests across the full spectrum. They see universities and the private sector and the government all interwoven together, and in that interwoven attack space, they look for vulnerabilities to go after things that serve their national interests. So let’s say they are interested in advancing their artificial intelligence. They go and target where those researchers are collecting federal money. And when they get to those researchers, they figure out: ah, this researcher is working on something that’s headed into the Defense Department. Once they get to that point, they are ahead of where an embryonic technology is about to go. They can understand it, they can exploit it, they can steal it, they can also compromise it. So that investment of U.S. taxpayer dollars just goes down the tubes. The reason why this is important is because our policies only reflect those stovepipes. They don’t reflect the full spectrum of attack that we are facing from those adversaries. Now, CISA is making some good improvements in this regard; they’re starting to think about this in a more full-spectrum way. But we still have a long way to go. So I would like to see us better incorporate and think about how those different partners work together.

“We could expect criminal organizations, or hackers on their own, to create automated arrays at sizes, scales, and volumes that today would be reserved for state-based actors. It can probably be done in someone’s garage and with a very low footprint. And with that, it’s going to be complicated for anybody who’s policing the cyber environment, be those national militaries or police forces.”
Brian Drake
Federal Chief Technology Officer at Accrete AI
Not Another Bot

TJ – 00:13:08: Well, that was a great answer, Brian. And just to follow up on that discussion: how effective are the current cybersecurity policies and regulations that are in place in ensuring the security of sensitive data and systems? And what sort of role do you see generative AI, or AI broadly, playing in shaping the future of cybersecurity policies?

Brian – 00:13:29: Yeah, the current policies are okay, but they don’t really understand or accommodate the way our adversaries are attacking us. Our adversaries think about going after our most sensitive secrets in a very full-spectrum way, and that includes things like national secrets, economic secrets, even research and engineering. So when they think about going after those targets, they look for interconnections: how research universities are connected back to the private sector, how the private sector is connected to the government, how the government is connected back to the university. They look at all those pieces, and our policies really treat each of those as discrete areas. Now, CISA has actually been doing some interesting work to start to unify databases, because it’s not just where information is coming from across those three vectors, but also how state and local governments are interacting. So they’re trying to create a secure enclave to do all that, and that’s going to help us on the policy side. I think the other thing we need to think about is how industry can play a different, more positive role in addressing some of the exploits that our adversaries use in cyberspace. Presently, it’s against the law for a company to hack back at somebody who’s attacking them. The only thing they can do is play defense and notify law enforcement. There is a future in which certain low-level automated counterattacks, conducted only in response to adversary attacks, could be authorized by law, which would also deter adversaries from going after those particular targets. But presently, that’s not legal, and for good reason. Still, we are entering a space where there’s going to be so much coming at all of our most vulnerable assets that we’re going to have to rethink how that’s being done. It’s not reasonable to ask DHS or the FBI or the National Security Agency to go after these adversaries at scale. There’s just too much, because the adversaries are going to be using many more machines than we are, and we’re not going to have enough fingers on keyboards to really go after that problem. So we will have to readdress those things as the future comes to us.

TJ – 00:15:41: And understanding the current limitations of AI, and even generative AI, things like accuracy and hallucinations, for its effective use in cybersecurity: what are some of the common misconceptions you hear about what AI can actually do for cybersecurity? And how do you address those misconceptions with organizations today?

Brian – 00:16:03: Right. And this is a problem across the industry, not just in generative AI but in AI, even in machine learning before we get to true AI: there’s a strong belief that artificial intelligence is like a magic wand that you can wave and say, ba-ding, done, right? Whatever the problem is. And the truth of it is that AI is appropriate to fix some things, but completely inappropriate for other things. Today, when we talk about rote tasks that are fairly discrete and easy for a person to do, but where there’s a lot of volume and a lot of room for inaccuracy, those tasks are ideal for automation. So, for example, invoice processing. If you’re a big business and you receive a lot of invoices for services and goods, they come in different forms and formats. Today, a person reads each one and says, okay, this is our water bill for the week or for the year, and then writes a check through the pay system. Robotic process automation solutions, which are extraordinarily cheap, tens of thousands of dollars, can do that job and turn that person into a much more effective customer relations person, or someone who looks across the organization’s spend to find ways to save money. That’s a way of taking away a rote task and really optimizing that person’s performance toward something better suited to their talents. That’s easy. So people see that sort of thing and think, oh, if I can do that, I can do all this other super magical stuff. Well, not really. Whenever you approach a problem, some parts are good candidates for automation and some are not. What I find is that a lot of folks just think, oh, I can use automation for everything. Well, to a degree, but not for everything. There are still some things that require humans in the loop. One of the things I recently wrote about was the background investigation cycle, because we had this National Guardsman up in Massachusetts who was active on social media and was stealing secrets and so forth. There is a lot about that situation, and about the way we do background investigations for security clearances, that can be automated. That’s great, but it’s never going to replace the human investigator who goes and talks to the subject, their family, their friends, to really get a sense of who this person is and whether or not they can be trusted with national secrets. So there are pieces of that process that are awesome to automate, and others that I would never touch. That’s just one example of where some good can be done. We just have to be diligent about what we apply it to.
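
Here is a minimal sketch of the invoice-processing example, assuming a plain-text invoice with illustrative field formats. Commercial RPA platforms do this with far more robust extraction; the sample invoice, regexes, and auto-approval threshold are all invented for the example.

```python
# A toy version of invoice-processing automation: extract the fields a clerk
# would normally read, then auto-approve routine amounts.
import re
from decimal import Decimal

INVOICE = """
Acme Water Utility
Invoice #: 2024-0417
Amount Due: $1,284.50
Service: water, annual
"""

def extract_fields(text: str) -> dict:
    """Pull the fields a person would normally read off the invoice."""
    number = re.search(r"Invoice #:\s*(\S+)", text)
    amount = re.search(r"Amount Due:\s*\$([\d,]+\.\d{2})", text)
    service = re.search(r"Service:\s*(.+)", text)
    return {
        "invoice_number": number.group(1) if number else None,
        "amount": Decimal(amount.group(1).replace(",", "")) if amount else None,
        "service": service.group(1).strip() if service else None,
    }

fields = extract_fields(INVOICE)
if fields["amount"] is not None and fields["amount"] < Decimal("5000"):
    print(f"Auto-approving {fields['invoice_number']} for ${fields['amount']}")
else:
    print("Routing to a human for review")  # keep a human in the loop for exceptions
```

Note the fallback branch: even in the easy cases, exceptions route back to a person, which is exactly the human-in-the-loop boundary Brian draws.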

TJ – 00:18:37: Brilliantly explained. And I think that’s the same thing we are hearing with the hype around generative AI versus its practical applications in real life and real time. The whole focus is, let’s take it and put it in all our use cases across different industries and then see how it works, more of a spray-and-pray approach, compared to really going through a process, which is what you explained: not all tasks need to be automated, and some of them will need a human in the loop and beyond. So very well said. Given the significant role of AI in cybersecurity, what are some of the ethical considerations we must bear in mind, and what potential violations should we watch for? And how do we really ensure the responsible use of AI in cybersecurity applications?

Brian – 00:19:05: Obviously, the first thing that comes to mind is privacy and civil liberties. Take the example I just gave, about someone who has a national security clearance. As a security clearance holder myself, I sacrifice many of my common privacy rights to the federal government in order to have that privilege. It means the government gets to see things like what my credit report looks like. They get to see my social media activity. They get to see anything that I’m doing in my personal life. I have to do an interview. I have a polygraph. These are all things that I agree to. And there are programs like the Continuous Evaluation Program, which use that information to discern whether or not I am still a clearance holder in good standing who can be trusted with secrets. That’s something we have all agreed to, and I would say it is the exception, not the rule. But the way the private sector is trending, it may become the rule. Today, if you are a very private person and you don’t want to give up that information, then you choose not to have a security clearance and you take a job at another place where that isn’t required. But because of a lot of factors (insider threats, violence in the workplace, sexual harassment rules), a lot of people are starting to ask: what can I do to discover whether someone is a physical threat to my workers, and what do I need to know about that person in order to make an employment decision or better protect the workforce? And in today’s media environment, unfortunately, there is not a lot of grace extended to, say, someone in a security position at a private company where an employee was talking online about murdering people. The questions come around to: why didn’t you see that? Why didn’t you detect it? Well, the answer is quite simple: they have privacy rights. It’s not a condition of employment for us to dig into their personal lives to find that sort of thing out. Moreover, it’s a lot of data, and you would need automation to go through that data to really discover that sort of thing, because the internet is pretty big. Social media is pretty big. There are lots of places that can happen. All of which is to say there could come a time in the future when tolerance for that answer just doesn’t fly anymore, when for the threat of violence against workplaces and schools in particular there’s just no appetite. We as a society may choose to say, I’m going to sacrifice quite a bit of my privacy in favor of child safety in schools, or in favor of workplace safety. That’s a potential reality. And if that is the future we’re headed toward, then we have to move toward it in a way that doesn’t let those capabilities be used in ways they weren’t intended. Take political belief systems, for example. If I post something on a Twitter feed that has to do with my political leanings, and that’s used against me in a workplace-safety kind of context, that’s not what the system was for. So if we choose to go down that pathway, we’re going to have to think about it more carefully. So that’s one. The other thing is that there are some big, big issues around algorithmic justice, right? How do we understand the world around us, and how do we think about progressive tax systems or healthcare? How do legacy economic issues come back around and impact at-risk communities?
There’s a lot of work to be done there that I don’t think has gotten enough attention just yet. And then the last piece, which I spent a lot of time on in my former life at DIA, is how automation affects our posture in national security. In other words, in the American way of warfighting, I hope, and I can be proven wrong, that we never pair an automated solution with a lethal weapon system. I hope we are moving toward a future where we keep those things separate and a human is always in the middle. Our adversaries don’t think that way. Adversaries like Russia really don’t care about whether or not they’re fighting an ethical or moral war. They think about winning the war through any means possible, which is why you see some of these behaviors in Ukraine that are very disconcerting: bombing hospitals, bombing schools, going after field hospitals where we know innocent people are being treated, just because it bogs down the military apparatus to treat the wounded and creates an inability for the Ukrainians to field more soldiers and return them to the battlefield. When we are facing an adversary like that, we should be very concerned about their intention to marry automated targeting to lethal systems, because their rules of war will not conform to ours. We could reach a point where the speed of war and the speed of combat exceed human cognition. That means we could have very, very devastating conventional wars that happen very quickly, with no way to turn the temperature down and soothe people. Those are the kinds of futures that I’d like us to think about more carefully.

TJ – 00:24:32: To follow up on what you just said, maybe one level deeper: how can we create AI systems that are really transparent, fair, and accountable? What sort of measures are we taking today, what do you foresee, and what suggestions do you have for what we should be building so that these systems can be more transparent and fair?

“Ironically, automation will become more malicious, but it also can help us. Today, I see a lot of opportunities for surveillance capabilities on the network to look for threats that are coming at us right now, with fingers on keyboards directing them.”
Brian Drake
Federal Chief Technology Officer at Accrete AI
Not Another Bot

Brian – 00:24:53: So, this is a great example: we’re having a dialogue around this type of issue, and we’re starting to reach the horizon of that conversation. The next step beyond that is for industry to have a different conversation with government around these issues. And I would argue that conversation has to be a little more comprehensive, in the sense that when industry usually talks to government, it’s about trying to secure a contract and secure a sale. That’s usually the context in which those conversations happen. But if you look at, say, the Defense Innovation Board, they’re starting to have conversations that are larger in scope, more about: what is it that you all are doing out here? Why are you doing those things? What’s the market opportunity? Those are all good questions to ask. Sometimes the answer is very innocent, and that’s fine. Other times there are folks doing things where you might go, I’m not sure that’s a good idea. It may be that they think there’s a future market for a particular capability and that’s what they’re building toward. Or sometimes it’s, well, we don’t know, we’re just going to see if we can. That’s not often a good impulse, especially when you don’t really know what you’re building and you’re just moving in that direction. So when we start thinking about what that dialogue is, it may not be tied to a monetary incentive for industry or a capability acquisition for the government. It might just be around the questions: what are you doing, why are you doing it, and where are you? And government’s role shouldn’t be to stop it from happening, but more to acclimate industry partners to the concerns the government might have and the kinds of assurances it might want, to be sure we’re not going down a road we shouldn’t go down. It’s a different type of conversation than trying to get to the point of building a weapon system. That’s really not what we’re talking about; we’re talking about very early research and development that might eventually lead to something we do care about. Government has to have the courage to have those conversations with industry, and industry has to understand that this is not about money; it’s just that you’re doing a good thing for the country. And to circle back around to the first questions you asked about cybersecurity, that is also a piece of it. If you’re developing a technology that you have a benign use for, but that could be used for a malicious purpose, what does your cybersecurity posture look like? Who’s in your company? Where do you take money from? If you have money coming from, let’s say, China, and a lot of these ventures do, what kind of access does that give the Chinese to your intellectual property? What kind of access does that give them to the source code? Would they be motivated to take it and put it into a weapon system we don’t want to see, or into something that suppresses civil liberties, like with the Uyghurs? Those are the types of things that I don’t think everyone in industry is thinking through.

TJ – 00:27:49: That’s so true. Everything is built on the data that has been provided, whether to a sophisticated machine learning model or to different algorithms, so that sort of access could be so damaging and threatening in many ways. Brian, could you discuss a little more the role of cyber threat intelligence in a proactive cybersecurity strategy? And how can AI and machine learning enhance cyber threat intelligence?

Brian – 00:28:16: This is actually an area of research we are involved in right now at my company. We are looking at where malware is moving worldwide and tracking where that malware decides to attack. In doing so, we are able to identify malware that infects hosts 90 days in advance of its formal cataloging by some of the major antivirus software companies. By doing that, we are getting into that window of the future I was talking about. Today, cybersecurity is about perimeter defense, policy enforcement, and tier-one to tier-three escalations. We’re looking beyond that horizon to see where malicious actors are ideating around the creation of malware, and where they are starting to deploy and test it, so we can inform our clients: hey, there’s an attack vector that hasn’t come your way yet, but your server stack or your posture or your firewall config looks exactly like this other company’s, and they’re coming for you next. That’s how a lot of these actors do the work today, and we’re building toward that future because we expect it to be automated as well. You’ll have folks with considerably lower technical skills creating these arrays, using malware that they’ve probably used generative AI to create, and then testing that attack against lots of different places. And the second it succeeds, they’re coming after the things we care about: banks, critical infrastructure. But they test it first. That’s just how they roll. We can expect that to continue in the future, so we are starting to build toward that future and better protect those who might be at risk.
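
Here is a hedged sketch of the “your posture looks like the last victim’s” warning Brian describes: compare a client’s security posture to a recently attacked one and flag close matches. The feature set, the Jaccard similarity measure, and the threshold are illustrative assumptions, not Accrete AI’s actual method.

```python
# Flag clients whose exposed posture closely resembles a recent victim's.
def posture_features(config: dict) -> set:
    """Flatten a security posture into comparable feature strings."""
    feats = {f"port:{p}" for p in config["open_ports"]}
    feats |= {f"stack:{s}" for s in config["server_stack"]}
    feats.add(f"fw:{config['firewall']}")
    return feats

def jaccard(a: set, b: set) -> float:
    """Overlap of two feature sets: 1.0 means identical postures."""
    return len(a & b) / len(a | b)

victim = {"open_ports": [22, 443, 1521], "server_stack": ["oracle-db", "nginx"],
          "firewall": "vendor-x-v2"}
client = {"open_ports": [443, 1521], "server_stack": ["oracle-db", "nginx"],
          "firewall": "vendor-x-v2"}

similarity = jaccard(posture_features(victim), posture_features(client))
if similarity > 0.7:  # illustrative threshold
    print(f"Warning: posture is {similarity:.0%} similar to a recent victim's")
```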

TJ – 00:29:56: Moving a little bit into a specific area: supply chain, right? With everything that happened just a few months back, with the entire supply chain being disrupted, it has been a hot topic for several years, but even more so last year and now. Supply chain attacks have been growing. What steps can organizations take to secure their supply chains? That’s one. Can you discuss some notable supply chain attacks, and the lessons organizations might have learned in solving for them? And obviously, how AI can help in detecting and preventing these sorts of attacks.

Brian – 00:30:31: We should start by talking about what we mean by supply chain, because we’re now talking about software supply chains as well as hardware supply chains, food supply chains; there are lots of pieces to that, and all of them are treated very differently. We do have a supply chain solution that looks at business relationships and how those relationships connect to points of vulnerability. So if you are a major weapons producer, the question you might ask is: how many of my components come from China? Good question. And of those components, which ones do I need to care about? If you’ve got, let’s say, silicate that’s being purchased in China and used for your wafers, is that really a problem? Probably not. There are probably lots of other providers for that silicate; you don’t need to get it from China specifically, you probably just get a better price point there. But what happens if it goes away? Where do you go next? Those are the types of things we start informing. Software supply chains are the new thing people are starting to focus on, which is also very interesting. I’m going to lean a little forward here and say I think we’re focused on the wrong piece of the problem. Looking at where a piece of code is made and how it propagates into a piece of software or firmware is an important question; I’m not going to say it’s not. But I actually care more about what it does. I care about what it’s doing once it’s been deployed. Is it running assembly language for a SCADA system, and is it vulnerable, and was that vulnerability implanted by a malicious actor? That I want to know about. If it’s not that, if it’s assembly code that runs fine and is secure, do I care where it came from? So that puts more of the onus back on certification and assurance that what has been built does what it’s supposed to do. And I do worry, and I think the government has had this worry for a while, that when we see open-source code we say, oh, well, it’s open source, there are lots of eyeballs on it, it must be secure. For the most part, it is. But I think we have to interrogate that a little bit and make sure we are fully scrutinizing it, not taking it on a just-trust-us basis. That’s where AI can play another productive role. We can use automation to test very complex software platforms in simulated environments that look exactly like what they would be if you deployed them on a production floor running a bunch of Siemens equipment, run multiple different scenarios with assurance, and even attack the software in those simulated environments. That’s the type of thing I’d like to see us try: up our game in modeling and simulation around software security, rather than trying to read code and discern whether or not it’s safe.
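
As a minimal sketch of that behavior-over-provenance idea, the harness below exercises a built artifact against simulated scenarios and judges it by what it does, not where its code came from. The target binary, the scenarios, and the expected responses are hypothetical; real modeling-and-simulation environments are of course far richer than a stdin/stdout check.

```python
# Behavior-based assurance: run the artifact under test against simulated
# inputs and flag any scenario where the observed behavior deviates.
import subprocess

SCENARIOS = [
    {"name": "normal-telemetry", "stdin": "TEMP:72\n",  "expect": "OK"},
    {"name": "out-of-range",     "stdin": "TEMP:999\n", "expect": "ALARM"},
    {"name": "malformed-input",  "stdin": "\x00\xff\n", "expect": "REJECT"},
]

def run_scenario(binary: str, scenario: dict) -> bool:
    """Feed one simulated input to the artifact and check the observed behavior."""
    try:
        proc = subprocess.run(
            [binary], input=scenario["stdin"],
            capture_output=True, text=True, timeout=5,
        )
    except subprocess.TimeoutExpired:
        return False  # a hang is itself a behavioral red flag
    return scenario["expect"] in proc.stdout

if __name__ == "__main__":
    for s in SCENARIOS:
        ok = run_scenario("./scada_controller", s)  # hypothetical artifact under test
        print(f"{s['name']}: {'pass' if ok else 'FLAG FOR REVIEW'}")
```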

TJ – 00:33:12: Yeah, understanding how the application can be compromised and in what ways we can resolve it is super critical. And building a strong security culture within an organization is crucial, right, Brian? How can organizations foster a more robust cybersecurity culture, and how can they ensure employees at all levels understand and adhere to cybersecurity protocols? What are some of the things you would like to share in this regard?

“There’s a strong belief that artificial intelligence is like a magic wand. The truth is that AI is appropriate to fix some things but completely inappropriate for others.”
Brian Drake
Federal Chief Technology Officer at Accrete AI
Not Another Bot

Brian – 00:33:41: The weakest link in the chain, of course, is always the people. So when we start thinking about cybersecurity in a full-spectrum way, we should start with things like: what kind of business relationships are you brokering? Who are those people, and how do you know they’re real? And what kind of information do you share with those folks? I don’t know if this happens to you, TJ, but it happens to me: I get approached by a lot of folks overseas who are looking to have business relationships. Some of those relationships are well trusted, and we can rely on those folks to do the work. Others, I wouldn’t say I could have full faith and credit in them, so I have to very carefully scrutinize whether or not we’ll use a vendor that is overseas, whether that’s a data provider or a software developer or whatever it is. So that’s where you would start: thinking through those types of things. Who are you talking to? Why are you talking to them? What kind of opportunity is there for you, and is it really worth it? Are there risk factors you need to consider that go beyond just securing the next deal? So you start with people. Then it also comes down to things like good cyber hygiene, just the stuff you do day to day. Don’t click on links that get emailed to you. Have conversations with people you know. When you see something suspicious, send it to your SSO or your CISO so they know what’s going on and can weigh in. Because right now it’s a human-driven enterprise, and that’s what we’re going to have for a while. I think into the future that’s going to change a little bit, but humans are usually the best sensor for that sort of stuff. Once we get to the point where we have better perimeter defense and more automated solutions, maybe we won’t have those concerns as much anymore. But it has been true since the ’90s: humans are the best sensors. So making them good sensors is probably the best thing you can do right now.

TJ – 00:35:30: Brian, thanks for your time today. I truly appreciate you spending this much time explaining the entire landscape of how AI and generative AI can help the cybersecurity space, and the concerns around it. Thank you so much for your time today; I look forward to hearing more from you, and I hope the audience gets a lot of insights from this conversation. Thank you so much.

Outro – 00:35:54: How impactful was that episode? Not Another Bot: The Generative AI Show is brought to you by Yellow.ai. To find out more about Yellow.ai and how you can transform your business with AI-powered automation, visit Y-E-L-L-O-W dot AI. Then make sure to search for The Generative AI Show in Apple Podcasts, Spotify, Google Podcasts, or anywhere else podcasts are found, and click subscribe so you don’t miss any future episodes. On behalf of the team here at Yellow.ai, thank you for listening.
