In this episode, David Linthicum, author of over twelve books on enterprise computing and the Chief Cloud Strategy Officer at Deloitte Consulting, joins host TJ, VP of Product Marketing at Yellow.ai, to discuss the game-changing role of generative AI in cloud computing, including the shortage of skilled talent and the need for industry-specific use cases. The duo also shares valuable insights on choosing the right cloud providers, protecting data and privacy, and balancing digital transformation with human connections.
Intro – 00:00:03: Generative AI takes the center stage. But is your enterprise still watching from the sidelines? Come on in. Let’s fix that. This is Not Another Bot: The Generative AI Show, where we unpack and help you understand the rapidly evolving space of conversational experiences and the technology behind it all. Here is your host, TJ.
TJ – 00:00:26: Hello and welcome to Not Another Bot: The Generative AI Show. I’m your host, TJ. Joining me today is David, a globally recognized thought leader, innovator, and influencer in cloud computing, AI, and cybersecurity. David’s impressive career spans over 35 years, having been a CTO five times for both public and private companies and a CEO twice. Not only is David the current Chief Cloud Strategy Officer at Deloitte Consulting, but he’s also a prolific author with over 13 books on computing and more than 7,000 published articles. His newest book is called An Insider’s Guide to Cloud Computing. So whoever is listening to us, definitely give it a read. With over 700 conference appearances, as host of the successful The On Cloud Podcast, and as creator of over 50 online LinkedIn courses, David has dedicated his career to constantly teaching businesses how to use resources more productively and innovatively.
David’s passion for educating and empowering the next generation of cloud professionals is second to none, as he currently serves as adjunct instructor for Louisiana State University and a mentor for Deloitte’s Cloud Talent Development Program. Welcome, David. Absolutely excited to have you here.
David Linthicum – 00:01:41: It’s great to be here. I hope I can live up to the expectations of that long bio you just read. So we’ll see.
TJ – 00:01:47: You will for sure, David. We’re all looking forward to this podcast anyway. All right, David, the way we start, we just want to know more about you first, right? So, can you start by telling us how your passion for computing, specifically cloud computing, came about? And how did this passion shape your career journey?
David Linthicum – 00:02:05: I was always doing computers as a hobby when I was younger, which, you know, I’m almost 62, was tough back then. So building Timex Sinclair 1000 computers and putting together CP/M systems, early DOS systems, things like that. And I probably focused a bit on it in college. But when I got out of college, that’s when I really started the journey into computing. That first job out of college was actually as an AI programmer. I was a LISP and ML programmer back in the day, and I actually taught it at the community college as well. So my first exposure to AI came right out of the gate, but then I got into all the other trends that were going on: client-server, service-oriented architecture, enterprise application integration, which is an area that I helped develop back in the 90s. And then I really saw that through economies of scale, we could leverage these very advanced systems through a shareable infrastructure, which was something that was not very popular at the time. I think most people thought it was a dumb idea, but I actually started writing early papers, Project Archangel, back when I was at Saga Software, which was about building infrastructure-as-a-service computing. I got absolutely no interest in it. I think I published it in IEEE or somewhere, had some investors come in, and started to build companies in those directions. At that time, we were seeing the early rise of software as a service, and they were called application delivery networks, and really the early rise of cloud computing in terms of integration as a service, things like that, but nothing that really took off. So I rode that through the whole painful time, from about the year 2000 to about the year 2008, and worked for different cloud computing companies with different levels of success. There wasn’t a lot of interest in the industry. At the time, AWS started to build out their cloud.
Salesforce.com became more of a household word. And so it started to become popular. And then in 2008, the whole thing just lit on fire in terms of where cloud was going. NIST gave it some names, and I was actually involved with that, in terms of how we categorize things: software as a service, infrastructure as a service, platform as a service, private cloud, public cloud, that kind of stuff. And I just really hung on, trying to figure out how to lead thought in the space, including how architectures are going to work, how to leverage this stuff successfully, how to use it in different business scenarios, and trying to get ahead of the market, figuring out not only where the market is going but actually leading thought. In other words, helping people set expectations for what they can leverage this technology for, where it works and where it doesn’t. And I think there’s been a fair amount of success there. I think everybody has a tendency to chase the marketing messages more so than the pragmatic messages, so we’re kind of reeling from that. And it’s working with a lot of clients and customers in terms of what they can expect this technology to do, where the value can be found, and what the realistic take on this stuff is. Now with the whole generative AI stuff, it’s AI all over again, but this time with huge amounts of potential in the marketplace. These systems are amazing for what they’re able to do.
And the great thing about this is people can see it and touch it. Think about AI when I worked on it back in the mid-80s: you want an AI system, that’s fine, 30 million bucks, and you can start playing with it. And it wasn’t very impressive. Where today, you know, it’s just a matter of linking into a cloud-based system and starting to use it. But now we’re at the beginnings of, I wouldn’t say a second, but a third, fourth revolution in terms of how we’re going to leverage information differently and really take things to the next level. So cloud computing becomes the enabling platform to make that happen, because it makes it cheap enough and available enough for people to leverage it. And that’s the real game changer in the world of AI. Otherwise we’d still be dealing with the $20 million systems to get something up and running, and that won’t scale. Every small business can punch above its weight. Everybody can leverage this technology for different aspects of it. Any startup can leverage this technology. It’s completely affordable, if not free in many instances. And it’s going to be a revolution in terms of our ability to weaponize this stuff, to take industry to the next level and change the expectations of what a client experience is and how we experience technology, products, and services. And so here we are. It’s very much like cloud computing was in 2008: on the cusp of building something on top of the infrastructure of cloud computing that’s looking to take the whole thing to the next level. So it’s focusing on all layers of the stack, from the primitive cloud services (storage, compute, databases, things like that) all the way up to the very sophisticated generative AI services that we’re building today. How are we going to operate this complexity? How are we going to make it work? How are we going to leverage it?
And the number of questions we have about how to use this technology properly has gone from maybe 10 miles an hour to 100 miles an hour. So right now, I’m focused on the next 10 years, trying to get in the middle of this and figure out where the patterns are, what people should be leveraging, and where to be leading thought in
that space. And more importantly, how people can make money at this stuff and build a business around it. I think that’s core to everything. And also, by the way, how it can improve our lives. People should look at the ethical issues with AI technology. At the end of the day, it has to make our lives better, or else it’s not worth us messing with.
TJ – 00:07:22: Totally. Well, that’s a wonderful introduction. And thanks for calling out the whole cloud computing revolution. I started around 2009, 2010, working on Citrix XenApp, XenServer, and beyond. We started seeing virtualization in the cloud becoming a reality, and then the initial days of Azure, day in, day out. So I can totally relate to the complexities, the way people saw the different offerings, infrastructure as a service, PaaS, and SaaS, and how those terms evolved over a period of time. So it’s kind of a trip down memory lane. But thank you so much for also calling out how AI’s adoption has increased. Now, on those same lines: generative AI has become, I would say, a hype, a buzzword, but there are also companies like us who are really deploying solutions for customers, in production, using our platform, completely powered by generative AI. So we know that there is a humongous need and also an impact. But we’d love your point of view on how transformative generative AI will be for enterprises, given its ability to reform cloud computing solutions. And also, David, which industries do you think will benefit from this? All industries, or a few you could call out, and why?
David Linthicum – 00:08:42: No, I think it will be industry-specific value. Because if you look at the value of generative AI, or AI in general, it really goes to what the industry is doing and your ability to weaponize it in a positive way. So manufacturing companies can get into it in a lower-value way: optimizing supply chains, the ability to automate factory floors, things like that, making some good bets and becoming more productive and efficient at what they’re doing. So they’re going to get some incremental value. Whereas financial institutions, with the use cases that they have, the ability to pick stocks, the ability to manage financial transactions, digital currency, all that kind of stuff, a huge amount of value can be gained. There it’s just a matter of them building an LLM, a large language model, that’s going to be better than their competitors’ LLMs, which will take the business to the next level and have huge value, multiples of 1,000, 10,000 times the amount of money being spent. So to the point you made earlier, I think who finds the incremental value in this stuff really depends on the applications that they’re going to be able to put forth and solve with the technology. And that depends wholly on the industry. Like I said, some are going to have lower-value gains; manufacturing is an instance of that, maybe retail. Some are going to gain higher value: certainly finance, any of the financial organizations, insurance, banks, things like that are going to have huge advantages they can gain from this technology. And other industries like healthcare will be able to gain from this technology, enabling them to make better diagnostics and better treatments in more effective ways, which will allow us to live longer and better lives. And so we’re going to benefit from that as well.
But as far as financial benefits and business benefits, the financial groups are the ones who are really going to knock the ball out of the park. And they’re investing heavily in generative AI technology now, tooling up for it. I don’t think they have anything really to write home about yet, something I can point to and say this is a killer application for gen AI, but they’re going to get there pretty quick. And there’s stuff that they’re just using for tactical purposes, using RPA, in whatever industry they’re in, to get incremental gains in what they’re doing, perhaps eliminating some humans that are doing some of the information processing now. I think that’s going to be a big push in the next few years. But as far as knocking it out of the park, that’s going to be in areas where they’re able to take information and turn it into revenue, and finance is the big one there. So it’s going to be interesting to see how people are going to gain from this technology, because all of them are going to benefit. A rising tide raises all ships. However, some ships are going to move very quickly based on the applications that they have and their ability to weaponize this technology. But there will be some companies that will not realize any benefits at all. Some of my old clients, they’re tire manufacturers in the middle of Ohio, and they’ll have some use for it, but it’s not going to really change or revolutionize the way they do business. Maybe they get to a better supply chain, things like that. But a lot of that’s going to be provided by outside suppliers that are able to leverage generative AI for their value and capability. So everybody’s going to benefit, but not equally. And I think that you’ve got to look at that as you make the investment. So for the financial industry, it makes sense for them to experiment and fail fast now to try to figure out where this technology is going.
Other industries, probably not so much. I hope healthcare invests in it, because I think they have a lot of automating to do and the ability to get lots of things right, and to make that experience much better for people who are users of the healthcare system, which we’re all going to be at some point.
TJ – 00:12:08: Yeah, absolutely. And I think one of the things that we also understand, which you touched upon while landing the narrative, is the understanding of the use cases and which specific gaps enterprises should be filling where they will benefit from generative AI precisely. Now, keeping that in mind, what are some of the hurdles that will be, or currently are, preventing the mass integration of generative AI in the cloud computing and cloud security space? And how can these enterprises overcome those challenges?
David Linthicum – 00:12:45: Yeah, the biggest hurdle is skills and training. They’re just not out there. People who can do generative AI architecture, in the cloud or otherwise, just aren’t there; most don’t know the first thing about how that’s done. I noticed that my generative AI cloud course out on LinkedIn Learning is just getting all kinds of hits, because people see not only that the skills are needed and that people need training in that technology, but the ability to remarket yourself as someone who’s able to do that. And unfortunately, I think it’s a little bit beyond just taking a course, even if it’s learning generative AI running on AWS and Azure, things like that. You have to have some pretty deep skills in how you leverage this stuff effectively: the ability to build models, the ability to deploy models, the ability to understand supervised versus unsupervised training, the ability to identify training data, and also the ability to deal with the legalities of it all. If you’re leveraging generative AI that’s built on other people’s IP, just restating the IP in an LLM doesn’t necessarily get you away from IP infringement, and you need to understand the laws and legal challenges that are out there. There are not that many people who can do that right now. Even the AI specialists haven’t really retooled for the generative AI stuff; they don’t understand the differences. Cloud architects are just focusing on cloud architecture, not necessarily how generative AI is going to be built around those systems and the ability to get into different data architectures and things like that. So it’s a deep set of skills: not only people who understand the ins and outs of generative AI, but people who understand how data consumption works in generative AI, how the network bandwidth is going to change, how the database technology is going to change, and how to leverage these cloud platforms in an optimized way.
So that you’re not spending a million dollars a month on this technology. That can’t happen either. And so that’s what’s hindering things right now. It’s not the technology. The technology has been outstanding; we’re able to do things with it we had no hope of doing 10 years ago. Now it’s there for us to pick up. The hindrance is the ability to use the technology effectively. In fact, my largest concern right now is that people are hiring people who lack the talent, I think, needed to build and architect these solutions. And they’re going to misstep their way into generative AI, misapplying the technology, and then not get the value out of it. I think we’re going to see this big hangover in two or three years where people will be writing about how generative AI failed them, when it was really more self-inflicted wounds. We saw this in the cloud as well. People talk about how cloud is more expensive than on-premise, and I look at each of those scenarios: they weren’t using it effectively. They weren’t doing the refactoring of the applications that was needed to take things to the next level. And the same thing’s going to happen with generative AI, and that’s because we’re lacking the talent. So boards of directors are calling the CIOs and telling them: you need to move to this generative AI technology. It’s game-changing. We read about it in an airline magazine. We’re all in on this technology; go make it happen. If they can’t find the people, internally and externally, to make that occur, they’re going to settle for second-tier and third-tier talent, which many of them are, and move fast with the wrong people, make huge mistakes, lose a lot of money, and perhaps in some instances derail the business so it can’t recover.
So a lot of businesses are going to go away because their competitors were able to leverage this technology, weaponize it, and become a disruptor in the marketplace where they weren’t. And the core reason they weren’t is that they couldn’t find the talent, couldn’t invest the money, just didn’t have the creative and innovative capabilities to find the talent they needed and bring it inside, train them, hire them, things like that. And that’s the big fight that’s going on right now. I hate to say it boils down to HR and hiring, but it does. And it’s funny: it’s like we’re hiring people to get on board to ultimately do things with fewer humans. So we need the humans to move us to a more pragmatic state, which is still a few years off. That doesn’t mean we’re not going to find other jobs for them; those jobs are going to be better ones. But there’s a lacking sense of urgency there. And the big thing is, like I said, I think people are hiring second-tier, third-tier talent, and they’re making core decisions. I’m talking to a lot of people out in the industry who are making huge decisions for big organizations who really have no good idea as to where this stuff is going and aren’t able to take a pragmatic view on how to leverage this technology. And they’re going to deliver way under-optimized solutions. And they’re going to tell me, well, it works. And I say, okay, it works, but there’s no value being brought back to the business. You’re spending 10 times as much executing this technology. You made some mistakes that really need to be undone. That’s the core concern. But on the positive side, you will see some very innovative businesses that do get it, that do figure out how to hire the right people, train the right people, gain the right insights into the technology, know how to experiment with it, know how to fail fast. They’re going to end up leading their industry. I don’t care what industry it is. Banking, we talked about that, huge value there.
But take a manufacturing organization able to provide better customer experiences. We’re going to have automobile manufacturers that will allow you to configure a custom-made car, completely designed by yourself, and they’ll basically print it like a 3D printer in a factory. Those capabilities are there right now in terms of our ability to do it; what we don’t have is the ability to manage the very complex processes it takes to make that happen. Generative AI will provide that ability. Well, maybe not a car in many instances, but your ability to create custom homes, custom furniture, to do things that are really out of the realm today, where we have to pick things out of a static catalog we may like or not like and can’t design something completely from scratch. Lots of organizations are going to provide that capability. So you as a customer are able to design exactly what you want. You’re able to get it in record time. You’re able to see the process in which it’s built and manufactured. And by the way, they’re going to deliver it to you at half the price, because they’ve automated a lot of the expense out of it, whether it’s the supply chain or the humans it takes to build it someplace else. That technology is coming. So who’s going to benefit from that? How are they going to leverage it? Banking systems where we’re able to custom-design investments, the ability to set your investments to the perfect risk exposure you’re looking for, versus having a broker go, I don’t know, maybe that one, maybe that one, and pick the stocks for your portfolio. All that technology has been able to be done for the last 20 years. It’s just not been affordable, and it’s not been accessible to the business out there. Now it is.
Now we have unsupervised learning, able to take massive amounts of data and turn it into core decisions based on almost perfect information. That’s beyond revolutionary. So if businesses are going to weaponize that and take it to the next level, it has to be a higher priority. They need to figure out how to get the talent in-house to make this happen, and they need to understand that they have to have a culture of innovation to take things to the next level. Those things, I don’t think, have occurred to the point where they need to occur.
TJ – 00:19:31: So well explained, David. Spot on. I think the experiences that we’re building within the company will eventually impact the experiences that go externally to our customers. Because if people don’t know what they need to do and how to build these systems, how will you make anybody else successful? So rightly said. Now, keeping that in mind and taking some pointers from there: how does generative AI scale in cloud environments, and what kind of performance considerations are important for enterprises adopting this technology? As they take this journey, educating people within the company and hiring the talent, what considerations should they look into in cloud environments, precisely from a performance and scale perspective?
David Linthicum – 00:20:13: Yeah, you’ve got to remember, these are very data-intensive and process-intensive systems, and they typically leverage unique processors that may not be a part of every cloud environment: GPUs and TPUs. So this is about picking providers that are able to provide that different processing power, that have unique processors, and also the ability to even buy cloud services away from the brand names. We’re seeing AI companies starting to emerge right now that are just providing GPU capacity as a service, and huge data stores optimized for training data. So what’s going to take it to scale is your ability to pick the right platforms, ones that not only provide you with the processing power you need for your particular use of generative AI, but the ability to supply massive amounts of data, and to do so at a price point that’s going to be reasonable versus going into the big-name cloud providers and paying retail for that. And that may price it out of the business, because of the data. I don’t think people understand the scale of this: it’s many petabytes of information that are going to be needed to train these systems up. And it’s unstructured data, which has a tendency to be very inefficient in how it’s stored, and we’re not going to change the state of the data because we can’t afford to do that. So massive amounts of storage: how are you going to do it? What kind of cloud providers are you going to leverage to make that happen? That’s the core question. In many instances, on-premise may be a better option for some people, because if you look at the price of storage and the price of compute, it’s just crashed in the last five years. It’s dirt cheap. Cloud prices have pretty much stayed the same. But then again, you have some of these AI micro clouds starting to emerge, and those are attractive because, again, they’re clouds.
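As an editor's aside on the scale David describes, the storage arithmetic behind "many petabytes" is easy to sketch. All price points below are hypothetical placeholders for comparison, not quotes from any actual provider:

```python
# Rough monthly storage-cost comparison for a multi-petabyte training corpus.
# The per-GB prices are illustrative assumptions, not real vendor pricing.

PETABYTE_GB = 1_000_000  # 1 PB expressed in GB

def monthly_storage_cost(petabytes: float, price_per_gb_month: float) -> float:
    """Monthly cost of keeping the whole corpus in hot storage."""
    return petabytes * PETABYTE_GB * price_per_gb_month

corpus_pb = 5  # a hypothetical unstructured training corpus

# Assumed $/GB-month rates: retail cloud object storage vs. amortized on-prem disk.
cloud_cost = monthly_storage_cost(corpus_pb, 0.020)
on_prem_cost = monthly_storage_cost(corpus_pb, 0.004)

print(f"cloud:   ${cloud_cost:,.0f}/month")
print(f"on-prem: ${on_prem_cost:,.0f}/month")
```

Even with made-up numbers, the point survives: at petabyte scale, a few cents per gigabyte-month compounds into six figures a month, which is why the platform choice David raises matters before the first model is trained.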
We’re already leveraging a multi-cloud environment, typically leveraging multiple cloud brands; we just put them into the mix and leverage them through an abstraction layer. We leverage the data where it exists. They know and have pre-built processes for migrating training data, or leveraging training data where it exists, and building these LLMs directly in those systems. Those are the performance questions that need to be asked right now. So it’s not about making things perform; with enough money and time, I can make anything perform as well as you want it to. We can pay for access to CPUs and GPUs and TPUs and storage systems. But your ability to do so in an optimized way, where you’re not breaking the bank and lowering the value you’re getting from this technology, is the question everybody needs to answer. So understand that these are going to be hugely data-intensive systems. We can use specialized processors; any number of different processors can be applied to make these things successful. Obviously you want something that’s going to be optimized for running these things at scale. That’s going to be a certain type of processor now, but it’s going to change and emerge over time. The processor companies are already engineering processors specifically designed for generative AI systems. We’re going to start seeing those available; we are seeing some of them now, but in terms of hugely differentiating technology, that’s probably a year away. So you need to architect and build these systems to leverage an open processor architecture. If you’re writing something down to a specific processor, you’re going to find that it locks you into a hardware platform you may not be able to move out of, and that’s going to limit scalability. So think in terms of that.
What’s the abstraction layer between your generative AI system and the underlying hardware, including storage as well as processors? Of course, people never think about that. They always think about storage, writing down to some object storage API. But some systems are using native processor features that may be proprietary to particular platforms. In these early generative AI days, anything will work, but there’s a price-performance trade-off that has to occur, and the engineers really become business people more than anything else. They have to look at the value these things are able to bring. I can sit down and design probably the most advanced generative AI system in the world on a particular cloud provider, but it may cost 10 times as much as it should, even though it performs up to expectations and that kind of stuff. The big fear now is that a lot of people are running off and building these things without an understanding of where they’re moving to, how they’re going to make them scale in the future, and, the biggest question, how much they’re going to cost. Like I said, we’re going to have this big hangover in two years, and it’s not going to be because things didn’t work. Generative AI is very impressive, just like cloud is very impressive. But in 2022, people realized they spent about 2.5 times as much as they thought they would in cloud. I think that number is going to be five times, six times as much with generative AI, because of mistakes they’re making now in terms of design decisions and platform decisions that could easily have been avoided. That’s why, and I wrote the book An Insider’s Guide to Cloud Computing on this, my urge is basically an insider’s direction: how you can get through this technology to solutions that are optimized, that don’t just work.
That’s what everybody always tells me: they work. But you need something that’s going to bring the most value back to the business, and that’s how this game is going to be won. We’re going to spend billions of dollars, and even within companies, there are going to be hundreds of millions of dollars spent on hardware platforms to build this technology. That may be a viable risk. But in many instances, they don’t need to spend that much money and can get to the same functionality with a much more economically viable platform. So it is performance, but I think a lot of people understand how to engineer for performance; engineering for price performance is really where that game needs to be played.
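The "open processor architecture" idea David argues for, never writing model code directly against one vendor's processor or storage API, can be sketched as a thin backend-abstraction layer. The class and function names below are hypothetical illustrations, not any vendor's real API:

```python
# Minimal sketch of a hardware-abstraction layer: application code asks
# for "a backend" and never hard-codes a specific processor, so swapping
# hardware (or clouds) doesn't require rewriting the model code.

from abc import ABC, abstractmethod

class ComputeBackend(ABC):
    """Interface the generative AI system codes against."""
    @abstractmethod
    def device_name(self) -> str: ...

class GPUBackend(ComputeBackend):
    def device_name(self) -> str:
        return "gpu"

class CPUBackend(ComputeBackend):
    def device_name(self) -> str:
        return "cpu"

def pick_backend(available: set) -> ComputeBackend:
    """Prefer an accelerator when present, degrade gracefully otherwise.
    Callers never know or care which physical processor they landed on."""
    return GPUBackend() if "gpu" in available else CPUBackend()

# The same application code runs unchanged on either platform.
print(pick_backend({"gpu", "cpu"}).device_name())
print(pick_backend({"cpu"}).device_name())
```

The design choice is the one David names: the trade-off is a little indirection now in exchange for not being locked into a hardware platform you may not be able to move out of later.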
TJ – 00:26:03: Yeah, I think that’s something we saw as we built our own large language model and certainly did a lot of grounding, especially given we made it more dynamic in terms of workflow generation, given we’re a bot platform company. But I think the biggest thing is that while using the power of the backend, the hardware and the cloud computing, it’s actually pretty expensive to host. For a customer, it’s a great experience; they’re able to quickly build an entire automation journey. But on the backend, it’s a lot of heavy lifting. So very rightly said. Now that we’ve talked about performance and scale and a lot more about generative AI precisely, let’s look into one or two questions on privacy and ethical AI. Data privacy and security are certainly paramount in cloud computing. It’s taken its time, but eventually it’s become very secure. How can businesses ensure the privacy and security of the data used in generative AI models remains intact in the cloud?
David Linthicum – 00:27:03: Yeah, the big thing is to understand the potential changes of state of the data. In many instances, I can take anonymized data and run it through a generative AI system, and suddenly it’s PII, personally identifiable information. The system is able to take patterns and intelligently discern within those patterns what the true answers are. So massive amounts of clinical data that may not have a name in it, or a social security number, or a patient ID, anything that’s identifiable, can become identifiable after it’s run through these systems. That’s got to be a core lookout for people. I think we’re going to have a lot of breaches where some of this output data, which is not protected as well as PII and perhaps not even known to be regulated at this point, is going to be leaked, and people are going to get information out of it. The other, probably more likely case is that people will access anonymized information, which isn’t necessarily protected as well as it should be because it’s a massive amount of stuff and so has a lower security profile than other data pieces within the enterprise, and get PII directly out of it. And I think that’s a rather scary thought. So my advice to people right now is don’t necessarily try to over-bureaucratize this, but get your ducks in a row in terms of what needs to be protected, what data categories exist, how the data is going to change state as it moves through these systems, and what the likely outcomes are. Privacy can be a big concern here, because we’re able to take data that’s typically anonymized and low-risk and turn it into PII, which can get us into lots of trouble and huge amounts of bad press, as people are able to take this data and do things with it that no one really intended.
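The re-identification risk David describes is worth making concrete. The classic mechanism is a linkage attack: records stripped of names can still be matched against an outside dataset on quasi-identifiers such as ZIP code, birth year, and gender. All data below is invented for illustration:

```python
# Toy linkage attack: "anonymized" clinical records (no name, no SSN,
# no patient ID) are re-identified by joining quasi-identifiers against
# a public dataset. Every record here is fabricated.

anonymized_clinical = [
    {"zip": "22101", "birth_year": 1961, "gender": "M", "diagnosis": "hypertension"},
    {"zip": "90210", "birth_year": 1985, "gender": "F", "diagnosis": "asthma"},
]

public_records = [
    {"name": "J. Smith", "zip": "22101", "birth_year": 1961, "gender": "M"},
]

def reidentify(clinical, public):
    """Join on (zip, birth_year, gender): no direct identifier ever
    appears in the clinical data, yet the linkage recovers PII."""
    hits = []
    for c in clinical:
        for p in public:
            if (c["zip"], c["birth_year"], c["gender"]) == \
               (p["zip"], p["birth_year"], p["gender"]):
                hits.append({"name": p["name"], "diagnosis": c["diagnosis"]})
    return hits

print(reidentify(anonymized_clinical, public_records))
```

A single match is enough to turn "anonymized" output back into PII, which is why the output of a generative system trained on such data may need the same protection level as the PII it never explicitly contained.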
TJ – 00:28:50: Interesting. And is there anything businesses should be aware of in terms of managing generative AI, specifically during cloud deployments, to ensure that security is pretty tight?
David Linthicum – 00:29:02: I mean, it should be just extending the best practices we’re currently doing now to generative AI. That’s the big one. And also, re-identifying the data, as I just mentioned, is a potential risk that may not be understood as a risk. The data is going to change state as it runs through this thing. It may come out as a risk, and so you need to start thinking through that. Identity and access management systems are your best protection: the ability to identify the data down to the field level, down to the record level, down to the object and table level, so we know what it is and what it’s able to do. But the potential changing of states, that can be a security risk. In many instances, we will secure data down to the database or even down to the bucket, but not down to the object, down to the detail it needs to be. And that’s what has to be done, because different pieces of data are going to carry different security risks when generative AI is around, just because, like I said, it’s able to change their state. Businesses typically don’t think that way. So many businesses out there are largely undersecured. They’re underinvested in the security that needs to go into their cloud-based systems. The systems are there, and they’re better in the cloud than on-premises, because that’s where the money is being spent right now, but businesses aren’t leveraging them. I think that was a bad outcome of the whole lift-and-shift movement we had over the last 10 years. People just moved to the cloud as quickly as they could: lifted their code up, lifted their data up, stuck it in the cloud. They probably put a security system around it, may have put some encryption around it, but didn’t really detail it down to the application level and the data level. They’re going to have to do that before they run that stuff through generative AI.
So in other words, they’re going to have to retrofit their security solutions, so that the data is truly secured, before they’re able to leverage this technology effectively. And I think that’s something businesses should be doing right now. People talk to me all the time about preparing for the advent of generative AI. I always say, get your security ducks in a row. In other words, your data is typically not secure. It’s not understood. It’s not governed as well as it should be. You don’t know where it is. You don’t know who the owner is. You don’t know how much redundant data exists. Number one, that’s going to be a bad outcome for generative AI, because, garbage in, garbage out, it’s going to learn from bad data and therefore come up with bad outcomes. Number two, your data is going to get breached, because attackers are able to take data that would typically be anonymized and lower-risk and turn it into something they can weaponize against you. Even ransomware attacks, things like that, where they say: we have your information, we’ve turned it all into PII, and we’re going to give it away if you don’t pay us a lot of money. So we’re going to get these odd ransomware attacks where they actually hold data outside of the company. Not the usual kind, where the company’s systems have been encrypted and you pay to decrypt them, but: we’ve taken all this innocuous data off your systems, maybe even things you put out on the web, and turned it into something hugely embarrassing, and we’re going to expose it unless you pay us money. So we’re going to see all kinds of fun stuff come out of that. Many of these attacks are driven by hackers who are able to weaponize generative AI systems themselves. So you may not be dealing with a human being. You may be dealing with an automated system that’s able to get to the outcome, payment of a ransom, without a human being involved.
You’re dealing with something that’s running on a server someone put up, anonymized, a thousand miles away. They get the digital currency that comes their way, but they may not be running the process. That’s a fun, scary world we’re going to live in. But we’re living there now.
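David’s point about securing data down to the object and record level, rather than just the bucket, can be sketched as a field-level classification check that gates what may ever reach a generative AI system. The table name, labels, and policy below are hypothetical, a minimal illustration of the idea rather than any particular product’s controls:

```python
# Hypothetical sketch: classify data down to the field level and gate what
# may be sent to a generative AI system, rather than securing only the
# bucket or database. Labels and fields are illustrative.

# Field-level classification for one table.
CLASSIFICATION = {
    "patient_record": {
        "record_id": "internal",
        "diagnosis": "sensitive",  # low-risk alone, re-identifiable in bulk
        "ssn": "pii",
        "notes": "pii",            # free text often leaks identity
    }
}

BLOCKED_FOR_GENAI = {"pii"}  # policy: never feed PII to the model

def redact_for_genai(table, record):
    """Return a copy of the record with disallowed fields removed.
    Unclassified fields default to "pii", so the check fails closed."""
    labels = CLASSIFICATION[table]
    return {k: v for k, v in record.items()
            if labels.get(k, "pii") not in BLOCKED_FOR_GENAI}

rec = {"record_id": 7, "diagnosis": "asthma", "ssn": "123-45-6789",
       "notes": "seen at downtown clinic", "extra": "unclassified"}
print(redact_for_genai("patient_record", rec))
```

Failing closed on unclassified fields is the key design choice: it forces the data-governance work David describes (knowing what the data is, who owns it, and how it can change state) to happen before, not after, the data flows into a model.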
TJ – 00:32:19: That’s a good one. Never thought of that analogy. That’s so true.
David Linthicum – 00:32:24: I mean, phishing is like that now. Phishing has gotten very sophisticated because people are weaponizing generative AI for phishing attacks. And we’re seeing security systems have to retool and rearrange to defend against those. Attackers are always going to be a few steps ahead of us in their ability to take generative AI, which is almost free, weaponize it against us, and open up different attack vectors.
TJ – 00:32:47: That was wonderful. Okay, probably a closing question; we are on time on this one. For our listeners, especially those in CISO positions, embarking on or accelerating their journey in generative AI integration: what piece of advice would you offer to ensure that their strategies are not just about the technological advancements we’ve been discussing, but also about nurturing human connections and balancing digital transformation on the ground, all while keeping cloud adoption in mind?
David Linthicum – 00:33:17: Well, digital transformation should have the customer at the center of everything. If you look at the smarter ways to do it, this is about adapting to the way humans want to consume information. Sometimes we lose sight of that, and the companies that do lose sight of it are going to lose sight of their markets. They’re not going to make as much money. Ultimately, when you do any digital transformation, whether it’s cloud or not cloud, and cloud’s 99% of it now, as we build these systems and take things to the next level, we have to look at the different human experiences we’re creating and the different ways we’re interacting with our customers, employees, and other people as human beings, to make sure these systems are making things better. We can’t automate human beings out of the equation. We can make their lives easier. We can make employees’ lives easier by automating some of the mundane processes. We can make a doctor’s life easier by automating some of the checks on the diagnosis he or she is making, to make sure they don’t misdiagnose something or prescribe something we’re allergic to, anything like that. So if it’s able to better the humans in the loop, whether customers, employees, or anybody using the systems, those are the decisions that need to be made right now. In many instances, I don’t think they’re being made. We’re not considering the customer the center of the universe. We’re considering profit. We’re considering revenue. How are we optimizing things? People focus on that end of it, and I understand why, because we’re geared to quarter-on-quarter growth in the U.S. economy. But in focusing on that, rather than on how we’re affecting the human beings, we’re going to miss the larger picture. We’re not going to gain the market share we’re looking to move into.
And really, people are going to make decisions and vote with their feet. If you’re not easy to deal with, whether I’m an employee, in which case I’m going to find a job someplace else, or I’m a customer, and you’re not as easy to deal with as the company down the street that puts me at the center of its universe in how it does its digital transformation, we’re walking, and you cease to be a company. And there are some companies in danger of falling into that right now. They’ve gotten too focused on business process re-engineering without necessarily understanding how the human beings fit into it, and then they wonder why employees are leaving, customers are leaving, things like that, because those people feel neglected and other companies are doing it better.
TJ – 00:35:30: Exactly. This is perfect, David. Thank you so much for your time. This has been an amazing discussion. There are some pointers you called out that I’d never heard before, and they’re so spot on. Thanks so much for the insights into this topic; it was great just to hear those thoughts. The journey behind the cloud is a topic we had been thinking about discussing, and there’s nothing like hearing it from you, David. So thanks for your time again, and I hope to speak with you in the future about similar things, or even beyond. There’s so much to learn from all that you’ve been doing, including the LinkedIn courses, and we’ll urge people to definitely go check those out as well. And definitely listen to this podcast, because in these 30 to 40 minutes we covered quite a bit of ground. Thanks for the insights, David. Appreciate it.
David Linthicum – 00:36:16: Happy to be here.
TJ – 00:36:19: How impactful was that episode? Not Another Bot: The Generative AI Show is brought to you by Yellow.ai. To find out more about Yellow.ai and how you can transform your business with AI-powered automation, visit yellow.ai. Then make sure to search for The Generative AI Show in Apple Podcasts, Spotify, Google Podcasts, or anywhere else podcasts are found, and click subscribe so you don’t miss any future episodes. On behalf of the team here at Yellow.ai, thank you for listening.