In this episode, Justin Hodges, Senior AI/ML Technical Specialist at Siemens, joins host Tara Shankar to discuss software, technology, sustainability, and the integration of AI models in the engineering industry. Learn about the use of AI in simulation software, the challenges of incorporating machine learning, and the future of engineering simulation.
Tara Shankar – 00:00:18:
Hello and welcome to yet another episode of Not Another Bot: The Generative AI Show. I’m your host, TJ. Joining me today is Justin, a Senior AI/Machine Learning Technical Specialist in Product Management at Siemens. Justin looks after Siemens’ portfolio of simulation software. He earned his bachelor’s, master’s, and PhD in Mechanical Engineering at the University of Central Florida, specializing in thermal fluids. Justin worked at the Siemens Energy turbomachinery lab characterizing heat transfer, fluid mechanics, and turbulence in gas turbine secondary flow systems. His master’s and doctoral research was on film cooling flow fields, predicting turbulence and thermal fields with advanced turbulence modeling and machine learning approaches. This is excellent. Welcome, Justin. That is a great introduction for sure. I’m super intrigued to learn more, but welcome to the show.
Justin Hodges – 00:01:15:
I’m pleased to be here. I can’t wait to get into the content and talk about AI. I’m thankful to say that’s been my day job for several years now. So it’s always a pleasure.
Tara Shankar – 00:01:24:
Well, intrigued by your experience, the first thing I really wish to know more about is how you have used artificial intelligence in your work to increase efficiency and improve the experience of the end user over time. And any experiences around the adoption of AI would be a great addition to that. So we’d love to hear your thoughts.
Justin Hodges – 00:01:46:
Absolutely. So it’s like any other template question on how AI can affect an industry or what impact it’s having: it’s multifaceted. There are a lot of ways that AI is impacting our industry of computer-aided engineering, engineering simulation, whatever you want to call it. But there are certain themes that I think are paramount and come up in pretty much every discussion. You’d start with faster throughput, faster solutions, faster time to answer, whatever you want to call that one. Machine learning especially can expedite the digital work people are doing to design their equipment and hardware, and get answers in a fraction of the time, which is one of the biggest impacts I would say it’s making. So that’s the big one. The second one’s probably really relatable: it’s related to user experience. If you’re old enough, you remember Clippy from Microsoft Word. So it’s like an AI-charged Clippy, in a proverbial or just funny sense, but then in more serious examples, like ChatGPT, these developments are having really large impacts on people’s productivity at work. It really falls into this category of user-enhanced experiences: fewer click miles, having an assistant that’s like an expert sitting with, say, a junior engineer to help them through things they would otherwise learn over years of work.
And then I would say the synergy, I guess you can call it the digital twin concept, where AI is a big enabler for a lot of digital twin use cases. And to be really inclusive with that language, we’re talking about scenarios where we have a lot of different types of data and we want to fold it into a consolidated model, which could be machine learning based, so that whatever is happening in real life, we make a digital mirror representation, or twin, of it in the computer that we can look at and model. And machine learning can help with that a lot in some use cases. So those are three of, say, five that I would say are important.
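The “faster time to answer” theme above usually takes the form of a surrogate model: a regressor trained on a modest set of expensive simulation runs that then answers new design queries almost instantly. A minimal sketch of the idea, with a cheap analytic function standing in for the slow solver (all names here are illustrative, not Siemens APIs):

```python
import numpy as np

def expensive_simulation(x):
    """Stand-in for a slow CFD/thermal solver (illustrative only)."""
    return np.sin(3 * x) + 0.5 * x**2

# 1. Run the "solver" on a design-of-experiments sample (the costly step).
x_train = np.linspace(0.0, 2.0, 25)
y_train = expensive_simulation(x_train)

# 2. Fit a cheap surrogate -- here just a degree-6 polynomial.
coeffs = np.polyfit(x_train, y_train, deg=6)
surrogate = np.poly1d(coeffs)

# 3. Query the surrogate at a new design point; this is near-instant
#    compared with re-running the solver.
x_new = 1.37
error = abs(surrogate(x_new) - expensive_simulation(x_new))
```

In practice the surrogate would be a neural network or Gaussian process over many design parameters, but the workflow (sample, fit, query) is the same.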
Tara Shankar – 00:03:42:
Awesome. And my interaction with simulation came in previous jobs at companies like Microsoft, with projects like AirSim, where it was amazing to see how the canvas could be used to create an environment with synthetic data and then apply reinforcement learning consistently. So it’s just great to hear your thoughts on that. Now, you have been with Siemens for over four years, and I would love to know what your and the company’s focus has been during this time. How has your work evolved over the last four years, and what are you really focusing on, say, in the first half of 2023?
Justin Hodges – 00:04:19:
Okay, cool. I’ll answer that with a twofold approach: past until now, and then present focus. What I think I’ve been privileged to see in terms of the trajectory is that, you know, in 2017 I had my first professional AI experience. In that case, it was combining simulation and healthcare diagnostic approaches with AI to patent something for detecting lung disease. And at the time, I saw, from automotive to turbomachinery, across this broad area of mechanical and aerospace disciplines, theoretical proposals on the academic side of what machine learning could do and what it could afford society, right? And then I started seeing a bit of maturation and cross-pollination from all the huge advancements in machine learning at mainstream tech companies, where that cross-pollination was having more uptake. So rather than academic proposals, proofs of concept, and theoretical or very canonical, simple problems, we started to see those top-tier technology and research groups make consulting proposals where they could do these projects. But it was very much the case that you had to have high know-how and competency to be able to provide and buy those services. And now, in present times, we’re seeing a lot of that percolate into commercial products that non-experts can use. So we’ve seen it go from theoretical, to only experts being able to wield the power in industry, to now a lot of things being widely available where you may not even have to be STEM oriented or an engineer to use these capabilities. Maybe you don’t know what you’re doing, but it’s available to you, much in the sense that large language models and everything else are out there in every kind of app store. It’s really everywhere. I would say the field of engineering, simulation, and HPC is no different.
And that’s where we’re at now in 2023: a lot has been vetted, a lot has been proven, a lot has gone from interest areas to requests to buy certain services and products. And we’re seeing this adoption. It’s really exciting to be at the forefront of it and to take advantage of these advancements; standing on the shoulders of giants, I think, is the relevant term.
Tara Shankar – 00:06:32:
Absolutely. Well, now that we’ve talked a bit about what your focus has been and where you’re heading, I’m also intrigued by your introduction to simulation per se, or the career you chose. Talk to me more about engineering simulation and how you got introduced to it. What intrigued you enough to make it a career? I can definitely see you’re very passionate about the subject; you’ve been doing this day in, day out. So we’d love to learn more before we get into the rest of the questions.
Justin Hodges – 00:07:01:
Cool, happy to share. So it’s kind of a funny story. I started out as an experimentalist, and I focused on, I guess you could say, traditional aerodynamics and heat transfer in the labs: building experimental test facilities, doing correlations and measurements for different energy or aerospace companies. And then at some point, I guess I got tired of building wind tunnels and rigs, because you spend months building something, then you take the data, and then sometimes you just pivot to build something else. I like working with my hands, but I really liked the science side. So it was a natural transition to start doing more and more simulation, where you could get these models and results faster. Yeah, I went full tilt into mechanical and aerospace simulation, a lot of computational fluid dynamics and things of that nature. And like I said, I took that internship with the healthcare engineering portion of Siemens at their Princeton office. They have this amazing group of people who were, every single day, replacing simulation for certain healthcare problems with AI. And something about seeing everyone around me do that for weeks eventually made it click. I don’t even know if there was much hype on this at the time, in 2017. It just clicked, and I said: when I go back to finish my dissertation, I’m going to incorporate machine learning, and that’s the track I’m going to be on for the foreseeable future. Six years later, that’s really been my mandate. So I did it in my dissertation, in a way that balanced with the themes and study objectives I had already posed. And, you name it: pet projects, Kaggle competitions, Coursera classes, reading papers, just the traditional experience of throwing myself at it. And then things started to get more and more formal.
So at work, people started becoming interested in the business and market for incorporating AI into computer-aided engineering and simulation, and I had a lot of momentum on this. So I just volunteered, with pet projects and after-hours sorts of things, working with the different development teams and saying, I think I can chip in and help on this. And yeah, you really manifest that after thousands of hours. And then I’m really thankful to say that for several years now I’ve had this AI-dedicated role as a tech specialist on our product management team. It’s really fun, because you get to be involved in strategy setting, and you get to work with development teams, research teams, partners, and academic groups that are way smarter than I am. It’s just so great to be at the forefront and have this great ecosystem. So yeah, everybody has bad days, but I don’t have that many, and I love my job. That’s sort of been the arc.
Tara Shankar – 00:09:44:
That’s an awesome story. Given my experience educating engineers to adopt machine learning and AI in my previous roles at other companies, that’s not been the easiest path. We created a lot of different courses across different providers, and even within the company. It was not an easy journey, but we really took it: we created a lot of options, segues, and different devices for them to learn hands-on. I think that brings me to the next question. Given that you have been an engineer, and you certainly have the background, though you did learn machine learning modeling and everything else: first, how difficult has it been to incorporate machine learning models, or operationalize them, into your software application stack? And second, how steep was that learning curve? You had been focused on engineering for so long; how much learning did it require for you to incorporate machine learning and AI into your engineering discipline?
Justin Hodges – 00:10:39:
It’s a great question, and I can provide some general answers and some anecdotes, but of course every scenario is different. So I always cite this experience; shout out to Sai, my favorite mentor of all time. Essentially, when I was starting out and realizing, oh wow, this is going to take a lot of learning, how do I pick up this knowledge over time and make this my career area? My mentor at the time was the smartest person I think I’ve ever met. He did his PhD at Johns Hopkins in simulation of geological flows. And he said to me: look, we’re mechanical engineers. We’ve not taken a single formal programming class, so there are going to be challenges here, right? But school is really just to set you up. A lot of people joke that when you finish your dissertation, that’s when your path begins. It’s not the end; it’s the start. So there are certainly challenges, but I think it really helps to be specific. It’s easy to get overwhelmed, so focus on the specific areas you’re interested in. I choose to try to go one mile deep and one inch wide, not the other way around. There’s nothing wrong with doing either, but that’s my preference. So it’s okay that sometimes I may tune into, say, Tesla’s Battery Day, start learning about their computer vision algorithms, and get lost after two minutes. That’s okay; it’s a completely separate field. You can focus on one specific thing, read the scientific literature to snowball your knowledge, and take your time with that. And another important thing about being specific is realizing there are a lot of ways you could work in the machine learning industry. You could work in your current field and apply machine learning to it, or you could completely forsake your field and jump into something entirely separate that’s machine learning oriented.
So I think that’s a choice that can help you decide how you want to become more familiar and versed in machine learning. And then don’t try to tackle everything at once on the computer science side if you’re coming from mechanical engineering. I know for myself, I’ve had to take classes and learn over time on the architecture and computer system design side of things. That’s not machine learning, but it’s certainly not mechanical engineering either. So in blocks I decided: okay, this block of time, this season, I’m going to take a class on this, learn by doing, get involved in those projects at work, and then I’ll have at least some minimum understanding of the topic I want. But it’s very easy to jump in, open an MLOps sort of textbook, and get lost in every facet of the computer architecture side of things. So I think it’s good to pick one role, say, a data scientist: you’re more focused on the models, the pipeline, the statistical implications of what you’re doing. That may be more bite-sized than jumping into something with more pieces you’re not familiar with. And the last piece I’ll say, something I always champion and love, is Google Colab and/or Kaggle. Then you don’t have to have a GPU or a setup. You don’t have to take a week to install all the GPU acceleration libraries, CUDA, things like TensorFlow. You can literally start coding in 30 seconds. And that really helps when you’re trying to learn on the side: you can just focus, rather than being distracted by why isn’t this working, why can’t I import this library, et cetera. Those things add up in your spare time. And I mentioned Kaggle; I think that’s great because there are so many learning resources out there.
But I think there’s something novel about just downloading data, breaking it, and doing it again and again. So I’d say: take the course that you like, be specific about what you read for your interest area, and learn by doing with the datasets, competitions, and discussion boards on Kaggle. I think that’s a great threefold way to learn.
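The “start coding in 30 seconds” point is nearly literal: in a fresh Colab or Kaggle notebook the common scientific stack is preinstalled, so a first model fits in one cell. A toy sketch of the “download data and break it” loop, using nothing but NumPy, with made-up data standing in for a downloaded dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic clusters standing in for a downloaded dataset.
class_a = rng.normal(loc=0.0, scale=0.5, size=(50, 2))
class_b = rng.normal(loc=3.0, scale=0.5, size=(50, 2))
X = np.vstack([class_a, class_b])
y = np.array([0] * 50 + [1] * 50)

def nearest_neighbor_predict(X_train, y_train, x):
    """Classify x by the label of its closest training point."""
    distances = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(distances)]

# "Break it and do it again": hold one point out, predict it from the rest,
# then go perturb the data and see what changes.
pred = nearest_neighbor_predict(X[1:], y[1:], X[0])
```

From here the learn-by-doing loop is to swap in a real Kaggle dataset, add noise, drop features, and watch how the predictions degrade.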
Tara Shankar – 00:14:11:
That’s probably the best suggestion for people listening to this podcast; I think what Justin just said is one of the best pieces of advice you could hear. I can vouch for it, because we were trying to educate so many engineers to become ML engineers: Kaggle, the competitions we were hosting, the sorts of courses you talked about, and that way of focusing. That’s so critical to learning, to the learning curve and pattern. Thank you so much for sharing that. It’s just amazing.
Justin Hodges – 00:14:37:
Yeah, you’re welcome. Yeah. Awesome.
Tara Shankar – 00:14:39:
There are some other questions specific to where your experience lies, and we’re certainly going to get to generative AI and large language models in a few minutes. But just to set the base further on simulation and testing: where do you think the balance lies between AI plus simulation and AI plus test in your work? And is one going to provide better returns, purely from an investment perspective?
Justin Hodges – 00:15:04:
Good question. I’d love to be polarizing and give a super strong answer that it’s one and not the other, but I think the trend is really causing them to become equal. I would say at present there’s probably a higher return on investment in augmenting test with AI versus simulation with AI. And even then that’s not quite fairly stated, because you’re not replacing test; you’re more or less integrating AI to speed things up and gain conveniences: less time, more safety and quality assurance, things like that. I think it comes down to the simple fact that test data is harder to get. And when I say testing, I should clarify. Let’s say I produce aircraft engines. It’s a lot cheaper and more feasible for me to simulate how the engine would perform under different conditions than to mount one up and find a facility that can supply that much flow rate, which is probably not possible, so I’d have to scale it down, already making different assumptions, and then take physical measurements: temperature, velocity, pressure, et cetera. So you can see that, scale-wise, to test thousands of configurations you’d have to manufacture all these pieces and book all this testing time. You’re inherently limited. So there’s inherently high value in getting more information out of the data you do collect in test, and leveraging AI to amplify it and extract more understanding. That’s what my answer would be today. But for those interested in companies like NASA and the simulation methods they really focus on, there’s this seminal charter paper called, I think, NASA’s CFD Vision 2030, or something very close to that; it might be one word different. Essentially, they published it a few years ago and said: look, if we want to accomplish all of these really huge innovations in what we can do numerically, these are the steps.
These are the phase gates for how we stay on time. And it’s a multifaceted approach covering a bunch of things: not just calculation, but also hardware and the simulation techniques themselves.
And essentially what you see is that engineers want more and more fidelity, larger models, things like that. So you can also counter-argue against test and say: well, for all of that to happen, we’re going to hit computational limits. People are going to get tired of spending thousands and thousands on compute resources, because no matter how fast their models become, they make them larger and thus slow them down again. So really, both are ripe for introducing AI and saving a lot of time and resources. But if I’m forced to pick, I’d say test.
Tara Shankar – 00:17:35:
Sweet. Makes total sense. Cool. Now let’s go further into some of the engineering simulation software you have been working on. Where do you see its main applications? We’d love to know precisely what your focus areas are right now, what you’re doing at Siemens.
Justin Hodges – 00:17:52:
Cool. Well, “precise” and the rest of that question are probably antonyms, because I work in the sub-segment of Siemens called Simcenter, which is a portfolio of simulation and test products. And I can confidently say it’s multi-physics: acoustics, thermal, mechanical, fluid simulation, every kind. And we really pride ourselves on being approachable, usable, and applicable to almost every industry: pharmaceuticals, automotive, aerospace, chemical process, turbomachinery, defense; literally dozens of industries that we have verticals for. So it’s really, really broad. But to be a bit specific, because I do like to be specific so we can get something out of what I’m saying: look at your car, look at the plane you may fly in next, look at your electronics and their cooling. The majority of that, in terms of the engineering hours spent, will have been designed from simulation. And there are a bunch of things to consider: any sort of fluid mechanics, any sort of heat transfer, any sort of mechanical stress, wear and tear, and fatigue life that could cause a part to break over time. In all of these capacities, we try to produce software you can design with. And of course we have application areas of heavier focus, like electrification and batteries, flight certification, and aerodynamics. So there’s really quite a large number. But if it helps, my background is primarily in computational fluid dynamics, so I looked at things like the thermal implications of aerodynamic performance: is my engine going to overheat? Is my car going to be comfortable to ride in, as well as have good drag and fuel economy?
So it’s hard to be precise, but that is really the best answer I can give.
Tara Shankar – 00:19:48:
I will still take that; it’s pretty well explained, I guess. Well, thanks for sharing that insight. One of the key things, as we have been reading more about, and certainly from following you and learning from you for some time: I had this query in mind. What can you tell us about the whole Xcelerator portfolio of software? There was some news about Stoneridge signing up for it. How will it help organizations like that, especially with safety and cybersecurity requirements?
Justin Hodges – 00:20:19:
Cool. So I’ll start out by saying Siemens has, at least when I last checked a couple of years ago, 500,000-plus employees. So you can imagine that things like cybersecurity and engineering simulation are fairly separate areas, right? So I’m not a cybersecurity expert. But it makes sense that the more we connect everything, our real data and analytics, the design data from our customers’ components, the ways we want customers to interact more fluidly with the different types of data and software they have, the more connected we make things, and the more exposed we are to cybersecurity attacks. So it’s really a core tenet of Xcelerator, which I’ll talk about in a second, to take our, I think, 1,300 cybersecurity experts and make sure the software we provide, including Xcelerator, is not vulnerable to these attacks. And Siemens is a big company, so if you think about basically the worst cases where cyber attacks would have an impact, Siemens produces products and apps in that space, and we can do that because we have sufficient, trustworthy cybersecurity. Things like healthcare equipment: you would not want anything tampered with on that end. Or production lines in industrial facilities: you don’t want things sabotaged there. And as a final example, public transport: if you fly through a lot of European countries, you’ll notice public transport by Siemens. So we have this really big, established set of infrastructure and assets that cannot be tampered with, cybersecurity-wise, and I think our company has done a good job of translating that into the other software. But at any rate, okay, let’s talk about what Xcelerator is. In essence, we have a ton of apps and software that we produce.
And we see that the trend in how people use software means it needs to be more interconnected, more open to non-Siemens products as well as other Siemens products, and available in a marketplace of sorts as web applications, so that people are not constrained to using the software on their actual desktop. And as usage migrates to other forms, like HPC centers, their own on-premise cloud, or the cloud we provide, we see more and more that this is a need. So strictly speaking, Xcelerator is a commitment to transform the way all of our software is produced, delivered, and used by our customers in this more central way. And as you mentioned, cybersecurity is a key thing, so we make sure it’s safe, and we also vet and authorize non-Siemens software to be used there. The goal is to make it really, really open. And I think Stoneridge is a company that does industrial and agricultural equipment, automotive, electronics, and things like that; so a tractor, say, is one example. In that case, I’ll just use it as a case study. Really, the key is that we want them to have all the information imaginable about their products whenever they want it, including at the very, very beginning: information like what happens if I manufacture with this material, what happens after it’s designed, fabricated, and used for thousands of hours. We want to take that IoT- or analytics-based information and provide it to them all the way at the start. That way they can make conscious decisions about what they’re going to design and how, knowing the implications years down the road. So that’s this “shift left” strategy, which means providing information earlier on. And machine learning is a great part of that: transfer learning, learning from historical data.
A lot of use cases require machine learning to fill in the missing pieces. Back in 2017, for example, at that research group, one of the projects was for a company producing airplanes. They wanted machine learning models that could understand things as granular as: if I choose this manufacturing method versus that one for this one set of components out of thousands, how will that affect how quickly I can deliver the final product? Because it takes three to five years to deliver some of these major turbomachinery engines, it’s a really long process with thousands of steps. And machine learning can provide this inference and a confidence interval: maybe you should go with this approach, because then you’re this much more confident of delivering on time, which is one of the most important things, right? Safety, delivery, and performance. So I don’t know if I’m talking too much, or if you want to weigh in and ask questions back and forth, but that’s the vision of Xcelerator. And I think there are a lot more layers to it, but that’s at least a first pass.
Tara Shankar – 00:25:06:
I think that’s precise enough to get some insights from. It was certainly news, and it made total sense to understand how it may help with safety; thanks for the use cases and examples there. I hope the audience feels the value in just listening to that part. Another thing we’d certainly like to learn more about: last November, Siemens announced the launch of Simcenter Cloud HPC software as part of the ongoing collaboration between Siemens and AWS, Amazon Web Services. We are always looking for customer experiences to be enhanced and advanced. I just want to know what you can say about how this expansion of access to the software will allow greater customer experiences for the organizations that use it, or if it has a different spin to it. We’d love to hear your thoughts.
Justin Hodges – 00:25:53:
So, I’ve been pretty serious so far, and this is meant to be a conversational thing, so I’ll be a little silly first. You know, I’ve spent way too much time in the past five and a half years at Siemens running simulations and sitting there waiting to see the results, or going off to do something and then coming back to check. At the very least, if we deploy our software this way on the cloud, we have other means, like mobile access, that make the simulation engineer’s life much more enjoyable by letting them monitor their jobs. But we’re not promoting being workaholics, right? So the more serious answer is that if you’re an SMB, you want to be mindful of how many IT people you have to hire to be experts in high-performance computing and the clusters they’ll send simulation jobs out to. In this case, it’s convenient: if you’re a startup, for example, you don’t have to hire a bunch of those people, or constrain your hiring to people who know all this, because they can simply use the simulation products they need on our cloud. That means no setup, no worrying about things. But it’s also a huge encumbrance lifted in terms of cost, because what it really comes down to is a trend: the more resources you have, the more computational processing units, CPU, GPU, whatever, the more accurate and the bigger the jobs you can simulate, and the fewer assumptions you have to make. You can maybe model the whole thing you’re designing, rather than simplifying it or making little simplifying assumptions so that it’s more manageable and you don’t need as many CPUs to simulate it. So if you’re a startup, it may be discouraging to think you have to make a huge spend to buy a really big HPC cluster.
Whereas in this case, you can pay per unit to simulate on configurations that have, say, 500 CPUs rather than the 64 you would have been able to afford in-house. So it’s a lot of flexibility, it’s a great option, and it’s more convenient given where we know software usage is going in the future: distributed, on the cloud, and so on. That’s just one of the answers. So, we’re putting a lot of our software on Simcenter Cloud HPC, the first of which is Simcenter STAR-CCM+, which is pretty much my background: simulating fluid mechanics and things like that.
Tara Shankar – 00:28:14:
Awesome. Well explained. Well, that brings me to another set of questions. The more we hear about your field of expertise and experience, one of the key things I keep getting intrigued about is sustainability. How has the software you have been working on, or things you may have worked on in the past, helped respond to the industry’s current needs for greater sustainability? It’s a big call in terms of how we look at things, and that’s why AI for Good and other initiatives exist today. But we’d love to know your thoughts on how you, the software, and the experience you have built together over the years are able to help the industry toward greater sustainability.
Justin Hodges – 00:28:56:
Yeah, well, there’s a few ways to answer that question, but probably the most immediate is to look at big trends like electric vehicles and electrification, right? We have huge customer commitments and involvements with companies like General Motors, one of many Automotive OEMs pushing for EVs, right? And you can look back at a sort of silly gauge: over the last three years, every Super Bowl, how many commercials were there on electric vehicles? Dozens, okay? Three years ago hardly anyone had electric vehicles, and now they’re all over the place, right? And you can see that this is a crazy, aggressive push to introduce this technology.
And as a software provider to companies like them, we have to respond and include those types of physics models, those types of Simulation capabilities, those types of ways of getting fast answers, right? Because we know the pressure our customers are under to do this fast turnaround and all of a sudden have tons of electric vehicles. So first of all, it’s really about serving, as a provider of software, in a customer-focused way: giving them what they need as they need it, as our goals in society change, right? As electric vehicles become a huge thing, we need to support them with the right software, and Machine Learning helps. I mean, at every summit there’s this debate that we’re increasing emissions with compute on these big clusters, but we’re making big impacts, and we’re also trying to use models more responsibly, models that are smaller, that consume less and emit less. So we’re being very mindful of all of these things to make it really economical, but also support major trends in our customer base, the ones you see in magazines and commercials, electrification and batteries being one of them.
Tara Shankar – 00:30:40:
Love that. And I think now that you talked about models too a bit, and I think given the context of our show for sure, how models and large language models, given there’s so much talk about it, like, you know, ChatGPT, others, you know, Bard, and beyond, or GPT-4 per se and beyond being integrated into the future of Engineering Simulation to enhance capabilities. What’s your take? Is it already happening or you see a future for that or you kind of already yourself, you know, digging deeper into those areas? Would love to know, you know, how LLMs pretty much are going to shape the future of Engineering Simulation.
Justin Hodges – 00:31:14:
It’s a profound impact, and we’re all looking at it and we’re all invested in it because we see its potential, and it’s almost universal. I mean, it’s not necessarily shocking ways that it would be used in our industry versus others. But again, it’s really profound. What is the value of being able to train models that can be used in conversational form to perform tasks that are super annoying, time-consuming, and require technical expertise? Just this week, I saw there’s this library, Pandas AI, that is related to the Pandas data frame for using and accessing your data. And now, with an API call to OpenAI, you can just make simple conversational statements like: look at my data frame and please tell me how many data points lie outside the 75th percentile. How many of these are outliers, right? These are functions that are so important to using software but that would normally take expertise, right? An aerodynamicist shouldn’t have to know all that much about statistics or Machine Learning, right?
But with stuff like this, it’s massive availability and democratization. So it’s really profound. And a very good tag to put on how ChatGPT can have an impact is expert assistance or expert guidance. You can have the knowledge of, say, a 20-year principal engineer distilled into models that look at very scientific data, and then have it called on by someone who started last week. There are, of course, themes of trustworthiness and confidence in the way you need to use these models and monitor them, right? But it’s great power. And so I think it will definitely be transformative in a very pervasive way for Engineering Simulation.
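The kind of conversational query Justin describes reduces to a short snippet in plain pandas. A minimal sketch, using made-up numbers, and using plain pandas rather than the Pandas AI layer he mentions:

```python
import pandas as pd

# Hypothetical sensor readings; any numeric column works the same way.
df = pd.DataFrame({"pressure": [1.0, 1.2, 0.9, 1.1, 5.0, 1.05, 0.95, 4.8]})

# "How many data points lie outside the 75th percentile?" becomes a
# one-line quantile filter.
q75 = df["pressure"].quantile(0.75)
outliers = df[df["pressure"] > q75]
print(len(outliers))  # here: the two large readings, 5.0 and 4.8
```

The point of a conversational layer on top is that a non-programmer, say the aerodynamicist in Justin’s example, can get this answer by asking in natural language instead of knowing the `quantile` API.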
Tara Shankar – 00:32:55:
One of the things we are doing here within Yellow is we have created something called ZLOG G2, where the G stands for Generative, as you may say. The whole thing has been about how we can build this multi-LLM architecture for solving different use cases. We have summarization, which is certainly a great outcome of LLMs, or you can go through a complete Q&A or a document to build document cognition; I think that’s helping a ton. One thought from your perspective: do you see multi-LLM as the future as this journey unfolds? And, given that we also created a smaller model, literally, do you see smaller models becoming the future as you go through this journey, or do you have a different thought process and opinion on that?
Justin Hodges – 00:33:42:
Well, we certainly want to let the best minds in the field, let’s say in LLMs, do their thing and provide their guidance. And a lot of them have echoed over the last few months that larger models are not the future; smaller models are. And if that’s what’s being said, then, right, we will take that as a starting point, which seems promising. I can’t overstate the impact I think these language models will have. And again, it’s really exciting, right? It’s almost like when the iPhone first came out: people were like, wow, this is really a way I can have input-output with things that normally pertain to my phone, or people, or communication. It’s going to be the same thing in this case for Engineering communication.
Tara Shankar – 00:34:20:
Awesome. And any thoughts on hallucinations? I mean, given you’re doing Simulation, I’m assuming you are always looking into the accuracy of the information, whether you’re implementing Automation on top of it or just interacting with a bot to get some information. From your perspective, how much do hallucinations really matter at the moment? My understanding is that the new LLMs people are trying to create are also meant to reduce hallucination as much as possible. There will always be some that get generated. But what’s the acceptance for that? And what’s your thought on hallucinations, precisely?
Justin Hodges – 00:34:56:
Scary, because if you’re designing something that’s life or death in terms of operation, right, you would always want to take methods that were invented and used since the eighties versus any new thing, right? But that shouldn’t stop us. I think that for all Machine Learning type approaches in my field, including LLMs, the methodology should be set up in a physics-backed way, so that we’re not using black-box models and interpretations of things without guardrails on foundational concepts, which could be the typical way we design things, the simulations we use, and the typical equations or rules of thumb a company has. So I think it should always be used with these guardrails up, including for the hallucinations that may give you these overconfident, unusual answers that you maybe don’t recognize. So I don’t think it should be unleashed with full rein to sort of take over design processes, but I think it could be a situation where the design process in Engineering is refactored to include these advancements that can make things really fast, but also includes checkpoints to keep everything in check, right?
It doesn’t have to be an all-or-nothing thing. And yeah, hallucinations are very scary, because if we’re asking these models to tell us things, it’s probably for guidance on very, very complicated things we don’t fully understand anyway, like turbulence, right? So we’re already entering a touchy scenario, asking for input from a black box about something most people don’t understand in the first place. So I think expert knowledge needs to be formulated into the problem itself as much as possible, and that doesn’t just apply to LLMs. We can talk specifics if you want, because I know it sounds vague, but I really have specific things in mind. You can look up the phrase physics-informed Machine Learning. I’m a big believer in that. Again, it’s really about building the things you know from your Engineering background into the problem setup, and then using Machine Learning on top of that, rather than just starting out and letting it go in any direction, if you want to call it that.
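The guardrails idea can be made concrete with a toy example. A minimal sketch of a physics-backed loss, where a data-fitting term is combined with a penalty for violating a known physical constraint; the numbers and the constraint are hypothetical, and a crude grid search stands in for real training:

```python
import numpy as np

# Toy data: the observations suggest a slope of about 2.5, but suppose a
# known physical bound (hypothetical here) says the slope cannot exceed 2.0.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.5 * x + 0.05 * rng.standard_normal(50)

def loss(w, lam=10.0):
    data_term = np.mean((w * x - y) ** 2)      # fit the observations
    physics_term = max(0.0, w - 2.0) ** 2      # penalty for violating the bound
    return data_term + lam * physics_term

# Grid search stands in for gradient descent in this sketch.
ws = np.linspace(0.0, 4.0, 4001)
best = ws[np.argmin([loss(w) for w in ws])]
# The penalty pulls the fit from the data-only answer (~2.5) back toward
# the physical bound of 2.0: the "guardrail" in miniature.
```

In real physics-informed Machine Learning the penalty is typically a PDE residual (e.g. conservation of mass or momentum) rather than a simple bound, but the structure of the composite loss is the same.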
Tara Shankar – 00:36:58:
Interesting. Physics-informed Machine Learning. I’ll take a note of that for sure. It’s a good point. Well, thank you so much for that thought, Justin. Now let’s move on to just a few more questions. One concerns the industry you’re generally working in: we have viewers who are looking to build solutions with AI and Generative AI in specific industries. I would like to know in what ways generative models, or Generative AI, can contribute to the optimization of design and production, and of the pipelines within the mechanical and Aerospace industries.
Justin Hodges – 00:37:33:
So the idea of Generative Engineering is pretty big in Mechanical and Aerospace, in this area that I sit in and see. One category of Generative Engineering that’s an especially obvious fit for AI is Systems Design. We’ve talked a lot about simulating maybe one individual component or something like that. But imagine you’re at the very beginning of a project, deciding you want to build a drone. You’re not at the end of the process, picking between minute differences in how you design it; you don’t even know yet: is this going to be gas-powered or battery-powered? Is it going to have this many propellers or that many? At that point, it’s so early in the process that there are so many permutations possible for everything you could design that traditional Engineering Simulation methods would not be sufficient. You cannot run hundreds of thousands of individual simulations just to cover all the possibilities. So you need some intelligence there, and there are concepts from Generative Engineering you can borrow for system modeling and system design, which, again, is early-stage architecture design when you’re designing any complicated thing like an airplane. And then I would say there’s another class of Generative Engineering, another way for Generative AI techniques to play a role in CAE.
I mean, I really like Variational Autoencoders. They’ll probably be outdated by something soon; every few years, something surfaces that’s catastrophic to the previous method. But I really like them because, say I’m designing the shape of a hull for a marine body, right, the actual vessel shape of what goes through the water, as a really simple conceptual example. I may be at my marine company with 10 years of experience and say: we describe this shape with 10 geometric parameters, or 20 geometric parameters. Oftentimes it’s a lot of parameters, right? And for very nerdy Simulation reasons, you want fewer parameters, which makes it more affordable to simulate. Variational autoencoders can provide that condensed, smaller latent space. So that’s great. You can use principal component analysis or other ways to arrive at that limited set of variables, but this is also an alternative approach. Maybe the way you’ve cast your problem, 20 variables describing the shape of something, is not the best way possible. Maybe if you just throw a bunch of tessellated surfaces for your different design options at a Variational Autoencoder, it can come up with a better way to parameterize it, one that can actually produce better designs. And then, where the real money is: once it’s parameterized and you train some machine learning models on simulations, maybe involving an active learning approach so you can use the minimum number of simulations to maximally learn the space, the space meaning the relationship of the geometry from the autoencoder to the Simulation results, well, then you can unlock this capability of inverse design, where now you just say: I want these results, please generate a design that satisfies them.
Rather than the traditional Engineering approach, where I say: let me constantly tweak the geometry, produce Simulation results, and then tell me the best geometry, right? So it kind of unlocks a paradigm, and there are a lot of other industry-specific reasons why that’s better, or could be better. But essentially you see the same thing as in the electric vehicle example. The main priority now is: please produce something in half the time to bring it to market. And if you do design approaches like this, it starts to get more and more possible.
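The dimensionality-reduction step Justin describes can be illustrated with principal component analysis, the linear cousin of a variational autoencoder's latent space. A minimal sketch with entirely made-up "shapes": 200 candidate designs, each described by 20 geometric parameters that in truth only vary in 3 underlying ways:

```python
import numpy as np

# Hypothetical design set: 200 shapes x 20 parameters, secretly rank 3.
rng = np.random.default_rng(1)
latent_true = rng.standard_normal((200, 3))   # designs really vary in 3 ways
mixing = rng.standard_normal((3, 20))
shapes = latent_true @ mixing                 # observed 20-parameter designs

# PCA via SVD: how many directions actually matter?
centered = shapes - shapes.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
variance = singular_values ** 2
explained = variance[:3].sum() / variance.sum()
print(f"{explained:.3f}")  # ~1.000: three latent variables capture all 20 parameters
```

A VAE plays the same role nonlinearly, learning the compact latent space from data such as tessellated surfaces, and that learned latent space is what an inverse-design loop would then search over.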
Tara Shankar – 00:40:56:
Well, you explained it so well. One of the key things for me: my first interaction with anything generative was GANs, which we worked with in many ways, I think with Vertex and whatnot. My whole thing with really touching and working with GANs was producing music, right? The whole idea was that whatever tune you just played, you don’t have to be a musician, but with the help of the generator you can convert it into really good music. So that was the first interaction. And I love the example of the VAE. And now there’s a new class of generative things, with GPT and whatnot, we’re dealing with. I could understand some aspects of it already, but the rest you just explained so well. Thank you so much for the clarity here.
Justin Hodges – 00:41:41:
Yeah, my pleasure. I’ll just interject one other tie-in, because this is sci-fi-level excitement for me. Did you see the thing, I think it was from Adobe? It was like a commercial where they had a picture of a road, and then they showed, interactively in their software with these Generative AI techniques, that you could just say: add a deer, add lines on the road, add bushes, and it was completely photorealistic, right? I don’t know exactly how to integrate and consume that into the CAE and Engineering Simulation space yet. But, I mean, that has to play a role. That’s absolutely phenomenal. And what’s the quote? Any sufficiently advanced technology is indistinguishable from magic. That was just magic, right? So I’m really excited about that as a prospect for Engineering Simulation.
Tara Shankar – 00:42:27:
So one of the other questions, thinking a bit more about custom experiences and user experiences: I just want to get your opinion, beyond what you see and work on day to day. How do you see Generative AI specifically impacting the entire customer experience space in the next five to ten years? I want to say ten years, but given the pace we’re moving at and the way things are going to market, I think five years is a good frame. So would love to know how you see Generative AI impacting the customer experience space in the next five years, per se.
Justin Hodges – 00:43:01:
That’s a good question. I think I’m not particularly young anymore. Right. So I still think of myself as like a youngish engineer, but there’s still engineers 10 years younger. So I just think that now in my generation and younger generations, the standard on what it is like to use software is just dramatically changing. And it’s revolved around being much more accessible, much more open, much more intuitive for collaboration amongst other people. And I think that Generative AI is like one of the only ways to make that possible, in terms of the requirements that they see when they think of buying expensive software, right? So I think that sometimes you see a big push in fidelity in Engineering and improvements to efficiency and things. I think there will be a period where that’s not the larger focus. Really the demand is the experience of what it’s like to use the software because now we have fundamentally different expectations from younger generations on what it’s like to use the software. Like I said, I think Generative AI is one of the biggest things to make that possible. I can’t think of much else that would be. I mean, maybe the previous would be like Cloud and SaaS as a way of using software, but this even still feels like a step change.
Tara Shankar – 00:44:11:
Absolutely. Yeah, I think that’s where we operate heavily: making the customer experience amazing through the interaction, with all the Automation we’re doing. And there’s so much we’re achieving. There was this long tail of different use cases we could never touch with limited Automation; now with Generative AI we can, and it’s all dynamic workflows, everything in the runtime. So we definitely see more human-like conversations, more empathy, which is making things even easier and better. And the customers, at least on our end, who have adopted some of our Generative AI offerings are seeing the value straight away, and also in the time needed to set it up, right? You have a goal, you put it together in a prompt, write the prompt for it, and literally work the workflows at runtime. So it’s an exciting space, and I certainly wanted your opinion on how you feel. So thanks for sharing that. Last but not least, this is more of a closing question for you. If you could go back in time and give career advice to your 21-year-old self, what would it be? And second, for others, all the budding engineers coming into the space, hearing all the hype and the facts at the same time: what direction would you give them to focus on as they grow in their careers and take on Generative AI, or this entire new world altogether, by storm?
Justin Hodges – 00:45:30:
There are a lot of things, a lot of regrets, to go back and try to undo. Okay. So one of the first things that pops into my head is that I wish I’d done a better job at networking from the day I started college, the very first day of undergrad, right? Because that’s thousands of connections you could accumulate from basic interactions at university and things like that. And you just never know. I think one of the most important things nowadays is connectivity, right? You see how that takes over social media and other things; it really affords a lot of opportunities. So in a career path that’s also super important, and I would highly recommend being intentional about building a network. That way, no matter where your technical curiosities take you in the career you pick, you’ll probably have people you know, if you were diligent with networking and that sort of thing. So that’s one thing. I also think having a mentor is very, very important. It’s maybe awkward to start that process, thinking about the mentor, why you’d want them to be a mentor, and then asking them and so on. And maybe you can’t always have that; there’s not always the right person to ask. But I think it’s really just super critical, and a huge thing you can do for yourself. It also gets you in the practice of thinking about meta information, right? You’re going through school and you’re studying, and it’s so easy to just focus on what you’re studying and getting good grades. But furthermore, it’s really important to ask yourself: why are you doing what you’re doing, with what you’re doing today? How is that going to impact you in five years, or ten years? Do you really want to be going down that path? Are there other things you should be exploring?
I mean, it’s so hard, when you’re focused on achieving good grades or performing well at your job, to stay mindful and leave yourself the headspace to think more broadly, and having a mentor really helps. I think that’s another one. And I guess on the technical side, one thing I could have done a lot better, especially very early in undergrad, would be making it a point to allocate some of my time to trying a lot of diverse open-source things. Now everything is so open source; you can keep a long list of everything you want to try and learn and just get a little bit of experience in. And that stuff builds over time and it’s so helpful. Maybe just computer science exercises, or things from other disciplines like industrial Engineering or manufacturing. These things really play a role over time. And sometimes you get exposed to really good opportunities to learn things, and that opportunity passes you by. Whether it’s a class, or a seminar, or a person you met, I would just be mindful that if you get opportunities to learn little things like that, it can make you quite diverse and well established later on. Especially now, every week there’s a new breakthrough in something STEM-related and AI-related. You should carve out time to pay attention to that.
Tara Shankar – 00:48:17:
Great suggestion. Well said. I agree on having a mentor; that is such an important one. I mean, I have had mentors over the years, but having that at a younger age is so critical. So, taking a few final thoughts from you before we close: I’m wondering if you can share your opinion for the industries or organizations trying to start their journey toward broader Automation, Simulation, or even Generative AI, in the Aerospace industry or, broadly, any other industry. What sort of advice would you give them? Certainly more education is needed, but there’s now a buying decision happening, a decision to go and take the plunge. What are the things you would ask them to be thoughtful about, and how should they embark on the journey if they have to? Just your thoughts on that, Justin.
Justin Hodges – 00:49:06:
Well, from my experience, some companies offer support throughout your patronage when you buy their products. Some just categorically say: we don’t support it, it’s on you, figure it out; others say the opposite, right? I’m in favor of the support model, so I would say that’s a pretty big consideration. I would also consider compatibility with open-source tools and things of that nature. I think a survey from 2021 found that something like 30-odd percent of Machine Learning publications were open source in terms of providing all of the code and data. So if you’re doing something that involves Simulation and AI, consider how much of it is open source and how much is closed off. I don’t know how to articulate it super well, but the last point would be: consider the breadth of what the software can do, right? Some obviously focus on specializing in one very narrow, niche thing, which is great, because then they can do that problem extremely well. But, especially if you’re a smaller company wanting to do more types of things, those options fall off quickly when you change the application or the type of thing you’re trying to simulate. Whereas other products really pride themselves on being able to do a ton of different things, right? So I think that’s a pretty critical decision, along with support and open-source compatibility. The reality is they’re probably using a lot of different tools, so you should consider how well those will play together, or not.
Tara Shankar – 00:50:30:
Oh, that’s a good one, actually: thinking about the breadth of what the software can do, and then eventually extending its possibilities. Well thought out. Cool. Justin, this has been amazing. I have to tell you, there are so many things you just talked about. No, seriously: there are so many notes people can take away from here and learn from. The space you’re operating in, and the way you’re applying what you studied in your current job, building software and applications, I think that’s massive. And your viewpoint on some of the aspects we discussed was phenomenal. So I really want to thank you, Justin, for being part of our show, talking about your experiences in the industry, and also about generative AI in general. On that note, thank you so much for being on the show.
Justin Hodges – 00:51:15:
Well, that’s a lot of nice things to say; I’m not sure I deserve all of them, but it was a pleasure to be here. I really, really enjoyed it. I’m always open to follow-up, so people can feel free to reach out to you or to me to open up a dialogue. And I’m glad you think it was useful. It’s always a challenge to be topical and inclusive in how you talk, but not so vague that people listen and don’t get anything out of it. I really don’t like that myself; when I listen to podcasts, I really want to get something out of what someone’s saying. So yeah, I’m happy you invited me and I’m happy you’re happy with it.
Tara Shankar – 00:51:44:
That’s been amazing, as I said. So thank you so much again, and we’ll definitely be in touch. And for the audience: if they have questions or want to be connected, we will definitely make that happen. Justin.
Justin Hodges – 00:51:55:
Appreciate it so much. Thank you.
Tara Shankar – 00:51:57:
How impactful was that episode? Not Another Bot: The Generative AI Show is brought to you by yellow.ai. To find out more about yellow.ai and how you can transform your business with AI-powered automation, visit yellow.ai. Then make sure to search for The Generative AI Show in Apple Podcasts, Spotify, and Google Podcasts, or anywhere else podcasts are found, and click subscribe so you don’t miss any future episodes. On behalf of the team here at yellow.ai, thank you for listening.