
Milind Sawant: AI Deployment in the Healthcare Sector

 

Not a Threat Episode 4 - Milind Sawant - AI Deployment in the Healthcare Sector

 

 


Introduction

[00:00:49] Sam: My guest today is Milind Sawant, a leader in product innovation with 30 years of global R&D experience. At Siemens Healthineers, he founded the Center of Excellence for AI and Machine Learning, growing it to more than 200 cross-functional team members and achieving significant business impact.

Milind shares a structured approach to AI deployment, covering common pitfalls, a problem-first adoption framework, and strategies for balancing speed and accuracy. We also discuss the evolving role of subject matter experts, overcoming resistance to change, and the future role of AI in healthcare.

His insights are practical, experience-driven, and invaluable for anyone looking to adopt AI in their business. So, please enjoy my conversation with Milind Sawant.

Building an AI Center of Excellence: Breaking Silos and Driving Innovation

 [00:01:32] Sam: What motivated you to establish the center of excellence at Siemens? What opportunities could you see that others were missing around this timeframe in 2019? 

[00:01:43] Milind: So, in Siemens, as you probably are aware, we have lots of divisions. We have point of care, imaging, ultrasound, diagnostics, radiation therapy. Because of this large organization, it's very easy to get lost and be siloed. And unfortunately, we sometimes end up reinventing the wheel.

So, I basically created this center of excellence to break the silos to create a forum where we can share best practices, use cases, and code. I was able to scale this organization from, I think, about five people to about 200 plus cross-functional team members. We were able to file a couple of patents, so it was a very exciting journey.

[00:02:23] Sam: It's pretty incredible growth and you were quite early on the AI wave ahead of ChatGPT, for sure.

[00:02:32] Milind: Yeah, actually, that might have helped me because, during the pandemic, people were stuck at home, so it was a great opportunity to learn something else. So that might have helped fuel this journey, I think.

Sam: As you look at the challenge of deploying AI within an organization, what are some of the most common mistakes that companies make?

Milind: I would say the first mistake people tend to make is putting the cart before the horse. What I mean by that is a lot of organizations generate a lot of data and then say, okay, let's try to get business value out of this data. It should be the other way around: start with the problem, then figure out what data we need to solve that problem using AI. So that's one mistake people make.

Another mistake people make is putting the burden of AI implementation purely on the IT department, instead of involving subject matter experts.

Number three is all about data quality, not so much data quantity. And we want to make the models as simple as possible instead of unnecessarily making them complicated.

Number four is that correlation is not causation. Just because two graphs are going in the same direction doesn't mean there's a causal relationship. So, it's very important to understand the context of the data.
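A quick illustration of the correlation-is-not-causation point (the numbers here are invented for the sketch): two unrelated quantities that both happen to trend upward over time can show a near-perfect correlation coefficient.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two made-up annual series that merely share an upward trend:
ice_cream_sales = [100, 112, 119, 131, 140, 152, 161, 170, 183, 195]
phone_subscriptions = [20, 25, 31, 38, 44, 50, 57, 63, 70, 78]

r = pearson(ice_cream_sales, phone_subscriptions)
print(f"Pearson r = {r:.3f}")  # close to 1.0, yet neither quantity causes the other
```

A correlation this strong says nothing about causation here; both series are simply driven by time, which is exactly why context around the data matters.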

The last thing I can think of is relying too much on external vendors, who may or may not have expertise in your core domain, which is essential. So, these are some of the pitfalls I can think of that could be avoided.

Avoiding Common Pitfalls in AI Deployment

[00:04:14] Sam: AI is this big, perhaps scary thing. It's very easy to say, Oh, let's turn to a consultant. Let's get people in from the outside to solve our problems for us and help us make the most of it. Looking at that first point that you mentioned there, let's dive deeper into that. Putting the problem first. Can you explain why that approach is so critical?

[00:04:41] Milind: Sure. So, let me give you another example, right? So, let's say you are having a problem with your car and you take the car to the mechanic and say, go fix the car, right? Now, without any idea as to what the problem is, the car mechanic might take the engine apart when all that was needed was the wiper blades in the car need to be replaced.

So, you've got to start with the problem statement, tell the car mechanic, Hey, it looks like I'm having some smudging on my windshield. Can we investigate that area? And then the mechanic will figure out what data is needed. And how to go about repairing the car. The same thing applies with AI.

You've got to start with the problem statement, then figure out what data is needed and how complex the models need to be to solve that business problem.

[00:05:24] Sam: Can you elaborate on why simpler models often outperform complex ones?

[00:05:30] Milind: So, I won't say that simpler models outperform from an accuracy standpoint, but if you make the models too complex, a couple of things occur. One, you need higher computation effort, more CPU resources. And it takes a long time to make a prediction, because the algorithm needs to run. So the training resources are high and the inference resources are high. The trick is to make the model as complex as necessary, but as simple as possible, to deliver on the main objective, the problem statement.
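To put rough numbers on the inference-cost point (a back-of-envelope sketch of my own, not figures from the episode): the multiply-add count per prediction grows quickly with model size.

```python
# Compare multiply-adds per prediction for a linear model versus a small
# fully connected network on the same 10-feature input (illustrative sizes).

def linear_cost(n_features):
    # one weight per input feature (bias add ignored for simplicity)
    return n_features

def mlp_cost(layer_sizes):
    # multiply-adds for a dense network, e.g. layers of size [10, 256, 256, 1]
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

print(linear_cost(10))              # 10 multiply-adds per prediction
print(mlp_cost([10, 256, 256, 1]))  # 68352 multiply-adds per prediction
```

If both models hit the required accuracy, the linear one is thousands of times cheaper at inference time, which is the "as simple as possible" argument in miniature.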

Leveraging Subject Matter Experts for AI Success

[00:06:05] Sam: You mentioned it can be tempting for companies to give responsibility for AI deployment to their IT departments. And although those teams make meaningful contributions to data collection efforts, they likely don't know what questions to ask of the data or how to maximize its value. So, how can companies better utilize the SMEs within their organizations to make sure the right questions are being asked to move projects closer to desired outcomes?

[00:06:33] Milind: So, what typically happens is for AI, you need lots of data, you need quality data, right? Most organizations, the IT departments are involved in storing the data, retrieving the data, making it accessible to the right people. So, it is very natural to be tempted to pass on the responsibility of AI to the IT department. The problem is IT departments know how to store and retrieve data. They don't know what to do with that data, right?

So that's where the subject matter experts come in. For any meaningful project implementation, or to get business value out of any AI project, you have to involve your subject matter experts right from the problem statement: problem definition, identifying what data is needed, how to clean that data, what kind of models to develop, feature engineering, and making sense of the predictions. It's going to be an iterative approach, and you have to have the subject matter experts in the entire end-to-end AI project execution.

The other thing I would say is, all subject matter experts are not the same. If you have a finance application, let's say you're trying to improve a cash flow problem, then the subject matter experts need to come from the finance or the accounting department. Whereas if you have a technical problem, then the subject matter expert needs to be coming from the R&D department.

So, you start with the problem statement, form a cross-functional team between the subject matter experts of the relevant department and the AI data scientist and put them together. And that's how you're going to solve business problems.

Data Quality vs. Quantity: The Make-or-Break Factor for AI

[00:08:11] Sam: And presumably the quality of the data feeds into the success of a project?

[00:08:18] Milind: Oh, absolutely. I know it's going to sound like a cliche, but really, in this case, the quality of data is more important than the quantity of data. And I'll give a simple example.

Let's say we're talking about a cancer detection application, right? Most of these applications use a class of algorithms called supervised machine learning algorithms. What happens in this case is that you provide training data, let's say radiology images from cancer patients, and these data points are labeled. The algorithm learns from them. The label here means that a radiologist or oncologist has made a determination: this x-ray is cancer, this x-ray is not cancer.

Now, imagine a situation where a radiologist makes a mistake because it's an edge case: the patient had a malignant tumor, and it is labeled as benign. If this particular x-ray, this particular data point, goes into the training set, the algorithm is going to learn incorrectly. Now, what happens at inference time? If a similar patient with edge characteristics comes in as input, the algorithm is going to make the same wrong prediction, that the tumor is benign instead of malignant.

The last thing you want to do is tell a patient they don't have cancer when they do, or even the other way around, right? So, this is why accurate labeling is so important. This is why having accurate data feeding into these models is so important. Otherwise, it's just going to be garbage in, garbage out.
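A toy sketch of how a single bad label propagates (invented numbers, and a deliberately minimal 1-nearest-neighbour stand-in for the real supervised models): one mislabeled edge case in the training set makes the model repeat the same mistake at inference time.

```python
def nearest_label(x, training_set):
    # 1-nearest-neighbour on a single feature, e.g. a tumour measurement
    return min(training_set, key=lambda pair: abs(pair[0] - x))[1]

# Same data, except 8.0 is mislabeled "benign" in the noisy set:
clean_training = [(1.0, "benign"), (2.0, "benign"), (8.0, "malignant"), (9.0, "malignant")]
noisy_training = [(1.0, "benign"), (2.0, "benign"), (8.0, "benign"), (9.0, "malignant")]

edge_case = 7.5  # a new patient resembling the mislabeled example
print(nearest_label(edge_case, clean_training))  # malignant -- correct
print(nearest_label(edge_case, noisy_training))  # benign -- the label error propagates
```

The quantity of data is identical in both sets; only one label differs, and that one label flips the prediction for exactly the patients who look like the edge case.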

[00:09:50] Sam: The stakes are so high in the healthcare space. Almost people's worst nightmare, when it comes to AI, is being passed through to a machine and told everything's okay when in fact it's not, or vice versa, which could be just as bad.

[00:10:06] Milind: Absolutely. So that's why you focus on getting high quality. If you can get high quality and high quantity, sure, absolutely. But if you have to make a choice between high quantity and high quality, you want high quality data, which in other words means you're darn sure about the labeling: this is cancer, this is not cancer. Only that data should be part of the training set.

Starting Small: The Key to AI Adoption Within an Organization

[00:10:28] Sam: You're emphasizing there the importance of starting with maybe smaller pilot projects, rather than trying to eat the elephant in one bite, as it were. So, when you're making recommendations for these projects, what are the key elements that you are trying to incorporate that would help to generate momentum and excitement within a business?

[00:10:52] Milind: Yeah, that's a good question. So, when any organization is at the inception of its AI journey, people are excited, there's enthusiasm, right? Now you want to build on that enthusiasm. So, while selecting a pilot project, make sure, one, that the projects are small projects that can be completed in weeks rather than months. Two, projects where the data is already available in house; you don't have to pull hair or go to ten different departments to get the data. And three, keep the pilot projects simple enough that your internal resources can solve the problem, rather than hiring people, because then you're going to stretch the timeline and lose momentum.

For the pilot project specifically, the objective should be to create excitement, because if the project works it will create the necessary momentum. Success sells itself very easily.

If you have a project that works, everybody in the company knows, and that will create the necessary excitement and enthusiasm, and more people will come in. And of course you'll get more funding. But on the flip side, if you take a major project and it fizzles out, people are going to lose interest equally fast, right? So that's why it's important to get a few quick wins. And that can only occur with smaller, simpler projects.

[00:12:12] Sam: So rather than trying to completely rejig an end-to-end process, go for a specific task within that process where there are gains that can be made and over time just gradually take small bites out of that.

[00:12:31] Milind: Absolutely. Yeah, that is the key, right? Start with simpler projects. And as you get more momentum, more interest, more funding, then you gradually go to more midsize projects and then larger size projects.

[00:12:43] Sam: It does strike me that there's a juxtaposition between the need to move quickly and the need to address that data quality point, maintaining a really thorough approach. So how can a company strike the right balance between those two opposing forces of speed and accuracy?

Speed vs. Accuracy: Finding the Right Balance in AI Implementation

[00:13:04] Milind: So, what I suggest is, at the inception, start with projects that are much simpler, where the risk of failure is much, much lower: projects that you can knock off in a few weeks, right?

Of course, as an organization matures in the AI journey, the projects are going to be more complicated. When that occurs, you take an iterative approach, meaning you start with data, clean the data, build a model, make predictions. And those predictions are not necessarily going to have high accuracy on the first round; they may perform very poorly on your first iteration.

So, it's very important to quickly understand what happened: put subject matter experts on the team and understand what the problem is, right? Most likely the data quality was the issue, or maybe you did not select the right features. So you go to the second iteration, where you clean your data, refine your model or refine your inputs to the model, and come up with a prediction. When you take this iterative approach, and each iteration is a much smaller cycle, then you can also solve mid-size problems.

Once you become mature, then you will have the processes in place to take on much larger projects. So that's how I would suggest balancing the speed versus accuracy problem.
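The iterative cycle described above can be sketched as a loop. This is my framing of it, not Siemens code; the function names and the dummy scores are illustrative stand-ins.

```python
def run_iterations(data, clean, fit, evaluate, target, max_iters=5):
    """Clean, fit, evaluate; stop once the domain-specific target is met."""
    for i in range(1, max_iters + 1):
        data = clean(data)      # data quality is usually the first suspect
        model = fit(data)       # refine the model / features each round
        score = evaluate(model)
        if score >= target:
            return model, i
    return model, max_iters

# Dummy stand-ins so the sketch runs: each "iteration" improves the score.
scores = iter([0.6, 0.8, 0.96])
model, rounds = run_iterations(
    data=[], clean=lambda d: d, fit=lambda d: "model",
    evaluate=lambda m: next(scores), target=0.95,
)
print(rounds)  # stops after the third iteration
```

The key design point is that the accuracy target and the evaluation come from the subject matter experts, while each pass through the loop stays short enough to keep momentum.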

Overcoming Internal Resistance to AI Adoption

[00:14:20] Sam: There's often a lot of excitement surrounding the deployment of AI. We've already alluded to that, and trying to maintain that momentum. I'm sure it's not all plain sailing, people love progress, they hate change, right?

So how do you handle resistance to AI from within an organization, especially from those who perhaps are skeptical or who view it as a threat to their roles and perhaps their livelihood?

[00:14:49] Milind: This is by far the best question that I have had so far, and I enjoy answering this one. When I'm coaching people on AI, especially mid-career professionals, they're worried, are they going to lose their jobs? And I keep telling them, let's convert this threat of losing jobs into an opportunity by learning about AI.

So, let's say you have mid-career professionals. Now, they're not going to start programming in Python and make AI projects, but their domain expertise is very important. Almost all AI projects require domain expertise to train the models, right? So, I tell them, let's cross train. If you have domain expertise, that's fantastic.

That is a prerequisite. Let's train you a little bit on AI, just to be familiar with some of the terminology, and then when you're semi-trained, you can start working with the data scientists. So, the key is to upskill on AI and use your domain expertise to develop AI use cases that way. In fact, you'll actually make your job more secure, because now you have domain expertise and some hands-on experience.

[00:15:53] Sam: So, trying to ride the wave rather than getting pulled under it.

[00:15:59] Milind: You cannot fight it. The best is to ride it and be in sync with it to make the most out of it.

[00:16:05] Sam: Short of throwing in the towel and saying, all right, I'm just going to go build houses and hope that robots don't get good enough to do that before I retire, what skills do you believe will be most essential or most valuable for professionals who are looking to thrive in an AI-driven future?

Future-Proofing Your Career in an AI-Driven World

[00:16:27] Milind: I'm going to answer your question, but I'm going to give another quote before, which is funny. People ask what kind of jobs are going to be secure in this AI world?

The answer is, if you're a plumber, your job is going to be very secure. Because plumbing requires making sure your seals are tight; there's a robotics element, sensing liquid. These are things AI is not very good at right now. Things might change in the next five to ten years, but if you're a plumber, your job might actually be very secure.

The jobs that are going to be at risk are those like, for example, paralegals. In any legal case, a lot of work is needed to find the historical cases that are similar to the current case being addressed. It requires a lot of study. All of that can be done with AI very quickly.

Teaching, for example, most of the teaching could essentially be done by AI, right?

I foresee a situation where AI is going to be assisting us humans, at least for the next five to ten years. It's not going to take our jobs; it's actually going to make our jobs much easier, is how I see it.

[00:17:33] Sam: In the short term or relatively short term, it's not clear that many jobs will disappear. It will be similar to the internet, right? It was a tool that came out and it was something that facilitated higher levels of productivity within the workforce, jobs changed, but they didn't necessarily disappear. The question with AI is, maybe they will.

[00:17:57] Milind: I'll give a few examples, right? When the automobile was invented, people were worried that those who built horse carriages, or those who maintained horses, would lose their jobs, right? A hundred years ago, poor people had horses and rich people had cars. It has really flipped now: rich people have horses and poor people have cars, right?

What happens is the nature of the job changes. When the calculator was invented, the accountants didn't lose their jobs; they could just do the computations much faster. When the computer was invented, people were able to type out words and letters. Sure, some of the stenographers' jobs might have gone away, but I think what is happening is we become more and more productive with the same resources. Essentially, we make humanity better by becoming more efficient, and we have more time for leisure compared to, say, 200 years ago, when we were farming the fields and didn't have time to watch TV.

Making AI Accessible: Tools and Training for Mid-Career Professionals

[00:19:02] Sam: So, there's, obviously an element of, the personal agency of go and learn, try these things out, test them. Don't be afraid of them, but what responsibility do companies have? What accountability do they hold for facilitating that upskilling of their workforce?

[00:19:26] Milind: That's a good question. I think most organizations are trying to figure this out, and if somebody claims they know it, no, they don't, actually. What's happening is executive management has realized that upskilling their staff on AI is the key thing. What might be missing is: upskilling on what, right?

So, what I propose is different strokes for different folks. If you are an early-career professional, then there are certifications on artificial intelligence and machine learning available. There are courses online, or you can go to campuses, but there are lots of online courses available. So for early professionals, I would say basic AI/ML training, maybe Python.

If you're a mid-career professional, then I don't think it makes sense for them to start learning Python. For them, I always propose that you use your domain expertise and know enough about AI to see how you can develop AI use cases on your domain expertise.

Instead of having to learn Python, they should understand how they can convert their domain expertise into AI use cases. Now, if you're a senior executive, then the level of training is different. They need to be able to work around the hype of AI and see how to get business value out of AI projects. So, for them, the training would be: how can we help end customers, how can we get business value, which projects to fund. Depending upon the level in the organization, the training needs to be different, but training is a must, I would say.

[00:21:09] Sam: So, let's take the mid-career professional. What kind of modules or format would you recommend to help those individuals figure out how their domain expertise can be developed within AI?

Have you found any tools that can help with the personal side, the agency of going and playing with ChatGPT and it's really one of those things, you have to use it to learn it. I could describe it to you, but you're not going to see the value of it until you really start to get into it. So how do you encourage those mid-career professionals and what training works well?

[00:21:50] Milind: There are tools available and there are trainings available for mid-career professionals. Like I said before, for mid-career professionals, they don't have to study Python. They can if they want to, but a better utilization of their time is to convert their domain expertise into AI use cases.

Surely, they need to be familiar with the terms, right? What is AI? What is ML? What is deep learning? You need to be aware of what it is that AI can do and what it is that AI cannot do. There are lots of tools available. For example, in the Anaconda distribution, which is open source and free, there are no-code tools, like, for example, the Orange software. You don't need to write the Python code, because all the Python code is written in the background. All you need to do is place widgets which perform a certain function, like build a model or make a prediction. They're like icons that you can place on your screen.

So, there are tools available for those with a minimal programming background. Going forward, I wouldn't be surprised if even with Excel and Word you would be able to do some bare-minimum AI things. So, there are tools available for mid-career professionals. I would say they should focus on the terminology, so they can interact with the data scientists, and also on how to convert their business domain knowledge into AI use cases.

What kind of data do we need? What kind of accuracy is necessary? I'll give an example, right? For some diagnostic applications, let's say 99-point-something percent accuracy is good enough. But if you're talking about, let's say, an airplane, 99 percent is not good enough, because there are thousands of airplanes flying in the sky.

So, 99 percent would mean a couple of airplanes falling out of the sky every day. This accuracy requirement is so domain specific that a data scientist would not know what to target.

So, this is where the mid-career professionals come in and say, for this specific application, I need 99.995. For example, in our medical devices, our high-throughput systems can process about a million patient test results a year. So, 99 percent is not good, and even 99.9 is not good. We had to go about two orders of magnitude higher just so that we do not cause false positives and false negatives. This is where domain expertise is so important.
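The arithmetic behind that point is simple but worth seeing: at a million test results a year, the accuracy target directly sets the expected number of wrong results.

```python
# Expected wrong results per year at different accuracy levels,
# for a throughput of one million tests a year (figure from the episode).
tests_per_year = 1_000_000

for accuracy in (0.99, 0.999, 0.99995):
    wrong = tests_per_year * (1 - accuracy)
    print(f"{accuracy:.5f} accuracy -> ~{wrong:,.0f} wrong results per year")
```

Going from 99 percent to 99.995 percent takes the expected error count from roughly ten thousand patients a year down to about fifty, which is why the extra two orders of magnitude matter.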

AI Performance Standards: Why the Bar is Higher for Automation

[00:24:15] Sam: It does raise an interesting question that there has to be this order of magnitude better performance with AI than humans. It feels like for AI to be accepted by people, there needs to be a significant improvement over the current state.

And you look at things like self-driving cars where, for all intents and purposes, even today, driving on roads with humans, they are safer; there is data to show that. But the minute a Tesla crashes, it's on global news, and it's held to such a high standard.

Do you think that will change? Do you think over time, that bar that AI will have to clear will come down or if anything, do you think it might go the other way? Do you think that the bar could get higher?

[00:25:13] Milind: So, it's interesting that you give the Tesla example, because I have been driving one for the past four years. I got it not because I'm into cars, really, but because I wanted to see how the technology was evolving. What I can tell you, and I'm not trying to sell Teslas here, is that the technology has really matured.

So, if you had asked me the same question four years ago, I would have said, sure, this full self-driving is great, but I won't trust my life to it. It has come to a point now where they have done end-to-end AI on the driving. Let's say a human being is tired or has had a long day; I would say Tesla's FSD is slightly better than a tired human. I think it exceeds an average human being's ability to drive a car. It's those edge cases where it gets a little tricky, right?

So, what I would say is the technology has definitely improved significantly over the past four years, at least in automotive and also in healthcare. Coming back to your question, like you rightly said, there are thousands of deaths in car accidents in the U.S.; they don't show up in the five o'clock news. But if there's just one problem with a Tesla, it's going to show up on Twitter and the five o'clock news.

So, how do you address this? Forget about AI in general. Let's say you have a process A and you're trying to improve it with a process B. Because there is more uncertainty in the new process, I always use a rule of thumb that you need some factor of improvement. It could be five times.

It could be ten times. Ideally, you would only need to be slightly better than the existing technology to justify a change, right? But because there's uncertainty and less knowledge, I use a rule of thumb of five to ten times better. Then I think it's good enough, in my opinion, because change is not going to come in significant jumps.

It's going to be gradual. We've got to start walking before we can start running. So these improvements are going to be incremental; they're not going to be step changes. We have to accept and embrace these changes. Then, as long as we are mitigating the risks such that we are at least, say, 5x or 10x better than a human driver, we should allow the cars to drive, because if it's 10x better on average, we're going to have one tenth of the accidents of human drivers, right?

So, it doesn't make sense to put too many constraints on full self-driving, for example, and it is only going to get better over time. So, my suggestion is to have a safety margin or safety factor, 5x or 10x, depending upon the application.

Incremental Improvements vs. Big Leaps: How AI Gains Traction

[00:28:01] Sam: It is interesting, because if you said to someone in a business, "I can give you a 10 percent improvement on this process for the same price", whether that's cost or time, they'd bite your hand off. If you were to implement AI within a process and you could say, "Hey, your team spends a hundred hours a week. I can get you 10 hours back and it's not going to cost you anything", people would jump at it. Whereas if you said that to a regulator, when it comes to a healthcare tool or a self-driving car, it's just not going to move the needle in the same way.

And so, I guess it's getting that balance right internally. Yes, the stuff it can do is amazing. But back to your point: can we make incremental small changes that over time add up? Ten percent here, 20 percent there, 50 percent there; over time, you combine all those and you have a significant step change in performance across an entire process.

[00:29:05] Milind: So, if I may, let me play devil's advocate here. I'm not trying to defend the FDA, but I'll tell you why the regulators do what they do. Even if somebody says 10 percent, if we are darn sure it's 10 percent and no other change, meaning the performance is exactly the same with 10 percent fewer resources, sure, it's a no-brainer. Go for it, right?

The problem is, because this is new, we are not sure whether you are going to get the same level of performance. So, there's some uncertainty, and the factor is just to accommodate that uncertainty in the prediction, right? If we knew precisely that the performance would be the same with 10 percent fewer resources, whether time or money, then sure, there would be no issue. The problem is we are not very sure whether everything else is going to be identical. So, you are hedging your bets by putting some buffer in there, essentially.

How AI is Transforming Business and Patient Care

[00:29:58] Sam: To pull it back up to an earlier point, you've worked at this intersection of AI and healthcare now for a while, and there's a lot of hype and hysteria and we've covered some of it, there's understandably a lot of concerns about how the tech will be deployed. I'd love to get your take specifically on how you see AI reshaping patient care and maybe the role of clinicians over the next decade.

[00:30:28] Milind: So, I want to start by saying that Skynet is not coming. AI is not going to take over humanity, that's for sure. I'm not worried about AI taking over our lives. What I am worried about, though, is that humans could misuse AI to harm other humans. That is a legitimate threat. But AI by itself is not going to go rogue; that's one.

Now, coming back to the AI landscape in the healthcare space, I think we are at some sort of an inflection point, right? We had all these precursor technologies: the internet, computers, mobile telephony. These precursor technologies have set the stage for AI to succeed. So, what is going to happen is things are going to change in phases. And again, this is my guess; nobody can say for sure what's going to happen. But if I were to guess, in the first ten years, we're going to see a transition.

Initially, you're going to see applications which help or assist clinicians. Just like driving: it is driver-assistance technology; the cars are not driving by themselves, but they are assisting the driver. The same thing is going to occur in healthcare. These AI tools are going to be available to help or assist the radiologists, the oncologists, or the physicians over the next five-to-ten-year period, and toward the end of that period, the technology is going to mature.

So, initially from assisting, as we end the decade, things are going to switch over, where the routine work might be taken over by AI and only the edge cases would be handled by the healthcare professionals. Or the healthcare professionals may be involved in validating the work of the AI rather than doing it themselves.

We know for sure that radiologists are in shortage all over the world. They only get a few seconds to look at an image and make a determination: cancer, no cancer, or whatever. Now, what if the bulk of the common patients are taken care of by the AI, and only in the edge cases does the radiologist need to spend their precious seconds looking at the x-ray? So initially, these tools are going to help the radiologists focus on the edge cases. Over time, the AI may take over most of the standard cases, and clinicians would only be involved in validating the work of the AI.

Now, as you go beyond that period, you're going to see an exciting journey. I've heard about nanorobots that you could inject into your body, and they'll go where the cancer is. Or we can have precision medicine, or medication developed using AI that is targeted to your specific problem based on your genome and your clinical markers. So, I think that is coming. Nobody can predict whether it is going to occur in 15 years or 25 years; it's very hard to predict. We're talking about quantum computers now, so there are a lot of things in the pipeline.

I would say the next 10 years can be predicted: this transition from assistive systems to the routine cases being taken over by AI, with clinicians working on the edge cases. Beyond that, there are all these theories and options available, but whether they occur in year 11 or year 25 is uncertain. I know for sure that within the next 50 years, the game is going to be different and we're going to have a very different world.

[00:34:00] Sam: And if you were a regulator, put yourself in their shoes, you're the FDA. How do you look at this problem (or opportunity, depending on which side of the aisle you land on)? How do you address AI, particularly within healthcare, protect patients, and stick to the mission of the regulatory body?

Regulating AI: Striking the Right Balance Between Innovation and Safety

[00:34:29] Milind: So, that's a very good question, and very relevant for most regulated industries, right? The trick is, we need regulation to make sure that patients' interests are taken care of and there's no harm to the patient, but over-regulation can also stifle innovation.

Like I said before, we need to take an incremental approach to regulating it. If you put too many constraints in place, medical device manufacturers and healthcare manufacturers are not going to be able to release products that help the customer. The language that I've seen from the FDA is that they want to work with academia and with industry on this journey.

The other point to note is that AI is evolving. Regulators are used to technology that changes over decades; they are not used to something that is changing every year. So, imagine that the regulators had created their framework before ChatGPT existed. Suddenly ChatGPT comes on the scene, and now generative AI is the bulk of the talk about AI, but that's not the only AI that is out there.

So, things are moving so quickly that regulators have to shoot at a moving target, and there's no way they can nail it on the first round. So, if I were the FDA, I would be flexible. I would be open to change. I would be open to managing the risk, because no risk, no returns, right? We have to take some risk with the technology so that patients will benefit. The theme should be trust but verify. That's how I would do it.

How Business Leaders Can Effectively Deploy AI

[00:36:04] Sam: What piece of advice would you give to leaders who are just starting their AI journey?

[00:36:11] Milind: Very good question again; this is something I've been addressing for a few years now. It's the first thing I mentioned, right? If you are at the inception of an AI journey, make sure your subject matter experts are in the thick of AI project development. Don't just pass it on to the IT department and say, "go figure this out." Make sure there's a subject matter expert involved.

Try to manage the hype. The hype is good in the sense that it gets you funding from investors, but overhype means you have unrealistic expectations. So, leaders have to manage stakeholder expectations and make sure that they're putting their limited money into the right projects.

The other thing to remember is that if everything else fails and you don't know what to do, work on AI projects that help your end customer. Let's say you're a medical device manufacturer. If you work on a feature that improves the customer's experience, it's going to have a trickle-down effect: maybe you will sell more, or you'll be more cost-competitive, and your revenue and cash flow positions might improve. That extra money will feed and fund additional projects.

Of course, pilot projects are a different discussion; for now, do whatever makes sense to get some quick wins. But when you start putting serious money into your projects, make sure as a leader that you're helping your end customer, because that is what is going to bring in more money and make it a self-sustaining cycle. So that's what I would do: make sure your AI projects deliver business value by helping your end customers.

[00:37:45] Sam: It's a nice way to round it out, back at the point of 'begin with the end in mind'; that's the key takeaway. So, just to close out, the traditional question for you, a meaningful closing question: whether it's your career or your life, what's the story that you'd like others to tell about you?

Final Thoughts: AI is Easier to Learn Than You Think

[00:38:07] Milind: So, you'd be surprised to know that five years ago, I had no idea what AI was other than the Terminator movies. Within five years, I was able to learn about AI, develop use cases, and file for a couple of patents. I presented at healthcare conferences. So, the trick is, AI is very easy to learn. There are good online tools available. If you're a mid-career professional or a senior executive, don't worry too much about learning Python; think about how you can use AI with your domain expertise. Anybody can learn AI. There are lots of tools available, so there is no excuse not to know about AI. I would say, if you're listening to this podcast, just Google it: there are lots of online courses and free tools available. You can make significant strides in five years, like I did. So, I wish you good luck.

[00:39:01] Sam: That's a very positive way to end, Milind. Thank you so much.

[00:39:04] Milind: You're welcome. I enjoyed talking with you as well.