Me, Myself, and AI Episode 102

Advancing Health Care With AI: Humana’s Slawek Kierner Talks Synthetic Data and Real Lives


Topics

Artificial Intelligence and Business Strategy

The Artificial Intelligence and Business Strategy initiative explores the growing use of artificial intelligence in the business landscape. The exploration looks specifically at how AI is affecting the development and execution of strategy in organizations.

In collaboration with

BCG

Slawek Kierner, senior vice president of enterprise data and analytics at Humana, has been immersed in data for as long as he can remember. His fascination with process simulations began with his first PC, running Matlab and Simulink, and it later led him to innovative roles at Procter & Gamble and Nokia. Slawek’s desire to use data for a noble purpose brought him to Humana, where he uses AI to solve problems around medication adherence and predict population health outcomes.

In this episode of Me, Myself, and AI, Slawek describes how re-creating synthetic individual profiles indistinguishable from those of real humans can help physicians better predict patient admissions and behaviors. He also shares stories on how his team created an internal machine learning platform that gives data scientists access to open-source capabilities — all in pursuit of helping human beings live longer, healthier lives.

Read more about our show and follow along with the series.

Subscribe to Me, Myself, and AI via Apple Podcasts, Spotify, or Google Podcasts.

Transcript

Sam Ransbotham: There can be sizable gaps between people working directly with technology and people working in product teams. Today’s episode with Slawek Kierner, senior vice president, digital health and analytics, at Humana, illustrates how diverse experience can come together in novel ways to build value with AI.

Hello and welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I’m Sam Ransbotham, professor of information systems at Boston College. I’m also the guest editor for the AI and Business Strategy Big Ideas program at MIT Sloan Management Review.

Shervin Khodabandeh: And I’m Shervin Khodabandeh, senior partner with BCG, and I colead BCG’s AI practice in North America. Together, BCG and MIT SMR have been researching AI for four years, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build, deploy, and scale AI capabilities and really transform the way organizations operate.

Sam Ransbotham: We had a great discussion with Prakhar Mehrotra from Walmart last time. Prakhar himself has a fascinating background, from organizations like Twitter and Uber to his current job at Walmart. There are quite a few differences between those organizations, and it's also interesting to learn which aspects are similar. Check out our last episode for the fun details.

So for today, let’s go on to something new. Shervin, I’m really looking forward to today’s episode.

Slawek, thanks for taking the time to talk to us today. Part of our focus is on you, really. Can you introduce yourself, tell us what your current role is, and we’ll go from there?

Slawek Kierner: My name is Slawek Kierner, and I’m a senior vice president in charge of data and analytics at Humana. Humana is a Fortune 60 company really focused on health care and helping our members live longer and healthier lives. I have roughly 25 years of experience in data and analytics across consumer technology and health care industries. And, you know what, I’ve always been interested in data. I spent my first money on a PC and got Matlab and Simulink loaded on it. I would create all kinds of simulations, run them overnight, see what came out in the morning, and use all kinds of ways to visualize the data. At that time, I was fascinated with process simulations, autonomous control systems, and adaptive control. And this fascination got me a job with P&G, Procter & Gamble. And the good thing is that I could continue to pursue my passion, so there was a lot of permission space to innovate and bring advanced algorithms to these spaces. And that’s how I started.

That was also a fascinating time, because you could start to have feedback from your algorithms, from your early AI-based systems. And [at] this moment, I got hired by Nokia. It was the moment when the iPhone was launched, and that was another fascinating transformation. As you probably know the story, that part of the business was acquired by Microsoft. I moved all of my team — I was running data and analytics for Nokia at that time — to Microsoft [and] looked around for a while. I helped run some of the same operations across Microsoft’s retail and devices business and then moved to the cloud and AI unit. And that got me to Humana.

Roughly one and a half years ago, I got to this moment when I thought, “OK, I’ve built quite a bit of experience in data and analytics and can make money with it. But now let’s try to use it for some good purpose.” And that’s something that we can do in health care, where, in addition to having a lot of data, we also have a very noble purpose.

Sam Ransbotham: I think you’ve run the table at almost every sort of industry and every sort of application. Sounds like you’re just right ahead of whatever is happening next.

Slawek Kierner: That’s what I hope. I’m looking for these challenges.

Sam Ransbotham: This is pointless, but I have to follow up — were you a chemical engineer back in the P&G days?

Slawek Kierner: I worked very close to chemical engineers, by the way. Some of my projects were actually to rewire large chemical factories. Think about the really, really huge chemical operations that P&G has in major cities. But to your point, I actually did two majors. One was in mechatronics, which at that time was essentially the expression of a fascination with AI in my part of the world. And the other one was in business management.

Sam Ransbotham: I just mentioned that because both Shervin and I were chemical engineers back in the dark ages, and I thought we’d found a compatriot in the whole thing, because I actually got interested the same way — you mentioned simulation as your beginning as well. That’s where I started too: you didn’t have to build a plant; you could just simulate it. It just really shows some of the opportunities from data. And it sounds like you saw some of those same things.

Slawek Kierner: Exactly. And you mentioned simulations of factories — that’s exactly what I was doing. It’s interesting, because that fascination with simulation that I had early on during my studies, I could bring to P&G.

Sam Ransbotham: So a factory is very different from where you are at Humana. That’s a completely different setting — chemicals don’t mind if they sit for a while in a vat, but patients do. How do the things you’ve learned from that past experience influence your current work?

Slawek Kierner: I think there are a number of things you can learn in supply chain and chemical processes that actually do apply to health care. It’s fascinating. But let me just list a few. So first of all, the whole basic setup of process and process control applies to a chemical factory, and when you think about a market, it’s very similar. You can only apply so much marketing: when you do too much, of course, you’re wasting money, and not enough is not going to give you the results. So you meet a similar problem. And I think the trouble that you have keeping chemical processes in control we also see in health care. For example, if a chemical process has a lot of lag (the time from when you apply a certain force or temperature to when you start seeing its result on the output of the process), the longer that lag is, the more difficult it is to keep the system in control.

Sam Ransbotham: Classic thermostat problem.

Slawek Kierner: That’s what it is. And when you think about health care, we have exactly the same thing. We try all kinds of treatments and programs for our members at the onset. So think about diabetes or, let’s say, having heart disease and then needing to adhere to medications. We try to convince you to do it. But very often, we need to wait a very long time until we see that your health has actually improved. And again here, we have these long lags. And these lags are inherent both to the process itself and to the clinical domain. But quite often, they also result from just poor data interpretability.
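To make the lag problem concrete, here is a minimal sketch in Python. It is purely illustrative and not anything Humana runs: a toy proportional controller drives a simple first-order process toward a setpoint, and the only point is that the longer the measurement lag, the harder it is to keep the loop near its target, the "classic thermostat problem" mentioned above.

# A minimal, hypothetical sketch: proportional control of a first-order process
# with a configurable measurement lag. Longer lag means more overshoot and oscillation.
from collections import deque

def simulate(lag_steps: int, gain: float = 0.8, steps: int = 60) -> list[float]:
    setpoint, state = 1.0, 0.0
    # Delay line: the controller only "sees" the state from lag_steps iterations ago.
    delayed = deque([0.0] * lag_steps, maxlen=lag_steps) if lag_steps else None
    history = []
    for _ in range(steps):
        observed = delayed[0] if delayed else state
        control = gain * (setpoint - observed)   # proportional control action
        state += 0.5 * (control - 0.2 * state)   # simple first-order process response
        if delayed is not None:
            delayed.append(state)
        history.append(state)
    return history

for lag in (0, 3, 8):
    trace = simulate(lag)
    print(f"lag={lag:2d}  final error={abs(1.0 - trace[-1]):.3f}  "
          f"max overshoot={max(trace) - 1.0:+.3f}")

Running it shows the no-lag loop settling smoothly, while the lagged loops overshoot and oscillate, which is the same difficulty Slawek describes when health outcomes only become visible long after an intervention.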

Shervin Khodabandeh: I think you’re making a super-interesting point that there are archetypal problems in different disciplines, different industries, different fields altogether, but similar approaches, once customized to that particular industry or company, give a lot of good results. I resonate a lot with that, and you listed some very good examples. Do you feel that that kind of diversity — having seen different archetypal problems in different industries and disciplines — is an important attribute for someone like yourself, who is leading an AI organization for a company?

Slawek Kierner: Yeah, I think it is useful. It’s useful for me, of course. And I have a lot of respect for leaders who emerge from the health care industry; they have that background, while I am still learning how to really operate in a health care context. But to your point, I think specifically in health care there is a need for people who come from other industries and bring knowledge, because it feels that health care is a little bit behind: certainly in terms of data transformation and availability of data, certainly in usage of AI, and also from a platform point of view.

Sam Ransbotham: You mentioned several different examples there. Can you give us some more detail about a particular success you’ve had at Humana? Obviously, artificial intelligence is what we’re most interested in. Is there a success story that you’re proud of?

Slawek Kierner: Yeah. There are a few. So we certainly are testing and learning a lot. I think the key progress that I am really proud of is the fact that we have created our own internal machine learning platform, which gives our data scientists access to modern technologies and to all of the open-source capabilities. We have cloud accessibility, such that computing power is no longer limited by what you have in your data center; essentially, you can tap into and run any kind of algorithm out there. We are starting to see the benefits coming through in way better and way more accurate models that predict retention in our business and that help us predict inpatient admission situations, and therefore act, hopefully, way ahead of the time when our member needs to visit the ER or get into a hospital. And hopefully, these predictions come early enough that we can help this person stay in better health and avoid needing inpatient treatment.

There has also been a lot of progress in terms of usage of AI algorithms and their sophistication. We had to overcome a lack of proper security in the cloud to handle PII and PHI data. So as we were building those capabilities, and helping our vendors build them as well, we had to generate high-quality testing data that would be differentially private. We were able to create a new AI model based on synthetic data, which is of similar accuracy to one created on real data. Using generative AI, we created high-fidelity synthetic profiles and populations of our members and could ingest those into the platform and start to learn how to use it. We trained our data scientists; we have more than 200 Ph.D.-level data scientists at Humana, and they could get access and start using the systems ahead of our readiness for PHI and PII data handling, which happened in the meantime. But the fact that we have this synthetic data creation capability is actually helping us in many other ways as well.

Sam Ransbotham: So let’s make sure I understand. You’re using this tool to help your organization learn how to handle the real data. You use AI to generate synthetic data that lets everybody practice and learn before they ever touch a real patient’s data.

Slawek Kierner: That’s correct.

Sam Ransbotham: Gosh, I really liked that example. Could you describe why — I think I’m trivializing it — but why is that an AI problem and not a statistical sampling problem? What makes AI fit there well?

Slawek Kierner: It’s a good question. And I think this became an AI problem when AI became better. I don’t know if you have seen the work of Nvidia on creating synthetic faces of people. Essentially, you use deep learning to train your network to learn what the face of a person looks like. And then you ask it to re-create faces, taking away the original data and controlling for overfitting, in such a way that you can ensure that none of the training pictures is re-created exactly. So essentially, all the faces that the synthetic generator generates are unreal and never existed. That field started many years ago, but initially those faces kind of always had two eyes — but an eye could be in the middle of a forehead. And so right away, you’d see that it’s not real. But over the last two years, they have improved so much that when you look at a synthetic face right now, it’s hard to recognize that it’s not a human. You could easily be tricked.

Sam Ransbotham: So the parallel there is that you’re performing the same trick but with data rather than with images.

Slawek Kierner: Exactly. We look at a human’s record: their health history, but actually a complete history of demographic and health data. And we re-create the same population with an approach like this one. It’s fully differentially private and of very high quality: if a physician looks at the health history of a synthetic individual, the physician cannot tell that it’s unreal. It looks real.
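For readers who want a feel for what "synthetic but statistically faithful" means, here is a toy sketch in Python. It is not Humana's pipeline: the fields are made up, and the "generator" is just an empirical mean and covariance fit to fake member records, from which synthetic rows are sampled that preserve aggregate structure without copying any real row. Production systems of the kind Slawek describes use deep generative models with formal differential-privacy guarantees.

# Toy illustration only: fit a simple generative model to made-up "member" records
# and sample synthetic rows with similar joint statistics.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical real records: age, number of chronic conditions, annual ER visits.
real = np.column_stack([
    rng.normal(68, 8, 500),    # age
    rng.poisson(2.0, 500),     # chronic conditions
    rng.poisson(0.6, 500),     # ER visits per year
])

# "Train" the generator: here, just the empirical mean and covariance.
mean, cov = real.mean(axis=0), np.cov(real, rowvar=False)

# Sample synthetic members from the fitted distribution.
synthetic = rng.multivariate_normal(mean, cov, size=500)
synthetic[:, 1:] = np.clip(np.round(synthetic[:, 1:]), 0, None)  # keep counts whole and non-negative

print("real means     ", real.mean(axis=0).round(2))
print("synthetic means", synthetic.mean(axis=0).round(2))
# No synthetic row is a copy of a real one; only aggregate structure is preserved.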

Sam Ransbotham: Who had the idea to do that? I would not have thought of that.

Slawek Kierner: So it’s always a mix of people sitting around the table trying to solve a tough issue that we have. That’s part of it. Quite often, we invite our partners. In particular, that technology came from a collaboration with a partner from Europe who, interestingly enough, also worked with me at Nokia — a very talented individual who created the synthetic data creation capabilities and had a lot of success with this in Europe, which, as you probably know, is much more concerned with personal privacy. Another set of partners we are working very closely with right now is Microsoft, an innovator in advancing differential privacy in this space. And then finally, we quite often connect with academia, and we have those connections as well.

Shervin Khodabandeh: This is a great example of how a real technical topic from a different discipline, with deep learning and image recognition, makes a tangible difference in a completely different industry. And I could imagine, maybe 20 years ago, 15 years ago, clinicians and business folks running a company like Humana would say, “Well, that’s not the same as our real patients. It’s all synthetic. We can’t trust it, etc., etc.” My question is, what level of education and knowledge sharing do you feel you’ve had to go through, both at Humana and in your prior careers, to bridge that gap between the art of the possible on the technical side and the business, where the understanding is not the same? And do you feel like there is still a gap? And how do you bridge that and narrow it?

Slawek Kierner: Shervin, yes, there is a gap. And I think there’s a gap between technology and business understanding. And there’s a gap between technology and ourselves. We all, in this particular field, need to continue to learn. Every few years, things change. And sometimes they change totally on us. Part of it, and I think the first skill, is how you continue to learn: this continuous learning is necessary for all of us, and for us as leaders who need to inspire our teams to do the same. Because even if you hire Ph.D.s in data science who have been recently trained, they need to continue to learn. They need to have a workbench where they can tweak data and learn with others.

And then the other part of it, which I spearheaded at Humana, is a much tighter link with the product teams in the large technology companies that we collaborate with. What we are trying to do is to make sure that we are closely in touch with those product teams. We follow what they are doing. We participate in their customer advisory boards. And through this, we both help them shape their products and, for ourselves, get excited and hopefully drive accelerated adoption of these new features and functionalities. So that’s one part of your question: “How do we stay ahead?” And then, of course, we have a huge role in helping our business teams and our partners in the enterprises where we work to also understand the art of the possible, and in turning this technology knowledge into reality that actually advances the experience, both internally and for our members and customers.

Shervin Khodabandeh: My takeaway from what you’re saying is hiring the team and keeping the team at the forefront of the art of the possible, and inspiring them, is one part of it. But also, organizations have to take steps to actually bridge these gaps through all these things that you’re talking about, so that there is more collaboration, and cross-functional teaming, and much closer product management, with analytics, with AI, with voice of customer — all of that, so that these ideas are allowed to even incubate and go somewhere. Is that right?

Slawek Kierner: Yeah, hundred percent agree.

Sam Ransbotham: Well, if we’ve gotten a hundred percent agreement from Slawek, I think that’s a great time to end. Thank you for taking the time to talk with us, Slawek. Shervin, let’s do a quick recap.

Shervin Khodabandeh: Sounds good, Sam. Slawek made some very interesting points.

Sam Ransbotham: One of the things that he mentioned a lot was how past experiences led to his current role, and he had so many different past experiences, and yet he found ways to apply them. I thought that was a pretty fascinating point.

Shervin Khodabandeh: Yeah. I really resonated with that. He talked about some archetypal problems — like [in] chemical engineering, the problem of system control and the lag in the system and how that has to be managed. And he likened this to a problem of marketing, and then, more importantly, to the problem of managing the care of millions of patients. Because as they propose different treatments for different members, there will be a lag between what’s working and what’s not working. And the ability to understand what’s working, what’s not working, and what that lag time is: that’s almost a standard chemical engineering, control systems, or electrical engineering problem. And his ability to see these archetypes and transcend discipline, domain, and industry, all the way to health insurance, is really, really important.

Sam Ransbotham: It gave me a little hope that humans will still be around for a bit. It’s interesting as well that in a lot of these examples, the specific industry details were obviously different, and you can’t just blindly apply them from one industry to another. And I think there’s a role, too, [for] creativity and being smart about what fits and smart about what doesn’t fit. That, again, seems very human.

Shervin Khodabandeh: Yeah, completely. And the other thing, building on that, is the importance of, again, building trust, right? Building the trust of the humans in the AI solutions. He talked about synthetic data to do synthetic tests of different treatments, and then he talked about the process of showing clinicians how that synthetic data actually mimics the real data. So I think that’s also very important: again, building trust, and building trust in areas where human judgment really, really matters.

Sam Ransbotham: I liked how pragmatic it was, too. You know that computers are going to have problems, and so you’d much rather find those problems out with generated data. And I thought what was creative about his solution was using AI to generate that data that smelled just like real data, but they could afford to screw up with. I like things like that, where it makes complete sense once he says it, but I would never have thought of it myself.

Shervin Khodabandeh: That’s a great point. Yeah.

That’s all the time we have today, but next time, join us as we talk to Gina Chung from DHL.

Allison Ryder: Thanks for listening to Me, Myself, and AI. If you’re enjoying the show, take a minute to write us a review. If you send us a screenshot, we’ll send you a collection of MIT SMR’s best articles on artificial intelligence, free for a limited time. Send your review screenshot to smrfeedback@mit.edu.
