The Scoop on Health Tech

Webinar

Featuring

STAT News

Description

STAT News's Casey Ross has covered Amazon's shift into healthcare, algorithmic bias in healthcare, and many other issues in health tech.


Casey’s unique, data-driven reporting focuses on how technology and disruptors will impact the healthcare industry and patient experiences.


Let's think big. What technologies should healthcare leaders be watching, and how will they impact patients seeking access to high-quality care at lower cost? And what is the impact when healthcare and technology leaders fail to factor health equity into the equation?

Casey Ross

Technology Reporter
STAT News

Chris Hemphill

VP, Applied AI & Growth
Actium Health

Ling Hong

Data Scientist
Actium Health


Transcript



Chris Hemphill:
Is that it? Is it loading? Oh, oh, excellent, okay. All right. Hello, everybody. Hello, healthcare. Hello if you're watching on LinkedIn or YouTube at a later time. However you're consuming this, we're just glad that you're here to consume it. It's an extremely important topic, and it's something that I don't think gets talked about enough. I think a lot of producers of algorithms, technologies and things like that find this to be a very difficult subject matter and have a lot of fear around discussing it, but there's an importance here: there are social implications in the things that we do in healthcare, in how we employ algorithms for outreach, and in how we help people identify where they can better their healthcare. So there's no one better to speak to that than the two folks we have today, one internal to SymphonyRM and the other a reporter from STAT News.


Chris Hemphill:
I'll start with Ling Hong, who is a data scientist here, came to us from Carnegie Mellon University, and has done some excellent things to build our data science and clinical pipeline, not just in terms of predictive accuracy on disease states and things like that, but also, as we became more aware of potential racial bias issues, in terms of building data ethics into the data pipeline. But, Ling, with that intro, I just wanted to give you a chance to do a shout-out and a hello.

Ling Hong:
Hey, everyone, I'm Ling. I'm a data scientist at SymphonyRM, and my job is to make sure that every model we create works equitably across all of the populations it serves. At SymphonyRM we don't just care about model performance for the whole population; we also care about minority groups, and we want to make sure that everyone has equal access to healthcare.

Chris Hemphill:
Thank you. Thank you, Ling, and I'm excited to be able to work with you on these products. Casey, Casey Ross, if you've seen headlines trending on LinkedIn or coming out from STAT News, Casey is responsible for a lot of those. He's reported on moves within healthcare by Amazon, Haven, and others. He's reported on the issues and challenges related to bias in algorithms. And it's excellent that he's gone the extra mile in doing this: he's traveled to locations, actually talked to people, and seen the disparities that certain algorithms cause. We just thought it was extremely exciting that he would join us today, so Casey, here's your chance for a shout-out and hello to everybody.

Casey Ross:
Well, hi, Chris and Ling. It's great to talk with you, and thanks so much for having me on. Yeah, I'm STAT's national technology correspondent, so I write about the efforts to apply artificial intelligence to healthcare decision-making, and it's a beat that I've been covering for a few years now. I think it's a really important one. And I think it's a crucial time to be thinking clearly about the issues that arise when we try to make use of data and algorithms to automate decision-making or diagnoses or our understanding of the conditions and medical problems that people face.

Chris Hemphill:
All right, thank you very much. And everybody else out here, Terry included. Terry, thank you for the shout-out. The reason that we're having this conversation with myself, data scientists on our team, and Casey is so that you can get involved in it. I know that it's a difficult subject matter and, for a lot of people, a new one. There might be questions about whether algorithms being employed at your own organizations might be showing signs of bias and what you might be able to do about them. Or there might be personal experiences you've had where there was bias in the healthcare delivery process or in how things were communicated. Feel free to comment on that stuff and share it with us. Think of this as a conversation. We're not all necessarily in the same room, but this isn't a webinar with a bunch of talking heads; we're here to interact and be involved with you in this conversation.

Chris Hemphill:
Where I wanted to get started, though, is with your history in this, Casey, the fact that this has been a beat you've been working for years, and a lot of the stories you've uncovered. Can you talk about why bias in these algorithms and in healthcare technology has been your focus?

Casey Ross:
Yeah. So in 2015 and 2016 at STAT, we were a startup covering the healthcare industry, and we were trying to decide what areas we really wanted to focus on as a publication. This one quickly rose to the forefront. This is a real frontier in medicine and in healthcare. And what we found is that digital health has mushroomed very quickly into a large sub-industry within healthcare that's growing extremely rapidly. Just this week we found out that in the first quarter of 2021, $14.7 billion was invested in companies working within these sectors. That is more than the full-year total in 2020, which set a record at $14.2 billion.

Casey Ross:
So you're seeing very rapid growth in investment in this industry, and that growth in the use of algorithmic products to make decisions is outpacing the ability of regulators and others to fully understand the implications of these products before they're put in place. So you sort of have this circumstance where we really are operating on a frontier where there isn't clear guidance, understanding, or regulation of these products as yet. And so we at STAT think it's particularly important to cover those implications, and the questions that everybody in this industry, I think, is trying to answer right now.

Chris Hemphill:
Excellent, and I love hearing that response: hey, you guys started up in the mid-2010s, but digital healthcare has grown at a rate faster than it can be regulated or analyzed, and there are all kinds of use cases with different needs, different nuances and different specifics. That kind of leads me into another question. As you've done more of this reporting, like the article where you traveled down to North Carolina to speak with and visit people who might have been victimized by algorithms or who didn't receive the outreach they should have, what has been the impact of delivering this kind of coverage? What do you see as the impact it's had within healthcare?

Casey Ross:
I think there's been a huge awakening to this issue over the past couple of years in particular. I think organizations are now going back through and really thinking more carefully about the technical choices they're making and how those choices can lead to bias or to inaccurate decisions about what medical resources are allocated, how they're allocated, and what the impacts are downstream on patients, their care and their health. There's just been, I think, a real turnaround in people's appreciation for that as an issue, and some of the systemic bias that has sort of been washed through generations is now being automated on a huge national scale. And I think a really interesting thing has happened: because machine learning has come into play to the extent that it has, it's forced people to really think carefully about this again. And to say, "Well, there are hundreds of algorithms in use in healthcare that have potentially been perpetuating biased or slanted decision-making for many years."

Casey Ross:
And now that we're at the point of thinking about automating them, I think they're realizing the magnitude of impact that can have on people: a disproportionate impact in some ways, in terms of exacerbating inequities in the delivery of care and in how decisions are made.

Chris Hemphill:
You know, I had some questions coming up with regard to the impact and the different things healthcare leaders could do and stuff like that. But, oh, it seems like Ling has dropped. We'll get her back on whenever she logs back on. Okay, but I had some questions with regard to the overall impact. It's a new subject, and I don't think everybody has had a lot of exposure to the articles out there just yet. So when we're talking about some of the biases that are exacerbating challenges and things like that, do you have an example that you like to go to, especially from your travels in North Carolina? One powerful thing about that article is that it described the history of the town you were in, Ahoskie, and the segregation that had occurred in the past.

Chris Hemphill:
And it kind of left the impression that, if we're using data generated by a society and an economy that functioned in that segregated way, that's inevitably going to lead to bias. So I just wanted to ask you to share some examples or a story around what this bias issue is and why it's important.

Casey Ross:
Yeah. I mean, when I was writing that article and doing the research for it, I was thinking about how to impart this information to people in a way where they're really going to embrace it and understand its impact on human lives. Because we're talking about a very complex mathematical subject that is often hard for people to wrap their heads around. The fundamental problem in that example was that healthcare costs were being used as a proxy to determine who has the highest healthcare need. So if you're a high-cost individual, then you're more likely to get resources through care management programs. What that approach, which is used throughout the insurance industry and by a lot of providers, fails to recognize is that historical imbalances in access to care mean that people of color and people of different socioeconomic status may not have the same cost level.

Casey Ross:
So at the same cost level, or at the same risk score that's produced through the assessment of costs, those people may be far sicker than Caucasian populations that have traditionally had more access to care. The reason I went down to Ahoskie, North Carolina, is that all of the problems, all of the issues that surround these algorithms and are not picked up by the algorithms, are plainly evident on the ground in that community. There is a railroad track that runs down the middle of that town and separates Ward A from Ward B. Ward A is the historically white part of town. Ward A has all of the medical resources; it has the town's only hospital and all the medical offices.

Casey Ross:
Ward B disproportionately has poverty, has security and safety issues, has more tobacco shops and liquor stores, and has people who traditionally have not used care in the same way as people on the other side of town. No algorithm is going to see that. That is not going to be in the data. So by going down there, I was trying to cover that space, to write about that space that algorithms do not see and cannot appreciate.
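
To make the proxy problem Casey describes concrete, here is a minimal, hypothetical sketch in Python. The data, group labels, and column names are invented for illustration; it simply shows how two groups can receive identical spend-based risk scores while carrying very different clinical burdens.

import pandas as pd

# Hypothetical patients: a spend-derived risk score next to an actual measure of
# health burden (count of chronic conditions). All values are invented.
patients = pd.DataFrame({
    "group":        ["A", "A", "A", "B", "B", "B"],
    "risk_score":   [0.8, 0.8, 0.8, 0.8, 0.8, 0.8],  # identical predicted risk, driven by cost
    "n_conditions": [2,   3,   2,   5,   6,   4],     # actual burden differs sharply
})

# At the same spend-based risk score, group B is considerably sicker on average,
# so a cost-as-proxy model would under-allocate care-management resources to group B.
print(patients.groupby("group")["n_conditions"].mean())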

Ling Hong:
Yeah, this is a great point. I think we're seeing the same problem you're describing. And I have to say, as a huge fan of STAT, the influence of your reporting is that I started to notice the bias in my own algorithms. I would say that's what inspired me to develop different evaluation measures to assess our models, to call out bias, and to make joint efforts within the company to adjust them. So I think this is very important, just to let people know that this kind of issue exists. The one thing I'm curious about is that during this whole process, I believe you've talked to some leaders in the healthcare industry. What were their responses to this issue? What were the different kinds of responses? Because I believe different people have different reactions to it. And also, what response were you expecting from them?
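
As one hedged illustration of the kind of subgroup evaluation Ling mentions (the function, data, and group names below are hypothetical, not SymphonyRM's actual pipeline), comparing false-negative rates across demographic groups is a simple way to surface the gap a cost-based label can create.

import numpy as np

def false_negative_rate(y_true, y_pred):
    # Share of truly high-need patients the model failed to flag.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    positives = y_true == 1
    return float(np.mean(y_pred[positives] == 0)) if positives.any() else float("nan")

# Hypothetical outcomes (1 = high need) and model flags, split by group.
groups = {
    "group_a": ([1, 1, 0, 1, 0], [1, 1, 0, 1, 0]),
    "group_b": ([1, 1, 1, 0, 0], [0, 1, 0, 0, 0]),
}

for name, (y_true, y_pred) in groups.items():
    print(name, round(false_negative_rate(y_true, y_pred), 2))

# A large gap between groups is a signal to revisit the label (for example, cost as a
# proxy for need), the features, or the decision threshold before the model is deployed.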

Casey Ross:
Yeah. I mean, it's been a range of reactions. By and large, I think the reaction has been concern: organizations are worried about the potential for their products to do the kinds of harms that were explained in that story, to perpetuate bias. And I don't think any of the organizations I've spoken with are going around out there saying, "I'm proud of the fact that there's been historical bias in healthcare and I don't really care to address it." That's not who I'm seeing. That's not who I'm meeting out there. I'm meeting people who are very well-meaning, who want their products to be used in a way that helps people, that advances equity, that provides better care. But I think there has been a lack of appreciation until this point of the nuances, the subtle sorts of mistakes and misapprehensions that can happen when you try to apply algorithms to data to make decisions.

Casey Ross:
There are just, there are gaps, there's missing information, there are all kinds of difficulties in evaluating those products. I don't think there's a really great evaluation system, to the point that you made in the run-up to your question. I think evaluation really is the right response. There's got to be a coming together of people to agree upon a framework: we will evaluate these products before they are deployed, before they are used. And I think that work is beginning to happen, by yourself and a lot of others who are really focused on this issue now.

Chris Hemphill:
So, one thing that comes up is, now that the issue has been revealed, there are a lot of people talking about it. And I think among the people who know about it, there's a firm grasp that the problem is here. But I'm curious about the leader or the data scientist or the marketer or whoever's involved in engagement; there are all kinds of different people this can touch. What would you like to see people and health systems start doing once they learn about the issues you're bringing to light? What course of action should they start considering after that?

Casey Ross:
Well, when I talk to people in the industry and raise that question, the response I get a lot, or what a lot of others are telling me in terms of what ought to be done here, is that there needs to be transparency. Those issues, when they are discovered, need to be disclosed first off. And there needs to be a process by which that disclosure can happen and there can be transparency into it and into the process by which we fix those issues. This can't happen in a black box. There's got to be sunlight on those issues. And there needs to not be shaming of an organization for having these problems, because if we do that, then it will force people into a black box and prevent the adequate solutions from being put into place that can ensure these issues get addressed on the systemic level that they need to be. So I think there needs to be space for people to disclose and then to address it, to have a community discussion about it and make the changes that are necessary.

Chris Hemphill:
You know, that's a really powerful point about what happens if we create an environment of shame. A lot of your articles focus on the fact that people aren't intentionally embedding this bias. It's not the Ku Klux Klan that is developing algorithms for healthcare; I don't think that's typically the case. That's why I think this has been a conversation technology companies have been afraid to have: by revealing that they have an issue, they potentially become the scapegoat for everything else going on in the world. It creates a kind of recency bias with everybody else: "Oh, well, these people talked about it, therefore that's where all the problems are." And that's not an environment where a lot of people can thrive in openly discussing these issues.

Casey Ross:
Yeah, absolutely. I mean, every company has a D&I committee, an internal ethics committee; all the big technology companies do. But it doesn't help if it's all behind closed doors. It doesn't help anybody. Who's that helping? It maybe helps you consider the issues, but if those issues aren't brought out into the public square and dealt with and talked about, then we only learn about them through a mistake that gets made, a horrible scandal, a giant headline. I just don't think that's going to produce the continuity of conversation that needs to happen in order to address these issues on a wider social scale. There needs to be an open, clear discussion, and solutions devised and implemented on a larger level. And that can't happen if it's just an internal conversation.

Chris Hemphill:
That's why we're live.

Ling Hong:
Yeah, I think it's interesting that you mentioned almost every company has a D&I committee, but we're still seeing these kinds of things happen. So I'm guessing, is it because there's a more overarching business goal that drives people toward revenue or profits? And does that create a conflict with the goal the D&I committee is trying to achieve?

Casey Ross:
I think that's exactly right, yes. I think the disclosure of those issues that bubble up within companies conflicts with their mission as a company to make a profit and to serve broader business goals. And I don't think you can blame companies for wanting to make a profit and for being careful about the information they disclose; it might get in the way, and they have fiduciary responsibilities and shareholders to think about. It's a very complex environment in which to think about disclosure of these issues. Which is why I think there needs to be a convener. There needs to be someone outside, or a group of organizations, bringing people together to have that discussion, to talk about these issues. There's got to be a way to host that discussion in a neutral, open format that allows for these things to be brought out into the open and addressed in the open.

Ling Hong:
Also, I think, thinking about this issue on a deeper level, if we take the algorithm we just discussed as an example, we all know that it's using healthcare spending as a proxy to predict people's health needs. So as a data scientist, what I keep asking myself is, "Why did people choose this approach to build those models?" And in my opinion, what it [inaudible] is kind of a fee-for-service mindset: I want to bring more revenue to my organization, so that's why healthcare spending was chosen as the predictor at the very beginning. So I'm not sure if you've talked to people within those organizations and asked them about their thoughts?

Casey Ross:
Yeah. I mean, I've talked to a lot of health plans and insurers over the months of reporting on this. And you can understand why those organizations are focused on cost as a measure of need and use it to make decisions about resource allocation, because that's what they do. They have to have actuarial predictions about the use of resources within their organizations; otherwise they cannot function. If you can't assess the risk of your population, then you can't adequately plan or set premiums. You can't make the decisions that are core to the operation of those companies. I think one of the things that maybe those organizations have failed to appreciate is that when you use that data to make resource allocation decisions, there's this gap that emerges between the data and the decision you're making, where bias creeps in, in ways that are subtle and hard to detect, but have no less of an impact on the people who are the focus of those decisions. And I do think there's been a big awakening to that in the industry and a lot of real, serious efforts to take it on.

Ling Hong:
Yeah.

Chris Hemphill:
Thinking about that awakening you're talking about, Gartner has identified a category that focuses on this. We've talked about racial bias extensively today, and of course there are other types of bias that should be addressed through data, and other things that can't be because of a lack of data around them. But the whole concept is, with this proliferation of machine learning algorithms, we are at a crossroads where they can either perpetuate bias on a much grander scale, replicating those economies and that segregation of the past in a data-driven form, or, as people intervene and check the assumptions of the algorithms and how they're labeled, they can mitigate or reverse a lot of those trends.

Chris Hemphill:
Taking the path of mitigating those trends is called responsible AI, or at least that's what Gartner calls it. So we have things like the University of Chicago's Center for Applied AI releasing its Algorithmic Bias Playbook to help healthcare leaders identify ways to mitigate that bias. I'm just curious about the many other directions this can go, because there are so many use cases, so many different ways we can employ algorithms and things like that. So in your conversations, have you been able to get a sense of the types of regulations or government actions we might expect within healthcare over the next few years?

Casey Ross:
Yeah, I think there are a lot of big changes. Over the last couple of years there's been a reckoning, the awakening that we talked about, and now people are getting serious about putting solutions in place. I think a lot of people initially were looking to the FDA to come in and be the regulator here, but there's been a slow recognition that the FDA is oversubscribed and not able to keep pace with the extent of innovation that's going on. So now you're seeing organizations privately take on these issues. I'm starting to hear about large health systems and groups of insurers coming together to set frameworks and standards for the evaluation of artificial intelligence tools. We're hearing about things like a nutrition label format, for example, for an AI algorithm that would disclose not just its intended use and its intended user, but how it was trained, how it performed within various subgroups of individuals and patients, and what might be the contraindications to its use in the context of healthcare and clinical decision-making or operational resource allocation.

Casey Ross:
So those kinds of solutions are now being discussed, and there are, I think, some pretty advanced efforts to put in place a different way of certifying the value and the fairness of these algorithms before they get deployed and used widely.
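
The "nutrition label" Casey describes could take the form of a simple structured disclosure. A hedged sketch follows; the field names and numbers are placeholders for illustration, not any published standard or certification schema.

# Illustrative "nutrition label" for a healthcare algorithm. Every field and value
# here is a placeholder, loosely following the model-card idea.
model_card = {
    "intended_use": "Prioritize outreach for care-management programs",
    "intended_user": "Population health teams, not bedside clinicians",
    "training_data": "Claims and EHR records, 2015-2019, one regional health system",
    "label_definition": "Clinically assessed need, not cost as a proxy for need",
    "subgroup_performance": {
        "overall_auc": 0.81,
        "by_race": {"white": 0.82, "black": 0.74},
        "by_payer": {"commercial": 0.83, "medicaid": 0.73},
    },
    "contraindications": [
        "Not validated for pediatric populations",
        "Under-represents patients with little historical utilization",
    ],
}

for field, value in model_card.items():
    print(f"{field}: {value}")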

Ling Hong:
Yeah, I agree. And on the government point, as we just discussed, if we're relying only on the companies themselves, they might just be chasing after money, so it's very important for us to have some regulations in place. So far we have talked about different players in the healthcare industry, the healthcare professionals and healthcare organizations, but we are missing one huge part here, which is patients. What do you think we, as healthcare professionals or healthcare organizations, can do to actually inform and educate patients, to make them more aware of this kind of issue happening in healthcare?

Casey Ross:
Yeah, I really don't think there's much awareness among the patient population about the extent to which algorithms are being used, or the extent to which their data is being used in [inaudible] the creation of those algorithms. So I think disclosure of the use of AI algorithms in the course of their care is an important place to start. Also, I've heard some thinking that perhaps an opt-out ought to be put in place when patient data is used for the development of algorithms, so that you're going to patients and saying, "Your data is going to be used to build such and such a product, and if you don't want your data to be used for that framework, for that study, then please tell us so that we won't include it." Those kinds of things might be difficult and expensive to scale, but I think there's got to be consent, disclosure, and just clearer awareness and communication around how data is being used to make decisions, both at the point of care and in the business decisions that are happening around the use of data.

Ling Hong:
Yeah, I was just going to say that I totally agree. I have been feeling that healthcare is a slow-moving industry, and we really need to learn from other industries and their best practices. For example, in the finance industry, I know that when you're going to invest in something, the prospectus of that investment product is totally transparent to you. And other industries have very complete privacy statements. If we can have the same thing for patients, I think things will move much faster.

Casey Ross:
Yeah, right now it feels to me like, when you're getting care, you sort of sign on the dotted line and say, "Okay, I agree to the disclosures. And yes, I agree that you may use my data for some unknown purpose in the future to support medical research." And I think most patients don't have a problem with that, but I also think most patients don't have an appreciation for the notion that data submitted by them or about their care 10 years ago is now being used in the analysis behind a new AI product, which is being employed to help diagnose that same condition in a patient in 2021. I don't see that connection being made, or that disclosure really happening, so that people really appreciate the use of their data in this way.

Chris Hemphill:
And even transparency there would be... transparency looks like one step, but the other step is protection. Transparency would be a powerful thing in enabling people to understand, if they're submitting this data or their data is being used, how they might get a return on that data in terms of more accurate treatment, diagnosis, or communication down the line. So transparency does open the door to enhancing that relationship. At the same time, though, that should come with protections around how that data might be sold to or used in other industries, because we know healthcare data is extremely valuable. Now, it is unfortunate that we are over our 30-minute mark, but it's just amazing the number of different things we've covered today: the technical fundamentals of how bias even enters the system in the first place, and what the impact has been on people who didn't receive outreach because of certain algorithms and things like that.

Chris Hemphill:
And just to round it out, there's ongoing research here. It's not a "solved problem," as they would say in technology, so there's still a lot to learn about these approaches. What do you recommend for healthcare leaders to make sure they're properly informed and making the right decisions regarding this?

Casey Ross:
Yeah, I think really paying close attention to the body of research that's being done right now, and to the disclosure of some of the problems that are bubbling up from the deployment of these algorithms, is really important. And talking to people within their organizations, and at the organizations making those discoveries, so that they can be informed and think carefully about how they're using data, how they need to change it, and how they need to test and evaluate their products. I mean, one of the things that's become clear to me is that there's really no way right now to clearly and effectively evaluate the usefulness of any one algorithm used in healthcare. You can take a look at the AUC, right? You get a mathematical measure of accuracy. It means almost nothing when it comes to implementing that algorithm in care, because there's implementation that has to happen.

Casey Ross:
There are workflows that might be affected by it. There's a whole field of inquiry that needs to mature around how to assess the value of a product before it's put into place, because that assessment is just not happening now. And so all these algorithms are floating around in healthcare, these machine learning models are complicated and hard for people to understand, and nobody knows until they plug them in whether or not they work. And we may wait five or 10 years before we suddenly find that this has been systematically biasing decision-making, that it's actually [inaudible] care, not helping it. And I think we can avoid that if we start to think clearly about it now, instead of five or 10 years from now.
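
Casey's point about the AUC can be shown with a short, hedged sketch: the data here is simulated and the numbers are arbitrary, but it illustrates how a respectable overall AUC can hide a subgroup where the model barely beats chance, before any question of workflow fit even arises.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def simulate(n, signal):
    # Simulated outcomes and model scores; "signal" controls how informative the score is.
    y = rng.integers(0, 2, size=n)
    scores = y * signal + rng.normal(0, 1, size=n)
    return y, scores

y_a, s_a = simulate(500, signal=2.0)   # the score works well for group A
y_b, s_b = simulate(500, signal=0.3)   # and much less well for group B

print("overall AUC:", round(roc_auc_score(np.r_[y_a, y_b], np.r_[s_a, s_b]), 2))
print("group A AUC:", round(roc_auc_score(y_a, s_a), 2))
print("group B AUC:", round(roc_auc_score(y_b, s_b), 2))
# The overall number can look acceptable while one group's performance is close to 0.5,
# which is exactly the kind of gap a pre-deployment evaluation needs to catch.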

Ling Hong:
Yeah, great. So as for what healthcare leaders should do, I think they should subscribe to STAT News.

Casey Ross:
It’s very kind of you to say. I mean, I hope I’m contributing to the discussion. I really enjoy this reporting and this discussion. I think it’s so important and I just feel honored to be part of it, to be honest.

Chris Hemphill:
Well hey, we're honored that you took part in it, and it plays into the conversation we had a little bit earlier about the need for an open discussion on this without an environment of shame. We kind of put ourselves out on a ledge here by saying, "Hey, we looked at our own algorithms, identified bias, and are taking steps and continuing to learn about new ways to address these problems and to address future types of bias we think might be in the system." So yeah, I'd love to continue having open conversations about this, and hopefully that can get the product managers at companies X and Y comfortable talking about this kind of stuff, looking at ways to address it, and being transparent about the way they're focusing on it. With that, Casey, I know there's a reason you came to talk with us today, and we appreciate that. We consider this kind of the final thought here.

Chris Hemphill:
And maybe there was just something that you wanted people to come away with, one thing that you'd like people to be able to take away and reflect on over the weekend, or in their duties next week. What would you say is the final thought you'd like to leave with the audience today?

Casey Ross:
I guess I would say that machine learning and the use of artificial intelligence in healthcare can go one of two ways. It's sort of an empty vessel, really, in the sense that it can be used to counteract bias and to really, greatly improve medical care and decision-making: make care more accessible, make it less biased, make it so that people who need resources get them when they really need them. That really can happen, and it is beginning to happen with some of the products being deployed. The alternative is that the biases that have existed over the last many decades, and the structural racism that exists within healthcare now, can be automated and perpetuated on a national scale if we're not careful about the evaluation and implementation of these products. So I think it all depends on the human intention that's poured into these products.

Casey Ross:
If the human intention that is poured into AI is to do the right things, to carefully assess these products, and to think not necessarily about the bottom line first but about the care of people first, then I think, following that north star, we'll really find that artificial intelligence creates societal good and not harm. And now is the time, not later, to think about that carefully.

Chris Hemphill:
And, Ling, your final thoughts as well?

Ling Hong:
Well, my final thought, it might sound weird, but as a data scientist I want people to be aware of data science and to question data science.

Chris Hemphill:
I love it. I love it. We both work on data science products in AI (I have AI in my job title), but ultimately I want people to be skeptical, ask questions, and identify where there may be challenges with some of these approaches. I look at some of the work we see from other industries and some of the work that we do, and I think, "Well, would this have wrongly flagged my grandmother for something?" That's the perspective I bring to the table, especially being a minority. So I'm hoping that people coming away from this at least have an awareness of what these challenges are, and some good sources, such as STAT News, to go and learn more about them. And one thing I wanted to emphasize is that fork in the road: if we don't do anything, if we just let things continue as they are without asking the right questions about our algorithmic approaches, then we're going to perpetuate the bias that's existed for hundreds of years.

Chris Hemphill:
But if we start looking at mitigation strategies and where we can start reversing these trends, then machine learning, AI, these approaches do represent a massive opportunity for social justice. With that, we are way over time at this point, but I just wanted to let you know about some of the things we have coming up. On the 22nd we have the University of Chicago Booth School of Business Center for Applied AI along with ideas42, so that's applied artificial intelligence and behavioral science, and we'll be talking a little bit more about some of the work they've been doing to address racial bias and other types of bias within algorithms.

Chris Hemphill:
In addition to that, later on we have a conversation with Marc Probst, the former chief information officer at Intermountain Healthcare, about how to break down data silos and data structures and really get the organization involved, in a team-oriented way, in making data-driven decisions. And we also have folks from Henry Ford Health System and other places to talk about some of the exciting things going on within healthcare data strategy. With that, thank you very much, and I hope everybody has a great weekend.

Ling Hong:
Thanks.

Casey Ross:
Thanks a lot.
