The expanding use of generative artificial intelligence (AI) in health care has sparked excitement about its possibilities while raising ethical questions. Healthcare Dive recently examined generative AI’s potential, both positive and negative.

From the article: “Generative AI is showing early success in a number of use cases, including simplifying explanation of benefits notices and writing prior authorization request forms, Peter Lee, corporate vice president of Microsoft Healthcare, said in a keynote panel on Tuesday.”

Other possible uses include analyzing data to help diagnose complex diseases, summarizing patient medical histories, and streamlining medical note-taking: tasks that can give physicians more time for patient visits, improve care, and lower costs.

However, uncertainties about the use of generative AI in health care abound, primarily regarding accuracy and accountability. Models have produced answers that are factually incorrect or even nonsensical, and there is the question of who is responsible when AI gets it wrong.

Bias, too, is a concern. As the article notes, “If an algorithm is trained on biased data, or its results are applied in biased ways, it will reflect and perpetuate those biases.”

The Wall Street Journal and NPR have also published recent articles on the generative AI phenomenon.