Use of ChatGPT, MedPaLM, and Other AI Chatbots in Healthcare: Pros and Cons
LLM-chatbots developed today do not meet key principles for AI in healthcare

LLM-chatbots do not meet key principles for AI in healthcare. Experts warn that AI chatbots are likely to provide harmful medical responses.

OpenAI’s ChatGPT, Google’s MedPaLM, and Meta’s LLaMA are large language models (LLMs) with many compelling use cases. These chatbots also hold considerable medical potential, but their unregulated use in healthcare can be dangerous, and how to regulate LLMs in healthcare has become one of the most pressing global issues today. Let’s delve into the potential benefits and risks of using AI chatbots as medical devices.

Large language models can generate highly convincing, human-like responses and engage in interactive conversations, but they often produce incorrect or inappropriate statements. Wrong answers to medical questions can have dangerous consequences, and this is the biggest fear experts have about using AI chatbots as medical devices.

Prof. Stephen Gilbert, Professor for Medical Device Regulatory Science at the Else Kröner Fresenius Center for Digital Health at Technische Universität Dresden (TU Dresden), is not in favour of using current LLM-chatbots in healthcare.

Writing in an article, Prof. Gilbert stated that these chatbots are unsafe tools and stressed the need to develop new frameworks that ensure patient safety.

The dangers of using AI chatbots in healthcare

Do you research your symptoms on the internet before seeking medical advice? You’re not alone. Today, search engines play a key role in people’s decision-making process.

LLM-chatbots are known for their remarkable conversational skills and highly convincing responses, and experts fear that integrating LLM-chatbots into search engines may increase users’ confidence in, and dependency on, the information a chatbot provides.

In the article, Prof. Gilbert noted that LLMs can provide extremely dangerous information in response to medical questions.

The article further mentioned that chat-interfaced LLMs had already been used unethically in ‘experiments’ on patients without consent, and it highlighted the need for regulatory control of medical LLM use.

How chatbots could find application in healthcare

According to Prof. Gilbert, LLM-chatbots developed today do not meet key principles for AI in healthcare, such as bias control, explainability, systems of oversight, validation and transparency.

The article also discussed how developers can design LLM-based tools that could be approved as medical devices, and how new frameworks that preserve patient safety could be created.

For medical use, chatbots’ accuracy must be improved, and their safety and clinical efficacy must be demonstrated and approved by regulators, Prof. Gilbert added.
