Elon Musk Asked People to Upload Their Health Data. X Users Obliged.

Grok, the artificial intelligence chatbot on X, has been receiving X-rays, MRIs, CT scans, and other medical images from users in recent weeks, along with requests for a diagnosis. The reason: X’s owner, Elon Musk, suggested it.

In a post, Musk said, “This is still early stage, but it is already quite accurate and will become extremely good.” The hope is that if enough people feed the AI their scans, it will eventually learn to interpret them accurately. Patients could then use Grok for a second opinion, or get results faster without waiting for a portal message.

Several users have shared Grok’s misses, such as a broken clavicle that the chatbot mistook for a dislocated shoulder. Others gave it high marks: One user posted a brain scan with the comment, “Had it check out my brain tumor, not bad at all.” Some physicians have even joined in, curious to see whether a chatbot could validate their own findings.

People can also upload medical images to Google’s Gemini and OpenAI’s ChatGPT, though neither company has made a comparable public appeal.

The decision to share sensitive information, such as the results of a colonoscopy, with an AI chatbot worries some medical privacy experts.

Bradley Malin, a professor of biomedical informatics at Vanderbilt University who has researched machine learning in healthcare, stated, “This is very personal information, and you don’t exactly know what Grok is going to do with it.”
The Possible Repercussions of Sharing Health Data

The federal law known as the Health Insurance Portability and Accountability Act, or HIPAA, prevents the personal health information you share with your doctors or on a patient portal from being released without your consent. But it applies only to certain entities, such as doctors’ offices, hospitals, health insurers, and some of the companies they work with.

In other words, HIPAA does not cover what you post on social media or elsewhere. Telling your lawyer about a crime is different from telling your dog walker: the former is protected by attorney-client privilege, while the latter is free to tell the whole neighborhood.

By contrast, when tech companies partner with hospitals to obtain patient data, there are specific agreements governing how that data is stored, shared, and used, Dr. Malin said.

Posting personal information to Grok, by contrast, is more like, “Wheee! Let’s put this information out there and hope the company will do what I ask,” Dr. Malin said.

X did not respond to a request for comment. The company’s privacy policy states that it will share user data with “related companies” but will not sell it to third parties. (The policy also says that X does not aim to collect sensitive personal information, including health data, even though Musk invited people to submit medical images.)

Health information submitted to Grok may come with very clear restrictions that the company simply hasn’t made public, said Matthew McCoy, an assistant professor of medical ethics and health policy at the University of Pennsylvania. “But would I feel at ease providing health data as an individual user? Not at all.”

It’s crucial to keep in mind that pieces of your online presence, like the books you buy or the time you spend on a website, are already shared and sold. Together, those pieces form a picture of you that companies can use for a range of purposes, including targeted advertising.

Now imagine a PET scan showing early signs of Alzheimer’s disease becoming part of that online record, where it might be seen by prospective employers, insurance providers, or even a homeowners’ association.

Laws such as the Genetic Information Nondiscrimination Act and the Americans with Disabilities Act can protect against discrimination based on certain health conditions, although they include exceptions for some entities, such as life insurance and long-term care insurance providers. Experts also pointed out that other forms of discrimination based on health still occur, despite being illegal.

The Potential for Inaccurate Results

For people who are just experimenting with the tool, imperfect answers may be acceptable. But getting inaccurate health information can lead to tests or other expensive care you don’t need, said Suchi Saria, director of the machine learning and health care lab at Johns Hopkins University.

Training an AI model to produce accurate conclusions about a person’s health requires high-quality, diverse data, along with deep knowledge of medicine, technology, product design, and other fields, said Dr. Saria, who founded Bayesian Health, a company that builds AI tools for health care settings. Anything less, she said, is akin to a hobbyist chemist mixing compounds in the kitchen sink.

Still, AI has real potential to improve patient outcomes and experiences in health care. AI programs can already scan mammograms and evaluate patient records to identify candidates for clinical trials.

And some curious people may feel comfortable sharing their data to further that goal, even knowing the privacy implications, a practice Dr. Malin calls “information altruism.” “If you firmly feel that the information should be public, go ahead, even if you have no safeguards,” he said. “But buyer beware.”
