I find the healthcare applications of this stuff so interesting.

On the one hand, there are SO many reasons using LLMs to help people make health decisions should be an utterly terrible idea, to the point of immorality:

- They hallucinate

- They can't reliably do mathematical calculations

- They're incredibly good at being convincing, no matter what junk they're outputting

And yet, despite being very aware of these limitations, I've already found myself using them for medical advice (for pets so far, not yet for humans). The advice I got seemed useful, and it helped kick off additional research and useful conversations with veterinary staff.

Plenty of people have very limited access to useful medical advice.

There are also plenty of medical topics that people find embarrassing and would prefer to discuss, at least initially, with a chatbot rather than with their own doctor.

Do the benefits outweigh the risks? As with pretty much every ethical question involving LLMs, there are no obviously correct answers here.
It seems that, time and time again, transformers are the Swiss Army knife of learning systems, and LLMs in particular are proving to be chameleons. In some ways that shouldn't be surprising. Math is often called a universal language, after all, and we seem to agree that it is unreasonably effective at describing reality.
Do you reckon there are pharma people right now wondering how to make LLMs push their drugs?

Take fine-tuning trainers to "conferences", perhaps?

Will they try to make their own?

What a next few years this will be...
I find ChatGPT very helpful for working with programming languages I'm less comfortable using (shell, Python). I know enough to judge whether code in these languages is correct, but producing it from scratch is more difficult, which seems like a sweet spot for carefully using ChatGPT for code.

As a physician, I would not be surprised if the medical use of these tools ends up having similar value.
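To make that concrete, here is a minimal sketch of the kind of small, checkable script I have in mind; the task, file name, and column name are made up for illustration, not taken from any real workflow. The point is that code like this is easy to read and verify even if you wouldn't write it fluently yourself.

```python
# Hypothetical example: a small script one might ask an LLM to draft
# and then review line by line before running.
import csv
import sys
from collections import Counter


def count_values(path: str, column: str) -> Counter:
    """Count how often each value appears in the given CSV column."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        return Counter(row[column] for row in reader)


if __name__ == "__main__":
    # Usage: python count_values.py data.csv status
    counts = count_values(sys.argv[1], sys.argv[2])
    for value, n in counts.most_common():
        print(f"{value}\t{n}")
```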