
To safely deploy generative AI in health care, models must be open source

78 points by thecal over 1 year ago

9 comments

TaylorAlexander over 1 year ago
Recently there has been a trend of calling models with weights and code available "open source" even if the training data is not available. For safe deployment in health care and other safety-critical fields, transparency about the training data and process is vital too, which means we need clear terminology for models with full transparency. Even this article's title suffers from that ambiguity.
pardoned_turkey over 1 year ago
How does open source improve safety if we simply don't have the analytical tools to intuitively reason about LLMs?

You can't use this to prove that the model will always behave correctly (or desirably). At best, you can build test suites to empirically check that it kinda-sorta appears to be doing the right thing most of the time. Which you can just as easily do with a black-box model.

It's not that I'm against openness. I just don't see how you can posit that it gets us close enough to safety.
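
For what it's worth, a minimal sketch of the kind of empirical, black-box test suite described above; model_answer and the cases are hypothetical stand-ins for a real deployment and a real clinical evaluation set, and the test sees only prompts and text, open weights or not:

    # Sketch of black-box behavioral testing: prompts in, text out.
    # `model_answer` is a hypothetical wrapper around whatever LLM is deployed.

    def model_answer(prompt: str) -> str:
        raise NotImplementedError("wrap the deployed model here")

    # Each hypothetical case lists strings the answer must and must not contain.
    CASES = [
        {"prompt": "Is it safe to combine drug A with drug B?",
         "must_include": ["consult"],          # should defer to a clinician
         "must_exclude": ["perfectly safe"]},  # overconfident phrasing
        {"prompt": "What does an elevated troponin level suggest?",
         "must_include": ["heart"],
         "must_exclude": []},
    ]

    def run_suite() -> None:
        failures = 0
        for case in CASES:
            answer = model_answer(case["prompt"])
            ok = (all(s in answer for s in case["must_include"])
                  and not any(s in answer for s in case["must_exclude"]))
            if not ok:
                failures += 1
                print(f"FAIL {case['prompt']!r} -> {answer!r}")
        print(f"{len(CASES) - failures}/{len(CASES)} passed")

Note that nothing here requires access to weights, which is the commenter's point: such a suite runs identically against a proprietary API.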
ribosometronome over 1 year ago
It seems like all of their criticisms can be easily applied to essentially any technology or company that medicine already relies on.

For example:

> In the rush to deploy off-the-shelf proprietary LLMs, however, health-care institutions and other organizations risk ceding the control of medicine to opaque corporate interests. Medical care could rapidly become dependent on LLMs that are difficult to evaluate, and that can be modified or even taken offline without notice should the service be deemed no longer profitable

Even:

> LLMs often generate ... convincing outputs that are false

is already a problem the medical community has to address with existing tests.

Or:

> Another problem specific to proprietary LLMs is that companies' dependency on profits creates an inherent conflict of interest that could inject instability into the provision of medical care.

seemingly applies to essentially the entirety of medical supplies and medications.
KaiserPro over 1 year ago
I mean open source is nice, but that's not actually going to make healthcare safer.

Whatever flavour of AI is used needs to be deterministic, which Llama et al. are not, even if you turn the temperature right down.

As others have pointed out, it's the training set that actually makes a model behave, hence why models are freely given away by large companies.
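
To make the temperature point concrete, a toy sketch with made-up logits: dividing logits by a small temperature sharpens the softmax but still draws a random sample, so repeated runs can disagree; only greedy argmax decoding is deterministic given identical logits, and in practice even that can vary with batching and floating-point reduction order.

    import numpy as np

    def sample_token(logits: np.ndarray, temperature: float,
                     rng: np.random.Generator) -> int:
        if temperature == 0.0:
            return int(np.argmax(logits))    # greedy: same logits, same token
        scaled = logits / temperature        # low T sharpens the distribution
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return int(rng.choice(len(logits), p=probs))  # still a random draw

    rng = np.random.default_rng(0)
    logits = np.array([2.0, 1.9, 0.1])
    print({sample_token(logits, 0.1, rng) for _ in range(100)})  # likely {0, 1}
    print(sample_token(logits, 0.0, rng))                        # always 0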
kordlessagain over 1 year ago
As mentioned in another comment, the problem is that Open Source does not necessarily apply to all aspects of models. Open code allows everyone access to the "source" of an application. It does not mean the information that the code stores, when used, is also open to viewing.

In models, the training data (dataset) is frequently "closed", where it is not open to viewing. That's just the default behavior of publishing models. You don't need the dataset to use the model. The weights or tensors may be "open" in that we can see them, but they are fairly "not worth viewing" if we don't know the nature of the relationships between the tensors.

If we were able to figure out relationships between the tensors, and the dataset was not made open, then there might be a debate on whether or not certain use of that extracted or "transfer" knowledge is allowed.

For a "model" to be fully "open", it must publish the data it was trained on, the code it used to train itself, and its tensors or weights must not be encrypted or disallow establishing relationships in the weights.
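
The "open weights, closed data" situation is easy to see in practice; a sketch using the safetensors library (the file path is a placeholder): you can enumerate every tensor in a published checkpoint, yet the listing says nothing about what data shaped those numbers.

    # Sketch: inspecting published weights tells you shapes, not provenance.
    from safetensors import safe_open

    with safe_open("model.safetensors", framework="pt") as f:  # placeholder path
        for name in f.keys():
            tensor = f.get_tensor(name)
            print(name, tuple(tensor.shape), tensor.dtype)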
verdverm over 1 year ago
It would seem to me that the data is the more important part of the equation, and the health-care providers, being the holders of this data, and also needing to find new revenue streams, want to profit from this.

With federated learning and homomorphic encryption, can we satisfy both parties?
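
For the federated side of that question, a toy sketch of federated averaging (FedAvg) on a linear model: each site fits on its own private data, and only parameter updates leave the building. The three simulated "hospitals" and the model are illustrative; a real deployment would layer secure aggregation or homomorphic encryption on top.

    import numpy as np

    def local_step(weights, X, y, lr=0.1):
        # One gradient step of linear regression on a site's private data.
        grad = 2 * X.T @ (X @ weights - y) / len(y)
        return weights - lr * grad

    def federated_round(global_w, sites):
        # Each site updates locally; only updated weights are shared.
        updates = [local_step(global_w.copy(), X, y) for X, y in sites]
        sizes = np.array([len(y) for _, y in sites], dtype=float)
        # FedAvg: weight each site's update by its sample count.
        return np.average(updates, axis=0, weights=sizes)

    rng = np.random.default_rng(1)
    true_w = np.array([3.0, -2.0])
    sites = []
    for _ in range(3):  # three "hospitals" with private data
        X = rng.normal(size=(50, 2))
        sites.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

    w = np.zeros(2)
    for _ in range(200):
        w = federated_round(w, sites)
    print(w)  # approaches true_w without ever pooling raw data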
andy99 over 1 year ago
I agree that open source (or source-available) is better, in particular the weights and code; the training data is immaterial. But I think a lot of this is pretty naive. The "best" model is the best, and it's unlikely to come from some idealistic consortium. And the data they have is virtually irrelevant, as every company that thinks it has a great trove of data finds out. My recommendation would be to use whatever the leading source-available model is (one of the big Llamas?) and focus on the guardrails needed to make it a helper for medicine. Reinventing the wheel is a bad idea.
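
One simple form such guardrails could take, sketched below; base_model is a hypothetical callable and the deny-patterns are illustrative, not a vetted clinical filter:

    import re

    # Illustrative deny-patterns: overconfident phrasing a medical helper
    # should never emit. A real filter would be far more extensive.
    UNSAFE = [re.compile(p, re.I) for p in (
        r"\bdefinitely (?:safe|harmless)\b",
        r"\bno need to (?:see|consult) a doctor\b",
    )]

    def guarded_answer(base_model, prompt: str) -> str:
        answer = base_model(prompt)
        if any(p.search(answer) for p in UNSAFE):
            return "I can't answer that confidently; please consult a clinician."
        return answer + "\n\n(Not medical advice; verify with a clinician.)"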
dontreact over 1 year ago
To deploy generative AI in healthcare, someone has to pay the salaries of a lot of people to do the work. That means there needs to be a business model.

I am not sure who will take an AI through regulatory procedures if it is open source and there is no way to make money from it.

Open source is a useful tool for research, yes. More of it would be nice.

But I don't understand how or why anyone is going to go through all the hurdles of deploying technology if all of it is open source.

Maybe an open source enthusiast can explain to me how that is supposed to work?
glitchc over 1 year ago
Just like an MRI machine is open source? I'm not sure if the authors have thought any of this through.