There is no excuse, after the efficiency gains of AI and the reduction in human labour costs, to keep medical costs so high. We should eliminate the fifteen-minute doctor-patient appointment for drug matching; it is too costly. Replace it with AI that can prescribe or refer to a specialist.
The deep problem with any of this is transparency. Open source code is an absolute necessity, if it is even possible. Not even a mention of this is made. Without it, and without real and serious commitment, all of this becomes corruptible hand-waving and promises veiling what will, in all likelihood, inevitably become a black-box sausage machine.
Outstanding breakdown. The OneHHS AI Commons concept is particularly clever because it sidesteps the perennial problem of federal agencies building siloed, incompatible systems that can never talk to each other. Requiring open weights fundamentally changes the accountability game. When model weights are transparent, epidemiologists can no longer hide behind statistical significance while ignoring generalizability failures. The insistence on real-world validation rather than theoretical accuracy could finally force healthcare ML out of the lab-artifact trap.
Can you clarify what kind of "AI" is being implemented by HHS? I believe that only Machine Learning (ML), which is the use of digital networks to identify patterns in data, will be implemented. Large Language Models (LLMs), which are trained on natural language and are used to imitate human speech, will not be implemented in this program. Is that correct?
This is encouraging.
All hail the AI GOD. No humans necessary.
Skepticism is not only warranted, but mandatory.