I don’t know what dialect this is written in, but can anybody translate it to engineer? What type of problem are they trying to solve and how are they going about it? (Is this DRM for AIs?)
If the bank rejects your loan application they will be able, when challenged, to say “you were rejected by this particular model, trained on this particular data which was filtered for bias in this particular way”.

Similarly, the tax authorities will be able to say why they chose to audit you.

The university will be able to say why you were or weren’t admitted.
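The plumbing behind that kind of answer is presumably a provenance record bound to the model at training time. A minimal sketch of what such a record might contain, assuming hash-based manifests and hardware attestation (all field names invented; nothing here is from the actual product):

    # Illustrative only: a provenance record tying a decision to a
    # specific model, training set, and bias filter. Field names invented.
    import datetime
    import hashlib
    import json

    def sha256_hex(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    # Stand-ins for the real artifacts; in practice you'd hash the files.
    model_weights = b"...model bytes..."
    training_data = b"...dataset bytes..."

    manifest = {
        "model_sha256": sha256_hex(model_weights),   # "this particular model"
        "data_sha256": sha256_hex(training_data),    # "this particular data"
        "bias_filter": "demographic-parity-v2",      # "in this particular way"
        "created": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

    # In the real system this record would be hashed and signed inside the
    # hardware (TEE attestation), so the bank can't rewrite history later.
    attestation = sha256_hex(json.dumps(manifest, sort_keys=True).encode())
    print("provenance attestation:", attestation)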
I’m trying to decide if I should be concerned about the safety of general-purpose computing with such technologies sneaking into our compute. Verifying compute workloads is one thing, but I can’t find information on what kind of regulatory compliance controls this addition enables. I assume it is mostly just operation counting and other audit logging discussed in AI safety whitepapers (toy sketch below), but even that feels disturbing to me.

Also, bold claim: silicon fabrication scarcity is artificial and will be remedied shortly after Taiwan is invaded by China and the world suddenly realizes it needs to (and can profit from) acquiring this capability. Regulatory approaches based on hardware factors will probably fail in the face of global competition on compute hardware.
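To unpack “operation counting”: the governance whitepapers generally mean metering how much compute a workload consumed and appending it to an auditable log. A toy version, entirely my own sketch and not anything documented for this product:

    # Toy version of "operation counting": meter the compute a workload
    # used and append it to a log an auditor can read later. Illustrative
    # only; no real driver exposes exactly this hook.
    import json
    import time

    class ComputeMeter:
        def __init__(self, log_path: str):
            self.log_path = log_path
            self.flops = 0

        def record(self, op_name: str, flop_count: int) -> None:
            # Imagine the framework calling this on every kernel launch.
            self.flops += flop_count

        def flush(self, workload_id: str) -> None:
            # Real proposals sign these entries so they're tamper-evident.
            entry = {"workload": workload_id,
                     "total_flops": self.flops,
                     "timestamp": time.time()}
            with open(self.log_path, "a") as f:
                f.write(json.dumps(entry) + "\n")

    meter = ComputeMeter("compute_audit.log")
    meter.record("matmul_4096x4096", 2 * 4096**3)  # ~1.4e11 FLOPs per matmul
    meter.flush("training-run-001")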
Grey on black text, cookie shit in the corner, their stupid menu overlaid over the text, their stupid announcement banner, giant quotes about the company larger than the actual press release. I fucking hate web design in 2024.
>Verifiable Compute represents a significant leap forward in ensuring that AI is explainable, accountable.

This is like saying the speedometer on a car prevents speeding.
Verifiable compute doesn't do much good if the people doing the verifying and securing are making wild profits at the expense of the rest of us. This technology is more about making sure nothing horrible happens inside enterprises than about protecting people from AI, even if "safety" is claimed.
Gearing up to put a hefty price on AGI. You can only run it if you have a very costly certificate, which probably requires detailed security clearances as well.
Yes, it's DRM for AI models. The idea seems to be that approved hardware will only run signed models; see the sketch after the list below.

This doesn't end well. It's censorship on the user's machine. There will have to be multiple versions for different markets:

- MAGA (US red states)

- Woke (US blue states)

- Xi Thought (China)

- Monarchy criticism lockout (Thailand)

- Promotion of the gay lifestyle lockout (Russia)
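Concretely, "only run signed models" presumably means a check like the following before any weights are loaded. This is a minimal sketch assuming an Ed25519 vendor key baked into the device; the key, names, and flow are all my guesses, not anything from the announcement:

    # Hypothetical sketch of "only run signed models". Nothing here is
    # from the announcement; the key and manifest flow are made up.
    import hashlib
    from pathlib import Path

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    # A vendor key provisioned at manufacture (this one is the RFC 8032
    # Ed25519 test vector, used purely as a placeholder).
    APPROVED_VENDOR_KEY = Ed25519PublicKey.from_public_bytes(bytes.fromhex(
        "d75a980182b10ab7d54bfed3c964073a0ee172f3daa62325af021a68f707511a"))

    def model_digest(weights_path: Path) -> bytes:
        # Hash the weights so the signature binds to the exact bytes.
        h = hashlib.sha256()
        with weights_path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.digest()

    def load_model_if_signed(weights_path: Path, signature: bytes) -> bytes:
        # Refuse to load any weights the vendor hasn't signed off on.
        try:
            APPROVED_VENDOR_KEY.verify(signature, model_digest(weights_path))
        except InvalidSignature:
            raise PermissionError("unsigned model: refusing to run")
        return weights_path.read_bytes()  # hand off to the real loader here

Note that verification and lockout are the same operation here: whoever holds the signing key decides what runs.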