
Let's talk about AI and end-to-end encryption

269 points by chmaynard 4 months ago

27 comments

klik99 4 months ago
> You might even convince yourself that these questions are "privacy preserving," since no human police officer would ever rummage through your papers, and law enforcement would only learn the answer if you were (probably) doing something illegal.

Something I've started to see happen but never mentioned is the effect automated detection has on systems: as detection becomes more automated (previously hand-written algorithms, now large AI models), there's less cash available for individual case workers and more trust at the managerial level in automatic detection. This leads to false positives turning into major frustrations, since it's hard to get in touch with a person to resolve the issue. When dealing with businesses it's frustrating, but as these systems get used more in law enforcement, it could be life-ruining.

For instance, I got flagged for illegal reviews on Amazon years ago and spent months trying to make my case to a human. Every year or so I try to raise the issue again so I can leave reviews, but it gets nowhere. Imagine this happening for a serious criminal issue; with the years-long backlog in some courts, this could ruin someone's life.

More automatic detection can work (and honestly, it's inevitable), but it has to acknowledge that false positives will happen and allocate enough people to resolve those issues. As it stands right now, these detection systems get built and human case workers immediately get laid off. There's this assumption that detection systems REPLACE humans, but they should augment and focus human case workers so you can do more with less; the human aspect needs to be included in the budgeting.

But the incentives aren't there, and the people making the decisions aren't the ones working the actual cases, so they aren't confronted with the problem. For them, the question is why save $1m when you could save $2m? With large AI models making it easier and more effective to build automated detection, I expect this problem to get significantly worse over the next few years.
blueblimp 4 months ago
> Yet this approach is obviously much better than what's being done at companies like OpenAI, where the data is processed by servers that employees (presumably) can log into and access.

No need for presumption here: OpenAI is quite transparent about the fact that they retain data for 30 days and have employees and third-party contractors look at it.

https://platform.openai.com/docs/models/how-we-use-your-data

> To help identify abuse, API data may be retained for up to 30 days, after which it will be deleted (unless otherwise required by law).

https://openai.com/enterprise-privacy/

> Our access to API business data stored on our systems is limited to (1) authorized employees that require access for engineering support, investigating potential platform abuse, and legal compliance and (2) specialized third-party contractors who are bound by confidentiality and security obligations, solely to review for abuse and misuse.
ajb 4 months ago
The real threat here is going to be when AI expands from being applied to accelerate the work of individuals to being applied to the control of organisations. And it will be tempting to do that. We all know the limitations of managers, management hierarchies, metrics, OKRs, etc. It's easy to imagine a CEO deciding that all the communications between their employees should just be fed into an AI that they can query. (Ironically, that would be easier to enforce if everyone was remote.) It's quite possible that it would enable more effective organisations, as the CEO and upper-level management can have a better idea of what is really happening. But it will reduce the already tenuous belief of the powerful that their ordinary staff are real human beings. And it will inevitably leak out from private organisations, as the executive class see no reason why they shouldn't have the same tools when running the country as when running a corporation.

Advocates of mass surveillance like to point out that no human now needs to listen to your calls. But the real danger was never the guy in a drab suit transcribing your conversations from reel-to-reel tape. It was always the chief who could call for the dossier on anyone they were finding inconvenient, and have it looked at with an eye to making you never inconvenience them again.

The full consequences of mass surveillance have not played out simply because no one had the means to process that much ad-hoc unstructured data. Now they do.
rglover 4 months ago
> We are about to face many hard questions about these systems, including some difficult questions about whether they will actually be working for us at all.

*And how.* I'd lean towards no. Where we're headed feels like XKEYSCORE on steroids. I'd love to take the positive, optimistic bent on this, but when you look at where we've been, combined with the behavior of the people in charge of these systems (to be clear, not the researchers or engineers, but the c-suite), hope for a neutral, privacy-first future seems limited.
ozgune 4 months ago
> Apple even says it will publish its software images (though unfortunately not the source code) so that security researchers can check them over for bugs.

I think Apple recently changed their stance on this. Now, they say that "source code for certain security-critical PCC components are available under a limited-use license." Of course, would have loved it if the whole thing was open source. ;)

https://github.com/apple/security-pcc/

> The goal of this system is to make it hard for both attackers and Apple employees to exfiltrate data from these devices.

I think Apple is claiming more than that. They are saying 1/ they don't keep any user data (data only gets processed during inference), 2/ no privileged runtime access, so their support engineers can't see user data, and 3/ they make binaries and parts of the source code available to security researchers to validate 1/ and 2/.

You can find Apple PCC's five requirements here: https://security.apple.com/documentation/private-cloud-compute/corerequirements

Note: Not affiliated with Apple. We read through the PCC security guide to see what an equivalent solution would look like in open source. If anyone is interested in this topic, please hit me up at ozgun @ ubicloud . com.
Animats 4 months ago
> Who does your AI agent actually work for?

Yes. I made that point a few weeks ago. The legal concept of principal and agent applies.

Running all content through an AI in the cloud to check for crimethink[1] is becoming a reality. Currently proposed:

- "Child Sexual Abuse Material", which is a growing category that now includes AI-generated images in the US and may soon extend to Japanese animation.

- Threats against important individuals. This may be extended to include what used to be considered political speech in the US.

- Threats against the government. Already illegal in many countries. Bear in mind that Trump likes to accuse people of "treason" for things other than making war against the United States.

- "Grooming" of minors, which is vague enough to cover most interactions.

- Discussing drugs, sex, guns, gay activity, etc. Variously prohibited in some countries.

- Organizing protests or labor unions. Prohibited in China and already searched for.

Note that talking around the issue or using jargon won't evade censorship. LLMs can deal with that. Run some ebonics or leetspeak through an LLM and ask it to translate it to standard English. Translation will succeed. The LLM has probably seen more of that dialect than most people.

*"If you want a vision of the future, imagine a boot stepping on a face, forever"* - Orwell

[1] https://www.orwell.org/dictionary/
nashashmi 4 months ago
The most depressing realization in all of this is that the vast treasure trove of data we used to keep in the cloud, thinking it couldn't be scanned even for criminal activity, has now become a vector through which thought police can come down on us for simple ideas of dissent.
walrus01 4 months ago
It's a *good thing* that encrypted data at rest on your local device is inaccessible to cloud-based "AI" tools. The problem is that your average person will blithely click "yes/accept/proceed/continue/I consent" on pop-up dialogs in a GUI and agree to just about any Terms of Service, including decrypting your data before it's sent to some "cloud" based service.

I see "AI" tools being used even more in the future to permanently tie people to monthly recurring billing services for things like iCloud, Microsoft's personal grade of Office 365, Google Workspace, etc. You'll pay $15 a month forever, and the amount of your data and your dependency on the cloud-based provider will mean that you have no viable path to ever stop paying it without significant disruption to your life.
flossposse 4 months ago
Green (the author) makes an important point:

> a technical guarantee is different from a user promise. [...] End-to-end encrypted messaging systems are intended to deliver data securely. They don't dictate what happens to it next.

Then Green seems to immediately forget the point they just made and proceeds to talk about PCC as if it were something other than just another technical guarantee. PCC only helps to increase confidence that the software running on the server is the software *Apple* intended to be there. It doesn't give me any guarantees about where *else* my data might be transferred from there, or whether Apple will only use it for purposes I'm okay with. PCC makes Apple less vulnerable to hacks, but doesn't make them any more transparent or accountable. In fact, to the extent that some hackers hack for pro-social purposes like exposing corporate abuse, increased security also serves as a better shield *against* accountability. Of course, I'm not suggesting that we should do away with security to achieve transparency. I am, however, suggesting that transparency, more so than security, is the major unaddressed problem here. I'd even go so far as to say that the woeful state of security is enabled in no small part *by* the lack of transparency. If we want AI to serve society, then we must reverse the extreme information imbalance we currently inhabit, wherein every detail of each person's life is exposed to the service provider, but the service provider is a complete black box to the user. You want good corporate actors? Don't let them operate invisibly. You want ethical tech? Don't let it operate invisibly.
bee_rider 4 months ago
The author helpfully emphasized the interesting question at the end:

> This future worries me because it doesn't really matter what technical choices we make around privacy. It does not matter if your model is running locally, or if it uses trusted cloud hardware — once a sufficiently-powerful general-purpose agent has been deployed on your phone, the only question that remains is who is given access to talk to it. Will it be only you? Or will we prioritize the government's interest in monitoring its citizens over various fuddy-duddy notions of individual privacy.

I do think there are interesting policy questions there. I mean, it could hypothetically be mandated that the government must be given access to the agent (in the sense that we and these companies exist in jurisdictions that can pass arbitrary laws; let's skip the boring and locale-specific discussion of whether you think your local government would pass such a law).

But, on a technical level, it seems like it ought to be possible to run an agent locally, on a system with full disk encryption, and not allow anyone who doesn't have access to the system to talk with it, right? So on a technical level I don't see how this is any different from where we were previously. I mean, you could also run a bunch of regexes from the '80s to find out whether somebody has, whatever, communist pamphlets on their computer.

There's always been a question of whether the government should be able to demand access to your computer. I guess it is good to keep in mind that if they are demanding access to an AI agent that ran on your computer, they are basically asking for a lossy record of your entire hard drive.
fragmede 4 months ago
The article hinges on a bad assertion:

> Apple can't rely on every device possessing enough power to perform inference locally. This means inference will be outsourced to a remote cloud machine.

If you go look at Apple's site (https://www.apple.com/apple-intelligence/) and scroll down, you get the list of compatible devices:

- iPhone 16 (A18)
- iPhone 16 Plus (A18)
- iPhone 16 Pro Max (A18 Pro)
- iPhone 16 Pro (A18 Pro)
- iPhone 15 Pro Max (A17 Pro)
- iPhone 15 Pro (A17 Pro)
- iPad Pro (M1 and later)
- iPad Air (M1 and later)
- iPad mini (A17 Pro)
- MacBook Air (M1 and later)
- MacBook Pro (M1 and later)
- iMac (M1 and later)
- Mac mini (M1 and later)
- Mac Studio (M1 Max and later)
- Mac Pro (M2 Ultra)

If you don't have one of those devices, Apple did the obvious thing and disabled the features on devices that don't have the hardware to run them.

While Apple has this whole private server architecture, they're not sending iMessages off-device for summarization; that's happening on device.
EGreg 4 months ago
I heard that homomorphic encryption can actually preserve all the operations in neural networks, since they are differentiable. Is this true? What is the slowdown in practice?
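[Editor's aside, as an illustration of the mechanics behind this question: additively homomorphic schemes such as Paillier let a server apply the *linear* parts of a network (matrix multiplies, convolutions, additions) to ciphertexts it cannot read, while nonlinear activations like ReLU or softmax are the hard part and typically have to be approximated by polynomials under heavier schemes such as CKKS; that approximation and the ciphertext arithmetic are where the several-orders-of-magnitude practical slowdown comes from. The sketch below is a toy, assuming tiny fixed demo primes and integer weights; it shows only the linear half and is nothing like a production FHE library.]

```python
# Illustrative toy only: an additively homomorphic (Paillier-style) scheme,
# used to show why *linear* neural-network layers map cleanly onto HE while
# nonlinear activations do not. Tiny fixed primes, integers, NOT secure.
import math
import random

def keygen(p=2147483647, q=2305843009213693951):  # demo Mersenne primes, not for real use
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    mu = pow(lam, -1, n)
    return (n, g), (lam, mu, n)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(sk, c):
    lam, mu, n = sk
    u = pow(c, lam, n * n)
    return ((u - 1) // n) * mu % n

pk, sk = keygen()
n, _ = pk

# Server-side "linear layer": plaintext integer weights applied to an
# encrypted input vector. Additions and plaintext-scalar multiplications
# are the only operations the server can do without seeing the data.
weights = [3, -2, 5]          # toy weights (negatives handled mod n)
x = [7, 1, 4]                 # user's private input
enc_x = [encrypt(pk, xi) for xi in x]

acc = encrypt(pk, 0)
for w, cxi in zip(weights, enc_x):
    acc = (acc * pow(cxi, w % n, n * n)) % (n * n)   # E(x)^w = E(w*x); products add

result = decrypt(sk, acc)
if result > n // 2:           # map back from mod-n to a signed integer
    result -= n
print(result, "== expected", sum(w * xi for w, xi in zip(weights, x)))
```

Only the user holds the decryption key, so the server learns neither the inputs nor the layer's output; the cost is that every multiply-accumulate becomes big-integer modular exponentiation, which is the practical slowdown the question asks about.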
xp84 4 months ago
> who does the AI work for

I think even ignoring the scarier government/cops questions, this gets at a key problem. For 25 years we've taught two generations of digital natives, as well as everybody else, that everything on the Internet should be "free as in beer," paid for by trashy advertising, and that consumers' only cash cost should be hardware and an ISP/data plan. Therefore the answer to "who's going to foot the bill for that AI" "must" be Big Adtech, meaning the AI will work 100% for them and often directly against users' interests (just as the YouTube algorithm mindlessly but intentionally prefers to radicalize a user, or drive them to obsession on any number of topics, vs. having them arrive at a healthy level of satisfaction and then sign off and go outside).

In my opinion, a lot of the problems we see in the scary law-enforcement scenarios would be easier to solve if we didn't expect everything to be ad-supported and "free," and rather could be convinced to buy a $5,000 piece of hardware for your home, that you control, that was privy to your encryption keys and performed all the power-insensitive AI processing for your family. That sounds like a lot, but compared to things like cars that people happily finance for $70,000 and smartphones that cost $1,300, it is only weird because we aren't used to it.
lowbatt 4 months ago
Maybe a little off topic, but is there a way for a distributed app to connect to one of the LLM companies (OpenAI, etc.) without the unencrypted data hitting an in-between proxy server?

An app I'm building uses LLMs to process messages. I don't want the unencrypted message to hit my server, and ideally I wouldn't have the ability to decrypt it. But I can't communicate directly from client -> LLM service without leaking the API key.
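[Editor's aside: one common pattern, sketched below with heavy caveats, is to keep the long-lived API key on a small backend whose only job is to mint short-lived, scoped credentials, and have the client call the provider directly over TLS so the message plaintext never transits your server. This only works if the provider actually offers ephemeral or scoped keys; the `/v1/ephemeral_tokens` URL below is hypothetical, and `verify_app_user` is a stand-in for your app's own auth.]

```python
# Hedged sketch of a "token-vending" backend so clients can call an LLM API
# directly over TLS without your server ever seeing message plaintext.
# ASSUMPTIONS: the provider exposes an ephemeral-/scoped-key endpoint (the
# URL below is hypothetical); verify_app_user() is your own session check.
import os
import requests
from flask import Flask, request, jsonify, abort

app = Flask(__name__)
PROVIDER_TOKEN_URL = "https://api.llm-provider.example/v1/ephemeral_tokens"  # hypothetical
LONG_LIVED_KEY = os.environ["PROVIDER_API_KEY"]  # never shipped to clients

def verify_app_user(auth_header: str) -> bool:
    # Stand-in for your real session/JWT validation.
    return auth_header == f"Bearer {os.environ['APP_SHARED_SECRET']}"

@app.post("/mint-token")
def mint_token():
    if not verify_app_user(request.headers.get("Authorization", "")):
        abort(401)
    # Exchange the long-lived key for a short-lived, narrowly scoped one.
    # No message content is present in this request.
    resp = requests.post(
        PROVIDER_TOKEN_URL,
        headers={"Authorization": f"Bearer {LONG_LIVED_KEY}"},
        json={"ttl_seconds": 60, "scope": ["chat.completions"]},
        timeout=10,
    )
    resp.raise_for_status()
    return jsonify({"ephemeral_key": resp.json()["key"]})

# The client then sends its plaintext message straight to the provider using
# the ephemeral key, so it only traverses the client<->provider TLS channel,
# never this backend.
```

If the provider offers no such endpoint, the usual fallback is a stateless pass-through proxy that injects the key and discards the body, which still requires trusting that proxy; truly hiding content from every intermediary needs something like the confidential-compute inference discussed elsewhere in this thread.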
crackalamoo 4 months ago
See also CrypTen, Meta's library for privacy-preserving machine learning: https://github.com/facebookresearch/CrypTen. This isn't fully homomorphic encryption, but it is multi-party computation (MPC), which hides the inputs from the company owning the model.

But while not revealing user input, it would still reveal the outputs of the model to the company. And yeah, as the article mentions, unfortunately this kind of thing (MPC or fully homomorphic encryption) probably won't be feasible for the most powerful ML models.
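[Editor's aside, for readers unfamiliar with the primitive such MPC frameworks build on: each private value is split into random additive shares, so no single party ever holds the input, yet the parties can jointly compute sums (and, with extra protocol machinery, products) over the shares. The self-contained sketch below is a conceptual toy of two-party additive secret sharing, not CrypTen's actual implementation.]

```python
# Toy two-party additive secret sharing over a prime field: the user splits
# each private input into random shares; neither "server" alone learns the
# input, yet together they can compute a linear function of it.
import random

P = 2**61 - 1  # prime modulus for the shares

def share(x: int) -> tuple[int, int]:
    """Split x into two shares that each look uniformly random on their own."""
    s1 = random.randrange(P)
    s2 = (x - s1) % P
    return s1, s2

def reconstruct(s1: int, s2: int) -> int:
    return (s1 + s2) % P

# User's private feature vector and the model's weights (known to the servers).
x = [4, 9, 2]
w = [3, 1, 5]

shares = [share(v) for v in x]
party_a = [s[0] for s in shares]   # what server A sees: random-looking values
party_b = [s[1] for s in shares]   # what server B sees: random-looking values

# Each party computes the weighted sum over *its* shares only.
partial_a = sum(wi * ai for wi, ai in zip(w, party_a)) % P
partial_b = sum(wi * bi for wi, bi in zip(w, party_b)) % P

# Only when the partial results are combined does the true output appear.
print(reconstruct(partial_a, partial_b))           # 31
print(sum(wi * xi for wi, xi in zip(w, x)))        # 31, same linear layer in the clear
```

Multiplying two secret values needs extra machinery (e.g. Beaver triples), and nonlinear activations need approximations or garbled circuits, which is where the real cost of running large models under MPC comes from, echoing the comment's feasibility caveat.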
bobbiechen 4 months ago
There is always going to be a gap between local-first processing and what can be achieved in a full-sized datacenter/cloud. That leads to the risks mentioned in the article.

I wrote about Apple's Private Cloud Compute last year; for the foreseeable future, I still think server-side Confidential Computing is the most practical way to do processing without huge privacy risks: https://www.anjuna.io/blog/apple-is-using-secure-enclaves-to-ensure-ai-privacy-why-arent-you
p4bl0 4 months ago
This article should be accompanied by *Five things privacy experts know about AI* [1]. They pair really well and should probably link to each other.

[1] https://news.ycombinator.com/item?id=42695628
reader9274 4 months ago
For some reason I felt like this article had a lot of promise at the start, but it turned into fluff and I learned nothing. It covered very basic ideas of both AI and encryption, which I didn't expect from such a highly regarded expert.
lifeisstillgood 4 months ago
So if I understand it:

1. E2E encryption does work.

2. But phones can send plaintext back to the cloud to get help doing AI things.

3. And we tend not to know, because it's all "assistance."

But the solution, like anything, is *pricing*. I mean, yet again (Uber, Airbnb) billions of dollars of VC money is used as a subsidy so my photos can get OCR'd.

If phones said "hey, for a dollar fifty I can work out what the road sign says behind your dog's head in the 32 photos your mum sent you last week," I think we would see a different threat landscape.

This is, again, unsustainable cash spending distorting markets and common sense. If the market was "we can OCR and analyse these insurance claims," then things like privacy and encryption *would be first-class requirements*, and harder to sell and build.

By spending a billion they can sell services to people without regulators asking awkward questions, and then they hope for step 3: Profit.

In short, not even AI can spot patterns in encrypted data; it's only when plaintext gets sent around in the *hope* of profit that we see a threat. That seems a simple fix, if not an easy one.
peppertree 4 months ago
Are embeddings enough to preserve privacy? What if I run the encoder/decoder on device and only communicate with the server in embeddings?
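[Editor's aside: probably not on their own. Embeddings are not encryption, and a server that holds the same encoder can often recover a lot about the input, either with trained inversion models or simply by matching against candidate texts. The toy sketch below makes the matching attack concrete; `toy_embed` is a stand-in for whatever real shared encoder is assumed, and the candidate list is obviously artificial.]

```python
# Toy demonstration that sending only embeddings is not a privacy boundary:
# a server holding the same encoder can match incoming vectors against
# guessed plaintexts. toy_embed is a stand-in for a real shared encoder.
import hashlib
import math

def toy_embed(text: str, dim: int = 64) -> list[float]:
    """Deterministic pseudo-embedding built from character trigrams."""
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        h = int.from_bytes(hashlib.sha256(text[i:i + 3].encode()).digest()[:4], "big")
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))   # vectors are already normalized

# Client believes it is only leaking an opaque vector:
private_message = "meet me at the clinic on tuesday"
leaked_embedding = toy_embed(private_message)

# Server-side guessing attack against a candidate list:
candidates = [
    "meet me at the clinic on tuesday",
    "lunch at noon tomorrow?",
    "the quarterly report is attached",
]
best = max(candidates, key=lambda c: cosine(toy_embed(c), leaked_embedding))
print("server's best guess:", best)   # recovers the private message
```

Real encoders are not this trivially matchable, but published embedding-inversion work suggests a surprising amount of the original text can be reconstructed, so an embedding should be treated as roughly as sensitive as the plaintext it came from.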
dathinab 4 months ago
&gt; They’ll order your food and find the best deals on shopping, swipe your dating profile, negotiate with your lenders, and generally anticipate your every want or need.<p>Why should they do so?<p>I mean seriously.<p>There is more money to make in telling you that the AI will buy you the beast deal but instead buy premeditated (i.e. bought) &quot;okay&quot; looking deals instead.<p>Similar dating apps and the related ecosystems has a long history of scamy behavior in all kind of ways as they want to keep you using the app. And people with money have always found ways to highlight themself more. I.e. there is more money to make in &quot;swiping for you&quot; in way which looks on the surface honest but isn&#x27;t.<p>etc. etc.<p>There is basically always more money to make in systematically deceiving people as long as you do it well enough so that people don&#x27;t notice, or they don&#x27;t have a real&#x2F;realistic choice, i.e. are forced by the circumstances.<p>So the moment you take the human and human conscience&#x2F;moral out of the loop and also have no transparency there is a 0% of this ending well if regulators don&#x27;t preclude abuse. And AI is pretty effective at removing transparency and the humanity out of the loop. With how things currently look, especially in the US, they (edit: they==regulators) are more likely to do the opposite (edit: i.e. remove consumer protections).
1vuio0pswjnm7 4 months ago
"You've probably also noticed Silicon Valley's enthusiasm to find applications for this tech."

A "solution" looking for a "problem". Like cryptocurrency. Like the "metaverse". And so on.

When the "solution" brings new problems, SillyCon Valley then proclaims they have a solution to the problem they created.

"Even if these firms don't quite know how AI will be useful to their customers, they've already decided that models are the future."

More often than not the "customers" are advertisers. Other computer users are just ad targets.
deathanatos 4 months ago
TFA makes some rather basic errors.

First:

> Prior to 2011, most cloud-connected devices simply uploaded their data in plaintext.

> Around 2011 our approach to data storage began to evolve. [...] began to roll out default end-to-end encryption [...] This technology changed the way that keys are managed, to ensure that servers would never see the plaintext content of your messages.

"Changed the way that keys are managed" is in confused contradiction with "uploaded their data in plaintext". If you're going from TLS to E2EE, then yeah, "changed the way keys are managed" miiight make sense, though that's not how I'd phrase it. Then later:

> On the one hand they can (1) send plaintext off to a server, in the process resurrecting many of the earlier vulnerabilities that end-to-end encryption sought to close. Or else (2) they can limit their processing to whatever can be performed on the device itself.

We're still confusing "transmit plaintext" with plaintext being available to the server; the clear option of "use TLS" is omitted. It doesn't really undermine the argument (the server would still have access to the data, and could thus maliciously train AI on it) but it is surprising for a "cryptographer".

> For example, imagine that Apple keeps its promise to deliver messages securely, but then your (Apple) phone goes ahead and uploads (the plaintext) message content to a different set of servers where Apple really can decrypt it. Apple is absolutely using end-to-end encryption in the dullest technical sense... yet is the statement above really accurate? Is Apple keeping its broader promise that it "can't decrypt the data"?

No, no reasonable person would believe that (though I am sure that if the scenario ever came to be, Apple, or whoever, would likely argue "yes"), since it would utterly scuttle the term "E2EE". If you say "Our product supports X", and then have to caveat away 100% of what makes X X, then it's just grift, plain and simple. (Now, whether grift sees regulatory action... well.)

> Now imagine that some other member of the group — not you, but one of your idiot friends — decides to turn on some service that uploads (your) received plaintext messages to WhatsApp.

> In general, what we're asking here is a question about informed consent.

I would sort of agree, but corporations will expose the consent here to the "friend", and then argue that because the friend consented to *your* data being uploaded, it is fine. An argument for privacy regulations.

(I don't think you have to go through all this... work. Just upload the user's data. They'll complain, for a bit, but the market has already consolidated into at least an oligopoly, and users have shown that, for the most part, they're going to keep using the product rather than leave, or else I'd be ending this comment with a free "2025 will be the Year of the Linux Desktop". What's going to happen, *regulation* to ensure a free market remains free¹? Please. Cf. MS Recall, currently in the "complain" phase; give it time and we'll reach the "we heard your concerns, and we value your input and take your feedback with the utmost respect *ram it down their throats*" stage.)

(¹ free as in "dictated by the laws of supply & demand", not laissez-faire, which is where the US will be headed for the next four years.)

(And... 2011? I'd've said 2013 is when we found out the 4A meant way less than we thought it did, leading to the rise in massive adoption of TLS. Less so E2EE.)
natch 4 months ago
From Apple's document on Advanced Data Protection:

> With Advanced Data Protection enabled, Apple doesn't have the encryption keys needed to help you recover your end-to-end encrypted data.

Apple doesn't have the keys. Somebody else might. Somebody other than you. Also, I think they meant to say decryption keys, although they're probably just dumbing down terminology for the masses.

> If you ever lose access to your account, you'll need to use one of your account recovery methods

"You'll need to use." Not "there is no way except to use."

> Note: Your account recovery methods are never shared with or known to Apple.

"Shared with or known to Apple." Not "shared with or known to anyone else."

The encryption is there, I believe that. I just don't know how many copies of the keys there are. If the only key is with me, it would be super easy for Apple to just say that. I believe that they have said that in the past, but the wording has now changed to this hyper-specific "Apple does not have the key" stuff.
tonygiorgio 4 months ago
> Although PCC is currently unique to Apple, we can hope that other privacy-focused services will soon crib the idea.

IMHO, Apple's PCC is a step in the right direction relative to where general AI privacy nightmares are at today. It's not a perfect system, since it's not fully transparent and auditable, and I do not like their new opt-out photo-scanning feature running on PCC, but there really is a lot to be inspired by.

My startup is going down this path ourselves, building on top of AWS Nitro and Nvidia Confidential Compute to provide end-to-end encryption from the AI user to the model running on the enclave side of an H100. It's not very widely known that you can do this with H100s, but I really want to see more of this in the next few years.
jrm4 4 months ago
"The goal of encryption is to ensure that only two parties, the receiver and sender, are aware of the contents of your data.

Thus, AI training on your data breaks this, because it's another party.

You now don't have encryption."

Thanks for coming to my blah blah blah.
jFriedensreich 4 months ago
I think this also has a silver lining. The E2E encryption movement, especially for messenger apps, was largely also used to silently lock users out of their own data and effectively prevent them from exercising agency over it: moving to other apps, writing automations, or archiving. This is not just true for WhatsApp (the data export feature has not fully worked since its launch and was just made to appease some EU law that did not properly check whether the button works until the end); Signal also has no way to do this. Maybe with AI coming into the game, companies will finally decide to provide access to data. I just hope it's done in a transparent way, with user opt-in and user control.