
Zeroing buffers is insufficient

299 points | by MartinodF | over 10 years ago

25 comments

pslam · over 10 years ago

Part 2 is correct in that trying to zero memory to "cover your tracks" is an indication that You're Doing It Wrong, but I disagree that this is a *language* issue.

Even if you hand-wrote some assembly, carefully managing where data is stored and wiping registers after use, you still end up with information leakage. Typically the CPU cache hierarchy is going to end up with some copies of keys and plaintext. You knew that? OK, then did you know that a "cache invalidate" operation typically doesn't actually zero its data SRAMs, and just resets the tag SRAMs? There are instructions on most platforms to read these back (if you're at the right privilege level). Timing attacks are also possible unless you hand-wrote that assembly knowing exactly which platform it's going to run on. Intel et al have a habit of making things like multiply-add take a "fast path" depending on the input values, so you end up leaking the magnitude of inputs.

Leaving aside timing attacks (which are just an algorithm and instruction selection problem), the right solution is isolation. Often people go for physical isolation: hardware security modules (HSMs). A much less expensive solution is sandboxing: stick these functions in their own process, with a thin channel of communication. If you want to blow away all its state, then wipe every page that was allocated to it.

Trying to tackle this without platform support is futile, even if you have language support. I've always frowned at attempts to make userland crypto libraries "cover their tracks," because it's an attempt to protect a process from itself. That engineering effort would be better spent on actual, hardware-supported separation, such as process isolation.
willvarfar · over 10 years ago

Excellent point! I really hope such a sensible suggestion is added to mainstream compilers asap and blessed in future standards.

Apologies to everyone suffering Mill fatigue, but we've tried to address this not at the language level but at the machine level.

As mitigation, we have a stack whose rubble you cannot browse, and ... no registers!

But the real strong security comes from the Mill's strong memory protection.

It is cheap and easy to create isolated protection silos (we call them "turfs"), so you can tightly control the access between components. E.g. you can cheaply handle encryption in a turf that has the secrets it needs, whilst handling *each* client in a dedicated sandbox turf of its own that can only ask the encryption turf to encrypt/decrypt buffers, not access *any* of that turf's secrets.

More in this talk http://millcomputing.com/docs/security/ and others on the same site.
AlyssaRowan · over 10 years ago

It's becoming gradually more tempting to write a crypto library in assembly language, because at least then it says exactly what it's doing.

Alas: microcode, unreadability, and the difficulty of going from a provably correct implementation all the way down to bare metal by hand.

The proposed compiler extension, however, makes sense to me. Let's get it added to LLVM & GCC?
cesarb · over 10 years ago

For AES-NI, you are probably already using some sort of assembly to call the instructions. In the same assembly, you could wipe the key and plaintext as the last step.

For the stack, if you can guess how large the function's stack allocation can be (shouldn't be too hard for most functions), you could, after returning from it, call a separate assembly function which allocates a larger stack frame and wipes it (don't forget about the red zone too!). IIRC, OpenSSL tries to do that, using a horrible-looking piece of voodoo code.

For the registers, the same stack-wiping function could also zero all the ones the ABI says a called function can overwrite. The others, if used at all by the cryptographic function, have already been restored before returning to the caller.

Yes, it's not completely portable due to the tiny amount of assembly; but the usefulness of portable code comes not from it being 100% portable, but from reducing the amount of machine- and compiler-specific code to a minimum. Write one stack- and register-wipe function in assembly, one "memset and I mean it" function using either inline assembly or a separate assembly file, and the rest of your code doesn't have to change at all when porting to a new system.
kabdib · over 10 years ago

I don't think this can be a language feature. It's more a platform thing: why is keeping key material around on a stack or in extra CPU registers a security risk? Because someone has access to the hardware you're running on. (Note that the plaintext is just as leaky as the key material. Yike!)

So stop doing that. Have a low-level system service (e.g., a hypervisor with well-defined isolation) do your crypto operations. Physically isolate the machines that need to do this, and carefully control their communication with other machines (PCI requires this for credit card processing, btw). Do end-to-end encryption of things like card numbers, at the point of entry by the user, and use short-lifetime keys in environments you don't control very well.

The problem is much, much wider than a compiler extension.
dmm · over 10 years ago

Remember this the next time someone says "C is basically portable assembler." It's not, precisely because you can do many things in assembly that you can't directly do in C, such as directly manipulating the stack and absolutely controlling storage locations.
pbsd · over 10 years ago

> For encryption operations these aren't catastrophic things to leak — the final block of output is ciphertext, and the final AES round key, while theoretically dangerous, is not enough on its own to permit an attack on AES

This is incorrect. The AES key schedule is bijective, which makes recovering the last round key as dangerous as recovering the first.
nly · over 10 years ago

Anything sent over HTTP(S), such as your credit card numbers and passwords, likely already passes through generic HTTP processing code which doesn't securely erase anything (certainly if you're using separate SSL termination). Anything processed in an interpreted or memory-safe language puts secure erasure outside your reach entirely.

Afaict there's no generic solution to these problems. 99.9% of what these code paths handle is non-sensitive, so applying some kind of "secure tag" to them is just unworkable, and they're easily used without knowing it... it only takes one ancillary library touching your data.
Someone · over 10 years ago

> "As with 'anonymous' temporary space allocated on the stack, there is no way to sanitize the complete CPU register set from within portable C code"

I don't know enough about modern hardware, but on CPUs with register renaming, is that even possible from assembly?

I am thinking of the case where the CPU, instead of clearing register X in process P, renames another register to X and clears that.

After that, program Q might get back the old value of register X in program P by XOR-ing another register with some value (or just by reading it, but that might be a different case; I know little of the hardware specifics), if the CPU decides to reuse the bits used to store the value of register X in P.

Even if that isn't the case, clearing registers is still fairly difficult on multi-core systems. A thread might move between CPUs between the time it writes X and the time it clears it. That is less risky, as the context switch will overwrite most state, but, for example, floating-point register state may not be restored if a process hasn't used floating-point instructions yet.
ggchappell · over 10 years ago

This article makes a good point, but I think the problem is even worse than he describes.

Computer programs of all kinds are being executed on top of increasingly complicated abstractions. E.g., once upon a time memory was memory; today it is an abstraction. The proposed attribute seems workable if you compile and execute a C program in the "normal" way. But what if, say, you compile C into asm.js?

Saying "so don't do that" doesn't cut it. In not too many years I might compile my OS and run the result on some cloud instance sitting on top of who-knows-what abstraction written in who-knows-what language. Then someone downloads a carefully constructed security-related program and runs it on that OS. And this proposed ironclad security attribute becomes meaningless.

So I'm thinking we need to do better. But I don't know how that might happen.
anon4 · over 10 years ago

If I have enough control to the point where I can read your memory in some way, I can just use ptrace. Heck, I could attach a debugger. It seems ludicrous to want that level of protection from a normal program running on Mac/Win/Linux.

Now, if your decryption hardware were an actual separate box, where the user inserts their keys via some mechanism and you can't run any software on it, but simply say "please decrypt this data with key X", then we'd be on to something. It could be just a small SoC which plugs into your USB port.

Or you could have a special crypto machine kept completely unconnected from anything, in a Faraday cage. You take the encrypted data, you enter your key into the machine, you enter the data, and you copy the decrypted data back. No chance of keys leaking in any way.
Chiba-City · over 10 years ago

Please, assembly is OK. It's not magic or special wizardry. My dad programmed and maintained insurance-industry applications in assembly, side by side with many other normal office workers, for decades. Assembly is OK.
cheez · over 10 years ago

The suggestion has the right idea but the wrong implementation. The developer should be able to mark certain data as "secure", so that the security of the data travels with it through the type system.

Botan, for example, has something called a "SecureVector", which I have never actually verified as being secure, but it's the same idea.
delinka · over 10 years ago

Why are there no suggestions to change processors accordingly? Intel should be considering changing the behavior of its encryption instructions to clear state when an operation is complete, or at the request of software. Come to think of it, every CPU designer should be considering an instruction to clear specified state (register set A, register set B) when requested by software. Then the compiler could effectively support SECURE-attributed variables, functions, or parameters without needing to stuff the pipeline with some kind of sanitizing code.
db999999 · over 10 years ago
Try:

```c
#include <string.h>

void bar(void *s, size_t count) {
    memset(s, 0, count);
    __asm__ ("" : "=r" (s) : "0" (s));
}

int main(void) {
    char foo[128];
    bar(foo, sizeof(foo));
    return 0;
}
```

```
gcc -O2 -o foo foo.c -g
gdb ./foo
...
(gdb) disassemble main
Dump of assembler code for function main:
   0x00000000004003d0 <+0>:  sub    $0x88,%rsp
   0x00000000004003d7 <+7>:  mov    $0x80,%esi
   0x00000000004003dc <+12>: mov    %rsp,%rdi
   0x00000000004003df <+15>: callq  0x400500 <bar>
   0x00000000004003e4 <+20>: xor    %eax,%eax
   0x00000000004003e6 <+22>: add    $0x88,%rsp
   0x00000000004003ed <+29>: retq
End of assembler dump.
(gdb) disassemble bar
Dump of assembler code for function bar:
   0x0000000000400500 <+0>:  sub    $0x8,%rsp
   0x0000000000400504 <+4>:  mov    %rsi,%rdx
   0x0000000000400507 <+7>:  xor    %esi,%esi
   0x0000000000400509 <+9>:  callq  0x4003b0 <memset@plt>
   0x000000000040050e <+14>: add    $0x8,%rsp
   0x0000000000400512 <+18>: retq
End of assembler dump.
```
erik123 · over 10 years ago

It very much looks like a situation in which the system has already been compromised and is running malicious programs that it shouldn't. These malicious programs could still face the hurdle of being held at bay by the permission system that prevents them from reading your key file.

However, they could indeed circumvent the permission system by figuring out what sensitive data your program left behind in uninitialized memory and in CPU registers.

Not leaving traces behind then becomes a serious issue. Could the kernel be tasked with clearing registers and clearing reassigned memory before giving these resources to another program? The kernel knows exactly when it is doing that, no?

It would be a better solution than trying to fix all the compilers and scripting engines in use. Fixing those tools smells like picking the wrong level at which to solve this problem...
gioele · over 10 years ago

WRT AES-NI leaking information in the XMM registers, wouldn't starting a fake AES decryption solve the problem?

Also, wouldn't a wrapper function that performs the AES decryption and then manually zeroes the registers be a good enough workaround?
Demiurge · over 10 years ago

Every time I read one of these posts about a clever "attack vector" (how something can be gleaned from some special register, or a timing attack, or some such), I remember a theory that the sound of a dinosaur's scream could be extracted from the impact its sound waves made on a rock's crystal structure.

I googled pretty hard for real-life examples of timing attacks being used, and now of stale data in registers, but couldn't find anything. Does anyone know of examples of this actually being done?
lnanek2 · over 10 years ago

Doesn't actually seem true. OK, running the decrypt leaves the key and data in SSE registers that are rarely used, where they might be looked at later by attackers, and there isn't any portable way to explicitly clear the registers. Then why not just run the decrypt again with nonsense inputs when you are done, to leave junk in there instead? Yes, inefficient, but a clear counterexample. You could then work on doing just enough of the nonsense step to overwrite the registers.
ge0rg · over 10 years ago

Even if the proposed feature is added to C and implemented, there is still the (practical) problem of OS-level task switching: when your process is interrupted by the scheduler, its registers are dumped into memory, from where they might even go into swap space.

The logical next step (but utterly impractical) would be another C-level primitive to prevent OS-level task suspension during critical code paths. Good luck getting that into a kernel without opening a huge DoS surface :)
zvrba · over 10 years ago

Posts like this make me ever more convinced that C combines the worst of "portability" and "assembly" into "portable assembly".
cousin_it · over 10 years ago

I don't completely understand the C spec. Would the following approach work for zeroing a buffer?

1) Zero the buffer.

2) Check that the buffer is completely zeroed.

3) If you found any non-zero bytes in the buffer, return an error.

Is the compiler still allowed to optimize away the zeroing in this case?
ausjke · over 10 years ago

There are chips that provide zeroizing of a small region of device memory when needed; that region is specially designed to hold encryption keys and the like, and the wipe is done in hardware.
rsync · over 10 years ago

Would running your file system read-only and optimizing the system for fast bootup be a workaround? If so, you could zero successfully by rebooting...
higherpurpose · over 10 years ago

> It is impossible to safely implement any cryptosystem providing forward secrecy in C

What about Rust?