TechEcho
How to Secure Existing C and C++ Software Without Memory Safety [pdf]

132 points by aw1621107 about 1 month ago

18 comments

pizlonator about 1 month ago
I think this paper overestimates the benefit of what I call isoheaps (partitioning allocations by type). I wrote the WebKit isoheap implementation so it's something I care about a lot.

Isoheaps can mostly neutralize use-after-free bugs. But that's all they do. Moreover, they don't scale well. If you isoheap a select set of stuff then it's fine, but if you try to deploy isoheaps for every allocation you get massive memory overhead (2x or more) and substantial time overhead too. I know because I tried that.

If an attacker finds a type confusion or heap buffer overflow then isoheaps won't prevent the attacker from controlling heap layout. All it takes is that they can confuse an int with a ptr and it's game over. If they can read ptr values as ints then they can figure out how the heap is laid out (no matter how weirdly you laid it out). If they can also write ptr values as ints then they control the whole heap. At that point it doesn't even matter if you have control flow integrity.

To defeat attackers you really need some kind of 100% solution where you can prove that the attacker can't use a bug with one pointer access to control the whole heap.
pjc50 about 1 month ago
Short paper, so can be easily summarized. The claim is that security can be improved by these compiler- and hardware-assisted measures:

    - Stack Integrity
    - Control-Flow Integrity
    - Heap Data Integrity
    - Pointer Integrity and Unforgeability

They cite the deployment of these measures on recent Apple hardware as evidence of their effectiveness.
lambdaone about 1 month ago
From the cited paper:

"These four types of integrity, do not establish memory safety, but merely attempt to contain the effects of its absence; therefore, attackers will still be able to change software behavior by corrupting memory."

and the paper then goes on to say, about Apple's implementation of the cited techniques:

"This intuition is borne out by experience: in part as a result of Apple's deployment of these defenses since 2019, the incidence of RCE attacks on Apple client software has decreased significantly—despite strong attack pressure—and the market value of such attacks risen sharply."

"Decreased significantly" is not "eliminated"; indeed, you could paraphrase this as "the combination of these techniques has already been shown to be insufficient for security guarantees".

Which is not to say that these mitigations are a bad idea; but I think their benefits are significantly over-sold in the paper.
gizmo about 1 month ago
The memory protection strategies this paper argues for are fine. If we can recompile legacy software to gain better protection against stack and heap exploits, that's a clear win.

As the paper points out, memory safety is not a great concern on phones because applications are sandboxed. And that's correct. If an application is stuck in a sandbox it doesn't matter what that process does within its own process space. Smartphones taught us what we already knew: process isolation works.

Then the paper observes that memory safety is still a significant problem on the server. But instead of pointing out the root cause -- the absence of sandboxing -- the authors argue that applications should instead be rewritten in Go or Rust! This is absurd. The kernel already provides strong memory protection guarantees for each process. The kernel also provides hard guarantees for access to devices and the file system. But server software doesn't take advantage of any of these guarantees. When a server process intermixes data of multiple customers and privilege levels, then any tiny programming mistake (regardless of memory safety) can result in privilege escalation or catastrophic data leaks. What use is memory safety when your Go program returns the wrong user's data because of an off-by-one error? You don't need a root exploit if your process already has "root access" to the database server.

If we want to be serious about writing secure software on the server we have to start taking advantage of the process isolation the kernel provides. The kernel can enforce that a web request from user A cannot return data from user B because the process simply cannot open any files that belong to the wrong user. This completely eliminates all memory safety concerns. But today software on the server emulates what the kernel already does with threading, scheduling, and memory protection, except poorly and in userspace and without any hardware guarantees. Effectively all code runs as root in ring 0. And we're surprised that security continues to plague our industry?
cadamsdotcom about 1 month ago
> For high assurance, these foundations must be rewritten in memory-safe languages like Go and Rust [10]; however, history and estimates suggest this will take a decade or more [31].

The world runs on legacy code. CISA is correct that rewrites are needed for critical software [1][2], but we know how rewrites tend to go, and ROI on a rewrite is zero for most software, so it will take far more than a decade if it happens at all. So score one for pragmatism with this paper! Hope CISA folks see it and update their guidance.

[1] https://www.cisa.gov/news-events/news/urgent-need-memory-safety-software-products
[2] https://www.cisa.gov/resources-tools/resources/case-memory-safe-roadmaps
saagarjha about 1 month ago
There is actually an interesting niche that one can carve out when dealing with an attacker who has a memory corruption primitive but this paper is a bit too simple to explore that space. Preventing RCE is too broad of a goal; attackers on the platforms listed continue to bypass implementations of the mitigations presented and achieve some form of RCE. The paper suggests these are because of implementation issues, and some are clearly bugs in the implementation, but many are actually completely novel and unaddressed workarounds that require a redesign of the mitigation itself. For example, “heap isolation” can be done by moving allocations away from each other such that a linear overflow will run into a guard page and trap. Is it an implementation bug or a fundamental problem that an attacker can then poke bytes directly into a target allocation rather than linearly overwriting things? Control flow integrity has been implemented but attackers then find that, in a large application, calling whole functions in a sequence can lead to the results they want. Is this a problem with CFI or that specific implementation of CFI? One of the reasons that memory safety is useful is that it’s a lot easier to agree on what it is and how to achieve it, and with that what security properties it should have. Defining the security properties of mitigations is quite a bit harder. That isn’t to say that they’re not useful, or can’t be analyzed, but generally the result is not actually denial of RCE.
MattPalmer1086 about 1 month ago
This looks really useful. It doesn't fix the problem of memory corruption, but mostly seems to limit the ability to convert that into remote code execution. And all the techniques are already in widespread use, just not the default or used together.

I would not be surprised if attackers still manage to find sneaky ways to bypass all four protections, but it would certainly raise the bar significantly.
linux_security about 1 month ago
Was discussing this paper with a few colleagues who work in this area, and concluded that this paper seems like an odd combination of:

    - The author citing their own research. (OK, all researchers do this)
    - Mildly scolding the industry for not having applied their research. It's "pragmatic" after all.

The elephant in the room is that these approaches have been widely deployed and their track record is pretty questionable. iPhone widely deploys PAC and kalloc_type. Chrome applies CFI and PartitionAlloc. Android applies CFI and Scudo. Yet memory safety exploitation still regularly occurs against these targets. Is it harder because of these technologies? Probably. But if they're so effective, why are attackers still regularly successful at exploiting memory safety bugs? And what's the cost of applying these? Does my phone's battery die sooner? Is it slower? So now your phone/browser are slower AND still exploitable.
sebstefan about 1 month ago
> A Pragmatic Security Goal

> Remote Code Execution (RCE) attacks where attackers exploit memory-corruption bugs to achieve complete control are a very important class of potentially-devastating attacks. Such attacks can be hugely disruptive, even simply in the effects and economic cost of their remediation [26]. Furthermore, the risk of such attacks is of special, critical concern for server-side platform foundations [10]. Greatly reducing the risk of RCE attacks in C and C++ software, despite the presence of memory-corruption bugs, would be a valuable milestone in software security especially if such attacks could be almost completely prevented. We can, therefore, aim for the ambitious, pragmatic goal of preventing most, or nearly all, possibilities of RCE attacks in existing C and C++ software without memory safety. Given the urgency of the situation, we should only consider existing, practical security mechanisms that can be rapidly deployed at scale.

I don't know if it's obvious to anyone else that this is AI-written, or if it's just me/if I'm mistaken.
nickpsecurity about 1 month ago
I've always favored a large public/private investment into open-source tools like Coverity, PVS Check, and RV-Match. Put extra effort into suppressing false positives and autofixing simple problems. Companies like Apple had enough money to straight up buy the vendors of these tools.

I'd also say, like CPAChecker and Why3, they should be designed in a flexible way where different languages can easily be added. Also, new passes for analyzers. Then, just keep running it on all the C/C++ code in low-false-positive mode.

On top of this, there have been techniques to efficiently do total memory safety. SoftBound + CETS was an example. We should invest in more of those techniques. Then, combine the analyzers with those tools to only do runtime checks on what couldn't be proven.
mre about 1 month ago
> However, their use is the exception, not the rule, and their use—in particular in combination—requires security expertise and investment that is not common. For them to provide real-world, large-scale improvements in the security outcomes of using C and C++ software, there remains significant work to be done. In particular, to provide security benefits at scale, for most software, these protections must be made an integral, easy-to-use part of the world-wide software development lifecycle. This is a big change and will require a team effort.

That's the core problem.

The mechanisms mentioned are primarily attack detection and mitigation techniques rather than prevention mechanisms. Bugs can't be exploited as easily, but they still exist in the codebase. We're essentially continuing to ship faulty software while hoping that tooling will protect us from the worst consequences.

Couldn't one argue that containers and virtual machines also protect us from exploiting some of these memory safety bugs? They provide isolation boundaries that limit the impact of exploits, yet we still consider them insufficient alone.

It's definitely a step in the right direction, though.

The paper mentions Rust, so I wanted to highlight a few reasons why we still need it, for people who might mistakenly think this approach makes Rust unnecessary:

    - Rust's ownership system prevents memory safety issues at compile time rather than trying to mitigate their effects at runtime
    - Rust completely eliminates null pointer dereferencing
    - Rust prevents data races in concurrent code, which the paper's approach doesn't address at all
    - Automatic bounds checking for all array and collection accesses prevents buffer overflows by design
    - Lifetimes ensure pointers are never dangling, unlike the paper's approach, which merely tries to make dangling pointers harder to exploit

So, we still need Rust, and we should continue migrating more code to it (and similar languages that might emerge in the future). The big idea is to shift bug detection to the left: from production to development.
tzs about 1 month ago
OT: Are there any memory-safe languages that are fast and support goto?

I'm writing something that needs to implement some tax computations, and I want to implement them to follow as closely as possible the forms that are used to report those computations to the government. That way it is easy to be sure they are correct and easy to update them if the rules change.

The way those forms work is something like this:

    1. Enter your Foo: _________
    2. Enter your Bar: _________
    3. Add line 1 and line 2: ________
    4. Enter your Spam: _______
    5. Enter the smaller of line 1 and 4: _____
    6. If line 5 is less than $1000 skip to line 9
    7. Enter the smaller of line 2 and $5000: _____
    8. If line 7 is greater than line 4 skip to 13
    ...

With goto you can write code that exactly follows the form:

    Line1: L1 = Foo;
    Line2: L2 = Bar;
    Line3: L3 = L1 + L2;
    Line4: L4 = Spam;
    Line5: L5 = min(L1, L4);
    Line6: if (L5 < 1000) goto Line9;
    Line7: L7 = min(L2, 5000);
    Line8: if (L7 > L4) goto Line13;
    ...

For some forms an

    if (X) goto Y
    ....
    Y:

can be replaced by

    if (!X) {
        ...
    }

because nothing before that has a goto into the body of the if statement. But some forms do have things jumping into places like that. Also jumping out of what would be such a body into the body of something later.

Writing those without goto tends to require duplicating code. The duplication in the source code could be eliminated with a macro system, but don't most memory-safe languages also frown on macro systems?

Putting the duplicate code in separate functions could also work, but often those sections of code refer to things earlier in the form, so some of the functions might need a lot of arguments. However, the code then doesn't look much like the paper form, so it is harder to see that it is correct or to update it when the form changes in different years.
cartalk about 1 month ago
This paper would have been really compelling in 2005-2010, but in 2025 there's too much evidence that these approaches do not result in C++ that is secure. The author cites a number of projects that have broadly applied these techniques, like Chrome and iOS, but these codebases continue to be exploited regularly despite these protections. If you actually look at where those projects are investing, it's on moving to Rust/Swift.
comex about 1 month ago
The paper significantly overstates the scope of PAC (pointer authentication codes) on Apple platforms. To quote the paper:

> This is effectively what is done in Apple software, which uses special ARM hardware support to also check pointer integrity at runtime—i.e., ensure each pointer access uses pointers of the right type[]. Apple uses this further to enforce a form of stack integrity, control-flow integrity, and heap integrity

In reality, the compiler only automatically applies PAC to *code* pointers: stack return addresses, function pointers, and C++ vtable method pointers. You can also manually apply PAC to other pointers using attributes and intrinsics; this is used by components like the Objective-C runtime and the memory allocator. But only a tiny amount of code does this.

PAC is nevertheless a powerful mitigation. But let's see how it stacks up against the paper's claims:

- Heap integrity:

Since Apple's implementation of PAC leaves most data pointers unsigned, it has little direct relevance to heap integrity. The paper seems to want to sign *all* pointers. You could theoretically implement a compiler feature to use PAC instructions for all pointers, but I don't think anyone (not just Apple) has implemented such a thing. It would probably come with high performance and compatibility costs.

- Stack integrity:

The paper defines this as "attackers cannot change the arguments, local variables, or return sites". PAC makes it difficult to change return sites, but does nothing to prevent changing arguments and local variables (unless you're limited to a linear overwrite). Again, it's theoretically possible to use PAC instructions to secure those things: there is a technique to make a single signature that combines multiple pointer-sized values, so you could try to make one signature that covers the whole set of local variables and other stack bits. But nobody does this, so the compatibility and performance costs are unknown. Even SafeStack (which the paper also cites) does not fully protect local variables, though it gets closer.

- Control-flow integrity:

The paper mentions "type signatures", but Apple's PAC-based CFI does not validate type signatures for C function pointers, only vtables and Objective-C isa pointers. Other CFI implementations do validate C function pointer type signatures, like Android, though this seems to come at the cost of a slower pace of adoption.

More importantly, attackers have demonstrated the ability to get around CFI by substituting valid but irrelevant function pointers to achieve "jump-oriented programming" (JOP). Project Zero recently published a blog post explaining a (semi-recent) iOS exploit that used this technique:

https://googleprojectzero.blogspot.com/2025/03/blasting-past-webp.html

I'm not sure whether type signature validation would have prevented this particular exploit, but many C function pointers have pretty simple signatures (yet may do wildly different things), so the benefit is somewhat limited.
xianga about 1 month ago
Not to be harsh, but this article is messy, and I find myself agreeing with the comments regarding hyperbole and AI-writing. It's a directionless nothingburger that hasn't really made any effort to look into recent research on memory allocation and hardening.

Some examples:

In the heap section, the article only cites ±20-year-old papers and mixes user- and kernel-space allocators, such as Apple's kernel kalloc_type and one for Chrome. Not to mention that the author talks about implementing heap regions per-object as if that won't introduce significant memory and performance overhead; it is completely unrealistic in a commodity OS, let alone in a server setting.

The pointer integrity section ends with the statement "The above protections, in combination, can prevent complete software control by attackers able to corrupt memory." - which is not true (see a recent CVE I found after a quick search [1]), even for relatively hardened Apple products that use pointer authentication and a type-segregated allocator, kalloc_type.

Additionally, literally the next sentence in the subsequent section contradicts the previous statement (!): "...therefore, attackers will still be able to change software behavior by corrupting memory". Really dampens the credibility.

If anyone is interested in some recent contributions in the space, I've been looking into these recently (in no particular order): SeaK [2], ViK [3] and Safeslab [4].

[1] https://www.cvedetails.com/cve/CVE-2025-24085/
[2] https://www.usenix.org/conference/usenixsecurity24/presentation/wang-zicheng
[3] https://dl.acm.org/doi/10.1145/3503222.3507780
[4] https://dl.acm.org/doi/10.1145/3658644.3670279
tonyhart7 about 1 month ago
commenting before reading it but I guess memory arena???
m00dy about 1 month ago
Looks like they're finally coming to terms with C++'s flaws.
awaymazdacx5 about 1 month ago
C and C++ code is paradigmatic in being susceptible to CLI security vulnerabilities.

Object-oriented languages typically work in the set A-Z, with limited characters, parameters, etc...

Whereas Wittgenstein's concept of the private language is internal and discursive in Skinnerite probabilistic capacities.