Repeat after me the lesson from the founders of information security: every system, from individual components to their integration, is insecure until proven trustworthy by sufficient analysis. You also need a precise statement of what "secure" means to compare the system against. You then apply methods proven to work for various classes of problems. By 1992, they had found everything from kernel compromises to cache-based timing channels using such methods. On this topic: every hardware and software component in every centralized or decentralized system has covert channels leaking your secrets. Now, if you're concerned or want to be famous, there's something you can do:<p>Shared Resource Matrix for Storage Channels (1983)
<a href="http://www.cs.ucsb.edu/~sherwood/cs290/papers/covert-kemmerer.pdf" rel="nofollow">http://www.cs.ucsb.edu/~sherwood/cs290/papers/covert-kemmere...</a><p>Wray's Extension for Timing Channels (1991)
<a href="https://pdfs.semanticscholar.org/3166/161c3cbb5f8cd150d133a3746987da2d264d.pdf" rel="nofollow">https://pdfs.semanticscholar.org/3166/161c3cbb5f8cd150d133a3...</a><p>Using such methods was mandatory under the first security regulations, the TCSEC. They found a lot of leaks. High-assurance security researchers keep improving on this, with some trying to build automated tools to find leaks in software and hardware. Buzzwords include "covert channels," "side channels," "non-interference proof or analysis," "information flow analysis (or control)," and "information leaks." There are even programming languages designed to prevent accidental leaks in the app or to produce constant-time implementations for stuff like crypto. Go forth and apply covert-channel analysis and mitigation on all the FOSS things! :)<p>Here's an old Pastebin I did with examples of that:<p><a href="https://pastebin.com/ajqxDJ3J" rel="nofollow">https://pastebin.com/ajqxDJ3J</a>
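To make Kemmerer's shared-resource-matrix idea concrete, here's a minimal sketch in Python. The rows are attributes of shared resources, the columns are operations, and each cell records whether the operation Reads and/or Modifies that attribute; any attribute one operation can modify and a different operation can observe is a *candidate* storage channel that an analyst then checks against the security policy. The resource and operation names here are invented for illustration, not taken from the paper.

```python
# Toy shared resource matrix: attribute -> {operation: effects},
# where effects is a subset of {"R", "M"} (Read, Modify).
# All names below are hypothetical examples.
MATRIX = {
    "file.lock_bit":    {"lock_file": {"R", "M"}, "open_file": {"R"}},
    "disk.free_blocks": {"write_file": {"M"}, "stat_fs": {"R"}},
    "file.contents":    {"write_file": {"M"}, "read_file": {"R"}},
}

def candidate_channels(matrix):
    """Flag attributes that one operation modifies and another reads.

    These are only *candidates*: some (like file contents via read/write)
    are legitimate, policy-mediated flows; a human analyst sorts out which
    remain as covert storage channels.
    """
    out = []
    for attr, ops in matrix.items():
        writers = sorted(op for op, fx in ops.items() if "M" in fx)
        readers = sorted(op for op, fx in ops.items()
                         if "R" in fx and "M" not in fx)
        if writers and readers:
            out.append((attr, writers, readers))
    return out

for attr, writers, readers in candidate_channels(MATRIX):
    print(f"{attr}: modified by {writers}, observable via {readers}")
```

The real method also takes the transitive closure of flows and covers indirect attributes (error codes, resource-exhaustion status), but the matrix above is the core bookkeeping step.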
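On the constant-time point: here's a minimal Python sketch of the classic timing channel in secret comparison (e.g. checking a MAC tag) and the standard fix. The early-exit version leaks, through how long it runs, how many leading bytes of the attacker's guess were correct; the constant-time version touches every byte with no data-dependent branch.

```python
def leaky_equal(a: bytes, b: bytes) -> bool:
    """Naive comparison: returns at the first mismatch, so the running
    time reveals the length of the matching prefix (a timing channel)."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def ct_equal(a: bytes, b: bytes) -> bool:
    """Constant-time comparison: accumulate all differences with XOR/OR
    so the work done is independent of where (or whether) bytes differ."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```

In real code you'd reach for a vetted primitive such as Python's `hmac.compare_digest` rather than rolling your own, but the XOR-accumulate pattern above is what those primitives do under the hood.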