Aren't these all pretty dependent on the specific system you're trying to fool? Even with adversarial learning (<a href="https://en.wikipedia.org/wiki/Adversarial_machine_learning" rel="nofollow">https://en.wikipedia.org/wiki/Adversarial_machine_learning</a>), if the system you're trying to hide from differs enough from the one you trained against, won't the attack fail to transfer?<p>Although it does loop back to the original, non-software meaning of the word if the manufacturers ship a _patch_ to update the hat.
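<p>Here's a toy sketch of the transfer problem I mean (made-up linear "detectors" in pure Python, nothing to do with any real system): an FGSM-style perturbation crafted against model A tanks A's score, mostly carries over to a similar model B, but actually backfires against a dissimilar model C.

```python
def score(w, x):
    # Linear "detector": higher score = more confident detection.
    return sum(wi * xi for wi, xi in zip(w, x))

def fgsm_perturb(w, x, eps):
    # Push each feature opposite the gradient's sign to lower the score
    # (for a linear model the gradient is just the weight vector).
    return [xi - eps * (1 if wi > 0 else -1 if wi < 0 else 0)
            for wi, xi in zip(w, x)]

w_a = [ 1.0, -2.0,  0.5,  1.5]   # the model we trained the attack against
w_b = [ 0.9, -1.8,  0.6,  1.4]   # similar weights: attack mostly transfers
w_c = [-1.0,  2.0,  0.5, -1.5]   # mostly opposite weights: attack backfires

x = [0.5, 0.2, 0.8, 0.3]
x_adv = fgsm_perturb(w_a, x, eps=0.3)

for name, w in [("A", w_a), ("B", w_b), ("C", w_c)]:
    print(name, round(score(w, x), 3), "->", round(score(w, x_adv), 3))
# A's and B's scores drop sharply; C's score goes *up*.
```

The effectiveness of the perturbation against each model is basically how well that model's weight signs agree with A's, which is the whole worry about deploying these things against detectors you never trained on.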