I've found that intentionally causing abrupt (but reasonable) changes to hyperparameters in evolutionary spiking neural network simulations (i.e., between generations) results in far more robust simulations that meet fitness criteria with less likelihood of getting stuck somewhere. The tradeoff is that simulation takes longer, but this may be worth it for things like reducing the chance of your resource requirements going asymptotic halfway through.<p>My current method of perturbation is to cap the total number of activations per candidate globally. Once at least 3 improvements in the best global fitness score have been achieved, I cut the global activation limit in <i>half</i>. Obviously, this is fairly catastrophic for the current population. Each generation can then increase the limit by 1% if no improvement is made. This provides aggressive selection pressure for more efficient networks and forces a different kind of partial restart of the training process (i.e., "do what you just did, now with 50% of the resources"). Very often this results in optimization of the network. It also seems to make the networks rebound faster after restarts: at first, a restart can take 10-20 generations to recover from; by generation 1000, this is closer to 1-2.
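The schedule described above can be sketched roughly like this (a minimal sketch, not the commenter's actual code; all names and the counter-reset behavior are assumptions):

```python
def update_activation_limit(limit, improved, improvements_since_cut,
                            cut_after=3, cut_factor=0.5, relax_rate=0.01):
    """One-generation update of the global activation cap.

    improved: did the best global fitness improve this generation?
    After `cut_after` improvements, the cap is halved (the catastrophic
    perturbation); each stagnant generation it relaxes upward by 1%.
    Returns the new (limit, improvements_since_cut) pair.
    """
    if improved:
        improvements_since_cut += 1
        if improvements_since_cut >= cut_after:
            limit = int(limit * cut_factor)        # halve the budget
            improvements_since_cut = 0             # start counting again
    else:
        limit = int(limit * (1 + relax_rate))      # +1% when no improvement
    return limit, improvements_since_cut
```

For example, with a cap of 1000 activations, a third improvement drops the cap to 500, while a stagnant generation nudges it up to 1010.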
Random note: antifragility is called hormesis in living organisms, a concept that has existed since the 1950s, when the wrong dose of herbicide, too low, was found to make plants stronger.
<p><pre><code> Systems or organisms can be defined as antifragile if they derive benefit from systemic variability, volatility, randomness, or disorder.
</code></pre>
If that's a riddle, death fits. Those things are characteristic of the pestilence, famine, war, etc. that feed death. The decay surrounding death is complex and dynamical. And death is more sustainable than any living system.
I wish folks would write more plainly. "Antifragility characterizes the benefit a dynamical system derives from variability in environmental perturbations." Geesh.
I'd like to point out that Norbert Wiener was the first to discover the concept of antifragility (under a different term though).<p>It's also worth checking out more of his works as he initiated the field of cybernetics.
After reading the paper, I'm really unsure what the novel contribution is. It feels like they're attempting to rebrand well-understood concepts from various fields (control systems theory, etc.). The provided mathematical definition of antifragility is somewhat unconvincing too: it's not that it's wrong, per se, but in the effort to find something broad enough to apply across many areas of applied dynamical systems theory, they've had to adopt a definition that is unintuitive and overly general.
How odd that the word "Antifragility" has risen so fast. What's wrong with robust? Is it due to contemporary memes of personal fragility?
There are now so many bullshit terms derived from Taleb's bullshit books. Like how people still keep mentioning "black swans" as if it actually means something other (or something more) than "unexpected event". And for some unfathomable reason it keeps gaining traction. Similarly, Mandelbrot "redefined" (i.e., distorted) the meaning of the "Lindy effect", and it stuck (though I haven't checked whether it was Mandelbrot or, again, Taleb who made it popular to call a millennia-old banality by that name). It should probably serve as yet another example of the fact that if you are arrogant enough, people will follow you just because of that.<p>However, I guess "antifragility" isn't the worst of these.