For myself, decades ago, my learn-as-you-go project was a distributed rendering system. Here's what worked about it:<p><pre><code> - Ray-tracing (for example) is easy to reason about in an object-oriented fashion (this was before functional was all the rage)
 - You're never done. There are always new lighting models, surface models (shaders), geometries, and optimizers to add
- Each module, pretty quickly, gives you visible results.
- Because it's modular, there's lots of opportunity for incremental refactoring of modules and for re-implementation of modules in new languages
 - It could be part of a (much) larger ecosystem. Maybe you want to add water - oceans are fun. To add robot simulators, you need a physics engine, to which you can add ML to generate and optimize controllers so the robot can hit goals. Etc., etc.
</code></pre>
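To make the first point above concrete, here is a minimal sketch of how ray tracing decomposes into objects: geometries and shaders are just subclasses, so "the next module" is always a new class. All names here are illustrative, not from any real renderer.

```python
import math
from abc import ABC, abstractmethod

class Shader(ABC):
    """A surface model; new shading models are new subclasses."""
    @abstractmethod
    def shade(self, distance):
        ...

class DepthShader(Shader):
    """Trivial placeholder: brightness falls off with hit distance."""
    def shade(self, distance):
        return 1.0 / (1.0 + distance)

class Surface(ABC):
    """A geometric primitive; new geometries are new subclasses."""
    @abstractmethod
    def hit_distance(self, origin, direction):
        """Distance along the ray to the nearest hit, or None for a miss."""

class Sphere(Surface):
    def __init__(self, center, radius, shader):
        self.center, self.radius, self.shader = center, radius, shader

    def hit_distance(self, origin, direction):
        # Solve |origin + t*direction - center|^2 = radius^2 for t,
        # assuming direction is normalized (quadratic coefficient a == 1).
        oc = [o - c for o, c in zip(origin, self.center)]
        b = 2 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - self.radius ** 2
        disc = b * b - 4 * c
        if disc < 0:
            return None
        t = (-b - math.sqrt(disc)) / 2
        return t if t > 0 else None

def trace(scene, origin, direction):
    """Find the closest hit and ask that surface's shader for a value."""
    best = None
    for surface in scene:
        t = surface.hit_distance(origin, direction)
        if t is not None and (best is None or t < best[0]):
            best = (t, surface)
    if best is None:
        return 0.0  # background
    t, surface = best
    return surface.shader.shade(t)

scene = [Sphere((0.0, 0.0, 5.0), 1.0, DepthShader())]
print(trace(scene, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))  # hits the sphere at t=4
```

Swapping in a new lighting model means one new Shader subclass; a new geometry is one new Surface subclass, with `trace` untouched, which is what makes the "list of next modules" feel endless.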
In this case, I had to learn numerical methods (for both rigid-body physics and Monte Carlo), various rendering techniques, how to implement shading languages, distributed algorithms, managing distributed clusters, FPGAs (before GPUs were cheaply available), etc., etc.<p>I'm not suggesting a rendering test-bed specifically, but unbounded problems that can be modularized, where you always have a list of "next modules", are one way to have an evergreen project you are always adding things to -- sometimes (but not necessarily) in new languages, using new development paradigms. For me, agent-based computational economics, various physics-related simulations (in general), machine learning around robotics (control systems), and that rendering test-bed were my "domains" of experimentation. For you, it could well be just about anything else.