To optimize a given neural network model, we often apply multiple methods and run many trials, which produces a large number of model binaries. Even with Git LFS, handling them gracefully is still a headache.

Are there any good practices or frameworks for maintaining these experimental models?
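For concreteness, here's a minimal sketch of the kind of per-trial tracking I have in mind, using MLflow purely as one example of such a framework (the experiment name, parameters, metric values, and file paths below are all made up):

```python
# Hypothetical sketch: logging one compression trial with MLflow so the
# resulting binary lives in an artifact store instead of the git repo.
import mlflow

mlflow.set_experiment("resnet50-compression")  # experiment name is made up

with mlflow.start_run(run_name="prune-0.5-int8"):
    # Hyperparameters describing this particular trial
    mlflow.log_param("prune_ratio", 0.5)
    mlflow.log_param("quantization", "int8")

    # ... train / compress / evaluate the model here ...
    mlflow.log_metric("top1_accuracy", 0.742)  # placeholder value

    # Upload the exported binary as a run artifact rather than committing it
    mlflow.log_artifact("exports/model_int8.onnx")  # path is hypothetical
```

Something along these lines keeps the binaries out of the repo, but I'm curious what people actually use in practice at scale.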