As I understand it, this company's existing consumer-level product can already capture the sound of an amplifier, but only as a snapshot of the device's (amplifier, effects pedal) settings in terms of gain, EQ, etc.<p>Meanwhile, they have an <i>extremely</i> labor-intensive set of techniques for modeling a device's analog circuitry, resulting in a model that lets the user adjust gain, EQ, etc. This isn't a consumer-level process; it happens in a laboratory somewhere, and the output ships as a software plugin or as a model on a digital effects unit.<p>This technology bridges the gap. Ultimately it's an unguided ML approach akin to the former, but it introduces ML-guided robotic knob-turning (AKA "TINA") which, unlike the former, maps continuous changes within the device's parameter space, allowing them to ship something more like the latter.
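<p>To make the snapshot-vs-parametric distinction concrete, here's a minimal sketch (my own illustration, not their actual method; the `snapshot_capture` / `parametric_capture` functions and the simple tanh/lowpass stand-ins are invented for the example): a snapshot model is one fixed input-to-output mapping with the knob settings baked in, while a parametric model takes the knob positions as extra inputs, which is why sweeping the real device's knobs with a robot yields the training data you need.

```python
import numpy as np

def snapshot_capture(audio):
    """A 'snapshot' model: one fixed input->output mapping, frozen at
    whatever gain/EQ the device was set to during capture.
    (Stand-in nonlinearity; the real thing is a trained network.)"""
    return np.tanh(3.0 * audio)  # gain of 3 is baked in, not adjustable

def parametric_capture(audio, knobs):
    """A parametric model: knob positions (gain, tone, ...) are extra
    inputs, so the user can still turn them after capture.
    (Hypothetical conditioning scheme, purely illustrative.)"""
    gain, tone = knobs
    driven = np.tanh((1.0 + 9.0 * gain) * audio)
    # crude one-pole lowpass whose cutoff tracks the 'tone' knob
    out = np.empty_like(driven)
    alpha = 0.1 + 0.89 * tone
    acc = 0.0
    for i, x in enumerate(driven):
        acc = alpha * x + (1.0 - alpha) * acc
        out[i] = acc
    return out

# The robot sweeps the real device's knobs and records
# (knob_settings, input_audio, output_audio) triples; those become
# training data covering the continuous parameter space.
rng = np.random.default_rng(0)
test_tone = rng.standard_normal(256) * 0.5
training_set = []
for gain in np.linspace(0.0, 1.0, 5):
    for tone in np.linspace(0.0, 1.0, 5):
        # parametric_capture stands in for the physical amp here
        target = parametric_capture(test_tone, (gain, tone))
        training_set.append(((gain, tone), test_tone, target))

print(len(training_set))  # 25 knob combinations covered
```

The point of the sketch is just the shape of the data: a snapshot gives you one (input, output) pair per device setting, while the robotic sweep gives you pairs indexed by knob position, which is what lets the shipped model expose working gain and EQ controls.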