As a deep learning hobbyist, I have a general sense of which GPUs are great and which are awful, but I'm less familiar with the in-between ones, and I'm curious what criteria pro researchers use. E.g., never use anything besides X; if you're strapped for cash, maybe use Y.

Here are the main ones I've come across, but maybe people would say others are relevant too.

I've included my guesses at ratings/thoughts, as well as the current AWS spot price for 1 GPU in us-east-2. Does this assessment seem roughly right? Am I missing anything? Obviously, assessing these can get complicated depending on exactly what you're using them for; I'm just looking for very high-level thoughts.

A100
9.5/10 - among the best available now?
$1.39/hr (an estimate; AWS only offers an 8-GPU instance at $11.15/hr, so I divided by 8)

V100
8.5/10 - very good
$0.92/hr

P100
8/10 - good, used a fair amount in research
AWS doesn't seem to offer these anymore? Replaced by V100s?

T4
5/10 - I'm not sure. Does anyone actually use these?
$0.36/hr

M60
2/10 - I don't really know; I assume it's bad
$0.34/hr

K80
1/10 - crap, basically don't ever waste your time
$0.27/hr
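
For context on why the ratings fall the way they do, here's a rough spec sheet for these cards as a Python sketch: architecture, compute capability, per-GPU memory, and whether the card has tensor cores (which is most of what separates the top tier from the bottom). Memory figures are for the common variants (V100 also ships with 32 GB, A100 with 80 GB), so double-check the exact instance you're renting.

```python
# Rough per-GPU spec sheet for the cards above (a sketch; memory varies
# by variant, e.g. V100 16/32 GB and A100 40/80 GB).
GPUS = {
    "K80":  {"arch": "Kepler",  "cc": "3.7", "mem_gb": 12, "tensor_cores": False},
    "M60":  {"arch": "Maxwell", "cc": "5.2", "mem_gb": 8,  "tensor_cores": False},
    "P100": {"arch": "Pascal",  "cc": "6.0", "mem_gb": 16, "tensor_cores": False},
    "T4":   {"arch": "Turing",  "cc": "7.5", "mem_gb": 16, "tensor_cores": True},
    "V100": {"arch": "Volta",   "cc": "7.0", "mem_gb": 16, "tensor_cores": True},
    "A100": {"arch": "Ampere",  "cc": "8.0", "mem_gb": 40, "tensor_cores": True},
}

for name, s in GPUS.items():
    print(f"{name}: {s['arch']} (compute capability {s['cc']}), "
          f"{s['mem_gb']} GB, tensor cores: {s['tensor_cores']}")
```

The K80 and M60 predate tensor cores and fast FP16, which is a big part of why they rate so low for training despite the cheap spot price.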