The error-rate graph is extremely interesting/worrying:

"The performing servers return less overall errors. There is however, one exception. Cogen was able to return ALL its requests successfully no matter how hard it was hammered."

Why would the others decide to drop connections or return errors? Surely that makes them pretty unusable?

I wonder why they start dropping/erroring, and in what form.
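For what it's worth, one plausible mechanism (an assumption on my part, not something the article states) is the kernel's bounded listen() backlog: once a server's accept queue fills up, further connection attempts get refused or time out, and the benchmark client counts those as errors. A minimal Python sketch of a server that starts refusing connections under load:

    import socket
    import time

    # Hypothetical slow server: a deliberately tiny listen() backlog plus
    # slow per-request handling. Under a heavy benchmark the kernel's
    # accept queue fills up, and further connection attempts are refused
    # or dropped, which the client sees as errors, not slow responses.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 8080))
    srv.listen(5)  # at most ~5 pending connections can queue up

    while True:
        conn, _addr = srv.accept()
        time.sleep(0.5)  # simulate slow handling; the backlog overflows
        conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok")
        conn.close()

A server that instead keeps accepting and queues the work internally, as Cogen apparently does, trades errors for latency: requests get slower under load but still succeed.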
This may be a dumb question, but I really can't imagine it: what reasonable real-world workload would need more than 1,400 HTTP connections per second (the rate of the worst performer) from a single process on a single server?