It's terrifying that an analysis from software like CacheBack can be used as an important piece of evidence in a murder trial. An error of this magnitude could easily contribute to somebody wrongly losing his or her life, and that is not all right in any way. I would feel a lot more comfortable if a piece of FOSS, which could be independently vetted, were used instead of some half-baked proprietary garbage with a $500 price tag. I'm all for finding a niche market and exploiting it, but to me there is something deeply wrong about hiding the logic behind a piece of software that produces courtroom evidence.
"He found both reports were inaccurate (although NetAnalysis came up with the correct result), in part because it appears both types of software had failed to fully decode the entire file, due to its complexity. His more thorough analysis showed that the Web site sci-spot.com was visited only once — not 84 times."<p>How does that work? I mean, how do you examine what must basically be a log file (though perhaps in some binary format), come up with 84 hits but then realize it was only 1 hit and blame the problem on file complexity? Seems like such an issue would only result in underreporting, not overreporting. Where did the 84 number even come from?
This was a major mistake by the witness in this case, and everyone who has been watching the case already knew about it. Do you know why?<p>Because it was presented to the jury during the trial.<p>The jury was told that the number of visits to that site had been transposed with the number of visits to MySpace. A prosecution witness set the record straight in open court.<p>In fact, defense attorney Jose Baez even brought the error up during closing arguments and cited it as grounds for reasonable doubt about the entire case.
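To make that "transposition" concrete, here is a minimal sketch of one way a parser bug could produce it. Everything below is hypothetical — the table layout, record ids, and field names are invented for illustration, and this is not a claim about how CacheBack or the actual browser history format works — but it shows how a tool that only partially decodes a file and then matches URL records to visit-count records by position instead of by record id ends up attributing one site's visit count to another:<p>

    # Hypothetical layout: two internal tables from a browser history file,
    # stored in different orders, so rows can only be matched safely by record id.
    url_rows = [(2, "http://sci-spot.com/..."), (1, "http://myspace.com/...")]
    count_rows = [(1, 84), (2, 1)]  # record 1 visited 84 times, record 2 once

    # Correct join: pair the tables by record id.
    counts_by_id = dict(count_rows)
    correct = {url: counts_by_id[rid] for rid, url in url_rows}

    # Buggy join: a parser that gives up on the ids (say, after failing to
    # decode part of the file) and pairs the tables row by row attributes one
    # site's visit count to the other.
    buggy = {url: count for (_, url), (_, count) in zip(url_rows, count_rows)}

    print(correct)  # {'http://sci-spot.com/...': 1, 'http://myspace.com/...': 84}
    print(buggy)    # {'http://sci-spot.com/...': 84, 'http://myspace.com/...': 1}

<p>The real bug could be something else entirely, but it at least answers the "how do you over-report" question above: the tool doesn't invent visits out of thin air, it attaches an existing count to the wrong URL.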
Here is his bio:<p><a href="http://www.siquest.ca/jbradley.asp" rel="nofollow">http://www.siquest.ca/jbradley.asp</a><p>He seems heavy on law enforcement credentials, but rather light on Computer Science. Not sure that is the right combo here.
Isn't it strange that when somebody looks for something 84 times, a prosecutor sees that as more incriminating than someone looking for it only once? So a stupid person who needs to read something 84 times, or whose dog eats his printed version 83 times, is more likely to 'have done it' than the person who understands it on the first try or doesn't have a dog?
I'm surprised that nobody with access to the data stopped to ponder that those who know how to search would find what they need in far fewer than 84 searches, while those who don't know how to search would give up earlier. The fact that everyone blindly trusted suspicious data from a 'magical' program is, to me, more disturbing than the flaw itself.