The publishers now encode their papers with individual identifiers, embedding UUIDs in every PDF they serve. That means they can trace a leaked copy back to the subscribing institution (and perhaps to the actual prof?). They have then sent nastygrams threatening fees or loss of access for the institution - a potentially serious punishment.
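As a rough illustration of the naive case, here is a sketch (hypothetical, stdlib only) that scans a PDF's raw bytes for UUID-shaped tokens. Real watermarks may sit in compressed streams, XMP metadata, or the typesetting itself, so this would only catch identifiers stored in plain sight.

```python
import re

# Hypothetical sketch: look for UUID-like strings in raw PDF bytes.
# Compressed streams and metadata-based watermarks will evade this.
UUID_RE = re.compile(
    rb"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-"
    rb"[0-9a-fA-F]{4}-[0-9a-fA-F]{12}"
)

def find_uuids(pdf_bytes: bytes) -> list[bytes]:
    """Return every UUID-shaped token found in the raw bytes."""
    return UUID_RE.findall(pdf_bytes)

# Usage on synthetic data (the filename and ID here are made up):
sample = b"%PDF-1.7 ... /ID <d41d8cd9-8f00-b204-e980-0998ecf8427e> ..."
print(find_uuids(sample))
```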
What is needed is a way to run the papers through an OCR program to regenerate the text, then further process it to randomly vary a large number of adjacent letter spacings. (This is called 'micro kerning', and it allows a second order of unique document recognition: the journal scans a leaked document for these fractional kerning gaps in the letter spacing, which can lead back to the institution.) I suppose a pipeline could be built to do OCR plus random micro-kerning changes - it would take time to set up, but once running it would be a rapid, computer-based document flow. Photos/charts could be sanitised as well, but their legends etc. would also need OCR and micro-kerning adjustments. With 25 words of text, each of 6 letters, there are 125 inter-letter gaps; even two distinguishable spacings per gap give 2^125 combinations, vastly more than the 10,000 or so IDs needed to cover all subscribed institutions - and that is on one chart/photo.
I suspect the subscribing institutions have responded with firm words to their profs and grad students to block this. We could easily confirm that if a few people at assorted institutions let us know whether they have been given firm words about it.

I am not sure how Sci-hub can get past this, unless they get a good Indian court ruling and can use Indian friends to scan printed copies of journals - if those still exist in this online age?