There still seems to be a ~10% misclassification rate with these ML APIs; it might be that they've been trained very conservatively, flagging even the slightest show of skin.
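One way to work around that conservatism, assuming the provider exposes a raw confidence score rather than just a safe/unsafe label, is to pick your own thresholds and send the borderline cases to a human reviewer. A minimal Python sketch of that idea (the function name `get_adult_score` and the cutoff values are hypothetical, not any vendor's actual API):

```python
def moderate(image_bytes: bytes,
             block_above: float = 0.9,
             review_above: float = 0.6) -> str:
    """Route an image into one of three buckets based on the raw score,
    instead of trusting the provider's own (often conservative) label."""
    score = get_adult_score(image_bytes)  # hypothetical provider call
    if score >= block_above:
        return "block"         # clearly explicit: reject automatically
    if score >= review_above:
        return "human_review"  # borderline (e.g. swimwear): queue for a person
    return "allow"             # safe: publish immediately


def get_adult_score(image_bytes: bytes) -> float:
    # Placeholder: swap in your vendor's SDK call here and return its
    # adult-content confidence in [0, 1].
    raise NotImplementedError
```

Tuning `block_above` higher trades fewer false positives (the over-conservative rejections) for a slightly bigger human-review queue.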
Wondering how others moderate their UGC images? (It seems social networks like FB don't do much moderation; I've been embarrassed many times when an explicit image showed up on my wall :P )