I've seen people on HN complain many times that OpenAI's scraping is unnecessarily aggressive and doesn't respect robots.txt, among other things.

It seems like too much of a coincidence that "deep research" was released three days ago (Feb 2) and that the National Library of Medicine site has been under heavy traffic since around Jan 27 (based on what I could see on the Internet Archive).

I have zero evidence that OpenAI is the cause of this heavy traffic, but I wanted to note it on HN to see if it sparks some interesting discussion.