科技回声 (TechEcho)

A tech news platform built with Next.js, offering global tech news and discussion.


We crawled 800M PubMed articles and made this GPT

4 points, by freesam, over 1 year ago

1 comment

freesam, over 1 year ago
Problems:

1. No GPT actually has access to the PubMed article database; most just combine ChatGPT with Google Search, or rely on PubMed's own search interface.

2. Those GPTs use only the search results and never read the content of the PubMed articles when producing an answer.

As a result, the answers from those GPTs are poor.

Solution:

This GPT uses the full content of the 800 million PubMed articles to build a distributed database and serves them through a RAG pipeline. Because the RAG pipeline understands the articles' content, it can surface the most relevant PubMed articles for serious research purposes.
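The retrieval step the commenter describes can be sketched as follows. This is a hypothetical illustration, not the author's actual pipeline: the embedding model, vector store, and PubMed data format are not described in the post, so a toy bag-of-words similarity stands in for a real neural encoder and distributed index.

```python
# Minimal sketch of RAG retrieval over article content (illustrative only;
# the actual GPT's embedding model and distributed database are unknown).
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would use a neural encoder.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical article snippets standing in for full PubMed article texts.
articles = {
    "PMID:1": "aspirin reduces risk of cardiovascular events",
    "PMID:2": "gut microbiome composition and immune response",
    "PMID:3": "low dose aspirin and colorectal cancer prevention",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank articles by similarity of their full text to the query,
    # rather than relying on an external search engine's result list.
    vecs = {pmid: embed(text) for pmid, text in articles.items()}
    q = embed(query)
    ranked = sorted(vecs, key=lambda p: cosine(q, vecs[p]), reverse=True)
    return ranked[:k]

print(retrieve("aspirin cancer prevention"))  # PMID:3 ranks first
```

The key difference from search-only GPTs, as the comment argues, is that ranking here is computed against the article content itself; the top-k passages would then be placed in the model's context to generate a grounded answer.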