TechEcho
A tech news platform built with Next.js, providing global tech news and discussions.

Show HN: DocsGPT-7B – purpose-optimised and fine-tuned model for documentation QA

3 points by grittybe almost 2 years ago

1 comment

wskish almost 2 years ago

"Attention with Linear Biases (ALiBi): Inherited from the MPT family, this feature eliminates the context length limits by replacing positional embeddings, allowing for efficient and effective processing of lengthy documents. In future we are planning to finish training on our larger dataset and to increase amount of tokens for context."

This is interesting, but also confusing. What is the current context limit? It mentions eliminating the limit but then mentions increasing it in the future.
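For readers unfamiliar with the mechanism the quote refers to: ALiBi drops learned positional embeddings and instead adds a linear, distance-proportional penalty to the attention scores, with a fixed per-head slope. Because the penalty is computed from query–key distance rather than from a fixed-size embedding table, the model can in principle attend over sequences longer than those seen in training (though quality at longer lengths still depends on training, which is likely why the project mentions extending context later). A minimal sketch in plain Python, with illustrative function names (not from the DocsGPT codebase):

```python
def alibi_slopes(n_heads):
    """Geometric per-head slopes as described in the ALiBi paper:
    head i gets slope 2**(-8 * (i + 1) / n_heads)."""
    return [2 ** (-8 * (i + 1) / n_heads) for i in range(n_heads)]

def alibi_bias(seq_len, slope):
    """Bias matrix added to raw attention scores before softmax.
    The penalty grows linearly with how far the key is behind the query:
    bias[q][k] = -slope * (q - k). Shown here as a full matrix; a causal
    model would additionally mask positions k > q."""
    return [[-slope * (q - k) for k in range(seq_len)]
            for q in range(seq_len)]
```

Nothing in the bias depends on a maximum length, which is what "eliminates the context length limits" means architecturally; the commenter's question about the *practical* current limit is still a fair one.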