Show HN: I made an Ollama summarizer for Firefox

132 points | by tcsenpai | 7 months ago
Source: https://github.com/tcsenpai/spacellama

6 comments

RicoElectrico, 7 months ago
I've found that, for the most part, the articles I want summarized are the ones that only fit the largest-context models such as Claude; otherwise I can just skim-read the article, possibly in reader mode for legibility.

Is Llama 2 a good fit considering its small context window?

[Comment #41813107 not loaded]
[Comment #41816247 not loaded]
[Comment #41820994 not loaded]
[Comment #41827965 not loaded]
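The context-window concern above can be made concrete with a rough pre-check before sending an article to the model. This is a hedged sketch, not part of SpaceLLama: the ~4-characters-per-token ratio is a common English-text approximation, and the 4096 figure is Llama 2's default context length; both numbers are assumptions for illustration.

```javascript
// Rough heuristic: estimate token count from character length (~4 chars
// per token for English prose) and check it against a model's context
// window, reserving some tokens for the summarization prompt itself.
function fitsContext(text, contextTokens = 4096, reservedForPrompt = 512) {
  const estimatedTokens = Math.ceil(text.length / 4);
  return estimatedTokens + reservedForPrompt <= contextTokens;
}
```

A short article passes this check and can be summarized in one shot; a long one would need chunking or a larger-context model.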
asdev, 7 months ago
I built a Chrome version of this for summarizing HN comments: https://github.com/built-by-as/FastDigest

[Comment #41819934 not loaded]
chx, 7 months ago
Help me understand why people are using these.

I presume you want information of some value to you, otherwise you wouldn't bother reading the article. Then you feed it to a probabilistic algorithm, and so you *cannot have* any idea what the output has to do with the input. Take https://i.imgur.com/n6hFwVv.png: you can somewhat decipher what this slop wants to be, but what if the summary leaves out, invents, or inverts some crucial piece of information?

[Comment #41817968 not loaded]
[Comment #41814327 not loaded]
[Comment #41817580 not loaded]
tcsenpai, 7 months ago
Update: v1.1 is out!

# Changelog

## [1.1] - 2024-03-19

### Added
- New `model_tokens.json` file containing token limits for various Ollama models.
- Dynamic token limit updating based on the selected model in options.
- Automatic loading of model-specific token limits from `model_tokens.json`.
- Chunking and recursive summarization for long pages.
- Better handling of markdown returns.

### Changed
- Updated `manifest.json` to include `model_tokens.json` as a web accessible resource.
- Modified `options.js` to handle dynamic token limit updates:
  - Added `loadModelTokens()` function to fetch model token data.
  - Added `updateTokenLimit()` function to update the token limit based on the selected model.
  - Updated `restoreOptions()` function to incorporate dynamic token limit updating.
  - Added an event listener for model selection changes.

### Improved
- User experience on the options page with automatic token limit updates.
- Flexibility in handling different models and their respective token limits.

### Fixed
- Potential issues with incorrect token limits for different models.
oneshtein, 7 months ago
I've been using PageAssist with Ollama for two months, but I've never used the "Summarise" option in the menu. :-/

[Comment #41816908 not loaded]
donclark, 7 months ago
Can we get this as the default for all newly posted HN articles, please and thank you?

[Comment #41816293 not loaded]
[Comment #41815529 not loaded]