I've found that, for the most part, the articles I want summarized are the ones that only fit the largest-context models such as Claude; otherwise I can just skim-read the article, possibly in reader mode for legibility.

Is Llama 2 a good fit considering its small context window?
I built a Chrome version of this for summarizing HN comments: https://github.com/built-by-as/FastDigest
Help me understand why people are using these.

I presume you want information of some value to you, otherwise you wouldn't bother reading the article. But then you feed it to a probabilistic algorithm, so you *cannot have* any idea what the output has to do with the input. Take https://i.imgur.com/n6hFwVv.png: you can somewhat decipher what this slop wants to be, but what if the summary leaves out, invents, or inverts some crucial piece of info?
Update: v1.1 is out!

# Changelog

## [1.1] - 2024-03-19

### Added
- New `model_tokens.json` file containing token limits for various Ollama models.
- Dynamic token limit updating based on selected model in options.
- Automatic loading of model-specific token limits from `model_tokens.json`.
- Chunking and recursive summarization for long pages (a minimal sketch follows this list)
- Better handling of markdown returns
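For anyone curious how the chunking and recursive summarization might work, here is a minimal sketch against Ollama's `/api/generate` endpoint. The function names, chunk size, model, and prompt are assumptions for illustration, not the extension's actual code:

```js
// Sketch only: split a long page into chunks, summarize each chunk via a
// local Ollama server, then recursively summarize the joined summaries
// until the text fits in a single request.
const OLLAMA_URL = "http://localhost:11434/api/generate"; // assumed default
const MODEL = "llama2";        // assumed model
const CHUNK_CHARS = 8000;      // rough character proxy for the token limit

function chunkText(text, size = CHUNK_CHARS) {
  const chunks = [];
  for (let i = 0; i < text.length; i += size) {
    chunks.push(text.slice(i, i + size));
  }
  return chunks;
}

async function summarizeOnce(text) {
  const res = await fetch(OLLAMA_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: MODEL,
      prompt: `Summarize the following text concisely:\n\n${text}`,
      stream: false,
    }),
  });
  const data = await res.json();
  return data.response;
}

async function summarizeRecursively(text) {
  if (text.length <= CHUNK_CHARS) return summarizeOnce(text);
  const partials = [];
  for (const chunk of chunkText(text)) {
    partials.push(await summarizeOnce(chunk));
  }
  // If the combined partial summaries are still too long, recurse.
  return summarizeRecursively(partials.join("\n\n"));
}
```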
### Changed

- Updated `manifest.json` to include `model_tokens.json` as a web accessible resource.
- Modified `options.js` to handle dynamic token limit updates (sketched after this list):
  - Added `loadModelTokens()` function to fetch model token data.
  - Added `updateTokenLimit()` function to update the token limit based on the selected model.
  - Updated `restoreOptions()` function to incorporate dynamic token limit updating.
- Added event listener for model selection changes.
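Since the changelog only names the functions, here is a minimal sketch of what the dynamic token limit handling in `options.js` could look like. The `model_tokens.json` shape (e.g. `{"llama2": 4096, "mistral": 8192}`), the element IDs `model` and `tokenLimit`, and the use of `chrome.storage.sync` are all assumptions for illustration:

```js
// Sketch only: load per-model token limits and keep the token-limit field
// on the options page in sync with the selected model.
let modelTokens = {};

// Fetch model_tokens.json bundled with the extension.
// Assumed shape: { "llama2": 4096, "mistral": 8192, ... }
async function loadModelTokens() {
  const res = await fetch(chrome.runtime.getURL("model_tokens.json"));
  modelTokens = await res.json();
}

// Update the token-limit input to match the currently selected model.
function updateTokenLimit() {
  const model = document.getElementById("model").value;
  if (modelTokens[model]) {
    document.getElementById("tokenLimit").value = modelTokens[model];
  }
}

// Restore saved options, then apply the model-specific limit.
function restoreOptions() {
  chrome.storage.sync.get(["model"], (items) => {
    if (items.model) document.getElementById("model").value = items.model;
    updateTokenLimit();
  });
}

document.addEventListener("DOMContentLoaded", async () => {
  await loadModelTokens();
  restoreOptions();
  document.getElementById("model").addEventListener("change", updateTokenLimit);
});
```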
### Improved

- User experience in the options page with automatic token limit updates.
- Flexibility in handling different models and their respective token limits.
### Fixed

- Potential issues with incorrect token limits for different models.