I don't get why I wouldn't just keep an open tab with ChatGPT on specific topics. They will very soon have big enough context windows. Why build another UI, deployment pipelines, and all that jazz?

P.S. Nobody will *sms the companion.
Tangentially related: how much would you be willing to pay a company that could deliver a local model implementation that runs on high-tier consumer-grade hardware at reduced ability?
I feel like even if there were some severe restrictions (model isn't open source, DRM, etc.), I'd still be willing to fork out for it.