I'm not an Apple fanboy, but I would consider one of the Apple Silicon laptops, because they use a unified memory architecture: the CPU, GPU, and Neural Engine all share the same pool of RAM. That lets the GPU work with models far larger than a typical discrete card's VRAM would allow, and available memory is usually the limiting factor when running local LLMs.
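For example, llama.cpp's Metal backend can offload every layer to the GPU and still draw on most of the machine's RAM. A minimal sketch using the llama-cpp-python bindings (assuming it's installed with Metal support and you have a GGUF model on disk; the model path below is a placeholder):

```python
from llama_cpp import Llama

# On Apple Silicon, n_gpu_layers=-1 offloads all layers to the GPU via Metal.
# Because memory is unified, the GPU can address most of the system RAM,
# so a heavily quantized 70B model (~40 GB at Q4) can fit on a 64 GB machine
# instead of being capped by a laptop dGPU's 8-16 GB of VRAM.
llm = Llama(
    model_path="models/llama-3-70b-q4_k_m.gguf",  # placeholder path
    n_gpu_layers=-1,  # offload everything to Metal
    n_ctx=4096,       # context window size
)

out = llm("Explain unified memory in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

The same model would simply refuse to load on most laptop GPUs, which is the whole appeal here.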