Because PCI is relatively slow compared to modern CPU buses, nothing else.

This is done intentionally, to make computers cheaper: in a competitive market, a few dollars of difference can dramatically drop sales and even lead to bankruptcy.

So system vendors constantly monitor the market and try to implement only limited functionality, to keep development and manufacturing budgets down.

For example, the initial speed of SATA was not picked at random; it was chosen by looking at the speed of disk drives that existed on the market, plus some headroom for future upgrades.

ISA/PCI speeds were initially chosen by the frequency limits of existing hardware. Basically they were tied to the typical user machine: one network card and one HDD with known speeds (plus the HDD and network roadmaps), so the system would stay viable for the planned number of years.

And lastly, there existed dozens of examples of CPUs on daughter cards, mostly proprietary (I even have a 386 computer with the CPU on a daughter card), but there were also PCI and EISA CPU cards, because in their market the slow speed of a CPU working through PCI or even EISA was not an issue.

CPUs on daughter cards were mostly used in server machines, to allow changing the CPU without changing the motherboard.

Basically, a very large share of non-x86 multiprocessor machines (RISC servers) had CPUs on daughter cards, and a large share of them had features that need special hardware, like hot-swap.
So these daughter boards were not just processor sockets; they also included cache, a bus bridge, in many cases RAM slots, and other circuitry needed for hot-swap.
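
To give a sense of how big the gap behind the first point is, here is a rough back-of-the-envelope sketch of nominal peak bandwidths. The figures are the usual textbook numbers (not measurements), and the Pentium 4-era front-side bus and single-channel DDR4 are just my own example picks for "CPU-side" buses:

    # Rough peak-bandwidth comparison: PCI variants vs example CPU/memory buses.
    # peak bandwidth (MB/s) = bus width in bytes * transfer rate in MT/s
    # All numbers are nominal/illustrative, not measured.

    buses = {
        "PCI 32-bit / 33 MHz":      (4, 33.3),    # ~133 MB/s
        "PCI 64-bit / 66 MHz":      (8, 66.6),    # ~533 MB/s
        "P4-era front-side bus":    (8, 400.0),   # 64-bit wide, 400 MT/s -> ~3.2 GB/s
        "Single-channel DDR4-3200": (8, 3200.0),  # ~25.6 GB/s
    }

    for name, (width_bytes, mtps) in buses.items():
        mb_per_s = width_bytes * mtps
        print(f"{name:28s} ~{mb_per_s:8.0f} MB/s")

Even against a two-decade-old front-side bus, plain PCI is an order of magnitude slower, which is why putting the CPU behind it only made sense in markets where that didn't matter.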