
Ask HN: How does a CPU communicate with a GPU?

148 points | by pedrolins | about 3 years ago
I've been learning about computer architecture [1] and I've become comfortable with my understanding of how a processor communicates with main memory - be it directly, with the presence of caches or even virtual memory - and I/O peripherals.

But something that seems weirdly absent from the courses I took and what I have found online is how the CPU communicates with other processing units, such as GPUs - not only that, but an in-depth description of interconnecting different systems with buses (by in-depth I mean an RTL example/description).

I understand that as you add more hardware to a machine, complexity increases and software must intervene - so a generalistic answer won't exist and the answer will depend on the implementation being talked about. That's fine by me.

What I'm looking for is a description of how a CPU tells a GPU to start executing a program. Through what means do they communicate - a bus? How does such a communication instance look like?

I'd love to get pointers to resources such as books and lectures that are more hands-on/implementation-aware.

[1] Just so that my background knowledge is clear: I've concluded NAND2TETRIS, watched and concluded Berkeley's 2020 CS61C, and have read a good chunk of H&P (both Computer Architecture: A Quantitative Approach and Computer Organization and Design: RISC-V edition), and now am moving on to Onur Mutlu's lectures on advanced computer architecture.

21 comments

rayiner · about 3 years ago
Typically the CPU and GPU communicate over the PCI Express bus. (It's not technically a bus but a point-to-point connection.) From the perspective of software running on the CPU, these days, that communication is typically in the form of memory-mapped I/O. The GPU has registers and memory mapped into the CPU address space using PCIe. A write to a particular address generates a message on the PCIe bus that's received by the GPU and produces a write to a GPU register or GPU memory.

The GPU also has access to system memory through the PCIe bus. Typically, the CPU will construct buffers in memory with data (textures, vertices), commands, and GPU code. It will then store the buffer address in a GPU register and ring some sort of "doorbell" by writing to another GPU register. The GPU (specifically, the GPU command processor) will then read the buffers from system memory and start executing the commands. Those commands can include, for example, loading GPU shader programs into shader memory and triggering the shader cores to execute them.

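To make the "doorbell" idea concrete, here is a minimal, hedged sketch of what a driver's submission path might look like in C. The register offsets and BAR layout are hypothetical, not any real GPU's; ioremap()/iowrite32() and the lower_32_bits()/upper_32_bits() helpers are the actual Linux kernel MMIO APIs.

    /* Hedged sketch of a "ring the doorbell" submission path in a Linux
     * kernel driver.  Register offsets are made up for illustration. */
    #include <linux/io.h>
    #include <linux/kernel.h>
    #include <linux/types.h>

    #define RING_BASE_LO_REG  0x100   /* hypothetical: low 32 bits of buffer address */
    #define RING_BASE_HI_REG  0x104   /* hypothetical: high 32 bits */
    #define DOORBELL_REG      0x200   /* hypothetical: write here to kick the GPU */

    /* BAR0 of the device, ioremap()'d at probe time, e.g.
     * gpu_regs = ioremap(pci_resource_start(pdev, 0), pci_resource_len(pdev, 0)); */
    static void __iomem *gpu_regs;

    static void submit_command_buffer(dma_addr_t cmd_buf_bus_addr)
    {
        /* Tell the command processor where the buffer lives in bus/DMA space. */
        iowrite32(lower_32_bits(cmd_buf_bus_addr), gpu_regs + RING_BASE_LO_REG);
        iowrite32(upper_32_bits(cmd_buf_bus_addr), gpu_regs + RING_BASE_HI_REG);

        /* Each write becomes a PCIe memory-write transaction that the GPU
         * decodes into a register update; the doorbell write tells the
         * command processor to fetch and execute the buffer via DMA. */
        iowrite32(1, gpu_regs + DOORBELL_REG);
    }
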
zoenolan · about 3 years ago
The others are not wrong in saying memory-mapped I/O. Taking a look at the Amiga Hardware Reference Manual [1] and a simple example [2], or a NES programming guide [3], would be a good way to see this in operation.

A more modern CPU/GPU setup is likely to use a ring buffer. The buffer will be in CPU memory. That memory is also mapped into the GPU address space. The driver on the CPU will write commands into the buffer, which the GPU will execute. These will be different from the shader unit instruction set.

Commands would be setting some internal GPU register to a value: setting the framebuffer base pointer, setting up the output resolution, setting the mouse pointer position, referencing a texture from system memory, loading a shader, executing a shader, setting a fence value (useful for seeing when a resource, texture or shader is no longer in use). (A sketch of such a ring follows after the links below.)

Hierarchical DMA buffers are a useful feature of some DMA engines. You can think of them as similar to subroutines. The command buffer can contain an instruction to switch execution to another chunk of memory. This allows the driver to reuse common or expensive-to-generate sequences. OpenGL's display lists are commonly compiled down to a separate buffer.

[1] https://archive.org/details/amiga-hardware-reference-manual-3rd-edition

[2] https://www.reaktor.com/blog/crash-course-to-amiga-assembly-programming/

[3] https://www.nesdev.org/wiki/Programming_guide

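A minimal sketch in C of the ring-buffer handshake described above: a head/tail pair plus a command array in memory visible to both sides, and a hypothetical doorbell MMIO register to nudge the GPU. The layout is illustrative only; real drivers differ in the details.

    /* Hedged sketch: a driver-side command ring shared with the GPU.
     * The struct layout and the doorbell register are hypothetical. */
    #include <stdint.h>

    #define RING_ENTRIES 256

    struct cmd {                      /* one packet in the GPU's command format */
        uint32_t opcode;
        uint32_t payload[3];
    };

    struct cmd_ring {
        volatile uint32_t head;       /* advanced by the GPU as it consumes entries */
        volatile uint32_t tail;       /* advanced by the CPU as it adds entries     */
        struct cmd entries[RING_ENTRIES];
    };

    /* Both of these live where the other side can see them: the ring is in
     * system RAM mapped into the GPU's address space, the doorbell is a GPU
     * register mapped into the CPU's address space. */
    static struct cmd_ring *ring;
    static volatile uint32_t *doorbell;

    static int ring_submit(const struct cmd *c)
    {
        uint32_t next = (ring->tail + 1) % RING_ENTRIES;
        if (next == ring->head)
            return -1;                /* ring full: the GPU hasn't caught up yet */

        ring->entries[ring->tail] = *c;
        __sync_synchronize();         /* make the entry visible before the tail moves */
        ring->tail = next;

        *doorbell = 1;                /* MMIO write: tell the GPU to look again */
        return 0;
    }
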
simne · about 3 years ago
A lot of things happen there.

Most importantly, PCIe is a serial bus with a virtualized interface, so there is no single physical communication process; what happens is more similar to an Ethernet network: each device exposes a few endpoints, each with its own controller, its own address, a few registers to store state and transitions, and memory buffer(s).

Video cards usually have several behaviors. In the simplest modes, they behave just like RAM mapped into a large chunk of the system address space, plus video registers to control video output, to control the address mapping of video RAM, and to switch modes.

In more complex modes, video cards generate interrupts (just a special type of message on PCIe).

In 3D modes, which are the most complex, the video controller takes data from its own memory (which is mapped into the system address space), where a tree of graphics primitives is stored. Some of it is drawn directly from video RAM, but for the rest the bus-master capability of PCIe is used, with which the video controller reads additional data (textures) from predefined chunks of system RAM.

As for GPU compute operation, usually the CPU copies data to video RAM directly, then asks the video controller to run a program in video RAM; when it completes, the GPU issues an interrupt, and then the CPU copies the result back from video RAM.

A recent addition gives the GPU the ability to read data from system disks, using the bus mastering mentioned before, but these additions are not yet widely implemented.

kllrnohj · about 3 years ago
The OSDev Wiki is a great resource on how this all works from the perspective of actually programming it, at least on x86.

For example, here's the page on talking PCI-E: https://wiki.osdev.org/PCI_Express

melenaboija · about 3 years ago
It is old and I am not sure everything still applies, but I found this course useful to understand how GPUs work:

Intro to Parallel Programming:

https://classroom.udacity.com/courses/cs344

https://developer.nvidia.com/udacity-cs344-intro-parallel-programming

aliasaria · about 3 years ago
There is some good information on how PCI Express works here: https://blog.ovhcloud.com/how-pci-express-works-and-why-you-should-care-gpu/

phendrenad2 · about 3 years ago
At a high level, it's actually really simple. Your PCIe devices are each given a region of the address space, say 0x8428000000000000-0x8428000000000fff. Just write to that region from kernel mode. But what do you write? Well, that isn't standardized. It's not even really documented. The best documentation is the source code of the GPU drivers in the Linux kernel, which are usually contributed to by engineers working at GPU vendors, and they don't discuss it much.

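As a hedged sketch of "just write to that region from kernel mode": a Linux PCI driver's probe function maps the device's BAR with the real kernel APIs (pci_enable_device, pci_resource_start, ioremap) and then writes to it. The register offset and value below are placeholders; what to actually write is the undocumented, device-specific part this comment is talking about.

    /* Hedged sketch: map a PCI device's BAR0 in a driver probe and write to it.
     * The offset/value are placeholders for whatever the vendor's hardware
     * actually expects (see the in-tree GPU drivers for the real thing). */
    #include <linux/pci.h>
    #include <linux/io.h>
    #include <linux/errno.h>

    static int toy_gpu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        void __iomem *bar0;
        int err = pci_enable_device(pdev);
        if (err)
            return err;

        /* The "region of the address space" given to this device: its BAR. */
        bar0 = ioremap(pci_resource_start(pdev, 0), pci_resource_len(pdev, 0));
        if (!bar0)
            return -ENOMEM;

        iowrite32(0x1, bar0 + 0x0);   /* placeholder register write */

        pci_set_drvdata(pdev, bar0);
        return 0;
    }
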
ar_te · about 3 years ago
And if you're looking for some strange architecture forgotten by time :) https://www.copetti.org/writings/consoles/sega-saturn/

pizza234 · about 3 years ago
You'll find a very good introduction in the comparch book "Write Great Code, Volume 1", chapter 12 ("Input and Output"), which also explains the history of system buses (therefore, you'll find an explanation of how ISA works).

Interestingly, there is a footnote explaining that "Computer Architecture: A Quantitative Approach provided a good chapter on I/O devices and buses; sadly, as it covered very old peripheral devices, the authors dropped the chapter rather than updating it in subsequent revisions."

ncmncm · about 3 years ago
While we're here: is there any reasonable prospect of keeping one's GPU from being able to read and write to literally anywhere in physical memory?

I.e., a practical way a kernel and driver might be able to forward to the GPU only commands and shaders that can access only your process's memory, and nobody else's, and your process's pixels, and no other process's pixels, when they live in GPU RAM?

For all I know, this is the norm for all GPUs, but I wonder why it is hard, then, for VMs to share a GPU.

justsomehnguy · about 3 years ago
TL;DR: bi-directional memory access, with some means to notify the other party that "something has changed".

It's not that different for any other PCI/PCIe device, be it a network card or a disk/HBA/RAID controller.

If you want to understand how it came to this, look at the history of ISA, PCI/PCI-X, a short stint for AGP, and finally PCIe.

Other comments provide a good ELI15 for the topic.

A minor note about "bus": for PCIe it is mostly a historic term, because it's a serial, point-to-point connection, though the process of enumerating and querying the devices is still very akin to what you would do on some bus-based system. E.g. SAS is a serial "bus" compared to SCSI, but you still operate with it as some "logical" bus, because it is easier for humans to grok it this way.

derekzhouzhen · about 3 years ago
Others have mentioned MMIO. MMIO comes in several kinds:

1. CPU accessing GPU hardware with uncacheable MMIO, such as low-level register access

2. GPU accessing CPU memory with cacheable MMIO, or DMA, such as command and data streams

3. CPU accessing GPU memory with cacheable MMIO, such as textures

They all happen on the bus, with different latency and bandwidth.

chubot · about 3 years ago
BTW, I believe memory maps are set up by the ioctl() system call on Unix (including OS X), which is kind of a "catch-all" hole poked through the kernel. Not sure about Windows.

I didn't understand that for a long time ...

I would like to see a "hello world GPU" example. I think you open() the device and then ioctl() it ... But what happens when things go wrong?

Similar to this "Hello JIT", where it shows you have to call mmap() to change permissions on the memory to execute dynamically generated code.

https://blog.reverberate.org/2012/12/hello-jit-world-joy-of-simple-jits.html

I guess one problem is that this may be typically done in vendor code and they don't necessarily commit to an interface? They make you link their huge SDK.

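In the spirit of the "hello world GPU" request: on Linux the open()/ioctl() path goes through the DRM subsystem, and about the smallest useful call is DRM_IOCTL_VERSION, which asks the kernel driver to identify itself. A hedged sketch; /dev/dri/card0 is an assumption about which node your GPU is, and the header location varies between distros:

    /* Hedged sketch: the open()/ioctl() handshake with a GPU's DRM node.
     * DRM_IOCTL_VERSION and struct drm_version come from the DRM UAPI headers. */
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <drm/drm.h>            /* may be <libdrm/drm.h> on some systems */

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);   /* assumed device node */
        if (fd < 0) { perror("open /dev/dri/card0"); return 1; }

        struct drm_version v;
        char name[64] = {0};
        memset(&v, 0, sizeof v);
        v.name = name;              /* the kernel copies the driver name here */
        v.name_len = sizeof name - 1;

        /* "What happens when things go wrong" is mostly this: the ioctl
         * returns -1 and errno tells you why. */
        if (ioctl(fd, DRM_IOCTL_VERSION, &v) != 0) { perror("ioctl"); return 1; }

        printf("driver: %s %d.%d.%d\n", name,
               v.version_major, v.version_minor, v.version_patchlevel);
        close(fd);
        return 0;
    }
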
dyingkneepad · about 3 years ago
On my system, the CPU sees the GPU as a PCI device. The "PCI config space" [0] is a standard thing, and so the CPU can read it and figure out the device ID, vendor ID, revision, class, etc. From that, the OS looks at its PCI drivers and tries to find which one claims to drive that specific PCI device_id/vendor_id combination (or class, in case there's some kind of generic universal driver for a certain class).

From there, the driver pretty much knows what to do. But primarily the driver will map the registers to memory addresses, so accessing offset 0xF0 from that map is equivalent to accessing register 0xF0. The definition of what each register does is something that the HW developers provide to the SW developers [1].

Setting modes (screen resolution) and a lot of other stuff is done directly by reading and writing these registers. At some point they also have to talk about memory (and virtual addresses), and there's quite a complicated dance to map GPU virtual memory to CPU virtual memory. On discrete GPUs the data is actually "sent" to the memory somehow through the PCI bus (I suppose the GPU can read directly from the memory without going through the CPU?), but in the driver this is usually abstracted to "this is another memory map". On integrated systems both the CPU and GPU read directly from the system memory, but they may not share all caches, so extra care is required here. In fact, caches may also mess up the communication on discrete graphics, so extra care is always required. This paragraph is mostly done by the kernel driver in Linux.

At some point the CPU will tell the GPU that a certain region of memory is the framebuffer to be displayed. And then the CPU will formulate binary programs that are written in the GPU's machine code, and the CPU will submit those programs (batches) and the GPU will execute them. These programs are generally in the form of "I'm using textures from these addresses, this memory holds the fragment shader, this other one holds the geometry shader, the configuration of threading and execution units is described in this structure as you specified, SSBO index 0 is at this address, now go and run everything". After everything is done the CPU may even get an interrupt from the GPU saying things are done, so it can notify user space. This paragraph describes mostly the work done by the user-space driver (in Linux, this is Mesa), which implements the OpenGL/Vulkan/etc. abstractions.

[0]: https://en.wikipedia.org/wiki/PCI_configuration_space

[1]: https://01.org/linuxgraphics/documentation/hardware-specification-prms

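To make the config-space part concrete: Linux exposes each device's config space as a file under sysfs, so a short hedged sketch can read the vendor and device IDs directly (offsets 0x00 and 0x02, little-endian, per the PCI spec). The PCI address 0000:01:00.0 is only an example; substitute your GPU's address from lspci.

    /* Hedged sketch: read a PCI device's vendor/device ID from the config
     * space file that Linux exposes via sysfs.  The device address is an
     * example only; the first 64 bytes are readable without root. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const char *path = "/sys/bus/pci/devices/0000:01:00.0/config";
        FILE *f = fopen(path, "rb");
        if (!f) { perror(path); return 1; }

        uint8_t cfg[4];                    /* first 4 bytes of config space */
        if (fread(cfg, 1, sizeof cfg, f) != sizeof cfg) { perror("fread"); return 1; }
        fclose(f);

        /* Per the PCI spec: offset 0x00 = vendor ID, 0x02 = device ID (LE). */
        uint16_t vendor = (uint16_t)(cfg[0] | (cfg[1] << 8));
        uint16_t device = (uint16_t)(cfg[2] | (cfg[3] << 8));
        printf("vendor 0x%04x, device 0x%04x\n", vendor, device);
        return 0;
    }
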
throwmeariver1 · about 3 years ago
Everyone in tech should read the book "Understanding the Digital World" by Brian W. Kernighan.

cesarb · about 3 years ago
> What I'm looking for is a description of how a CPU tells a GPU to start executing a program. Through what means do they communicate - a bus? How does such a communication instance look like?

For most modern computers, through the PCI Express bus. Take a look at the output of "lspci -v" and you'll see something like:

    00:02.0 VGA compatible controller: [...]
            [...]
            Flags: bus master, fast devsel, latency 0, IRQ 128
            Memory at ee000000 (64-bit, non-prefetchable) [size=16M]
            Memory at d0000000 (64-bit, prefetchable) [size=256M]
            I/O ports at f000 [size=64]
            Expansion ROM at 000c0000 [virtual] [disabled] [size=128K]

That is, the GPU on this particular laptop makes available a region of memory sized 16 megabytes at physical address 0xee000000, and another region of memory sized 256 megabytes at physical address 0xd0000000. Whenever the CPU writes to or reads from these memory regions, it is writing to memory on the GPU, not on the normal RAM chips. And not all of that "memory" on the GPU is real memory; some of it is registers, which are used to control the GPU.

The same happens in the opposite direction: for code running on the GPU, some regions of memory are actually the RAM normally used by the CPU. In either case, the memory read and/or write transactions go through the PCI Express bus to the other device.

The exact details of what is written to (and read from) that memory vary depending on the device. For most GPUs, the driver sets up a list of commands in memory (either "host" memory, which is the RAM on the CPU, or "device" memory, which is the RAM on the GPU accessible through these PCI Express "memory windows"), and writes the address of that command list to a register on the GPU; the GPU then reads the list and executes the commands found in it. These commands can include things like "start N threads of the program found at X with Y as the input" (GPU programs are commonly called "shaders", and they are highly parallel), but also things like "wait for event W to happen before doing Z".

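Those "Memory at ..." regions are also exposed by Linux as sysfs files, so a privileged userspace program can map one and touch the device directly, which is roughly what the lowest layer of a driver does. A hedged sketch; the PCI address, the choice of resource0, and the offset are illustrative, and poking real hardware this way can easily hang a machine:

    /* Hedged sketch: mmap a GPU BAR (a "Memory at ..." region from lspci -v)
     * through sysfs and read a 32-bit register.  Requires root; the device
     * address, resource index and offset are examples only. */
    #include <stdio.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void)
    {
        const char *path = "/sys/bus/pci/devices/0000:01:00.0/resource0";
        int fd = open(path, O_RDWR | O_SYNC);
        if (fd < 0) { perror(path); return 1; }

        size_t map_len = 4096;                      /* first page of the BAR */
        volatile uint32_t *bar = mmap(NULL, map_len, PROT_READ | PROT_WRITE,
                                      MAP_SHARED, fd, 0);
        if (bar == MAP_FAILED) { perror("mmap"); return 1; }

        /* Each access here becomes a PCIe read/write to the device, not to RAM. */
        printf("register at offset 0x0: 0x%08x\n", bar[0]);

        munmap((void *)bar, map_len);
        close(fd);
        return 0;
    }
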
brooksbp · about 3 years ago
Woah there, my dude. Let's try to understand a simple model first.

A CPU can access memory. When a CPU performs loads & stores it initiates transactions containing the address of the memory. Therefore, it is a bus master--it initiates transactions. A slave accepts transactions and services them. The interconnect routes those transactions to the appropriate hardware, e.g. the DDR controller, based on the system address map.

Let's add a CPU, interconnect, and 2GB of DRAM memory:

    +-------+
    |  CPU  |
    +---m---+
        |
    +---s--------------------+
    |      Interconnect      |
    +-------m----------------+
            |
       +----s-----------+
       | DDR controller |
       +----------------+

    System Address Map:
      0x8000_0000 - 0x0000_0000   DDR controller

So, a memory access to 0x0004_0000 is going to DRAM memory storage.

Let's add a GPU.

    +-------+    +-------+
    |  CPU  |    |  GPU  |
    +---m---+    +---s---+
        |            |
    +---s------------m-------+
    |      Interconnect      |
    +-------m----------------+
            |
       +----s-----------+
       | DDR controller |
       +----------------+

    System Address Map:
      0x9000_0000 - 0x8000_0000   GPU
      0x8000_0000 - 0x0000_0000   DDR controller

Now the CPU can perform loads & stores from/to the GPU. The CPU can read/write registers in the GPU. But that's only one-way communication. Let's make the GPU a bus master as well:

    +-------+    +-------+
    |  CPU  |    |  GPU  |
    +---m---+    +--s-m--+
        |           | |
    +---s-----------m-s-----+
    |      Interconnect     |
    +-------m---------------+
            |
       +----s-----------+
       | DDR controller |
       +----------------+

    System Address Map:
      0x9000_0000 - 0x8000_0000   GPU
      0x8000_0000 - 0x0000_0000   DDR controller

Now, the GPU can not only receive transactions, but it can also initiate transactions. Which also means it has access to DRAM memory too.

But this is still only one-way communication (CPU->GPU). How can the GPU communicate to the CPU? Well, both have access to DRAM memory. The CPU can store information in DRAM memory (0x8000_0000 - 0x0000_0000) and then write to a register in the GPU (0x9000_0000 - 0x8000_0000) to inform the GPU that the information is ready. The GPU then reads that information from DRAM memory. In the other direction, the GPU can store information in DRAM memory and then send an interrupt to the CPU to inform the CPU that the information is ready. The CPU then reads that information from DRAM memory. An alternative to using interrupts is to have the CPU poll. The GPU stores information in DRAM memory and then sets some bit in DRAM memory. The CPU polls on this bit in DRAM memory, and when it changes, the CPU knows that it can read the information in DRAM memory that was previously written by the GPU.

Hope this helps. It's very fun stuff!

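The interrupt-free variant at the end (the GPU writes data plus a flag into DRAM and the CPU polls the flag) can be sketched in a few lines of C. Purely illustrative of the handshake; the mailbox layout and the doorbell register are assumptions, as in the other sketches:

    /* Hedged sketch of the polling handshake: the CPU writes a request into
     * shared DRAM, rings a hypothetical GPU doorbell register, then polls a
     * completion flag that the GPU sets when it is done. */
    #include <stdint.h>

    struct mailbox {
        volatile uint32_t request;      /* written by CPU, read by GPU  */
        volatile uint32_t response;     /* written by GPU, read by CPU  */
        volatile uint32_t done;         /* 0 -> 1 when the GPU finishes */
    };

    static struct mailbox *mbox;        /* in DRAM, visible to both masters */
    static volatile uint32_t *doorbell; /* MMIO register inside the GPU     */

    static uint32_t run_on_gpu(uint32_t request)
    {
        mbox->done = 0;
        mbox->request = request;
        __sync_synchronize();           /* publish the request before the kick  */
        *doorbell = 1;                  /* CPU -> GPU: "work is ready"          */

        while (mbox->done == 0)         /* GPU -> CPU: completion, by polling   */
            ;                           /* a real driver would sleep or use IRQs */

        return mbox->response;
    }
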
roschdal · about 3 years ago
Through the electrical wires in the PCI express port.
Randolf_Scott · about 3 years ago
Drivers make all hardware communicate.
dragontamer · about 3 years ago
I'm no expert on PCIe, but it's been described to me as a network.

PCIe has switches, addresses, and so forth. Very much like IP addresses, except PCIe operates on a significantly faster level.

At its lowest level, PCIe x1 is a single "lane", a singular stream of zeros and ones (with various framing / error correction on top). PCIe x2, x4, x8, and x16 are simply 2x, 4x, 8x, or 16 lanes running in parallel and independently.

-------

PCIe is a very large and complex protocol, however. This "serial" comms can become abstracted into memory-mapped I/O. Instead of programming at the "packet" level, most PCIe operations are seen as just RAM.

> even virtual memory

So you understand virtual memory? PCIe abstractions go up to and include the virtual memory system. When your OS sets aside some virtual memory for PCIe devices, and programs read/write to those memory addresses, the OS (and PCIe bridge) will translate those RAM reads/writes into PCIe messages.

--------

I now handwave a few details and note: GPUs do the same thing on their end. GPUs can also have a "virtual memory" that they read/write to, which translates into PCIe messages.

This leads to a system called "Shared Virtual Memory", which has become very popular in a lot of GPGPU programming circles. When the CPU (or GPU) reads/writes a memory address, it is then automatically copied over to the other device as needed. Caching layers are layered on top to improve the efficiency (some SVM may exist on the CPU side, so the GPU will fetch the data and store it in its own local memory / caches, but always rely upon the CPU as the "main owner" of the data. The reverse, GPU-side shared memory, also exists, where the CPU will communicate with the GPU).

To coordinate access to RAM properly, the entire set of atomic operations + memory barriers has been added to PCIe 3.0+. So you can perform "compare-and-swap" on shared virtual memory, and read/write these virtual memory locations in a standardized way across all PCIe devices.

PCIe 4.0 and PCIe 5.0 are adding more and more features, making PCIe feel more and more like a "shared memory system", akin to the cache-coherence strategies that multi-CPU / multi-socket systems use to share RAM with each other. In the long term, I expect future PCIe standards to push the interface even further in this "like a dual-CPU-socket" memory-sharing paradigm.

This is great because you can have 2 CPUs + 4 GPUs on one system, and when GPU#2 writes to address 0xF1235122, the shared-virtual-memory system automatically translates that to its "physical" location (wherever it is), and the lower-level protocols pass the data to the correct location without any assistance from the programmer.

This means that a GPU can do things like perform a linked-list traversal (or tree traversal), even if the nodes of the tree/list are spread across CPU#1, CPU#2, GPU#4, and GPU#1. The shared-virtual-memory paradigm just handwaves the details and lets the PCIe 3.0 / 4.0 / 5.0 protocols handle them automatically.

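One portable way to see shared virtual memory from the programmer's side is the OpenCL 2.0 SVM API, where a single allocation is addressable by both the CPU and the GPU through the same pointer. A minimal hedged sketch (assumes an OpenCL 2.0-capable device and driver; error handling and the kernel launch itself are trimmed):

    /* Hedged sketch: coarse-grained Shared Virtual Memory via OpenCL 2.0. */
    #define CL_TARGET_OPENCL_VERSION 200
    #include <CL/cl.h>
    #include <stdio.h>

    int main(void)
    {
        cl_platform_id plat;
        cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);

        /* One allocation, one pointer, visible to both CPU and GPU; the
         * runtime/driver/PCIe machinery migrates or mirrors the pages. */
        int *shared = clSVMAlloc(ctx, CL_MEM_READ_WRITE, 1024 * sizeof(int), 0);
        if (!shared) { fprintf(stderr, "SVM not available on this device\n"); return 1; }

        for (int i = 0; i < 1024; i++)
            shared[i] = i;              /* CPU writes through the pointer */

        /* A kernel bound with clSetKernelArgSVMPointer(kernel, 0, shared)
         * would read and write exactly the same addresses on the GPU side. */

        clSVMFree(ctx, shared);
        clReleaseContext(ctx);
        return 0;
    }
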
rasz · about 3 years ago
On the PC side, start by reading some basics like https://archive.org/details/URP_8th_edition/ (newer editions require logging in and borrowing).

> What I'm looking for is a description of how a CPU tells a GPU to start executing a program. Through what means do they communicate - a bus? How does such a communication instance look like?

A long time ago you would memory-map the framebuffer and just write directly to it.

Then the first 2D acceleration showed up in 1987 in the form of the IBM 8514 (later cloned by ATI/Matrox/S3/Tseng and others). You wrote commands one at a time using I/O port access to a FIFO, with polling for idle/full, and no direct access to the framebuffer: http://www.os2museum.com/wp/the-8514a-graphics-accelerator/

The next evolution was MMIO - memory-mapped I/O. You no longer executed dedicated CPU I/O instructions (assembler IN/OUT); I/O ports were simply addresses in memory. You still had FIFOs and wrote one command at a time: http://www.o3one.org/hwdocs/video/voodoo_graphics.pdf

Then someone threw DMA into the mix. Now you could DMA the contents of a circular buffer filled with your commands: http://www.bitsavers.org/components/s3/DB019-B_ViRGE_Integrated_3D_Accelerator_Aug1996.pdf

We finally got command lists/command buffers/bundles copied directly to the GPU.

Nowadays you have multiple command lists/command buffers/bundles going in parallel: https://developer.nvidia.com/blog/advanced-api-performance-command-buffers/

On the hardware side, the 8/16-bit ISA bus was a shared parallel connection to the CPU bus at a fixed clock (4.77-10 MHz, 4 clocks per transfer, ~5 MB/s max speed).

It took us up to 1992 to get the next commonly used solution: a "rogue" consortium of companies tired of IBM's shit designed VESA Local Bus (a true hack) in the form of slapping expansion cards directly onto the raw 32-bit CPU bus of 486 processors. Cheap, no licensing fees, extremely fast (40 MHz x 32 bit = potentially faster than later PCI), easy to implement.

This got replaced with the advent of the Pentium (64-bit external CPU data bus) and the introduction of PCI. PCI is still a shared parallel bus, but this time 32 bits at 33 MHz with packetized transactions.

AGP was "just" a faster PCI on its own dedicated separate controller (no contention with other PCI devices) and with optimized addressing (sideband): 32 bit at 66 MHz, then x2 DDR, x4 QDR, x8 ODR. The last one means there are 8 transfers taking place within one clock cycle, for a nice 2 GB/s.

PCI-E is a faster, bidirectional, serial, point-to-point PCI with the ability to combine links into bundles (x1-x16). PCI-E devices live on a network switch and don't block each other from talking simultaneously. You could think of PCI-E as every PCI device getting its own dedicated dual-direction AGP connector. (A small sketch contrasting port I/O with MMIO follows after the links below.)

Some vintage hands-on coding examples:

2D Tseng Labs ET4000 coding: https://www.youtube.com/watch?v=K8kZ4BFxOtc

2D Cirrus Logic: https://www.youtube.com/watch?v=WoAE7x-u1g0

"How 3D acceleration started 20 years ago: S3/Virge register level programming": https://www.youtube.com/watch?v=fXJ11_wG_0U

"Acceleration code working on real S3 Virge/DX": https://www.youtube.com/watch?v=Hsg1N4IqXac

"Direct hardware accelerated 3d in 20kB code": https://www.youtube.com/watch?v=n509_wN02u8

"Bare metal hardware 3d texturing in 23kb of code w/ S3/Virge": https://www.youtube.com/watch?v=UgvBGXiw6LY

"Testing our latest low-level hardware 3d code on real S3/Virge hardware": https://www.youtube.com/watch?v=px--LWdRoYA

"Live coding and testing more low-level 3D w/ S3/Virge": https://www.youtube.com/watch?v=l3lH0cIZUSA

"Finishing low-level hardware S3/Virge acceleration demo": https://www.youtube.com/watch?v=JmfeB2LEDbc

"3dfx Voodoo: Low-level & bare-metal driver-less code": https://www.youtube.com/watch?v=LDT6KlfOG2k

"Finally 3dfx Voodoo triangles": https://www.youtube.com/watch?v=ZWaDqY4gqhw

"More GPU programming Voodoo case study": https://www.youtube.com/watch?v=AYZvNyxFHqk

"Quite final 3dfx Voodo low-level code working": https://www.youtube.com/watch?v=2ADQgIEWrx4

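As mentioned above, a tiny hedged sketch contrasting the two access styles from the 8514/VGA era and the MMIO era. The port number is the traditionally harmless 0x80 diagnostic port, and the "MMIO window" is only a plain array standing in for a real mapping; ioperm()/outb()/inb() are the real <sys/io.h> facilities on x86 Linux (root only):

    /* Hedged sketch: dedicated port I/O instructions vs. memory-mapped I/O. */
    #include <stdio.h>
    #include <stdint.h>
    #include <sys/io.h>      /* ioperm(), outb(), inb(): x86 Linux, needs root */

    int main(void)
    {
        /* Style 1: port I/O.  The CPU executes an actual OUT instruction. */
        unsigned short port = 0x80;     /* POST diagnostic port, harmless to poke */
        if (ioperm(port, 1, 1) != 0) { perror("ioperm (needs root)"); return 1; }
        outb(0x42, port);
        printf("port 0x%x reads back 0x%x\n", port, inb(port));

        /* Style 2: MMIO.  No special instruction: once the device's register
         * window is mapped, an ordinary store is the command write.  A plain
         * buffer stands in for the real mapping here. */
        static uint32_t fake_mmio_window[1024];
        volatile uint32_t *regs = fake_mmio_window;
        regs[0x10] = 0x42;              /* same idea, just a memory store */
        return 0;
    }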