
Clocks for Software Engineers

434 points by mr_tyzic over 7 years ago

16 comments

btown over 7 years ago
One of my favorite undergrad electrical engineering classes [0] took an innovative approach to introducing this. Instead of learning about clocks/pipelines and HDL at the same time, we only looked at the former. We created our own simulators for an ARM subset, fully in C, where only a single for/while loop was allowed in the entire codebase, representing the clock ticks. Each pipeline stage, such as Instruction Fetch, would read from a globally instantiated struct representing one set of registers, and write to another one. If you wanted to write to the same place you read from, you could only do so once, and you'd better know exactly what you were doing.

Because we didn't need to learn a new language/IDE/environment at the same time that we learned a new paradigm, we were able to keep our feet on solid ground while working things out; we were familiar with the syntax, so as soon as we realized how to "wire something up," we could do so with minimal frustration and no need/ability to Google anything. Of course, it was left to a subsequent course to learn HDL and load it onto real hardware, but as a theoretical basis, this was a perfect format. Much better than written tests!

[0] http://www.cs.princeton.edu/courses/archive/fall10/cos375/descrip.html - see links under Design Project, specifically http://www.cs.princeton.edu/courses/archive/fall10/cos375/Cproject10.pdf
Joking_Phantom over 7 years ago
When I took Berkeley's EECS151 class (Introduction to Digital Design and Integrated Circuits), the first lecture actually did not go over clocks. Instead, it went over the simple building blocks of circuits - inverters, logic gates, and finally combinational logic blocks made up of the previous two. These components alone do not need a clock to function; their static functions are merely subject to physical limitations such as the speed of electrons, which we package into something called propagation delay. It is entirely possible to build clockless circuits, otherwise known as asynchronous circuits.

From the perspective of an electrical engineer and computer scientist, asynchronous circuits can theoretically be faster and more efficient. Without the restraint of a clock slowing an entire circuit down to its slowest component, asynchronous circuits can instead operate as soon as data is available, while spending less power on overhead such as generating the clock and powering components that are not changing state. However, asynchronous circuits are largely the plaything of researchers, and the vast majority of today's circuits are synchronous (clocked).

The reason we use synchronous circuits - which may also be the reason many students learning circuits try to build them without clocks - is abstraction. Clocked circuits can have individual components/stages developed and analyzed separately. You leave problems that do not pertain to the function of a circuit, such as data availability and stability, to the clock of the overall circuit (clk-to-q delay, hold delay, etc.), and can focus on functionality within an individual stage. As well, components of a circuit can be analyzed by tools we've built to automate the difficult parts of circuit design, such as routing, power supply, and heat dissipation. This makes developing complex circuits with large teams of engineers "easier." The abstraction of synchronous circuits sits one step above asynchronous circuits. Without a clock, asynchronous circuits can run into problems where the outputs of components are briefly wrong due to race conditions, a problem synchronous design prevents by holding information between stages stable until everything is ready to go.

The article's point that hardware design begins with the clock is useful when you are trying to teach software engineers, who are used to thinking in a synchronous, ordered manner, about practical hardware design, which is done almost entirely with clocks. However, it is not the complete picture when trying to build an understanding of electrical engineering from the ground up. Synchronous circuits are built from asynchronous circuits, which were built from our understanding of E&M physics. Synchronous circuits are then used to build the ASICs, FPGAs, and CPUs that power our routers and computers, which run instructions based on ISAs that we compile down to from higher-level languages. It's hardly surprising that engineers who are learning hardware design build clockless circuits - they aren't wrong for designing something "simple" and correct, even if it isn't currently practical. They're just operating on the wrong level of abstraction, which they should have at least a cursory knowledge of so that synchronous circuits make sense to them.
teraflop over 7 years ago
I'm not surprised that software engineers find these concepts difficult to understand at first -- it's a very different way of thinking, and everyone has to start somewhere. But I do find it kind of odd that someone would jump straight into trying to use an HDL without already knowing what the underlying logic looks like. (My CS degree program included a bit of Verilog programming, but it only showed up after about half a semester of drawing gate diagrams, Karnaugh maps and state machines.)

Does this confusion typically happen to engineers who are trying to teach themselves hardware design, or is it just an indication of a terribly-designed curriculum?
AceJohnny2 over 7 years ago
TL;DR:

> "The reality is that no digital logic design can work 'without a clock'. There is always some physical process creating the inputs. These inputs must all be valid at some start time - this time forms the first clock 'tick' in their design. Likewise, the outputs are then required from those inputs some time later. The time when all the outputs are valid for a given set of inputs forms the next 'clock' in a 'clockless' design. Perhaps the first clock 'tick' is when the last switch on their board is adjusted and the last clock 'tick' is when their eye reads the result. It doesn't matter: there is a clock."

Put another way, combinatorial systems (the AND/OR/etc.[1] logic gates that form the hardware logic of the chip) have a physical propagation delay: the time it takes for the input signals at a given state to propagate through the logic and produce a stable output.

Do not use the output signal before it is stable. That way lie glitches and the death of your design.

Clocks are used to tell your logic: "NOW your inputs are valid."

The deeper your combinatorial logic (the more gates in a given signal path), the longer the propagation delay. And the maximum propagation delay across your entire chip[2] determines your minimum clock period (and thus your maximum clock speed).

There exist clockless designs, but they get exponentially more complicated as you add more signals and the logic gets deeper. In a way, clocks let you "compartmentalize" the logic, simplifying the design.

[1] What's the most widespread fundamental gate in the latest fab processes nowadays? Is it NAND?

[2] or at least per clock domain
alain94040 over 7 years ago
This is such an important notion.

Another way I try to explain hardware design to people coming from a software background:

You get one chance to put down in hardware as many functions as you want. You cannot change any of them later. All you can do later is sequence them in whatever order you need to accomplish your goal.

If you think of it this way, you realize that the clock is critical (it's what makes sequencing possible), and re-use of fixed functions introduces you to hardware sharing, pipelining, etc.

But it's hard to grasp.
amelius over 7 years ago
And here's "Clocks for Hardware Engineers": [1]

[1] http://lamport.azurewebsites.net/pubs/time-clocks.pdf
martin1975 over 7 years ago
Reading this would actually tremendously help software engineers improve their concurrent/parallel software design skills as well. I never had a particular desire to do hardware (my degree is in CS), but some of the best C/C++ programmers, the ones able to squeeze out every last ounce of performance, truly understood not just software languages but also computer architecture, and I might even go so far as to say they understood the physics very well. The LMAX software architecture is a product of this kind of hardware+software understanding. Awesome article.
DigitalJack over 7 years ago
"The reality is that no digital logic design can work 'without a clock'."

This is not true.

"HDL based hardware loops are not like this at all. Instead, the HDL synthesis tool uses the loop description to make several copies of the logic all running in parallel."

This is not true as a general statement. There are for loops in HDLs that behave exactly like software loops, and there are generative for loops that make copies of logic.

Also, the "everything happens at once" claim is not true either. In fact, without a delay between two events, synchronous digital design would not work (specifically, flip-flops would not work).
jonnycomputer over 7 years ago
I liked the article, but I feel like an argument for why you need a clock was never really made.
mzzter over 7 years ago
Learning to think in parallel, and to understand and design procedures that don't run sequentially, would be good practice for concurrent runtimes and distributed systems too, not only for HDLs.
kbeckmann over 7 years ago
The zipcpu blog posts never cease to amaze me; the content is so good. As a SW developer who plays around in Verilog in my free time, I find the posts extremely helpful. I just want to tip my hat to the author(s?), thanks!
trapperkeeper74 over 7 years ago
BTDTBTTS. Way back when in uni, we had to design a working CPU with everything of the time except superscalar, MIMD, and reservation/retire stations. Pipelined CPUs can get faster clock rates by splitting the hardware into more (smaller) stages, but at the expense of total latency (due to the added pipeline registers) AND slower pipeline stalls on branch prediction misses (the pipeline has to be emptied of wrong micro-ops). The overall CPU can only be as fast as its slowest stage.

It looks like this: the Si are the mostly combinational logic for each stage and the Pi are the pipeline registers between stages (nearly all signals between stages should be buffered by pipeline regs). IO is omitted, but it's the same overall architecture.

    Clk --------+---------------+--- ...
                |               |
    +-> S0 --> |P0| --> S1 --> |P1| --> .... --+
    |                                          |
    +------------------------------------------+
PeterisP over 7 years ago
Figure 5 in that article pretty much summarizes the main point - if you showed it to the original (hypothetical?) student, it should be sufficient to make them understand the downsides of their design.
gravypod over 7 years ago
How does one go about starting a project in an HDL? I have always wanted to design and build a CPU, but I've never figured out how to set up the "build chain" for VHDL. How do you implement, compile, and test different features? Is there an IDE?

Understanding the basics is important, but I'm held up before the basics even start mattering.
gertef over 7 years ago
Conceptually, this is the same idea as concurrent network programming with futures, yes?
blackbear_ over 7 years ago
Immediately thought it was referring to this: https://news.ycombinator.com/item?id=15282967