It's easier to think about this using the water pressure analogy. Imagine a large barrel full of water connected to another identical barrel. The first barrel is filled to the top whilst the second is empty. A valve is opened between the two barrels and they equalise with half of the water in each.

The initial energy is mgh and the final energy is 2 * (m/2 * g * h/2) = mgh/2, so half of the energy has disappeared. It is clear that work could have been done by the water moving between the two barrels (like in a hydro-electric power station).
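A quick numeric check of that bookkeeping, treating each barrel's water as a point mass at its fill height as above (a rough sketch; the numbers are arbitrary):

    # Water-barrel version of the two-capacitor energy accounting
    m = 100.0   # kg of water initially in barrel 1
    g = 9.81    # m/s^2
    h = 1.0     # m, fill height of barrel 1

    initial = m * g * h                   # all the water at height h in one barrel
    final = 2 * ((m / 2) * g * (h / 2))   # half the water at half the height, in each barrel

    print(initial, final, final / initial)   # -> 981.0 490.5 0.5, half the energy is gone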
I once got this question in an interview, and they didn’t know I’d heard it before. So I pretended to figure it out during the interview. I gave them a correct answer after just a few minutes, and they cut the interview short and had me go directly to the HR office to proceed with onboarding. Two years later I told them how I already knew the question. I’ve been with the company for 6 years now.
I don't think this should be called a paradox. It's just a case where the limitations of the model (ideal everything) are clear and lead to inconsistent results. Adjusting the model and making it more realistic quickly clears up the "paradox". To me this seems like something one would use as an example in a physics lecture to show when certain assumptions are necessary and when they aren't.
This example violates one of the core assumptions of circuit theory, which is that a node cannot have two voltages. As soon as the switch closes, is the voltage at the switch node Vi or 0 V? It would be both, which is impossible. If you had some kind of component between the two then there would be a voltage drop across it, and you would get realistic results.
It's simpler than that. When you short a capacitor, where does the energy go? An ideal capacitor has an infinite current in that situation. Inductors have a comparable situation - when you open-circuit an ideal inductor, you get an infinite voltage.

In practice, you can get hundreds or even thousands of volts from inductors that way, which is how auto ignitions and boost-type switching power supplies work.

Similar problems come up in the idealized physics of impulse/constraint physics engines. Getting rid of the energy in collisions requires hacks to prevent things from flying off into space, a problem with early physics engines.
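To put a rough number on the inductor case (a back-of-the-envelope sketch using V = L * di/dt, with made-up component values):

    # Interrupting current through an inductor: the voltage spike is V = L * di/dt
    L_coil = 0.01      # 10 mH coil (illustrative value)
    delta_i = 1.0      # amps of current being interrupted
    delta_t = 10e-6    # seconds it takes the switch to open

    print(L_coil * delta_i / delta_t)   # -> 1000.0 volts from a modest coil and a fast switch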
Adding the second capacitor is equivalent to doubling the area of a single capacitor. If you had a plate capacitor where the plates were extendable, you could charge it and decrease the energy by extending the plates, which tells you that there's a force pushing to extend the plates. Of course there is--the charges on one plate repel each other and "want" to increase their spacing. If you do extend the plates there is a force times a distance, which equals work done by the charges, decreasing the capacitive energy.
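A quick check of that picture at fixed charge (a minimal sketch; the component values are arbitrary):

    # At fixed charge Q, stored energy is E = Q^2 / (2*C).
    # Doubling the plate area doubles C, which halves the stored energy.
    Q = 1e-6        # coulombs, conserved while the plates are extended
    C1 = 1e-6       # farads, original capacitor
    C2 = 2 * C1     # farads, after doubling the plate area

    E1 = Q**2 / (2 * C1)
    E2 = Q**2 / (2 * C2)
    print(E1, E2, E2 / E1)   # -> 5e-07 2.5e-07 0.5, same charge, half the energy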
Here's another one - put an oscilloscope across a capacitor and turn off the circuit it's connected to. Quickly short the capacitor with something, and watch the voltage on the oscilloscope as the capacitor discharges; remove the short and watch it start recharging slightly.

Where does this charge come from?
FYI - this is a popular intro-level interview question at semiconductor companies. This mechanism is the whole basis of how DRAM works, as well as the foundation of some cap sense technologies. Likely plenty of other applications I don't know about, too!
Explained here:

http://hyperphysics.phy-astr.gsu.edu/hbase/electric/capeng2.html#c4

For an infinitely small resistor the energy is effectively a spark pulse coupled directly to free space, and half is radiated away in EM waves.
In true EE parts-reduction fashion, you don't even need two capacitors. One capacitor plus a switch works out similarly - in fact it's an equivalent circuit. Everyone has an intuition for shorting a capacitor (zap!). When there are very few components defining a system, the parasitic components must be significant.
This is effectively the same paradox some friends and I discussed in college. Another way of looking at it is:

If you connect an ideal voltage source through a resistor R to a capacitor C, the amount of energy required to charge the capacitor is CV^2 while the amount of energy that winds up in the charged capacitor is 1/2 CV^2. The other 1/2 CV^2 is dissipated across R. This is unaffected by the value of R, even as R -> 0. The only thing that changes is the charge time.

R = 0 is impossible for real circuits, but no matter how close you get you still lose half of that energy in the resistor.
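You can check that numerically with a crude time-step simulation (a rough sketch; the component values are arbitrary):

    # Charge C through R from an ideal source V and tally the heat in R.
    # The total comes out to ~1/2*C*V^2 no matter what R is; only the
    # charge time changes.
    def heat_in_resistor(V=10.0, C=1e-6, R=100.0, steps=200000):
        dt = 10 * R * C / steps      # simulate ten RC time constants
        vc, heat = 0.0, 0.0
        for _ in range(steps):
            i = (V - vc) / R         # current through the resistor
            heat += i * i * R * dt   # power dissipated in R, times dt
            vc += i * dt / C         # capacitor voltage rises
        return heat

    for R in (1.0, 100.0, 10000.0):
        print(R, heat_in_resistor(R=R))   # all ~5e-05 J = 1/2 * 1e-6 * 10^2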
Hearing about this problem while I was taking a stat mech course (online, from Leonard Susskind), it reminded me more of the energy/entropy relationship than of a paradox. For example, if we have hot and cold reservoirs separated by a divider in a bath, the entropy of the system increases when we remove the divider and it reaches equilibrium. After reaching equilibrium, moving the divider back will not cause the two reservoirs to return to the initial hot/cold state.

The system with two capacitors seems like a good analogy to a heat bath; when the switch is flipped, the charge is divided equally between the capacitors when it reaches equilibrium. The entropy of the system has increased, and the potential to do work has decreased. The number of states the system can be distinguished in has decreased by half (it is not possible to know which side had the initial charge after the switch has been flipped). Flipping the switch back will not bring the charge back to only one side. While this line of thinking doesn't explain the physics of what is happening, there is clearly a (statistically) irreversible change going on, for which the natural language seems to be energy, entropy, and temperature.
Does anyone have a sense of how much the idealization of circuit design limits people's imagination when designing real circuits? Are there fruitful possibilities that could be explored but aren't because the abstraction doesn't contain them?
I wouldn't really consider this a paradox. It is a failure mode of the idealization you made in saying that wires and capacitors have zero resistance and inductance.

Even if you work by your idealization, saying wires have zero resistance means that every piece of connected conductor in the circuit is at the same potential (in the absence of a magnetic field), so saying you have two connected capacitors charged to different potentials is already violating your assumption.

You can find similar edge cases in almost every situation and field of study where you try to simplify things with an approximation, and most of them aren't called paradoxes.
Got this question in an interview at National Semiconductor (out of MSEE school). I didn't think it was too paradox-y: we didn't work the math through fully, but my answer was "start with the circuit with a resistor of value R and the energy will burn up in R; now recalculate as R→0 and the energy will still burn up in R (even though R is zero-y)". That seemed to satisfy the interviewer (who had not heard that answer before).
I remember explaining how an NMR machine works to an EE person. They simply refused to believe that you could inject current into a supercooled superconducting magnet and have it circulate for months at a time. The only way I could convince them it worked was to point out that the system did slowly lose energy and you had to go back and add more current.
Try it in SPICE with realistic small resistances and inductances. Every electrical engineer is taught, in basic theory: the voltage on a capacitor cannot change instantaneously; the current in an inductor cannot change instantaneously. These become much clearer when you write down the INTEGRAL form of the V-I relations.
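If you don't have SPICE handy, here's a crude time-step version of the same experiment (a sketch with small but nonzero R and L; all the values are arbitrary):

    # Two equal capacitors joined through a small series R and stray L.
    # The caps equalise at half the voltage, and the heat in R comes out
    # to half the initial stored energy regardless of how small R is.
    C1 = C2 = 1e-6      # farads
    R = 0.1             # ohms of wiring resistance (try other values)
    L = 1e-9            # henries of stray wiring inductance
    v1, v2, i = 10.0, 0.0, 0.0
    heat, dt = 0.0, 1e-10

    for _ in range(200000):               # ~20 microseconds of simulated time
        heat += i * i * R * dt            # energy dissipated in R so far
        di = (v1 - v2 - i * R) / L * dt   # loop current obeys L*di/dt = v1 - v2 - i*R
        v1 -= i / C1 * dt                 # C1 discharges through the loop
        v2 += i / C2 * dt                 # C2 charges
        i += di

    print(v1, v2)    # both settle near 5 V
    print(heat)      # ~2.5e-05 J, half the initial 0.5 * C1 * 10^2 = 5e-05 J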
@surewhynat

Why did someone mod you down dead?

- quote -
It's a quadratic function of the voltage, W = C V^2.

Let's say initially:

    C = 1
    V = 16
    W = 1 * 16^2 = 256

When the voltage is split between the two capacitors, it drops in half from 16 to 8, but because there are two capacitors you count them twice:

    W = C V^2 + C V^2
    C = 1
    V = 8
    W = (1 * 8^2) + (1 * 8^2) = 128

Where did half the energy go? People are saying it's lost as electromagnetic radiation during the transfer. But on paper it still seems counterintuitive, so it's called a "paradox" instead of just how a quadratic function works. Something in the universe makes the energy levels grow quadratically as the voltage increases. More electric pressure (V) = disproportionately more energy. Cool!
- unquote -
Without reading the solution: wouldn’t this impossible zero-resistance, zero-inductance ideal setup result in infinite-frequency, infinite-magnitude oscillations? Add in some resistance and the missing energy goes into heat generated by the resistance on the way to equilibrium.

Edit: also, when I think about it, there is a little bit of additional energy in the open switch, which is itself a capacitor.

Edit 2: for this circuit to stay the way it is in the initial state, wouldn’t the open switch need to have equal capacitance to the capacitors? Or some kind of voltage-generating field applied across it that is removed when the switch is closed?
A somewhat analogous effect/paradox in thermodynamics is the Joule expansion:

https://en.wikipedia.org/wiki/Joule_expansion
This is another instance of those situations where "the abstraction is leaky". Relatedly, I've (unfortunately) seen a few textbooks which attempt to teach the basics of computing and digital logic by assuming that gates have no propagation delay, or at least that's what their timing diagrams seem to show. It's very puzzling because a lot of sequential circuit elements rely on that delay in order to work.
I knew nothing about electricity until I started studying for my technician's license recently. It was actually very interesting, and I learned enough that I could actually understand this wiki article. It made me interested in studying for the general and extra licenses since they require more advanced electrical knowledge.
Funny thing is - this is exactly what happens many millions of times every single clock cycle inside pretty much any modern(-ish) digital CMOS CPU/ASIC, with the right-hand-side capacitor being the parasitic gate capacitance of the driven gate.
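For a rough sense of scale, that charge-and-dump loss is the classic dynamic power term P = alpha * C * V^2 * f (a back-of-the-envelope sketch; every number below is made up for illustration, not a figure for any real chip):

    # Dynamic power from charging/discharging switched capacitance each cycle
    alpha = 0.1       # activity factor: fraction of the capacitance switching per cycle
    C_sw = 1e-8       # farads of total switched (mostly gate) capacitance
    V = 1.0           # volts of supply
    f = 3e9           # hertz, 3 GHz clock

    print(alpha * C_sw * V**2 * f)   # -> 3.0 watts for these made-up numbers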
This is an easy one. The assumptions are flawed. W = 0.5 * C * V^2 is a back-of-the-envelope shorthand for a whole realm of formulas in the study of capacitors, and it doesn't work in edge cases like this.
I learned that this is also why you should never connect batteries in parallel - current will flow between them in much the same manner, draining away energy and wearing out the batteries.
The 'paradox' in this experiment lies in believing that you can have a voltage across one of the capacitors and zero volts across the other before you hit the switch. In other words, you can't have two connected locations resting at different voltages.

    ------| |------------------| |------
      +6      -6            0      0
              ^^^^^^^^^^^^^^^
                NOT POSSIBLE!
Two capacitors in series are equivalent to one (combined) capacitor, just like a bunch of batteries can be considered to be one (combined) battery.