This reminds me a bit of a project where I coded the audio for an art exhibition around 20 years ago, which was a game for multiple simultaneous players. As the game got more intense it became apparent that the ball/wall sounds were playing music, and that the three players were all actually playing one musical piece.<p>It was based around 3 arcade cabinets pointing together, so the players couldn't see what was on each other's screens.<p>This was achieved by modifying the ball speed/direction slightly so that it arrived at the bat/wall at a musically relevant point and triggered the correct sound.<p>Ah, here you go, Josh has a reference to it on his site: <a href="https://www.autogena.org/work/ping" rel="nofollow">https://www.autogena.org/work/ping</a>
Very cool! As a further variation on this idea, I'm imagining training a reinforcement learning agent on atari games / super mario, but with an additional music-based reward/input, to try to get a "musical" looking playthrough... (Not sure how good it would look / whether it would be worth it though...)
This reminds me of those polyrhythm visualizations on YouTube (check out LucidRhythms for some great examples).<p><a href="https://www.youtube.com/@LucidRhythms" rel="nofollow">https://www.youtube.com/@LucidRhythms</a><p>Probably almost impossible to adapt written works 'backwards' into a visualization but it might be fun to have different bars represent different notes and have the balls split for chords.
This is so freaking cool! I was mesmerized watching the paddles move as the beat progressed. There are certain things that just look right, and that makes them beautiful. This project is one of them!
While technically okay, there are multiple cases where a paddle and the ball move at almost identical speeds, so it looks like the paddle is pushing the ball the whole time. (By the way, p[i] = 0 should be disallowed for this reason.) This is inevitable when a large d[i] is immediately followed by a very small d[i+1], but it might be possible to avoid it whenever feasible.
Imagining an `installation` in my space, using both my MT-80S and a display. Can I even reason about the timing of this? I'm not smart here, just interested<p><a href="https://www.matrixsynth.com/2014/07/roland-mt-80s-midi-player-synthesizer.html?m=1" rel="nofollow">https://www.matrixsynth.com/2014/07/roland-mt-80s-midi-playe...</a>
Really interesting. For some reason my brain really really hates this. I think it screws with my internal model of causality or something and I find it difficult to watch. Odd
Atari had a video music "visualizer" device back in the late 1970s, designed by one of the developers of the Pong game. It was one of the first consumer products of its kind, if not the first.<p><a href="https://en.wikipedia.org/wiki/Atari_Video_Music" rel="nofollow">https://en.wikipedia.org/wiki/Atari_Video_Music</a><p>If you've seen the movie <i>Over the Edge</i>, Claude and Johnny have one at their house.
Awesome work!<p>How is the beat that syncs the pong chosen? For Bad Apple!, especially around 1m55 <a href="https://www.youtube.com/watch?v=bvxc6m-Yr0E" rel="nofollow">https://www.youtube.com/watch?v=bvxc6m-Yr0E</a>, it seems off.<p>Good suggestion from a YouTube commenter, pasting it here:<p>> This is pretty cool.. it would be cooler if there were multiple pongs and paddles for each type of beat (like high beats and low beats)
This is really cool. I took an optimization class a few years back, but haven't made the time to do anything fun with it since. This inspires me to do it.<p>I do kind of wish that the last note corresponded to a game over, though, and I wonder if a smaller screen or faster ball would widen the playing field a little. Maybe I'll fork the code and try some of those out myself.
Absolutely wonderful!<p>> "We obtain these times from MIDI files, though in the future I’d like to explore more automated ways of extracting them from audio."<p>Same here. In case it helps: I suspect a suitable option is (python libs) Spleeter (<a href="https://github.com/deezer/spleeter">https://github.com/deezer/spleeter</a>) to split stems and Librosa (<a href="https://github.com/librosa/librosa">https://github.com/librosa/librosa</a>) for beat times. I haven't ventured into this yet though, so I may be off. My ultimate goal is to be able to do it 'on the fly', i.e. in a live music setting, generating visualisations a couple of seconds ahead of being played along with the track.<p>Not sure if this is unsavory self-promotion (it's not for commercial purposes, just experimenting), but I am in the middle of documenting something similar at the moment.<p>Experiments #1 - A Mutating Maurer Rose | Syncing Scripted Geometric Patterns to Music: <a href="https://www.youtube.com/watch?v=bfU58rBInpw" rel="nofollow">https://www.youtube.com/watch?v=bfU58rBInpw</a><p>It generates a mutating Maurer Rose using react-native-svg on my RN stack, synced to a music track I created in Suno AI *.
Manually scripted to sync up at the moment (not automatic until I investigate the above python libs).<p>Not yet optimised, proof of concept.
The Geometric pattern (left) is the only component intended to be 'user facing' in the live version - But the manual controls (middle) and the svg+path html tags (right) are included in this demo in order to show some of the 'behind the scenes'.<p>Code not yet available, app not yet available to play with. Other geometric patterns in the app that I have implemented:<p>- Modified Maurer<p>- Cosine Rose Curve<p>- Modified Rose Curve<p>- Cochleoid Spiral<p>- Lissajous Curve<p>- Hypotrochoid Spirograph<p>- Epitrochoid Spirograph<p>- Lorenz Attractor<p>- Dragon Curve<p>- Two Pendulum Harmonograph<p>- Three Pendulum Harmonograph<p>- Four Pendulum Harmonograph<p>This is the Typescript Maurer Rose function (that is used with setInterval + an object array of beat times which determine when to advance the 'n' variable):<p><pre><code> export const generateGeometricsSimplemaurer = (n: number, d: number, scale: number = 1) => {
  const pathArray: TypeSvgPathArray = [];
  for (let i = 0; i <= 360; i += 1) {
    const k = i * d;
    const r = Math.sin(n * k * (Math.PI / 180));
    const x =
      r *
        Math.cos(k * (Math.PI / 180)) *
        40 * // base scale
        scale +
      50; // to center the image
    const y =
      r *
        Math.sin(k * (Math.PI / 180)) *
        40 * // base scale
        scale +
      50; // to center the image
    pathArray.push(`${i === 0 ? "M" : "L"} ${x} ${y}`);
  }
  const pathString: string = pathArray.join(" ");
  return pathString;
};
</code></pre>
setInterval isn't an appropriate solution for the long term.<p>The geometric patterns (with their controls) will have a playground app that you can use to adjust variables... As for the music sync side, it will probably take me a long time.<p>*Edit: I just noticed that the author (Victor Tao) actually works at Suno
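Since setInterval drifts, here's a minimal sketch of one alternative, under some assumptions: the beat times below are a made-up example (in practice they'd come from a MIDI export or a beat tracker such as librosa's `beat_track`), and `drawMaurerRose` is a hypothetical render function. The idea is to compute the current beat index directly from the audio clock on every frame, so the drawing is a pure function of playback time and timing error can't accumulate.

```typescript
// Hypothetical beat times in seconds, assumed sorted ascending.
// In practice these would be exported from a MIDI file or a beat tracker.
const beatTimes = [0.5, 1.0, 1.5, 2.25, 3.0];

// Count how many beats have a time <= t, via binary search.
// Using this count as the Maurer rose's `n` makes the drawing a pure
// function of playback time, so no drift can accumulate.
function beatIndexAt(times: number[], t: number): number {
  let lo = 0;
  let hi = times.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (times[mid] <= t) lo = mid + 1;
    else hi = mid;
  }
  return lo;
}

// Sketch of the render loop (browser-only, so left as comments):
// const audio = document.querySelector("audio")!;
// function frame() {
//   const n = beatIndexAt(beatTimes, audio.currentTime);
//   drawMaurerRose(n); // hypothetical: regenerate the SVG path for n
//   requestAnimationFrame(frame);
// }
// requestAnimationFrame(frame);
```

With this, a skipped or delayed frame simply resolves to the correct `n` on the next frame instead of accumulating error the way a fixed setInterval tick does.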
pretty neat! it feels like if you spaced out the “important” beats instead of syncing most of them, and shrunk the play area so the paddles are larger, it would have an even more interesting effect.
> Synchronizing pong to music with constrained optimization<p>Nothing new. Apparently there are references to people doing this in ancient and medieval times.<p><a href="https://en.wikipedia.org/wiki/Flatulist" rel="nofollow">https://en.wikipedia.org/wiki/Flatulist</a>