Generative music in 40-140 bytes on-chain. ByteBeats are miniature functions that synthesize audio (and video) via arithmetic and bitwise operations (&^|<<*+/%>>). Because bit patterns repeat at power-of-two intervals, 4/4 beat-driven music emerges on its own. Each function in this drop is itself randomly generated, grouped into six families of composition and four families of visualizers. Some ByteBeats loop, some generate new patterns indefinitely. Some are for headbanging, some are for climbing forever through oceans of phase cancellation.
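The core mechanic is simple enough to sketch in a few lines: a ByteBeat is a formula evaluated once per sample on an ever-increasing counter t, with the result truncated to one unsigned byte. A minimal Python sketch of that loop (the `bytebeat` helper is illustrative, not from this drop; the example formula is a classic viznut-style one-liner):

```python
# Minimal ByteBeat renderer: evaluate a formula at each sample index t,
# keeping only the low 8 bits of the result (one unsigned byte per sample).
def bytebeat(formula, n_samples):
    return bytes(formula(t) & 0xFF for t in range(n_samples))

# A classic viznut-style formula -- about 20 characters of source,
# yet it produces structured, beat-like audio when played at 8 kHz.
samples = bytebeat(lambda t: t * (t >> 10 & 42), n_samples=8000)
```

Piped as raw unsigned 8-bit samples to an audio player at 8 kHz (e.g. `aplay -f U8 -r 8000` on Linux), those bytes become sound; the power-of-two shifts and masks are what make the rhythm lock to the sample counter.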
♫ ByteBeats
Standing on the Shoulders of Giants
This style of algorithmic music composition was first popularized by Viznut with a series of videos. Kyle McDonald was an early pioneer in visualizing the structure of these functions as bitmap images. Definitely check out their work. Today people run ByteBeat competitions and post on /r/bytebeat. Try composing your own using this web tool.
Neural vs ByteBeat
ByteBeat synthesis is a lot like neural synthesis: both are functions that generate a sequence of amplitudes. However, neural synthesis is HUGE AF. One SampleRNN model is 700MB, whereas ByteBeat functions are very short, usually under 1kb, sometimes just 32 characters. This makes them an ideal starting point for generative music synthesis where storage is scarce.
We are researching how to cross these two worlds. Training a character-level language model on ByteBeat function syntax is just the beginning. We aim to achieve an end-to-end differentiable audio-syntax model with a spectrogram loss, capable of audio2syntax, cramming incredible amounts of music into tiny spaces.