Generating dubstep from one line of code

Posted by Pastabagel.

Demoscene coder viznut has come up with what amounts to a new genre of music, bytebeat, in which music is generated by one-line computer programs:

The technical details are on his page, and they are worth reading. Unlike previous attempts at algorithmic or “genetic” music, the songs produced by viznut’s one-line programs actually sound like electronic or dubstep songs. An online sandbox allows you to experiment with your own. Here’s another one.
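For the curious, a bytebeat program is nothing more than an integer expression of a time counter t, evaluated for t = 0, 1, 2, … with the low 8 bits of each result taken as an unsigned audio sample. A minimal renderer (sketched here in Python; viznut’s originals are one-line C programs) looks like this:

```python
def bytebeat(formula, n_samples):
    """Render n_samples 8-bit samples from a bytebeat formula f(t)."""
    return bytes(formula(t) & 0xFF for t in range(n_samples))

# The "forty-two melody", one of the early published one-liners:
samples = bytebeat(lambda t: t * (42 & t >> 10), 8000)  # one second at 8 kHz

with open("out.raw", "wb") as f:  # raw unsigned 8-bit mono
    f.write(samples)
```

Played back as raw 8-bit mono at 8000 Hz (e.g. `aplay -r 8000 -f U8 out.raw` on Linux), the output is recognizably musical.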

The ability to generate recognizable music with math raises an interesting question: would it be possible to condense traditional music that sounds “mathematical”, like Bach’s inventions, to formulas like these? Would it then be possible to exchange or share music simply by trading formulas?


13 Responses to Generating dubstep from one line of code

  1. Dan Dravot says:

    What you’re describing is a fairly extreme case of data compression. So, yes, we’ve been doing essentially that for years.

    • Guy Fox says:

      Oh man, operator is gonna love this.

      Algorithmically speaking, some songs/styles are gonna be way easier to rewrite as a formula than others. Melodic music with really repetitive chord progressions, like Bach, The Ramones, Bruno Mars, Bob Marley, Katy Perry, B.B. King & co., should be much easier to formulate than stuff like Tchaikovsky, Patti Smith, Sonic Youth (esp. the tracks with Gordon on vocals), or even Jon Spencer. There would just be a lot less information to encode. The more regularity, the easier it should be, but I don’t know enough about the compression algorithms in use to know to what extent they actually convert tracks into formulas.
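That intuition is easy to check crudely with a general-purpose compressor (a toy illustration, not how real audio codecs work): the more repetitive the note sequence, the smaller it compresses.

```python
import zlib
import random

# Crude proxy for information content: zlib-compressed size.
repetitive = ("C-E-G-" * 200).encode()  # 1200 chars of looped arpeggio

random.seed(42)  # deterministic "unpredictable" sequence
chromatic = "".join(random.choice("CDEFGAB#b-") for _ in range(1200)).encode()

rep_size = len(zlib.compress(repetitive))
rnd_size = len(zlib.compress(chromatic))
# The looped sequence compresses to a small fraction of the erratic one.
```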

      The follow-up question about whether those formulas are intellectual property is interesting too. My guess would be that a series of mathematical expressions would be pretty hard to patent, but as soon as you distribute and profit from one as if it were a unique string, you can expect the sheriff to come knockin’.

  2. Oneironaut says:

    Boards of Canada actually have a song called Music is Math on the album Geogaddi. Amusingly, in February it will have been 10 years since its release. Coincidence?

    • operator says:

      co·in·ci·dence /kōˈinsədəns/

      1. A remarkable concurrence of events or circumstances without apparent causal connection.

      2. Correspondence in nature or in time of occurrence.

      So, by definition, it’s only a coincidence if you either believe that the universe is inherently chaotic and things like free will (and maybe animism!) exist, or there are no coincidences – only incidences for which no apparent cause is observed.

      I can’t do any more, I have to go now. Have a Merry Christmas!

  3. MattK says:

    There is actually a lot of work going on right now in algo music. You might also be interested in reading Gödel, Escher, Bach by Douglas Hofstadter. As part of the book he delves into the mathematical structures of Bach’s Musical Offering, etc. There is a read-along that has JUST started over on reddit.

  4. sdenheyer says:

    The programs generate samples, which is the musical equivalent of coding in assembly. Musicians work at a higher layer of abstraction – pitch. That is to say – you can play Bach on harpsichord or guitar, it doesn’t matter, you’re following a recipe for a song. In generative music, if you try to tweak the timbre or key or a single “note” of your song by changing a parameter of the program, the change cascades all the way through the output and you end up with something completely different, thanks to the non-linear nature of the process.
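The cascade effect is easy to demonstrate: nudge one constant in a bytebeat formula by 1 and roughly half of the output samples change, something a transposition on sheet music would never do (a sketch using the well-known `t*(42&t>>10)` formula):

```python
def render(formula, n=8000):
    """Evaluate a bytebeat formula over n time steps, keeping the low byte."""
    return [formula(t) & 0xFF for t in range(n)]

# The same formula with one constant nudged from 42 to 43:
a = render(lambda t: t * (42 & t >> 10))
b = render(lambda t: t * (43 & t >> 10))

differing = sum(x != y for x, y in zip(a, b)) / len(a)
# A one-bit tweak to a parameter rewrites a large fraction of the waveform.
```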

    I think the reason Bytebeat even sounds like music to begin with is because of the kinds of bifurcations you get. (It reminds me a bit of this.) I notice most of the songs start with simple pitches or beats and grow in complexity – which is a common feature in electronic music.

    In order to generate something that sounds Bach-like, you’d have to somehow encapsulate the boundary between pitch and timbre to make them independent. By the time you get there, you no longer have a compact algorithm, you basically have a programmable synthesizer. A piece of sheet-music or a computer program, at this level, can be viewed as a distinction without a difference.
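A hypothetical sketch of that “programmable synthesizer” (the names and note values below are made up for illustration): once pitch and timbre are independent parameters, the score becomes data that any instrument function can render.

```python
import math

SAMPLE_RATE = 8000

def render_note(freq_hz, dur_s, timbre=math.sin):
    """Render one pitch; the waveform function supplies the timbre."""
    n = int(SAMPLE_RATE * dur_s)
    return [timbre(2 * math.pi * freq_hz * t / SAMPLE_RATE) for t in range(n)]

# The "sheet music" is instrument-independent, like playing Bach
# on harpsichord or guitar:
score = [(261.63, 0.25), (329.63, 0.25), (392.00, 0.5)]  # C4, E4, G4

square = lambda x: 1.0 if math.sin(x) >= 0.0 else -1.0
sine_version = [s for f, d in score for s in render_note(f, d)]
square_version = [s for f, d in score for s in render_note(f, d, square)]
```

Changing the `timbre` argument swaps the instrument without touching the score; changing one frequency changes exactly one note.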

    Also, Bach’s stuff sounds mathy because of the way he applied math concepts to music – transforms, inversions and the like. This makes it possible to generate passable imitations (I believe it’s been done) – but Bach was still applying human judgement to achieve a goal-state – an emotion or mind-set he wanted to convey, or in the case of the Inventions, students he wanted to teach. In my view of consciousness, this is still an algorithm, but we don’t have the code yet.

    (I don’t want to seem snobby about Bach vs. Bytebeat – I think Bytebeat is genuinely interesting and exciting)

  5. Or says:

    Sheet music often turns out to be much simpler than you’d expect from just hearing a song. I’d imagine the notation for a lot of classical music could be compressed into strings even shorter than the programs that generate these bytebeat songs, but this is because the notation represents higher-level concepts and not exact waveforms. It takes a lot less memory to store the string “hello world” than to store a sound recording of me saying the words “hello world”.
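The arithmetic on that last comparison is stark: the text is 11 bytes, while even one second of telephone-quality audio (8000 samples per second, 8 bits each, mono) is 8000 bytes, hundreds of times larger.

```python
text_bytes = len("hello world".encode("utf-8"))  # 11 bytes

# Rough lower bound for a recording of the spoken phrase:
# ~1 second at telephone quality (8 kHz, 8-bit, mono).
audio_bytes = 8000 * 1 * 1

ratio = audio_bytes // text_bytes  # hundreds of times larger
```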

    The real test for an intelligent generative music system would be, for a string representing one distinct, engaging, meaningful piece of music, can you easily come up with a way to make a small change to that string that would generate a different piece of music of the same caliber? Can you do it with a system any less complex than the mind of an expert human improvisational musician?

    • operator says:

      If you model the process – and not the product – there’s a strong probability.

      • Or says:

        If I remember correctly, computers have had better success rates than that at identifying novelists solely by counting the frequency of certain words. How would we know whether they understood something more fundamental about the creative process?

        Great art walks a fine line between satisfying our pattern-based expectations and violating them in interesting ways. I think the process involves creating an internal model of your audience, a dialogue-like aspect that is irreducible. The beauty of art may reflect universal rules, but it also refracts them through the peculiar structures of our minds.
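The word-counting technique Or refers to is stylometry, and its core really is this simple: tally how often each author uses common function words and compare the resulting profiles (a toy sketch; real attribution studies use many texts and a proper classifier).

```python
from collections import Counter

FUNCTION_WORDS = {"the", "of", "and", "to", "a", "in", "that", "it", "was", "but"}

def profile(text):
    """Relative frequencies of common function words: a crude style fingerprint."""
    words = text.lower().split()
    counts = Counter(w for w in words if w in FUNCTION_WORDS)
    return {w: counts[w] / len(words) for w in FUNCTION_WORDS}

def distance(p, q):
    """Total absolute difference between two fingerprints."""
    return sum(abs(p[w] - q[w]) for w in FUNCTION_WORDS)
```

Two texts by the same author tend to have a smaller `distance` between their profiles than texts by different authors.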

        But anyway, the real reason I came back to this thread was to post this link:

    • sdenheyer says:

      If you’re dealing with a smallish number of instruments, like chamber music or solo instruments, yes, I suspect the rate of music produced per second per length of string is about equal. If it’s an orchestra, my intuition says you need much more bandwidth. Likewise, you can make Bytebeat sound more “layered” (akin to more instruments) by adding more code.

      So, modelling the process, as Operator suggests, if you take a pure “algorithm mining” approach like Bytebeat and explore the search space in a random walk, you could probably come up with some very parsimonious ways to come up with some very interesting stuff.

      If you want to say to a computer “write me a sad song”, and have it produce anything meaningful, you need to build many more assumptions into your code. NB – there is a bias among people to assume our intuitive processes are very simple (I’m thinking of the story of the summer project given to AI researchers to model the visual system).