Data-Over-Music: How Do Data-Over-Sound Protocols Like “Gibberlink” Work?

A few days ago, I came across a video of two AIs communicating over the Gibberlink protocol, which seems to have sparked a sci-fi scare. As a linguist and cognitive scientist, I see this not as a threat but as a fascinating exploration of how data can be structured and transmitted. Inspired by it, I created the Jazzy Protocol, a playful experiment that turns data-over-sound into music, making the concept easier to explain and more approachable. In this post, I’ll break down how these protocols work and share my own take on transforming data into something both technical and artistic.

A couple of days ago, I came across a video of two AIs communicating over the “Gibberlink” protocol. Many media outlets (mostly tabloids) and social media discussions framed this as the “end” of humanity’s understanding of AI, claiming that the AIs had developed a more complex and efficient communication system than our “traditional” linguistic systems.

You may have noticed the abundance of quotation marks in the paragraph above; those claims deserve some skepticism. Still, Gibberlink, which is built on the ggwave library, is an impressive protocol that enables two AIs to communicate more securely and efficiently.

I completely understand the fear surrounding it. Perhaps it sounds too much like a dial-up handshake. That’s why I developed an application that I hope will both humanize data-over-sound protocols and make them easier to understand.

I call this system Data-Over-Music, or the Jazzy Protocol, because it creates random chords that sometimes sound musical. Below, I will explain how data-over-sound works through an example project I created. Unlike Gibberlink, this protocol is not meant to be an efficient or secure means of communication. It is, instead, an artistic computational-linguistics experiment.

Data-Over-Sound

Data is fascinating. At its core, it is just a sequence of ones and zeros, even if we don’t usually think of it that way. But how that data moves from one place to another is just as intriguing.

We are used to Wi-Fi, Bluetooth, and even infrared (the technology behind your TV remote) as ways to transfer data. But what if sound could do the same job? That’s where protocols like Gibberlink shine.

Gibberlink is built on ggwave, a library designed for transmitting data over audio frequencies. The idea isn’t entirely new—modems used similar techniques back in the day, turning digital signals into sound and back again. If you’ve ever heard the screech of a dial-up connection, you’ve already experienced this concept in action. But where modems were loud and slow, Gibberlink is optimized for modern use: it can encode messages into sound waves that are almost imperceptible to the human ear, making it an efficient and secure way for machines to communicate.
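The modem analogy can be sketched in a few lines of code. The toy encoder below maps each 4-bit nibble of the data to one of sixteen tones and concatenates short sine bursts. Every parameter here (sample rate, burst length, base frequency, frequency step) is an arbitrary assumption for illustration, not ggwave’s actual scheme:

```python
import math

# Toy modem-style encoder: each 4-bit nibble picks one of 16 tones.
# All parameters below are illustrative assumptions, not ggwave's.
SAMPLE_RATE = 44100   # samples per second
TONE_SEC = 0.05       # duration of one tone burst
BASE_HZ = 1000.0      # tone for nibble value 0
STEP_HZ = 100.0       # nibble n sounds at BASE_HZ + n * STEP_HZ

def encode(data: bytes) -> list:
    """Turn bytes into a flat list of audio samples, one burst per nibble."""
    n = int(SAMPLE_RATE * TONE_SEC)
    samples = []
    for byte in data:
        for nibble in (byte >> 4, byte & 0x0F):  # high nibble, then low
            freq = BASE_HZ + nibble * STEP_HZ
            samples.extend(math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
                           for i in range(n))
    return samples

signal = encode(b"hi")  # 2 bytes -> 4 nibbles -> 4 tone bursts
```

A receiver would do the reverse: detect which of the sixteen frequencies is present in each burst and reassemble the nibbles into bytes.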

Of course, when people hear about two AIs talking over a seemingly incomprehensible protocol, it triggers a bit of a sci-fi panic. That’s why I created Jazzy Protocol—not to compete with Gibberlink, but to explore data-over-sound in a way that’s more approachable, more musical, and, well, more fun.

There are other aspects involved, such as the Fast Fourier transform, but I am skipping over the details to provide a simple explanation with an example. Maybe in another post I can provide extra context on how the musical aspect of these methods works.

The Jazzy Protocol

“if i was a rich girl” in the Jazzy Protocol

The program I created encodes one byte per chord, with each chord consisting of eight notes—one note for each bit. In other words, a chord directly corresponds to a byte in the encoded data.
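That byte-to-chord mapping is easy to sketch. The snippet below assumes bit i of a byte selects the i-th of eight note frequencies, with a set bit meaning the note sounds in the chord; the post lists sixteen notes, so exactly which eight carry the bits is my guess:

```python
# Hypothetical byte-to-chord mapping: bit i set -> i-th note sounds.
# Which eight of the sixteen listed notes carry the bits is an assumption.
CHORD_NOTES = [261.63, 293.66, 329.63, 349.23,   # C4 D4 E4 F4
               392.00, 440.00, 493.88, 523.25]   # G4 A4 B4 C5

def byte_to_chord(byte: int) -> list:
    """Return the frequencies of the notes whose corresponding bit is 1."""
    return [CHORD_NOTES[i] for i in range(8) if byte & (1 << i)]

# 'i' is 0b01101001, so bits 0, 3, 5, and 6 are set:
print(byte_to_chord(ord("i")))  # [261.63, 349.23, 440.0, 493.88]
```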

To test it, I encoded the message “if i was a rich girl” using this system and then decoded it with a separate program designed specifically for this encoding. The process starts with a designated signal at 600.0 Hz to mark the beginning of transmission and ends with a “Roger” signal at 1000.0 Hz to indicate completion.

For encoding the actual data, I mapped the classical Western musical scale to specific frequencies:

NOTE_FREQS = {
    'C4': 261.63, 'D4': 293.66, 'E4': 329.63, 'F4': 349.23,
    'G4': 392.00, 'A4': 440.00, 'B4': 493.88, 'C5': 523.25,
    'D5': 587.33, 'E5': 659.25, 'F5': 698.46, 'G5': 783.99,
    'A5': 880.00, 'B5': 987.77, 'C6': 1046.50, 'D6': 1174.66
}
ROGER_SIGNAL = 1000.0  # end-of-transmission ("Roger") signal
START_SIGNAL_FREQ = 600.0  # start-of-transmission signal

This way, each note represents a bit, and each full chord forms a byte, turning data into music.

The program then plays the sound for you and saves it as a file.
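A minimal encoder along these lines, using only the Python standard library, might look as follows. The chord duration, the normalization, and the choice of which eight notes carry the bits are my assumptions; the framing frequencies match the constants above:

```python
import math
import struct
import wave

SAMPLE_RATE = 44100
CHORD_SEC = 0.25  # assumed duration of each chord and framing signal
CHORD_NOTES = [261.63, 293.66, 329.63, 349.23,
               392.00, 440.00, 493.88, 523.25]  # bit i -> i-th note (assumed)
START_SIGNAL_FREQ = 600.0
ROGER_SIGNAL = 1000.0

def tone(freqs, seconds=CHORD_SEC):
    """Mix the given frequencies into one burst of samples, normalized to [-1, 1]."""
    n = int(SAMPLE_RATE * seconds)
    k = max(len(freqs), 1)
    return [sum(math.sin(2 * math.pi * f * i / SAMPLE_RATE) for f in freqs) / k
            for i in range(n)]

def encode_message(text):
    """Start signal, one chord per ASCII byte, then the Roger signal."""
    samples = tone([START_SIGNAL_FREQ])
    for byte in text.encode("ascii"):
        chord = [CHORD_NOTES[i] for i in range(8) if byte & (1 << i)]
        samples += tone(chord)
    samples += tone([ROGER_SIGNAL])
    return samples

def save_wav(path, samples):
    """Write the samples as a mono 16-bit PCM WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        w.writeframes(b"".join(struct.pack("<h", int(s * 32767))
                               for s in samples))

samples = encode_message("hi")
save_wav("jazzy_hi.wav", samples)
```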

The decoder program waits for the starting signal and begins decoding upon hearing it. It then buffers the incoming data and translates it into ASCII until it hears the “Roger” signal.
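On the decoding side, the core step is deciding which notes are present in each chord. A full decoder would also slice the audio stream on the start and “Roger” signals; the sketch below just measures the energy at each candidate note frequency with a single-bin DFT (a naive stand-in for the FFT) and keeps the bits whose notes are loud relative to the strongest one. The note list and the threshold rule are assumptions:

```python
import math

SAMPLE_RATE = 44100
CHORD_NOTES = [261.63, 293.66, 329.63, 349.23,
               392.00, 440.00, 493.88, 523.25]  # bit i -> i-th note (assumed)

def tone_energy(samples, freq):
    """Energy of `samples` at one frequency via a single-bin DFT."""
    w = 2 * math.pi * freq / SAMPLE_RATE
    re = sum(s * math.cos(w * i) for i, s in enumerate(samples))
    im = sum(s * math.sin(w * i) for i, s in enumerate(samples))
    return (re * re + im * im) / len(samples)

def chord_to_byte(samples):
    """Set bit i if the i-th note is loud relative to the strongest bin."""
    energies = [tone_energy(samples, f) for f in CHORD_NOTES]
    threshold = max(energies) / 4
    return sum(1 << i for i, e in enumerate(energies) if e > threshold)

# Round-trip check on a synthesized 0.1 s chord for 'i' (bits 0, 3, 5, 6):
chord = [sum(math.sin(2 * math.pi * f * i / SAMPLE_RATE)
             for f in (261.63, 349.23, 440.00, 493.88))
         for i in range(4410)]
print(chr(chord_to_byte(chord)))  # i
```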

Here is a spectrogram of the sound file, which shows more clearly what is going on:

Each time interval has a chord that corresponds to a byte of data. You can clearly see the starting signal and the “Roger” signal at the end as well.


A Simpler Example – Bit-by-Bit

This is a simpler program I made that uses individual notes as bits of data. I deleted the original encoder while experimenting with possible encodings, but this version reads the text “hi”.

The way the program works is pretty similar to how punched cards worked.

For those who do not know, a punched card is a piece of cardboard that stores digital data physically, like a very basic CD or USB drive. Punched cards were very popular in the earlier days of computing, though they were limited by their physical size. Likewise, in experiments such as Gibberlink, we are limited by the sound spectrum. Years later, you can store data the same way using sound. Of course, protocols like Gibberlink are far more efficient than those old punched cards: they are faster and at least as readable, if not more so.
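The bit-by-bit variant is even simpler to sketch: every bit becomes its own short tone, one frequency for 0 and another for 1, much like a punched card with one hole position per bit. The two frequencies below are arbitrary assumptions:

```python
# Hypothetical bit-by-bit scheme: one tone per bit, least-significant bit first.
FREQ_ZERO = 400.0  # assumed tone for a 0 bit
FREQ_ONE = 800.0   # assumed tone for a 1 bit

def text_to_bit_freqs(text: str) -> list:
    """Return one frequency per bit of the ASCII-encoded text."""
    return [FREQ_ONE if byte >> bit & 1 else FREQ_ZERO
            for byte in text.encode("ascii")
            for bit in range(8)]

# 'h' is 0b01101000, so LSB-first the first eight tones are:
print(text_to_bit_freqs("hi")[:8])
# [400.0, 400.0, 400.0, 800.0, 400.0, 800.0, 800.0, 400.0]
```

Playing each frequency in sequence and detecting it on the other end yields a working, if slow, one-note-per-bit channel.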

All in all, do not be afraid of technology. Something that creates panic can also inspire you to create something fun. With this little experiment, I hope I have humanized this aspect of AI for you a little; for me, it has been a fun little project. I will upload the full source code for the encoder and the decoder if anyone is interested. Cheers!

11.03.25
