The Future Of Grateful Dead Music

Friends and fellow Deadheads, we are about to enter a new era for Grateful Dead music. And by that, I don’t mean Dead cover bands, spin-offs, or associated acts. I mean actual Grateful Dead music from 1966 through 1995. This music has properties that make it unique (no surprise there, Deadheads) for AI and machine learning. In this post I am going to look into my crystal ball and try to predict the future. I suspect these things will arrive sooner than you think, because much of what I describe is technology that already half-exists.

The Baseline

Since I’m going to be speculative, let’s start by trying to keep it real and grounded. Computers can do many things, but for the purposes of this post I am not going to give them superhuman skills. Everything I suggest could in theory be done by humans, or is already “out there” in modern AI research.

The Close Future: Fake stuff

AI already does things that are impressive. Your mobile phone can likely run an app that will turn a photo of you into someone of the opposite gender. It is already possible to do this with voices, and we are not far off from an app that can change your voice into Jerry’s and be 100% convincing.

From there, it is not too far to the world of deepfakes, where video can be manipulated to make people say anything. Take a look at the well-known Obama deepfake video. In 2–5 years this technology will be accessible to the public (some would say it already is) – your social media stream will start to have videos of Jerry wishing you “happy new year 2022” or “smoke Dan’s hippy weed for maximum buzz”. It’s unavoidable.

Images on the left are real, on the right fake

Cleaning Up What We Have

Technology already exists to clean up and improve video and images. I have code that can automatically colour in old Grateful Dead video. I have attempted to increase resolution in video, but that is some way off. I have done this with single images. The first time I did this with a picture of Jerry I really did stare at the screen for a while. Jerry in resolution that was simply not available when he was alive.

Original image (left) enhanced with increased resolution using machine learning

Currently this technology does not exist for sound, but it is coming. The benefit will be less hiss and cleaner sounding tapes. But what counts as cleaner? Remember that computers will take “bad sound” and make it “good”, but what they add is not real Grateful Dead music. Soon, though, apps will come along that allow you to mix and fiddle with audio to an unprecedented degree. “Barton Hall in quadraphonic stereo with enhanced bass, vocals at double CD quality remaster” is coming. Is that music better than what we have now? That is a hard call to make.
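For a concrete taste of what “cleanup” means, here is a minimal sketch of hiss reduction using classical spectral gating: any frequency bin far quieter than the loudest bin gets zeroed out. This is plain signal processing, not machine learning, and the function name and threshold are my own choices for illustration; learned restoration models work on the same raw material but decide what to remove from data.

```python
import numpy as np

def spectral_gate(audio, frame=512, floor_db=40.0):
    """Zero spectral bins more than `floor_db` below the loudest bin.

    Classical noise gating, sketched for illustration; a learned
    restoration model would decide what to remove from training data.
    """
    cleaned = np.zeros_like(audio)
    window = np.hanning(frame)
    for start in range(0, len(audio) - frame, frame // 2):
        chunk = audio[start:start + frame] * window
        spectrum = np.fft.rfft(chunk)
        level_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
        spectrum[level_db < level_db.max() - floor_db] = 0  # gate the hiss
        cleaned[start:start + frame] += np.fft.irfft(spectrum, frame)
    return cleaned

# A tone buried in tape hiss: the gate keeps the tone, drops the noise.
np.random.seed(0)
t = np.linspace(0, 1, 8192)
signal = np.sin(2 * np.pi * 440 * t)
noisy = signal + 0.01 * np.random.randn(len(t))
cleaned = spectral_gate(noisy)
```

The overlapping Hann windows let the frames add back together smoothly; the hard threshold is exactly the kind of blunt instrument a trained network would improve on.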

This technology will likely not change the audio you actually own or download; the filters can live in the player. This means that your friend who really likes Brent will be able to turn him up whenever they play a show, and this will be more than a graphic equalizer: it will alter the sound of the keyboards alone.
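To make the Brent example concrete: once a source-separation model (the hard, learned part, which I am assuming here) has split a show into per-instrument stems, the player-side “filter” is just a weighted sum. The stem names below are hypothetical.

```python
import numpy as np

def remix(stems, gains):
    """Re-mix separated stems with per-instrument linear gains.

    `stems` maps instrument name -> audio array (all the same length);
    `gains` maps instrument name -> gain. The names are hypothetical:
    a real player would get the stems from a source-separation model,
    which is the hard part this post is pointing at.
    """
    mix = np.zeros_like(next(iter(stems.values())), dtype=float)
    for name, audio in stems.items():
        mix += gains.get(name, 1.0) * audio
    return mix

n = 4
stems = {"guitar": np.ones(n), "keys": np.ones(n), "drums": np.ones(n)}
boosted = remix(stems, {"keys": 2.0})  # turn Brent up, leave the rest alone
```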

Transferring Style

So I said at the start that we will try not to give the computer mystic powers. But you’ll need a (very!) brief primer on what is known as domain transfer to understand the next part.

Domain transfer is where we take one thing that is categorised as being of type X, and we want to make it type Y. An app that changes photos of you to a different gender is the best example of this. To achieve this magic trick we start by creating a neural network. A neural network is something like your brain but running on a computer. There are thousands of connections between neurons, and each of these connections has a “weight”, a number that the network adjusts as it learns.

We train a neural network by feeding it a lot of data. It will learn to react to differences between data, and these differences will be reflected in the network weights. If we then take the average of these weights for 2 types of data, we can “move” one set of data towards the other by reversing the neural network, so it thinks “backwards”, and then changing the weights so they are more like the ones for the other set of data.
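Here is a minimal sketch of what “training” means, using the smallest possible network: a single neuron whose weight and bias are nudged by gradient descent until it can separate two clusters of toy data. Real networks stack thousands of these, but the mechanics are the same. All the numbers below are my own toy choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: "type X" points cluster around -2, "type Y" around +2.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(2, 0.5, 100)])
y = np.concatenate([np.zeros(100), np.ones(100)])  # 0 = X, 1 = Y

# A one-neuron "network": weight w and bias b, trained by gradient descent.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    p = sigmoid(w * x + b)          # the network's guess for each point
    grad_w = np.mean((p - y) * x)   # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                # nudge the weights downhill
    b -= lr * grad_b

accuracy = np.mean((sigmoid(w * x + b) > 0.5) == y)
```

After training, the single weight has absorbed the difference between the two clusters, which is exactly the raw material that domain transfer exploits.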

Some pictorial examples may help:

Examples of style transfer

So what do we need to do this? Simple: a discriminator that can tell the difference between 2 sets of data. That is all. I said earlier that we would not rely on computers being super-human, so let’s have a think about what differences there could be in the music of the GD. For example:

  • Can you tell the difference between 60’s and 80’s shows?
  • Easy to tell a Bob song from a Jerry one?
  • Can you normally spot an audience from a soundboard?
  • Is it easy to tell the difference from a Brent solo to a Jerry one?

In these situations a style transfer may be possible, because we have a LOT of audio data to work with. Audience to soundboard is the obvious style transfer, and somebody, somewhere will start doing it. One day you will be hearing an SBD of 8th May 1970. What it might sound like is currently anyone’s guess though.
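For a flavour of the simplest possible AUD-to-SBD mapping, here is a non-neural sketch: estimate a per-frequency gain curve by comparing the average spectrum of an audience tape against a soundboard reference, then apply it as an EQ. This is matching EQ, decades older than deep learning, and the function is my own illustration; a neural style transfer would learn a far richer mapping than a single gain per frequency.

```python
import numpy as np

def matching_eq(aud, sbd_reference, frame=1024):
    """Crude AUD-to-SBD "style transfer": per-frequency gain matching.

    Compares the average magnitude spectrum of an audience recording
    with a soundboard reference and applies the ratio as an EQ curve.
    A far cry from a neural style transfer, but the same core idea:
    learn a mapping between two domains from the recordings themselves.
    """
    def avg_spectrum(audio):
        frames = [np.abs(np.fft.rfft(audio[i:i + frame]))
                  for i in range(0, len(audio) - frame + 1, frame)]
        return np.mean(frames, axis=0)

    gains = avg_spectrum(sbd_reference) / (avg_spectrum(aud) + 1e-12)
    out = np.zeros_like(aud, dtype=float)
    for i in range(0, len(aud) - frame + 1, frame):
        out[i:i + frame] = np.fft.irfft(np.fft.rfft(aud[i:i + frame]) * gains, frame)
    return out

# Toy check: an "audience tape" that is just the soundboard at half
# volume gets mapped straight back onto the soundboard's spectrum.
n = 4096
sbd = np.sin(2 * np.pi * 10 * np.arange(n) / 1024)
aud = 0.5 * sbd
restored = matching_eq(aud, sbd)
```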

Creating New Stuff

Have you ever seen a GD covers band? The fact that they exist tells you that emulating the Dead is possible. Computers will take this to the extreme. I said in the last section that we need a discriminator that can tell the difference between 2 things. Once you have such a thing, you can invert the neural network to make something that produces output of that type, its tutor being the discriminator. Suppose we choose a discriminator that can tell the difference between Grateful Dead music and other music. Then the output of the inverted network is – after a large amount of training – music that cannot be distinguished from the Grateful Dead. This has already been done with images:
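The “inverted network” idea can be sketched in miniature. Below, a one-neuron discriminator is trained to tell two toy classes apart; then, holding its weights fixed, a random input is nudged uphill along the discriminator’s own gradient until it is confident the input belongs to the target class. Real systems (GANs) train a whole second network, the generator, to do this in a single pass; everything here is a toy illustration with made-up numbers.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)

# Train a one-neuron discriminator: class 1 ("sounds like the Dead")
# clusters at +3, class 0 ("everything else") clusters at -3.
x = np.concatenate([rng.normal(-3, 0.5, 200), rng.normal(3, 0.5, 200)])
y = np.concatenate([np.zeros(200), np.ones(200)])
w, b = 0.0, 0.0
for _ in range(300):
    p = sigmoid(w * x + b)
    w -= 0.1 * np.mean((p - y) * x)
    b -= 0.1 * np.mean(p - y)

# "Invert" it: freeze the weights and climb the gradient of
# log d(sample) with respect to the INPUT, so a random starting
# point drifts towards whatever the discriminator calls class 1.
sample = rng.normal(0.0, 1.0)
for _ in range(200):
    d = sigmoid(w * sample + b)
    sample += 0.1 * (1.0 - d) * w  # gradient of log d w.r.t. the input

score = sigmoid(w * sample + b)  # discriminator is now fooled/convinced
```

The discriminator never generates anything itself; it only supplies the direction of “more like class 1”, which is exactly the tutor role described above.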

This person has never existed, and was generated by a neural network. This will happen with music in the future

Now, you cannot do this for most music because there is not enough digital information. There’s a limited amount of Beatles audio, for example, and versions of individual songs are nearly identical. Compare that with the Dead and their vast trove of recorded material. Conceptually, you can already conceive of some other intelligence producing new Grateful Dead music and jams – go check Dark Star Orchestra. Computers are just going to accelerate that to the point where you will not be able to tell the difference between real GD and fake.

The Good, the Bad and the Ugly

This all comes with a caveat: the first examples you see of these technologies will not be pretty. I already have, on my home computer, artificially synthesized Grateful Dead music that was never played by the band. If it were any good, I would already be posting it here. It will take time, but it will come.

Technology will also give us bad things. Don’t be surprised to see things that really boil your blood – a video of Jerry expressing support for Donald Trump is not far away, and we could swap Trump for Obama, Hitler or even Nickelback.

But ultimately, it should bring good things. There are currently around 2000 shows to listen to. I do not see any strong reason why, in the future ahead of us, a computer cannot produce a few more. And no particular reason not to try.

“Gone are the days we stopped to decide where we should go, we just ride”.
