No no, it's all good. It's cool with me to discuss other demoscene related things in this thread.
As for your question, I think there are only a few ways to do this.
If you open the glsl files with Notepad, you'll see they didn't use a shader for the mesh transformation. So my bet is that they generate the meshes in real time on the CPU and deform them according to the music, and probably also according to that 'noise' texture.
Of course, I'm only assuming this based on the source files we're actually able to see.
Cheers