Well, that's not entirely true, Vent. You obviously can't do it at the full simulation level, but you can emulate it:
a) Create your musical instrument. Let's take a hollow cylinder with one end closed; this will be our "pan flute". You blow air across the lip of the open end and a tone hopefully ensues.
b) Pick a fixed spot where you want to hear the sound produced. Let's keep the spot fixed for now and not make it dynamic, for ease of discussion and computation.
c) In the spirit of finite element analysis, pepper the inside of the flute with as many single-quant entities as you can, placed a fixed distance apart, say 3 or 4 quants. Register these entities with the PE with zero friction, zero drag, and zero gravity. Since we are considering a homogeneous gas, mass and molecule volume are irrelevant, so set mass to unity. This internal grid will be the "air".
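A minimal sketch of step (c) in Python, using plain arrays rather than any particular physics engine. The cylinder dimensions and grid spacing are made-up illustration values, not anything from a real PE:

```python
import numpy as np

# Fill the closed cylinder with a regular grid of unit-mass "air"
# particles. The 2 cm diameter, 20 cm length, and 4 mm spacing are
# arbitrary illustration numbers.
SPACING = 0.004            # fixed distance between entities (m)
RADIUS, LENGTH = 0.01, 0.20

points = []
for x in np.arange(-RADIUS, RADIUS, SPACING):
    for y in np.arange(-RADIUS, RADIUS, SPACING):
        if x * x + y * y > RADIUS * RADIUS:
            continue       # outside the cylinder wall
        for z in np.arange(0.0, LENGTH, SPACING):
            points.append((x, y, z))

positions = np.array(points)             # the internal "air" grid
velocities = np.zeros_like(positions)    # zero drag/friction/gravity
mass = 1.0                               # unity, as argued above
```

The particles start at rest; only the lip driving in step (d) puts energy into the grid.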
d) Take the entities at the lip of the flute and vibrate them by giving them pushes and pulls. This drives the open lip of the cylinder, and depending on which and how many lip air molecules you drive, you will emulate different air-flow patterns across the lip. These lip air molecules must be vibrating constantly for this to work.
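The push/pull driving in step (d) might look like this in Python: a sinusoidal force applied to a lip particle each time step. The 440 Hz frequency, amplitude, and time step are hypothetical choices for illustration:

```python
import math

FREQ = 440.0          # driving frequency (Hz), arbitrary choice
AMPLITUDE = 1.0       # force amplitude, arbitrary units
DT = 1.0 / 44100.0    # time step (s)

def lip_force(t):
    """Oscillating push/pull applied to a lip entity at time t."""
    return AMPLITUDE * math.sin(2.0 * math.pi * FREQ * t)

# With unit mass, F = m*a means each step's velocity kick is F * dt.
lip_velocity = 0.0
history = []
for step in range(100):
    lip_velocity += lip_force(step * DT) * DT
    history.append(lip_velocity)
```

Varying which lip particles get this force, and with what phase, is what emulates the different air-flow patterns across the lip.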
e) Assuming your grid size is adequate and allows for air-molecule collisions (i.e. the air is dense enough to transmit sound), the driven air particles collide with their neighbors and, on down the line, with all the other air particles inside the flute as well as the walls of the flute itself. This in and of itself will not produce a tone, since all the particles are moving and colliding randomly. You will need to drive the lip of the flute properly to get resonance...
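To see how a kick at the lip travels down the line of neighbors, here is a toy 1-D version of step (e). It relies on the fact that a head-on elastic collision between equal masses simply swaps the two velocities; the 10-particle chain is a stand-in for the full grid:

```python
# A 1-D chain of equal-mass air particles. One sweep of pairwise
# elastic collisions (velocity swaps) passes the lip's kick along
# the chain, which is the mechanism that carries sound to the walls.
n = 10
v = [0.0] * n
v[0] = 1.0      # the driven lip particle gets a push

for i in range(n - 1):
    if v[i] > v[i + 1]:          # particles closing on each other
        v[i], v[i + 1] = v[i + 1], v[i]   # equal-mass elastic swap

# After the sweep, the kick has reached the far end of the chain.
```

In the real 3-D grid the collisions are not head-on and the motion is noisy, which is why only a properly driven lip produces an organized standing wave rather than random jostling.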
f) So far this has been an accurate simulation of what happens inside an instrument. Now we need to emulate the sound aspect. Remember, we are at a fixed position relative to the flute. The key is that every time one air entity collides with another, we mathematically calculate the waveform I would receive at my fixed position and add it to the waveform from every other collision. It is this resultant waveform that is then played to the listener at the fixed spot. This last bit of coding is the fuzziest to me, but we are in essence relying on the air-particle simulation to tell us what is "sounding" and what isn't, and then we use the Principle of Superposition to put these tones together at our fixed point and hopefully "hear" a sound.
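One way the superposition in step (f) could be coded: treat each collision as a tiny spherical source whose contribution arrives at the listener delayed by distance/c and attenuated by 1/r. The collision list, sample rate, and speed-of-sound handling here are all hypothetical illustration values:

```python
import numpy as np

C = 343.0                 # speed of sound in air (m/s)
RATE = 44100              # output samples per second
listener = np.array([0.0, 0.0, 1.0])   # the fixed listening spot

# (time of collision, position, amplitude) -- made-up example events
collisions = [
    (0.000, np.array([0.0, 0.0, 0.0]), 1.0),
    (0.001, np.array([0.0, 0.0, 0.1]), 0.5),
]

out = np.zeros(RATE // 10)            # 0.1 s of output waveform
for t0, pos, amp in collisions:
    r = np.linalg.norm(listener - pos)
    sample = int((t0 + r / C) * RATE)  # arrival time at the listener
    if sample < len(out):
        out[sample] += amp / r         # superpose the attenuated pulse

# `out` is the resultant waveform played to the listener.
```

With millions of collisions per frame this brute-force sum would be the expensive part; binning collisions into output samples, as above, at least keeps the mixing itself cheap.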
Just an idea off the top of my head!