You are describing exactly what GSX has always done, since it came out: positional audio tied to the actual 3D position of all its sound sources, and yes, it supports multichannel audio.
Every object that emits sound in GSX does so in 3D space, with proper attenuation and direction.
However, for positional audio to work correctly, it needs two things:
- The 3D position of the sound emitter, which we obviously have and use, since we create our vehicles and know where they are
- The 3D position of the *listener*, meaning where your "ears" are.
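To make the two requirements concrete, here's a minimal sketch of how an emitter position, a listener position and a listener heading combine into distance attenuation and stereo pan. This is purely illustrative, not GSX's actual audio code; the function name, the inverse-distance gain curve and the coordinate convention (yaw 0 = facing +z) are all assumptions for the example.

```python
import math

def positional_audio_params(emitter, listener_pos, listener_yaw_deg):
    """Illustrative only: compute simple distance attenuation and stereo
    pan for a point sound source, given the emitter position, the
    listener position and the listener's heading (yaw, degrees)."""
    dx = emitter[0] - listener_pos[0]
    dy = emitter[1] - listener_pos[1]
    dz = emitter[2] - listener_pos[2]
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)

    # Inverse-distance attenuation, clamped so very close sources stay at full volume.
    gain = 1.0 / max(distance, 1.0)

    # World-frame bearing of the source (0 = straight ahead when yaw is 0),
    # then made relative to where the listener is facing.
    bearing = math.degrees(math.atan2(dx, dz))
    relative = (bearing - listener_yaw_deg + 180.0) % 360.0 - 180.0
    pan = math.sin(math.radians(relative))  # -1 = full left, +1 = full right
    return gain, pan
```

For example, a vehicle 10 m due east of a listener facing north comes out attenuated to a tenth of full volume and panned hard right. The point is that the pan value depends on *both* inputs in the list above: drop either one and the result is wrong.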
And this is where there's a problem in MSFS. In P3D we always had the correct position of the listener, because the Camera API allowed us to get the actual position of the camera/eyepoint (which, for this purpose, can safely be assumed to be the same as your ears). MSFS has no such Camera API, so we don't know where your eye/ear is located, and the listener position can only be approximated with the airplane's default viewpoint. Since we don't have data about the camera rotation either, we don't know which direction you are looking/listening in.
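A short sketch of why the missing camera rotation matters: with the same source position, the pan the engine computes is only correct if it knows where the listener is facing. The scenario and numbers below are made up for illustration; this is not how any sim exposes these values.

```python
import math

def pan_for(source_bearing_deg, listener_yaw_deg):
    """Illustrative only: stereo pan (-1 = left, +1 = right) of a source
    at a given world bearing, for a listener facing listener_yaw_deg."""
    relative = (source_bearing_deg - listener_yaw_deg + 180.0) % 360.0 - 180.0
    return math.sin(math.radians(relative))

# A ground vehicle sits due east of the aircraft (world bearing 90 degrees).
# The pilot has panned the camera to look east, so the vehicle is dead ahead:
true_pan = pan_for(90.0, 90.0)     # centered
# Without a Camera API the listener must be assumed to face the default
# forward view (yaw 0), so the same vehicle is rendered hard right instead:
assumed_pan = pan_for(90.0, 0.0)   # full right
```

Distance attenuation still roughly works with an approximated listener position, but left/right placement can be off by the full amount of the unknown camera rotation.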
In FSX/P3D, positional audio was just perfect (in FSX we read the current eyepoint by hacking into memory, since there was no Camera API there either; we can't do that in MSFS). In MSFS, not so much.
We haven't changed the audio code, hoping the issue will be fixed: a Camera API is currently the most-requested feature on the Asobo Developers forum, so we are hopeful it will be added.