Audio Framework

From MythTV Official Wiki
Revision as of 17:50, 5 October 2012 by Stevegoodey (talk | contribs)


This page mainly refers to MythTV 0.24 and later.

Using the MythTV audio framework will greatly simplify your task should you want to work with sound. The audio framework lets your application/plugin work on any platform and any supported hardware, without the developer even needing to know what those are.

Opening the audio:

   static AudioOutput *OpenAudio(
       const QString &audiodevice, const QString &passthrudevice,
       AudioFormat format, int channels, int codec, int samplerate,
       AudioOutputSource source, bool set_initial_vol, bool passthru,
       int upmixer_startup = 0);

audiodevice is the name of the audio device to open; you would typically retrieve it with:

gCoreContext->GetSetting("AudioOutputDevice")


passthrudevice is the name of the digital audio device, if any. It can be an empty string, "default", "auto" or an actual device name. Typically it would be:

gCoreContext->GetSetting("PassThruOutputDevice").

If "default", the same device as audiodevice will be used. If "auto", the same device as audiodevice will be used, but the framework will try to configure it for digital passthrough. The use of "auto" (or QString::null) is highly recommended unless you know what you are doing.

To respect the user configuration, you would do something like:

passthrudevice =  gCoreContext->GetNumSetting("PassThruDeviceOverride", false) ?
       gCoreContext->GetSetting("PassThruOutputDevice") : QString::null;

format is the format of the samples you are about to play. It can be one of the following:

   FORMAT_NONE,
   FORMAT_U8,
   FORMAT_S16,
   FORMAT_S24LSB,
   FORMAT_S24,
   FORMAT_S32,
   FORMAT_FLT

(mythtv 0.23 and earlier can only handle FORMAT_U8 and FORMAT_S16)

channels is the number of audio channels you will want to play.

codec is the codec_id as used by ffmpeg/libavcodec; it is mainly used for logging purposes.

set_initial_vol is a boolean indicating whether the mixer volume should initially be set as configured in the general mythfrontend settings.

passthru is a boolean indicating whether the audio being played consists of digital (encoded) frames.
Currently supported are:
0.24: AC3 and DTS
0.25: AC3, DTS, E-AC3, TrueHD, DTS-HD Hi-Res and DTS-HD MA

samplerate is the sampling rate of the audio data you are going to play.

source is a value of:

   AUDIOOUTPUT_UNKNOWN,
   AUDIOOUTPUT_VIDEO,
   AUDIOOUTPUT_MUSIC,
   AUDIOOUTPUT_TELEPHONY,

This is a legacy argument and isn't currently used.

upmixer_startup is an optional value defining the upmixer startup mode:

   0: use the general audio configuration
   1: upmixer is disabled
   2: upmixer is enabled

You do not have to worry about what your hardware supports; the audio framework will automatically downmix as required.

OpenAudio returns a pointer to an AudioOutput instance; if the value is null, the call failed. A non-null value means the object was created, but that alone does not mean the device was opened successfully: call the GetError() method. If, and only if, it returns an empty string, the open succeeded; otherwise the string contains the error message.

To play audio, you use:

bool AddFrames(void *buffer, int frames, int64_t timecode)

buffer is a pointer to the audio frames you want to play. buffer must be 16-byte aligned. A frame is made of interleaved samples (one sample per channel).

frames is the number of frames you are playing.

timecode is the timecode of the first sample that is about to be added. This is used for A/V sync; if you only want to play audio, use -1.

Closing the audio device:

simply delete the object returned by OpenAudio.

Example:

This code will play pink noise on each speaker in succession:

QString passthru = gCoreContext->GetNumSetting("PassThruDeviceOverride", false) ?
       gCoreContext->GetSetting("PassThruOutputDevice") : QString::null;
QString main = gCoreContext->GetSetting("AudioOutputDevice");
int channels = gCoreContext->GetNumSetting("MaxChannels", 2);
QString errMsg;

char *frames_in = new char[channels * 48000 * sizeof(int16_t) + 15]; // allocate space for 48000 frames; 48000 frames at 48kHz is 1s of audio
char *frames = (char *)(((long)frames_in + 15) & ~0xf); // align on a 16-byte boundary

AudioOutput *audioOutput = AudioOutput::OpenAudio(main, passthru,
                                          FORMAT_S16, channels,
                                          0, 48000,
                                          AUDIOOUTPUT_VIDEO,
                                          true, false);
if (!audioOutput)
{
   errMsg = QObject::tr("Unable to create AudioOutput.");
   return;
}
else
{
   errMsg = audioOutput->GetError();
   if (!errMsg.isEmpty())
       return;
}

for (int i = 0; i < channels; i++)
{
   AudioOutputUtil::GeneratePinkSamples(frames, channels, i, 48000);
   audioOutput->AddFrames(frames, 48000, -1);
   usleep(audioOutput->GetAudioBufferedTime()*1000); // wait to give the audio time to play (here 1s)
   audioOutput->Pause(true);
   usleep(500000); // .5s pause
   audioOutput->Pause(false);
}
// free buffer
delete[] frames_in;

// Close audio
delete audioOutput;