Audio API

The audio API is the interface we have built around GStreamer to support our specific use cases. Most backends should be able to get by with simply setting the URI of the resource they want to play; for those cases, the default playback provider should be used.

For more advanced cases, such as when the raw audio data is delivered outside of GStreamer or the backend needs to add metadata to the currently playing resource, developers should subclass the base playback provider and implement the extra behaviour they need through the following API:
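
The details depend on the backend, but as a rough illustration a custom provider could look something like the sketch below. The base class name (mopidy.backends.base.BasePlaybackProvider) and the play() signature are assumptions based on typical Mopidy backends and may differ between versions:

    from mopidy.backends import base


    class RawAudioPlaybackProvider(base.BasePlaybackProvider):
        """Sketch of a provider that overrides play()."""

        def play(self, track):
            # self.audio is assumed to be a Pykka actor proxy for
            # mopidy.audio.Audio, so calls return futures; .get() blocks
            # until the call has completed.
            self.audio.prepare_change()
            self.audio.set_uri(track.uri).get()
            return self.audio.start_playback().get()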

class mopidy.audio.Audio(config, mixer)[source]

Audio output through GStreamer.

emit_data(buffer_)[source]

Call this to deliver raw audio data to be played.

Note that the uri must be set to appsrc:// for this to work.

Returns True if data was delivered.

Parameters:buffer_ (gst.Buffer) – buffer to pass to appsrc
Return type:boolean
emit_end_of_stream()[source]

Put an end-of-stream token on the playbin. This is typically used in combination with emit_data().

We will get a GStreamer message when the stream playback reaches the token, and can then do any end-of-stream related tasks.
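
A hedged sketch of how the two methods are typically combined, assuming audio is an Audio instance (calls return futures instead if it is accessed through a Pykka actor proxy), chunks is an iterable of raw audio byte strings, and the appsrc setup described under set_appsrc() has already been done:

    import gst  # GStreamer 0.10 Python bindings

    def push_audio(audio, chunks):
        # Deliver raw audio chunks to appsrc; requires that the URI has
        # been set to appsrc:// and that set_appsrc() has been called.
        for raw_bytes in chunks:
            buffer_ = gst.Buffer(raw_bytes)
            if not audio.emit_data(buffer_):
                # Delivery failed, e.g. because playback was stopped.
                return False
        # All data delivered; put the end-of-stream token on the playbin.
        audio.emit_end_of_stream()
        return True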

get_mute()[source]

Get mute status of the software mixer.

Return type:True if muted, False if unmuted, None if no mixer is installed.
get_position()[source]

Get position in milliseconds.

Return type:int
get_volume()[source]

Get volume level of the software mixer.

Example values:

0:
    Minimum volume.
100:
    Maximum volume.
Return type:int in range [0..100]
pause_playback()[source]

Notify GStreamer that it should pause playback.

Return type:True if successful, else False
prepare_change()[source]

Notify GStreamer that we are about to change state of playback.

This function MUST be called before changing URIs or making changes such as updating the data being pushed. The reason for this is that GStreamer will reset all its state when it changes to gst.STATE_READY.
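
A minimal sketch of the resulting call order when switching to a new resource, assuming audio is an Audio instance:

    def change_uri(audio, uri):
        # GStreamer resets its state when going to gst.STATE_READY, so
        # the new URI must only be set after prepare_change().
        audio.prepare_change()
        audio.set_uri(uri)
        return audio.start_playback()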

set_appsrc(caps, need_data=None, enough_data=None, seek_data=None)[source]

Switch to using appsrc for getting audio to be played.

You MUST call prepare_change() before calling this method.

Parameters:
  • caps (string) – GStreamer caps string describing the audio format to expect
  • need_data (callable which takes data length hint in ms) – callback for when appsrc needs data
  • enough_data (callable) – callback for when appsrc has enough data
  • seek_data (callable which takes time position in ms) – callback for when data from a new position is needed to continue playback
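
A sketch of wiring up appsrc, assuming audio is an Audio instance and a source producing 16-bit stereo PCM at 44.1 kHz; the caps string and the ordering of set_appsrc() relative to set_uri() are assumptions based on typical usage, not requirements stated here:

    # GStreamer 0.10 caps string for the raw audio we intend to push.
    CAPS = (
        'audio/x-raw-int, rate=(int)44100, channels=(int)2, '
        'width=(int)16, depth=(int)16, signed=(boolean)true, '
        'endianness=(int)1234')

    def need_data(length_hint_ms):
        # appsrc wants roughly length_hint_ms more audio; start pushing
        # buffers with emit_data().
        pass

    def enough_data():
        # appsrc has buffered enough; pause pushing for a while.
        pass

    def seek_data(position_ms):
        # Data from position_ms is needed; reposition the source here.
        pass

    def start_raw_playback(audio):
        audio.prepare_change()
        audio.set_appsrc(
            CAPS,
            need_data=need_data,
            enough_data=enough_data,
            seek_data=seek_data)
        audio.set_uri('appsrc://')
        audio.start_playback()
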
set_metadata(track)[source]

Set track metadata for currently playing song.

Only needs to be called by sources such as appsrc which do not already inject tags in playbin, e.g. when using emit_data() to deliver raw audio data to GStreamer.

Parameters:track (mopidy.models.Track) – the current track
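
For example, an appsrc-based backend could announce what it is about to push; a sketch only, with a hypothetical helper and made-up tag values:

    from mopidy.models import Album, Artist, Track

    def announce_track(audio):
        # Made-up metadata for illustration; a real backend would pass the
        # Track it is actually playing.
        track = Track(
            name='Example title',
            artists=[Artist(name='Example artist')],
            album=Album(name='Example album'))
        audio.set_metadata(track)
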
set_mute(mute)[source]

Mute or unmute the software mixer.

Parameters:mute (bool) – Whether to mute the mixer or not.
Return type:True if successful, else False
set_position(position)[source]

Set position in milliseconds.

Parameters:position (int) – the position in milliseconds
Return type:True if successful, else False
set_uri(uri)[source]

Set URI of audio to be played.

You MUST call prepare_change() before calling this method.

Parameters:uri (string) – the URI to play
set_volume(volume)[source]

Set volume level of the software mixer.

Parameters:volume (int) – the volume in the range [0..100]
Return type:True if successful, else False
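
A small sketch of driving the software mixer, assuming audio is an Audio instance:

    def toggle_mute(audio):
        # get_mute() returns None when no software mixer is installed.
        muted = audio.get_mute()
        if muted is None:
            return False
        return audio.set_mute(not muted)

    def set_half_volume(audio):
        # Volume is an int in the range [0..100].
        return audio.set_volume(50)
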
start_playback()[source]

Notify GStreamer that it should start playback.

Return type:True if successful, else False
state = u'stopped'

The GStreamer state mapped to mopidy.audio.PlaybackState

stop_playback()[source]

Notify GStreamer that it should stop playback.

Return type:True if successful, else False

Audio listener

class mopidy.audio.AudioListener[source]

Marker interface for recipients of events sent by the audio actor.

Any Pykka actor that mixes in this class will receive calls to the methods defined here when the corresponding events happen in the audio actor. This interface is used both for looking up what actors to notify of the events, and for providing default implementations for those listeners that are not interested in all events.

reached_end_of_stream()[source]

Called whenever the end of the audio stream is reached.

MAY be implemented by actor.

static send(event, **kwargs)[source]

Helper to allow calling of audio listener events

state_changed(old_state, new_state, target_state)[source]

Called after the playback state has changed.

Will be called for both immediate and async state changes in GStreamer.

Target state is set when we should be in the target state but temporarily need to switch to another state. A typical example of this is buffering. When this happens, an event with old=PLAYING, new=PAUSED, target=PLAYING will be emitted. Once we have caught up, an old=PAUSED, new=PLAYING, target=None event will be generated.

Regular state changes will not have a target state set, as they are final states which should be stable.

MAY be implemented by actor.

Parameters:
  • old_state – the state before the change
  • new_state – the state after the change
  • target_state – the intended state, or None for regular state changes
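
A minimal sketch of a listener, assuming the field names PLAYING and PAUSED on mopidy.audio.PlaybackState:

    import pykka

    from mopidy.audio import AudioListener, PlaybackState


    class AudioEventHandler(pykka.ThreadingActor, AudioListener):
        def reached_end_of_stream(self):
            # E.g. ask the rest of the system to move on to the next track.
            pass

        def state_changed(self, old_state, new_state, target_state):
            if (new_state == PlaybackState.PAUSED
                    and target_state == PlaybackState.PLAYING):
                # Buffering: playback will resume once enough data is ready.
                pass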

Audio scanner

class mopidy.audio.scan.Scanner(timeout=1000, min_duration=100)[source]

Helper to get tags and other relevant info from URIs.

Parameters:
  • timeout – timeout for scanning a URI in ms
  • min_duration – minimum duration of scanned URI in ms, -1 for all.
scan(uri)[source]

Scan the given URI, collecting relevant metadata.

Parameters:uri – URI of the resource to scan.
Returns:Dictionary of tags, duration, mtime and uri information.
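
A sketch of scanning a single local file; the file path and the result key names are assumptions based on the description above:

    from mopidy.audio.scan import Scanner

    scanner = Scanner(timeout=1000, min_duration=100)
    try:
        result = scanner.scan('file:///music/example.flac')
    except Exception:
        # Scanning may fail, e.g. on timeouts or unsupported formats.
        result = None

    if result is not None:
        # Key names assumed from the return value description above.
        duration_ms = result.get('duration')
        modified_time = result.get('mtime')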