Preface
This article traces the data flow along the ordinary AudioTrack path and walks through the audio module's overall flow. The trickiest parts are Binder and shared memory; they are not covered in detail here, only in principle.
Main Text
The Java-layer AudioTrack mostly calls down through JNI into the C++ AudioTrack, so this article covers only the C++ side.
Initialization
The core of initialization is the set() function; the main steps are as follows.
1. The client prepares its data.
status_t AudioTrack::set(
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t frameCount,
        audio_output_flags_t flags,
        callback_t cbf,
        void* user,
        int32_t notificationFrames,
        const sp<IMemory>& sharedBuffer,
        bool threadCanCallJava,
        audio_session_t sessionId,
        transfer_type transferType,
        const audio_offload_info_t *offloadInfo,
        uid_t uid,
        pid_t pid,
        const audio_attributes_t* pAttributes,
        bool doNotReconnect,
        float maxRequiredSpeed,
        audio_port_handle_t selectedDeviceId)
{
    // If the AudioTrack was constructed with AudioTrack.MODE_STREAM, sharedBuffer is null.
    mSharedBuffer = sharedBuffer;
    if (cbf != NULL) {
        mAudioTrackThread = new AudioTrackThread(*this);
        mAudioTrackThread->run("AudioTrack", ANDROID_PRIORITY_AUDIO, 0 /*stack*/);
        // thread begins in paused state, and will not reference us until start()
    }
    // create the IAudioTrack
    {
        AutoMutex lock(mLock);
        // This is the core: ask AudioFlinger to create the server-side track
        // and hand back the shared memory.
        status = createTrack_l();
    }
    mVolumeHandler = new media::VolumeHandler();
}
status_t AudioTrack::createTrack_l()
{
    const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
    IAudioFlinger::CreateTrackInput input;
    IAudioFlinger::CreateTrackOutput output;
    // The key call: create the server-side track over Binder.
    sp<IAudioTrack> track = audioFlinger->createTrack(input, output, &status);
    sp<IMemory> iMem = track->getCblk();
    void *iMemPointer = iMem->pointer();
    mAudioTrack = track;
    mCblkMemory = iMem;
    audio_track_cblk_t* cblk = static_cast<audio_track_cblk_t*>(iMemPointer);
    mCblk = cblk;
    void* buffers;
    buffers = cblk + 1;
    mAudioTrack->attachAuxEffect(mAuxEffectId);
    if (mFrameCount > mReqFrameCount) {
        mReqFrameCount = mFrameCount;
    }
    // update proxy
    if (mSharedBuffer == 0) {
        mStaticProxy.clear();
        mProxy = new AudioTrackClientProxy(cblk, buffers, mFrameCount, mFrameSize);
    }
    mProxy->setPlaybackRate(playbackRateTemp);
    mProxy->setMinimum(mNotificationFramesAct);
}
The core is createTrack: through it the client obtains the key shared-memory control block, and data is then written into the region that follows it.
2. AudioFlinger creates the remote Track.
sp<IAudioTrack> AudioFlinger::createTrack(const CreateTrackInput& input,
                                          CreateTrackOutput& output,
                                          status_t *status)
{
    sp<PlaybackThread::Track> track;
    sp<TrackHandle> trackHandle;
    sp<Client> client;
    pid_t clientPid = input.clientInfo.clientPid;
    const pid_t callingPid = IPCThreadState::self()->getCallingPid();
    {
        Mutex::Autolock _l(mLock);
        PlaybackThread *thread = checkPlaybackThread_l(output.outputId);
        client = registerPid(clientPid);
        PlaybackThread *effectThread = NULL;
        track = thread->createTrack_l(client, streamType, localAttr, &output.sampleRate,
                                      input.config.format, input.config.channel_mask,
                                      &output.frameCount, &output.notificationFrameCount,
                                      input.notificationsPerBuffer, input.speed,
                                      input.sharedBuffer, sessionId, &output.flags,
                                      callingPid, input.clientInfo.clientTid, clientUid,
                                      &lStatus, portId);
    }
    trackHandle = new TrackHandle(track);
    return trackHandle;
}
The core is createTrack_l, which creates a new server-side Track on the mixing thread. It allocates the shared memory and returns it to the client-side Track, so the two sides can share data. Abbreviated code:
// PlaybackThread::createTrack_l() must be called with AudioFlinger::mLock held
sp<AudioFlinger::PlaybackThread::Track> AudioFlinger::PlaybackThread::createTrack_l(
        const sp<AudioFlinger::Client>& client,
        const sp<IMemory>& sharedBuffer /* ... other parameters elided ... */)
{
    sp<Track> track;
    track = new Track(this, client, streamType, attr, sampleRate, format,
                      channelMask, frameCount,
                      nullptr /* buffer */, (size_t)0 /* bufferSize */, sharedBuffer,
                      sessionId, creatorPid, uid, *flags, TrackBase::TYPE_DEFAULT, portId);
    // Less important here: after start() is called, the track joins mActiveTracks,
    // and threadLoop() then reads its data, processes it, and writes it to the output.
    mTracks.add(track);
    return track;
}
The Track constructor is the key part: it allocates the shared memory, which is then handed to the Binder-side TrackHandle object. Abbreviated code:
AudioFlinger::ThreadBase::TrackBase::TrackBase(
        ThreadBase *thread,
        const sp<Client>& client,
        void *buffer /* ... other parameters elided ... */)
    : RefBase()
{
    // client is an object maintained by AudioFlinger; it holds a MemoryDealer
    // that manages the shared-memory heap.
    if (client != 0) {
        mCblkMemory = client->heap()->allocate(size);
    }
}
The Track object is then wrapped in a TrackHandle Binder object and handed back to the client-side AudioTrack, which drives data transfer through it. TrackHandle mainly implements start and stop, plus the interface for fetching the shared memory, shown below:
class IAudioTrack : public IInterface
{
public:
    DECLARE_META_INTERFACE(AudioTrack);

    /* Get this track's control block */
    virtual sp<IMemory> getCblk() const = 0;

    /* After it's created the track is not active. Call start() to
     * make it active.
     */
    virtual status_t start() = 0;

    /* Stop a track. If set, the callback will cease being called and
     * obtainBuffer will return an error. Buffers that are already released
     * will continue to be processed, unless/until flush() is called.
     */
    virtual void stop() = 0;

    /* Flush a stopped or paused track. All pending/released buffers are discarded.
     * This function has no effect if the track is not stopped or paused.
     */
    virtual void flush() = 0;

    /* Pause a track. If set, the callback will cease being called and
     * obtainBuffer will return an error. Buffers that are already released
     * will continue to be processed, unless/until flush() is called.
     */
    virtual void pause() = 0;

    /* Attach track auxiliary output to specified effect. Use effectId = 0
     * to detach track from effect.
     */
    virtual status_t attachAuxEffect(int effectId) = 0;

    /* Send parameters to the audio hardware */
    virtual status_t setParameters(const String8& keyValuePairs) = 0;

    /* Selects the presentation (if available) */
    virtual status_t selectPresentation(int presentationId, int programId) = 0;

    /* Return NO_ERROR if timestamp is valid. timestamp is undefined otherwise. */
    virtual status_t getTimestamp(AudioTimestamp& timestamp) = 0;

    /* Signal the playback thread for a change in control block */
    virtual void signal() = 0;

    /* Sets the volume shaper */
    virtual media::VolumeShaper::Status applyVolumeShaper(
            const sp<media::VolumeShaper::Configuration>& configuration,
            const sp<media::VolumeShaper::Operation>& operation) = 0;

    /* gets the volume shaper state */
    virtual sp<media::VolumeShaper::State> getVolumeShaperState(int id) = 0;
};
Below is a brief overview of the write path. The underlying algorithm is fairly involved: a control structure at the head of the shared memory governs how data is written to and read from the buffer region that follows it. Only the rough flow is sketched here.
First, AudioTrack's write method:
ssize_t AudioTrack::write(const void* buffer, size_t userSize, bool blocking)
{
    // Claim a region of the shared buffer.
    status_t err = obtainBuffer(&audioBuffer,
            blocking ? &ClientProxy::kForever : &ClientProxy::kNonBlocking);
    // Copy the caller's data into it.
    memcpy(audioBuffer.i8, buffer, toWrite);
    // Release the region: update the control block so the server side
    // never reads half-written data.
    releaseBuffer(&audioBuffer);
}
obtainBuffer goes through AudioTrackClientProxy, the client side of the shared-memory control logic. Abbreviated code:
status_t ClientProxy::obtainBuffer(Buffer* buffer, const struct timespec *requested,
        struct timespec *elapsed)
{
    // The key step: the indices grow without bound, so mask off the high
    // bits to turn them into positions within the power-of-two ring buffer.
    rear &= mFrameCountP2 - 1;
    // This is the AudioTrack (producer) side, so new data is appended at
    // the rear of the ring.
    buffer->mRaw = part1 > 0 ?
            &((char *) mBuffers)[(rear) * mFrameSize] : NULL;
}
releaseBuffer is not covered in detail here: it mainly takes the lock and advances the ring-buffer indices so the other side can see the new data.
The read side is likewise not covered in detail in this article. It goes through Track::getNextBuffer, which is called from AudioMixer; everything ultimately runs on the mixer thread. AudioMixer processes the audio through a chain-of-responsibility pipeline and finally writes the result to the HAL layer.
后記
This article is far from complete, but it explains the rough flow of audio data. The framework layer involves many system components, plus lower-level locking and synchronization mechanisms, and explaining all of that in full detail remains difficult. This is where it stands for now; I will expand it further when time allows.