In the IAudioClient::Initialize method, the third parameter is hnsBufferDuration. According to the Microsoft documentation, this parameter specifies the buffer size requested by the caller; if the call to IAudioClient::Initialize succeeds, the system allocates a buffer of at least that size. (https://docs.microsoft.com/en-us/windows/win32/api/audioclient/nf-audioclient-iaudioclient-initialize)
I have the following two questions, one for render and one for capture:
For the rendering device: when calling IAudioClient::Initialize, I set hnsBufferDuration to 100 * 10000 (i.e., a 100 ms buffer) and hnsPeriodicity to 0 (which means the default period of 10 ms). Does that mean I must fill the whole 100 ms buffer before the speaker plays any sound, or does the system play the data in the render buffer every 10 ms? I receive the system event notification every 10 ms, and each time I call IAudioRenderClient::GetBuffer to write 10 ms of data into the buffer. So when does the speaker actually start playing? Do I need to fill the entire buffer set by hnsBufferDuration (or reach some threshold) before playback starts on the speaker device?
For the capture device: when calling IAudioClient::Initialize, I set hnsBufferDuration to 100 * 10000 (i.e., a 100 ms buffer) and hnsPeriodicity to 0 (the default period of 10 ms). Does that mean this buffer can cache at most the next 100 ms of data? For example, under normal circumstances I should receive a system event notification every 10 ms and then call IAudioCaptureClient::GetBuffer to fetch the data captured by the microphone. If my thread gets stuck and only calls IAudioCaptureClient::GetBuffer after 60 ms, does the capture buffer still hold all of the data from the past 60 ms?
I look forward to your professional answer. Thank you!