Waveout driver

[Figure: WDM audio components for wave rendering (left) and wave capture (right): the application talks to WinMM (waveOutXxx/waveInXxx), which passes through Wdmaud to KMixer and down to the wave port and miniport drivers.]

KMixer outputs a wave stream to a WaveCyclic or WavePci device, whose port and miniport drivers appear below KMixer on the left side of the preceding figure. The miniport driver binds itself to the port driver to form the wave filter that represents the underlying audio rendering device. A typical rendering device outputs an analog signal that drives a set of speakers or an external audio unit.

The right side of the preceding figure shows the components that are needed to support an application that captures wave data to a file. The wave-capture or "wave-in" application communicates with the WDM audio drivers through the waveInXxx functions, which are implemented in the WinMM system component. At the lower right corner of the figure, the wave-capture device is controlled by wave miniport and port drivers.

The port and miniport drivers, which can be of type WaveCyclic or WavePci, bind together to form a wave filter that represents the capture device. This device typically captures an analog signal from a microphone or other audio source and converts it to a wave PCM stream. A system that performs simultaneous rendering and capture of audio streams might require two instances of KMixer, as shown in the figure.

Note that SysAudio automatically creates these instances as they are needed. Regardless of whether the wave stream is captured by a USB device or by a WaveCyclic or WavePci device, KMixer performs sample-rate conversion on the stream, if needed, but does no mixing with other streams.

KMixer outputs the resulting stream to Wdmaud.sys, the kernel-mode half of the WDM audio mapper. The user-mode half, Wdmaud.drv, passes the captured wave data up to the application through the waveInXxx functions. Finally, at the top of the figure, the wave-capture application writes the wave data to a file.

At the time that the wave-capture application calls the waveInOpen function to open the capture stream, it passes in a pointer to its callback routine. When a wave-capture event occurs, the operating system calls the callback routine with a buffer containing the next block of wave samples from the capture device. In response to the callback, the application writes the next block of wave data to the file.
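As a minimal sketch of this callback contract, the P/Invoke snippet below opens a capture device with a function callback. The constants and signatures follow mmsystem.h, but the buffer plumbing (waveInPrepareHeader, waveInAddBuffer, waveInStart) and error handling are elided, so this illustrates only the open-with-callback step.

```csharp
// Sketch: opening a wave-capture stream with a function callback via winmm.
// Buffer setup and error handling are omitted for brevity.
using System;
using System.Runtime.InteropServices;

static class WaveInCallbackSketch
{
    // Matches the WAVEFORMATEX structure expected by waveInOpen.
    [StructLayout(LayoutKind.Sequential, Pack = 2)]
    struct WaveFormatEx
    {
        public ushort wFormatTag;      // 1 = WAVE_FORMAT_PCM
        public ushort nChannels;
        public uint   nSamplesPerSec;
        public uint   nAvgBytesPerSec; // nSamplesPerSec * nBlockAlign
        public ushort nBlockAlign;     // nChannels * wBitsPerSample / 8
        public ushort wBitsPerSample;
        public ushort cbSize;
    }

    // Matches the waveInProc callback signature.
    delegate void WaveInProc(IntPtr hwi, uint msg, IntPtr instance,
                             IntPtr wavehdr, IntPtr reserved);

    const uint WAVE_MAPPER = 0xFFFFFFFF;     // let the system pick a device
    const uint CALLBACK_FUNCTION = 0x00030000;
    const uint WIM_DATA = 0x3C0;             // a capture buffer has been filled

    [DllImport("winmm.dll")]
    static extern int waveInOpen(out IntPtr hWaveIn, uint deviceId,
        ref WaveFormatEx format, WaveInProc callback, IntPtr instance, uint flags);

    // Keep a reference so the delegate is not garbage-collected while in use.
    static readonly WaveInProc Callback = OnWaveInMessage;

    static void OnWaveInMessage(IntPtr hwi, uint msg, IntPtr instance,
                                IntPtr wavehdr, IntPtr reserved)
    {
        if (msg == WIM_DATA)
        {
            // wavehdr points to a WAVEHDR whose lpData holds the captured
            // samples; a real application would write them to its file here.
        }
    }

    static void Main()
    {
        var fmt = new WaveFormatEx
        {
            wFormatTag = 1, nChannels = 1, nSamplesPerSec = 44100,
            wBitsPerSample = 16, nBlockAlign = 2, nAvgBytesPerSec = 88200
        };
        int result = waveInOpen(out IntPtr handle, WAVE_MAPPER, ref fmt,
                                Callback, IntPtr.Zero, CALLBACK_FUNCTION);
        // 0 (MMSYSERR_NOERROR) means the device opened; buffers would then be
        // prepared and queued before calling waveInStart.
        Console.WriteLine(result == 0 ? "opened" : $"error {result}");
    }
}
```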

The following figure shows the user-mode and kernel-mode components that a DirectSound application program uses to render or capture wave data. The rendering components appear in the left half of the figure, and the capture components in the right half. The wave miniport drivers are shown as darkened boxes to indicate that they are vendor-supplied components.

[Figure: DirectSound rendering components (left) and capture components (right).]

On the managed side, the NAudio library wraps the waveOut API in its WaveOut class. When playback stops, either because the source has run out of data or because you called Stop, WaveOut raises a PlaybackStopped event. Alternatively, you can ignore PlaybackStopped and just call Stop whenever you decide that playback is finished. Be careful with the Stopped state, though: if you call the Stop method, the PlaybackState will immediately go to Stopped, but it may be a few milliseconds before any background playback threads have actually exited.
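A minimal sketch of that lifecycle follows, assuming a placeholder file path and a thread that runs a Windows message pump (such as a WinForms UI thread), since the default WaveOut callback model depends on one, as discussed below:

```csharp
using System;
using NAudio.Wave;

class PlaybackLifecycleSketch
{
    AudioFileReader reader;
    WaveOut waveOut;

    // Call from a thread with a message pump (e.g. a WinForms UI thread).
    public void Start()
    {
        reader = new AudioFileReader("example.mp3"); // placeholder path
        waveOut = new WaveOut();
        waveOut.PlaybackStopped += (s, e) =>
        {
            // Raised both when the source runs out and after Stop is called.
            waveOut.Dispose();
            reader.Dispose();
        };
        waveOut.Init(reader);
        waveOut.Play();
    }

    public void Stop()
    {
        // PlaybackState flips to Stopped immediately, but background
        // playback threads may take a few more milliseconds to exit.
        waveOut?.Stop();
    }
}
```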

You may notice a Volume property on the IWavePlayer interface that is marked as [Obsolete]. There are better ways of setting the volume in NAudio.

For example, look at the WaveChannel32 class or, in NAudio 1.5 onwards, the AudioFileReader class, which exposes its own Volume property.
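As a small sketch of the supported route, where the volume is set on the source chain rather than on WaveOut (the helper name and the raw source stream are hypothetical):

```csharp
using NAudio.Wave;

static class VolumeSketch
{
    // Wraps a source stream so its volume can be adjusted before playback.
    static IWaveProvider WithVolume(WaveStream source, float volume)
    {
        // WaveChannel32 converts to 32-bit float and exposes a Volume
        // property (1.0f is full scale).
        return new WaveChannel32(source) { Volume = volume };
    }
}
```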

WaveOut should be thought of as the default audio output device in NAudio. The WaveOut object allows you to configure several things before you get round to calling Init. Most common would be to change the DeviceNumber property. To find out how many WaveOut output devices are available, query the static WaveOut.DeviceCount property. You can also set DesiredLatency, which is measured in milliseconds. This figure actually sets the total duration of all the buffers (the latency is divided evenly between them), so you could argue that the real latency is shorter. By default the DesiredLatency is 300 ms, which should ensure a smooth playback experience on most computers.
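For instance, a sketch of configuring these properties before Init (device 0 and a 200 ms latency are arbitrary choices):

```csharp
using System;
using NAudio.Wave;

static class DeviceConfigSketch
{
    static WaveOut CreateConfiguredOutput()
    {
        // Enumerate the available waveOut devices by number and name.
        for (int n = 0; n < WaveOut.DeviceCount; n++)
        {
            var caps = WaveOut.GetCapabilities(n);
            Console.WriteLine($"{n}: {caps.ProductName}");
        }

        return new WaveOut
        {
            DeviceNumber = 0,     // arbitrary: the first device
            DesiredLatency = 200, // total duration of all buffers, in ms
        };
    }
}
```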

You can also set the NumberOfBuffers to something other than its default of 2, although 3 is the only other value that is really worth using. WaveOut also supports several different callback models, and understanding which one to use is important. Callbacks are used whenever WaveOut has finished playing one of its buffers and wants more data. In the callback we read from the source wave provider and fill a new buffer with the audio.

WaveOut then queues the new buffer up for playback, assuming there is still more data to play. As with all output audio driver models, it is imperative that this happens as quickly as possible, or your output sound will stutter. In the default model, New Window, whenever WaveOut wants more data it posts a message that is handled by the Windows message pump of an invisible new window. You get this callback model by default when you call the empty WaveOut constructor. However, it will not work on a background thread, since there is no message pump.

One of the big benefits of using this model (or the Existing Window model) is that everything happens on the same thread, which protects you from threading race conditions where a reposition happens at the same time as a read. Existing Window is essentially the same callback mechanism as New Window, but you have to pass in the handle of an existing window.

Function callback was the first callback method I attempted to implement for NAudio, and it has proved the most problematic of all the callback methods. Essentially you give it a function to call back, which seems very convenient, but these callbacks come from a thread within the operating system.
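To make the three models concrete, here is a sketch of how each one is selected through WaveCallbackInfo (the window handle parameter is whatever window your application already owns):

```csharp
using System;
using NAudio.Wave;

static class CallbackModelSketch
{
    static void ShowCallbackOptions(IntPtr existingWindowHandle)
    {
        // New Window (the default): an invisible window's message pump
        // services the "buffer done" messages. Needs a GUI thread.
        var newWindow = new WaveOut(WaveCallbackInfo.NewWindow());

        // Existing Window: same mechanism, but messages go through a
        // window you already own, keeping everything on that thread.
        var existing = new WaveOut(WaveCallbackInfo.ExistingWindow(existingWindowHandle));

        // Function Callback: the driver calls back on an OS thread;
        // convenient, but historically the most problematic model.
        var function = new WaveOut(WaveCallbackInfo.FunctionCallback());

        newWindow.Dispose();
        existing.Dispose();
        function.Dispose();
    }
}
```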


