SpeechRecognitionEngine.AudioPosition プロパティ


Gets the current location in the audio stream being generated by the device that is providing input to the SpeechRecognitionEngine.

 property TimeSpan AudioPosition { TimeSpan get(); };
public TimeSpan AudioPosition { get; }
member this.AudioPosition : TimeSpan
Public ReadOnly Property AudioPosition As TimeSpan


The current location in the audio stream being generated by the input device.

In the following example, the in-process speech recognizer uses a grammar that matches requests to find services in various cities. A handler for the SpeechDetected event writes the AudioPosition, RecognizerAudioPosition, and AudioLevel to the console when the speech recognizer detects speech at its input.

using System;
using System.Speech.Recognition;

namespace SampleRecognition
{
  class Program
  {
    private static SpeechRecognitionEngine recognizer;

    public static void Main(string[] args)
    {
      // Initialize an in-process speech recognition engine for US English.
      using (recognizer = new SpeechRecognitionEngine(
        new System.Globalization.CultureInfo("en-US")))
      {
        // Create a grammar for finding services in different cities.
        Choices services = new Choices(new string[] { "restaurants", "hotels", "gas stations" });
        Choices cities = new Choices(new string[] { "Seattle", "Boston", "Dallas" });

        GrammarBuilder findServices = new GrammarBuilder("Find");
        findServices.Append(services);
        findServices.Append("near");
        findServices.Append(cities);

        // Create a Grammar object from the GrammarBuilder and load it to the recognizer.
        Grammar servicesGrammar = new Grammar(findServices);
        recognizer.LoadGrammarAsync(servicesGrammar);

        // Add handlers for events.
        recognizer.SpeechRecognized +=
          new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);
        recognizer.SpeechDetected +=
          new EventHandler<SpeechDetectedEventArgs>(recognizer_SpeechDetected);

        // Configure input to the recognizer.
        recognizer.SetInputToDefaultAudioDevice();

        // Start asynchronous recognition.
        Console.WriteLine("Starting asynchronous recognition...");
        recognizer.RecognizeAsync(RecognizeMode.Multiple);

        // Keep the console window open.
        Console.ReadLine();
      }
    }

    // Gather information about detected speech and write it to the console.
    static void recognizer_SpeechDetected(object sender, SpeechDetectedEventArgs e)
    {
      Console.WriteLine("Speech detected:");
      Console.WriteLine("  Audio level: " + recognizer.AudioLevel);
      Console.WriteLine("  Audio position at the event: " + e.AudioPosition);
      Console.WriteLine("  Current audio position: " + recognizer.AudioPosition);
      Console.WriteLine("  Current recognizer audio position: " +
        recognizer.RecognizerAudioPosition);
    }

    // Write the text of the recognition result to the console.
    static void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
    {
      Console.WriteLine("\nSpeech recognized: " + e.Result.Text);
      // Add event handler code here.
    }
  }
}


The AudioPosition property references the input device's position in its generated audio stream. By contrast, the RecognizerAudioPosition property references the recognizer's position within its audio input. These positions can be different. For example, if the recognizer has received input for which it has not yet generated a recognition result, then the value of the RecognizerAudioPosition property is less than the value of the AudioPosition property.
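The lag described above can be observed directly by polling both properties while asynchronous recognition is in progress. The following is a minimal sketch, not part of the documented example; the one-second polling interval, the ten-sample loop, and the use of a DictationGrammar are arbitrary choices made for illustration:

    using System;
    using System.Speech.Recognition;
    using System.Threading;

    class PositionLagDemo
    {
        static void Main()
        {
            using (var recognizer = new SpeechRecognitionEngine(
                new System.Globalization.CultureInfo("en-US")))
            {
                // A dictation grammar keeps the recognizer processing input,
                // so the two positions have a chance to drift apart.
                recognizer.LoadGrammar(new DictationGrammar());
                recognizer.SetInputToDefaultAudioDevice();
                recognizer.RecognizeAsync(RecognizeMode.Multiple);

                // Sample both positions once per second for ten seconds.
                for (int i = 0; i < 10; i++)
                {
                    TimeSpan device = recognizer.AudioPosition;
                    TimeSpan engine = recognizer.RecognizerAudioPosition;

                    // The device position runs at or ahead of the engine position;
                    // the difference is audio received but not yet processed.
                    Console.WriteLine("device: {0}  engine: {1}  lag: {2}",
                        device, engine, device - engine);
                    Thread.Sleep(1000);
                }

                recognizer.RecognizeAsyncCancel();
            }
        }
    }

Because both properties are read on the main thread while the engine processes audio on its own thread, successive samples can show the lag growing while speech is being decoded and shrinking again once the recognizer catches up.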