SpeechRecognitionEngine.InitialSilenceTimeout Property

Definition

Gets or sets the time interval during which a SpeechRecognitionEngine accepts input containing only silence before finalizing recognition.

public:
 property TimeSpan InitialSilenceTimeout { TimeSpan get(); void set(TimeSpan value); };
public TimeSpan InitialSilenceTimeout { get; set; }
member this.InitialSilenceTimeout : TimeSpan with get, set
Public Property InitialSilenceTimeout As TimeSpan

Property Value

The duration of the interval of silence.

Exceptions

This property is set to less than 0 seconds.

Examples

The following example shows part of a console application that demonstrates basic speech recognition. The example sets the BabbleTimeout and InitialSilenceTimeout properties of a SpeechRecognitionEngine before initiating speech recognition. Handlers for the speech recognizer's AudioStateChanged and RecognizeCompleted events write event information to the console to demonstrate how the BabbleTimeout and InitialSilenceTimeout properties of a SpeechRecognitionEngine affect recognition operations.


using System;  
using System.Speech.Recognition;  

namespace SpeechRecognitionApp  
{  
  class Program  
  {  
    static void Main(string[] args)  
    {  

      // Initialize an in-process speech recognizer.  
      using (SpeechRecognitionEngine recognizer =  
        new SpeechRecognitionEngine(  
          new System.Globalization.CultureInfo("en-US")))  
      {  
        // Load a Grammar object.  
        recognizer.LoadGrammar(CreateServicesGrammar("FindServices"));  

        // Add event handlers.  
        recognizer.AudioStateChanged +=  
          new EventHandler<AudioStateChangedEventArgs>(  
            AudioStateChangedHandler);  
        recognizer.RecognizeCompleted +=  
          new EventHandler<RecognizeCompletedEventArgs>(  
            RecognizeCompletedHandler);  

        // Configure input to the speech recognizer.  
        recognizer.SetInputToDefaultAudioDevice();  

        recognizer.InitialSilenceTimeout = TimeSpan.FromSeconds(3);  
        recognizer.BabbleTimeout = TimeSpan.FromSeconds(2);  
        recognizer.EndSilenceTimeout = TimeSpan.FromSeconds(1);  
        recognizer.EndSilenceTimeoutAmbiguous = TimeSpan.FromSeconds(1.5);  

        Console.WriteLine("BabbleTimeout: {0}", recognizer.BabbleTimeout);  
        Console.WriteLine("InitialSilenceTimeout: {0}", recognizer.InitialSilenceTimeout);  
        Console.WriteLine("EndSilenceTimeout: {0}", recognizer.EndSilenceTimeout);  
        Console.WriteLine("EndSilenceTimeoutAmbiguous: {0}", recognizer.EndSilenceTimeoutAmbiguous);  
        Console.WriteLine();  

        // Start asynchronous speech recognition.  
        recognizer.RecognizeAsync(RecognizeMode.Single);  

        // Keep the console window open.  
        while (true)  
        {  
          Console.ReadLine();  
        }  
      }  
    }  

    // Create a grammar and build it into a Grammar object.   
    static Grammar CreateServicesGrammar(string grammarName)  
    {  

      // Create a grammar for finding services in different cities.  
      Choices services = new Choices(new string[] { "restaurants", "hotels", "gas stations" });  
      Choices cities = new Choices(new string[] { "Seattle", "Boston", "Dallas" });  

      GrammarBuilder findServices = new GrammarBuilder("Find");  
      findServices.Append(services);  
      findServices.Append("near");  
      findServices.Append(cities);  

      // Create a Grammar object from the GrammarBuilder. 
      Grammar servicesGrammar = new Grammar(findServices);  
      servicesGrammar.Name = grammarName;  
      return servicesGrammar;  
    }  

    // Handle the AudioStateChanged event.  
    static void AudioStateChangedHandler(  
      object sender, AudioStateChangedEventArgs e)  
    {  
      Console.WriteLine("AudioStateChanged ({0}): {1}",  
        DateTime.Now.ToString("mm:ss.f"), e.AudioState);  
    }  

    // Handle the RecognizeCompleted event.  
    static void RecognizeCompletedHandler(  
      object sender, RecognizeCompletedEventArgs e)  
    {  
      Console.WriteLine("RecognizeCompleted ({0}):",  
        DateTime.Now.ToString("mm:ss.f"));  

      string resultText;  
      if (e.Result != null) { resultText = e.Result.Text; }  
      else { resultText = "<null>"; }  

      Console.WriteLine(  
        " BabbleTimeout: {0}; InitialSilenceTimeout: {1}; Result text: {2}",  
        e.BabbleTimeout, e.InitialSilenceTimeout, resultText);  
      if (e.Error != null)  
      {  
        Console.WriteLine(" Exception message: {0}", e.Error.Message);  
      }  

      // Start the next asynchronous recognition operation.  
      ((SpeechRecognitionEngine)sender).RecognizeAsync(RecognizeMode.Single);  
    }  
  }  
}  

Remarks

Each speech recognizer has an algorithm to distinguish between silence and speech. If the recognizer input is silence during the initial silence timeout period, then the recognizer finalizes that recognition operation.

If the initial silence timeout interval is set to 0, the recognizer does not perform an initial silence timeout check. The timeout interval can be any non-negative value. The default is 0 seconds.
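The valid-range behavior described above can be sketched as follows. This is a minimal illustration, not part of the original sample; it assumes the property rejects negative values with an ArgumentOutOfRangeException, which is the standard .NET convention for an out-of-range argument.

```csharp
using System;
using System.Speech.Recognition;  // Windows-only; requires the System.Speech assembly.

class TimeoutRangeDemo
{
    static void Main()
    {
        using (var recognizer = new SpeechRecognitionEngine(
            new System.Globalization.CultureInfo("en-US")))
        {
            // Zero disables the initial silence timeout check (the default).
            recognizer.InitialSilenceTimeout = TimeSpan.Zero;

            // Any non-negative interval is accepted.
            recognizer.InitialSilenceTimeout = TimeSpan.FromSeconds(5);

            // A negative interval is less than 0 seconds and is rejected.
            try
            {
                recognizer.InitialSilenceTimeout = TimeSpan.FromSeconds(-1);
            }
            catch (ArgumentOutOfRangeException ex)
            {
                Console.WriteLine("Rejected: {0}", ex.Message);
            }
        }
    }
}
```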

Applies to

See also