RecognizedAudio.WriteToWaveStream(Stream) Method

Definition

Writes audio to a stream in Wave format.

public:
 void WriteToWaveStream(System::IO::Stream ^ outputStream);
public void WriteToWaveStream (System.IO.Stream outputStream);
member this.WriteToWaveStream : System.IO.Stream -> unit
Public Sub WriteToWaveStream (outputStream As Stream)

Parameters

outputStream
Stream

The stream that will receive the audio data.

Examples

The following example creates a speech recognition grammar for name input, adds a handler for the SpeechRecognized event, and loads the grammar into an in-process speech recognizer. It then writes the audio information for the name portion of the input to an audio file. The audio file is used as input to a SpeechSynthesizer object, which speaks a phrase that includes the recorded audio.

private static void AddNameGrammar(SpeechRecognitionEngine recognizer)  
{  
  GrammarBuilder builder = new GrammarBuilder();  
  builder.Append("My name is");  
  builder.AppendWildcard();  

  Grammar nameGrammar = new Grammar(builder);  
  nameGrammar.Name = "Name Grammar";  
  nameGrammar.SpeechRecognized +=  
    new EventHandler<SpeechRecognizedEventArgs>(  
      NameSpeechRecognized);  

  recognizer.LoadGrammar(nameGrammar);  
}  

// Handle the SpeechRecognized event of the name grammar.  
private static void NameSpeechRecognized(  
  object sender, SpeechRecognizedEventArgs e)  
{  
  Console.WriteLine("Grammar ({0}) recognized speech: {1}",  
    e.Result.Grammar.Name, e.Result.Text);  

  try  
  {  
    // The name phrase starts after the first three words.  
    if (e.Result.Words.Count < 4)  
    {  

      // Add code to check for an alternate that contains the
      // wildcard.
      return;
    }  

    RecognizedAudio audio = e.Result.Audio;  
    TimeSpan start = e.Result.Words[3].AudioPosition;  
    TimeSpan duration = audio.Duration - start;  

    // Add code to verify and persist the audio.  
    string path = @"C:\temp\nameAudio.wav";  
    using (Stream outputStream = new FileStream(path, FileMode.Create))
    {
      RecognizedAudio nameAudio = audio.GetRange(start, duration);
      nameAudio.WriteToWaveStream(outputStream);
    }

    Thread testThread =  
      new Thread(new ParameterizedThreadStart(TestAudio));  
    testThread.Start(path);  
  }  
  catch (Exception ex)  
  {  
    Console.WriteLine("Exception thrown while processing audio:");  
    Console.WriteLine(ex.ToString());  
  }  
}  

// Use the speech synthesizer to play back the .wav file  
// that was created in the SpeechRecognized event handler.  

private static void TestAudio(object item)  
{  
  string path = item as string;  
  if (path != null && File.Exists(path))  
  {  
    SpeechSynthesizer synthesizer = new SpeechSynthesizer();  
    PromptBuilder builder = new PromptBuilder();  
    builder.AppendText("Hello");  
    builder.AppendAudio(path);  
    synthesizer.Speak(builder);  
  }  
}  

Remarks

The audio data is written to outputStream in Wave format, which includes a Resource Interchange File Format (RIFF) header.

The WriteToAudioStream method uses the same binary format but does not include the Wave header.
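To illustrate the difference between the two formats, the following sketch builds the canonical 44-byte Wave (RIFF) header by hand for 16-bit mono PCM data. This is not the System.Speech implementation, only an assumed-equivalent layout of the header that WriteToWaveStream emits before the sample data and that WriteToAudioStream omits; the class and method names here are hypothetical.

```csharp
using System;
using System.IO;
using System.Text;

public static class RiffHeaderDemo
{
    // Build a minimal Wave file: a canonical RIFF header followed by the
    // raw PCM samples. WriteToAudioStream would write only the samples.
    public static byte[] BuildWaveFile(
        byte[] samples, int sampleRate, short channels, short bitsPerSample)
    {
        short blockAlign = (short)(channels * bitsPerSample / 8);
        int byteRate = sampleRate * blockAlign;

        using (var ms = new MemoryStream())
        using (var w = new BinaryWriter(ms))
        {
            w.Write(Encoding.ASCII.GetBytes("RIFF"));
            w.Write(36 + samples.Length);      // chunk size: file size minus first 8 bytes
            w.Write(Encoding.ASCII.GetBytes("WAVE"));
            w.Write(Encoding.ASCII.GetBytes("fmt "));
            w.Write(16);                       // fmt sub-chunk size for PCM
            w.Write((short)1);                 // audio format 1 = uncompressed PCM
            w.Write(channels);
            w.Write(sampleRate);
            w.Write(byteRate);
            w.Write(blockAlign);
            w.Write(bitsPerSample);
            w.Write(Encoding.ASCII.GetBytes("data"));
            w.Write(samples.Length);           // size of the sample data that follows
            w.Write(samples);
            return ms.ToArray();
        }
    }

    public static void Main()
    {
        byte[] silence = new byte[3200];       // 0.1 s of 16 kHz, 16-bit mono silence
        byte[] wav = BuildWaveFile(silence, 16000, 1, 16);

        Console.WriteLine(Encoding.ASCII.GetString(wav, 0, 4));  // RIFF
        Console.WriteLine(Encoding.ASCII.GetString(wav, 8, 4));  // WAVE
        Console.WriteLine(wav.Length);                           // 3244 (44-byte header + 3200 data)
    }
}
```

Inspecting the first four bytes of a stream produced by WriteToWaveStream should show the ASCII tag "RIFF", whereas a stream produced by WriteToAudioStream begins directly with sample data.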

Applies to

See also