LUIS and Custom Speech C#

Abdulmalik Alzubi 11 Reputation points
2021-09-11T14:41:03.827+00:00

I've been trying for a few days to program LUIS and Custom Speech together, but it doesn't work. I can only use the two separately, but that doesn't help, because I want the intents to be recognized from the output of my already trained model. You can find my attempt below.
// Creates an instance of a speech configuration with the given subscription key and service region.

            var config = SpeechConfig.FromSubscription("", "");

            // Request the detailed output format so that numbers are transcribed correctly.

            config.SetProperty("SpeechServiceResponse_OutputFormatOption", "detailed");

 

            // Creates an instance of a keyword recognition model.

            var model = KeywordRecognitionModel.FromFile(@"");

            // Recognition language.

            config.SpeechRecognitionLanguage = "de-DE";

            // Endpoint ID of the trained Custom Speech model.
            config.EndpointId = "";

 

           

            var stopRecognition = new TaskCompletionSource<int>();

           

            // Creates a Language Understanding model using the LUIS app ID.

            var model_LUIS = LanguageUnderstandingModel.FromAppId("");

            // Creates the recognizers using the microphone as audio input:
            // a plain speech recognizer and an intent recognizer.

            using var recognizerf = new SpeechRecognizer(config);

            using var recognizer = new IntentRecognizer(config);

 

            recognizer.AddAllIntents(model_LUIS);

            recognizer.Recognized += (s, e) =>

                {

                    if (e.Result.Reason == ResultReason.RecognizedIntent)

                    {

                        SpeechServiceText = e.Result.Properties.GetProperty(PropertyId.LanguageUnderstandingServiceResponse_JsonResult);

                        // The JSON result returned by Azure is passed to the LUIS class for further processing.

                        luis.LUISAusgabe(e.Result.Properties.GetProperty(PropertyId.LanguageUnderstandingServiceResponse_JsonResult));

                    }

                    else if (e.Result.Reason == ResultReason.RecognizedSpeech)

                    {

                        SpeechServiceText = $"RECOGNIZED: Text={e.Result.Text}";

                    }

                    else if (e.Result.Reason == ResultReason.NoMatch)

                    {

                        SpeechServiceText = $"NOMATCH: Speech could not be recognized.";

                    }

                };

            recognizer.Canceled += (s, e) =>

            {

                SpeechServiceText = $"CANCELED: Reason={e.Reason}";

 

                if (e.Reason == CancellationReason.Error)

                {

                    SpeechServiceText = $"CANCELED: ErrorCode={e.ErrorCode} ErrorDetails={e.ErrorDetails}. Did you update the subscription info?";

                }

                stopRecognition.TrySetResult(0);

            };

            recognizer.SessionStopped += (s, e) =>

            {

                stopRecognition.TrySetResult(0);

            };

 

            // Starts continuous recognition using the keyword model.

            await recognizer.StartKeywordRecognitionAsync(model).ConfigureAwait(false);

            await recognizerf.StartKeywordRecognitionAsync(model).ConfigureAwait(false);

            // Waits for a single successful keyword-triggered speech recognition (or error).

            // Use Task.WaitAny to keep the task rooted.

            Task.WaitAny(new[] { stopRecognition.Task });

 

            // Stops recognition.

            await recognizer.StopKeywordRecognitionAsync().ConfigureAwait(false);


3 answers

  1. Paul Ryan 321 Reputation points
    2021-09-14T11:51:05.653+00:00

    If I understand your question, you want to use voice with LUIS. Microsoft Learn has an example (search "c# luis voice intent example"):
    https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/how-to-recognize-intents-from-speech-csharp

    Here is an example of a console app (.NET 5.0) that uses continuous recognition;
    it captures the recognized string and the JSON output.
    You get the app ID, key, region, and intent name from LUIS.

    string intentResultString = "";
    string intentJSON = "";
    var config = SpeechConfig.FromSubscription("key", "westus");

    using (var recognizer = new IntentRecognizer(config))
    {
        var model = LanguageUnderstandingModel.FromAppId("AppID");
        recognizer.AddIntent(model, "Intent Name", "id1");

        // Subscribes to events.
        recognizer.Recognizing += (s, e) =>
        {
        };

        recognizer.Recognized += (s, e) =>
        {
            intentResultString += e.Result.Text;
            intentJSON += e.Result.Properties.GetProperty(PropertyId.LanguageUnderstandingServiceResponse_JsonResult);
        };

        recognizer.Canceled += (s, e) =>
        {
            Console.WriteLine($"\n    Canceled. Reason: {e.Reason}");
        };

        recognizer.SessionStarted += (s, e) =>
        {
            Console.WriteLine("\n    Session started event.");
        };

        recognizer.SessionStopped += (s, e) =>
        {
            Console.WriteLine("\n    Session stopped event.");
        };

        // Starts continuous recognition.
        // Use StopContinuousRecognitionAsync() to stop recognition.
        await recognizer.StartContinuousRecognitionAsync().ConfigureAwait(false);

        do
        {
        } while (Console.ReadKey().Key != ConsoleKey.Enter);

        // Stops recognition.
        await recognizer.StopContinuousRecognitionAsync().ConfigureAwait(false);
    }

    hope this helps

    1 person found this answer helpful.

  2. Paul Ryan 321 Reputation points
    2021-09-24T19:17:41.063+00:00

    Hi Leandro,
    I am developing an app that uses voice-to-text and LUIS. I am using a two-step process:

    1. voice to text
    2. LUIS (using the text from voice-to-text) into a JSON

    I created a CLI to test voice directly with LUIS; for some reason the results were not as accurate as the two-step process (I did not want to spend any time figuring out why I was getting different results).
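    The two-step flow described above can be sketched roughly as follows. This is a minimal, untested sketch, assuming the Speech SDK (Microsoft.CognitiveServices.Speech) for step 1 and the LUIS v3 prediction REST endpoint for step 2; all keys, IDs, and resource names are placeholders you would replace with your own values:

    ```csharp
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.CognitiveServices.Speech;

    class TwoStepDemo
    {
        static async Task Main()
        {
            // Step 1: speech-to-text against the trained Custom Speech endpoint.
            var speechConfig = SpeechConfig.FromSubscription("<speech-key>", "<region>");
            speechConfig.SpeechRecognitionLanguage = "de-DE";
            speechConfig.EndpointId = "<custom-speech-endpoint-id>";

            using var recognizer = new SpeechRecognizer(speechConfig);
            var speechResult = await recognizer.RecognizeOnceAsync().ConfigureAwait(false);

            // Step 2: send the recognized text to the LUIS v3 prediction REST endpoint.
            using var http = new HttpClient();
            var url = "https://<luis-resource>.cognitiveservices.azure.com/luis/prediction/v3.0/apps/"
                    + "<app-id>/slots/production/predict"
                    + "?subscription-key=<luis-key>"
                    + "&query=" + Uri.EscapeDataString(speechResult.Text);

            // The response JSON contains the top intent, all intents, and entities.
            string json = await http.GetStringAsync(url);
            Console.WriteLine(json);
        }
    }
    ```

    Because the two steps are decoupled, you can log or correct the intermediate transcript before it reaches LUIS, which is one reason the two-step route can be easier to debug than the combined IntentRecognizer.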

    If you want the program here is a link (I keep them on OneDrive)
    https://1drv.ms/u/s!Ai6m7WAwpO1kpqZ9PhOyBub8K1ZALA?e=ry0tNH

    When you publish, make sure you enable the voice option.

    My email is pryannow@harsh.com. I am in EDT (New York).
    Good luck!

    1 person found this answer helpful.

  3. Leandro Duarte 1 Reputation point
    2021-09-24T14:44:13.987+00:00

    Hello @Paul Ryan

    I have been looking for a way to improve my chatbot architecture. Currently, my code first sends the user's speech to the Custom Speech endpoint, and then sends the text generated by that endpoint to LUIS for intent recognition.

    I'd like to connect LUIS and Custom Speech directly and avoid first converting the user's speech to text and then sending the generated text to LUIS for intent recognition.

    Have you already found a way to do that?

    Would you like to think about it together?

    Best regards!
