Quickstart: Recognize speech from a microphone in Objective-C on macOS using the Speech SDK

This sample demonstrates how to create a macOS app in Objective-C using the Cognitive Services Speech SDK to transcribe speech recorded from a microphone to text.

Prerequisites

To follow this quickstart, you need a Speech service subscription key and its associated region, a Mac with Xcode installed, and a working microphone.

Get the Code for the Sample App

Download or clone this repository; the sample app used in this quickstart is in the helloworld directory referenced below.

Get the Speech SDK for macOS

By downloading the Microsoft Cognitive Services Speech SDK, you acknowledge its license; see the Speech SDK license agreement.

The Cognitive Services Speech SDK for macOS is distributed as a framework bundle. It can be used in Xcode projects as a CocoaPod, or downloaded directly and linked manually. This guide uses a CocoaPod.

Install the SDK as a CocoaPod

  1. Install the CocoaPod dependency manager as described in its installation instructions.
  2. Navigate to the directory of the downloaded sample app (helloworld) in a terminal.
  3. Run the command pod install. This generates a helloworld.xcworkspace Xcode workspace that contains both the sample app and the Speech SDK as a dependency. Use this workspace in the steps that follow.

Build and Run the Sample

  1. Open the helloworld.xcworkspace workspace in Xcode.
  2. Make the following changes in the AppDelegate.m file (a sketch of the relevant SDK calls appears after this list):
    1. Replace the string YourSubscriptionKey with your subscription key.
    2. Replace the string YourServiceRegion with the region associated with your subscription (for example, westus for the free trial subscription).
  3. Make the debug output visible (View > Debug Area > Activate Console).
  4. Build and run the example code by selecting Product > Run from the menu or clicking the Play button.
  5. The first time you run the app, you should be prompted to grant it access to your computer's microphone. After you click the button in the app and say a few words, the text you spoke should appear in the lower part of the screen.
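
For orientation, the following is a minimal sketch of the recognition code that the sample drives from its button handler, assuming the SDK has been installed through the CocoaPod described above. The method name recognizeFromMicrophone is illustrative rather than taken from the sample; the placeholder strings are the same ones you replace in AppDelegate.m.

#import <MicrosoftCognitiveServicesSpeech/SPXSpeechApi.h>

// Inside an @implementation (for example, the app delegate).
// Recognizes a single utterance from the default microphone.
- (void)recognizeFromMicrophone {
    // Configure the Speech service with your subscription key and region.
    SPXSpeechConfiguration *speechConfig =
        [[SPXSpeechConfiguration alloc] initWithSubscription:@"YourSubscriptionKey"
                                                      region:@"YourServiceRegion"];

    // Create a recognizer that listens to the default microphone input.
    SPXSpeechRecognizer *recognizer = [[SPXSpeechRecognizer alloc] init:speechConfig];

    // Blocks until a single utterance has been recognized.
    SPXSpeechRecognitionResult *result = [recognizer recognizeOnce];

    if (result.reason == SPXResultReason_RecognizedSpeech) {
        // Show or log the transcribed text.
        NSLog(@"Recognized: %@", result.text);
    } else {
        NSLog(@"Recognition failed or was canceled (reason: %lu).", (unsigned long)result.reason);
    }
}

If recognition succeeds, result.text holds the transcription; checking result.reason lets the app distinguish a successful recognition from a no-match or cancellation before deciding what to display.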

References