MR and Azure 302: Computer vision


Note

The Mixed Reality Academy tutorials were designed with HoloLens (1st gen) and Mixed Reality Immersive Headsets in mind. As such, we feel it is important to leave these tutorials in place for developers who are still looking for guidance in developing for those devices. These tutorials will not be updated with the latest toolsets or interactions being used for HoloLens 2. They will be maintained to continue working on the supported devices. A new series of tutorials will be posted in the future demonstrating how to develop for HoloLens 2. This notice will be updated with a link to those tutorials when they are posted.


In this course, you will learn how to recognize visual content within a provided image, using Azure Computer Vision capabilities in a mixed reality application.

Recognition results will be displayed as descriptive tags. You can use this service without needing to train a machine learning model. If your implementation requires training a machine learning model, see MR and Azure 302b.

Lab outcome

Microsoft Computer Vision is a set of APIs designed to provide developers with image processing and analysis capabilities, using advanced algorithms, all from the cloud. Developers upload an image or image URL, and the Microsoft Computer Vision API algorithms analyze the visual content based upon inputs chosen by the user, which can then return information, including identifying the type and quality of an image, detecting human faces (returning their coordinates), and tagging or categorizing images. For more information, visit the Azure Computer Vision API page.

Having completed this course, you will have a mixed reality HoloLens application, which will be able to do the following:

  1. Using the Tap gesture, the camera of the HoloLens will capture an image.
  2. The image will be sent to the Azure Computer Vision API Service.
  3. The objects recognized will be listed in a simple UI group positioned in the Unity Scene.

In your application, it is up to you how you will integrate the results with your design. This course is designed to teach you how to integrate an Azure Service with your Unity project. It is your job to use the knowledge you gain from this course to enhance your mixed reality application.

Device support

Course                               HoloLens    Immersive headsets
MR and Azure 302: Computer vision    ✔️          ✔️

Note

While this course primarily focuses on HoloLens, you can also apply what you learn in this course to Windows Mixed Reality immersive (VR) headsets. Because immersive (VR) headsets do not have accessible cameras, you will need an external camera connected to your PC. As you follow along with the course, you will see notes on any changes you might need to employ to support immersive (VR) headsets.

Prerequisites

Note

This tutorial is designed for developers who have basic experience with Unity and C#. Please also be aware that the prerequisites and written instructions within this document represent what was tested and verified at the time of writing (May 2018). You are free to use the latest software, as listed within the install the tools article, though it should not be assumed that the information in this course will perfectly match what you'll find in newer software than what's listed below.

We recommend the following hardware and software for this course:

Before you start

  1. To avoid encountering issues building this project, it is strongly suggested that you create the project mentioned in this tutorial in a root or near-root folder (long folder paths can cause issues at build-time).
  2. Set up and test your HoloLens. If you need support setting up your HoloLens, make sure to visit the HoloLens setup article.
  3. It is a good idea to perform Calibration and Sensor Tuning when beginning to develop a new HoloLens app (sometimes it can help to perform those tasks for each user).

For help on Calibration, please follow this link to the HoloLens Calibration article.

For help on Sensor Tuning, please follow this link to the HoloLens Sensor Tuning article.

Chapter 1 – The Azure Portal

To use the Computer Vision API service in Azure, you will need to configure an instance of the service to be made available to your application.

  1. First, log in to the Azure Portal.

    Note

    If you do not already have an Azure account, you will need to create one. If you are following this tutorial in a classroom or lab situation, ask your instructor or one of the proctors for help setting up your new account.

  2. Once you are logged in, click on New in the top left corner, search for Computer Vision API, and press Enter.

    Create a new resource in Azure

    Note

    The word New may have been replaced with Create a resource in newer portals.

  3. The new page will provide a description of the Computer Vision API service. At the bottom left of this page, select the Create button, to create an association with this service.

    About the Computer Vision API service

  4. 单击 " 创建" 后:Once you have clicked on Create:

    1. Insert your desired Name for this service instance.

    2. Select a Subscription.

    3. Select the Pricing Tier appropriate for you; if this is the first time creating a Computer Vision API Service, a free tier (named F0) should be available to you.

    4. Choose a Resource Group or create a new one. A resource group provides a way to monitor, control access, provision and manage billing for a collection of Azure assets. It is recommended to keep all the Azure services associated with a single project (e.g. such as these labs) under a common resource group.

      If you wish to read more about Azure Resource Groups, please visit the resource group article.

    5. Determine the Location for your resource group (if you are creating a new Resource Group). The location would ideally be in the region where the application would run. Some Azure assets are only available in certain regions.

    6. You will also need to confirm that you have understood the Terms and Conditions applied to this Service.

    7. Click Create.

      Service creation information

  5. 单击 " 创建" 后,需要等待创建服务,这可能需要一分钟时间。Once you have clicked on Create, you will have to wait for the service to be created, this might take a minute.

  6. A notification will appear in the portal once the Service instance is created.

    See the new notification for your new service

  7. Click on the notification to explore your new Service instance.

    选择 "中转到资源" 按钮。

  8. Click the Go to resource button in the notification to explore your new Service instance. You will be taken to your new Computer Vision API service instance.

    New Computer Vision API service image

  9. Within this tutorial, your application will need to make calls to your service, which is done through using your service's Subscription Key.

  10. From the Quick start page of your Computer Vision API service, navigate to the first step, Grab your keys, and click Keys (you can also achieve this by clicking the blue hyperlink Keys, located in the services navigation menu, denoted by the key icon). This will reveal your service Keys.

  11. Take a copy of one of the displayed keys, as you will need this later in your project.

  12. 返回到 " 快速启动 " 页,从该处获取终结点。Go back to the Quick start page, and from there, fetch your endpoint. 请注意,你可能会有所不同,具体取决于你所在的区域 (如果是,则你稍后需要对代码进行更改) 。Be aware yours may be different, depending on your region (which if it is, you will need to make a change to your code later). 获取此终结点的副本供以后使用:Take a copy of this endpoint for use later:

    New Computer Vision API service

    Tip

    You can check what the various endpoints are HERE.
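To make the key and endpoint concrete, the raw HTTP request the application will assemble later in this course looks roughly like the sketch below. The header name Ocp-Apim-Subscription-Key is the actual header used by the service; the region and key value shown are placeholders you substitute with your own:

```
POST https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze?visualFeatures=Tags HTTP/1.1
Ocp-Apim-Subscription-Key: <your subscription key>
Content-Type: application/octet-stream

<binary image data>
```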

Chapter 2 – Set up the Unity project

The following is a typical set up for developing with mixed reality and, as such, is a good template for other projects.

  1. Open Unity and click New.

    Start a new Unity project.

  2. You will now need to provide a Unity Project name. Insert MR_ComputerVision. Make sure the project type is set to 3D. Set the Location to somewhere appropriate for you (remember, closer to root directories is better). Then, click Create project.

    Provide details for the new Unity project.

  3. With Unity open, it is worth checking the default Script Editor is set to Visual Studio. Go to Edit > Preferences and then, from the new window, navigate to External Tools. Change External Script Editor to Visual Studio 2017. Close the Preferences window.

    Update the script editor preference.

  4. Next, go to File > Build Settings and select Universal Windows Platform, then click on the Switch Platform button to apply your selection.

    Build Settings window, switching platform to UWP.

  5. While still in File > Build Settings, make sure that:

    1. Target Device is set to HoloLens

      For the immersive headsets, set Target Device to Any Device.

    2. Build Type is set to D3D

    3. SDK is set to Latest installed

    4. Visual Studio Version is set to Latest installed

    5. "生成并运行" 设置为 "本地计算机"Build and Run is set to Local Machine

    6. Save the scene and add it to the build.

      1. Do this by selecting Add Open Scenes. A save window will appear.

        单击 "添加打开的场景" 按钮

      2. Create a new folder for this, and any future, scenes. Select the New folder button to create a new folder, and name it Scenes.

        "创建新脚本" 文件夹

      3. Open your newly created Scenes folder, and then in the File name: text field, type MR_ComputerVisionScene, then click Save.

        Give the new scene a name.

        Be aware, you must save your Unity scenes within the Assets folder, as they must be associated with the Unity Project. Creating the scenes folder (and other similar folders) is a typical way of structuring a Unity project.

    7. 现在," 生成设置" 中的其余设置应保留为默认值。The remaining settings, in Build Settings, should be left as default for now.

  6. 在 " 生成设置 " 窗口中,单击 " 播放机设置 " 按钮,这会在 检查器 所在的空间中打开相关面板。In the Build Settings window, click on the Player Settings button, this will open the related panel in the space where the Inspector is located.

    Open Player Settings.

  7. In this panel, a few settings need to be verified:

    1. 在 " 其他设置 " 选项卡中:In the Other Settings tab:

      1. Scripting Runtime Version should be Stable (.NET 3.5 Equivalent).

      2. Scripting Backend should be .NET

      3. API Compatibility Level should be .NET 4.6

        Update Other Settings.

    2. 在 " 发布设置 " 选项卡的 " 功能" 下,检查:Within the Publishing Settings tab, under Capabilities, check:

      1. InternetClient

      2. Webcam

        Update Publishing Settings.

    3. Further down the panel, in XR Settings (found below Publish Settings), tick Virtual Reality Supported, and make sure the Windows Mixed Reality SDK is added.

      Update the XR Settings.

  8. Back in Build Settings, Unity C# Projects is no longer greyed out; tick the checkbox next to this.

  9. Close the Build Settings window.

  10. Save your Scene and Project (FILE > SAVE SCENE / FILE > SAVE PROJECT).

Chapter 3 – Main Camera setup

Important

If you wish to skip the Unity Set up component of this course, and continue straight into code, feel free to download this .unitypackage, import it into your project as a Custom Package, and then continue from Chapter 5.

  1. 在 " 层次结构" 面板 中,选择 " 摄像机"。In the Hierarchy Panel, select the Main Camera.

  2. Once selected, you will be able to see all the components of the Main Camera in the Inspector Panel.

    1. The Camera object must be named Main Camera (note the spelling!)

    2. The Main Camera Tag must be set to MainCamera (note the spelling!)

    3. Make sure the Transform Position is set to 0, 0, 0

    4. 将 " 清除标志 " 设置为 纯色 (为沉浸式头戴式耳机) 忽略此标志。Set Clear Flags to Solid Color (ignore this for immersive headset).

    5. Set the Background Color of the Camera Component to Black, Alpha 0 (Hex Code: #00000000) (ignore this for immersive headsets).

      Update the camera components.

  3. Next, you will have to create a simple "Cursor" object attached to the Main Camera, which will help you position the image analysis output when the application is running. This Cursor will determine the center point of the camera focus.

To create the Cursor:

  1. 在 " 层次结构" 面板 中,右键单击 主相机In the Hierarchy Panel, right-click on the Main Camera. 在 " 3D 对象" 下,单击 " 球面"。Under 3D Object, click on Sphere.

    Select the Cursor object.

  2. 球体 重命名为 光标 (双击光标对象或按下 "F2" 键盘按钮) 所选对象,并确保其位于 主相机 的子项。Rename the Sphere to Cursor (double click the Cursor object or press the ‘F2’ keyboard button with the object selected), and make sure it is located as child of the Main Camera.

  3. 在 " 层次结构" 面板 中,左键单击 光标In the Hierarchy Panel, left click on the Cursor. 选择光标后,在 " 检查器" 面板 中调整以下变量:With the Cursor selected, adjust the following variables in the Inspector Panel:

    1. 转换位置 设置为 0、0、5Set the Transform Position to 0, 0, 5

    2. 刻度 设置为 0.02、0.02、0.02Set the Scale to 0.02, 0.02, 0.02

      Update the transform position and scale.

Chapter 4 – Set up the Label system

Once you have captured an image with the HoloLens' camera, that image will be sent to your Azure Computer Vision API Service instance for analysis.

The results of that analysis will be a list of recognized objects called Tags.

You will use Labels (as 3D text in world space) to display these Tags at the location the photo was taken.

The following steps will show how to set up the Label object.

  1. 右键单击 "层次结构" 面板中的任意位置 (此时位置并不重要) 在 " 三维对象" 下,添加 3d 文本Right-click anywhere in the Hierarchy Panel (the location does not matter at this point), under 3D Object, add a 3D Text. 将其命名为 LabelTextName it LabelText.

    Create a 3D text object.

  2. 在 " 层次结构" 面板 中,单击 LabelTextIn the Hierarchy Panel, left click on the LabelText. 选择 LabelText 后,在 " 检查器" 面板 中调整以下变量:With the LabelText selected, adjust the following variables in the Inspector Panel:

    1. Set the Position to 0, 0, 0
    2. Set the Scale to 0.01, 0.01, 0.01
    3. In the component Text Mesh:
    4. Replace all the text within Text with "..."
    5. Set the Anchor to Middle Center
    6. Set the Alignment to Center
    7. Set the Tab Size to 4
    8. Set the Font Size to 50
    9. Set the Color to #FFFFFFFF

    Text component

  3. LabelText 从 " 层次结构" 面板 中拖到 " 资源" 文件夹 内的 " 项目" 面板 中。Drag the LabelText from the Hierarchy Panel, into the Asset Folder, within in the Project Panel. 这会将 LabelText 设置为 Prefab,以便可以在代码中对其进行实例化。This will make the LabelText a Prefab, so that it can be instantiated in code.

    Create a prefab of the LabelText object.

  4. 你应从 "层次结构" 面板 中删除 LabelText ,以使其不会在打开场景中显示。You should delete the LabelText from the Hierarchy Panel, so that it will not be displayed in the opening scene. 由于它现在是一个 prefab,你将从资产文件夹中的单个实例上调用,无需将其保存在场景中。As it is now a prefab, which you will call on for individual instances from your Assets folder, there is no need to keep it within the scene.

  5. " 层次结构" 面板 中的最后一个对象结构应类似于下图所示:The final object structure in the Hierarchy Panel should be like the one shown in the image below:

    Final structure of the Hierarchy Panel.

Chapter 5 – Create the ResultsLabel class

The first script you need to create is the ResultsLabel class, which is responsible for the following:

  • Creating the Labels in the appropriate world space, relative to the position of the Camera.
  • Displaying the Tags from the Image Analysis.

To create this class:

  1. 右键单击 " 项目" 面板,然后 创建 > 文件夹Right-click in the Project Panel, then Create > Folder. 命名文件夹 脚本Name the folder Scripts.

    Create the Scripts folder.

  2. 在 " 脚本 " 文件夹中,双击以打开。With the Scripts folder create, double click it to open. 然后在该文件夹中,右键单击,然后选择 " 创建 > 然后选择" c # 脚本"。Then within that folder, right-click, and select Create > then C# Script. 将脚本命名为 ResultsLabelName the script ResultsLabel.

  3. Double click on the new ResultsLabel script to open it with Visual Studio.

  4. Insert the following code in the ResultsLabel class:

        using System.Collections.Generic;
        using UnityEngine;
    
        public class ResultsLabel : MonoBehaviour
        {   
            public static ResultsLabel instance;
    
            public GameObject cursor;
    
            public Transform labelPrefab;
    
            [HideInInspector]
            public Transform lastLabelPlaced;
    
            [HideInInspector]
            public TextMesh lastLabelPlacedText;
    
            private void Awake()
            {
                // allows this instance to behave like a singleton
                instance = this;
            }
    
            /// <summary>
            /// Instantiate a Label in the appropriate location relative to the Main Camera.
            /// </summary>
            public void CreateLabel()
            {
                lastLabelPlaced = Instantiate(labelPrefab, cursor.transform.position, transform.rotation);
    
                lastLabelPlacedText = lastLabelPlaced.GetComponent<TextMesh>();
    
                // Change the text of the label to show that it has been placed
                // The final text will be set at a later stage
                lastLabelPlacedText.text = "Analysing...";
            }
    
            /// <summary>
            /// Set the Tags as Text of the last Label created. 
            /// </summary>
            public void SetTagsToLastLabel(Dictionary<string, float> tagsDictionary)
            {
                lastLabelPlacedText = lastLabelPlaced.GetComponent<TextMesh>();
    
                // At this point we go through all the tags received and set them as text of the label
                lastLabelPlacedText.text = "I see: \n";
    
                foreach (KeyValuePair<string, float> tag in tagsDictionary)
                {
                    lastLabelPlacedText.text += tag.Key + ", Confidence: " + tag.Value.ToString("0.00 \n");
                }    
            }
        }
    
  5. Be sure to save your changes in Visual Studio before returning to Unity.

  6. Back in the Unity Editor, click and drag the ResultsLabel class from the Scripts folder to the Main Camera object in the Hierarchy Panel.

  7. Click on the Main Camera and look at the Inspector Panel.

You will notice that the script you just dragged onto the Camera has two fields: Cursor and Label Prefab.

  1. Drag the object called Cursor from the Hierarchy Panel to the slot named Cursor, as shown in the image below.

  2. Drag the object called LabelText from the Assets Folder in the Project Panel to the slot named Label Prefab, as shown in the image below.

    Set the reference targets in Unity.

Chapter 6 – Create the ImageCapture class

The next class you are going to create is the ImageCapture class. This class is responsible for:

  • Capturing an image using the HoloLens camera and storing it in the App folder.
  • Capturing Tap gestures from the user.

To create this class:

  1. Go to the Scripts folder you created previously.

  2. Right-click inside the folder, Create > C# Script. Call the script ImageCapture.

  3. Double click on the new ImageCapture script to open it with Visual Studio.

  4. Add the following namespaces to the top of the file:

        using System.IO;
        using System.Linq;
        using UnityEngine;
        using UnityEngine.XR.WSA.Input;
        using UnityEngine.XR.WSA.WebCam;
    
  5. Then add the following variables inside the ImageCapture class, above the Start() method:

        public static ImageCapture instance; 
        public int tapsCount;
        private PhotoCapture photoCaptureObject = null;
        private GestureRecognizer recognizer;
        private bool currentlyCapturing = false;
    

The tapsCount variable will store the number of tap gestures captured from the user. This number is used in the naming of the images captured.

  1. Code for the Awake() and Start() methods now needs to be added. These will be called when the class initializes:

        private void Awake()
        {
            // Allows this instance to behave like a singleton
            instance = this;
        }
    
        void Start()
        {
            // subscribing to the HoloLens API gesture recognizer to track user gestures
            recognizer = new GestureRecognizer();
            recognizer.SetRecognizableGestures(GestureSettings.Tap);
            recognizer.Tapped += TapHandler;
            recognizer.StartCapturingGestures();
        }
    
  2. Implement a handler that will be called when a Tap gesture occurs.

        /// <summary>
        /// Respond to Tap Input.
        /// </summary>
        private void TapHandler(TappedEventArgs obj)
        {
            // Only allow capturing, if not currently processing a request.
            if(currentlyCapturing == false)
            {
                currentlyCapturing = true;
    
                // increment taps count, used to name images when saving
                tapsCount++;
    
                // Create a label in world space using the ResultsLabel class
                ResultsLabel.instance.CreateLabel();
    
                // Begins the image capture and analysis procedure
                ExecuteImageCaptureAndAnalysis();
            }
        }
    

The TapHandler() method increments the number of taps captured from the user and uses the current Cursor position to determine where to position a new Label.

This method then calls the ExecuteImageCaptureAndAnalysis() method to begin the core functionality of this application.

  1. Once an image has been captured and stored, the following handlers will be called. If the process is successful, the result is passed to the VisionManager class (which you are yet to create) for analysis.

        /// <summary>
        /// Register the full execution of the Photo Capture. If successful, it will begin 
        /// the Image Analysis process.
        /// </summary>
        void OnCapturedPhotoToDisk(PhotoCapture.PhotoCaptureResult result)
        {
            // Call StopPhotoMode once the image has successfully captured
            photoCaptureObject.StopPhotoModeAsync(OnStoppedPhotoMode);
        }
    
        void OnStoppedPhotoMode(PhotoCapture.PhotoCaptureResult result)
        {
            // Dispose from the object in memory and request the image analysis 
            // to the VisionManager class
            photoCaptureObject.Dispose();
            photoCaptureObject = null;
            StartCoroutine(VisionManager.instance.AnalyseLastImageCaptured()); 
        }
    
  2. Then add the method that the application uses to start the image capture process and store the image.

        /// <summary>    
        /// Begin process of Image Capturing and send To Azure     
        /// Computer Vision service.   
        /// </summary>    
        private void ExecuteImageCaptureAndAnalysis()  
        {    
            // Set the camera resolution to be the highest possible    
            Resolution cameraResolution = PhotoCapture.SupportedResolutions.OrderByDescending((res) => res.width * res.height).First();    
    
            Texture2D targetTexture = new Texture2D(cameraResolution.width, cameraResolution.height);
    
            // Begin capture process, set the image format    
            PhotoCapture.CreateAsync(false, delegate (PhotoCapture captureObject)    
            {    
                photoCaptureObject = captureObject;    
                CameraParameters camParameters = new CameraParameters();    
                camParameters.hologramOpacity = 0.0f;    
                camParameters.cameraResolutionWidth = targetTexture.width;    
                camParameters.cameraResolutionHeight = targetTexture.height;    
                camParameters.pixelFormat = CapturePixelFormat.BGRA32;
    
                // Capture the image from the camera and save it in the App internal folder    
                captureObject.StartPhotoModeAsync(camParameters, delegate (PhotoCapture.PhotoCaptureResult result)
                {    
                    string filename = string.Format(@"CapturedImage{0}.jpg", tapsCount);
    
                    string filePath = Path.Combine(Application.persistentDataPath, filename);
    
                    VisionManager.instance.imagePath = filePath;
    
                    photoCaptureObject.TakePhotoAsync(filePath, PhotoCaptureFileOutputFormat.JPG, OnCapturedPhotoToDisk);
    
                    currentlyCapturing = false;
                });   
            });    
        }
    

Warning

At this point you will notice an error appearing in the Unity Editor Console Panel. This is because the code references the VisionManager class, which you will create in the next Chapter.

Chapter 7 – Call to Azure and Image Analysis

The last script you need to create is the VisionManager class.

This class is responsible for:

  • Loading the latest image captured as an array of bytes.
  • Sending the byte array to your Azure Computer Vision API Service instance for analysis.
  • Receiving the response as a JSON string.
  • Deserializing the response and passing the resulting Tags to the ResultsLabel class.

To create this class:

  1. 双击 " 脚本 " 文件夹以将其打开。Double click on the Scripts folder, to open it.

  2. 右键单击 " 脚本 " 文件夹中,单击 " 创建 > c # 脚本"。Right-click inside the Scripts folder, click Create > C# Script. 将脚本命名为 VisionManagerName the script VisionManager.

  3. Double click on the new script to open it with Visual Studio.

  4. VisionManager 类的顶部,将命名空间更新为与以下相同:Update the namespaces to be the same as the following, at the top of the VisionManager class:

        using System;
        using System.Collections;
        using System.Collections.Generic;
        using System.IO;
        using UnityEngine;
        using UnityEngine.Networking;
    
  5. At the top of your script, inside the VisionManager class (above the Start() method), you now need to create two classes that will represent the deserialized JSON response from Azure:

        [System.Serializable]
        public class TagData
        {
            public string name;
            public float confidence;
        }
    
        [System.Serializable]
        public class AnalysedObject
        {
            public TagData[] tags;
            public string requestId;
            public object metadata;
        }
    

    Note

    TagDataAnalysedObject 类需要添加 [system.exception] 特性,然后才能使用 Unity 库反序列化声明。The TagData and AnalysedObject classes need to have the [System.Serializable] attribute added before the declaration to be able to be deserialized with the Unity libraries.

  6. In the VisionManager class, you should add the following variables:

        public static VisionManager instance;
    
        // you must insert your service key here!    
        private string authorizationKey = "- Insert your key here -";    
        private const string ocpApimSubscriptionKeyHeader = "Ocp-Apim-Subscription-Key";
        private string visionAnalysisEndpoint = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze?visualFeatures=Tags";   // This is where you need to update your endpoint, if you set your location to something other than west-us.
    
        internal byte[] imageBytes;
    
        internal string imagePath;
    

    Warning

    Make sure you insert your Auth Key into the authorizationKey variable. You will have noted your Auth Key at the beginning of this course, in Chapter 1.

    Warning

    The visionAnalysisEndpoint variable might differ from the one specified in this example. westus strictly refers to Service instances created for the West US region. Update this with your endpoint URL; here are some examples of what that might look like:

    • West Europe: https://westeurope.api.cognitive.microsoft.com/vision/v1.0/analyze?visualFeatures=Tags
    • Southeast Asia: https://southeastasia.api.cognitive.microsoft.com/vision/v1.0/analyze?visualFeatures=Tags
    • Australia East: https://australiaeast.api.cognitive.microsoft.com/vision/v1.0/analyze?visualFeatures=Tags
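    The region-specific endpoints above all share one pattern. As an illustration (sketched in Python for convenience, since it runs outside Unity; the helper name is hypothetical), the URL can be assembled from the region name:

    ```python
    # Hypothetical helper that assembles a region-specific Computer Vision
    # v1.0 analyze endpoint; "region" must match the region in which your
    # Service instance was created (check the Azure portal if unsure).
    def vision_endpoint(region, visual_features="Tags"):
        return ("https://" + region + ".api.cognitive.microsoft.com"
                "/vision/v1.0/analyze?visualFeatures=" + visual_features)

    print(vision_endpoint("westeurope"))
    # → https://westeurope.api.cognitive.microsoft.com/vision/v1.0/analyze?visualFeatures=Tags
    ```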
  7. Code for Awake now needs to be added:

        private void Awake()
        {
            // allows this instance to behave like a singleton
            instance = this;
        }
    
  8. Next, add the coroutine (with the static stream method below it), which will obtain the results of the analysis of the image captured by the ImageCapture class:

        /// <summary>
        /// Call the Computer Vision Service to submit the image.
        /// </summary>
        public IEnumerator AnalyseLastImageCaptured()
        {
            WWWForm webForm = new WWWForm();
            using (UnityWebRequest unityWebRequest = UnityWebRequest.Post(visionAnalysisEndpoint, webForm))
            {
                // gets a byte array out of the saved image
                imageBytes = GetImageAsByteArray(imagePath);
                unityWebRequest.SetRequestHeader("Content-Type", "application/octet-stream");
                unityWebRequest.SetRequestHeader(ocpApimSubscriptionKeyHeader, authorizationKey);
    
                // the download handler will help receiving the analysis from Azure
                unityWebRequest.downloadHandler = new DownloadHandlerBuffer();
    
                // the upload handler will help uploading the byte array with the request
                unityWebRequest.uploadHandler = new UploadHandlerRaw(imageBytes);
                unityWebRequest.uploadHandler.contentType = "application/octet-stream";
    
                yield return unityWebRequest.SendWebRequest();
    
                long responseCode = unityWebRequest.responseCode;     
    
                try
                {
                    string jsonResponse = unityWebRequest.downloadHandler.text;
    
                    // The response will be in JSON format,
                    // therefore it needs to be deserialized into the classes AnalysedObject and TagData
                    AnalysedObject analysedObject = JsonUtility.FromJson<AnalysedObject>(jsonResponse);
    
                    if (analysedObject.tags == null)
                    {
                        Debug.Log("analysedObject.tags is null");
                    }
                    else
                    {
                        Dictionary<string, float> tagsDictionary = new Dictionary<string, float>();
    
                        foreach (TagData tag in analysedObject.tags)
                        {
                            tagsDictionary.Add(tag.name, tag.confidence);                            
                        }
    
                        ResultsLabel.instance.SetTagsToLastLabel(tagsDictionary);
                    }
                }
                catch (Exception exception)
                {
                    Debug.Log("Json exception.Message: " + exception.Message);
                }
    
                yield return null;
            }
        }
    
        /// <summary>
        /// Returns the contents of the specified file as a byte array.
        /// </summary>
        private static byte[] GetImageAsByteArray(string imageFilePath)
        {
            // open, read, and dispose of the file stream once the bytes have been read
            using (FileStream fileStream = new FileStream(imageFilePath, FileMode.Open, FileAccess.Read))
            using (BinaryReader binaryReader = new BinaryReader(fileStream))
            {
                return binaryReader.ReadBytes((int)fileStream.Length);
            }
        }  
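    For reference, a Tags response from the service has roughly the shape below, mirroring the TagData and AnalysedObject classes defined earlier. The deserialization step performed inside AnalyseLastImageCaptured() can be sketched outside Unity in Python (the sample response is illustrative, not captured from a live call):

    ```python
    import json

    # Illustrative Tags response; field names mirror the TagData and
    # AnalysedObject classes above, but the values here are made up.
    sample_response = """
    {
      "tags": [
        { "name": "grass",   "confidence": 0.9999 },
        { "name": "outdoor", "confidence": 0.985 }
      ],
      "requestId": "00000000-0000-0000-0000-000000000000",
      "metadata": { "width": 1280, "height": 720, "format": "Jpeg" }
    }
    """

    analysed_object = json.loads(sample_response)

    # Equivalent of building tagsDictionary in the coroutine:
    tags_dictionary = {tag["name"]: tag["confidence"]
                       for tag in analysed_object["tags"]}
    print(tags_dictionary)
    # → {'grass': 0.9999, 'outdoor': 0.985}
    ```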
    
  9. Be sure to save your changes in Visual Studio before returning to Unity.

  10. Back in the Unity Editor, click and drag the VisionManager and ImageCapture classes from the Scripts folder to the Main Camera object in the Hierarchy Panel.

Chapter 8 – Before building

To perform a thorough test of your application, you will need to sideload it onto your HoloLens. Before you do, ensure that:

  • All the settings mentioned in Chapter 2 are set correctly.
  • All the scripts are attached to the Main Camera object.
  • All the fields in the Main Camera Inspector Panel are assigned properly.
  • Make sure you insert your Auth Key into the authorizationKey variable.
  • Ensure that you have also checked your endpoint in your VisionManager script, and that it aligns to your region (this document uses westus by default).

Chapter 9 – Build the UWP Solution and sideload the application

Everything needed for the Unity section of this project has now been completed, so it is time to build it from Unity.

  1. Navigate to Build Settings - File > Build Settings…

  2. From the Build Settings window, click Build.

    Build the app from Unity

  3. If not already, tick Unity C# Projects.

  4. Click Build. Unity will launch a File Explorer window, where you need to create and then select a folder to build the app into. Create that folder now, and name it App. Then with the App folder selected, press Select Folder.

  5. Unity will begin building your project to the App folder.

  6. Once Unity has finished building (it might take some time), it will open a File Explorer window at the location of your build (check your task bar, as it may not always appear above your windows, but will notify you of the addition of a new window).

Chapter 10 – Deploy to HoloLens

To deploy on HoloLens:

  1. You will need the IP Address of your HoloLens (for Remote Deploy), and to ensure your HoloLens is in Developer Mode. To do this:

    1. Whilst wearing your HoloLens, open the Settings.
    2. Go to Network & Internet > Wi-Fi > Advanced Options.
    3. Note the IPv4 address.
    4. Next, navigate back to Settings, and then to Update & Security > For Developers.
    5. Set Developer Mode to On.
  2. Navigate to your new Unity build (the App folder) and open the solution file with Visual Studio.

  3. In the Solution Configuration, select Debug.

  4. In the Solution Platform, select x86, Remote Machine.

    Deploy the solution from Visual Studio

  5. Go to the Build menu and click on Deploy Solution, to sideload the application to your HoloLens.

  6. Your app should now appear in the list of installed apps on your HoloLens, ready to be launched!

Note

To deploy to the immersive headset, set the Solution Platform to Local Machine, and set the Configuration to Debug, with x86 as the Platform. Then deploy to the local machine, using the Build menu, selecting Deploy Solution.

Your finished Computer Vision API application

Congratulations, you built a mixed reality app that leverages the Azure Computer Vision API to recognize real-world objects, and displays the confidence of what has been seen.

Lab result

Bonus exercises

Exercise 1

Just as you have used the Tags parameter (as evidenced within the endpoint used within the VisionManager), extend the app to detect other information; have a look at what other parameters you have access to HERE.

Exercise 2

Display the returned Azure data in a more conversational and readable way, perhaps hiding the numbers, as though a bot might be speaking to the user.
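As a possible starting point for this exercise, the tag dictionary could be flattened into a single sentence that drops the raw confidence values (sketched in Python for illustration; the helper name, threshold, and phrasing are all assumptions, not part of the course):

```python
def describe(tags):
    # Keep only reasonably confident tags, and hide the numbers entirely.
    names = [name for name, confidence in tags.items() if confidence >= 0.5]
    if not names:
        return "I'm not sure what I'm looking at."
    return "I think I can see: " + ", ".join(names) + "."

print(describe({"grass": 0.9999, "outdoor": 0.985, "cat": 0.1}))
# → I think I can see: grass, outdoor.
```

In the Unity project, the equivalent logic would replace SetTagsToLastLabel() in the ResultsLabel class with a method that builds a sentence instead of a tag list.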