Using ARKit with UrhoSharp in Xamarin.iOS

With the introduction of ARKit, Apple has made it simple for developers to create augmented reality applications. ARKit can track the exact position of your device and detect various surfaces in the world, and it is then up to the developer to blend the data coming out of ARKit into their code.

UrhoSharp provides a comprehensive and easy-to-use 3D API that you can use to create 3D applications. The two can be blended together: ARKit provides the physical information about the world, and Urho renders the results.

This page explains how to connect these two worlds together to create great augmented reality applications.

The Basics

What we want to do is present 3D content on top of the world as seen by the iPhone/iPad. The idea is to blend the content coming from the device's camera with the 3D content and, as the user of the device moves around the room, to ensure that the 3D objects behave as if they were part of that room. This is done by anchoring the objects into the world.

Animated figure in ARKit

We will be using the Urho library to load our 3D assets and place them in the world, and we will be using ARKit to get the video stream coming from the camera as well as the location of the phone in the world. As the user moves around with the phone, we will use the changes in location to update the coordinate system that the Urho engine is displaying.

This way, when you place an object in 3D space and the user moves, the location of the 3D object reflects the place where it was put.

Setting up your application

iOS Application Launch

Your iOS application needs to create and launch your 3D content. You do this by creating a subclass of Urho.Application and providing your setup code by overriding the Start method. This is where your scene gets populated with data, event handlers are set up, and so on.
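One common pattern, sketched here under the assumption of an app class like the MutantDemo shown later on this page, is to launch the Urho application from a plain view controller. The GameViewController name and the "Data" resource folder are illustrative assumptions, not part of the ArkitApp API:

using UIKit;
using Urho;

public class GameViewController : UIViewController
{
    public override void ViewDidLoad()
    {
        base.ViewDidLoad();
        // create and run the Urho application; "Data" is the usual
        // UrhoSharp resource folder inside the app bundle
        new MutantDemo(new ApplicationOptions("Data")).Run();
    }
}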

We have introduced an Urho.ArkitApp class that subclasses Urho.Application and does the heavy lifting in its Start method. All you need to do to your existing Urho application is change the base class to Urho.ArkitApp, and you have an application that will run your Urho Scene in the world.

The ArkitApp Class

This class provides a set of convenient defaults: a scene with some key objects, as well as the processing of ARKit events as they are delivered by the operating system.

The setup takes place in the Start virtual method. When you override this method in your subclass, you need to make sure to chain to your parent by calling base.Start() in your own implementation.

The Start method sets up the scene, viewport, camera, and a directional light, and surfaces those as public properties:

  • a Scene to hold your objects,
  • a directional Light with shadows, whose location is available via the LightNode property,
  • a Camera whose components are updated when ARKit delivers an update to the application, and
  • a ViewPort displaying the results.
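For instance, here is a minimal sketch of adjusting those defaults from your own Start override. The property names are the ones listed above; the specific values are illustrative assumptions:

protected override void Start()
{
    base.Start();

    // aim the default directional light a little more from the side
    LightNode.SetDirection(new Vector3(0.75f, -1.0f, 0f));

    // let the ARKit-driven camera see further into the scene
    Camera.FarClip = 50f;
}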

Your code

You then need to subclass the ArkitApp class and override the Start method. The first thing your method should do is chain up to ArkitApp.Start by calling base.Start(). After that, you can use any of the properties set up by ArkitApp to add your objects to the scene and to customize the lights, shadows, or events that you want to handle.

The ARKit/UrhoSharp sample loads an animated character with textures and plays the animation, with the following implementation:

public class MutantDemo : ArkitApp
{
    [Preserve]
    public MutantDemo(ApplicationOptions opts) : base(opts) { }

    Node mutantNode;

    protected override void Start()
    {
        base.Start ();

        // Mutant
        mutantNode = Scene.CreateChild();
        mutantNode.Rotation = new Quaternion(x: 0, y:15, z:0);
        mutantNode.Position = new Vector3(0, -1f, 2f); /*two meters away*/
        mutantNode.SetScale(0.5f);

        var mutant = mutantNode.CreateComponent<AnimatedModel>();
        mutant.Model = ResourceCache.GetModel("Models/Mutant.mdl");
        mutant.Material = ResourceCache.GetMaterial("Materials/mutant_M.xml");

        var animation = mutantNode.CreateComponent<AnimationController>();
        animation.Play("Animations/Mutant_HipHop1.ani", 0, true, 0.2f);
    }
}

And that is really all you have to do at this point to have your 3D content displayed in augmented reality.

Urho uses custom formats for 3D models and animations, so you need to export your assets into this format. You can use tools like the Urho3D Blender Add-in and UrhoAssetImporter, which can convert assets from popular formats like FBX, DAE, OBJ, Blend, and 3ds Max into the format required by Urho.

To learn more about creating 3D applications using Urho, visit the Introduction to UrhoSharp guide.

ArkitApp in Depth

Note

This section is intended for developers who want to customize the default experience of UrhoSharp and ARKit, or who want deeper insight into how the integration works. It is not necessary to read this section.

The ARKit API is pretty simple: you create and configure an ARSession object, which then starts delivering ARFrame objects. These contain both the image captured by the camera and the estimated real-world position of the device.
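As a hedged sketch of what that looks like on the raw ARKit side (roughly what ArkitApp wires up for you; the class names here are illustrative assumptions):

using ARKit;

class FrameDelegate : ARSessionDelegate
{
    public override void DidUpdateFrame(ARSession session, ARFrame frame)
    {
        // frame.CapturedImage holds the camera picture (a CVPixelBuffer);
        // frame.Camera.Transform holds the estimated device pose
        frame.Dispose(); // ARKit recycles frames, so release them promptly
    }
}

class ArDriver
{
    ARSession session;

    public void Start()
    {
        session = new ARSession { Delegate = new FrameDelegate() };
        session.Run(new ARWorldTrackingConfiguration {
            PlaneDetection = ARPlaneDetection.Horizontal
        }, ARSessionRunOptions.ResetTracking);
    }
}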

We will be composing the images delivered by the camera with our 3D content, and adjusting the camera in UrhoSharp to match the changes in the device's location and orientation.

The following diagram shows what takes place in the ArkitApp class:

Diagram of classes and screens in the ArkitApp

Rendering the Frames

The idea is simple: combine the video coming out of the camera with our 3D graphics to produce the combined image. We will be getting a series of these captured images in sequence, and we will mix this input with the Urho scene.

The simplest way to do this is to insert a RenderPathCommand into the main RenderPath. This is the set of commands that are performed to draw a single frame. The command we add fills the viewport with any texture we pass to it. We set this up on the first frame that is processed; the actual definition is done in the ARRenderPath.xml file that is loaded at that point.
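A hedged sketch of what that insertion can look like in code (the shader and texture names and the command index are assumptions meant to mirror what ARRenderPath.xml declares, not a verbatim copy of it):

// a quad command running a custom shader that writes straight to the viewport
var videoCommand = new RenderPathCommand(RenderCommandType.Quad);
videoCommand.PixelShaderName = (StringHash)"ARKit";
videoCommand.VertexShaderName = (StringHash)"ARKit";
videoCommand.SetOutput(0, "viewport");
videoCommand.SetTextureName(TextureUnit.Diffuse, "CameraY");  // luma plane
videoCommand.SetTextureName(TextureUnit.Normal, "CameraUV");  // chroma plane

// insert it before the scene passes so the 3D content draws on top of the video
Viewport.RenderPath.InsertCommand(1, videoCommand);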

However, we face two problems in blending these two worlds together:

  1. On iOS, GPU textures must have a resolution that is a power of two, but the frames that we get from the camera do not have a power-of-two resolution, for example: 1280x720.
  2. The frames are encoded in YUV format, represented by two images - luma and chroma.

The YUV frames come in two different resolutions: a 1280x720 image representing luminance (basically a grayscale image) and a much smaller 640x360 image for the chrominance component:

Image demonstrating the combined Y and UV components

To draw a full-color image using OpenGL ES, we have to write a small shader that takes the luminance (Y component) and chrominance (UV planes) from the texture slots (in UrhoSharp these are bound as "sDiffMap" and "sNormalMap") and converts them into RGB format:

// YCbCr -> RGB conversion (BT.601 coefficients), with the chroma
// bias folded into the last column of the matrix
mat4 ycbcrToRGBTransform = mat4(
    vec4(+1.0000, +1.0000, +1.0000, +0.0000),
    vec4(+0.0000, -0.3441, +1.7720, +0.0000),
    vec4(+1.4020, -0.7141, +0.0000, +0.0000),
    vec4(-0.7010, +0.5291, -0.8860, +1.0000));

vec4 ycbcr = vec4(texture2D(sDiffMap, vTexCoord).r,
                    texture2D(sNormalMap, vTexCoord).ra, 1.0);
gl_FragColor = ycbcrToRGBTransform * ycbcr;

To render a texture whose resolution is not a power of two, we have to define the Texture2D with the following parameters:

// texture for UV-plane;
cameraUVtexture = new Texture2D();
cameraUVtexture.SetNumLevels(1);
cameraUVtexture.SetSize(640, 360, Graphics.LuminanceAlphaFormat, TextureUsage.Dynamic);
cameraUVtexture.FilterMode = TextureFilterMode.Bilinear;
cameraUVtexture.SetAddressMode(TextureCoordinate.U, TextureAddressMode.Clamp);
cameraUVtexture.SetAddressMode(TextureCoordinate.V, TextureAddressMode.Clamp);
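The texture contents then have to be refreshed with every frame ARKit delivers. Here is a hedged sketch of that upload, assuming cameraYtexture and cameraUVtexture fields configured as above (the plane sizes match the 1280x720 and 640x360 figures from earlier):

unsafe void UpdateBackground(ARFrame frame)
{
    using (var image = frame.CapturedImage)  // a CVPixelBuffer
    {
        image.Lock(CVPixelBufferLock.ReadOnly);
        // plane 0 is luma (Y), plane 1 is interleaved chroma (CbCr)
        cameraYtexture.SetData(0, 0, 0, 1280, 720, (void*)image.GetBaseAddress(0));
        cameraUVtexture.SetData(0, 0, 0, 640, 360, (void*)image.GetBaseAddress(1));
        image.Unlock(CVPixelBufferLock.ReadOnly);
    }
}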

Thus we are able to render the captured images as a background and render any scene on top of it, like that scary mutant.

Adjusting the Camera

The ARFrame objects also contain the estimated device position, and we need to move the game camera to follow it. Before ARKit, it was not a big deal to track device orientation (roll, pitch and yaw) and render a pinned hologram on top of the video, but if you moved your device even slightly, the holograms would drift.

That happens because built-in sensors such as the gyroscope are not able to track movement; they can only track acceleration. ARKit analyzes each frame and extracts feature points to track, and is thus able to give us an accurate Transform matrix containing movement and rotation data.

For example, this is how we can obtain the current position:

var row = arCamera.Transform.Row3;
CameraNode.Position = new Vector3(row.X, row.Y, -row.Z);

We use -row.Z because ARKit uses a right-handed coordinate system.
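Rotation needs the same handedness flip. Here is one sketch of applying it, assuming a hypothetical ExtractRotation helper that pulls a quaternion out of the 4x4 transform (UrhoSharp's ARKit glue performs an equivalent conversion internally):

// mirror the right-handed ARKit rotation into Urho's left-handed system
// by negating the X and Y components of the quaternion
var q = ExtractRotation(arCamera.Transform);  // hypothetical helper
CameraNode.Rotation = new Quaternion(-q.X, -q.Y, q.Z, q.W);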

Plane detection

ARKit is able to detect horizontal planes, and this ability allows you to interact with the real world: for example, we can place the mutant on a real table or on the floor. The simplest way to do that is to use the HitTest method (raycasting). It converts screen coordinates (0.5;0.5 is the center) into real-world coordinates (0;0;0 is the first frame's location).

protected Vector3? HitTest(float screenX = 0.5f, float screenY = 0.5f)
{
    var result = ARSession.CurrentFrame.HitTest(new CGPoint(screenX, screenY),
        ARHitTestResultType.ExistingPlaneUsingExtent)?.FirstOrDefault();
    if (result != null)
    {
        var row = result.WorldTransform.Row3;
        return new Vector3(row.X, row.Y, -row.Z);
    }
    return null;
}

Now we can place the mutant on a horizontal surface, depending on where the user taps on the device screen:

void OnTouchEnd(TouchEndEventArgs e)
{
    float x = e.X / (float)Graphics.Width;
    float y = e.Y / (float)Graphics.Height;
    var pos = HitTest(x, y);
    if (pos != null)
        mutantNode.Position = pos.Value;
}

Animated figure changing planes as the view moves

Realistic lighting

Depending on the real-world lighting conditions, the virtual scene should be lighter or darker to better match its surroundings. ARFrame contains a LightEstimate property that we can use to adjust the Urho ambient light, like this:

var ambientIntensity = (float) frame.LightEstimate.AmbientIntensity / 1000f;
var zone = Scene.GetComponent<Zone>();
zone.AmbientColor = Color.White * ambientIntensity;

Beyond iOS - HoloLens

UrhoSharp runs on all major operating systems, so you can reuse your existing code elsewhere.

HoloLens is one of the most exciting platforms it runs on, which means you can easily switch between iOS and HoloLens to build awesome augmented reality applications using UrhoSharp.

You can find the MutantDemo source at github.com/EgorBo/ARKitXamarinDemo.