Sketch2Code

The Need- The user interface design process involves a lot of creativity that often starts on a whiteboard, where designers share ideas. Once a design is drawn, it is usually captured in a photograph and manually translated into a working HTML wireframe that can be tried out in a web browser. This takes effort and delays the design process.

The Solution- Microsoft Cognitive Services hosts the Computer Vision service, whose model has been trained on millions of images and supports object detection for a wide range of object types. In this case, however, we need a custom model trained on images of hand-drawn design elements such as a textbox, button, or combobox. The Custom Vision Service gives us the capability to train custom models and perform object detection with them. Once we can identify HTML elements, we use the text recognition functionality in the Computer Vision service to extract any handwritten text present in the design. By combining these two pieces of information, we can generate HTML snippets for the different elements in the design. We can then infer the layout from the positions of the identified elements and generate the final HTML code accordingly.
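To make the last step concrete, the sketch below shows one way the combined results might be turned into HTML. It assumes the detection and OCR results have already been merged into a list of elements, each with a tag name, its recognized text, and normalized top/left coordinates; the element names, data shapes, and the row-grouping tolerance are all illustrative assumptions, not the actual Sketch2Code implementation.

```python
# Hypothetical sketch: turn merged detection + OCR results into HTML.
# Each detection is assumed to be a dict with 'tag', 'text', and
# normalized 'top'/'left' coordinates (0.0 - 1.0). All names are assumptions.

TAG_TO_HTML = {
    "button": lambda text: f'<button type="button">{text}</button>',
    "textbox": lambda text: f'<input type="text" placeholder="{text}" />',
    "label": lambda text: f"<label>{text}</label>",
}

def to_html(detections, row_tolerance=0.05):
    """Group elements into rows by vertical position and emit HTML."""
    # Sort top-to-bottom, then left-to-right, to infer the layout.
    ordered = sorted(detections, key=lambda d: (d["top"], d["left"]))
    rows, current, last_top = [], [], None
    for d in ordered:
        # Start a new row when the vertical gap exceeds the tolerance.
        if last_top is not None and d["top"] - last_top > row_tolerance:
            rows.append(current)
            current = []
        current.append(d)
        last_top = d["top"]
    if current:
        rows.append(current)
    body = "\n".join(
        '<div class="row">'
        + "".join(TAG_TO_HTML[d["tag"]](d["text"]) for d in row)
        + "</div>"
        for row in rows
    )
    return f"<html><body>\n{body}\n</body></html>"
```

In a real pipeline the `detections` list would come from the Custom Vision prediction endpoint (bounding boxes and tags) joined with the Computer Vision handwritten-text results by overlapping regions; here it is supplied directly so the layout-inference step can be seen in isolation.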

Products and Services- Computer Vision (handwritten OCR models), Custom Vision, Azure Functions, Blob Storage, Web App

Create a Free Account (Azure): https://aka.ms/azft-ai

https://www.ailab.microsoft.com/experiments/30c61484-d081-4072-99d6-e132d362b99d/