Use Power Fx in AI Builder models in Power Apps (preview)
[This topic is pre-release documentation and is subject to change.]
Power Fx AI integration is a new approach that lets you reference AI models in any Power Apps control using the Power Fx open-source, low-code formula language. For example, you can detect the language of any user-contributed text and translate it to another language. If you've used canvas apps, you're already familiar with Power Fx.
Important
- This is a preview feature.
- Preview features aren’t meant for production use and may have restricted functionality. These features are available before an official release so that customers can get early access and provide feedback.
- This feature is being gradually rolled out across regions and might not be available yet in your region.
Requirements
To use Power Fx in AI Builder models, you must have:
Access to a Dataverse environment with a database
AI Builder license (trial or paid)
Starter or purchased AI credits (for non-preview models)
Enable the Power Fx feature
The Power Fx feature is enabled by default in Microsoft Power Apps. If it's been disabled and you want to enable it again, you can do this from the canvas apps creation page.
Sign in to Power Apps.
Create a canvas app by selecting Create > Canvas app from blank.

In the App name field, enter a name and select Create.
If you see the Welcome to Power Apps Studio screen, select Skip.
On the toolbar at the top, select Settings.

Select Upcoming features > Preview.
Enable the AI models as data sources feature by scrolling to the end and selecting On.

Select a model in canvas apps
To consume an AI model with Power Fx, you’ll need to create a canvas app, choose a control, and assign expressions to control properties.
For a list of AI Builder models you can consume, go to AI models and business scenarios. You can also consume models built in Microsoft Azure Machine Learning with the bring your own model feature.
Create an app by following steps 1 through 4 in Enable the Power Fx feature.
Select Data tab > Add data > AI models.

Select one or more models to add.
If you don’t see your model in this list, you might not have permissions to use it in Power Apps. Contact your administrator to resolve this.
Use a model in controls
In this example of a language detection model, the canvas app shows you the language code for the text you type.
Create a canvas app by following steps 1 and 2 in the previous section, Select a model in canvas apps.
In the AI models list, select a language detection model.
Place a text input and two text labels on the canvas:
- Select + > Text input and place it on the canvas.
- Rename the text input to TextInput1.
- Select Text label and place it on the canvas.
- Rename the text label to Language.
- Add another text label by selecting Text label and place it to the right of the Language text label.

Select the text label you added in step 3e and enter the following Power Fx formula:
`First('Language detection'.Predict(TextInput1.Text).results).language`

Notice that the label changes to (Unknown).

Try out your app by selecting the Preview the app icon in the upper-right corner.

In the textbox, type bonjour. Notice that the language code for French (fr) appears below the textbox.

Try out your app again by typing guten tag. Notice that the language code for German (de) appears below the textbox.
Note
If you move your app to a different environment, the model must be manually re-added to the app in the new environment.
Input/output by model type
This section provides inputs and outputs for custom and prebuilt models by model type.
Custom models
| Model type | Input | Output |
|---|---|---|
| Category classification | Language code, text. | `results`: a table where each element has a `type` and a `score`. |
| Entity extraction | Language code, text. | `entities`: a table where each element has a `type`, `score`, `startIdx`, `length`, and `value` (the string running from `startIdx` to `startIdx + length`). |
| Document processing | Document type (MIME type string), document (base64-encoded string). | Four properties: `layoutName` (string), `layoutConfidenceScore` (number), `labels` (record containing the fields that can be identified in the form), and `tables` (record containing the tables identified in the form). |
| Object detection | Image encoded as base64. | `results`: a table of the objects found in the picture. Each element has a `boundingBox`, a `confidence` value, and a `tagId`. |
| Prediction | Properties defined when creating the model. The canvas app receives these properties as a record. | A record with `Explanation`, `Likelihood`, and `Prediction` properties. |
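As a sketch of reading a custom model's output (the model and control names here are illustrative assumptions, not from this article), an entity extraction model added as a data source could feed a gallery:

```powerfx
// Sketch only: 'My entity extraction' and TextInput1 are assumed names.
// Items property of a gallery, listing each entity found in the input text:
'My entity extraction'.Predict(TextInput1.Text).entities

// Text property of a label inside the gallery, showing one entity per row:
ThisItem.type & ": " & ThisItem.value & " (" & Text(ThisItem.score) & ")"
```

Each row of the `entities` table carries the fields listed in the table above, so `ThisItem` exposes `type`, `score`, `startIdx`, `length`, and `value`.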
Prebuilt models
| Model type | Input | Output |
|---|---|---|
| Business card reader | Image type (MIME type), image encoded as base64. | `contact`: contains all possible fields the model can identify, and `contactFields`: a table of the fields identified in the input image, each with `value`, `boundingBox`, `name`, and `parentName`. |
| Identity document reader | Image encoded as base64. | `result`: a record that contains a `fields` property, which holds all possible fields from the model. Each field has `value`, `location`, and `confidence` information. |
| Invoice processing | Image encoded as base64. | `result`: a record with `fields` and `items` properties, where `fields` is a record with all possible fields and `items` is a table of the items identified in the invoice. |
| Key phrase extraction | Language code, text. | `results`: a table of records with a single property, `phrase`, which is the extracted key phrase. |
| Language detection | Text. | `results`: a table where each element has a `language` and a `score`. |
| Receipt processing | Image encoded as base64. | `result`: a record with `fields` and `items` properties, where `fields` is a record with all possible fields and `items` is a table of the items identified in the receipt. |
| Sentiment analysis | Language code, text. | `result`: a record with `sentiment`, `documentScores`, and `sentences` properties. `sentiment` is the overall sentiment of the whole text input, `documentScores` holds the computed confidence for each possible sentiment (positive, neutral, negative), and `sentences` is a table with the same results at the sentence level. |
| Text recognition | Image encoded as base64. | `results`: a table where each element has a `lines` table (with text and bounding-box information). |
| Text translation | Language code for `translateTo`, language code for `translateFrom`, text. | `Text` property, which contains the translated text. |
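To illustrate reading a prebuilt model's nested output (control names here are assumptions, and the exact property paths should be verified against IntelliSense in your environment), a sentiment analysis result could drive two labels:

```powerfx
// Sketch only: TextInput1 is an assumed control name.
// Text property of one label: the overall sentiment of the input
// ("positive", "neutral", or "negative").
'Sentiment analysis'.Predict(TextInput1.Text).result.sentiment

// Text property of a second label: the confidence computed for the
// positive sentiment, taken from the documentScores record.
'Sentiment analysis'.Predict(TextInput1.Text).result.documentScores.positive
```

In practice you'd cache the `Predict` result in a variable (as shown in the examples that follow) rather than calling the model once per label.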
Input/output examples
In this preview, every model is invoked through its `Predict` function. For example, a language detection model takes text as input and returns a table of possible languages, ordered by each language's score. The score indicates how confident the model is in its prediction.
| Input | Output |
|---|---|
| `'Language detection'.Predict("Bonjour").results` | Table of possible languages, ordered by each language's score. |
To return the language code of the most likely language:

| Input | Output |
|---|---|
| `First('Language detection'.Predict("Bonjour").results).language` | `fr` (language code for French) |
To save time and resources, save the result of a model call so you can use it in multiple places. You can save an output into a global variable (for example, lang). If you do this, you can use lang elsewhere in your app to show the identified language and its confidence score in two different labels.
| Input | Output |
|---|---|
| `Set(lang, First('Language detection'.Predict(TextInput1.Text).results))` | Use `lang.language` and `lang.score` in other formulas. |
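The caching pattern above can be sketched end to end (control and variable names are assumptions):

```powerfx
// OnChange property of TextInput1: call the model once and cache the
// most likely result in a global variable.
Set(lang, First('Language detection'.Predict(TextInput1.Text).results))

// Text property of one label: the detected language code, e.g. "fr".
lang.language

// Text property of a second label: the confidence score, formatted
// as a percentage (Text multiplies by 100 for the % placeholder).
Text(lang.score, "0.0%")
```

Calling `Predict` once in `OnChange` and reusing `lang` avoids invoking the model separately for each label, which saves AI credits and keeps the labels consistent with each other.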