Mistral (Independent Publisher) (Preview)

Mistral is open and portable generative AI for developers and businesses. Mistral models strike an unmatched latency-to-performance ratio and achieve top-tier reasoning performance on all common benchmarks. Mistral designed the models to be as unbiased and useful as possible, providing full modular control over moderation.

This connector is available in the following products and regions:

Service          Class     Regions
Logic Apps       Standard  All Logic Apps regions except the following:
                               -   Azure Government regions
                               -   Azure China regions
                               -   US Department of Defense (DoD)
Power Automate   Premium   All Power Automate regions except the following:
                               -   US Government (GCC)
                               -   US Government (GCC High)
                               -   China Cloud operated by 21Vianet
                               -   US Department of Defense (DoD)
Power Apps       Premium   All Power Apps regions except the following:
                               -   US Government (GCC)
                               -   US Government (GCC High)
                               -   China Cloud operated by 21Vianet
                               -   US Department of Defense (DoD)
Contact
Name   Troy Taylor
URL    https://www.hitachisolutions.com
Email  ttaylor@hitachisolutions.com

Connector Metadata
Publisher       Troy Taylor
Website         https://mistral.ai/
Privacy policy  https://mistral.ai/terms#privacy-policy
Categories      AI

Creating a connection

The connector supports the following authentication types:

Name     Applicable   Parameters                           Shareable
Default  All regions  Parameters for creating connection.  Not shareable

Default

Applicable: All regions

Parameters for creating connection.

This is not a shareable connection. If the power app is shared with another user, that user will be prompted to create a new connection explicitly.

Name Type Description Required
API Key (in the form 'Bearer YOUR_API_KEY') securestring The API key for this API, in the form 'Bearer YOUR_API_KEY' True
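
The 'Bearer YOUR_API_KEY' form suggests the key is forwarded verbatim as the HTTP Authorization header when the connector calls the Mistral API, so the 'Bearer ' prefix belongs in the key value itself. A minimal sketch of that assumption in Python:

```python
# Sketch only: assumes the connector sends the API Key parameter verbatim
# as the Authorization header of requests to the Mistral API.
api_key_parameter = "Bearer YOUR_API_KEY"   # include the "Bearer " prefix
headers = {"Authorization": api_key_parameter}
# Supplying only "YOUR_API_KEY" (no prefix) would yield an unauthorized
# response, since nothing else prepends the "Bearer " scheme.
```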

Throttling Limits

Name                      Calls  Renewal Period
API calls per connection  100    60 seconds

Actions

Create chat completions

Creates a chat completion.

Create embeddings

Creates an embedding.

List available models

Retrieves a list of available models.

Create chat completions

Creates a chat completion.

Parameters

Name Key Required Type Description
Model
model True string

ID of the model to use.

Role
role string

The role of the message author, such as system, user, or assistant.

Content
content string

The content of the message.

Temperature
temperature number

What sampling temperature to use, between 0.0 and 1.0. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

Top P
top_p number

Nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.

Max Tokens
max_tokens integer

The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length.

Stream
stream boolean

Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. Otherwise, the server will hold the request open until the timeout or until completion, with the response containing the full result as JSON.

Safe Prompt
safe_prompt boolean

Whether to inject a safety prompt before all conversations.

Random Seed
random_seed integer

The seed to use for random sampling. If set, repeated calls with the same seed generate deterministic results.

Returns

Body
ChatCompletionResponse

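The action wraps Mistral's chat completions endpoint. A minimal sketch of the equivalent direct REST call, assuming the public https://api.mistral.ai/v1/chat/completions endpoint and an illustrative model ID:

```python
import requests

API_KEY = "Bearer YOUR_API_KEY"  # same form as the connection parameter

payload = {
    "model": "mistral-small-latest",   # illustrative model ID
    "messages": [
        {"role": "user", "content": "Summarize nucleus sampling in one sentence."}
    ],
    "temperature": 0.2,                # adjust this or top_p, not both
    "max_tokens": 128,
    "safe_prompt": False,
    "random_seed": 42,                 # repeated calls become deterministic
}

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": API_KEY},
    json=payload,
)
resp.raise_for_status()
body = resp.json()                     # shaped like ChatCompletionResponse
print(body["choices"][0]["message"]["content"])
print(body["usage"]["total_tokens"])
```
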
Create embeddings

Creates an embedding.

Parameters

Name Key Required Type Description
Model
model string

The ID of the model to use for this request.

Input
input array of string

The list of strings to embed.

Encoding Format
encoding_format string

The format of the output data.

Returns

Body
EmbeddingResponse

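This action wraps Mistral's embeddings endpoint. A minimal sketch of the equivalent direct call, assuming the public https://api.mistral.ai/v1/embeddings endpoint and mistral-embed as an illustrative model choice:

```python
import requests

API_KEY = "Bearer YOUR_API_KEY"

payload = {
    "model": "mistral-embed",          # illustrative embedding model ID
    "input": ["First string to embed", "Second string to embed"],
    "encoding_format": "float",        # format of the output vectors
}

resp = requests.post(
    "https://api.mistral.ai/v1/embeddings",
    headers={"Authorization": API_KEY},
    json=payload,
)
resp.raise_for_status()
for item in resp.json()["data"]:       # shaped like EmbeddingResponse.data
    print(item["index"], len(item["embedding"]))   # one vector per input
```
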
List available models

Retrieves a list of available models.

Returns

Body
ModelList
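
A minimal sketch of the equivalent direct call, assuming the public https://api.mistral.ai/v1/models endpoint; the response matches the ModelList definition below:

```python
import requests

API_KEY = "Bearer YOUR_API_KEY"

resp = requests.get(
    "https://api.mistral.ai/v1/models",
    headers={"Authorization": API_KEY},
)
resp.raise_for_status()
for model in resp.json()["data"]:      # ModelList.data is an array of Model
    print(model["id"], model["owned_by"])
```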

Definitions

ModelList

Name Path Type Description
Object
object string

The object type.

data
data array of Model

ChatCompletionResponse

Name Path Type Description
ID
id string

The identifier.

Object
object string

The object type.

Created
created integer

The creation timestamp.

Model
model string

The model.

choices
choices array of object
Index
choices.index integer

The index.

Role
choices.message.role string

The role of the message author.

Content
choices.message.content string

The content of the message.

Finish Reason
choices.finish_reason string

The finish reason.

Prompt Tokens
usage.prompt_tokens integer

The prompt tokens.

Completion Tokens
usage.completion_tokens integer

The completion tokens.

Total Tokens
usage.total_tokens integer

The total tokens.

EmbeddingResponse

Name Path Type Description
ID
id string

The identifier.

Object
object string

The object type.

data
data array of object
Object
data.object string

The object type.

Embedding
data.embedding array of double

The embedding.

Index
data.index integer

The index.

Model
model string

The model.

Prompt Tokens
usage.prompt_tokens integer

The prompt tokens.

Total Tokens
usage.total_tokens integer

The total tokens.

Model

Name Path Type Description
ID
id string

The identifier.

Object
object string

The object type.

Created
created integer

The creation timestamp.

Owned By
owned_by string

The owner of the model.