OpenAI (Independent Publisher) (Preview)

Connect to the OpenAI API and use the power of GPT-3. The API key must be entered as "Bearer YOUR_API_KEY".

This connector is available in the following products and regions:

Service: Logic Apps
Class: Standard
Regions: All Logic Apps regions except the following:
     -   Azure Government regions
     -   Azure China regions
     -   US Department of Defense (DoD)

Service: Power Automate
Class: Premium
Regions: All Power Automate regions except the following:
     -   US Government (GCC)
     -   US Government (GCC High)
     -   China Cloud operated by 21Vianet
     -   US Department of Defense (DoD)

Service: Power Apps
Class: Premium
Regions: All Power Apps regions except the following:
     -   US Government (GCC)
     -   US Government (GCC High)
     -   China Cloud operated by 21Vianet
     -   US Department of Defense (DoD)
Contact
Name: Robin Rosengrün
URL: https://linktr.ee/r2power
Email: robin@r2power.de
Connector Metadata
Publisher: Robin Rosengrün
Website: https://openai.com/
Privacy policy: https://openai.com/api/policies/terms/
Categories: AI

Creating a connection

The connector supports the following authentication types:

Default: Parameters for creating connection. Applicable in all regions; not shareable.

Default

Applicable: All regions

Parameters for creating connection.

This is not a shareable connection. If the power app is shared with another user, that user will be prompted to create a new connection explicitly.

Name: API Key
Type: securestring
Description: Enter the API Key as "Bearer YOUR_API_KEY"
Required: True
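
The connector passes the stored key verbatim in the HTTP Authorization header, which is why the "Bearer " prefix must be part of the value itself. A minimal sketch of the resulting request, assuming the public api.openai.com endpoint (the key below is a placeholder):

```python
import requests

# The connection's "API Key" value is used as-is for the Authorization
# header, so it must already contain the "Bearer " prefix.
api_key = "Bearer sk-..."  # placeholder; use the value from your connection

# Listing available models is a cheap way to verify the key format works.
response = requests.get(
    "https://api.openai.com/v1/models",
    headers={"Authorization": api_key},
    timeout=30,
)
print(response.status_code)  # 200 means the key was accepted
```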

Throttling Limits

API calls per connection: 100 calls per 60 seconds

Actions

Chat Completion

Use models like ChatGPT and GPT-4 to hold a conversation

Create an Image

DALL·E 2 creates an image from your prompt

Embeddings

Get a vector representation of a given input

GPT3 Completes your prompt

GPT3 Completes your prompt

GPT3 Completes your prompt [DEPRECATED]

GPT3 Completes your prompt (deprecated by OpenAI - use Completion_New)

Chat Completion

Use models like ChatGPT and GPT-4 to hold a conversation

Parameters

Name (Key, Type, Required)

model (model, string, required)
The model to use; choose between gpt-3.5-turbo, gpt-4 and others.

role (role, string, required)
The role of the author of this message. One of system, user, or assistant.

content (content, string, required)
The contents of the message.

n (n, integer)
How many completions to generate for each prompt.

temperature (temperature, float)
Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. Use this OR top p.

max tokens (max_tokens, integer)
One token equals roughly 4 characters of text (up to 4000 or more tokens between prompt and completion, depending on the model).

top p (top_p, float)
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

frequency penalty (frequency_penalty, float)
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

presence penalty (presence_penalty, float)
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

stop (stop, array of string)
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.

Returns

Name (Path, Type)

id (id, string)
object (object, string)
created (created, integer)
choices (choices, array of object)
index (choices.index, integer)
role (choices.message.role, string)
content (choices.message.content, string)
finish_reason (choices.finish_reason, string)
prompt_tokens (usage.prompt_tokens, integer)
completion_tokens (usage.completion_tokens, integer)
total_tokens (usage.total_tokens, integer)
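
This action wraps OpenAI's chat completions endpoint. As a minimal sketch, the equivalent raw request looks roughly like the following, assuming the public https://api.openai.com/v1/chat/completions endpoint (in a flow you would use the connector action itself rather than hand-built HTTP):

```python
import requests

api_key = "Bearer sk-..."  # placeholder; same "Bearer ..." value as the connection

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": api_key},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Write a one-line summary of Power Automate."},
        ],
        "temperature": 0.7,  # use temperature OR top p, not both
        "max_tokens": 256,
        "n": 1,
    },
    timeout=60,
)
body = resp.json()

# The paths below match the Returns list above.
print(body["choices"][0]["message"]["content"])
print(body["usage"]["total_tokens"])
```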

Create an Image

DALL·E 2 creates an image from your prompt

Parameters

Name (Key, Type, Required)

prompt (prompt, string, required)
The prompt that describes the image.

Number of images (n, integer)
Number of images, from 1 to 10.

size (size, string)
The size of the generated images: 256x256, 512x512, or 1024x1024 (default: 1024x1024).

format (response_format, string)
Get a URL to the picture or receive it in base64 format (default: url).

Returns

Name (Path, Type)

data (data, array of object)

url (data.url, string)
URL to the created image.

b64image (data.b64_json, byte)
Image in base64 format.
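
Behind this action is OpenAI's image generation endpoint. A minimal sketch of the equivalent raw call, assuming the public https://api.openai.com/v1/images/generations endpoint:

```python
import requests

api_key = "Bearer sk-..."  # placeholder

resp = requests.post(
    "https://api.openai.com/v1/images/generations",
    headers={"Authorization": api_key},
    json={
        "prompt": "A watercolor lighthouse at dawn",
        "n": 1,
        "size": "512x512",
        "response_format": "url",  # or "b64_json" for base64 output
    },
    timeout=60,
)

# With response_format "url", each data item carries a URL; with
# "b64_json", read item["b64_json"] instead (the b64image output above).
for item in resp.json()["data"]:
    print(item["url"])
```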

Embeddings

Get a vector representation of a given input

Parameters

Name (Key, Type, Required)

model (model, string, required)
input (input, string, required)

Returns

Name (Path, Type)

object (object, string)
data (data, array of object)
object (data.object, string)
embedding (data.embedding, array of float)
index (data.index, integer)
model (model, string)
prompt_tokens (usage.prompt_tokens, integer)
total_tokens (usage.total_tokens, integer)
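
This action wraps OpenAI's embeddings endpoint. A minimal sketch of the equivalent raw request, assuming the public https://api.openai.com/v1/embeddings endpoint (the model name here is only an example; pass whichever embedding model you use):

```python
import requests

api_key = "Bearer sk-..."  # placeholder

resp = requests.post(
    "https://api.openai.com/v1/embeddings",
    headers={"Authorization": api_key},
    json={
        "model": "text-embedding-ada-002",  # example embedding model
        "input": "Power Automate connects your apps and services.",
    },
    timeout=60,
)
body = resp.json()

# data.embedding is the array of float from the Returns list above.
vector = body["data"][0]["embedding"]
print(len(vector), body["usage"]["total_tokens"])
```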

GPT3 Completes your prompt

GPT3 Completes your prompt

Parameters

Name (Key, Type, Required)

Engine (model, string, required)
The model to use; choose between text-davinci-002, text-curie-001, text-babbage-001, text-ada-001.

prompt (prompt, string, required)
Text that will be completed by GPT-3.

n (n, integer)
How many completions to generate for each prompt.

best_of (best_of, integer)
If set to more than 1, generates multiple completions server-side and returns the "best". Must be greater than "n". Use with caution; it can consume a lot of tokens.

temperature (temperature, float)
Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. Use this OR top p.

max tokens (max_tokens, integer)
One token equals roughly 4 characters of text (up to 4000 tokens between prompt and completion).

top p (top_p, float)
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

frequency penalty (frequency_penalty, float)
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

presence penalty (presence_penalty, float)
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

stop (stop, array of string)
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.

Returns

Name (Path, Type)

id (id, string)
object (object, string)
created (created, integer)

choices (choices, array of object)
Returned completion(s).

Text (choices.text, string)
Completion text.

Index (choices.index, integer)
Number of the completion.

Finish reason (choices.finish_reason, string)
Reason why the text finished (stop condition / natural end / length).

Prompt Tokens (choices.usage.prompt_tokens, integer)
Number of tokens in the prompt.

Completion Tokens (choices.usage.completion_tokens, integer)
Number of tokens in the completion.

Total Tokens (choices.usage.total_tokens, integer)
Total number of tokens in prompt and completion.
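
This action wraps OpenAI's completions endpoint. A minimal sketch of the equivalent raw request, assuming the public https://api.openai.com/v1/completions endpoint:

```python
import requests

api_key = "Bearer sk-..."  # placeholder

resp = requests.post(
    "https://api.openai.com/v1/completions",
    headers={"Authorization": api_key},
    json={
        "model": "text-davinci-002",
        "prompt": "Write a tagline for a coffee shop:",
        "max_tokens": 64,
        "temperature": 0.9,  # use temperature OR top p, not both
        "n": 1,
        "stop": ["\n\n"],  # optional stop sequences
    },
    timeout=60,
)
body = resp.json()

# choices.text holds the generated completion, per the Returns list above.
print(body["choices"][0]["text"])
```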

GPT3 Completes your prompt [DEPRECATED]

GPT3 Completes your prompt (deprecated by OpenAI - use Completion_New)

Parameters

Name (Key, Type, Required)

Engine (engine, string, required)
The engine to use; choose between text-davinci-002/003, text-curie-001, text-babbage-001, text-ada-001.

prompt (prompt, string, required)
Text that will be completed by GPT-3.

n (n, integer)
How many completions to generate for each prompt.

best_of (best_of, integer)
If set to more than 1, generates multiple completions server-side and returns the "best". Must be greater than "n". Use with caution; it can consume a lot of tokens.

temperature (temperature, float)
Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. Use this OR top p.

max tokens (max_tokens, integer)
One token equals roughly 4 characters of text (up to 4000 tokens between prompt and completion).

top p (top_p, float)
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

frequency penalty (frequency_penalty, float)
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

presence penalty (presence_penalty, float)
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

user (user, string)
A unique identifier representing your end user, which helps OpenAI monitor and detect abuse.

stop (stop, array of string)
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.

Returns

Name (Path, Type)

id (id, string)
object (object, string)
created (created, integer)

choices (choices, array of object)
Returned completion(s).

Text (choices.text, string)
Completion text.

Index (choices.index, integer)
Number of the completion.

Logprobs (choices.logprobs, string)
Include the log probabilities on the logprobs most likely tokens, as well as the chosen tokens. For example, if logprobs is 3, the API will return a list of the 3 most likely tokens.

Finish reason (choices.finish_reason, string)
Reason why the text finished (stop condition / natural end / length).