Connect(); 2017

Volume 32 Number 13

Azure Functions - Create Serverless APIs Using Azure Functions

By Alex Karcher; 2017

In the world of API development, serverless application platforms have redefined the problems developers must solve. With instant scale and consumption billing, developers are increasingly turning to serverless tools like Azure Functions to reduce development time, adapt to changing traffic patterns, and stop paying for overprovisioned infrastructure.

Serverless simply means that the developer doesn’t have to consider the underlying application server when writing code. You write a single class with a run method, and then define triggers that execute that code. As a trigger fires multiple times in close succession, your code is loaded onto more and more workers to handle the load. The scale-out happens at the sub-second level, and you’re only charged for execution time, metered against your function’s allocated memory. There’s no charge for idle functions.

Think of serverless like wireless. There are still wires behind your wireless router, but the consumers of your wireless service don’t have to manage or configure them. With serverless, you don’t have to manage or configure servers to run your code.

In Azure Functions, the abstraction of the application server is accomplished through a powerful trigger and binding framework. Triggers and bindings are a declarative way to define how a function is invoked and with which data it works. A trigger defines how a function is invoked. A function must have exactly one trigger. Triggers have associated data, which is usually the payload that triggered the function.

Input and output bindings provide a declarative way to connect to data from within your code. Similar to triggers, you specify connection strings and other properties in your function configuration. Bindings are optional, and a function can have multiple input and output bindings.

Using triggers and bindings, you can write code that’s more generic and doesn’t hardcode the details of the services with which it interacts. Data coming from services simply become input values for your function code. To output data to another service (such as creating a new row in Azure Table Storage), use the return value of the method. See Figure 1 for a listing of all supported triggers and bindings. 

Figure 1 Azure Functions Triggers and Bindings

Type                     Service                   Trigger*  Input  Output
Schedule                 Azure Functions           ✓
HTTP (REST or Webhook)   Azure Functions           ✓                ✓
Blob Storage             Azure Storage             ✓         ✓      ✓
Events                   Azure Event Hubs          ✓                ✓
Queues                   Azure Storage             ✓                ✓
Queues and Topics        Azure Service Bus         ✓                ✓
Storage Tables           Azure Storage                       ✓      ✓
SQL Tables               Azure Mobile Apps                   ✓      ✓
NoSQL DB                 Azure Cosmos DB           ✓         ✓      ✓
Push Notifications       Azure Notification Hubs                    ✓
Twilio SMS Text          Twilio                                     ✓
SendGrid E-Mail          SendGrid                                   ✓
Excel Tables             Microsoft Graph                     ✓      ✓
OneDrive Files           Microsoft Graph                     ✓      ✓
Outlook E-Mail           Microsoft Graph                            ✓
Microsoft Graph Events   Microsoft Graph           ✓         ✓      ✓
Auth Tokens              Microsoft Graph                     ✓
(* The HTTP output binding requires an HTTP trigger.)
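Triggers and bindings are declared in a function.json file that sits alongside your code. As a sketch, here’s what a function triggered by an Azure Storage queue that writes its return value to a storage table might declare; the queue name, table name, and connection setting name are hypothetical:

```json
{
  "bindings": [
    {
      "type": "queueTrigger",
      "direction": "in",
      "name": "myQueueItem",
      "queueName": "incoming-orders",
      "connection": "AzureWebJobsStorage"
    },
    {
      "type": "table",
      "direction": "out",
      "name": "$return",
      "tableName": "orders",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```

The function code never touches a queue or table SDK directly: the runtime passes the dequeued message in as the myQueueItem parameter and persists the method’s return value as a new table row.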

Azure Functions is particularly well suited to applications with bursty workloads, like orchestration and automation tasks. In the Infrastructure-as-a-Service (IaaS) world, these infrequently requested tasks are often bundled in with other services to maximize virtual machine (VM) usage, or run on expensive dedicated VMs that are idle most of the time. For example, a synchronization service might get 10,000 requests once a day, exactly at midnight; Azure Functions is perfectly suited to handle that uneven load. The function will parallelize out to meet that incoming demand, and further reduce the wall clock runtime of the operation.

Azure Functions Proxies

One of the most popular uses for Azure Functions is API hosting. With that in mind, we created Azure Functions Proxies to allow developers to separate their API functionality across multiple function apps, without having to spread their API across multiple domains. Azure API Management provides the same ability to composite APIs together; however, it’s much pricier because of the extra functionality that would go unused in this scenario. This past spring we released Azure Functions Proxies in preview, and this month we’re releasing Azure Functions Proxies to general availability. It has reached release quality, and is now ready to host full production workloads.

Azure Functions Proxies provides a core set of API development tools specifically suited to the serverless API developer. First, Azure Functions Proxies allows you to composite multiple APIs across functions and services into one unified API surface. Second, Azure Functions Proxies enables hosting mock API endpoints to quickly get started developing against an API. Third, Azure Functions Proxies allows any API to be routed through functions, to leverage metrics, security and OpenAPI definition support. Finally, Azure Functions Proxies allows static content to be hosted on a function domain.

Building a Serverless API

This tutorial will make ample use of public APIs as integration points, in lieu of Azure services. The Azure Functions documentation has many walk-throughs for accessing just about any resource through Azure Functions or Logic Apps, so I’ll stick to the API examples.

Mocking an API in Azure Functions Proxies It’s day one of your new project, and the most important task for you, as an API developer, is to validate the API design, and get enough implemented that the mobile devs on your team can get started building a client for the API. A mock API in Azure Functions Proxies will allow you to quickly create a live API, to visualize the whole API surface and unblock the mobile team. 

Azure Functions Proxies enables you to host mock APIs by adding response override rules and no back-end URL. A proxy will respond to a matching request with whatever sample data you’ve given it. This allows you to expose a live API in a matter of minutes to unblock your partner team developing against that API. As you complete functionality, you can replace individual proxy rules without requiring the partner team to update their code.

To start, you’ll need to navigate to the Azure portal and create a new function app with the default configuration. Once you’re in your blank app, click on the plus sign to the right of Proxies and fill out the form to match Figure 2. This will configure a mock API proxy, which returns a simple JSON object and appropriate header when sent a GET request to the /mobile-user endpoint. 

Figure 2 Configuring a Mock API in Azure Functions Proxies

Use Postman, or your preferred API testing tool, to send a GET to the proxy URL. You should receive back your formatted JSON object in the response body, like so:

{
  "Name": "Proxies Connect"
}

You can add as many request overrides as you want, to create more complex hardcoded mock APIs with different headers, status codes or body contents.
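For reference, the portal form in Figure 2 produces a proxies.json rule along these lines; the proxy name here is an assumption, but the route, method, and overrides mirror the form fields:

```json
"MobileUserGET": {
  "matchCondition": {
    "route": "/mobile-user",
    "methods": [ "GET" ]
  },
  "responseOverrides": {
    "response.statusCode": "200",
    "response.statusReason": "OK",
    "response.headers.Content-Type": "application/json",
    "response.body": {
      "Name": "Proxies Connect"
    }
  }
}
```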

Creating an Interactive Mock API The mobile devs have tested a basic connection with your static mock API, but now they want something with dynamic content. We’ll create a new mock API that allows them to test out a POST operation. They’ll POST a username and receive that username back.

Create a new proxy, but this time click on the Advanced editor. This will open up the Monaco editor, and allow you to directly edit the proxies.json file. Proxies.json sits at the root directory of your function app, and controls the proxy rules shown in the UI. Paste in the code from Figure 3 above the mobile-user proxy, noting proper commas and brackets in the JSON object.

Figure 3 An interactive Mock API

"MobileUserPOST": {
  "matchCondition": {
    "route": "/mobile-user",
    "methods": [ "POST" ]
  },
  "responseOverrides": {
    "response.statusCode": "200",
    "response.statusReason": "OK",
    "response.headers.Content-Type": "Application/JSON",
    "response.body": {
      "headers": {
        "User": "{request.headers.User}"
      }
    }
  }
}

Note that you can use {request.headers.&lt;headername&gt;} to insert request header values into a response. This is perfect for making more interactive mock APIs.

This proxy will create a new mock endpoint at /mobile-user that responds to POST requests. The new endpoint will return a response body with the user that the client posted as a request header. Send a POST request to the /mobile-user endpoint using Postman, and add a User header with your own username. The response back should contain that username in the response body, like so:

{
  "headers": {
    "User": "MyUsername"
  }
}

Utilizing the request parameter, you can insert all sorts of request info into the response: request.method, request.headers.&lt;headername&gt;, and request.querystring.&lt;parametername&gt; are all available to you in the response overrides.
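As a sketch (the route and parameter names here are made up), a response override can echo both the request method and a query-string value back to the caller:

```json
"EchoQuery": {
  "matchCondition": {
    "route": "/echo"
  },
  "responseOverrides": {
    "response.statusCode": "200",
    "response.body": "You sent a {request.method} request for item {request.querystring.item}"
  }
}
```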

Moving forward implementing the API, it’s important to note that request and response overrides are additive, overwriting any matching object and passing along all other objects. If you want to remove an object entirely, you need to overwrite it with a blank entry.

Replace the Mock API with a Real API Now that the partner team has been unblocked, it’s time to add a real API into the solution. To start off, the real API will be a public testing endpoint. This testing endpoint returns a dictionary of HTTP headers, so it’ll be perfect to replace the /mobile-user POST endpoint. However, the endpoint only accepts GET requests, so use a proxy request override to send the back-end service the correct request.

Replace the MobileUserPOST proxy with the following rule:

"MobileUserPOST": {
  "matchCondition": {
    "route": "/mobile-user",
    "methods": [ "POST" ]
  },
  "backendUri": "",
  "requestOverrides": {
    "backend.request.method": "get"
  }
}

Now resend the POST request to the /mobile-user endpoint. You should receive a complete list of headers, along with the User header you sent. Your response should roughly match Figure 4.

Figure 4 The API Response to the Proxied /Mobile-User Endpoint

{
  "headers": {
    "Accept": "*/*",
    "Accept-Encoding": "gzip,deflate",
    "Cache-Control": "no-cache",
    "Connection": "close",
    "Disguised-Host": "",
    "Host": "",
    "Max-Forwards": "9",
    "Postman-Token": "5541efc6-709a-4e95-b782-cbc946452564",
    "User": "MyUsername",
    "User-Agent": "PostmanRuntime/6.4.1",
    "Was-Default-Hostname": "",
    "X-Arr-Log-Id": "60cf1436-ca3d-4408-801e-8cf47afd4a46",
    "X-Arr-Ssl": "2048|256|C=US, |CN=*",
    "X-Ms-Ext-Routing": "1",
    "X-Original-Url": "/mobile-user",
    "X-Site-Deployment-Id": "ConnectFnDemo",
    "X-Waws-Unencoded-Url": "/mobile-user"
  }
}

Composite an Additional API Part of the strength of Azure Functions Proxies is the ability to combine multiple separate API endpoints into one outward-facing API endpoint. For the next example, we’ll add a function-hosted API into the /mobile-user API.

Start by creating a new function app and creating a new HTTP triggered function inside of the new app. We’ll utilize the built-in HTTP sample code that takes in a name as a query parameter and returns “hello name.”

When you proxy the function, you’re not going to want to hardcode the back-end function URL and API key into the proxy rule, so we’ll use an application setting to store that value, and retrieve that value in the proxy.

In the function editor of the back-end function, copy the function URL—API key and all—to the clipboard for later. Now navigate back to the function hosting the proxies, and add an application setting with the function URL you just copied. The app settings page is under functionname | Platform features | Application settings | + Add new setting. Call the new setting “backendfunction” and hit save.

Paste the following proxy into the advanced editor, to create a rule that proxies /mobile-user/name to the back-end function, to ultimately return “hello name,” like so:

"MobileUserHello": {
  "matchCondition": {
    "route": "/mobile-user/{user}"
  },
  "backendUri": "%backendfunction%",
  "requestOverrides": {
    "backend.request.querystring.name": "{user}"
  }
}

Tip: Use application settings to store secrets outside of your proxies, so that the proxies.json can be checked into source control, shared widely or simply updated dynamically. Use %appsettingname% to reference an application setting.

When it’s all said and done, your finished API topology will look like Figure 5.

Figure 5 Final Proxy API Topology

Advanced API Topologies with APIM

For many developers, Azure Functions Proxies just scratches the surface of their API integration needs; instead, it makes sense to step up to API Management. When compositing multiple APIs together, Azure API Management (APIM) provides a much more expansive set of transformation rules, as well as policies for rate limiting and security. APIM also allows APIs to be monetized or restricted using a developer onboarding portal. In large APIs the topology is usually a single APIM endpoint resolving to multiple Azure Functions Proxies endpoints, which then have functions and Software-as-a-Service services under them.

Serverless API Design Tips

Stick to one function per operation. There’s no cost penalty to have many idle functions, and you’ll end up with more portable and reusable code.

Use Software-as-a-Service offerings whenever possible. Any service you don’t have to manage saves developer time, and provides one less regression point.

Long-running functions should delegate. You’re paying for code that’s async awaiting responses, so use durable functions or queues to delegate that work to another function, and call a final function on task completion.

Connect services with OpenAPI definitions. OpenAPI allows you to programmatically define your API surface and onboard new API consumers. OpenAPI powers client SDK generators for most languages, as well as a wide range of documentation generators.

HTTP communications between functions must be secured. Each HTTP endpoint is Internet-addressable, so Azure Functions natively supports service principal authentication to secure HTTP traffic between functions. Alternatively, you can use any other trigger to avoid HTTP entirely, or put your functions in an App Service Environment to secure communications.

Adding Extra Functionality

Azure Functions Proxies enables you to add a whole suite of functionality to any existing API. You can expose an OpenAPI definition, layer on authentication/authorization to provide a secured API endpoint, or measure traffic on an API using App Insights.

Host a Static Page Using Azure Functions Proxies: Azure Functions Proxies isn’t just useful for API compositing; it can also be used for general HTTP redirection. You can host static HTML on your function app by either proxying the back-end URL of a service hosting HTML, or creating a proxy that returns HTTP 302 to redirect the user to a Web page. This combo can be used to host single-page applications, and is especially useful when the API endpoints used by the single-page application are hosted in the same proxy.
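A 302 redirect proxy of the sort just described might look like the following sketch; the route and target URL are placeholders:

```json
"RedirectToApp": {
  "matchCondition": {
    "route": "/"
  },
  "responseOverrides": {
    "response.statusCode": "302",
    "response.statusReason": "Found",
    "response.headers.Location": "https://example.com/index.html"
  }
}
```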

Monitor a Legacy API with Application (App) Insights: Azure Functions Proxies can be used to add lots of Platform-as-a-Service-like functionality to an existing API, such as monitoring with App Insights. App Insights is a strong monitoring suite that can calculate complex metrics on API traffic, send alerts on certain conditions, and provide a real-time analytics stream.

To enable App Insights monitoring, simply create a new function app and toggle App Insights to on at the creation blade. Then add a proxy, as simple as one route with a back-end URL. Send some requests to your proxy to have data to measure; then navigate to the App Insights resource with the same name as your function app in the portal. You’ll be able to see the latency, request count, and status codes in a live metrics stream, and query up to 90 days of historical data in the analytics portal.

For example, enter the following query into the analysis portal to see what days of the week your API receives the most requests:

requests
| where timestamp > ago(90d)
| summarize count() by dayofweek(timestamp)

Debug Proxies: Azure Functions Proxies has the ability to generate debug traces upon request. They allow you to see what rules were applied to a particular request, along with the request info. Send a request to a proxies endpoint along with the header Proxy-Trace-Enabled set to true, and you’ll receive a header back with a link to the trace file. You can also set debug: true for a proxy in proxies.json to generate a debug trace for every request. The trace is stored under the function’s blob storage, which is only accessible to members of your Azure subscription.
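To turn tracing on for every request to a given proxy, set the debug flag directly on the proxy rule in proxies.json; the proxy name, route, and back-end URL below are hypothetical:

```json
"MobileUserGET": {
  "debug": true,
  "matchCondition": {
    "route": "/mobile-user"
  },
  "backendUri": "https://example.com/api/mobile-user"
}
```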

Run Azure Functions Proxies Locally: As a part of the general availability of Azure Functions Proxies, you can now run it locally, using the Azure Functions Core Tools. Find them at

Create proxies locally with the following line:

func proxy create [--name] [--route] [--methods] [--backend-url]

And then run those proxies by typing:

func host start

You can then send API requests to the local function endpoint using Postman, Fiddler and so on. See a full list of commands and samples at

When proxying functions in the same function app, you can use the localhost back-end URI to reference a local function. This allows you to have one unified keyword to proxy the same function locally and when deployed to the cloud. Localhost in Azure Functions Proxies is a keyword, so you don’t have to worry about appending the correct port when running locally.

Here’s an example of a simple proxy to a function in the same function app:

"LocalFunction": {
  "matchCondition": {
    "route": "/localfn1"
  },
  "backendUri": "https://localhost/api/httptriggercsharp1"
}

Give Functions Proxies a Try

We’re very excited for the general availability of Azure Functions Proxies, and hope you’ll give it a try. Azure Functions has a generous free grant of 1 million executions and 400,000 GB-s of resource consumption per month, giving you no reason to hold off on playing with Azure Functions Proxies. Serverless has been gaining traction fast in cloud-native development, and now is the perfect time to jump on board.

We love feedback, and you can reach out to the product team on Twitter @azurefunctions, or submit a GitHub issue with any feature requests or issues at

Alex Karcher is a program manager on Azure Functions, working on API tools such as Proxies and OpenAPI. He’s on Twitter: @alexkarcher.

Thanks to the product team for their technical expertise in reviewing this article: Matthew Henderson, Galin Iliev, Eduardo Laureano, Omkar More, Hamid Safi, Colby Tresness
