Azure Cosmos DB FAQ

Azure Cosmos DB fundamentals

What is Azure Cosmos DB?

Azure Cosmos DB is a globally replicated, multi-model database service that offers rich querying over schema-free data, helps deliver configurable and reliable performance, and enables rapid development. It's all achieved through a managed platform that's backed by the power and reach of Microsoft Azure.

Azure Cosmos DB is the right solution for web, mobile, gaming, and IoT applications when predictable throughput, high availability, low latency, and a schema-free data model are key requirements. It delivers schema flexibility and rich indexing, and it includes multi-document transactional support with integrated JavaScript.

For more database questions, answers, and instructions for deploying and using this service, see the Azure Cosmos DB documentation page.

What happened to DocumentDB?

The DocumentDB API is one of the supported APIs and data models for Azure Cosmos DB. Azure Cosmos DB also supports the Graph API (Preview), Table API (Preview), and MongoDB API. For more information, see Questions from DocumentDB customers.

How do I get to my DocumentDB account in the Azure portal?

In the Azure portal, click the Azure Cosmos DB icon in the left pane. If you had a DocumentDB account before, you now have an Azure Cosmos DB account, with no change to your billing.

What are the typical use cases for Azure Cosmos DB?

Azure Cosmos DB is a good choice for new web, mobile, gaming, and IoT applications where automatic scale, predictable performance, fast, order-of-millisecond response times, and the ability to query over schema-free data are important. Azure Cosmos DB lends itself to rapid development and to continuous iteration of application data models. Applications that manage user-generated content and data are common use cases for Azure Cosmos DB.

How does Azure Cosmos DB offer predictable performance?

A request unit (RU) is the measure of throughput in Azure Cosmos DB. A 1-RU throughput corresponds to the throughput of the GET of a 1-KB document. Every operation in Azure Cosmos DB, including reads, writes, SQL queries, and stored procedure executions, has a deterministic RU value that's based on the throughput required to complete the operation. Instead of thinking about CPU, IO, and memory and how they each affect your application throughput, you can think in terms of a single RU measure.

You can provision throughput on each Azure Cosmos DB container in terms of RUs per second. For applications of any scale, you can benchmark individual requests to measure their RU values, and provision a container to handle the total of request units across all requests. You can also scale up or scale down your container's throughput as the needs of your application evolve. For more information about request units and for help determining your container needs, see Estimating throughput needs and try the throughput calculator. The term container here refers to a DocumentDB API collection, Graph API graph, MongoDB API collection, or Table API table.
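
If you use the DocumentDB API .NET SDK, one way to benchmark a request's RU value is to read the RequestCharge property on the response. The following is a minimal sketch; the endpoint, key, and database/collection names are placeholders, and the read assumes a single-partition collection (on a partitioned collection you would also pass the partition key in RequestOptions).

    using System;
    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;

    var client = new DocumentClient(
        new Uri("https://<your-account>.documents.azure.com:443/"), "<your-key>");
    Uri collectionUri = UriFactory.CreateDocumentCollectionUri("mydb", "mycoll");

    // Write a small document and inspect the RU charge reported by the service.
    ResourceResponse<Document> created = await client.CreateDocumentAsync(
        collectionUri, new { id = "item1", name = "Andy", age = 42 });
    Console.WriteLine($"Write charge: {created.RequestCharge} RU");

    // Read it back; the GET of a 1-KB document costs about 1 RU.
    ResourceResponse<Document> read = await client.ReadDocumentAsync(
        UriFactory.CreateDocumentUri("mydb", "mycoll", "item1"));
    Console.WriteLine($"Read charge: {read.RequestCharge} RU");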

How does Azure Cosmos DB support various data models such as key/value, columnar, document and graph?

Key/value (table), columnar, document, and graph data models are all natively supported because of the ARS (atoms, records, and sequences) design that Azure Cosmos DB is built on. Atoms, records, and sequences can be easily mapped and projected to various data models. APIs for a subset of models are available right now (DocumentDB, MongoDB, Table, and Graph APIs), and APIs specific to additional data models will be available in the future.

Azure Cosmos DB has a schema-agnostic indexing engine capable of automatically indexing all the data it ingests without requiring any schema or secondary indexes from the developer. The engine relies on a set of logical index layouts (inverted, columnar, tree) that decouple the storage layout from the index and query-processing subsystems. Cosmos DB also has the ability to support a set of wire protocols and APIs in an extensible manner and translate them efficiently to the core data model and the logical index layouts, making it uniquely capable of supporting multiple data models natively.

Is Azure Cosmos DB HIPAA compliant?

Yes, Azure Cosmos DB is HIPAA-compliant. HIPAA establishes requirements for the use, disclosure, and safeguarding of individually identifiable health information. For more information, see the Microsoft Trust Center.

What are the storage limits of Azure Cosmos DB?

There is no limit to the total amount of data that a container can store in Azure Cosmos DB.

What are the throughput limits of Azure Cosmos DB?

There is no limit to the total amount of throughput that a container can support in Azure Cosmos DB. The key idea is to distribute your workload roughly evenly among a sufficiently large number of partition keys.

How much does Azure Cosmos DB cost?

For details, refer to the Azure Cosmos DB pricing details page. Azure Cosmos DB usage charges are determined by the number of provisioned containers, the number of hours the containers were online, and the provisioned throughput for each container. The term container here refers to a DocumentDB API collection, Graph API graph, MongoDB API collection, or Table API table.

Is a free account available?

Yes, you can sign up for a time-limited account at no charge, with no commitment. To sign up, visit Try Azure Cosmos DB for free or read more in the Try Azure Cosmos DB FAQ.

If you are new to Azure, you can sign up for an Azure free account, which gives you 30 days and a credit to try all the Azure services. If you have a Visual Studio subscription, you are also eligible for free Azure credits to use on any Azure service.

You can also use the Azure Cosmos DB Emulator to develop and test your application locally for free, without creating an Azure subscription. When you're satisfied with how your application is working in the Azure Cosmos DB Emulator, you can switch to using an Azure Cosmos DB account in the cloud.

How can I get additional help with Azure Cosmos DB?

If you need any help, reach out to us on Stack Overflow or the MSDN forum, or schedule a one-on-one chat with the Azure Cosmos DB engineering team by sending mail to askcosmosdb@microsoft.com.

Try Azure Cosmos DB subscriptions

You can now enjoy a time-limited Azure Cosmos DB experience without a subscription, free of charge and commitments. To sign up for a Try Azure Cosmos DB subscription, go to Try Azure Cosmos DB for free. This subscription is separate from the Azure Free Trial, and can be used in addition to an Azure Free Trial or an Azure paid subscription.

Try Azure Cosmos DB subscriptions appear in the Azure portal next to other subscriptions associated with your user ID.

The following conditions apply to Try Azure Cosmos DB subscriptions:

  • One container per subscription for SQL (DocumentDB API), Gremlin (Graph API), and Table accounts.
  • Up to 3 collections per subscription for MongoDB accounts.
  • 10 GB storage capacity.
  • Global replication is available in the following Azure regions: Central US, North Europe, and Southeast Asia.
  • Maximum throughput of 5K RU/s.
  • Subscriptions expire after 24 hours, and can be extended to a maximum of 48 hours total.
  • Azure support tickets cannot be created for Try Azure Cosmos DB accounts; however, support is provided for subscribers with existing support plans.

Set up Azure Cosmos DB

How do I sign up for Azure Cosmos DB?

Azure Cosmos DB is available in the Azure portal. First, sign up for an Azure subscription. After you've signed up, you can add a DocumentDB API, Graph API (Preview), Table API (Preview), or MongoDB API account to your Azure subscription.

What is a master key?

A master key is a security token to access all resources for an account. Individuals with the key have read and write access to all resources in the database account. Use caution when you distribute master keys. The primary master key and secondary master key are available on the Keys blade of the Azure portal. For more information about keys, see View, copy, and regenerate access keys.
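
As a sketch (with a placeholder endpoint and key), this is how a master key from the Keys blade is passed to the DocumentDB API .NET SDK; treat the key like a password and load it from secure configuration rather than source code.

    using System;
    using Microsoft.Azure.Documents.Client;

    // Anyone holding this key has full read/write access to every resource in the account.
    var endpoint = new Uri("https://<your-account>.documents.azure.com:443/");
    string masterKey = "<primary-or-secondary-key-from-the-Keys-blade>";

    var client = new DocumentClient(endpoint, masterKey);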

What are the regions that PreferredLocations can be set to?

The PreferredLocations value can be set to any of the Azure regions in which Cosmos DB is available. For a list of available regions, see Azure regions.
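
For example, in the DocumentDB API .NET SDK, PreferredLocations is set on the ConnectionPolicy before the client is created. This is a minimal sketch; the endpoint, key, and region choices are placeholders, and the regions you list must be ones your account is replicated to.

    using System;
    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;

    // Prefer reads from the region closest to this client, falling back to the next one.
    var policy = new ConnectionPolicy();
    policy.PreferredLocations.Add(LocationNames.WestUS);
    policy.PreferredLocations.Add(LocationNames.NorthEurope);

    var client = new DocumentClient(
        new Uri("https://<your-account>.documents.azure.com:443/"), "<your-key>", policy);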

Is there anything I should be aware of when distributing data across the world via the Azure datacenters?

Azure Cosmos DB is present across all Azure regions, as specified on the Azure regions page. Because it is a foundational service, every new datacenter has an Azure Cosmos DB presence.

When you set a region, remember that Azure Cosmos DB respects sovereign and government clouds. That is, if you create an account in a sovereign region, you cannot replicate out of that sovereign region. Similarly, you cannot enable replication into other sovereign locations from an outside account.

Develop against the DocumentDB API

How do I start developing against the DocumentDB API?

The DocumentDB API is available in the Azure portal. First, sign up for an Azure subscription. After you've signed up, you can add a DocumentDB API container to your Azure subscription. For instructions on adding an Azure Cosmos DB account, see Create an Azure Cosmos DB database account. If you had a DocumentDB account in the past, you now have an Azure Cosmos DB account.

SDKs are available for .NET, Python, Node.js, JavaScript, and Java. Developers can also use the RESTful HTTP APIs to interact with Azure Cosmos DB resources from various platforms and languages.

Can I access some ready-made samples to get a head start?

Samples for the DocumentDB API .NET, Java, Node.js, and Python SDKs are available on GitHub.

Does the DocumentDB API database support schema-free data?

Yes, the DocumentDB API allows applications to store arbitrary JSON documents without schema definitions or hints. Data is immediately available for query through the Azure Cosmos DB SQL query interface.
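
For example, with the .NET SDK you can store documents of different shapes in the same collection without declaring a schema first. This is a sketch; the client setup and the database/collection names are placeholders.

    using System;
    using Microsoft.Azure.Documents.Client;

    var client = new DocumentClient(
        new Uri("https://<your-account>.documents.azure.com:443/"), "<your-key>");
    Uri collectionUri = UriFactory.CreateDocumentCollectionUri("mydb", "mycoll");

    // Two documents with different shapes in the same collection; no schema or index hints needed.
    await client.CreateDocumentAsync(collectionUri,
        new { id = "1", type = "book", title = "1984", pages = 328 });
    await client.CreateDocumentAsync(collectionUri,
        new { id = "2", type = "author", name = "George Orwell", born = 1903 });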

Does the DocumentDB API support ACID transactions?

Yes, the DocumentDB API supports cross-document transactions expressed as JavaScript stored procedures and triggers. Transactions are scoped to a single partition within each collection and executed with ACID semantics as "all or nothing," isolated from other concurrently executing code and user requests. If exceptions are thrown through the server-side execution of JavaScript application code, the entire transaction is rolled back. For more information about transactions, see Database program transactions.

What is a collection?

A collection is a group of documents and their associated JavaScript application logic. A collection is a billable entity, where the cost is determined by the throughput and used storage. Collections can span one or more partitions or servers and can scale to handle practically unlimited volumes of storage or throughput.

Collections are also the billing entities for Azure Cosmos DB. Each collection is billed hourly, based on the provisioned throughput and used storage space. For more information, see Azure Cosmos DB Pricing.

How do I create a database?

You can create databases by using the Azure portal (as described in Add a collection), one of the Azure Cosmos DB SDKs, or the REST APIs.
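
With the .NET SDK, creating a database and a collection looks roughly like the following sketch; the names and the 400 RU/s throughput value are placeholders.

    using System;
    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;

    var client = new DocumentClient(
        new Uri("https://<your-account>.documents.azure.com:443/"), "<your-key>");

    // Create the database if it doesn't exist, then add a collection with provisioned throughput.
    await client.CreateDatabaseIfNotExistsAsync(new Database { Id = "mydb" });
    await client.CreateDocumentCollectionIfNotExistsAsync(
        UriFactory.CreateDatabaseUri("mydb"),
        new DocumentCollection { Id = "mycoll" },
        new RequestOptions { OfferThroughput = 400 });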

How do I set up users and permissions?

You can create users and permissions by using one of the Cosmos DB API SDKs or the REST APIs.
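
As a sketch with the .NET SDK (the user, database, and collection names are placeholders), you create a user under a database and then grant it a permission on a specific resource; the resulting resource token can be handed to clients instead of the master key.

    using System;
    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;

    var client = new DocumentClient(
        new Uri("https://<your-account>.documents.azure.com:443/"), "<your-key>");

    // Create a user scoped to the database.
    User user = (await client.CreateUserAsync(
        UriFactory.CreateDatabaseUri("mydb"), new User { Id = "reader1" })).Resource;

    // Look up the collection so its self-link can be used in the permission.
    DocumentCollection coll = (await client.ReadDocumentCollectionAsync(
        UriFactory.CreateDocumentCollectionUri("mydb", "mycoll"))).Resource;

    // Grant the user read-only access to that one collection.
    Permission permission = (await client.CreatePermissionAsync(
        UriFactory.CreateUserUri("mydb", "reader1"),
        new Permission
        {
            Id = "readMycoll",
            PermissionMode = PermissionMode.Read,
            ResourceLink = coll.SelfLink
        })).Resource;

    // The resource token for this permission can be distributed instead of the master key.
    Console.WriteLine(permission.Token);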

Does the DocumentDB API support SQL?

Yes, the DocumentDB API supports querying documents by using a SQL dialect. The Azure Cosmos DB SQL query language provides rich hierarchical and relational operators and extensibility via JavaScript-based, user-defined functions (UDFs). JSON grammar allows for modeling JSON documents as trees with labeled nodes, which are used by both the Azure Cosmos DB automatic indexing techniques and the SQL query dialect of Azure Cosmos DB. For information about using SQL grammar, see the Query DocumentDB article.
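
As an illustration (placeholder account, names, and property paths), a SQL query with a hierarchical filter can be issued through the .NET SDK like this:

    using System;
    using System.Linq;
    using Microsoft.Azure.Documents.Client;

    var client = new DocumentClient(
        new Uri("https://<your-account>.documents.azure.com:443/"), "<your-key>");
    Uri collectionUri = UriFactory.CreateDocumentCollectionUri("mydb", "mycoll");

    // Project two properties from documents whose nested address.state equals 'WA'.
    var query = client.CreateDocumentQuery<dynamic>(
        collectionUri,
        "SELECT c.id, c.address.city FROM c WHERE c.address.state = 'WA'",
        new FeedOptions { EnableCrossPartitionQuery = true });

    foreach (var item in query)
    {
        Console.WriteLine(item);
    }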

Does the DocumentDB API support SQL aggregation functions?

The DocumentDB API supports low-latency aggregation at any scale via the SQL aggregate functions COUNT, MIN, MAX, AVG, and SUM. For more information, see Aggregate functions.

How does the DocumentDB API provide concurrency?

The DocumentDB API supports optimistic concurrency control (OCC) through HTTP entity tags, or ETags. Every DocumentDB API resource has an ETag, and the ETag is set on the server every time a document is updated. The ETag header and the current value are included in all response messages. ETags can be used with the If-Match header to allow the server to decide whether a resource should be updated. The If-Match value is the ETag value to be checked against. If the ETag value matches the server ETag value, the resource is updated. If the ETag is no longer current, the server rejects the operation with an "HTTP 412 Precondition failure" response code. The client then re-fetches the resource to acquire the current ETag value for the resource. In addition, ETags can be used with the If-None-Match header to determine whether a re-fetch of a resource is needed.

To use optimistic concurrency in .NET, use the AccessCondition class. For a .NET sample, see Program.cs in the DocumentManagement sample on GitHub.
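
The following sketch (placeholder names, single document) shows the pattern: read a document, send the update with an If-Match condition carrying the ETag you read, and handle the HTTP 412 response that signals the document changed underneath you.

    using System;
    using System.Net;
    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;

    var client = new DocumentClient(
        new Uri("https://<your-account>.documents.azure.com:443/"), "<your-key>");
    Uri docUri = UriFactory.CreateDocumentUri("mydb", "mycoll", "item1");

    // Read the document and keep its current ETag.
    Document doc = (await client.ReadDocumentAsync(docUri)).Resource;
    doc.SetPropertyValue("status", "done");

    try
    {
        // Ask the server to apply the replace only if the ETag still matches.
        await client.ReplaceDocumentAsync(doc, new RequestOptions
        {
            AccessCondition = new AccessCondition
            {
                Type = AccessConditionType.IfMatch,
                Condition = doc.ETag
            }
        });
    }
    catch (DocumentClientException ex) when (ex.StatusCode == HttpStatusCode.PreconditionFailed)
    {
        // Another writer won the race; re-read the document and retry the update.
    }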

How do I perform transactions in the DocumentDB API?

The DocumentDB API supports language-integrated transactions via JavaScript stored procedures and triggers. All database operations inside scripts are executed under snapshot isolation. If it is a single-partition collection, the execution is scoped to the collection. If the collection is partitioned, the execution is scoped to documents with the same partition-key value within the collection. A snapshot of the document versions (ETags) is taken at the start of the transaction and committed only if the script succeeds. If the JavaScript throws an error, the transaction is rolled back. For more information, see Server-side JavaScript programming for Azure Cosmos DB.
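
Executing a stored procedure from the .NET SDK looks like the following sketch. The stored procedure name, partition-key value, return type, and arguments are placeholders; on a partitioned collection, the partition key must be supplied so the transaction is scoped to that partition.

    using System;
    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;

    var client = new DocumentClient(
        new Uri("https://<your-account>.documents.azure.com:443/"), "<your-key>");
    Uri sprocUri = UriFactory.CreateStoredProcedureUri("mydb", "mycoll", "bulkImport");

    // The script runs as one transaction scoped to the "Andersen" partition-key value.
    StoredProcedureResponse<int> response = await client.ExecuteStoredProcedureAsync<int>(
        sprocUri,
        new RequestOptions { PartitionKey = new PartitionKey("Andersen") },
        new[] { new { id = "1", lastName = "Andersen" } });

    Console.WriteLine($"Created {response.Response} documents, charge: {response.RequestCharge} RU");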

How can I bulk-insert documents into Cosmos DB?

You can bulk-insert documents into Azure Cosmos DB in either of two ways:

  • The data migration tool, as described in Database migration tool for Azure Cosmos DB.
  • Stored procedures, as described in Server-side JavaScript programming for Azure Cosmos DB.

Does the DocumentDB API support resource link caching?

Yes, because Azure Cosmos DB is a RESTful service, resource links are immutable and can be cached. DocumentDB API clients can specify an "If-None-Match" header for reads against any resource (such as a document or collection) and then update their local copies after the server version has changed.

Is a local instance of DocumentDB API available?

Yes. The Azure Cosmos DB Emulator provides a high-fidelity emulation of the Cosmos DB service. It supports functionality that's identical to Azure Cosmos DB, including support for creating and querying JSON documents, provisioning and scaling collections, and executing stored procedures and triggers. You can develop and test applications by using the Azure Cosmos DB Emulator, and deploy them to Azure at a global scale by making a single configuration change to the connection endpoint for Azure Cosmos DB.
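
For example, the only difference between a client that targets the emulator and one that targets the cloud is the endpoint and key. The emulator's well-known endpoint and key are shown below; the cloud values are placeholders you would normally read from configuration.

    using System;
    using Microsoft.Azure.Documents.Client;

    // Local development against the emulator (well-known endpoint and key).
    var emulatorClient = new DocumentClient(
        new Uri("https://localhost:8081"),
        "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==");

    // Switching to Azure is a configuration change to the endpoint and key:
    // var cloudClient = new DocumentClient(
    //     new Uri("https://<your-account>.documents.azure.com:443/"), "<your-key>");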

Develop against the API for MongoDB

What is the Azure Cosmos DB API for MongoDB?

The Azure Cosmos DB API for MongoDB is a compatibility layer that allows applications to easily and transparently communicate with the native Azure Cosmos DB database engine by using existing, community-supported Apache MongoDB APIs and drivers. Developers can now use existing MongoDB tool chains and skills to build applications that take advantage of Azure Cosmos DB. Developers benefit from the unique capabilities of Azure Cosmos DB, which include auto-indexing, backup maintenance, financially backed service level agreements (SLAs), and so on.

How do I connect to my API for MongoDB database?

The quickest way to connect to the Azure Cosmos DB API for MongoDB is to head over to the Azure portal. Go to your account and then, on the left navigation menu, click Quick Start. Quick Start is the best way to get code snippets to connect to your database.

Azure Cosmos DB enforces strict security requirements and standards. Azure Cosmos DB accounts require authentication and secure communication via SSL, so be sure to use TLSv1.2.

For more information, see Connect to your API for MongoDB database.
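
As a sketch using the community MongoDB .NET driver (MongoDB.Driver), the connection looks like the following; copy the real connection string from the Quick Start or Keys pane rather than assembling it by hand, and treat the account name, key, and database/collection names below as placeholders.

    using MongoDB.Bson;
    using MongoDB.Driver;

    // Connection string copied from the Azure portal (placeholder values shown here).
    var connectionString =
        "mongodb://<your-account>:<your-key>@<your-account>.documents.azure.com:10255/?ssl=true";

    var client = new MongoClient(connectionString);
    var collection = client.GetDatabase("mydb").GetCollection<BsonDocument>("users");

    // Insert a document through the standard MongoDB driver API.
    collection.InsertOne(new BsonDocument { { "name", "Andy" }, { "age", 42 } });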

Are there additional error codes for an API for MongoDB database?

In addition to the common MongoDB error codes, the MongoDB API has its own specific error codes:

  • TooManyRequests (code 16500): The total number of request units consumed has exceeded the provisioned request-unit rate for the collection and has been throttled. Solution: Consider scaling the throughput of the collection from the Azure portal, or retry the operation.
  • ExceededMemoryLimit (code 16501): As a multi-tenant service, the operation has exceeded the client's memory allotment. Solution: Reduce the scope of the operation through more restrictive query criteria, or contact support from the Azure portal.

Example:

    db.getCollection('users').aggregate([
        {$match: {name: "Andy"}},
        {$sort: {age: -1}}
    ])

Develop with the Table API (Preview)

Terms

The Azure Cosmos DB: Table API (Preview) refers to the premium offering by Azure Cosmos DB for table support announced at Build 2017.

The standard table SDK is the existing Azure Storage table SDK.

How can I use the new Table API (Preview) offering?

The Azure Cosmos DB Table API is available in the Azure portal. First you must sign up for an Azure subscription. After you've signed up, you can add an Azure Cosmos DB Table API account to your Azure subscription, and then add tables to your account.

During the preview period, SDKs are available for .NET. You can get started by completing the Table API quick-start article.

Do I need a new SDK to use the Table API (Preview)?

Yes, the Windows Azure Storage Premium Table (Preview) SDK is available on NuGet. Additional information is available on the Azure Cosmos DB Table .NET API: Download and release notes page.

How do I provide feedback about the SDK or bugs?

You can share your feedback in any of the following ways:

  • Send mail to askcosmosdb@microsoft.com.
  • Post a question on Stack Overflow or the MSDN forum.
  • Submit a suggestion on UserVoice.

What is the connection string that I need to use to connect to the Table API (Preview)?

The connection string is:

DefaultEndpointsProtocol=https;AccountName=<AccountNameFromCosmosDB>;AccountKey=<FromKeysPaneOfCosmosDB>;TableEndpoint=https://<AccountNameFromCosmosDB>.documents.azure.com

You can get the connection string from the Keys page in the Azure portal.

How do I override the config settings for the request options in the new Table API (Preview)?

For information about config settings, see Azure Cosmos DB capabilities. You can change the settings by adding them to app.config in the appSettings section in the client application.

<appSettings>
    <add key="TableConsistencyLevel" value="Eventual|Strong|Session|BoundedStaleness|ConsistentPrefix"/>
    <add key="TableThroughput" value="<PositiveIntegerValue>"/>
    <add key="TableIndexingPolicy" value="<jsonindexdefn>"/>
    <add key="TableUseGatewayMode" value="True|False"/>
    <add key="TablePreferredLocations" value="Location1|Location2|Location3|Location4"/>
</appSettings>

Are there any changes for customers who are using the existing standard table SDK?

None. There are no changes for existing or new customers who are using the existing standard table SDK.

How do I view table data that is stored in Azure Cosmos DB for use with the Table API (Preview)?

You can use the Azure portal to browse the data. You can also use the Table API (Preview) code or the tools mentioned in the next answer.

Which tools work with the Table API (Preview)?

You can use the older version of Azure Explorer (0.8.9).

Tools with the flexibility to take a connection string in the format specified previously can support the new Table API (Preview). A list of table tools is provided on the Azure Storage Client Tools page.

Do PowerShell or Azure CLI work with the new Table API (Preview)?

We plan to add support for PowerShell and Azure CLI for Table API (Preview).

Is the concurrency on operations controlled?

Yes, optimistic concurrency is provided via the use of the ETag mechanism.

Is the OData query model supported for entities?

Yes, the Table API (Preview) supports OData query and LINQ query.

Can I connect to the standard Azure table and the new premium Table API (Preview) side by side in the same application?

Yes, you can connect by creating two separate instances of the CloudTableClient, each pointing to its own URI via the connection string.
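
A minimal sketch, assuming the preview SDK keeps the Microsoft.WindowsAzure.Storage namespaces of the standard table SDK; both connection strings and the table name are placeholders.

    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Table;

    // One client for a standard Azure Table storage account...
    CloudStorageAccount standardAccount = CloudStorageAccount.Parse(
        "DefaultEndpointsProtocol=https;AccountName=<storage-account>;AccountKey=<storage-key>");
    CloudTableClient standardClient = standardAccount.CreateCloudTableClient();

    // ...and another for a premium Table API (Preview) account, via its TableEndpoint.
    CloudStorageAccount premiumAccount = CloudStorageAccount.Parse(
        "DefaultEndpointsProtocol=https;AccountName=<cosmos-account>;AccountKey=<cosmos-key>;" +
        "TableEndpoint=https://<cosmos-account>.documents.azure.com");
    CloudTableClient premiumClient = premiumAccount.CreateCloudTableClient();

    CloudTable standardTable = standardClient.GetTableReference("orders");
    CloudTable premiumTable = premiumClient.GetTableReference("orders");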

How do I migrate an existing Azure Table storage application to this new offering?

To take advantage of the new Table API offering on your existing Table storage data, contact askcosmosdb@microsoft.com.

What is the roadmap for this service, and when will you offer other standard Table API functionality?

We plan to add support for SAS tokens, ServiceContext, stats, client-side encryption, analytics, and other features as we proceed toward general availability (GA). You can give us feedback on UserVoice.

How is expansion of the storage size done for this service if, for example, I start with n GB of data and my data will grow to 1 TB over time?

Azure Cosmos DB is designed to provide unlimited storage via horizontal scaling. The service monitors your usage and effectively increases your storage as your data grows.

How do I monitor the Table API (Preview) offering?

You can use the Table API (Preview) Metrics pane to monitor requests and storage usage.

How do I calculate the throughput I require?

You can use the capacity estimator to calculate the TableThroughput that's required for the operations. For more information, see Estimate Request Units and Data Storage. In general, you can represent your entity as JSON and provide the numbers for your operations.

Can I use the new Table API (Preview) SDK locally with the emulator?

Yes, you can use the Table API (Preview) with the local emulator when you use the new SDK. To download the emulator, go to Use the Azure Cosmos DB Emulator for local development and testing. The StorageConnectionString value in app.config needs to be:

DefaultEndpointsProtocol=https;AccountName=localhost;AccountKey=C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==;TableEndpoint=https://localhost:8081

Can my existing application work with the Table API (Preview)?

The surface area of the new Table API (Preview) is compatible with the existing Azure standard table SDK across the create, delete, update, and query operations in the .NET API. Ensure that you have a row key, because the Table API (Preview) requires both a partition key and a row key. We also plan to add more SDK support as we proceed toward GA of this service offering.

Do I need to migrate my existing Azure table-based applications to the new SDK if I do not want to use the Table API (Preview) features?

No, you can create and use existing standard table assets without interruption of any kind. However, if you do not use the new Table API (Preview), you cannot benefit from automatic indexing, the additional consistency options, or global distribution.

How do I add replication of the data in the premium Table API (Preview) across multiple regions of Azure?

You can use the Azure Cosmos DB portal’s global replication settings to add regions that are suitable for your application. To develop a globally distributed application, you should also deploy your application in each region, with its PreferredLocation information set to the local region, to provide low read latency.

How do I change the primary write region for the account in the premium Table API (Preview)?

You can use the Azure Cosmos DB global replication portal pane to add a region and then fail over to the required region. For instructions, see Developing with multi-region Azure Cosmos DB accounts.

How do I configure my preferred read regions for low latency when I distribute my data?

To help read from the local location, use the PreferredLocation key in the app.config file. For existing applications, the Table API (Preview) throws an error if LocationMode is set. Remove that code, because the premium Table API (Preview) picks up this information from the app.config file. For more information, see Azure Cosmos DB capabilities.

How should I think about consistency levels in the Table API (Preview)?

Azure Cosmos DB provides well-reasoned trade-offs between consistency, availability, and latency. Azure Cosmos DB offers five consistency levels to Table API (Preview) developers, so you can choose the right consistency model at the table level and on individual requests while querying the data. When a client connects, it can specify a consistency level. You can change the level via the app.config setting for the value of the TableConsistencyLevel key.

The Table API (Preview) provides low-latency reads with "Read your own writes," with Session consistency as the default. For more information, see Consistency levels.

By default, Azure Table storage provides Strong consistency within a region and Eventual consistency in the secondary locations.

Does Azure Cosmos DB offer more consistency levels than standard tables?

Yes, for information about how to benefit from the distributed nature of Azure Cosmos DB, see Consistency levels. Because guarantees are provided for the consistency levels, you can use them with confidence. For more information, see Azure Cosmos DB capabilities.

When global distribution is enabled, how long does it take to replicate the data?

We commit the data durably in the local region and push the data to other regions immediately in a matter of milliseconds. This replication is dependent only on the round-trip time (RTT) of the datacenter. To learn more about the global-distribution capability of Azure Cosmos DB, see Azure Cosmos DB: A globally distributed database service on Azure.

Can the read request consistency level be changed?

With Azure Cosmos DB, you can set the consistency level at the container level (on the table). By using the SDK, you can change the level by providing the value for the TableConsistencyLevel key in the app.config file. The possible values are: Strong, Bounded Staleness, Session, Consistent Prefix, and Eventual. For more information, see Tunable data consistency levels in Azure Cosmos DB. The key idea is that you cannot set the request consistency level to one stronger than the setting for the table. For example, you cannot set the consistency level for the table to Eventual and the request consistency level to Strong.

How does the premium Table API (Preview) account handle failover if a region goes down?

The premium Table API (Preview) borrows from the globally distributed platform of Azure Cosmos DB. To ensure that your application can tolerate datacenter downtime, enable at least one more region for the account in the Azure Cosmos DB portal, and set the priority of each region by using the portal. For instructions, see Developing with multi-region Azure Cosmos DB accounts.

You can add as many regions as you want for the account and control where it can fail over to by providing a failover priority. Of course, to use the database, you need to deploy your application in those regions too. When you do so, your customers will not experience downtime. The client SDK is auto-homing; that is, it can detect the region that's down and automatically fail over to the new region.

Is the premium Table API (Preview) enabled for backups?

Yes, the premium Table API (Preview) borrows from the platform of Azure Cosmos DB for backups. Backups are made automatically. For more information, see Online backup and restore with Azure Cosmos DB.

Does the Table API (Preview) index all attributes of an entity by default?

Yes, all attributes of an entity are indexed by default. For more information, see Azure Cosmos DB: Indexing policies.

Does this mean I do not have to create multiple indexes to satisfy the queries?

Yes, Azure Cosmos DB provides automatic indexing of all attributes without any schema definition. This automation frees developers to focus on the application rather than on index creation and management. For more information, see Azure Cosmos DB: Indexing policies.

Can I change the indexing policy?

Yes, you can change the indexing policy by providing the index definition. For more information, see Azure Cosmos DB capabilities. You need to properly encode and escape the settings.

Provide the indexing policy as a JSON string in the app.config file:

{
  "indexingMode": "consistent",
  "automatic": true,
  "includedPaths": [
    {
      "path": "/somepath",
      "indexes": [
        {
          "kind": "Range",
          "dataType": "Number",
          "precision": -1
        },
        {
          "kind": "Range",
          "dataType": "String",
          "precision": -1
        }
      ]
    }
  ],
  "excludedPaths": [
    {
      "path": "/anotherpath"
    }
  ]
}

Azure Cosmos DB as a platform seems to have a lot of capabilities, such as sorting, aggregates, hierarchy, and other functionality. Will you be adding these capabilities to the Table API?

In preview, the Table API provides the same query functionality as Azure Table storage. Azure Cosmos DB also supports sorting, aggregates, geospatial query, hierarchy, and a wide range of built-in functions. We will provide additional functionality in the Table API in a future service update. For more information, see SQL queries for Azure Cosmos DB DocumentDB API.

When should I change TableThroughput for the Table API (Preview)?

You should change TableThroughput when either of the following conditions applies:

  • You're performing an extract, transform, and load (ETL) of data, or you want to upload a lot of data in a short amount of time.
  • You need more throughput from the container at the back end. For example, you see that the used throughput is more than the provisioned throughput, and you are getting throttled. For more information, see Set throughput for Azure Cosmos DB containers.

Can I scale up or scale down the throughput of my Table API (Preview) table?

Yes, you can use the Azure Cosmos DB portal’s scale pane to scale the throughput. For more information, see Set throughput.

Is a default TableThroughput set for newly provisioned tables?

Yes, if you do not override the TableThroughput via app.config and do not use a pre-created container in Azure Cosmos DB, the service creates a table with a default throughput of 400 RU/s.

Is there any change of pricing for existing customers of the standard Table API?

None. There is no change in price for existing standard Table API customers.

How is the price calculated for the Table API (Preview)?

The price depends on the allocated TableThroughput.

How do I handle any throttling on the tables in the Table API (Preview) offering?

If the request rate exceeds the capacity of the provisioned throughput for the underlying container, you will get an error, and the SDK will retry the call by applying the retry policy.
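
If you want to tune the retry behavior yourself, the preview SDK is described above as being compatible with the standard table SDK surface, so one option (a sketch, not verified against the preview SDK) is the standard SDK's request-options retry policy; the connection string is a placeholder.

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.RetryPolicies;
    using Microsoft.WindowsAzure.Storage.Table;

    CloudStorageAccount account = CloudStorageAccount.Parse("<your-table-api-connection-string>");
    CloudTableClient tableClient = account.CreateCloudTableClient();

    // Back off exponentially, starting at 2 seconds, for up to 5 attempts on retriable errors.
    tableClient.DefaultRequestOptions.RetryPolicy =
        new ExponentialRetry(TimeSpan.FromSeconds(2), 5);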

Why do I need to choose a throughput apart from PartitionKey and RowKey to take advantage of the premium Table API (Preview) offering of Azure Cosmos DB?

Azure Cosmos DB sets a default throughput for your container if you do not provide one in the app.config file.

Azure Cosmos DB provides guarantees for performance and latency, with upper bounds on operation latency. This guarantee is possible when the engine can enforce governance on the tenant's operations. Setting TableThroughput ensures that you get the guaranteed throughput and latency, because the platform reserves this capacity and guarantees operational success.

Because throughput is specified explicitly, you can elastically change it to benefit from the seasonality of your application, meet your throughput needs, and save costs.

Azure Storage SDK has been very inexpensive for me, because I pay only to store the data, and I rarely query. The new Azure Cosmos DB offering seems to be charging me even though I have not performed a single transaction or stored anything. Can you please explain?

Azure Cosmos DB is designed to be a globally distributed, SLA-based system with guarantees for availability, latency, and throughput. When you reserve throughput in Azure Cosmos DB, it is guaranteed, unlike the throughput of other systems. Azure Cosmos DB provides additional capabilities that customers have requested, such as secondary indexes and global distribution. During the preview period, we provide a throughput-optimized model and, eventually, we plan to provide a storage-optimized model to meet our customers' needs.

I never get a “quota full" notification (indicating that a partition is full) when I ingest data into Table storage. With the Table API (Preview), I do get this message. Is this offering limiting me and forcing me to change my existing application?

Azure Cosmos DB is an SLA-based system that provides unlimited scale, with guarantees for latency, throughput, availability, and consistency. To ensure guaranteed premium performance, make sure that your data size and index are manageable and scalable. The 10-GB limit on the number of entities or items per partition key is to ensure that we provide great lookup and query performance. To ensure that your application scales well even for Azure Storage, we recommend that you not create a hot partition by storing all information in one partition and querying it.

So PartitionKey and RowKey are still required with the new Table API (Preview)?

Yes. Because the surface area of the Table API (Preview) is similar to that of the Table storage SDK, the partition key provides an efficient way to distribute the data. The row key is unique within that partition. The row key needs to be present and can't be null, as in the standard SDK. The length of RowKey is 255 bytes, and the length of PartitionKey is 100 bytes (soon to be increased to 1 KB).

What are the error messages for the Table API (Preview)?

Because this preview is compatible with the standard table, most of the errors will map to the errors from the standard table.

Why do I get throttled when I try to create a lot of tables one after another in the Table API (Preview)?

Azure Cosmos DB is an SLA-based system that provides latency, throughput, availability and consistency guarantees. Because it is a provisioned system, it reserves resources to guarantee these requirements. The rapid rate of creation of tables is detected and throttled. We recommend that you look at the rate of creation of tables and lower it to less than 5 per minute. Remember that the Table API (Preview) is a provisioned system. The moment you provision it, you will begin to pay for it.

Develop against the Graph API (Preview)

How can I apply the functionality of Graph API (Preview) to Azure Cosmos DB?

You can use an extension library to apply the functionality of Graph API (Preview). This library is called Microsoft Azure Graphs, and it is available on NuGet.
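
A sketch of issuing a Gremlin traversal with that library, assuming the Microsoft.Azure.Graphs preview package exposes a CreateGremlinQuery extension on DocumentClient (check the package's documentation for the exact API); the account, key, and graph names are placeholders.

    using System;
    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;
    using Microsoft.Azure.Documents.Linq;
    using Microsoft.Azure.Graphs;

    var client = new DocumentClient(
        new Uri("https://<your-account>.documents.azure.com:443/"), "<your-key>");
    DocumentCollection graph = (await client.ReadDocumentCollectionAsync(
        UriFactory.CreateDocumentCollectionUri("graphdb", "persons"))).Resource;

    // Run a Gremlin traversal and page through the results.
    IDocumentQuery<dynamic> query = client.CreateGremlinQuery<dynamic>(graph, "g.V().has('name', 'Andy')");
    while (query.HasMoreResults)
    {
        foreach (dynamic vertex in await query.ExecuteNextAsync())
        {
            Console.WriteLine(vertex);
        }
    }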

It looks like you support the Gremlin graph traversal language. Do you plan to add more forms of query?

Yes, we plan to add other mechanisms for query in the future.

How can I use the new Graph API (Preview) offering?

To get started, complete the Graph API quick-start article.

Questions from DocumentDB customers

Why are you moving to Azure Cosmos DB?

Azure Cosmos DB is the next big leap in globally distributed, at-scale cloud databases. As a DocumentDB customer, you now have access to the breakthrough system and capabilities offered by Azure Cosmos DB.

Azure Cosmos DB started as “Project Florence” in 2010 to address the pain points faced by developers in building large-scale applications inside Microsoft. The challenges of building globally distributed apps are not unique to Microsoft, so we made the first generation of this technology available in 2015 to Azure developers in the form of Azure DocumentDB.

Since that time, we’ve added new features and introduced significant new capabilities; Azure Cosmos DB is the result. These capabilities are in the areas of the core database engine, as well as global distribution, elastic scalability, and industry-leading, comprehensive SLAs. As part of this release, DocumentDB customers and their data automatically and seamlessly became Azure Cosmos DB customers. Specifically, we have evolved the Azure Cosmos DB database engine to efficiently map all popular data models, type systems, and APIs to the underlying data model of Azure Cosmos DB.

The current developer-facing manifestation of this work is the new support for Gremlin and Table storage APIs. And this is just the beginning. We plan to add other popular APIs and newer data models over time, with more advances in performance and storage at global scale.

It is important to point out that the DocumentDB SQL dialect has always been just one of the many APIs that the underlying Azure Cosmos DB can support. For developers who use a fully managed service such as Azure Cosmos DB, the only interface to the service is the APIs that are exposed by the service. Nothing really changes for existing DocumentDB customers. In Azure Cosmos DB, you get exactly the same SQL API that DocumentDB offers. And now (and in the future), you can access other previously inaccessible capabilities.

Another manifestation of our continued work is the extended foundation for global and elastic scalability of throughput and storage. We have made several foundational enhancements to the global distribution subsystem. One of the many such developer-facing features is the Consistent Prefix consistency model, which makes a total of five well-defined consistency models. We will release many more interesting capabilities as they mature.

What do I need to do to ensure that my DocumentDB resources continue to run on Azure Cosmos DB?

You need to make no changes at all. Your DocumentDB resources are now Azure Cosmos DB resources, and there was no interruption in the service when this move occurred.

What changes do I need to make for my app to work with Azure Cosmos DB?

There are no changes to make. Classes, namespaces, and NuGet package names have not changed. As always, we recommend that you keep your SDKs up to date to take advantage of the latest features and improvements.

What's changed in the Azure portal?

DocumentDB no longer appears in the portal as an Azure service. In its place is a new Azure Cosmos DB icon, as shown in the following image. All your collections are available, as they were before, and you can still scale throughput, change consistency levels, and monitor SLAs. The capabilities of Data Explorer (Preview) have been enhanced: you can now view and edit documents, create and run queries, and work with stored procedures, triggers, and UDFs from one page, as shown in the following image:

The Azure Cosmos DB Collections blade

Are there changes to pricing?

No, the cost of running your app on Azure Cosmos DB is the same as it was before.

Are there changes to the SLAs?

No, the SLAs for availability, consistency, latency, and throughput are unchanged and are still displayed in the portal. For more information, see SLA for Azure Cosmos DB.
