Talking Point: ADO.NET Data Services 1.5 (CTP1)
A while back we took a look at ADO.NET Data Services. Last night I thought it would be a good opportunity to review the 1.5 CTP1. This Talking Point covers enhancements made to both the server and client APIs.
Using ADO.NET Data Services today, you can make a request to a service for a specific entity set, such as “Products”. This provides a great level of flexibility to the consumer, but it comes with a few problems. Because no paging options were applied to the request, the consumer has no idea how much data they are about to get back. This could end up creating a network bottleneck.
The consumer can optionally specify paging parameters in their request for products, which removes the potential bandwidth tax. The problem with this approach is that you have no idea how many products actually exist on the server. If you wanted to provide some paging functionality in a client application, there isn’t any way to determine how many pages there are without retrieving all products.
In ADO.NET Data Services 1.5, there is a new pseudo-selector that can be applied to a request: $count. It isn’t a query option (like $skip or $top) but rather a value that you specifically target, much like the $value or $links pseudo-selectors.
When you target an entity set’s $count, you get back only the count of items in that set. This is nice if you already have the data and are only concerned with determining the server-side count, but if you want the data as well as the count, this approach doesn’t lend itself well to that scenario.
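As a sketch, assuming a hypothetical service rooted at /Northwind.svc with a Products entity set, the two requests look like this:

```
GET /Northwind.svc/Products         -- full Atom feed of every product
GET /Northwind.svc/Products/$count  -- plain-text body containing only the count of products
```

The first returns the entities themselves; the second returns a single integer and nothing else.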
In addition to the $count pseudo-selector, there is a new query option called $inlinecount that allows you to query for data and then optionally include the count inside the response. This option is great because now you can get the data you need, paged to the size you need, and also find out how many total items exist on the server, all in a single request. This makes it much easier to develop client applications that consume a data service.
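Sticking with the same hypothetical Products set, a paged request that also asks for the total might look like this (the exact element carrying the count in the response may differ between the CTP and later releases; in released versions it surfaces as a count element in the metadata namespace of the Atom feed):

```
GET /Northwind.svc/Products?$skip=20&$top=10&$inlinecount=allpages
```

The response is a feed of the ten requested entries, plus the total number of products in the set, all in one round trip.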
Back to the scenario where a consumer requests an entire entity set without specifying paging information: there are additional issues beyond simply not being able to retrieve count data. What if there were half a million product records on the server? The client could pull all of that data down, possibly without even realizing what they’re doing. This obviously won’t scale very well; a data service can’t assume that the client will do the right thing.
In 1.5, when a client requests an entity set that potentially contains a lot of data on the server, the service can enforce paging on the request and send back only the records it wants to give to the client. This allows the consumer to continue making the same resource-centric requests they’re familiar with, but it allows the server to “guide” the client down the path of success for both ends. As part of the response content, when server-side paging is enforced, a URI will be included that points the client at the next page (if applicable). That way the RESTfulness of the service remains intact, and the connectivity of the resources doesn’t break in the name of paging.
If the client follows the link for a subsequent page (that was provided by the server), the service will then respond with the appropriate page of data as well as a next link (if applicable). ADO.NET Data Services 1.5 will only include next links; links for previous, first, and last pages will come in a subsequent release.
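Inside the Atom feed, the next-page pointer is rendered as a link element. A sketch of its shape (the $skiptoken value is generated by the service; both the host and token here are illustrative):

```
<link rel="next"
      href="http://example.com/Northwind.svc/Products?$skiptoken=30" />
```

A client simply issues a GET against that href to fetch the next page, so no paging logic needs to be invented on the consumer side.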
By default, when ADO.NET Data Services exposes an instance of an entity type, it serializes the data using the Atom Publishing Protocol (AtomPub/APP) format. Every public property on the entity type gets mapped to an element within the content of the respective entry element.
While this default behavior works just fine for many situations, there are some oddities when using APP. For instance, the APP format requires that every entry include a title and an author. ADO.NET Data Services will render these elements, but never actually fill them with content. This could confuse consumers of the service that are APP aware and would expect to be provided with a title and/or author.
In 1.5, ADO.NET Data Services introduces a feature called “friendly feeds” that allows you to map an entity property to an element within the APP entry. This can either be a pre-defined element such as title or author, or a custom element. The ability to map properties to custom elements allows you to add additional information to your data feeds, such as microformats (e.g., GeoRSS), that can be interpreted by understanding clients.
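In the released API, friendly-feed mappings are declared with an attribute on the entity type; the exact names may differ slightly in the CTP, so treat this as a sketch. Here a hypothetical Product entity maps its ProductName property into the Atom title element:

```csharp
using System.Data.Services.Common;

// Sketch: map ProductName into the <title> of each Atom entry.
// The final bool (keepInContent) also leaves the value inside <content>,
// so non-APP-aware clients still see it.
[EntityPropertyMapping("ProductName",
    SyndicationItemProperty.Title,
    SyndicationTextContentKind.Plaintext,
    true /* keepInContent */)]
public class Product
{
    public int ID { get; set; }
    public string ProductName { get; set; }
}
```

With that in place, an APP-aware reader sees a meaningful title instead of an empty element.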
If your data service exposes an entity that contains binary data, you can end up with a less than desirable situation. Because data is ultimately serialized as string content, any binary content has to be base64 encoded before being sent to the client. This expands the size of the response and requires deserialization and base64 decoding on the client side. In addition, whenever a request comes in for entities with binary properties, that binary data must be fully loaded into memory before it can be sent to the client. This could lead to memory issues if the service is under load.
If the consumer never even uses the binary data, you end up with wasted bandwidth, memory, and processing time. You could remove the binary data from the entity that the service exposes, but then any clients that do need the binary data won’t have access to it. You could remove the binary data from the entity and then create a service operation that returns the binary content, but that behavior isn’t exactly ideal and doesn’t follow the AtomPub semantics.
In 1.5 of ADO.NET Data Services you are able to separate an entity with binary data into two pieces: a media resource and a media link entry. The media resource represents the actual binary data, as well as its content type. The media link entry represents the metadata and additional information that goes along with the binary data. This makes it possible to query the two individually.
For instance, we could query for the photo of the product whose ID is 100, which would return the metadata of the photo (the media link entry) without the actual binary image content. If we wanted to retrieve the media resource for the photo (the binary content), we could simply append $value to the request. At this point it is up to the service to determine where to actually retrieve this media resource. It could come from a database, file system, the cloud, or anywhere else. Once the resource is retrieved, the service returns the binary image back to the client.
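Assuming a hypothetical ProductPhotos entity set, the two requests from that example would look like this:

```
GET /Northwind.svc/ProductPhotos(100)         -- media link entry: metadata only
GET /Northwind.svc/ProductPhotos(100)/$value  -- media resource: the raw image bytes
```

The first response is a normal Atom entry describing the photo; only the second actually streams the binary content, with its proper content type.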
Every server-side feature that we just discussed has corresponding support on the client as well, which makes the end-to-end experience of consuming a 1.5 data service very easy and rich.
There is also one enhancement that is exclusive to the client API:
WPF/Silverlight data binding
Using the ADO.NET Data Services client API, you can create an instance of a DataServiceContext to retrieve an entity instance. Once you have the desired instance, you can bind it to any FrameworkElement (or subtype) instance in a WPF application. If the bound FrameworkElement accommodates edits, then you can modify the entity data via the UI.
This process works just fine in v1, but the problem is that when you want to save the changes back to the data service, you have to explicitly notify the DataServiceContext. It would be a lot nicer if the entity instance included the necessary change-tracking behavior, such that when you modify its contents via its bound FrameworkElement, it automatically notifies its parent DataServiceContext of the modification.
In 1.5, that change tracking is built in. And in addition to single entity instances, if you bind a list of entities to a WPF items control, any changes made to the list via the UI (instance modification/deletion) will automatically notify the parent DataServiceContext.
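A minimal sketch of what this looks like, assuming a generated NorthwindEntities context and Product type (both hypothetical) and the tracked collection type exposed by the 1.5 client library; exact type names may differ in the CTP:

```csharp
using System;
using System.Data.Services.Client;

// Create the context and a change-tracking collection over a query.
var ctx = new NorthwindEntities(new Uri("http://example.com/Northwind.svc"));
var products = new DataServiceCollection<Product>(ctx.Products);

// Bind to a WPF items control; edits and deletions made through the UI
// are reported back to the context automatically.
productsListBox.ItemsSource = products;

// Later, one call persists everything the context has tracked.
ctx.SaveChanges();
```

The point is that no per-property UpdateObject calls are needed anymore; the collection and entities raise the change notifications on your behalf.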
ADO.NET Data Services is a key enabler of efficient cloud computing scenarios. This CTP takes us several steps forward in creating a model that matches the current enterprise data access model, and it highlights Microsoft’s tenacious commitment to evolving the platform toward the emerging reality of cloud-based services.