Volume 24 Number 05
Under the Table - Programming with FileStreams in SQL Server 2008
By Bob Beauchemin | May 2009
Programming Filestreams with Transact SQL
Programming Filestreams with File I/O
The SqlFileStream .NET Data Type
FileStreams and Transactions
Feature Synergy with Filestreams
There has always been lots of discussion about whether large blobs (binary large objects), such as document and multimedia items, should be stored in the database or in the file system. On one hand, the database is a specialized data repository that provides built-in integrated backup and restore, point-in-time recovery, transactions, indexing, and much more. On the other hand, having large data in a database can cause database fragmentation, and the nice recovery features also mean that the large object data is always written twice, once to the database itself and once to the transaction log. Even reading large data with SQL means using precious database buffers and flushing data that would otherwise be cached in memory out to disk as a side-effect.
To settle this controversy once and for all, Microsoft Research did a study on the subject and published their conclusions in a whitepaper "To BLOB or Not To BLOB: Large Object Storage in a Database or a Filesystem?" (See research.microsoft.com/apps/pubs/default.aspx?id=64525). Their conclusions were a setback for the "let's store everything in a database" opinion, as they tested a SQL Server 2005 database against the NTFS file system and recommended "BLOBs smaller than 256KB are more efficiently handled by SQL Server, while NTFS is more efficient for BLOBS larger than 1MB. This break-even point will vary among different database systems, file systems, and workloads." Some programmers attempt to have the best of both worlds by storing the location of an NTFS file in a database row, but this sacrifices transactional consistency, integrated query, and the other database features. In SQL Server 2008, you don't have to choose; a new feature known as the filestream storage attribute allows you to define columns in SQL Server tables with a specification that the data is actually stored in the file system. You can query the data with Transact SQL or streaming APIs using C++ and .NET-compliant languages, and retain all of the other database management features. In this column, I'll describe how you'd program this hybrid storage model using both types of APIs.
First, you may have noticed that I referred to the feature as the filestream storage attribute. Filestream is not a new data type; it's a new storage mechanism, available only with the varbinary(max) data type introduced in SQL Server 2005. Using filestream storage requires that the system administrator enable it at the operating system level using SQL Server Configuration Manager and that the database administrator enable it at the SQL Server instance level using sp_configure. Filestream can be disabled, enabled for local access, or enabled for local and remote access. In addition, the DBA must define a database filegroup that ties an NTFS file system location to a SQL Server database. Note that the filegroup needs to point to a local file system location; filestreams can't live on a remote server or a network-attached storage (NAS) device unless the NAS device is presented as a local NTFS volume via iSCSI. Accessing the filestream uses the server message block (SMB) protocol, so if you're going to allow file I/O-style access from outside the machine where SQL Server is installed, you must allow access to the SMB port (usually port 445, with port 139 as a fallback) through the firewall. Once the administrative prerequisites are in place, you simply define a varbinary(max) column as part of a table definition with the FILESTREAM attribute, and your data for this column is automatically stored in files. In addition, each table that uses a FILESTREAM column requires a column of the UNIQUEIDENTIFIER data type that has the ROWGUIDCOL property. A version of the Northwind database's Employees table that uses filestreams would look like Figure 1.
Figure 1 Employees Table Using Filestreams
CREATE TABLE [dbo].[Employees2](
    [EmployeeID] [int] IDENTITY NOT NULL PRIMARY KEY,
    [LastName] [nvarchar](20) NOT NULL,
    [FirstName] [nvarchar](10) NOT NULL,
    [Title] [nvarchar](30) NULL,
    -- filestream storage for photo column
    [Photo] [varbinary](max) FILESTREAM NULL,
    [ReportsTo] [int] NULL,
    -- identifier column
    [RowGuid] [UNIQUEIDENTIFIER] NOT NULL ROWGUIDCOL UNIQUE DEFAULT NEWID()
);
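As a sketch of the administrative prerequisites described above, here is roughly what the setup would look like; the database name, filegroup name, and file paths are hypothetical, and the operating-system-level step still has to be done in SQL Server Configuration Manager:

```sql
-- Instance level: enable filestream for both T-SQL and file I/O access
EXEC sp_configure 'filestream access level', 2;
RECONFIGURE;

-- Database level: a filegroup that ties an NTFS location to the database
-- (database name, filegroup name, and paths are hypothetical)
CREATE DATABASE Archive
ON PRIMARY (NAME = ArchData, FILENAME = 'C:\Data\Archive.mdf'),
FILEGROUP ArchFSGroup CONTAINS FILESTREAM
    (NAME = ArchFS, FILENAME = 'C:\Data\ArchiveFS')
LOG ON (NAME = ArchLog, FILENAME = 'C:\Data\Archive.ldf');
```

The FILESTREAM filegroup's FILENAME points at a directory (which must not yet exist, though its parent must), not at a file.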
Filestream uses both the traditional database log and a filestream-specific logging mechanism that records file changes. This mechanism is separate from the regular SQL Server transaction log but is integrated with the backup and restore utilities. For more information on filestream from an administrative and database internals point of view, refer to Paul Randal's excellent whitepaper "Filestream Storage in SQL Server 2008" at msdn.microsoft.com/library/cc949109.
From a SQL Server programmer's point of view, a filestream column behaves almost exactly like a varbinary(max) column, with a few caveats. You can use Transact SQL to query the data without changing application programs at all. The simple Windows Forms application included in the code for this article needs no changes to display the Employees2 table I defined above in a DataGrid control. However, the value added from a performance perspective comes when using the filestream-specific API enhancements to read and write the data through stream-based APIs. An additional win is that the data in a filestream-based varbinary(max) column is no longer limited to 2GB per column value; it is limited only by the size of the underlying file system volume. That means that you can store, for example, 7GB medical images, like X-Ray images, in a column in an ordinary SQL Server table. Filestream data does not count toward the 4GB maximum database size limit in SQL Server Express Edition, so filestream can even be used with blobs in SQL Server 2008 Express Edition.
Before I go into the APIs, I'd like to stress that, once you've stored file data using SQL Server's filestream storage, you must not access or change the data outside of SQL Server's control. The directories where filestream's files live are protected by a discretionary access control list (DACL) that allows access only by the Windows group that contains the SQL Server service account and the system administrator account. And the system administrator can remove her access by changing the DACL. When a user tries to open a file using the streaming APIs, an access check is performed against the traditional SQL permissions on the database, schema, table, and column levels and NTFS permissions are ignored. Editing the filestream data directly using notepad.exe (or more likely, a specialized photo editor program) will usually result in a corrupt database. However, using the transactional streaming APIs that SQL Server provides, you could write your own SQL Server-based "transactional Notepad" program.
Programming Filestreams with Transact SQL
Although programming filestreams with T-SQL is just like ordinary T-SQL programming, there are a few caveats. First, partial updates to the filestream column using the varbinary(max) WRITE method are not allowed. In addition, because the streaming APIs require a file path name supplied by SQL Server (that is, the streaming APIs can be used only when the filestream column value is non-NULL), T-SQL is the only way to obtain the file name. Although inserting a NULL value into a filestream column will not create a file, INSERTing any non-NULL value will cause a file to be created. An UPDATE to a filestream column will cause the old file to be deleted and a new file to be created in its place. A DELETE operation on a row will cause the corresponding file to be deleted. Note that the file deleted by an UPDATE or DELETE operation will not disappear from the file system immediately; a garbage collector thread physically deletes the file and reclaims the space.
All of the T-SQL built-in functions that work with varbinary(max) work with a filestream column, including SUBSTRING, REPLACE, LEN, and CHARINDEX. Filestream storage is permitted only for columns in database tables; variables, parameters (including table-valued parameters), columns in temporary table structures, and table-valued function return values may not use the FILESTREAM attribute. Also note that the FILESTREAM attribute is available only on varbinary(max) columns; other large data types like varchar(max) and XML may not specify the filestream attribute. If you want to store character or XML data in filestream storage, you must specify the column as varbinary(max) and use CAST or CONVERT to process the data as characters or XML.
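For instance, to keep XML documents in filestream storage you'd declare the column as varbinary(max) FILESTREAM and convert on the way in and out. A hedged sketch, with a hypothetical Documents table:

```sql
-- Hypothetical table storing XML documents in a filestream-backed varbinary(max)
CREATE TABLE dbo.Documents (
    DocID int IDENTITY NOT NULL PRIMARY KEY,
    DocData varbinary(max) FILESTREAM NULL,
    RowGuid uniqueidentifier NOT NULL ROWGUIDCOL UNIQUE DEFAULT NEWID()
);

-- Store character data by converting it to varbinary(max)
INSERT dbo.Documents (DocData)
VALUES (CAST('<order id="1" status="shipped"/>' AS varbinary(max)));

-- Read it back, converting to xml so the XQuery methods are available
SELECT CAST(DocData AS xml) AS Doc
FROM dbo.Documents;
```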
In order to access the filestream data using the streaming APIs, you must first use T-SQL to obtain a path name; this is accomplished with the (case-sensitive) PathName() method on the filestream column. Filestream's logical paths are versioned, and in the version 1 format that's used by SQL Server 2008, the path name is tied to (but not equal to) the value of the ROWGUIDCOL column in the same row. Although you can use filestream columns in views and derived tables, be sure to keep the ROWGUIDCOL around in case you need to use the PathName() method. For example, for table T that contains a filestream and a rowguid column, write the code in Figure 2.
Figure 2 Create a View
CREATE VIEW View1 AS
SELECT RowGuidColumn, FileStreamColumn FROM T;
GO
SELECT FileStreamColumn.PathName() FROM View1; -- works
GO
CREATE VIEW View2 AS
SELECT FileStreamColumn FROM T;
GO
-- Fails because View2 is missing the RowGuidColumn
SELECT FileStreamColumn.PathName() FROM View2;
GO
Finally, data encryption is not supported on filestream columns. This is not only true for the data encryption APIs like EncryptByKey, but also for the SQL Server 2008 Transparent Data Encryption feature. Although databases with filestream storage may specify Transparent Data Encryption, the filestream files will not be encrypted.
Programming Filestreams with File I/O
Before using streaming I/O with a filestream storage column, there are some requirements on the client side. You must use an Integrated Security account when using streaming I/O, as there is no way to pass SQL credentials when opening the handle. Because filestream storage was implemented using a special file system filter driver, programming with a column that uses filestream storage involves working with a file handle that does not support all Win32 operations. The native Win32 function that retrieves the file handle is OpenSqlFilestream. This function is part of the SQL Native Client 10 DLL that also contains the SQL Server ODBC driver, OLE DB provider, and network libraries. The function returns a Win32 handle that supports most of the streaming functionality of an ordinary Win32 file handle. While obtaining an ordinary Win32 file handle requires a path name and some additional options, the OpenSqlFilestream function requires two pieces of information that can be provided only by SQL Server: the name for the file, returned by calling the PathName() method on the column that uses filestream storage, and a transaction token. This means that when you're programming with OpenSqlFilestream, you're splitting the INSERT, UPDATE, or SELECT statement into the equivalent of two "SQL" statements, the T-SQL statement and the streaming statement. These statements must be tied together with a transaction. Although I'll say more about the transaction and the supported isolation levels later, for now it's enough to know that you must call a T-SQL function, GET_FILESTREAM_TRANSACTION_CONTEXT(), to obtain the transaction token. This token must be obtained in the context of the same Windows user as the one opening the file. Notice that the SQL statement that gets the transaction token must be part of a transaction; otherwise the transaction token will be NULL and OpenSqlFilestream will fail. You may not commit the transaction until the HANDLE is closed.
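Putting those two pieces together, the T-SQL half of a streaming read against the Employees2 table from Figure 1 might look like the sketch below; the streaming half happens between the SELECT and the COMMIT:

```sql
BEGIN TRANSACTION;

-- Both values must be fetched inside the transaction, by the same
-- Windows user that will later call OpenSqlFilestream
SELECT Photo.PathName() AS FilePath,
       GET_FILESTREAM_TRANSACTION_CONTEXT() AS TxContext
FROM dbo.Employees2
WHERE EmployeeID = 8;

-- ... open the handle with OpenSqlFilestream, stream the data,
--     and close the handle ...

COMMIT TRANSACTION;
```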
The file APIs supported with the special file handle are ReadFile, WriteFile, TransmitFile, SetFilePointer, SetEndOfFile, and FlushFileBuffers. Attempting to call any other file API returns ERROR_ACCESS_DENIED. Memory-mapped files are specifically not supported with the special HANDLE.
A complete example of using ODBC and C++ to insert a new row and filestream data using OpenSqlFilestream is provided in SQL Server 2008 Books Online.
With a SELECT statement, you'll make only a single round-trip to the database for your PathNames and transaction token; with an INSERT or UPDATE statement, you'll want to make only one database round-trip as well. With these statements, the T-SQL OUTPUT clause, introduced in SQL Server 2005, comes to your assistance. Here's an UPDATE statement that processes a single row and gets all of the information you need in one round-trip:
UPDATE dbo.Employees2
SET LastName = 'NewName'
OUTPUT inserted.Photo.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT()
WHERE EmployeeID = 8;
I haven't mentioned using the DELETE statement because deleting a row will also delete the file; you cannot use streaming I/O to perform a DELETE. However, there are some things you need to know when using INSERT or UPDATE to read and write the blob. In addition, I'll divide the blob update into two use cases: complete replacement (rewrite) and partial update.
Inserting a row that contains a blob using INSERT is one of the best reasons for using streaming I/O. SQL Server APIs do not support streaming input into a varbinary(max) column, although you can write the data in chunks by using the T-SQL STUFF function or the WRITE method. (Remember that the WRITE method is prohibited when using filestream storage.) The interesting twist with INSERT is that inserting a NULL value creates no file; PathName() would return NULL and you'd have no file into which to stream the data. The way to approach an INSERT is to insert an empty string rather than a NULL value, retrieve the path name, and stream the content into the empty file. The T-SQL statement would look something like this:
INSERT dbo.Employees2 (id, ... photo)
OUTPUT inserted.photo.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT()
VALUES (1, ... CAST('' AS varbinary(max)));
You can then open the file for writing and stream in the data. A complete replacement UPDATE would work like an INSERT. You'd update the non-filestream columns, return the path name and transaction token, and stream data in through the file handle. Unless you know that your filestream storage column already contains data, it's a good idea to set the data to the empty string in the T-SQL update statement.
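Following that advice, a complete-replacement UPDATE against the Employees2 table might look like this sketch, resetting the column to an empty string so a file is guaranteed to exist before streaming in the new content:

```sql
UPDATE dbo.Employees2
SET Title = 'Sales Manager',
    -- guarantee a non-NULL value so PathName() returns a file to stream into
    Photo = CAST('' AS varbinary(max))
OUTPUT inserted.Photo.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT()
WHERE EmployeeID = 8;
```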
A partial update of a filestream column is a little more complicated, because you may want to read the information in the file before doing an update. To accomplish this, you can use the DeviceIoControl function with the handle and the FSCTL_SQL_FILESTREAM_FETCH_OLD_CONTENT control code. This causes a server-side copy of the old contents of the file; after this call, ReadFile will return the old data until it reaches end of file. If you're reading and updating this way, it's a good idea to use overlapped I/O for high-performance apps, especially those that access filestream from the same machine as SQL Server. Bear in mind that if you need to do a lot of partial updates, using filestream storage is much slower than using SQL Server's varbinary(max) without filestream storage.
The SqlFileStream .NET Data Type
.NET programmers don't usually want to deal with Win32 file handles. Although it's possible to call OpenSqlFilestream from .NET programs using a technique called P/Invoke (platform invoke), it's more convenient to have the OpenSqlFilestream function and file handle access encapsulated in a .NET class. In .NET Framework 3.5 SP1, the System.Data.SqlTypes.SqlFileStream class fills the bill nicely. The usage is mostly the same; the SqlFileStream constructor requires a path name and a transaction token, as well as some options. Using SqlFileStream to INSERT a row is illustrated in the code download that accompanies this column.
The SqlFileStream object derives from System.IO.Stream, and you use it as you would an "ordinary" .NET Stream. Some important differences between the SqlFileStream API and the OpenSqlFilestream Win32 function are that SqlFileStream uses a default buffer size of 4K, while the handle returned by OpenSqlFilestream is unbuffered, and that opening a SqlFileStream in ReadWrite mode automatically performs the DeviceIoControl call as a convenience; with the native API, "partial update" patterns involve explicit DeviceIoControl calls. To use the SqlFileStream data type, you must have .NET Framework 3.5 SP1 installed on the client or web server.
FileStreams and Transactions
As mentioned previously, one of the attractions of using filestream storage instead of simply storing file names in SQL Server and files on the file system is that the data is transactionally consistent. If you insert a row in Transact SQL and insert a file using the streaming APIs, the entire operation either succeeds or fails. There is no possibility of database rows that point to nonexistent files, or of orphaned file system items that have no corresponding row in SQL Server. The OpenSqlFilestream API ensures this by requiring you to have a valid, open transaction at all times when using the streaming APIs. But SQL Server supports a variety of transaction isolation levels and two different transaction semantics: locking and versioning. How well do the streaming APIs work with all of the isolation levels, given that SQL Server's lock manager does not manage file system locks?
When using streaming, it's helpful to think of your atomic operation as consisting of two distinct SQL statements--one for the T-SQL portion and a second statement for the streaming portion. In order to obtain the transaction token, you need to perform the T-SQL portion first. Before executing the T-SQL command you must begin an explicit transaction by using the ADO.NET SqlConnection.BeginTransaction method or System.Transactions' TransactionScope; analogous methods for manual and automatic transaction management are surfaced in other APIs like ODBC. The transaction must be kept open until the file handle is closed. Issuing a SqlTransaction.Save call is disallowed while the file handle is open and will make the transaction uncommittable. A SqlTransaction.Rollback call will roll back the transaction, even if the file handle is still open. Triggers fire when the file handle is closed.
Using the streaming APIs and filestream storage is permitted under all four locking transaction isolation levels, although the semantics of streaming access are always similar to read committed. Using the read-committed isolation level (the default), if the file is opened for reading, other readers will be able to read the file; writers will be blocked. If the file is opened for writing, it remains locked until the end of the transaction. The read-uncommitted isolation level is allowed, with the caveat that if an update of the file is in progress, the read-uncommitted transaction will read the old version of the file rather than the new, proposed version that the usual read-uncommitted behavior would return. Repeatable-read and serializable behavior is the same when using file streaming as when rows are locked in the database.
Versioning isolation levels—that is, read-committed snapshot isolation and snapshot transactions—are not allowed in a database that uses filestreams. This is true for the entire database, not just the table that contains a filestream column. The ALTER DATABASE statement that enables read-committed snapshot isolation and that enables snapshot transactions will fail in the presence of filestream storage.
Feature Synergy with Filestreams
Filestream is useful for storing large data in a database without incurring the transaction log overhead and database fragmentation behavior, but on occasion it will be useful to hoist a portion of the data to a persisted computed column. This allows you to do queries on properties of the blob without reading the file at all. As an example, say that you were storing photos in JPEG format and would like to store portions of the color table, such as background color or photo height and width, separately. You can simply define a persisted computed column for that part of the varbinary(max) column; the data is stored in-row. An example, using the Employees2 table defined above, would look like this:
ALTER TABLE dbo.Employees2 ADD PhotoWidth AS dbo.ExtractWidth(Photo) PERSISTED;
A SQL query to obtain the photo width does not need to access the file at all:
SELECT EmployeeID, PhotoWidth FROM dbo.Employees2 WHERE PhotoWidth > 1200;
Be aware, however, that indexes on persisted computed columns of type varbinary are disallowed.
A great example of feature synergy is the combination of the SQL Server fulltext indexes and fulltext searching with filestream storage. Fulltext search filter components exist for document types, such as Microsoft Office documents, PDF files, and text files, and filestream storage allows these files to be stored in the file system rather than in the database. As a concrete example, you can store copies of error logs or Web log files in a filestream storage column and fulltext index the column. In SQL Server 2008, fulltext searching runs in-process, so it's not necessary to make out-of-process calls to the search service. A query like the one below works in-process and can be quite fast:
SELECT id, Abstract FROM error_logs WHERE CONTAINS(ErrorText, 'unhandled');
I began this article by suggesting that filestream storage provides not only integrated backup and restore but also transactional consistency between SQL data and file data, although filestream storage can use only local disk storage. If you're concerned only with transactional consistency and want the files to reside on a remote server or content-addressable storage, SQL Server 2008 includes a companion feature, Remote Blob Storage (RBS). This feature is available as a free download as part of the SQL Server 2008 Feature Pack. RBS consists of an API that permits transaction manipulation and a series of stored procedures for handling blob pointers in database tables, blob garbage collection, and transaction participation. Content-addressable storage vendors provide drivers for their specific hardware. A sample RBS driver for the NTFS file system is available as a download on CodePlex, and EMC has released a beta RBS provider for its Centera content-addressed storage system. The RBS API is not similar to the filestream API, but these APIs may be aligned in the future.
In conclusion, filestream storage gives you the ability to store large blobs as files in the file system and access them with either T-SQL statements or transactional streaming APIs. With this feature, you can gain the performance of streaming as well as full database integration for large blobs.
Send your questions and comments for Bob to firstname.lastname@example.org
Bob Beauchemin is a database-centric application practitioner and architect, course author and instructor, writer, and Developer Skills Partner at SQLskills. He's written books and articles on SQL Server, data access and integration technologies, and database security. You can reach him at email@example.com.