Conor vs. SQL Azure Resource Usage
For those of you watching SQL Azure, you have probably noticed that it releases more frequently than regular SQL Server. There was a release in late November/early December (the timing is complicated because the installation of new software is done largely transparently to customers and over the course of several days – there are many different parts to the SQL Azure service beyond what you would call the normal “database engine” part, and we don’t install new software on all of them at exactly the same time).
So, today I wanted to introduce you to some new technology that you can go try. Since the SQL Azure hardware is not yours, you can’t just go look at the performance monitor at the OS level, so it is important to have mechanisms that help you understand how many resources your application has been using. This will help you tune your usage, or determine when you might run out of capacity if the application becomes busier. There are a lot of pieces needed to make a complete picture, and this is really only part of that broader puzzle, but here is something you can try today to see your CPU usage over time in an application.
We’ve added a new system table in the logical master database in SQL Azure (every logical server has a free “logical” master database, since you can’t see the “physical” layer). Within the logical master, SQL Azure will write out usage data for consumed CPU and consumed storage (primary database, not tempdb) for each database on your logical server.
(remember to connect to the database named “master” on your logical server in SQL Azure – not your own database, and not one named “logical master”)
select * from sys.resource_stats
| start_time | end_time | database_name | sku | usage_in_seconds | storage_in_megabytes |
| 2012-01-13 21:15:00.00 | 2012-01-13 21:20:00.00 | test1 | Web | 0 | 0.77 |
| 2012-01-13 21:45:00.00 | 2012-01-13 21:50:00.00 | test1 | Web | 10 | 0.78 |
| 2012-01-13 22:45:00.00 | 2012-01-13 22:50:00.00 | test2 | Web | 0 | 0.85 |
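To get a feel for the data, you can aggregate it with ordinary T-SQL. Here is a sketch (assuming the column names shown in the sample above – database_name, start_time, usage_in_seconds) that sums the CPU seconds consumed by each database per day:

```sql
-- Total CPU seconds consumed per database per day
-- (column names assumed from the sample output above)
SELECT database_name,
       CONVERT(date, start_time) AS usage_date,
       SUM(usage_in_seconds)     AS total_cpu_seconds
FROM sys.resource_stats
GROUP BY database_name, CONVERT(date, start_time)
ORDER BY database_name, usage_date;
```

Run this while connected to the master database on your logical server, just like the SELECT * query above.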
This isn’t a DMV, but you can think of it as one. In SQL Azure there are many machines, and DMVs (which traditionally let you read the state of some internal memory structure in the database server) don’t make sense when there’s not just one memory to examine. Internally, SQL Azure needs to communicate between nodes, and sometimes this is done in a persisted manner to enable various features, troubleshooting, etc. In this case, you can read this system table to see how many resources each database has consumed over each 5-minute period. I’ve included an example from one of my test subscriptions that I use for my own internal work – I’ve changed the names and a few of the numbers to make the post interesting and not disclose possible future features.
You can see a few things:
1. Each row has a start and end time covering a 5 minute window
2. There are gaps – not all rows abut (I was not using these databases consistently across 5-minute windows, so gaps are just fine)
3. It records CPU usage as usage_in_seconds. You can read this as “the number of seconds where the CPU was allocated to user operations on this database”. If you run a query that takes 30ms of CPU, that will add 0.030 to the usage counter.
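Since each row covers a 5-minute (300-second) window, you can turn usage_in_seconds into a rough average utilization figure. This is just a sketch – it assumes each row really spans a full 300 seconds and that the time is charged against a single CPU:

```sql
-- Approximate average CPU utilization per 5-minute window,
-- assuming each row covers a full 300-second interval
SELECT database_name,
       start_time,
       usage_in_seconds,
       usage_in_seconds / 300.0 * 100 AS approx_cpu_percent
FROM sys.resource_stats
ORDER BY database_name, start_time;
```

A window showing 10 CPU seconds, like the second sample row above, would work out to roughly 3.3% average utilization over those 5 minutes.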
I’ll add a few tidbits that might interest you. First, SQL Azure runs a number of services in the background. For example, it has logic to do backups for you automatically. These should not show up in the CPU summation (though a database may still show up as a row with 0 usage when nothing else is happening on it). Second, the storage numbers can vary as well – either because you are adding/removing data or because of system operations. If SQL Azure had run an upgrade step, it might introduce a new system table that shows up here as a minor bump in the size.
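You can watch those storage trends the same way as CPU. Another sketch, again assuming the storage column is named storage_in_megabytes as in the sample above:

```sql
-- Peak storage per database per day, to spot growth over time
SELECT database_name,
       CONVERT(date, start_time) AS usage_date,
       MAX(storage_in_megabytes) AS peak_storage_mb
FROM sys.resource_stats
GROUP BY database_name, CONVERT(date, start_time)
ORDER BY database_name, usage_date;
```

A steady upward trend in peak_storage_mb is your cue to look at what is growing before you hit your edition’s size cap.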
If you are playing with SQL Azure, please go give this a try and see if it represents your usage – you might find it interesting.
Thanks for reading!