How to Dynamically Scale an Azure SQL Database

By: Data Team | August 24, 2017

We have worked on Azure solutions on various occasions, including web applications, data warehousing scenarios, and IoT. Quite often these solutions include a database to store information, and in many cases that database is an Azure SQL Database.

As of this writing, the cost of running an Azure SQL Database instance is based on the number of Database Throughput Units (DTUs) allocated to the database. When determining how many units to allocate, a major factor is identifying the processing power needed to handle the expected volume of requests.

As a baseline, you would likely plan your architecture around the highest expected utilization, giving your users the best performance possible. But more than likely, your solution will not always be utilized at that level. At times, there may be little to no traffic, leaving you to pay for resources that aren't being used. It is challenging to find a single level of resources that is both cost effective and fulfills all of your requirements.

For Azure SQL Databases, there is a solution to this utilization problem. Microsoft Azure is intended to allow organizations to scale up or scale down their architecture depending on what their needs are. Scaling up will cost more, but can give you more resources when you need them the most. Scaling down at non-peak times can reduce your cost by reducing the available resources when you don’t need as much.

You can scale between the service tiers of an edition, and you can also scale into other editions of Azure SQL Database. This can be done manually, which may be of benefit when you expect to make significant adjustments and leave it to run for an extended period of time. This process can also be automated, giving you the opportunity to routinely scale your solution to the needs at any given time.
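For example, a manual scale can be performed with a single T-SQL statement run against the master database of your logical server. The database name, edition, and service objective below are placeholders; substitute the values appropriate for your environment:

```sql
-- Run against the master database of the logical server.
-- [MyDatabase], the edition, and the service objective are placeholders.
ALTER DATABASE [MyDatabase]
MODIFY (EDITION = 'Standard',
        SERVICE_OBJECTIVE = 'S3',
        MAXSIZE = 250 GB);
```

The statement returns almost immediately, but the scaling operation itself continues in the background, as discussed below.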

There are multiple approaches to programmatically scale your Azure SQL Database, including T-SQL statements and PowerShell. To automate this process, you will need to use PowerShell for at least part of your solution. As of this writing, Azure SQL Database does not have a built-in ‘SQL Agent’ like there is on-premises, so a pure T-SQL solution is not possible yet. Instead, you can utilize Azure Automation, creating Runbooks to run your scaling code at specific dates and times.
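As a sketch of the PowerShell side, here is what a minimal Runbook body might look like using the AzureRM cmdlets current as of this writing. The connection name, resource group, server, database, and target tier are all placeholders, and the authentication step assumes your Automation account has a standard Run As connection configured:

```powershell
# Authenticate inside the Runbook via the Automation Run As connection.
$conn = Get-AutomationConnection -Name "AzureRunAsConnection"
Add-AzureRmAccount -ServicePrincipal `
    -TenantId $conn.TenantId `
    -ApplicationId $conn.ApplicationId `
    -CertificateThumbprint $conn.CertificateThumbprint

# Scale the database; names and the target tier are placeholders.
Set-AzureRmSqlDatabase -ResourceGroupName "MyResourceGroup" `
    -ServerName "myserver" `
    -DatabaseName "MyDatabase" `
    -Edition "Standard" `
    -RequestedServiceObjectiveName "S3"
```

You would then attach a schedule to the Runbook, pairing a scale-up run before your peak window with a scale-down run after it.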

When considering if dynamically scaling your database will benefit you, here are some things to think about:

Not All DTUs Are Created Equal

Like most Azure services, Azure SQL Database comes in multiple editions: Basic, Standard, Premium, and Premium RS. As you move up through the editions, you gain DTUs for your database. You also gain access to more efficient DTUs, especially when moving from Standard to Premium. Premium DTUs are built on a more recent version of the DTU structure, and they perform more efficiently than Standard DTUs. So beyond simply increasing the number of DTUs your environment has, note that you may get additional benefit from the more efficient DTUs available at higher editions.

Scaling Your Database Up / Down Isn’t as Fast as it Sounds

Running the statement to upgrade/downgrade your database takes a matter of seconds. The actual process of scaling your database takes much longer. Regardless of whether you are scaling up or down, it will take a few minutes at the very least, with the time increasing relative to the size of your database.
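If you want to watch the operation's progress, one option is to query the sys.dm_operation_status view in the master database, which reports management operations such as service tier changes along with their state and percent complete:

```sql
-- Run against the master database; lists recent management operations,
-- including service tier changes, with their state and progress.
SELECT operation, state_desc, percent_complete, start_time
FROM sys.dm_operation_status
ORDER BY start_time DESC;
```

This can be useful in an automated solution to confirm a scaling operation finished before kicking off dependent work.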

There Will Be a Small Interruption of Database Services When the Scaling is Complete

Near the end of the scaling process, there is a span of a few seconds where database connectivity is lost, and active transactions will stop, reporting a failure. Be careful to only perform the scaling when you aren’t running important processes on your database.

Consider the Size of Your Database if You Attempt to Scale Your Database Into Another Edition

Each Azure SQL Database edition has a limit on the maximum allowed size of a database, and you can set the maximum size independently on each database. While your focus in scaling will likely be on the number of DTUs, you can also scale the maximum size of your database. Even if you do not wish to change your database's size limit, I still recommend specifying in your code what you want the size to be. Functionally it changes nothing, but it prevents your code from breaking. If you don't specify this value, the process will set the database to the maximum size available for the edition. This isn't a problem when scaling up an edition, since the allowed size naturally increases. The problem is when you scale down an edition: if the maximum size of your database is larger than what the target edition allows, and you do not specify an acceptable size value for that edition, your code will fail.
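As a concrete illustration (the database name and target objective are placeholders, and the size limits reflect the tiers as of this writing): a Premium database whose maximum size exceeds the Standard ceiling cannot be scaled down unless the statement also lowers MAXSIZE to a value Standard accepts:

```sql
-- Scaling down from Premium to Standard; explicitly setting MAXSIZE
-- to a value the target edition allows keeps the operation from
-- failing when the database's current limit exceeds that ceiling.
ALTER DATABASE [MyDatabase]
MODIFY (EDITION = 'Standard',
        SERVICE_OBJECTIVE = 'S2',
        MAXSIZE = 250 GB);
```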

By scaling up/down your Azure SQL DB and other Azure resources, you can save money while still getting the performance you need for your solution to succeed. Incorporating a dynamic scaling plan into your architecture will give you the best of both worlds.

Core's Data Team is deeply experienced in the technologies and methodologies needed to transform data into actionable insights.
