In a large, complex database system under heavy load, data access generates high levels of disk I/O. As an organization focused on database performance, it is critical for us to detect and contain situations where high I/O threatens business impact.

In simple terms, the problem arises when the number of requests against a particular set of data outpaces the expected load. When we have seen this happen on MS SQL Server, the responsible team must be engaged immediately to avoid any cascading impact.

Handling this is complex, and an organization like ours, with deep exposure to large databases, understands its critical nature. The first check is for overall utilization bottlenecks: when CPU and memory usage surge suddenly, disk I/O latency naturally rises as well. Once that is ruled out, the next step is to identify expensive queries that are consuming resources because of a poor execution plan choice or unoptimized indexing, both of which need review.
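A minimal sketch of the triage step above: rank captured query statistics by average logical reads to surface the heaviest I/O consumers. The rows here are hypothetical samples modeled loosely on the columns of SQL Server's `sys.dm_exec_query_stats` DMV; in practice they would be fetched from the server itself.

```python
def rank_by_io(stats, top=3):
    """Return the `top` queries with the highest average logical reads."""
    def avg_reads(row):
        # Guard against a zero execution count in the sample data.
        return row["total_logical_reads"] / max(row["execution_count"], 1)
    return sorted(stats, key=avg_reads, reverse=True)[:top]

# Hypothetical sampled rows, for illustration only.
sample_stats = [
    {"query": "SELECT ... FROM Orders ...",
     "total_logical_reads": 9_000_000, "execution_count": 300},
    {"query": "SELECT ... FROM Customers ...",
     "total_logical_reads": 50_000, "execution_count": 5_000},
    {"query": "UPDATE Inventory ...",
     "total_logical_reads": 4_000_000, "execution_count": 40},
]

for row in rank_by_io(sample_stats, top=2):
    print(row["query"], row["total_logical_reads"] // row["execution_count"])
```

The same ranking can of course be done server-side with an `ORDER BY` over the DMV; the point is simply that average reads per execution, not total reads alone, is what exposes a query with a bad plan.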

The team at Scalability isolates these queries and coordinates with the application team to either reduce the I/O contention immediately or tune the indexes afterwards to prevent a recurrence.
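For the index-tuning follow-up, one common heuristic is to score missing-index candidates by `avg_total_user_cost * avg_user_impact * (user_seeks + user_scans)`, using figures from SQL Server's `sys.dm_db_missing_index_*` DMVs. A sketch under that assumption, with hypothetical sample rows:

```python
def index_benefit(row):
    """Estimated benefit of creating the suggested index (heuristic score)."""
    return (row["avg_total_user_cost"]
            * row["avg_user_impact"]
            * (row["user_seeks"] + row["user_scans"]))

# Hypothetical candidates, for illustration only.
candidates = [
    {"table": "Orders", "avg_total_user_cost": 12.5,
     "avg_user_impact": 90.0, "user_seeks": 800, "user_scans": 0},
    {"table": "Inventory", "avg_total_user_cost": 3.0,
     "avg_user_impact": 40.0, "user_seeks": 150, "user_scans": 10},
]

# Tune the highest-scoring candidates first.
for row in sorted(candidates, key=index_benefit, reverse=True):
    print(row["table"], round(index_benefit(row)))
```

The score is only a prioritization aid: the suggested indexes still need review against write overhead and existing indexes before being created.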

In conclusion, an adequately trained team is the heart of a complex database support system, and that is precisely the assurance Scalability provides.