The performance of an application depends on the availability of adequate IT resources, such as CPU, memory, storage and so on.
Storage metrics of interest are:
■ Data capacity
■ Input/output capacity (I/O performance)
■ Durability, space, cooling, cost, ROI and other mainly commercial factors.
We are concerned in this blog with the second item, I/O capability, which is not as simple as saying "my system does X input/output operations per second (IOPs)". First, let us look at some background to input/output. The classical I/O time for a disk access is:
T(I/O) = TCPU + TCTL + TSEEK + TWAIT + TACC + TXFR + TCOMP
TCPU = Time to parse and generate the I/O request in the processor
TCTL = Time for the controller to format and issue the request to the HDD, plus the time for the request to reach the HDD
TSEEK = Time to move to the correct track on the HDD (called a SEEK)
TWAIT = Time waiting to reach the required record
(In case of disk subsystems with set sector capability, the channel disconnects from the particular I/O until the record position is about to be reached on the track, then reconnects to complete the I/O. In the meantime it can do something else with its time. Prior to this feature, the channel would wait until the head reached the right position and then release it after the I/O was complete.)
TACC = Time to access the record (SEARCH) which will have an overhead depending on the format of the data (RDBMS, flat file, RAID x and so on)
TXFR = Transfer time of the accessed data to the processor via the controller/channel
TCOMP = Time to complete/post the end of the I/O.
Dividing one second by this time gives the I/Os per second (IOPs) that the disk can deliver. Is physical I/O speed all that matters then?
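As a rough illustration, here is a minimal Python sketch that adds up hypothetical component times (illustrative figures only, not measurements from any particular device) and converts the total into a theoretical IOPs figure:

# Hypothetical per-I/O component times in milliseconds (illustrative only)
components_ms = {
    "t_cpu":  0.05,  # parse and generate the I/O request in the processor
    "t_ctl":  0.10,  # controller formats and issues the request to the HDD
    "t_seek": 4.00,  # move the arm to the correct track
    "t_wait": 2.00,  # rotational delay waiting to reach the record
    "t_acc":  0.50,  # access (search) overhead for the record format
    "t_xfr":  0.30,  # transfer the data back via the controller/channel
    "t_comp": 0.05,  # complete/post the end of the I/O
}

t_io_ms = sum(components_ms.values())   # total time for one physical I/O
theoretical_iops = 1000.0 / t_io_ms     # I/Os per second if the arm is 100% busy

print(f"I/O time: {t_io_ms:.2f} ms, theoretical IOPs: {theoretical_iops:.0f}")

This raw figure is exactly the number that Myth 1 below warns against taking at face value.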
Records: A record to an application usually means a logical record, for example, the name and address of a client. This can be made up of more than one physical record, which is normally retrieved as a block of a certain size, for example, 2048 bytes. Sometimes, though, a physical record may contain more than one logical record.
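To make the logical/physical distinction concrete, here is a small sketch (the record and block sizes are invented for illustration) showing how the blocking factor, the number of logical records packed into one physical block, changes the number of physical I/Os needed to read a file sequentially:

# Invented sizes for illustration
logical_record_bytes = 300      # e.g. a client name-and-address record
block_bytes = 2048              # physical block retrieved in one I/O

blocking_factor = block_bytes // logical_record_bytes   # logical records per block
records_to_read = 100_000

physical_ios = -(-records_to_read // blocking_factor)   # ceiling division
print(f"{blocking_factor} records per block -> {physical_ios} physical I/Os "
      f"instead of {records_to_read}")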
Disk Access: An I/O operation consists of several activities, and the list of these depends on how far you go back in the chain from data need to fulfillment. This is shown in the I/O time equation above.
Myth 1
This myth is propagated widely in internet articles and is totally erroneous, so beware. The misconception is as follows:
■ if an I/O operation (seek, search, read) takes X milliseconds, then that disk arm is capable of supporting 1000/X I/Os per second (IOPs). Yes it is, if you don't mind a response time of approximately infinity, give or take a few ms, since the arm would be running at 100% utilization.
A sensible approach would be to do this calculation and settle for, say, 40% of this IOPs rate as an average which might be sustained.
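To see why, a simple single-server queueing approximation (an assumption used here purely for illustration; the article itself quotes no formula) shows how response time stretches as the arm's utilization climbs towards 100%:

# M/M/1-style approximation (assumption for illustration):
# response time R = S / (1 - U), where S is the service time of one I/O
# and U is the utilization of the disk arm
service_time_ms = 7.0   # hypothetical seek + search + read time for one I/O

for utilization in (0.2, 0.4, 0.6, 0.8, 0.9, 0.99):
    sustained_iops = utilization * 1000.0 / service_time_ms
    response_ms = service_time_ms / (1.0 - utilization)
    print(f"U={utilization:4.0%}  IOPs={sustained_iops:6.1f}  response={response_ms:7.1f} ms")

At 40% utilization the response time is less than double the raw service time; at 99% it is roughly a hundred times larger, which is why quoting 1000/X as a usable IOPs rate is misleading.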
Myth 2
If we make the allowance above, then a storage subsystem supporting X IOPs will perform better than one supporting 0.8X IOPs. In its raw form, this statement is not true, I'm afraid, since the number of I/Os needed to satisfy an application's request for data depends on other factors, many within the designer's control:
■ the positioning of the physical data and its fragmentation, the former no longer in the control of the programmer, the latter a fact of life, except for the ability to defragment when necessary
■ the type of application (email, query, OLTP etc.) and access mode (random, sequential, read or write intensive)
■ block sizes and other physical characteristics, such as rotational speed (up to 15,000 rpm)
■ the use of memory caching or disk caching, which can eliminate some I/Os
■ the design of the database layout, which is crucial and trees have been sacrificed writing about this topic
■ what RAID level, or other access method, is employed
■ the program's mode of accessing logical records (see below) might be sub-optimal (to be mild about it); does it chain reads/writes, save records or retrieve them again and so on
■ the key and indexing should be optimized to avoid long synonym chains to compose a single record - the shorter the key, the greater the chance of synonyms
■ other factors and storage subsystem parameters
The upshot of this is that very fast I/O performance can be negated by poor design, and often is; the sketch after this paragraph illustrates how just two of these factors, caching and RAID level, change the physical I/O load behind the same application request rate. If the items above are properly thought through then, and only then, will the system supporting X IOPs outperform the system supporting 0.8X IOPs. These design features assume that any metadata, such as logs, indexes, copies and so on, is not written to the disks containing the application data.
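As a back-of-the-envelope example (the cache hit ratio, the read/write split and the commonly quoted RAID-5 write penalty of 4 are assumptions, not measurements), the following sketch shows how two of the factors above translate a fixed application request rate into a very different physical I/O load:

# Assumed workload and design parameters (illustrative only)
app_io_per_sec = 1000        # logical I/O requests issued by the application
read_fraction = 0.7          # 70% reads, 30% writes
cache_hit_ratio = 0.6        # reads satisfied from cache need no disk I/O
raid5_write_penalty = 4      # read old data + parity, write new data + parity

reads = app_io_per_sec * read_fraction
writes = app_io_per_sec * (1 - read_fraction)

physical_reads = reads * (1 - cache_hit_ratio)
physical_writes = writes * raid5_write_penalty

backend_iops = physical_reads + physical_writes
print(f"Back-end physical IOPs: {backend_iops:.0f} "
      f"for {app_io_per_sec} application I/Os per second")

With these numbers the 1000 application I/Os become roughly 1480 physical I/Os; change the cache hit ratio or the RAID level and the figure moves sharply, which is the point of Myth 2.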
Dr. Terry Critchley is the Author of “High Availability IT Services” ISBN 9781482255904 (CRC Press).