The performance of an application depends on the availability of adequate IT resources, such as CPU, memory, storage and so on.
Storage metrics of interest are:
■ Data capacity
■ Input/output capacity (I/O performance)
■ Durability, space, cooling, cost, ROI and other mainly commercial factors.
We are concerned in this blog with the second item, I/O capability, which is not as simple as saying "my system does X input/output operations per second (IOPs)". First, let us look at some background to input/output. The classical I/O time for a disk access is:
T(I/O) = TCPU + TCTL + TSEEK + TWAIT + TSEARCH + TACC + TXFR + TCOMP
TCPU = Time to parse and generate the I/O request in the processor
TCTL = Time for the controller to format and issue the request to the HDD, plus the time for the request to reach the HDD
TSEEK = Time to move to the correct track on the HDD (called a SEEK)
TWAIT = Time waiting for the required record to rotate under the head (rotational latency)
(In the case of disk subsystems with set sector capability, the channel disconnects from the particular I/O until the record position on the track is about to be reached, then reconnects to complete the I/O; in the meantime it can do other work. Prior to this feature, the channel remained tied up while the head rotated to the right position and was only released after the I/O was complete.)
TACC = Time to access the record (SEARCH) which will have an overhead depending on the format of the data (RDBMS, flat file, RAID x and so on)
TXFR = Transfer time of the accessed data to the processor via the controller/channel
TCOMP = Time to complete/post the end of the I/O.
This time, divided into 1 second, gives the I/Os per second (IOPs). Is physical I/O speed all that matters then?
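As a rough worked example (the component times below are purely illustrative assumptions, not measurements from any real subsystem), summing the terms of the equation above and dividing the result into 1 second gives the theoretical IOPs figure for a single disk arm:

```python
# Rough illustration of the I/O time equation above.
# All component times are hypothetical example values (milliseconds),
# not measurements from any real disk subsystem.
io_times_ms = {
    "TCPU":    0.1,   # parse and generate the request in the processor
    "TCTL":    0.2,   # controller formats/issues the request to the HDD
    "TSEEK":   4.0,   # move the arm to the correct track
    "TWAIT":   3.0,   # rotational latency waiting for the record
    "TSEARCH": 0.5,   # locate the record on the track
    "TACC":    0.5,   # access overhead (flat file, RDBMS, RAID format)
    "TXFR":    0.4,   # transfer the data via controller/channel
    "TCOMP":   0.1,   # complete/post the end of the I/O
}

total_ms = sum(io_times_ms.values())     # total service time per I/O
theoretical_iops = 1000.0 / total_ms     # 1 second divided by the I/O time

print(f"Total I/O time:         {total_ms:.1f} ms")
print(f"Theoretical IOPs (arm): {theoretical_iops:.0f}")
# With these example numbers: 8.8 ms per I/O, i.e. roughly 114 IOPs,
# and only if the arm is 100% busy -- see Myth 1 below.
```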
Records: A record to an application usually means a logical record, for example, the name and address of a client. This can be made up of more than one physical record, which is normally retrieved as a block of a certain size, for example, 2048 bytes. Sometimes, though, a physical record may contain more than one logical record (see the sketch after these definitions).
Disk Access: An I/O operation consists of several activities, and the list of these depends on how far you go back in the chain from data need to fulfillment. This is shown in the I/O time equation above.
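A small illustration of the Records point above: the 2048-byte block size comes from the text, while the 200-byte fixed-length logical record is an assumption for the example. The blocking factor determines how many logical records one physical I/O returns.

```python
# Hypothetical example of logical vs. physical records.
# The 2048-byte block size is mentioned in the text above; the 200-byte
# fixed-length logical record is an assumption for illustration.
block_size_bytes = 2048
logical_record_bytes = 200

records_per_block = block_size_bytes // logical_record_bytes   # blocking factor
print(f"Logical records per physical block: {records_per_block}")

# Reading 1,000 logical records sequentially therefore needs far fewer
# physical I/Os than 1,000:
logical_records_needed = 1000
physical_ios = -(-logical_records_needed // records_per_block)  # ceiling division
print(f"Physical I/Os for {logical_records_needed} records: {physical_ios}")
```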
Myth 1
This myth is propagated widely in internet articles and is totally erroneous, so beware. The misconception is as follows:
■ if an I/O operation (seek, search, read) takes X milliseconds, then that disk arm is capable of supporting 1000/X I/Os per second (IOPs). Yes it is, if you don't mind a response time of approximately infinity, give or take a few ms, as the arm would be running at 100% utilization.
A sensible approach would be to do this calculation and settle for, say, 40% of this IOPs rate as an average which might be sustained.
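One way to see why the raw 1000/X figure misleads is a simple single-server (M/M/1) queueing approximation; this is an assumption of the sketch, not something the article derives. Response time grows roughly as service time divided by (1 - utilization), so driving the arm towards 100% busy makes response times explode, which is why settling for something like 40% of the raw rate is sensible.

```python
# Minimal sketch of why '1000/X IOPs' is misleading on its own.
# Uses a simple single-server (M/M/1) queueing approximation -- an
# assumption of this sketch, not a result from the article:
#   response time ~= service time / (1 - utilization)
service_time_ms = 10.0                   # hypothetical X ms per I/O
rated_iops = 1000.0 / service_time_ms    # the 'myth' figure: 100 IOPs

for utilization in (0.2, 0.4, 0.6, 0.8, 0.95, 0.99):
    achieved_iops = rated_iops * utilization
    response_ms = service_time_ms / (1.0 - utilization)
    print(f"{utilization:4.0%} busy: {achieved_iops:5.1f} IOPs, "
          f"~{response_ms:6.1f} ms response time")
# At around 40% utilization the response time is still close to the raw
# service time; as the arm approaches 100% busy it grows without bound.
```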
Myth 2
If we make the allowance above, then a storage subsystem supporting X IOPs will perform better than one supporting 0.8X IOPs. In its raw form, this statement is not true, I'm afraid, since the I/Os needed to satisfy an application's request for data depend on other factors, many within the designer's control:
■ the positioning of the physical data and its fragmentation, the former no longer in the control of the programmer, the latter a fact of life, except for the ability to defragment when necessary
■ the type of application (email, query, OLTP etc.) and access mode (random, sequential, read or write intensive)
■ block sizes and other physical characteristics, such as rotational speed (up to 15,000 rpm)
■ the use of memory caching or disk caching, which can eliminate some physical I/Os altogether (illustrated in the sketch at the end of this section)
■ the design of the database layout, which is crucial and trees have been sacrificed writing about this topic
■ what RAID level, or other access method, is employed
■ the program's mode of accessing logical records (see below) might be sub-optimal (to be mild about it); does it chain reads/writes, save records or retrieve them again and so on
■ the key and indexing should be optimized to avoid long synonym chains when composing a single record; the shorter the key, the greater the chance of synonyms
■ other factors and storage subsystem parameters
The upshot of this is that very fast I/O performance can be negated by poor design and often is. If the items above are properly thought through then, and only then, will the system supporting X IOPs outperform the system supporting 0.8X IOPs. These design features assume that any metadata, such as logs, indexes, copies etc. are not written to the disks containing the application data.
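To make the caching point in the list above concrete (the hit ratios and service times are illustrative assumptions, not figures from the article), even a modest cache hit ratio removes a large share of the physical I/Os and so changes which subsystem "performs better":

```python
# Minimal sketch of the caching point from the list above.
# The hit ratios and service times are illustrative assumptions,
# not figures from the article.
disk_io_ms = 8.0      # hypothetical physical disk I/O time
cache_hit_ms = 0.2    # hypothetical time for a read satisfied from cache

for hit_ratio in (0.0, 0.5, 0.8, 0.95):
    # Only the misses reach the disk, so both the physical I/O load and
    # the average logical read time fall as the hit ratio rises.
    avg_read_ms = hit_ratio * cache_hit_ms + (1.0 - hit_ratio) * disk_io_ms
    physical_ios_per_100 = (1.0 - hit_ratio) * 100
    print(f"hit ratio {hit_ratio:4.0%}: avg read {avg_read_ms:4.2f} ms, "
          f"{physical_ios_per_100:3.0f} physical I/Os per 100 logical reads")
```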
Dr. Terry Critchley is the Author of “High Availability IT Services” ISBN 9781482255904 (CRC Press).