An effective breakpoint strategy helps deliver sharp, properly sized images, which are among the most compelling pieces of content on a web page. Without such a strategy, pages can end up with jagged images, or images that take too long to render because they carry excess bytes, reducing the effectiveness of the page and driving down the quality of the user experience.
Creating derivative images at properly determined breakpoints is critical, but often challenging for web developers and designers. Generating the right number of variants, at the right widths and intervals, for their users' most important devices strikes a good balance between byte savings and cache dilution.
Having too few breakpoints can improve cache offload, but it also means delivering many unused bytes and placing an unnecessarily high rendering burden on the device and browser. Having too many breakpoints has the opposite effect: less waste per image, but a diluted cache. Finding the right balance demands some time and resource commitment, but the benefits are truly worth the exercise.
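One common way to strike that balance is to cap the number of wasted bytes allowed between adjacent variants. The sketch below is a minimal, hypothetical illustration of that idea, not a reference implementation: the breakpoints function and the waste budget are our own, and it assumes a decoded cost of roughly 4 bytes per pixel (RGBA) with height proportional to width.

```python
import math

def breakpoints(min_w, max_w, max_waste_bytes, bytes_per_pixel=4, aspect=1.0):
    """Pick image-width breakpoints so that serving the next-larger variant
    never wastes more than max_waste_bytes of decode memory.

    Assumes decoded memory = width * height * bytes_per_pixel,
    with height = aspect * width.
    """
    widths = [min_w]
    while widths[-1] < max_w:
        w = widths[-1]
        # Largest next width whose decoded footprint exceeds the current
        # one by at most the waste budget.
        next_w = math.sqrt(w * w + max_waste_bytes / (bytes_per_pixel * aspect))
        widths.append(min(round(next_w), max_w))
    return widths

# With a 70 KB waste budget, breakpoints start at gaps of 50 px and tighten:
print(breakpoints(150, 1080, max_waste_bytes=70_000))
```

Note how the gaps between breakpoints shrink as widths grow; that is exactly the behavior the examples below motivate.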
In this two-part blog series, we will explore just how significant image breakpoints are to businesses, along with some important device-related factors to consider in image breakpoint decisions, from screen resolution market share and user base to device characteristics and pixel density, to deliver an optimally sized web image every time.
The Byte Loss Breakdown
Getting image breakpoints right is especially important at larger image widths. To illustrate, consider a web page displaying two images that the browser must scale down by 50 pixels in both height and width:
Delivering a 200 X 200 pixel (px) image for a 150 X 150 px use case (50 extra pixels in width and height) requires 70,000 more bytes of memory for the browser to display the image than if the image were delivered at 150 X 150 px.
For the larger image, delivering a 600 X 600 px image for a 550 X 550 px use case (still 50 extra pixels in height and width) wastes 230,000 bytes, or 3.3 times more. This illustrates why it's recommended to have more breakpoints at larger image sizes (in this case, images larger than 700 px in width).
Another example to consider involves displaying images on a typical smartphone, in this case the Samsung Galaxy S5, which has a screen resolution of 1080 X 1920 px. If a server sends a 1200 X 2000 px image that must be resized to 1080 X 1920 px, the result is 1,305,600 wasted bytes (roughly 1.3 MB), not to mention a significant degradation in user experience.
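All three figures above follow from a single formula: wasted decode memory is the difference in pixel counts times the per-pixel cost. Here is a quick sketch, again assuming roughly 4 bytes per decoded pixel; the helper name is ours, chosen for illustration.

```python
def wasted_bytes(delivered_w, delivered_h, displayed_w, displayed_h, bpp=4):
    """Extra decode memory used when an oversized image is scaled down for display."""
    return (delivered_w * delivered_h - displayed_w * displayed_h) * bpp

print(wasted_bytes(200, 200, 150, 150))       # 70,000 bytes
print(wasted_bytes(600, 600, 550, 550))       # 230,000 bytes
print(wasted_bytes(1200, 2000, 1080, 1920))   # 1,305,600 bytes (~1.3 MB)
```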
Read Image Optimization Best Practices for Device Breakpoints - Part 2, covering 4 tips for getting image breakpoints right.