On infinite scrolling and pagination
I’ve been thinking a lot about infinite scrolling: how fashionable it’s become lately, and why I think it’s being fundamentally misused in many cases.
Infinite scrolling is a great way to reduce friction and make the user consume more content. No need to find and click that “next” button. Just keep on mindlessly scrolling until you find something that looks interesting enough to click on.
Infinite scrolling is perfect for collections of content that are pretty much impermanent: they get updated or change too often for pagination to be viable or useful, and archiving doesn’t make much sense because the content’s freshness is a major asset. Things such as the Tumblr dashboard, Svpply’s shop page and the like. The user cares about the latest stuff, not what was posted a month ago… (For those interested, here’s a great post by Alex MacCaw about dynamic pagination on sites with constantly changing content, sites like Hacker News or Medium.)
Pagination, on the other hand, is great for going through large collections of items with archival value in an orderly manner. You may not have time to go through it all in one sitting, so you want to be able to save your position and take another look in a few days. In the meantime there might be new content that makes that saved position a bit inexact, but it provides a good enough reference point to resume navigation.
The problem begins when it’s used just because “it’s the cool thing to do”.
More and more sites are adopting it as the way to go through their contents. News sites. Blogs. Portfolios… And in many cases it’s fundamentally flawed for these two main reasons:
It doesn’t give you a reference point to resume navigation at a later date:
If I want to keep on reading, I need to keep the page open in my browser or else I lose my position on the site and I have to start all over again, having to reload everything and then try to find where I was before.
If I click on something that interests me I have to make sure I open it in a new tab if I want to resume browsing later.
Too resource intensive:
Yeah, the actual loading of new content is a breeze; I’m not saying using JavaScript to fetch more stuff is bad… but when you go through what would normally be 200+ pages of content just by scrolling, you end up with an insanely tall single-page site, and things start to get heavy. All those pictures and elements make the browser laggy.
So, what’s a good solution that has the best of both worlds? Using hashbangs. I know hashbang URLs have a bad reputation, and I agree with most of the criticism, but this is one of those times when they can be very valuable if implemented correctly:
Update the URL with every content request:
Every time the script fetches new content, update the URL by adding a “page” counter (i.e. site.com/#57). This index must match the actual page number the new content would have if the site were using standard pagination.
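A minimal sketch of this step, assuming the fetch callback knows the page number it just loaded (the function name and `basePath` parameter are illustrative, not any real API):

```javascript
// Build the hash-counter URL for a freshly fetched page.
// `basePath` and `page` are hypothetical names used for illustration.
function hashForPage(basePath, page) {
  return basePath + '#' + page;
}

// In the browser, the fetch callback would simply do:
//   window.location.hash = String(page);
console.log(hashForPage('site.com/', 57)); // "site.com/#57"
```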
Transform the hashbang to an actual page position.
If the user enters a URL containing a hashbang into the browser, redirect him to that archive page (i.e. site.com/#57 automatically redirects to site.com/page/57).
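A sketch of that redirect, assuming the /page/N path scheme from the example (the helper is hypothetical):

```javascript
// Map a hash-counter URL to its equivalent archive page.
// The "/page/N" path scheme is taken from the example above.
function archiveUrlFor(url) {
  var match = url.match(/^(.*?)\/?#(\d+)$/);
  if (!match) return url; // no page counter in the hash: leave the URL alone
  return match[1] + '/page/' + match[2];
}

// On page load, the site would check for a hash and redirect, e.g.:
//   if (location.hash) location.replace(archiveUrlFor(location.href));
console.log(archiveUrlFor('site.com/#57')); // "site.com/page/57"
```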
Let the user keep on scrolling:
When he loads said page 57, keep the site working as if he had started from the beginning: he keeps scrolling, new page numbers get added to the URL, and so on (i.e. he ends up at site.com/page/57#98, which would redirect to site.com/page/98).
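The composed case works the same way, except the old /page/N segment gets dropped so only the newest counter survives. Again a hypothetical sketch, assuming the /page/N scheme:

```javascript
// Resolve a "resumed and kept scrolling" URL: discard the page the user
// started from and redirect to the page the hash counter now points at.
function resolveScrollUrl(url) {
  var match = url.match(/^(.*?)(?:\/page\/\d+)?\/?#(\d+)$/);
  if (!match) return url; // no page counter: nothing to resolve
  return match[1] + '/page/' + match[2];
}

console.log(resolveScrollUrl('site.com/page/57#98')); // "site.com/page/98"
console.log(resolveScrollUrl('site.com/#57'));        // "site.com/page/57"
```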
This lets the user effortlessly navigate the site, resume browsing at a later date, and “clean” the page state to get rid of all the content he’s already gone through, freeing browser resources. (Old content could also be discarded using JS to keep it from wasting resources, but that’s not the point of this post.)
As said: Best of both worlds.
(I wrote this originally on Tumblr on Aug 3rd, 2012.)