OFFSET is just a row counter: it has no concept of what the data is, it simply skips a number of rows.
If data changes (new rows added or old rows deleted) while the user is paging, the dataset shifts underneath them.
Result: the user can see missing data (items get skipped) or repeated data (the same items appear on two different pages).
Example
You are on Page 1 (items 1-20). You click "Next."
Before your request for Page 2 (OFFSET 20) arrives, 5 new items are added to the beginning of the list.
Your OFFSET 20 query now skips the 5 new items plus the first 15 items you saw on Page 1. The last 5 items from Page 1 (items 16-20) will now appear at the top of your Page 2. You see repeated data.
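Here is a minimal sketch of that drift, using an in-memory SQLite table (the table, columns, and data are made up for illustration); running it prints the five rows that repeat across the two pages:

```python
# Simulate OFFSET pagination drifting when rows are inserted mid-session.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [(f"item-{i}",) for i in range(1, 41)])

def page(offset, limit=20):
    # Newest first, so new rows land at the top of the list
    rows = conn.execute(
        "SELECT name FROM items ORDER BY id DESC LIMIT ? OFFSET ?",
        (limit, offset)).fetchall()
    return [name for (name,) in rows]

page1 = page(offset=0)               # item-40 .. item-21

# 5 new items arrive before the user asks for page 2
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [(f"new-{i}",) for i in range(1, 6)])

page2 = page(offset=20)              # item-25 .. item-6
print(sorted(set(page1) & set(page2)))  # the 5 rows repeated from page 1
```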
Poor Database Performance
OFFSET is very slow on large tables, especially with high page numbers (e.g., OFFSET 1000000).
To execute OFFSET 1,000,000, the database still has to fetch and count all 1,000,000 preceding rows, just to throw them away.
This performance degrades linearly as the page number increases, making deep pagination unusable.
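A rough sketch to observe that linear cost yourself, using SQLite (timings are illustrative and vary by engine, but the trend is the same):

```python
# Populate a 1,000,000-row table, then time progressively deeper OFFSETs.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.execute("""
    WITH RECURSIVE seq(n) AS (
        SELECT 1 UNION ALL SELECT n + 1 FROM seq WHERE n < 1000000
    )
    INSERT INTO t (id) SELECT n FROM seq
""")

for offset in (0, 10_000, 100_000, 900_000):
    start = time.perf_counter()
    conn.execute("SELECT id FROM t ORDER BY id LIMIT 20 OFFSET ?",
                 (offset,)).fetchall()
    elapsed_ms = (time.perf_counter() - start) * 1000
    # Each deeper page walks (and discards) every preceding row
    print(f"OFFSET {offset:>7}: {elapsed_ms:6.1f} ms")
```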
Cursor-Based Pagination
Cursor-based pagination (also known as keyset pagination) avoids the problems of OFFSET by using a "cursor": a stable, unique pointer to the last item the user saw.
Using a cursor (pointer) to the "last record seen"
Identifying a specific row and going from there
So we're immune to data being removed or added while the user pages
How it works
Initial Request
The client makes a normal request for the first page.
GET /comments?limit=20
SQL: SELECT * FROM comments ORDER BY id LIMIT 20;
Server Response
The server returns the first 20 items and, most importantly, a "cursor" pointing to the last item in that list (e.g., the 20th item has id=12345).
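The exact response shape is up to the API; a hypothetical body might look like this (field names are illustrative):

```json
{
  "data": [
    { "id": 12301, "text": "first comment on this page" },
    { "id": 12345, "text": "twentieth comment on this page" }
  ],
  "next": "/comments?limit=20&after=12345"
}
```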
Next Request: The client uses this cursor to ask for the next page.
GET /comments?limit=20&after=12345
Backend Logic & SQL: The backend uses this after value in a WHERE clause.
SQL: SELECT * FROM comments WHERE id > 12345 ORDER BY id LIMIT 20;
This query is extremely fast because the database can use an index to instantly "seek" to id = 12345 and start reading from there. It doesn't need to read and discard 1,000,000 rows like OFFSET would.
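Here is a minimal sketch of that backend logic, assuming a Python sqlite3 connection and a comments(id, text) table (names are illustrative, not a real API):

```python
# Keyset pagination over comments(id, text): seek past `after`, never skip-count.
def get_comments(conn, limit=20, after=None):
    if after is None:
        rows = conn.execute(
            "SELECT id, text FROM comments ORDER BY id LIMIT ?",
            (limit,)).fetchall()
    else:
        # The primary-key index jumps straight to id > after; no rows
        # are read and discarded the way OFFSET would do.
        rows = conn.execute(
            "SELECT id, text FROM comments WHERE id > ? ORDER BY id LIMIT ?",
            (after, limit)).fetchall()
    # A full page means there may be more; expose the last id as the cursor.
    next_cursor = rows[-1][0] if len(rows) == limit else None
    return {
        "data": [{"id": i, "text": t} for i, t in rows],
        "next": (f"/comments?limit={limit}&after={next_cursor}"
                 if next_cursor is not None else None),
    }
```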
Implementation: Cursors vs. tokens
How you pass the cursor is an implementation detail
Simple (but bad) Approach: Exposing IDs
This is the simple example from above.
Request: GET /comments?after=12345
Response: next: "/comments?after=12378"
Downsides
Bad Practice: Exposes internal database IDs to the client.
Confusing: Hard to manage direction (e.g., after vs. before).
Limited: Only works if you're sorting by one column (id). What if you sort by createdAt and then id?
Best Practice: Continuation Tokens
- The professional way to implement it: a "continuation token" is an opaque (unreadable) string that encodes all the information needed to fetch the next page.
- Token: continuation=abc123xyz
This abc123xyz string is actually a Base64-encoded version of the last item's data, like {"createdAt": "2025-11-17T13:30:00", "id": 12378}.
- Request: GET /comments?limit=20&continuation=abc123xyz
- Backend Logic (see the sketch after this list)
Backend decodes abc123xyz back into {"createdAt": "2025-11-17T13:30:00", "id": 12378}.
It uses this info to build a stable, multi-column query.
SELECT * FROM comments WHERE (createdAt, id) > ('2025-11-17T13:30:00', 12378) ORDER BY createdAt, id LIMIT 20;
- Response: The next_continuation_token attribute is critical. The client doesn't need to understand the token; it just saves it from the response and sends it back for the next page.
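A sketch of the whole round trip, under the same illustrative schema plus a createdAt column. The row-value comparison (createdAt, id) > (?, ?) works in PostgreSQL and SQLite 3.15+; on engines without it, expand it to createdAt > x OR (createdAt = x AND id > y):

```python
import base64
import json

def encode_token(created_at: str, comment_id: int) -> str:
    """Pack the last row's sort key into an opaque, URL-safe string."""
    payload = json.dumps({"createdAt": created_at, "id": comment_id})
    return base64.urlsafe_b64encode(payload.encode()).decode()

def decode_token(token: str) -> dict:
    return json.loads(base64.urlsafe_b64decode(token.encode()))

def get_comments(conn, limit=20, continuation=None):
    if continuation is None:
        rows = conn.execute(
            "SELECT createdAt, id, text FROM comments "
            "ORDER BY createdAt, id LIMIT ?", (limit,)).fetchall()
    else:
        cur = decode_token(continuation)
        # Row-value comparison keeps the composite ordering stable
        rows = conn.execute(
            "SELECT createdAt, id, text FROM comments "
            "WHERE (createdAt, id) > (?, ?) ORDER BY createdAt, id LIMIT ?",
            (cur["createdAt"], cur["id"], limit)).fetchall()
    next_token = (encode_token(rows[-1][0], rows[-1][1])
                  if len(rows) == limit else None)
    return {"data": rows, "next_continuation_token": next_token}
```

Because the token is opaque, the server can later change the sort key or token format without breaking clients; they only ever echo the string back.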
Limitations
Limited to certain columns
The sort key has to be UNIQUE, NOT NULL, unchanging, and indexed
Commonly used: id, or a createdAt timestamp with id as a tiebreaker
No arbitrary page jumps
it's pretty much just next and previous, like a linked list!