When making requests to HTTP APIs, some endpoints may return paginated responses. This is common when querying for lists of items like products, orders, or users. Rather than returning the entire dataset in one response, the server breaks it up into "pages".
The exact way pagination works is API-specific, but a typical pattern is:

- The client requests the first page, often specifying a page size.
- The response contains one page of items plus metadata, such as the total item count and a link to the next page.
- The client keeps following the next-page link until there isn't one.

Here's an example of how we might use the cpr library to handle a paginated API response:
#include <cpr/cpr.h>
#include <iostream>
#include <nlohmann/json.hpp>

using json = nlohmann::json;

int main() {
  cpr::Url base{"https://api.example.com/items"};
  cpr::Parameters params{{"perPage", "10"}};

  // Fetch the first page, requesting 10 items per page
  cpr::Response r = cpr::Get(base, params);
  json resp = json::parse(r.text);

  std::cout << resp["total"].get<int>() << " total items\n";

  while (true) {
    // Process each item on the current page
    for (auto& item : resp["data"]) {
      std::string id = item["id"].get<std::string>();
      std::cout << "Item ID: " << id << '\n';
      // ...process each item
    }

    // Follow the next-page link if there is one; otherwise we're done
    if (resp.contains("nextPage")) {
      std::string nextUrl = resp["nextPage"];
      r = cpr::Get(cpr::Url{nextUrl});
      resp = json::parse(r.text);
    } else {
      break;
    }
  }
}
This code assumes a JSON response in the following format:
{
  "total": 100,
  "perPage": 10,
  "page": 1,
  "lastPage": 10,
  "data": [
    {"id": "abc123", ...},
    ...
  ],
  "nextPage": "https://api.example.com/items?page=2"
}
The perPage parameter specifies how many items to return per page. We extract the total number of items from the first response. Then we loop through each "page" of data, processing the items. If the response contains a nextPage key, we request that URL to get the next page. Once nextPage is missing, we know we've reached the end.
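In practice, we'd also want to guard against failed requests and malformed responses rather than assuming every page arrives intact. Here's a minimal sketch of a more defensive fetch loop; it reuses the same assumed endpoint and field names as above, and the fetchPage helper is purely illustrative:

#include <cpr/cpr.h>
#include <nlohmann/json.hpp>
#include <iostream>
#include <optional>
#include <string>

using json = nlohmann::json;

// Hypothetical helper: fetch and parse one page, or return nothing on failure
std::optional<json> fetchPage(const std::string& url) {
  cpr::Response r = cpr::Get(cpr::Url{url});
  if (r.error || r.status_code != 200) {
    std::cerr << "Request failed: " << r.error.message
              << " (status " << r.status_code << ")\n";
    return std::nullopt;
  }
  // Parse without exceptions; a parse error yields a "discarded" value
  json resp = json::parse(r.text, nullptr, /*allow_exceptions=*/false);
  if (resp.is_discarded()) {
    std::cerr << "Invalid JSON in response\n";
    return std::nullopt;
  }
  return resp;
}

int main() {
  std::string url = "https://api.example.com/items?perPage=10";
  while (auto page = fetchPage(url)) {
    for (auto& item : (*page)["data"]) {
      std::cout << "Item ID: " << item["id"].get<std::string>() << '\n';
    }
    // Stop unless the response carries a string nextPage link
    if (!page->contains("nextPage") || !(*page)["nextPage"].is_string()) break;
    url = (*page)["nextPage"].get<std::string>();
  }
}

This version also treats a non-string nextPage value (e.g. null on the last page) as the end of the data, which some APIs use instead of omitting the key entirely.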
Of course, the exact logic depends on the API spec, but the general idea is to keep requesting pages until there are none left. We can accumulate the items into a local collection if needed.
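For instance, here's one way we might collect every item across all pages into a std::vector before processing, reusing the same assumed endpoint and response shape:

#include <cpr/cpr.h>
#include <nlohmann/json.hpp>
#include <iostream>
#include <string>
#include <vector>

using json = nlohmann::json;

int main() {
  std::vector<json> items;  // accumulates items from every page
  std::string url = "https://api.example.com/items?perPage=10";

  while (true) {
    cpr::Response r = cpr::Get(cpr::Url{url});
    json resp = json::parse(r.text);

    // Append this page's items to the running collection
    for (auto& item : resp["data"]) {
      items.push_back(item);
    }

    if (!resp.contains("nextPage")) break;
    url = resp["nextPage"].get<std::string>();
  }

  std::cout << "Fetched " << items.size() << " items in total\n";
}

Accumulating first and processing afterward keeps the fetch logic separate from the business logic, at the cost of holding the whole dataset in memory.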