Friday, May 13, 2022

[FIXED] How do I extract all results from a GET request that spans multiple pages?

Tags: append, dataframe, json, python, python-requests

Issue

I have successfully written code that calls an API and then converts the results into a DataFrame.

import requests
import pandas as pd

wax_wallet = "zqsfm.wam"

# Get Assets from AtomicHub API
response1 = requests.get(
    "https://wax.api.atomicassets.io/atomicassets/v1/assets?"
    f"owner={wax_wallet}"
    "&collection_whitelist=nftdraft2121"
    "&page=1"
    "&limit=1000"
    "&order=asc"
    "&sort=name")

# Parse the response body as JSON
json_assets = response1.json()

# Convert JSON to DataFrame
df = pd.json_normalize(json_assets['data'])

This API returns at most 1000 items per page, so I need to loop through as many pages as necessary and collect all of the results into a single DataFrame.

I attempted to solve it with the code below, but was unsuccessful.

import math
import json
import requests as rq

asset_count = 2500
pages = int(math.ceil(asset_count / 1000))

# Get Assets from AtomicHub API
all_assets = []
for page in range(1, pages):
    url = f'https://wax.api.atomicassets.io/atomicassets/v1/assets?owner={wax_wallet}' \
          f'&collection_whitelist=nftdraft2121&page={page}&limit=1000&order=asc&sort=name'
    response = rq.get(url)
    all_assets.append(json.loads(response.text))["response"]

Thanks in advance for any help!


Solution

You can fetch each page, turn it into a DataFrame, and then concatenate the individual frames into a final result:

import requests
import pandas as pd

def get_page(page_num):
    wax_wallet = "zqsfm.wam"

    response = requests.get(
        "https://wax.api.atomicassets.io/atomicassets/v1/assets",
        params={
            "owner": wax_wallet,
            "collection_whitelist": "nftdraft2121",
            "page": page_num,
            "limit": "1000",
            "order": "asc",
            "sort": "name"
        }
    )

    json_assets = response.json()
    return pd.json_normalize(json_assets['data'])

# The number of pages you want
number_of_pages_requested = 10

# Get all pages as dataframes
pages = [get_page(n + 1) for n in range(number_of_pages_requested)]

# Combine pages to single dataframe
df = pd.concat(pages)
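
The example above requests a fixed number of pages. If the total isn't known in advance, one option is to keep requesting pages until the API returns fewer than limit results. The following is a sketch (the get_all_pages helper name is illustrative), assuming an empty or short "data" list marks the last page:

import requests
import pandas as pd

def get_all_pages(wax_wallet="zqsfm.wam", limit=1000):
    frames = []
    page = 1
    while True:
        response = requests.get(
            "https://wax.api.atomicassets.io/atomicassets/v1/assets",
            params={
                "owner": wax_wallet,
                "collection_whitelist": "nftdraft2121",
                "page": page,
                "limit": limit,
                "order": "asc",
                "sort": "name"
            }
        )
        data = response.json()["data"]
        if not data:  # an empty page means the previous page was the last
            break
        frames.append(pd.json_normalize(data))
        if len(data) < limit:  # a short page is the final page
            break
        page += 1
    return pd.concat(frames, ignore_index=True) if frames else pd.DataFrame()

Passing ignore_index=True to pd.concat gives the combined frame one continuous index instead of repeating each page's 0-999 range.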

Edit: updated using params based on Olvin Roght's comment

Edit 2: fixed indexing error
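
For completeness, the loop in the question fails for two reasons: range(1, pages) stops before the final page, and list.append returns None, so the trailing ["response"] raises a TypeError (the results also live under the "data" key, not "response"). A minimal correction of that attempt, as a sketch keeping the original URL-string style:

import math
import requests as rq
import pandas as pd

wax_wallet = "zqsfm.wam"
asset_count = 2500
pages = int(math.ceil(asset_count / 1000))

all_assets = []
for page in range(1, pages + 1):  # range() excludes its stop value, so add 1
    url = (f'https://wax.api.atomicassets.io/atomicassets/v1/assets?owner={wax_wallet}'
           f'&collection_whitelist=nftdraft2121&page={page}&limit=1000&order=asc&sort=name')
    response = rq.get(url)
    # extend with the parsed "data" list rather than indexing the None
    # that list.append returns
    all_assets.extend(response.json()["data"])

df = pd.json_normalize(all_assets)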



Answered By - tituszban
Answer Checked By - Cary Denson (PHPFixing Admin)