I am trying to scrape data from a JSON endpoint using a small function. The URL looks like https://xxxxxxxx.com/products.json?&page= so I can append a page number to it.
When I use the requests module I simply have a while loop that increments the page number and breaks as soon as I get an empty response, i.e. as soon as a page comes back empty (see the sketch after my code below).
Is it possible to do the same thing with aiohttp?
So far I have only managed to pre-generate a fixed set of URLs and pass them to the tasks; I would like to know whether I can loop here as well and stop once I see an empty response.
Thank you very much.
'''
import asyncio
import pprint
import aiohttp

# the paginated endpoint; a page number gets appended to this string
request_url = "https://xxxxxxxx.com/products.json?&page="

async def download_one(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            pprint.pprint(await resp.json(content_type=None))

async def download_all(sites):
    tasks = [asyncio.create_task(download_one(site)) for site in sites]
    await asyncio.gather(*tasks)

def main():
    # pre-generate the first 49 page URLs and fetch them all concurrently
    sites = [request_url + str(page) for page in range(1, 50)]
    asyncio.run(download_all(sites))

if __name__ == "__main__":
    main()
'''
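For comparison, here is a minimal sketch of the requests-based while loop described in the question. request_url and download_all_sync are illustrative names, and the endpoint is assumed to be the same paginated products.json URL:

'''
import requests

# assumed to be the same paginated endpoint as in the aiohttp version
request_url = "https://xxxxxxxx.com/products.json?&page="

def download_all_sync():
    results = []
    page = 1
    while True:
        data = requests.get(request_url + str(page)).json()
        if not data:  # an empty page means there is nothing left to fetch
            break
        results.append(data)
        page += 1
    return results
'''

Each request here blocks until the previous one has finished, which is exactly the cost the aiohttp version is meant to avoid.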
Best answer
Here is some untested code. Even if it does not work as-is, it should give you an idea of how to get the job done.
'''
import asyncio
import pprint
import aiohttp

# placeholder for the paginated endpoint; a page number gets appended to it
request_url = "https://xxxxxxxx.com/products.json?&page="

async def download_one(session, url):
    async with session.get(url) as resp:
        data = await resp.json(content_type=None)
        if not data:
            # raising here is what breaks the loop: asyncio.wait returns as
            # soon as any task raises, and the remaining tasks are cancelled below
            raise Exception("No data found")
        pprint.pprint(data)  # process the page here; printing as in the question

async def download_all(sites):
    async with aiohttp.ClientSession() as session:
        tasks = [asyncio.create_task(download_one(session, site)) for site in sites]
        done, pending = await asyncio.wait(
            tasks, return_when=asyncio.FIRST_EXCEPTION
        )
        for task in pending:
            task.cancel()  # shut down the now-redundant jobs

def main():
    sites = [request_url + str(page) for page in range(1, 50)]
    asyncio.run(download_all(sites))

if __name__ == "__main__":
    main()
'''
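If you would rather not guess an upper bound such as 50 pages at all, another untested pattern is to fetch a small batch of pages concurrently, check whether any page in the batch came back empty, and only then request the next batch. BATCH_SIZE and fetch_until_empty below are illustrative names, not part of the original answer:

'''
import asyncio
import aiohttp

request_url = "https://xxxxxxxx.com/products.json?&page="  # placeholder endpoint
BATCH_SIZE = 10  # number of pages requested concurrently per round

async def fetch_page(session, page):
    async with session.get(request_url + str(page)) as resp:
        return await resp.json(content_type=None)

async def fetch_until_empty():
    results = []
    page = 1
    async with aiohttp.ClientSession() as session:
        while True:
            # fetch one batch of pages concurrently
            batch = await asyncio.gather(
                *(fetch_page(session, p) for p in range(page, page + BATCH_SIZE))
            )
            for data in batch:
                if not data:  # empty page reached: return what we have
                    return results
                results.append(data)
            page += BATCH_SIZE

if __name__ == "__main__":
    pages = asyncio.run(fetch_until_empty())
    print(f"Fetched {len(pages)} non-empty pages")
'''

This trades a little concurrency (at most BATCH_SIZE requests in flight at a time) for not having to know the total number of pages in advance.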
Regarding "python - Aiohttp: fetch responses page by page until an empty response is reached", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/58889842/