When to use asyncio in Python

2021-06-07 · 旭日丶丶

https://stackoverflow.com/a/33399896/2268680

By default, all your code is synchronous. You make it asynchronous by defining functions with async def and "calling" them with await. A more accurate question would be "When should I write asynchronous code instead of synchronous?" The answer: when you can benefit from it. As you noted, you will usually benefit when you work with I/O operations:

# Synchronous way:
download(url1)  # takes 5 sec.
download(url2)  # takes 5 sec.
# Total time: 10 sec.

# Asynchronous way (await must run inside an async def coroutine):
await asyncio.gather(
    async_download(url1),  # takes 5 sec.
    async_download(url2)   # takes 5 sec.
)
# Total time: only 5 sec. (+ a little overhead for using asyncio)
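
To make the comparison concrete, here is a minimal runnable sketch in which the hypothetical async_download() is simulated with asyncio.sleep() standing in for 5 seconds of network I/O:

import asyncio
import time

# Hypothetical stand-in for a real downloader: asyncio.sleep()
# simulates 5 seconds of network I/O without blocking the event loop.
async def async_download(url):
    await asyncio.sleep(5)
    return f"<html for {url}>"

async def main():
    start = time.perf_counter()
    pages = await asyncio.gather(
        async_download("https://example.com/1"),
        async_download("https://example.com/2"),
    )
    print(f"{len(pages)} pages in {time.perf_counter() - start:.1f} sec")

asyncio.run(main())  # prints "2 pages in 5.0 sec"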

Of course, if you create a function that uses asynchronous code, that function has to be asynchronous too (defined with async def). But any asynchronous function can freely use synchronous code, and it makes no sense to turn synchronous code into asynchronous code without a reason:

# extract_links(url) must be async because it awaits async_download() inside
async def extract_links(url):
    # async_download() was made async to benefit from concurrent I/O
    html = await async_download(url)

    # parse() does no I/O, so there is no reason to make it async
    links = parse(html)

    return links

One very important thing: any long-running synchronous operation (say, > 50 ms; it's hard to give an exact number) will freeze all your asynchronous operations for that time:

async def extract_links(url):
    data = await download(url)
    links = parse(data)
    # If search_in_very_big_file() takes a long time to run,
    # every other running coroutine (anywhere else in the code)
    # is frozen until it returns; you need to avoid this situation.
    links_found = search_in_very_big_file(links)
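
A small self-contained sketch (standard library only; the ticker() coroutine is illustrative, not from the original answer) makes the freeze visible: the ticker stops printing while a blocking time.sleep() holds the event loop:

import asyncio
import time

async def ticker():
    # Prints about once per second, as long as the event loop is free.
    while True:
        print("tick", time.strftime("%H:%M:%S"))
        await asyncio.sleep(1)

async def main():
    task = asyncio.create_task(ticker())
    await asyncio.sleep(3)   # ticker prints normally for ~3 sec.
    time.sleep(5)            # blocking call: no ticks for 5 sec.
    await asyncio.sleep(3)   # ticker resumes once the loop is free
    task.cancel()

asyncio.run(main())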

You can avoid this by calling long-running synchronous functions in a separate process (and awaiting the result):

from concurrent.futures import ProcessPoolExecutor

executor = ProcessPoolExecutor(2)

async def extract_links(url):
    data = await download(url)
    links = parse(data)
    # The event loop keeps serving other coroutines while the
    # separate process does the heavy work.
    loop = asyncio.get_running_loop()
    links_found = await loop.run_in_executor(executor, search_in_very_big_file, links)
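
Here is a self-contained sketch of the same pattern, with a hypothetical CPU-bound count_primes() standing in for search_in_very_big_file():

import asyncio
from concurrent.futures import ProcessPoolExecutor

def count_primes(n):
    # Deliberately slow, CPU-bound work (naive trial division).
    return sum(all(i % d for d in range(2, i)) for i in range(2, n))

async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor(2) as executor:
        # Both computations run in worker processes; the event loop
        # stays responsive and awaits them concurrently.
        results = await asyncio.gather(
            loop.run_in_executor(executor, count_primes, 20_000),
            loop.run_in_executor(executor, count_primes, 20_000),
        )
    print(results)

if __name__ == "__main__":  # required for ProcessPoolExecutor with spawn
    asyncio.run(main())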

One more example: using requests with asyncio. requests.get is a synchronous long-running function that you shouldn't call inside async code (again, to avoid freezing the loop). But it runs long because of I/O, not because of heavy computation, so in this case you can use ThreadPoolExecutor instead of ProcessPoolExecutor to avoid the multiprocessing overhead:

from concurrent.futures import ThreadPoolExecutor

import requests

executor = ThreadPoolExecutor(2)

async def download(url):
    loop = asyncio.get_running_loop()
    response = await loop.run_in_executor(executor, requests.get, url)
    return response.text
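
A sketch of how you might drive it (the URLs are placeholders): gather several downloads, and the thread pool runs at most two requests.get calls at a time:

async def main():
    pages = await asyncio.gather(
        download("https://example.com/a"),
        download("https://example.com/b"),
    )
    print(len(pages), "pages downloaded")

asyncio.run(main())

Since Python 3.9, asyncio.to_thread(requests.get, url) expresses the same thread-offloading idea without managing an executor yourself.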