
Asynchronously Get And Store Images In Python

The following code is a sample of non-asynchronous code. Is there any way to get the images asynchronously?

import urllib
for x in range(0,10):
    urllib.urlretrieve('http://t

Solution 1:

You don't need any third party library. Just create a thread for every request, start the threads, and then wait for all of them to finish in the background, or continue your application while the images are being downloaded.

import threading
import urllib

results = []
def getter(url, dest):
    results.append(urllib.urlretrieve(url, dest))

threads = []
for x in range(0, 10):
    t = threading.Thread(target=getter, args=('http://test.com/file%s.png' % x,
                                              'temp/file%s.png' % x))
    t.start()
    threads.append(t)
# Wait for all threads to finish.
# You can continue doing whatever you want and join the threads
# when you finally need the results. They will fetch your urls in
# the background without blocking your main application.
for t in threads:
    t.join()

Optionally you can create a thread pool that will get urls and dests from a queue.
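That queue-based pool might look like the following sketch (Python 3 standard library only; `download_all` is a hypothetical helper name, and the `fetch` parameter defaulting to `urllib.request.urlretrieve` is an assumption for illustration):

```python
import queue
import threading
import urllib.request

def download_all(pairs, fetch=urllib.request.urlretrieve, num_workers=4):
    """Process every (url, dest) pair using a fixed pool of worker threads."""
    q = queue.Queue()

    def worker():
        while True:
            item = q.get()
            if item is None:          # sentinel: shut this worker down
                break
            url, dest = item
            try:
                fetch(url, dest)
            finally:
                q.task_done()

    workers = [threading.Thread(target=worker) for _ in range(num_workers)]
    for w in workers:
        w.start()
    for pair in pairs:
        q.put(pair)
    q.join()                          # block until every queued pair is done
    for _ in workers:
        q.put(None)                   # one sentinel per worker
    for w in workers:
        w.join()

# e.g. download_all([('http://test.com/file%s.png' % x, 'temp/file%s.png' % x)
#                    for x in range(10)])
```

A fixed pool caps the number of simultaneous connections, which matters more than one-thread-per-URL once the list of images grows large.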

If you're using Python 3, this is already implemented for you in the concurrent.futures module.
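With concurrent.futures the pool above collapses to a few lines. A minimal sketch (the `fetch_all` name is hypothetical, and the injectable `fetch` argument is an illustration convenience; by default it would be `urllib.request.urlretrieve`):

```python
import concurrent.futures
import urllib.request

def fetch_all(pairs, fetch=urllib.request.urlretrieve, max_workers=4):
    """Run fetch(url, dest) for every pair on a thread pool."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(fetch, url, dest) for url, dest in pairs]
        # as_completed yields futures as they finish; .result() re-raises
        # any exception a download hit.
        return [f.result() for f in concurrent.futures.as_completed(futures)]

# e.g. fetch_all([('http://test.com/file%s.png' % x, 'temp/file%s.png' % x)
#                 for x in range(10)])
```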

Solution 2:

Something like this should help you:

import grequests

urls = ['url1', 'url2', ....] # this should be the list of urls

requests = (grequests.get(u) for u in urls)
responses = grequests.map(requests)
for response in responses:
    if 199 < response.status_code < 400:
        # generate some name for your image file with an extension like example.jpg
        name = generate_file_name()
        with open(name, 'wb') as f:  # or save to S3 or something like that
            f.write(response.content)

Here only the downloading of the images is parallel; writing each image's content to a file is still sequential. You can create a thread, or do something else, to make the writes parallel or asynchronous as well.
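One way to make the writes concurrent too is to hand each (filename, content) pair to a small thread pool instead of writing inside the loop. A sketch using only the standard library (the `save_all` name is hypothetical, not part of grequests):

```python
import concurrent.futures

def save_all(named_blobs, max_workers=4):
    """Write each (filename, bytes) pair to disk on a worker thread."""
    def write_one(name, content):
        with open(name, 'wb') as f:
            f.write(content)

    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(write_one, name, content)
                   for name, content in named_blobs]
        for fut in concurrent.futures.as_completed(futures):
            fut.result()          # surface any I/O error
```

In the grequests loop above you would collect `(generate_file_name(), response.content)` pairs and pass the list to `save_all` afterwards, so slow disks don't serialize the whole job.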
