
Tracking File Load Progress In Python

A lot of modules I use import entire files into memory or trickle a file's contents in while they process it. I'm wondering if there's any way to track this sort of loading progress.

Solution 1:

I would do this by determining the size of the file, and then simply dividing the number of bytes read by the total. Like this:

import os

def show_progress(file_name, chunk_size=1024):
    total_size = os.path.getsize(file_name)
    total_read = 0
    # Binary mode so the byte count matches the size reported on disk
    with open(file_name, "rb") as fh:
        while True:
            chunk = fh.read(chunk_size)
            if not chunk:
                break
            total_read += len(chunk)
            print("Progress: %.1f percent" % (100.0 * total_read / total_size))
            yield chunk

for chunk in show_progress("my_file.txt"):
    # Process the chunk
    pass

Edit: I know it isn't the best code, but I just wanted to show the concept.
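One subtlety worth noting with this approach: if the file is opened in text mode, newline translation and decoding can make the running byte count drift away from what `os.path.getsize` reports, so binary mode keeps the two consistent. A minimal, self-checking sketch (the throwaway file and sizes here are made up for illustration):

```python
import os
import tempfile

def show_progress(file_name, chunk_size=1024):
    """Yield chunks while printing percent-complete, counting raw bytes."""
    total_size = os.path.getsize(file_name)
    total_read = 0
    with open(file_name, "rb") as fh:  # binary: bytes read == bytes on disk
        while True:
            chunk = fh.read(chunk_size)
            if not chunk:
                break
            total_read += len(chunk)
            print("Progress: %.1f percent" % (100.0 * total_read / total_size))
            yield chunk

# Demo with a temporary file so the byte count can be checked against getsize.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"x" * 3000)

read_bytes = sum(len(c) for c in show_progress(tmp.name, chunk_size=1024))
assert read_bytes == os.path.getsize(tmp.name)
os.unlink(tmp.name)
```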

Solution 2:

If you actually mean "import" (not "read"), then you can hook Python's import mechanism and add timing capabilities there.

See the imp module (deprecated since Python 3.4 in favor of importlib).

If you mean "read", then you can trivially wrap Python files with your own file-like wrapper. Files don't expose too many methods. You can override the interesting ones to get timing data.

>>> class MyFile(file):
...     def read(self, *args, **kw):
...         # start timing
...         result = super(MyFile, self).read(*args, **kw)
...         # finish timing
...         return result
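The snippet above relies on Python 2's built-in `file` type, which no longer exists in Python 3. A present-day equivalent (the class name `ProgressFile` is my own) subclasses `io.FileIO` instead, and it is a small step from timing reads to reporting progress:

```python
import io
import os
import tempfile

class ProgressFile(io.FileIO):
    """Binary file whose read() reports percent-complete (a sketch)."""

    def __init__(self, path):
        super().__init__(path, "r")  # FileIO is always binary
        self._total = max(os.path.getsize(path), 1)  # avoid /0 on empty files
        self._seen = 0

    def read(self, size=-1):
        chunk = super().read(size)
        self._seen += len(chunk)
        print("Progress: %.1f%%" % (100.0 * self._seen / self._total))
        return chunk

# Demo against a throwaway file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello world")

with ProgressFile(tmp.name) as fh:
    data = fh.read()
os.unlink(tmp.name)
```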
