Unable To Read Large CSV File From S3 Bucket To Python
So I am trying to load a CSV file from an S3 bucket. The following is the code:

```python
import pandas as pd
import boto3
import io

s3_file_key = 'iris.csv'
bucket = 'data'
s3 = boto3.client('s3')
```
Solution 1:
Here are a few things you can check:
- Make sure the region of the S3 bucket matches the region in your AWS configuration. Otherwise, it won't work. The S3 service is global, but every bucket is created in a specific region, and AWS clients should use that same region.
- Make sure the access keys for the resource have the right set of permissions.
- Make sure the file is actually uploaded.
- Make sure there is no bucket policy applied that revokes access.
- You can enable server access logging on your S3 bucket to see errors.
- Make sure the bucket is not versioned. If versioned, specify the object version.
- Make sure the object has the correct set of ACLs defined.
- If the object is encrypted, make sure you have permission to use that KMS key to decrypt the object.