BUG: read_parquet times out resolving S3 region #57449
Comments
Thanks for the report. Grepping the pandas code, I find no instances of

```python
# works perfectly
df = pd.read_csv("s3://my-bucket/example.csv")

# throws an error, seemingly about the region
df = pd.read_parquet("s3://my-bucket/example_parquet")
```

Just to make sure; your
Appreciate you both taking a look at this. I agree it's odd I didn't include an extension; I did that because I'm generally working with multipart parquets from Spark, which are stored in a folder. That said, I was doing this testing with a single parquet file, and I just repeated it now with a proper extension and got the same result. Thanks for the suggestion though, it was worth trying.

Dj
Agreed, this doesn't appear to be a pandas issue. Additionally, the error appears to be raised in pyarrow, so I would suggest checking whether you get the same error using pyarrow's read_parquet directly. Closing.
Pandas version checks

- I have checked that this issue has not already been reported.
- I have confirmed this bug exists on the latest version of pandas.
- I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
Issue Description
I was not sure where to report this issue because it probably involves `pandas`, `pyarrow`, `s3fs` and `fsspec`. FWIW, I'm using an S3-compatible storage called Scality.

As the example code shows, I'm able to use `read_csv` with the only configuration being the `AWS_PROFILE` environment variable. Trying the analogous `read_parquet`, it times out and seems as if it can't resolve the region. My S3-compatible instance is on-premises, so there isn't a "region"; however, setting `AWS_DEFAULT_REGION` to `""` also did not solve this.

Interestingly, either supplying `AWS_CA_BUNDLE` in `storage_options` or supplying the `filesystem` argument fixes this. Worth noting that in my `~/.aws/config` file the `region` and `ca_bundle` values are provided (as is `endpoint_url`).

Expected Behavior
I think `read_parquet` should behave like `read_csv` and be able to access and read the data without passing in additional configuration.
INSTALLED VERSIONS
commit : fd3f571
python : 3.9.16.final.0
python-bits : 64
OS : Linux
OS-release : 4.18.0-513.9.1.el8_9.x86_64
Version : #1 SMP Thu Nov 16 10:29:04 EST 2023
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : en_US.UTF-8
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.0
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.8.2
setuptools : 69.1.0
pip : 24.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.3
IPython : 8.18.1
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.2.0
gcsfs : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 15.0.0
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : 2024.2.0
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : 0.19.0
tzdata : 2024.1
qtpy : None
pyqt5 : None