Pandas on AWS
NOTE
Due to the new major version 1.0.0, which includes breaking changes, please make sure that all your old projects have their dependencies frozen on the desired version (e.g. `pip install awswrangler==0.3.2`). You can always read the legacy docs here.
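
If you are unsure which API your environment provides, a quick check of the installed version helps (a minimal sketch; `awswrangler` exposes its version as `__version__`):

```python
import awswrangler as wr

# Print the installed version to confirm whether you are on the 0.x or the 1.x API
print(wr.__version__)
```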
Source | Page | Installation Command
---|---|---
PyPi | Link | `pip install awswrangler`
Conda | Link | `conda install -c conda-forge awswrangler`
Install the Wrangler with: `pip install awswrangler`
```python
import awswrangler as wr
import pandas as pd

df = pd.DataFrame({"id": [1, 2], "value": ["foo", "boo"]})

# Storing data on the Data Lake
wr.s3.to_parquet(
    df=df,
    path="s3://bucket/dataset/",
    dataset=True,
    database="my_db",
    table="my_table"
)

# Retrieving the data directly from Amazon S3
df = wr.s3.read_parquet("s3://bucket/dataset/", dataset=True)

# Retrieving the data from Amazon Athena
df = wr.athena.read_sql_query("SELECT * FROM my_table", database="my_db")

# Getting a Redshift connection (SQLAlchemy) from the Glue Catalog Connections
engine = wr.catalog.get_engine("my-redshift-connection")

# Retrieving the data from Amazon Redshift Spectrum
df = wr.db.read_sql_query("SELECT * FROM external_schema.my_table", con=engine)
```
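
The same engine can also be used to write results back to Redshift. The sketch below assumes the 1.x `wr.db.to_sql` helper, which passes its keyword arguments through to `pandas.DataFrame.to_sql`; the schema and table names are hypothetical placeholders:

```python
# A minimal sketch: write the DataFrame back to Redshift through the same
# SQLAlchemy engine obtained above.
wr.db.to_sql(
    df=df,
    con=engine,
    schema="public",       # assumed target schema
    name="my_table_copy",  # assumed target table
    if_exists="replace",
    index=False,
)
```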
- What is AWS Data Wrangler?
- Install
- Tutorials
- 01 - Introduction
- 02 - Sessions
- 03 - Amazon S3
- 04 - Parquet Datasets
- 05 - Glue Catalog
- 06 - Amazon Athena
- 07 - Databases (Redshift, MySQL and PostgreSQL)
- 08 - Redshift - Copy & Unload
- 09 - Redshift - Append, Overwrite and Upsert
- 10 - Parquet Crawler
- 11 - CSV Datasets
- 12 - CSV Crawler
- 13 - Merging Datasets on S3
- 15 - EMR
- 16 - EMR & Docker
- API Reference
- License
- Contributing