Release 3.0.0b2 (#1640)
kukushking authored Sep 29, 2022
1 parent 1073eb9 commit 715f163
Showing 23 changed files with 105 additions and 105 deletions.
2 changes: 1 addition & 1 deletion .bumpversion.cfg
@@ -1,5 +1,5 @@
[bumpversion]
-current_version = 3.0.0b1
+current_version = 3.0.0b2
commit = False
tag = False
tag_name = {new_version}
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -316,7 +316,7 @@ available_node_types:
SubnetId: {replace with subnet within above AZs}
setup_commands:
-- pip install "awswrangler[distributed]==3.0.0b1"
+- pip install "awswrangler[distributed]==3.0.0b2"
- pip install pytest
```
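The hunk above pins the `awswrangler[distributed]` extra in a Ray cluster's `setup_commands`. As a rough illustration (not taken from this commit), a sketch of driving the library from the cluster's head node follows; the S3 path is a placeholder and the assumption is that the workers were provisioned with the commands above.

```py3
import ray
import awswrangler as wr

# Attach to the Ray cluster started by the autoscaler (assumes this code
# runs on the head node provisioned with the setup_commands above).
ray.init(address="auto")

# With the distributed extra installed, reads like this one are expected to
# be spread across the cluster; bucket and prefix are placeholders.
df = wr.s3.read_parquet(path="s3://my-bucket/my-dataset/", dataset=True)
print(df.shape)
```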
6 changes: 3 additions & 3 deletions CONTRIBUTING_COMMON_ERRORS.md
@@ -13,9 +13,9 @@ Requirement already satisfied: pbr!=2.1.0,>=2.0.0 in ./.venv/lib/python3.7/site-
Using legacy 'setup.py install' for python-Levenshtein, since package 'wheel' is not installed.
Installing collected packages: awswrangler, python-Levenshtein
Attempting uninstall: awswrangler
-Found existing installation: awswrangler 3.0.0b1
-Uninstalling awswrangler-3.0.0b1:
-Successfully uninstalled awswrangler-3.0.0b1
+Found existing installation: awswrangler 3.0.0b2
+Uninstalling awswrangler-3.0.0b2:
+Successfully uninstalled awswrangler-3.0.0b2
Running setup.py develop for awswrangler
Running setup.py install for python-Levenshtein ... error
ERROR: Command errored out with exit status 1:
74 changes: 37 additions & 37 deletions README.md
@@ -11,7 +11,7 @@ Easy integration with Athena, Glue, Redshift, Timestream, OpenSearch, Neptune, Q

> An [AWS Professional Service](https://aws.amazon.com/professional-services/) open source initiative | [email protected]
-[![Release](https://img.shields.io/badge/release-3.0.0b1-brightgreen.svg)](https://pypi.org/project/awswrangler/)
+[![Release](https://img.shields.io/badge/release-3.0.0b2-brightgreen.svg)](https://pypi.org/project/awswrangler/)
[![Python Version](https://img.shields.io/badge/python-3.7%20%7C%203.8%20%7C%203.9%20%7C%203.10-brightgreen.svg)](https://anaconda.org/conda-forge/awswrangler)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
@@ -26,7 +26,7 @@ Easy integration with Athena, Glue, Redshift, Timestream, OpenSearch, Neptune, Q
| **[PyPi](https://pypi.org/project/awswrangler/)** | [![PyPI Downloads](https://pepy.tech/badge/awswrangler)](https://pypi.org/project/awswrangler/) | `pip install awswrangler` |
| **[Conda](https://anaconda.org/conda-forge/awswrangler)** | [![Conda Downloads](https://img.shields.io/conda/dn/conda-forge/awswrangler.svg)](https://anaconda.org/conda-forge/awswrangler) | `conda install -c conda-forge awswrangler` |

-> ⚠️ **For platforms without PyArrow 3 support (e.g. [EMR](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/install.html#emr-cluster), [Glue PySpark Job](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/install.html#aws-glue-pyspark-jobs), MWAA):**<br>
+> ⚠️ **For platforms without PyArrow 3 support (e.g. [EMR](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/install.html#emr-cluster), [Glue PySpark Job](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/install.html#aws-glue-pyspark-jobs), MWAA):**<br>
➡️ `pip install pyarrow==2 awswrangler`

Powered By [<img src="https://arrow.apache.org/img/arrow.png" width="200">](https://arrow.apache.org/powered_by/)
@@ -44,7 +44,7 @@ Powered By [<img src="https://arrow.apache.org/img/arrow.png" width="200">](http

Installation command: `pip install awswrangler`

-> ⚠️ **For platforms without PyArrow 3 support (e.g. [EMR](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/install.html#emr-cluster), [Glue PySpark Job](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/install.html#aws-glue-pyspark-jobs), MWAA):**<br>
+> ⚠️ **For platforms without PyArrow 3 support (e.g. [EMR](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/install.html#emr-cluster), [Glue PySpark Job](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/install.html#aws-glue-pyspark-jobs), MWAA):**<br>
➡️`pip install pyarrow==2 awswrangler`

```py3
@@ -98,17 +98,17 @@ FROM "sampleDB"."sampleTable" ORDER BY time DESC LIMIT 3

## [Read The Docs](https://aws-sdk-pandas.readthedocs.io/)

-- [**What is AWS SDK for pandas?**](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/what.html)
-- [**Install**](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/install.html)
-- [PyPi (pip)](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/install.html#pypi-pip)
-- [Conda](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/install.html#conda)
-- [AWS Lambda Layer](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/install.html#aws-lambda-layer)
-- [AWS Glue Python Shell Jobs](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/install.html#aws-glue-python-shell-jobs)
-- [AWS Glue PySpark Jobs](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/install.html#aws-glue-pyspark-jobs)
-- [Amazon SageMaker Notebook](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/install.html#amazon-sagemaker-notebook)
-- [Amazon SageMaker Notebook Lifecycle](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/install.html#amazon-sagemaker-notebook-lifecycle)
-- [EMR](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/install.html#emr)
-- [From source](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/install.html#from-source)
+- [**What is AWS SDK for pandas?**](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/what.html)
+- [**Install**](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/install.html)
+- [PyPi (pip)](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/install.html#pypi-pip)
+- [Conda](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/install.html#conda)
+- [AWS Lambda Layer](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/install.html#aws-lambda-layer)
+- [AWS Glue Python Shell Jobs](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/install.html#aws-glue-python-shell-jobs)
+- [AWS Glue PySpark Jobs](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/install.html#aws-glue-pyspark-jobs)
+- [Amazon SageMaker Notebook](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/install.html#amazon-sagemaker-notebook)
+- [Amazon SageMaker Notebook Lifecycle](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/install.html#amazon-sagemaker-notebook-lifecycle)
+- [EMR](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/install.html#emr)
+- [From source](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/install.html#from-source)
- [**Tutorials**](https://github.com/aws/aws-sdk-pandas/tree/main/tutorials)
- [001 - Introduction](https://github.com/aws/aws-sdk-pandas/blob/main/tutorials/001%20-%20Introduction.ipynb)
- [002 - Sessions](https://github.com/aws/aws-sdk-pandas/blob/main/tutorials/002%20-%20Sessions.ipynb)
@@ -144,29 +144,29 @@ FROM "sampleDB"."sampleTable" ORDER BY time DESC LIMIT 3
- [032 - Lake Formation Governed Tables](https://github.com/aws/aws-sdk-pandas/blob/main/tutorials/032%20-%20Lake%20Formation%20Governed%20Tables.ipynb)
- [033 - Amazon Neptune](https://github.com/aws/aws-sdk-pandas/blob/main/tutorials/033%20-%20Amazon%20Neptune.ipynb)
- [034 - Distributing Calls on Ray Remote Cluster](https://github.com/aws/aws-sdk-pandas/blob/release-3.0.0/tutorials/034%20-%20Distributing%20Calls%20on%20Ray%20Remote%20Cluster.ipynb)
-- [**API Reference**](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/api.html)
-- [Amazon S3](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/api.html#amazon-s3)
-- [AWS Glue Catalog](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/api.html#aws-glue-catalog)
-- [Amazon Athena](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/api.html#amazon-athena)
-- [AWS Lake Formation](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/api.html#aws-lake-formation)
-- [Amazon Redshift](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/api.html#amazon-redshift)
-- [PostgreSQL](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/api.html#postgresql)
-- [MySQL](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/api.html#mysql)
-- [SQL Server](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/api.html#sqlserver)
-- [Oracle](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/api.html#oracle)
-- [Data API Redshift](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/api.html#data-api-redshift)
-- [Data API RDS](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/api.html#data-api-rds)
-- [OpenSearch](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/api.html#opensearch)
-- [Amazon Neptune](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/api.html#amazon-neptune)
-- [DynamoDB](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/api.html#dynamodb)
-- [Amazon Timestream](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/api.html#amazon-timestream)
-- [Amazon EMR](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/api.html#amazon-emr)
-- [Amazon CloudWatch Logs](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/api.html#amazon-cloudwatch-logs)
-- [Amazon Chime](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/api.html#amazon-chime)
-- [Amazon QuickSight](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/api.html#amazon-quicksight)
-- [AWS STS](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/api.html#aws-sts)
-- [AWS Secrets Manager](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/api.html#aws-secrets-manager)
-- [Global Configurations](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/api.html#global-configurations)
+- [**API Reference**](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/api.html)
+- [Amazon S3](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/api.html#amazon-s3)
+- [AWS Glue Catalog](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/api.html#aws-glue-catalog)
+- [Amazon Athena](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/api.html#amazon-athena)
+- [AWS Lake Formation](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/api.html#aws-lake-formation)
+- [Amazon Redshift](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/api.html#amazon-redshift)
+- [PostgreSQL](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/api.html#postgresql)
+- [MySQL](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/api.html#mysql)
+- [SQL Server](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/api.html#sqlserver)
+- [Oracle](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/api.html#oracle)
+- [Data API Redshift](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/api.html#data-api-redshift)
+- [Data API RDS](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/api.html#data-api-rds)
+- [OpenSearch](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/api.html#opensearch)
+- [Amazon Neptune](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/api.html#amazon-neptune)
+- [DynamoDB](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/api.html#dynamodb)
+- [Amazon Timestream](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/api.html#amazon-timestream)
+- [Amazon EMR](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/api.html#amazon-emr)
+- [Amazon CloudWatch Logs](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/api.html#amazon-cloudwatch-logs)
+- [Amazon Chime](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/api.html#amazon-chime)
+- [Amazon QuickSight](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/api.html#amazon-quicksight)
+- [AWS STS](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/api.html#aws-sts)
+- [AWS Secrets Manager](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/api.html#aws-secrets-manager)
+- [Global Configurations](https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/api.html#global-configurations)
- [**License**](https://github.com/aws/aws-sdk-pandas/blob/main/LICENSE.txt)
- [**Contributing**](https://github.com/aws/aws-sdk-pandas/blob/main/CONTRIBUTING.md)

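The README changes above only retarget badges and documentation links to the 3.0.0b2 build. For orientation, a minimal sketch in the spirit of the README quickstart follows; the bucket, database, and table names are placeholders.

```py3
import awswrangler as wr
import pandas as pd

df = pd.DataFrame({"id": [1, 2], "value": ["foo", "boo"]})

# Write a Glue-catalogued Parquet dataset to S3 (placeholder names).
wr.s3.to_parquet(
    df=df,
    path="s3://my-bucket/dataset/",
    dataset=True,
    database="my_db",
    table="my_table",
)

# Read it back through Athena.
df2 = wr.athena.read_sql_query("SELECT * FROM my_table", database="my_db")
```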
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
-3.0.0b1
+3.0.0b2
2 changes: 1 addition & 1 deletion awswrangler/__metadata__.py
@@ -7,5 +7,5 @@

__title__: str = "awswrangler"
__description__: str = "Pandas on AWS."
-__version__: str = "3.0.0b1"
+__version__: str = "3.0.0b2"
__license__: str = "Apache License 2.0"
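A quick sanity check that the installed package carries the bumped metadata, assuming it is importable in the current environment:

```py3
import awswrangler as wr

# __version__ is defined in awswrangler/__metadata__.py, so after this
# release it should report the new beta tag.
print(wr.__version__)  # expected: 3.0.0b2
```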
16 changes: 8 additions & 8 deletions awswrangler/athena/_read.py
@@ -706,11 +706,11 @@ def read_sql_query(
**Related tutorial:**
-- `Amazon Athena <https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/
+- `Amazon Athena <https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/
tutorials/006%20-%20Amazon%20Athena.html>`_
-- `Athena Cache <https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/
+- `Athena Cache <https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/
tutorials/019%20-%20Athena%20Cache.html>`_
-- `Global Configurations <https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/
+- `Global Configurations <https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/
tutorials/021%20-%20Global%20Configurations.html>`_
**There are three approaches available through ctas_approach and unload_approach parameters:**
@@ -774,7 +774,7 @@ def read_sql_query(
/athena.html#Athena.Client.get_query_execution>`_ .
For a practical example check out the
-`related tutorial <https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/
+`related tutorial <https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/
tutorials/024%20-%20Athena%20Query%20Metadata.html>`_!
@@ -1019,11 +1019,11 @@ def read_sql_table(
**Related tutorial:**
-- `Amazon Athena <https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/
+- `Amazon Athena <https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/
tutorials/006%20-%20Amazon%20Athena.html>`_
-- `Athena Cache <https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/
+- `Athena Cache <https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/
tutorials/019%20-%20Athena%20Cache.html>`_
-- `Global Configurations <https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/
+- `Global Configurations <https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/
tutorials/021%20-%20Global%20Configurations.html>`_
**There are two approaches to be defined through ctas_approach parameter:**
@@ -1068,7 +1068,7 @@ def read_sql_table(
/athena.html#Athena.Client.get_query_execution>`_ .
For a practical example check out the
-`related tutorial <https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/
+`related tutorial <https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/
tutorials/024%20-%20Athena%20Query%20Metadata.html>`_!
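The docstrings edited here describe the `ctas_approach`/`unload_approach` options and the query metadata attached to results. A hedged sketch of both, with a placeholder database, table, and S3 output path:

```py3
import awswrangler as wr

# Default CTAS-based read described in the docstring.
df = wr.athena.read_sql_query(
    sql="SELECT * FROM my_table LIMIT 10",
    database="my_db",
    ctas_approach=True,
)

# UNLOAD-based alternative documented alongside it.
df_unload = wr.athena.read_sql_query(
    sql="SELECT * FROM my_table LIMIT 10",
    database="my_db",
    ctas_approach=False,
    unload_approach=True,
    s3_output="s3://my-bucket/unload/",
)

# Execution details covered by the "Athena Query Metadata" tutorial linked above.
print(df.query_metadata)
```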
4 changes: 2 additions & 2 deletions awswrangler/s3/_read_parquet.py
@@ -448,7 +448,7 @@ def read_parquet(
must return a bool, True to read the partition or False to ignore it.
Ignored if `dataset=False`.
E.g ``lambda x: True if x["year"] == "2020" and x["month"] == "1" else False``
-https://aws-data-wrangler.readthedocs.io/en/3.0.0b1/tutorials/023%20-%20Flexible%20Partitions%20Filter.html
+https://aws-data-wrangler.readthedocs.io/en/3.0.0b2/tutorials/023%20-%20Flexible%20Partitions%20Filter.html
columns : List[str], optional
List of columns to read from the file(s).
validate_schema : bool, default False
@@ -647,7 +647,7 @@ def read_parquet_table(
must return a bool, True to read the partition or False to ignore it.
Ignored if `dataset=False`.
E.g ``lambda x: True if x["year"] == "2020" and x["month"] == "1" else False``
-https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/tutorials/023%20-%20Flexible%20Partitions%20Filter.html
+https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/tutorials/023%20-%20Flexible%20Partitions%20Filter.html
columns : List[str], optional
List of columns to read from the file(s).
validate_schema : bool, default False
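The `partition_filter` callback documented in these hunks receives each partition as a dict of strings and returns a bool. A short sketch mirroring the docstring's own example; the bucket path is a placeholder.

```py3
import awswrangler as wr

# Partition push-down: keep only the year=2020/month=1 partitions.
df = wr.s3.read_parquet(
    path="s3://my-bucket/my-dataset/",
    dataset=True,
    partition_filter=lambda x: x["year"] == "2020" and x["month"] == "1",
)
```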
6 changes: 3 additions & 3 deletions awswrangler/s3/_read_text.py
@@ -256,7 +256,7 @@ def read_csv(
This function MUST return a bool, True to read the partition or False to ignore it.
Ignored if `dataset=False`.
E.g ``lambda x: True if x["year"] == "2020" and x["month"] == "1" else False``
-https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/tutorials/023%20-%20Flexible%20Partitions%20Filter.html
+https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/tutorials/023%20-%20Flexible%20Partitions%20Filter.html
parallelism : int, optional
The requested parallelism of the read. Only used when `distributed` add-on is installed.
Parallelism may be limited by the number of files of the dataset. 200 by default.
@@ -409,7 +409,7 @@ def read_fwf(
This function MUST return a bool, True to read the partition or False to ignore it.
Ignored if `dataset=False`.
E.g ``lambda x: True if x["year"] == "2020" and x["month"] == "1" else False``
-https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/tutorials/023%20-%20Flexible%20Partitions%20Filter.html
+https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/tutorials/023%20-%20Flexible%20Partitions%20Filter.html
parallelism : int, optional
The requested parallelism of the read. Only used when `distributed` add-on is installed.
Parallelism may be limited by the number of files of the dataset. 200 by default.
@@ -566,7 +566,7 @@ def read_json(
This function MUST return a bool, True to read the partition or False to ignore it.
Ignored if `dataset=False`.
E.g ``lambda x: True if x["year"] == "2020" and x["month"] == "1" else False``
-https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/tutorials/023%20-%20Flexible%20Partitions%20Filter.html
+https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/tutorials/023%20-%20Flexible%20Partitions%20Filter.html
parallelism : int, optional
The requested parallelism of the read. Only used when `distributed` add-on is installed.
Parallelism may be limited by the number of files of the dataset. 200 by default.
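The same `partition_filter` contract applies to the text readers touched here. A brief sketch for `read_csv`; the path and separator are assumptions, and extra keyword arguments are forwarded to `pandas.read_csv`.

```py3
import awswrangler as wr

# Same Dict[str, str] -> bool contract as the Parquet reader.
df = wr.s3.read_csv(
    path="s3://my-bucket/csv-dataset/",
    dataset=True,
    partition_filter=lambda x: x["year"] == "2020",
    sep=";",  # forwarded to pandas.read_csv
)
```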
6 changes: 3 additions & 3 deletions awswrangler/s3/_write_parquet.py
@@ -375,18 +375,18 @@ def to_parquet( # pylint: disable=too-many-arguments,too-many-locals,too-many-b
concurrent_partitioning: bool
If True will increase the parallelism level during the partitions writing. It will decrease the
writing time and increase the memory usage.
-https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/tutorials/022%20-%20Writing%20Partitions%20Concurrently.html
+https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/tutorials/022%20-%20Writing%20Partitions%20Concurrently.html
mode: str, optional
``append`` (Default), ``overwrite``, ``overwrite_partitions``. Only takes effect if dataset=True.
For details check the related tutorial:
-https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/tutorials/004%20-%20Parquet%20Datasets.html
+https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/tutorials/004%20-%20Parquet%20Datasets.html
catalog_versioning : bool
If True and `mode="overwrite"`, creates an archived version of the table catalog before updating it.
schema_evolution : bool
If True allows schema evolution (new or missing columns), otherwise a exception will be raised. True by default.
(Only considered if dataset=True and mode in ("append", "overwrite_partitions"))
Related tutorial:
-https://aws-sdk-pandas.readthedocs.io/en/3.0.0b1/tutorials/014%20-%20Schema%20Evolution.html
+https://aws-sdk-pandas.readthedocs.io/en/3.0.0b2/tutorials/014%20-%20Schema%20Evolution.html
database : str, optional
Glue/Athena catalog: Database name.
table : str, optional
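The `concurrent_partitioning`, `mode`, and `schema_evolution` options described in this hunk combine as in the following hedged sketch; bucket, database, and table names are placeholders.

```py3
import awswrangler as wr
import pandas as pd

df = pd.DataFrame({"id": [1, 2], "year": ["2020", "2021"], "value": [0.1, 0.2]})

# Overwrite only the partitions present in df, allow new columns in the
# catalog, and write partitions concurrently (faster, but more memory).
wr.s3.to_parquet(
    df=df,
    path="s3://my-bucket/dataset/",
    dataset=True,
    partition_cols=["year"],
    mode="overwrite_partitions",
    schema_evolution=True,
    concurrent_partitioning=True,
    database="my_db",
    table="my_table",
)
```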