Use a max line length of 80 universally (keras-team#1552)
* Enforces line length 80 Part 1

* Enforces line length 80 Part 2 and fixes typos

* Enforces line length 80 Part 3 and fixes typos

* Enforces line length 80 Part 4 and fixes typos

* Enforces line length 80 Part 5 and fixes typos

* Minor change

* Replaced double spaces with a single space

* Merge Conflicts

* Minor Changes

* Resolves Merge Conflicts

* Additional improvements + Changes requested

* Merging

* Minor improvements

---------

Co-authored-by: Your Name <haifeng-jin@users.noreply.github.com>
Co-authored-by: Haifeng Jin <5476582+haifeng-jin@users.noreply.github.com>
3 people authored Apr 7, 2023
1 parent d22091a commit 3aaeefc
Showing 265 changed files with 3,620 additions and 2,997 deletions.
34 changes: 17 additions & 17 deletions .github/API_DESIGN.md
@@ -2,34 +2,34 @@
In general, KerasCV abides to the
[API design guidelines of Keras](https://github.com/keras-team/governance/blob/master/keras_api_design_guidelines.md).

-There are a few API guidelines that apply only to KerasCV.  These are discussed
+There are a few API guidelines that apply only to KerasCV. These are discussed
in this document.

# Label Names
When working with `bounding_box` and `segmentation_map` labels the
-abbreviations `bbox` and `segm` are often used.  In KerasCV, we will *not* be
-using these abbreviations.  This is done to ensure full consistency in our
-naming convention.  While the team is fond of the abbreviation `bbox`, we are
-less fond of `segm`.  In order to ensure full consistency, we have decided to
+abbreviations `bbox` and `segm` are often used. In KerasCV, we will *not* be
+using these abbreviations. This is done to ensure full consistency in our
+naming convention. While the team is fond of the abbreviation `bbox`, we are
+less fond of `segm`. In order to ensure full consistency, we have decided to
use the full names for label types in our code base.

# Preprocessing Layers
## Strength Parameters
Many augmentation layers take a parameter representing a strength, often called
-`factor`. When possible, factor values must conform to a the range: `[0, 1]`, with
+`factor`. When possible, factor values must conform to the range: `[0, 1]`, with
1 representing the strongest transformation and 0 representing a no-op transform.
-The strength of an augmentation should scale linearly with this factor.  If needed,
-a transformation can be performed to map to a large value range internally.  If
+The strength of an augmentation should scale linearly with this factor. If needed,
+a transformation can be performed to map to a large value range internally. If
this is done, please provide a thorough explanation of the value range semantics in
the docstring.

-Additionally, factors should support both float and tuples as inputs.  If a float is
+Additionally, factors should support both float and tuples as inputs. If a float is
passed, such as `factor=0.5`, the layer should default to the range `[0, factor]`.
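
A minimal sketch of this convention, assuming an illustrative helper name
(`normalize_factor` is not the actual KerasCV utility):

```python
# Illustrative sketch of the factor convention described above; not the
# actual KerasCV implementation.
def normalize_factor(factor):
    """Map a float or 2-tuple `factor` to a (lower, upper) range in [0, 1]."""
    if isinstance(factor, (int, float)):
        # A plain float such as `factor=0.5` defaults to the range [0, factor].
        lower, upper = 0.0, float(factor)
    else:
        lower, upper = float(factor[0]), float(factor[1])
    if not 0.0 <= lower <= upper <= 1.0:
        raise ValueError(
            "`factor` values must conform to the range [0, 1]. "
            f"Got lower={lower}, upper={upper}"
        )
    return lower, upper
```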

## BaseImageAugmentationLayer
When implementing preprocessing, we encourage users to subclass the
-`keras_cv.layers.preprocessing.BaseImageAugmentationLayer`.  This layer provides
-a common `call()` method, auto vectorization, and more.
+`keras_cv.layers.preprocessing.BaseImageAugmentationLayer`. This layer provides
+a common `call()` method, auto vectorization, and more.

When subclassing `BaseImageAugmentationLayer`, several methods can overridden:

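For illustration, a hedged sketch of this subclassing pattern; `augment_image`
is shown as the per-image hook per the public API at the time, but check the
current documentation before relying on these names:

```python
# Hedged sketch of a BaseImageAugmentationLayer subclass; the base class
# handles batching, so `augment_image` receives one image at a time.
import keras_cv

class RandomInvert(keras_cv.layers.BaseImageAugmentationLayer):
    """Blends each image toward its inverse, scaled linearly by `factor`."""

    def __init__(self, factor=1.0, value_range=(0, 255), **kwargs):
        super().__init__(**kwargs)
        self.factor = factor
        self.value_range = value_range

    def augment_image(self, image, transformation=None, **kwargs):
        # factor=0 is a no-op and factor=1 is a full inversion, matching
        # the strength convention above.
        inverted = self.value_range[1] - image
        return image + self.factor * (inverted - image)
```
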
@@ -41,20 +41,20 @@ When subclassing `BaseImageAugmentationLayer`, several methods can overridden:

## Vectorization
`BaseImageAugmentationLayer` requires you to implement augmentations in an
-image-wise basis instead of using a vectorized approach.  This design choice
+image-wise basis instead of using a vectorized approach. This design choice
was based made on the results found in the
[vectorization\_strategy\_benchmark.py](../benchmarks/vectorization_strategy_benchmark.py)
benchmark.

In short, the benchmark shows that making use of `tf.vectorized_map()` performs
-almost identically to a manually vectorized implementation.  As such, we have
+almost identically to a manually vectorized implementation. As such, we have
decided to rely on `tf.vectorized_map()` for performance.
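
A small sketch of the pattern this implies, using the public
`tf.vectorized_map()` API:

```python
# Sketch: apply a per-image augmentation across a batch with
# tf.vectorized_map(); the benchmark found this on par with a manually
# vectorized implementation.
import tensorflow as tf

def augment_single_image(image):
    # Any image-wise augmentation; a horizontal flip is used for brevity.
    return tf.image.flip_left_right(image)

images = tf.random.uniform((8, 64, 64, 3))  # batch of 8 RGB images
augmented = tf.vectorized_map(augment_single_image, images)
```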

![Results of vectorization strategy benchmark](images/runtime-plot.png)

## Color Based Preprocessing Layers
-Some preprocessing layers in KerasCV perform color based transformations.  This
-includes `RandomBrightness`, `Equalize`, `Solarization`, and more.
+Some preprocessing layers in KerasCV perform color based transformations. This
+includes `RandomBrightness`, `Equalize`, `Solarization`, and more.
Preprocessing layers that perform color based transformations make the
following assumptions:

@@ -63,10 +63,10 @@ following assumptions:
- input images may be of any `dtype`

The decision to support inputs of any `dtype` is made based on the nuance that
-some Keras layers cast user inputs without the user knowing.  For example, if
+some Keras layers cast user inputs without the user knowing. For example, if
`Solarization` expected user inputs to be of type `int`, and a custom layer
was accidentally casting inputs to `float32`, it would be a bad user experience
-to raise an error asserting that all inputs must be of type `int`.
+to raise an error asserting that all inputs must be of type `int`.

New preprocessing layers should be consistent with these decisions.
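
A minimal sketch of this any-`dtype` convention, assuming a simplified
solarization transform (not the actual `Solarization` implementation):

```python
# Illustrative sketch: cast to a working dtype, transform, then restore
# the caller's dtype instead of raising on unexpected input dtypes.
import tensorflow as tf

def solarize_any_dtype(images, threshold=128):
    input_dtype = images.dtype
    images = tf.cast(images, tf.float32)  # accept int or float inputs
    images = tf.where(images >= threshold, 255.0 - images, images)
    return tf.cast(images, input_dtype)  # hand back the caller's dtype
```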

4 changes: 2 additions & 2 deletions .github/CALL_FOR_CONTRIBUTIONS.md
@@ -1,7 +1,7 @@
# Call For Contributions
Contributors looking for a task can look at the following list to find an item
-to work on.  Should you decide to contribute a component, please comment on the
-corresponding GitHub issue that you will be working on the component.  A team
+to work on. Should you decide to contribute a component, please comment on the
+corresponding GitHub issue that you will be working on the component. A team
member will then follow up by assigning the issue to you.

[There is a contributions welcome label available here](https://github.com/keras-team/keras-cv/issues?page=2&q=is%3Aissue+is%3Aopen+label%3Acontribution-welcome)
10 changes: 5 additions & 5 deletions .github/CONTRIBUTING.md
@@ -19,9 +19,9 @@ to open a PR without discussion.

### Step 2. Make code changes

-To make code changes, you need to fork the repository. You will need to setup a
+To make code changes, you need to fork the repository. You will need to set up a
development environment and run the unit tests. This is covered in section
"Setup environment".
"set up environment".

If your code change involves introducing a new API change, please see our
[API Design Guidelines](API_DESIGN.md).
@@ -43,7 +43,7 @@ The agreement can be found at [https://cla.developers.google.com/clas](https://c

### Step 5. Code review

-CI tests will automatically be run directly on your pull request.  Their
+CI tests will automatically be run directly on your pull request. Their
status will be reported back via GitHub actions.

There may be
@@ -92,7 +92,7 @@ We currently support only a small handful of ops that run on CPU and are not use

If you are updating existing custom ops, you can re-compile the binaries from source using the instructions in the `Tests that require custom ops` section below.

-## Setup environment
+## set up environment

Setting up your KerasCV development environment requires you to fork the KerasCV repository,
clone the repository, install dependencies, and execute `python setup.py develop`.
@@ -157,7 +157,7 @@ cp bazel-bin/keras_cv/custom_ops/*.so keras_cv/custom_ops/
Tests which use custom ops are disabled by default, but can be run by setting the environment variable `TEST_CUSTOM_OPS=true`.

## Formatting the Code
-We use `flake8`, `isort`, `black` and `clang-format` for code formatting.  You can run
+We use `flake8`, `isort`, `black` and `clang-format` for code formatting. You can run
the following commands manually every time you want to format your code:

- Run `shell/format.sh` to format your code
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/feature_request.md
@@ -19,7 +19,7 @@ Include citation counts if possible.

**Existing Implementations**
<!---
-Link to existing implementations.  TensorFlow implementations are preferred.
+Link to existing implementations. TensorFlow implementations are preferred.
-->

**Other Information**
2 changes: 1 addition & 1 deletion .github/ROADMAP.md
@@ -1,6 +1,6 @@
# Roadmap
The team will release 2 quarters of roadmap in advance so contributors will know
-what we are working on and be better aligned when creating PRs.
+what we are working on and be better aligned when creating PRs.
As an exception, widely used backbones are always welcome. Contributors can search for `contribution-welcome` label on github.
The team will release one minor version upgrade each quarter, or whenever a new task is officially supported.

8 changes: 4 additions & 4 deletions README.md
@@ -109,9 +109,9 @@ We would like to leverage/outsource the Keras community not only for bug reporti
but also for active development for feature delivery. To achieve this, here is the predefined
process for how to contribute to this repository:

-1) Contributors are always welcome to help us fix an issue, add tests, better documentation.
+1) Contributors are always welcome to help us fix an issue, add tests, better documentation.
2) If contributors would like to create a backbone, we usually require a pre-trained weight set
-with the model for one dataset as the first PR, and a training script as a follow-up. The training script will preferrably help us reproduce the results claimed from paper. The backbone should be generic but the training script can contain paper specific parameters such as learning rate schedules and weight decays. The training script will be used to produce leaderboard results.
+with the model for one dataset as the first PR, and a training script as a follow-up. The training script will preferrably help us reproduce the results claimed from paper. The backbone should be generic but the training script can contain paper specific parameters such as learning rate schedules and weight decays. The training script will be used to produce leaderboard results.
Exceptions apply to large transformer-based models which are difficult to train. If this is the case,
contributors should let us know so the team can help in training the model or providing GCP resources.
3) If contributors would like to create a meta arch, please try to be aligned with our roadmap and create a PR for design review to make sure the meta arch is modular.
@@ -137,7 +137,7 @@ An example of this can be found in the ImageNet classification training
All results are reproducible using the training scripts in this repository.

Historically, many models have been trained on image datasets rescaled via manually
-crafted normalization schemes.
+crafted normalization schemes.
The most common variant of manually crafted normalization scheme is subtraction of the
imagenet mean pixel followed by standard deviation normalization based on the imagenet
pixel standard deviation.
@@ -158,7 +158,7 @@ instructions below.
### Installing KerasCV with Custom Ops from Source

Installing custom ops from source requires the [Bazel](https://bazel.build/) build
-system (version >= 5.4.0).  Steps to install Bazel can be [found here](https://github.com/keras-team/keras/blob/v2.11.0/.devcontainer/Dockerfile#L21-L23).
+system (version >= 5.4.0). Steps to install Bazel can be [found here](https://github.com/keras-team/keras/blob/v2.11.0/.devcontainer/Dockerfile#L21-L23).

```
git clone https://github.com/keras-team/keras-cv.git
@@ -18,9 +18,9 @@ def produce_random_data(
"""Generates a fake list of bounding boxes for use in this test.
Returns:
-a tensor list of size [128, 25, 5/6]. This represents 128 images, 25 bboxes
-and 5/6 dimensions to represent each bbox depending on if confidence is
-set.
+a tensor list of size [128, 25, 5/6]. This represents 128 images, 25
+bboxes and 5/6 dimensions to represent each bbox depending on if
+confidence is set.
"""
images = []
for _ in range(num_images):
6 changes: 3 additions & 3 deletions benchmarks/metrics/coco/mean_average_precision_performance.py
@@ -18,9 +18,9 @@ def produce_random_data(
"""Generates a fake list of bounding boxes for use in this test.
Returns:
-a tensor list of size [128, 25, 5/6]. This represents 128 images, 25 bboxes
-and 5/6 dimensions to represent each bbox depending on if confidence is
-set.
+a tensor list of size [128, 25, 5/6]. This represents 128 images, 25
+bboxes and 5/6 dimensions to represent each bbox depending on if
+confidence is set.
"""
images = []
for _ in range(num_images):
6 changes: 3 additions & 3 deletions benchmarks/metrics/coco/recall_performance.py
@@ -18,9 +18,9 @@ def produce_random_data(
"""Generates a fake list of bounding boxes for use in this test.
Returns:
-a tensor list of size [128, 25, 5/6]. This represents 128 images, 25 bboxes
-and 5/6 dimensions to represent each bbox depending on if confidence is
-set.
+a tensor list of size [128, 25, 5/6]. This represents 128 images, 25
+bboxes and 5/6 dimensions to represent each bbox depending on if
+confidence is set.
"""
images = []
for _ in range(num_images):
23 changes: 12 additions & 11 deletions benchmarks/vectorization_strategy_benchmark.py
@@ -78,12 +78,13 @@ def fill_single_rectangle(
"""Fill rectangles with fill value into images.
Args:
-images: Tensor of images to fill rectangles into.
+image: Tensor of images to fill rectangles into.
centers_x: Tensor of positions of the rectangle centers on the x-axis.
centers_y: Tensor of positions of the rectangle centers on the y-axis.
widths: Tensor of widths of the rectangles
heights: Tensor of heights of the rectangles
-fill_values: Tensor with same shape as images to get rectangle fill from.
+fill_values: Tensor with same shape as images to get rectangle fill
+from.
Returns:
images with filled rectangles.
"""
@@ -127,7 +128,7 @@ def __init__(
if fill_mode not in ["gaussian_noise", "constant"]:
raise ValueError(
'`fill_mode` should be "gaussian_noise" '
f'or "constant". Got `fill_mode`={fill_mode}'
f'or "constant". Got `fill_mode`={fill_mode}'
)

if not isinstance(self.height_lower, type(self.height_upper)):
@@ -307,7 +308,7 @@ def __init__(
if fill_mode not in ["gaussian_noise", "constant"]:
raise ValueError(
'`fill_mode` should be "gaussian_noise" '
f'or "constant". Got `fill_mode`={fill_mode}'
f'or "constant". Got `fill_mode`={fill_mode}'
)

if not isinstance(self.height_lower, type(self.height_upper)):
@@ -481,7 +482,7 @@ def __init__(
if fill_mode not in ["gaussian_noise", "constant"]:
raise ValueError(
'`fill_mode` should be "gaussian_noise" '
f'or "constant". Got `fill_mode`={fill_mode}'
f'or "constant". Got `fill_mode`={fill_mode}'
)

if not isinstance(self.height_lower, type(self.height_upper)):
@@ -657,7 +658,7 @@ def __init__(
if fill_mode not in ["gaussian_noise", "constant"]:
raise ValueError(
'`fill_mode` should be "gaussian_noise" '
f'or "constant". Got `fill_mode`={fill_mode}'
f'or "constant". Got `fill_mode`={fill_mode}'
)

if not isinstance(self.height_lower, type(self.height_upper)):
@@ -837,7 +838,7 @@ def __init__(
if fill_mode not in ["gaussian_noise", "constant"]:
raise ValueError(
'`fill_mode` should be "gaussian_noise" '
f'or "constant". Got `fill_mode`={fill_mode}'
f'or "constant". Got `fill_mode`={fill_mode}'
)

if not isinstance(self.height_lower, type(self.height_upper)):
@@ -1011,7 +1012,7 @@ def __init__(
if fill_mode not in ["gaussian_noise", "constant"]:
raise ValueError(
'`fill_mode` should be "gaussian_noise" '
f'or "constant". Got `fill_mode`={fill_mode}'
f'or "constant". Got `fill_mode`={fill_mode}'
)

if not isinstance(self.height_lower, type(self.height_upper)):
@@ -1234,7 +1235,7 @@ def get_config(self):
# Extra notes
## Warnings
-it would be really annoying as a user to use an official keras_cv component and get
-warned that "RandomUniform" or "RandomUniformInt" inside pfor may not get the same
-output.
+it would be really annoying as a user to use an official keras_cv component and
+get warned that "RandomUniform" or "RandomUniformInt" inside pfor may not get
+the same output.
"""
8 changes: 4 additions & 4 deletions benchmarks/vectorized_auto_contrast.py
@@ -28,15 +28,15 @@ class OldAutoContrast(BaseImageAugmentationLayer):
"""Performs the AutoContrast operation on an image.
Auto contrast stretches the values of an image across the entire available
-`value_range`. This makes differences between pixels more obvious. An example of
-this is if an image only has values `[0, 1]` out of the range `[0, 255]`, auto
-contrast will change the `1` values to be `255`.
+`value_range`. This makes differences between pixels more obvious. An
+example of this is if an image only has values `[0, 1]` out of the range
+`[0, 255]`, auto contrast will change the `1` values to be `255`.
Args:
value_range: the range of values the incoming images will have.
Represented as a two number tuple written [low, high].
This is typically either `[0, 1]` or `[0, 255]` depending
-on how your preprocessing pipeline is setup.
+on how your preprocessing pipeline is set up.
"""

def __init__(
12 changes: 7 additions & 5 deletions benchmarks/vectorized_channel_shuffle.py
@@ -38,15 +38,17 @@ class OldChannelShuffle(BaseImageAugmentationLayer):
`(..., height, width, channels)`, in `"channels_last"` format
Args:
-groups: Number of groups to divide the input channels. Default 3.
+groups: Number of groups to divide the input channels, defaults to 3.
seed: Integer. Used to create a random seed.
Call arguments:
inputs: Tensor representing images of shape
-`(batch_size, width, height, channels)`, with dtype tf.float32 / tf.uint8,
-` or (width, height, channels)`, with dtype tf.float32 / tf.uint8
-training: A boolean argument that determines whether the call should be run
-in inference mode or training mode. Default: True.
+`(batch_size, width, height, channels)`, with dtype
+tf.float32 / tf.uint8,
+` or (width, height, channels)`, with dtype
+tf.float32 / tf.uint8
+training: A boolean argument that determines whether the call should be
+run in inference mode or training mode, defaults to True.
Usage:
```python
3 changes: 2 additions & 1 deletion benchmarks/vectorized_grayscale.py
@@ -25,7 +25,8 @@


class OldGrayscale(BaseImageAugmentationLayer):
"""Grayscale is a preprocessing layer that transforms RGB images to Grayscale images.
"""Grayscale is a preprocessing layer that transforms RGB images to
Grayscale images.
Input images should have values in the range of [0, 255].
Input shape:
3D (unbatched) or 4D (batched) tensor with shape: