update to latest talos master
lukaspie committed Feb 6, 2024
2 parents e8a4621 + ef303b5 commit 12cecf7
Showing 58 changed files with 754 additions and 700 deletions.
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/bug-report.md
@@ -31,7 +31,7 @@ Thank you very much for reporting a bug on Talos. Before you do, please go throu
- [ ] My bug report includes a `Scan()` command
- [ ] My bug report question includes a link to a sample of the data

NOTE: If the data is sensitive and can't be shared, [create dummy data](https://scikit-learn.org/stable/modules/classes.html#samples-generator) that mimics it.
NOTE: If the data is sensitive and can't be shared, [create dummy data](https://scikit-learn.org/stable/modules/classes.html#samples-generator) that mimics it or provide a command for generating it.

**A self-contained Jupyter Notebook, Google Colab, or similar is highly preferred and will speed up helping you with your issue.**
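As a sketch of generating such dummy data (assuming scikit-learn's `make_classification`; the shapes below are placeholders to adapt to the real data):

```python
from sklearn.datasets import make_classification

# Generate dummy data mimicking the real dataset's shape:
# 100 samples, 10 features, binary target. Adjust to your case.
x, y = make_classification(n_samples=100, n_features=10, random_state=42)
print(x.shape, y.shape)
```

Sharing a snippet like this alongside the bug report lets others reproduce the issue without access to the sensitive data.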

6 changes: 5 additions & 1 deletion .github/PULL_REQUEST_TEMPLATE.md
@@ -8,11 +8,15 @@ the below items:
- [ ] I'm aware of the implications of the proposed changes
- [ ] Code is [PEP8](https://www.python.org/dev/peps/pep-0008/)
- [ ] I'm making the PR to `master`
- [ ] I've updated the versions based on [Semantic Versioning](https://semver.org/)
- [ ] `setup.py`
- [ ] `talos/__init__.py`
- [ ] `docs/index.html`
- [ ] `docs/_coverpage.md`

#### Docs

- [ ] [Docs](https://autonomio.github.io/talos) are updated
- [ ] [Docs](https://autonomio.github.io/talos) version is correct (index.html and \_coverpage.md)

#### Tests

4 changes: 2 additions & 2 deletions .github/workflows/ci.yml
@@ -9,7 +9,7 @@ jobs:
strategy:
max-parallel: 9
matrix:
python-version: [3.6, 3.7]
python-version: [3.7, 3.8]
os: [ubuntu-latest, macos-latest]

steps:
@@ -33,7 +33,7 @@ jobs:
- name: Tests
run: |
export MPLBACKEND=agg
pip install tensorflow
pip install "tensorflow>=2.0"
pip install coveralls
coverage run --source=talos ./test-ci.py
- name: Coverage
13 changes: 0 additions & 13 deletions .github/workflows/greetings.yml

This file was deleted.

25 changes: 25 additions & 0 deletions .pep8speaks.yml
@@ -0,0 +1,25 @@
# File : .pep8speaks.yml

scanner:
diff_only: False
linter: flake8

flake8: # Same as scanner.linter value. Other option is pycodestyle
max-line-length: 88 # Default is 79 in PEP 8
ignore: # Errors and warnings to ignore
- W504 # line break after binary operator
- E402 # module level import not at top of file
- E731 # do not assign a lambda expression, use a def
- C406 # Unnecessary list literal - rewrite as a dict literal.

no_blank_comment: True # If True, no comment is made on PR without any errors.
descending_issues_order: False # If True, PEP 8 issues in message will be displayed in descending order of line numbers in the file

message:
opened:
header: "Hello @{name}! Thanks for opening this PR."
footer: "Do see the [Hitchhiker's guide to code style](https://goo.gl/hqbW4r)"
updated:
header: "Hello @{name}! Thanks for updating this PR."
footer: ""
no_errors: "There are currently no PEP 8 issues detected in this PR. Great work! :heart:"
42 changes: 20 additions & 22 deletions README.md
@@ -4,7 +4,7 @@
<br>
</h1>

<h3 align="center">Hyperparameter Optimization for Keras</h3>
<h3 align="center">Hyperparameter Optimization for Keras, TensorFlow (tf.keras) and PyTorch</h3>

<p align="center">

@@ -19,19 +19,19 @@
</p>

<p align="center">
<a href="#Talos">Talos</a> •
<a href="#Key-Features">Key Features</a> •
<a href="#Examples">Examples</a> •
<a href="#Install">Install</a> •
<a href="#Support">Support</a> •
<a href="#talos">Talos</a> •
<a href="#wrench-key-features">Key Features</a> •
<a href="#arrow_forward-examples">Examples</a> •
<a href="#floppy_disk-install">Install</a> •
<a href="#speech_balloon-how-to-get-support">Support</a> •
<a href="https://autonomio.github.io/talos/">Docs</a> •
<a href="https://github.com/autonomio/talos/issues">Issues</a> •
<a href="#License">License</a> •
<a href="#page_with_curl-license">License</a> •
<a href="https://github.com/autonomio/talos/archive/master.zip">Download</a>
</p>
<hr>
<p align="center">
Talos radically changes the ordinary Keras workflow by <strong>fully automating hyperparameter tuning</strong> and <strong>model evaluation</strong>. Talos exposes Keras functionality entirely and there is no new syntax or templates to learn.
Talos radically changes the ordinary Keras, TensorFlow (tf.keras), and PyTorch workflow by <strong>fully automating hyperparameter tuning</strong> and <strong>model evaluation</strong>. Talos fully exposes Keras, TensorFlow (tf.keras), and PyTorch functionality, and there is no new syntax or templates to learn.
</p>
<p align="center">
<img src='https://i.ibb.co/3NFH646/keras-model-to-talos.gif' width=550px>
@@ -41,14 +41,14 @@ Talos radically changes the ordinary Keras workflow by <strong>fully automating

TL;DR

Talos radically transforms ordinary Keras workflows without taking away any of Keras.
Talos radically transforms ordinary Keras, TensorFlow (tf.keras), and PyTorch workflows without taking away any of their functionality.

- works with ANY Keras model
- works with ANY Keras, TensorFlow (tf.keras) or PyTorch model
- takes minutes to implement
- no new syntax to learn
- adds zero new overhead to your workflow

Talos is made for data scientists and data engineers that want to remain in **complete control of their Keras models**, but are tired of mindless parameter hopping and confusing optimization solutions that add complexity instead of reducing it. Within minutes, without learning any new syntax, Talos allows you to configure, perform, and evaluate hyperparameter optimization experiments that yield state-of-the-art results across a wide range of prediction tasks. Talos provides the **simplest and yet most powerful** available method for hyperparameter optimization with Keras.
Talos is made for data scientists and data engineers who want to remain in **complete control of their Keras, TensorFlow (tf.keras), and PyTorch models**, but are tired of mindless parameter hopping and confusing optimization solutions that add complexity instead of reducing it. Within minutes, without learning any new syntax, Talos allows you to configure, perform, and evaluate hyperparameter optimization experiments that yield state-of-the-art results across a wide range of prediction tasks. Talos provides the **simplest and yet most powerful** available method for hyperparameter optimization with TensorFlow (tf.keras) and PyTorch.

<hr>

@@ -74,7 +74,7 @@ Talos works on **Linux, Mac OSX**, and **Windows** systems and can be operated c

<hr>

### 📈 Examples
### :arrow_forward: Examples

Get the below code [here](https://gist.github.com/mikkokotila/4c0d6298ff0a22dc561fb387a1b4b0bb). More examples further below.

@@ -90,13 +90,13 @@ The *Simple* example below is more than enough for starting to use Talos with an

[Field Report](https://towardsdatascience.com/hyperparameter-optimization-with-keras-b82e6364ca53) [~15 mins]

For more information on how Talos can help with your Keras workflow, visit the [User Manual](https://autonomio.github.io/talos/).
For more information on how Talos can help with your Keras, TensorFlow (tf.keras) and PyTorch workflow, visit the [User Manual](https://autonomio.github.io/talos/).

You may also want to check out a visualization of the [Talos Hyperparameter Tuning workflow](https://github.com/autonomio/talos/wiki/Workflow).

<hr>

### 💾 Install
### :floppy_disk: Install

Stable version:

@@ -108,32 +108,30 @@ Daily development version:

<hr>

### 💬 How to get Support
### :speech_balloon: How to get Support

| I want to... | Go to... |
| -------------------------------- | ---------------------------------------------------------- |
| **...troubleshoot** | [Docs] · [Wiki] · [GitHub Issue Tracker] |
| **...report a bug** | [GitHub Issue Tracker] |
| **...suggest a new feature** | [GitHub Issue Tracker] |
| **...get support** | [Stack Overflow] · [Spectrum Chat] |
| **...have a discussion** | [Spectrum Chat] |
| **...get support** | [Stack Overflow] |

<hr>

### 📢 Citations
### :loudspeaker: Citations

If you use Talos for published work, please cite:

`Autonomio Talos [Computer software]. (2019). Retrieved from http://github.com/autonomio/talos.`
`Autonomio Talos [Computer software]. (2020). Retrieved from http://github.com/autonomio/talos.`

<hr>

### 📃 License
### :page_with_curl: License

[MIT License](https://github.com/autonomio/talos/blob/master/LICENSE)

[github issue tracker]: https://github.com/automio/talos/issues
[github issue tracker]: https://github.com/autonomio/talos/issues
[docs]: https://autonomio.github.io/talos/
[wiki]: https://github.com/autonomio/talos/wiki
[stack overflow]: https://stackoverflow.com/questions/tagged/talos
[spectrum chat]: https://spectrum.chat/talos
6 changes: 3 additions & 3 deletions docs/AutoParams.md
@@ -5,15 +5,15 @@
#### to automatically create a params dictionary

```python
p = talos.Autom8.AutoParams().params
p = talos.autom8.AutoParams().params

```
NOTE: The above example yields a very large permutation space so configure `Scan()` accordingly with `fraction_limit`.

#### an alternative way where a class object is returned

```python
param_object = talos.Autom8.AutoParams()
param_object = talos.autom8.AutoParams()

```

@@ -30,7 +30,7 @@ Now the modified params dictionary can be accessed through `params_object.params
#### to append a current parameter dictionary

```python
params_dict = talos.Autom8.AutoParams(p, task='multi_label').params
params_dict = talos.autom8.AutoParams(p, task='multi_label').params

```
NOTE: When the dictionary is created for a prediction task other than 'binary', the `task` argument has to be declared accordingly (`binary`, `multi_label`, `multi_class`, or `continuous`).
2 changes: 2 additions & 0 deletions docs/Deploy.md
@@ -21,6 +21,8 @@ Parameter | type | Description
`model_name` | str | Name for the .zip file to be created.
`metric` | str | The metric to be used for picking the best model.
`asc` | bool | Make this True for metrics that are to be minimized (e.g. loss)
`saved` | bool | whether a model saved on the local machine should be used
`custom_objects` | dict | dictionary of custom objects to pass if the model uses any

## Deploy Package Contents

16 changes: 9 additions & 7 deletions docs/Energy_Draw.md
@@ -7,15 +7,17 @@ A callback for recording GPU power draw (watts) on epoch begin and end. The call

### how-to-use

Use it as you would use any other Callback in Tensorflow or Keras.
Before `model.fit()` in the input model:

`power_draw = PowerDrawCallback()`
`power_draw = PowerDraw()`

`model.fit(...callbacks=[power_draw]...)`
Then use `power_draw` as you would callbacks in general:

It's possible to read the energy draw data:
`model.fit(...callbacks=[power_draw]...)`

`print(power_draw.logs)`
To get the energy draw data into the experiment log:

`history = talos.utils.power_draw_append(history, power_draw)`

NOTE: this line has to be after `model.fit()`.
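The callback pattern above can be sketched in plain Python (a stand-in illustration only: the real `PowerDraw` reads GPU power via NVML, and the class and loop below are hypothetical):

```python
import time

class FakePowerDraw:
    """Minimal stand-in for a PowerDraw-style callback: records a
    reading on epoch begin and end, accumulated in `logs`."""
    def __init__(self):
        self.logs = []

    def on_epoch_begin(self, epoch):
        self.logs.append(('begin', epoch, time.time()))

    def on_epoch_end(self, epoch):
        self.logs.append(('end', epoch, time.time()))

# A toy stand-in for the training loop, invoking the callback
# the way model.fit() invokes callbacks each epoch:
power_draw = FakePowerDraw()
for epoch in range(3):
    power_draw.on_epoch_begin(epoch)
    power_draw.on_epoch_end(epoch)
```

After the loop, `power_draw.logs` holds one begin and one end entry per epoch, which mirrors how the recorded data is later appended to the experiment log.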

3 changes: 3 additions & 0 deletions docs/Evaluate.md
@@ -24,11 +24,14 @@ Parameter | Default | Description
--------- | ------- | -----------
`x` | NA | the predictor data x
`y` | NA | the prediction data y (truth)
`task` | NA | One of the following strings: 'binary', 'multi_class', 'multi_label', or 'continuous'.
`model_id` | None | the model_id to be used
`folds` | None | number of folds to be used for cross-validation
`shuffle` | None | if data is shuffled before splitting
`average` | 'binary' | 'binary', 'micro', 'macro', 'samples', or 'weighted'
`metric` | None | the metric against which the validation is performed
`asc` | None | should be True if metric is a loss
`saved` | bool | whether a model saved on the local machine should be used
`custom_objects` | dict | dictionary of custom objects to pass if the model uses any

The above arguments are for the <code>evaluate</code> attribute of the <code>Evaluate</code> object.
2 changes: 2 additions & 0 deletions docs/Examples_Multiple_Inputs.md
@@ -24,6 +24,8 @@ x_train, y_train, x_val, y_val = wrangle.array_split(x, y, .5)
```
In the case of multi-input models, the data must be split into training and validation datasets before using it in `Scan()`. `x` is expected to be a list of numpy arrays and `y` a numpy array.

**NOTE:** For full support of Talos features for multi-input models, set `Scan(...multi_input=True...)`.
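For illustration, a two-input `x` can be prepared as a list of numpy arrays and split manually before `Scan()` (a numpy-only sketch; the array shapes and names are hypothetical):

```python
import numpy as np

# Two input branches with matching sample counts, plus a target
x1 = np.random.rand(100, 8)
x2 = np.random.rand(100, 4)
y = np.random.randint(0, 2, 100)

# Split every input array (and y) at the same index so samples stay aligned
split = 50
x_train = [x1[:split], x2[:split]]
x_val = [x1[split:], x2[split:]]
y_train, y_val = y[:split], y[split:]
```

The resulting `x_train` and `x_val` lists are what a multi-input model's `model.fit()` expects, and what gets passed to `Scan()` in this workflow.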

### Defining the Model
```python

2 changes: 1 addition & 1 deletion docs/Examples_Typical.md
@@ -62,7 +62,7 @@ p = {'activation':['relu', 'elu'],
'epochs': [10, 20]}
```

Note that the parameter dictionary allows either list of values, or tuples with range in the form `(min, max, step)`
Note that the parameter dictionary allows either lists of values, or tuples with a range in the form `(min, max, number_of_values)`.
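To make the two forms concrete, a sketch (the `expand_range` helper is illustrative only, not Talos' internal implementation):

```python
p = {
    'activation': ['relu', 'elu'],   # explicit list of values
    'batch_size': (20, 50, 5),       # tuple range: (min, max, number_of_values)
}

def expand_range(spec):
    """Illustrative helper: expand a (min, max, number_of_values)
    tuple into evenly spaced integer values."""
    lo, hi, n = spec
    step = (hi - lo) / (n - 1)
    return [int(round(lo + i * step)) for i in range(n)]
```

Here `expand_range(p['batch_size'])` yields five evenly spaced batch sizes from 20 to 50, which is how a tuple range can be read when building the permutation space.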


### Scan()
31 changes: 21 additions & 10 deletions docs/Examples_Typical_Code.md
@@ -3,18 +3,23 @@
# Typical Case Example

```python
import talos as ta
import talos
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

x, y = ta.templates.datasets.iris()
x, y = talos.templates.datasets.iris()

# define the model
def iris_model(x_train, y_train, x_val, y_val, params):

model = Sequential()

model.add(Dense(32, input_dim=4, activation=params['activation']))
model.add(Dense(3, activation='softmax'))
model.compile(optimizer=params['optimizer'], loss=params['losses'])

model.compile(optimizer=params['optimizer'],
loss=params['losses'],
metrics=[talos.utils.metrics.f1score])

out = model.fit(x_train, y_train,
batch_size=params['batch_size'],
@@ -24,14 +24,29 @@ def iris_model(x_train, y_train, x_val, y_val, params):

return out, model

# set the parameter space boundaries
p = {'activation':['relu', 'elu'],
'optimizer': ['Nadam', 'Adam'],
'losses': ['logcosh'],
'hidden_layers':[0, 1, 2],
'batch_size': (20, 50, 5),
'epochs': [10, 20]}

scan_object = ta.Scan(x, y, model=iris_model, params=p, fraction_limit=0.1)
'optimizer': ['Nadam', 'Adam'],
'losses': ['categorical_crossentropy'],
'epochs': [100, 200],
'batch_size': [4, 6, 8]}

# start the experiment
scan_object = talos.Scan(x=x,
y=y,
model=iris_model,
params=p,
experiment_name='iris',
round_limit=20)
```

`Scan()` always needs to have `x`, `y`, `model`, and `params` arguments declared. Find the description for all `Scan()` arguments [here](Scan.md#scan-arguments).
11 changes: 6 additions & 5 deletions docs/Monitoring.md
@@ -7,29 +7,30 @@ There are several options for monitoring the experiment.
Scan(disable_progress_bar=True)

# enable live training plot
from talos import live
from talos.callbacks import TrainingPlot

out = model.fit(X,
Y,
epochs=20,
callbacks=[live()])
callbacks=[TrainingPlot()])

# turn on parameter printing
Scan(print_params=True)
```

**Progress Bar :** A round-by-round updating progress bar that shows the remaining rounds, together with a time estimate to completion. Progress bar is on by default.

**Live Monitoring :** Live monitoring provides an epoch-by-epoch updating line graph that is enabled through the `live()` custom callback.
**Live Monitoring :** Live monitoring provides an epoch-by-epoch updating line graph that is enabled through the `TrainingPlot()` custom callback.

**Round Hyperparameters :** Displays the hyperparameters for each permutation. Does not work together with live monitoring.

### Local Monitoring

Epoch-by-epoch training data is available during the experiment using the `ExperimentLogCallback`:
Epoch-by-epoch training data is available during the experiment using the `ExperimentLog`:

```python
model.fit(...
callbacks=[talos.utils.ExperimentLogCallback('experiment_name', params)])
callbacks=[talos.callbacks.ExperimentLog('experiment_name', params)])
```
Here `params` is the params dictionary in the `Scan()` input model. Both
`experiment_name` and `experiment_id` should match with the current experiment,