Version 0.1.0 (#41)
* Extended unit tests to classifier and fixed pooling (#17)

* Extended unit tests to classifier and fixed pooling

* Changed trigger of doctest workflow

* Fixing issue #18

* fixed linters

* Add pre-commit hooks

* Doctest only on PRs

* Fixed network conversion from GPU

Also tested on a Windows machine.

* Create python_versions.yml

* Update and rename python_versions.yml to tests.yml

* Update export.yaml

* CI fix (#21)

* Create pre-commit.yaml

* remove code.yaml

* fixing pre-commit

* Doctest with pytest

* change trigger

* change trigger

* Delete LICENSE

* checkpoint from filesystem (#20)

* checkpoint from filesystem

* fixed deps

* Update README.md

* Update LICENSE

* Updating LICENSE

---------

Co-authored-by: fpaissan <[email protected]>
Co-authored-by: Francesco Paissan <[email protected]>

* Create LICENSE (#22)

* Update README.md (#23)

* new min python version to 3.8

* 🐛 extra_requirements now have a version - fixed CI (#24)

* 🐛 extra_requirements now have a version

* fixed linter errors

* testing actions

* fixed linter

* removing tf_probability

* fixed tf prob version

---------

Co-authored-by: fpaissan <[email protected]>

* Documentation upgrade - guide for contribution (#25)

* add contribution guide to docs

* documentation with contribution guide

* cosmetic

* bump version 0.0.4 -> 0.0.5

* Bump requests from 2.28.2 to 2.31.0 (#27)

Bumps [requests](https://github.com/psf/requests) from 2.28.2 to 2.31.0.
- [Release notes](https://github.com/psf/requests/releases)
- [Changelog](https://github.com/psf/requests/blob/main/HISTORY.md)
- [Commits](psf/requests@v2.28.2...v2.31.0)

---
updated-dependencies:
- dependency-name: requests
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* fix pypi release

* Update README.md (#29)

* Patch for faster GPU inference (#35)

* Patch for faster GPU inference

* remove unused zeropad def

---------

Co-authored-by: fpaissan <[email protected]>

* initial commit

* add eval loop

* add acceleration

* modules as dict

* add checkpointer

* minor

* load best checkpoint

* restore epoch, optimizer, lr sched

* fix logging on multi-gpu

* minor fixes

* working on single gpu

* fix checkpointer + multi-gpu

* fp16 might not be ok yet

* load_modules and unwrap_model

* fixed convert and export

* cosmetic on export

* add argparse

* add metrics -- check something is off with acc

* fix strange print output

* fixed checkpointer viz

* fix checkpointers and metrics

* cosmetic

* linters

* add credits

* fix requirements

* fix unittest

* remove recipes

* remove unused files

* remove unused functions from networks

* fix tests

* hot fix

* onnx conversion without convert

* fix requirements

* add default class config and temp folder for debug mode

* add doc for class Metric

* finish doc MicroMind

* update docs

* linters fix

* new initial page

* bump version 0.0.5 -> 0.1.0

* final touches and bumpver

---------

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: Matteo Beltrami <[email protected]>
Co-authored-by: SebastianCavada <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Matteo Tremonti <[email protected]>
Co-authored-by: Matteo Beltrami <[email protected]>
6 people authored Oct 12, 2023
1 parent b4747c1 commit d318e0b
Showing 23 changed files with 1,097 additions and 2,038 deletions.
8 changes: 8 additions & 0 deletions README.md
@@ -45,6 +45,14 @@ for the basic install. To install `micromind` with the full exportability features
pip install -e .[conversion]
```

### Training networks with recipes

After the installation, get started by looking at the examples and the docs!

### Export your model and run it on your MCU
Check out [this](https://docs.google.com/document/d/1zt5urvNtI9VSJcoJdIeo10YrdH-tZNcS4JHbT1z5udI/edit?usp=sharing)
tutorial and have fun deploying your network on an MCU!

---------------------------------------------------------------------------------------------------------

## 📧 Contact
68 changes: 68 additions & 0 deletions docs/source/index.rst
@@ -6,6 +6,74 @@
Welcome to micromind's documentation!
=====================================

.. image:: https://img.shields.io/badge/python-3.9%20|%203.10-blue
:target: https://www.python.org/downloads/

.. image:: https://img.shields.io/badge/License-Apache_2.0-blue.svg
:target: https://github.com/fpaissan/micromind/blob/main/LICENSE

.. image:: https://img.shields.io/pypi/v/micromind

This is the official repository of `micromind`, a toolkit that aims to bridge two communities: artificial intelligence and embedded systems. `micromind` is based on `PyTorch <https://pytorch.org>`_ and provides exportability for the supported models in ONNX, Intel OpenVINO, and TFLite.

Key Features
------------

- Smooth flow from research to deployment;
- Support for multimedia analytics recipes (image classification, sound event detection, etc.);
- Detailed API documentation;
- Tutorials for embedded deployment.

Installation
------------

Using Pip
~~~~~~~~~

First of all, install `Python 3.8 or later <https://www.python.org>`_. Open a terminal and run:

.. code:: shell

   pip install micromind

for the basic install. To install `micromind` with the full exportability features, run

.. code:: shell

   pip install micromind[conversion]

Basic how-to
------------

To launch a simple training of an image classification model, you just need to define a class that extends `MicroMind <https://micromind.readthedocs.org/en/latest/micromind.html#micromind.core.MicroMind>`_: specify the modules you want to use (such as a `PhiNet`), the model's forward method, and the way your loss function is computed. micromind takes care of the rest for you.

.. code-block:: python

   class ImageClassification(MicroMind):
       def __init__(self, *args, **kwargs):
           super().__init__(*args, **kwargs)

           self.modules["classifier"] = PhiNet(
               (3, 32, 32), include_top=True, num_classes=10
           )

       def forward(self, batch):
           return self.modules["classifier"](batch[0])

       def compute_loss(self, pred, batch):
           return nn.CrossEntropyLoss()(pred, batch[1])

Afterwards, you can export the model to the format you like best between **ONNX**, **TFLite**, and **OpenVINO**; just run this simple code:

.. code-block:: python

   m = ImageClassification()
   m.export("output_onnx", "onnx", (3, 32, 32))

This Python `file <https://github.com/micromind-toolkit/micromind/blob/mm_refactor/examples/mind.py>`_ in our repository illustrates how to use the MicroMind class.

.. toctree::
:maxdepth: 2
:caption: Contents:
21 changes: 0 additions & 21 deletions docs/source/micromind.conversion.rst

This file was deleted.

8 changes: 0 additions & 8 deletions docs/source/micromind.networks.rst
@@ -11,11 +11,3 @@ micromind.networks.phinet module
:members:
:undoc-members:
:show-inheritance:

Module contents
---------------

.. automodule:: micromind.networks
:members:
:undoc-members:
:show-inheritance:
27 changes: 17 additions & 10 deletions docs/source/micromind.rst
@@ -1,20 +1,27 @@
micromind package
=================

micromind.core module
---------------------

.. automodule:: micromind.core
:members:
:undoc-members:
:show-inheritance:

micromind.convert module
------------------------

.. automodule:: micromind.convert
:members:
:undoc-members:
:show-inheritance:

Subpackages
-----------

.. toctree::
:maxdepth: 4
:maxdepth: 2

micromind.conversion
micromind.networks
micromind.utils

Module contents
---------------

.. automodule:: micromind
:members:
:undoc-members:
:show-inheritance:
20 changes: 14 additions & 6 deletions docs/source/micromind.utils.rst
@@ -4,18 +4,26 @@ micromind.utils package
Submodules
----------

micromind.utils.configlib module
--------------------------------
micromind.utils.checkpointer module
-----------------------------------

.. automodule:: micromind.utils.configlib
.. automodule:: micromind.utils.checkpointer
:members:
:undoc-members:
:show-inheritance:

Module contents
---------------
micromind.utils.helpers module
------------------------------

.. automodule:: micromind.utils
.. automodule:: micromind.utils.helpers
:members:
:undoc-members:
:show-inheritance:

micromind.utils.parse module
----------------------------

.. automodule:: micromind.utils.parse
:members:
:undoc-members:
:show-inheritance:
67 changes: 67 additions & 0 deletions examples/mind.py
@@ -0,0 +1,67 @@
from micromind import MicroMind, Metric
from micromind.networks import PhiNet
from micromind.utils.parse import parse_arguments

import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms

batch_size = 128


class ImageClassification(MicroMind):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)

self.modules["classifier"] = PhiNet(
(3, 32, 32), include_top=True, num_classes=10
)

def forward(self, batch):
return self.modules["classifier"](batch[0])

def compute_loss(self, pred, batch):
return nn.CrossEntropyLoss()(pred, batch[1])


if __name__ == "__main__":
hparams = parse_arguments()
m = ImageClassification(hparams)

def compute_accuracy(pred, batch):
tmp = (pred.argmax(1) == batch[1]).float()
return tmp

transform = transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]
)

trainset = torchvision.datasets.CIFAR10(
root="data/cifar-10", train=True, download=True, transform=transform
)
trainloader = torch.utils.data.DataLoader(
trainset, batch_size=batch_size, shuffle=True, num_workers=1
)

testset = torchvision.datasets.CIFAR10(
root="data/cifar-10", train=False, download=True, transform=transform
)
testloader = torch.utils.data.DataLoader(
testset, batch_size=batch_size, shuffle=False, num_workers=1
)

acc = Metric(name="accuracy", fn=compute_accuracy)

m.train(
epochs=10,
datasets={"train": trainloader, "val": testloader, "test": testloader},
metrics=[acc],
debug=hparams.debug,
)

m.test(
datasets={"test": testloader},
)

m.export("output_onnx", "onnx", (3, 32, 32))
6 changes: 2 additions & 4 deletions micromind/__init__.py
@@ -1,9 +1,7 @@
from .networks.phinet import PhiNet
from .utils import configlib

from .core import MicroMind, Metric, Stage

# Package version
__version__ = "0.0.5"
__version__ = "0.1.0"

"""datasets_info is a dictionary that contains information about the attributes
of the datasets.
1 change: 0 additions & 1 deletion micromind/conversion/__init__.py

This file was deleted.

