This repo contains the following tools for running MLPerf benchmarks:

  • eval.py: For the MLPerf Tiny visual wake words (VWW) benchmark, this script downloads the dataset from Silabs and runs both TFLite reference models (int8 and float) on the 1000 images listed in y_labels.csv to measure their accuracy.
  • eval.ipynb: Jupyter notebook generated from eval.py; click here to run it in your browser.
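The accuracy measurement that eval.py performs can be sketched roughly as follows. This is a minimal illustration, not the actual script: the CSV column layout (filename, label) and the `predict` function standing in for TFLite inference are assumptions.

```python
import csv
import io

def measure_accuracy(csv_text, predict):
    """Return the fraction of rows where predict(filename) matches the label.

    csv_text: CSV content with rows of the form "filename,label"
              (assumed layout for y_labels.csv; illustrative only).
    predict:  callable taking a filename and returning a class index,
              standing in for running a TFLite interpreter on the image.
    """
    rows = list(csv.reader(io.StringIO(csv_text)))
    correct = sum(1 for fname, label in rows if predict(fname) == int(label))
    return correct / len(rows)

# Toy stand-in for a model: classify by filename prefix.
def toy_predict(fname):
    return 1 if fname.startswith("person") else 0

labels = "person_001.jpg,1\nperson_002.jpg,1\nempty_001.jpg,0\n"
print(measure_accuracy(labels, toy_predict))  # 1.0
```

In the real script each `predict` call would resize the image, quantize it for the int8 model, and invoke the interpreter; the accuracy bookkeeping itself is the simple loop above.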