From fbdd0e87cde616fc5b5f206d414721d64a2b7540 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Thomas=20Hamb=C3=B6ck?=
Date: Mon, 21 Mar 2022 13:36:28 +0000
Subject: [PATCH 1/3] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 565fb5a..34413f2 100644
--- a/README.md
+++ b/README.md
@@ -18,7 +18,7 @@
 For deployment
 - Ubuntu 18.04 or 20.4
 
-For taining
+For training
 - GPU enabled device (recommended)
 - [hailo.ai](www.hailo.ai) developer account
 - [docker](https://docs.docker.com/engine/install/ubuntu/)

From d8809ba7e7a49c04531c436d044a9af3aff5d681 Mon Sep 17 00:00:00 2001
From: Raphael Maenle
Date: Mon, 21 Mar 2022 14:15:34 +0000
Subject: [PATCH 2/3] Update README.md

---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index 34413f2..f5e3650 100644
--- a/README.md
+++ b/README.md
@@ -58,6 +58,7 @@ Here you'll train, quantize, compile (on a gpu if possible) and infer (on the ha
   explains well on how to create that.
 - There's a minimal example dataset in this repository under `/dataset`
   - To mount this, use eg.: `docker run -it --gpus all -ipc=host -v /path/to/dataset/:/dataset yolov5:v0`
+  - `python train.py --img 640 --batch 16 --epochs 3 --data /dataset/dataset/dataset.yaml --weights yolov5m.pt --cfg models/yolov5m.yaml`
 - For training, make sure you target the correct `--model` and use the correct `--weights` (which are now conveniently already in the hailo docker)
 - once you've saved the best.pb onnx file, you can exit this docker container
 - once you are done with the steps 'training and exporting to ONNX', move on to the next step.
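The `--data` flag in the `train.py` line added by patch 2 points at a dataset yaml. As a hedged sketch only (the class count, class names, and image paths below are placeholders — the real values depend on your dataset and on how `/dataset` was mounted), a minimal YOLOv5 dataset yaml typically looks like:

```yaml
# Placeholder YOLOv5 dataset description — adjust paths and classes to your data.
train: /dataset/images/train   # directory of training images
val: /dataset/images/val       # directory of validation images

nc: 2                          # number of classes in the dataset
names: ['class_a', 'class_b']  # one name per class index
```

Label files are expected alongside the images in a parallel `labels/` directory, per the usual YOLOv5 layout.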
From cce1dd1c842cb14d494a4ac59ef4cb34b94989f0 Mon Sep 17 00:00:00 2001
From: Raphael Maenle
Date: Mon, 21 Mar 2022 14:27:19 +0000
Subject: [PATCH 3/3] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index f5e3650..216f30f 100644
--- a/README.md
+++ b/README.md
@@ -90,7 +90,7 @@
 Further Notes and Examples:
 
 # Other Networks
 - For yolov5m, Hailo provides a configuration yaml which defines the quantization levels for the various networks. If you have a custom network you will have to define your custom yaml file. I liked using [netron](https://github.com/lutzroeder/Netron) to visualize the network for that.
-- Anything the hailo\_model\_zoo provides a configuration file for (under `hailo_odel_zoo/cfg/networks/`) can be easily trained and deployed using the process
+- Anything the hailo\_model\_zoo provides a configuration file for (under `hailo_model_zoo/cfg/networks/`) can be easily trained and deployed using the process
   described above.
 - Also check out the [Hailo Tappas](https://hailo.ai/developer-zone/tappas-apps-toolkit/) which supply a variety of pre-trained _.hef_ files, which you can run without having to compile anything.