These installation and setup instructions are aimed at yolov5 deployment on a Hailo device connected via PCIe.
If you want to deploy a different network, check the [other networks](Other Networks) section.
# Preemptive Note
The Hailo pipeline is constantly updating and changing, so these instructions may fall out of date, as they are not updated regularly. I would
recommend following the hailo tutorials from the [hailo model zoo](https://github.com/hailo-ai/hailo_model_zoo/blob/master/docs/RETRAIN_ON_CUSTOM_DATASET.md)
and going through the docs on hailo.ai.
# Requirements
## For deployment
- access to the [hailo.ai developer zone](https://hailo.ai/developer-zone)

The setup is split into the 'deploy device' (which has your Hailo PCIe device connected) and the 'training device', which you'll use for network training and network-to-Hailo quantization. Skip the training steps if you already have a `.har` file
you would like to deploy.
## For your (GPU-enabled) training device
You'll need a way to train your network (yielding a trained model file, e.g. `.pb` or `.onnx`) and the hailo software suite to generate the Hailo file from it.
- always download the newest version of the [hailo software suite](https://hailo.ai/developer-zone/sw-downloads) from the hailo.ai developer zone
- For yolov5 we'll be using hailo Docker containers, which are based on the [ultralytics yolov5 containers](https://github.com/ultralytics/yolov5)
- the hailo model zoo now has a [guide on how to train yolov5 for hailo](https://github.com/hailo-ai/hailo_model_zoo/blob/master/docs/RETRAIN_ON_CUSTOM_DATASET.md); follow that one. Some notes:
    - You'll need to create your own dataset structure for the training process; [this guide](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data) describes the expected layout.
- run `./hailo_sw_suite_docker_run.sh --resume` to get into the docker container
    - `cd hailo_model_zoo` and continue the tutorial.
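The dataset structure mentioned above can be sketched programmatically. The paths, class names, and label values below are placeholders for illustration, not taken from this repository:

```python
from pathlib import Path

# Sketch of the directory layout yolov5 training expects (names are placeholders).
root = Path("datasets/my_dataset")
for split in ("train", "val"):
    (root / "images" / split).mkdir(parents=True, exist_ok=True)
    (root / "labels" / split).mkdir(parents=True, exist_ok=True)

# Minimal data.yaml pointing yolov5 at the splits; class names are examples.
(root / "data.yaml").write_text(
    "train: datasets/my_dataset/images/train\n"
    "val: datasets/my_dataset/images/val\n"
    "nc: 2\n"
    "names: ['cat', 'dog']\n"
)

# Each image gets a .txt label file with one line per box:
# <class_id> <x_center> <y_center> <width> <height>, all normalized to [0, 1].
(root / "labels" / "train" / "example.txt").write_text("0 0.5 0.5 0.2 0.3\n")
```

The label file must share its base name with the corresponding image (e.g. `example.jpg` / `example.txt`), with `images/` swapped for `labels/` in the path.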
Further Notes and Examples:
- Another example for compiling the model: `python hailo_model_zoo/main.py compile yolov5m --ckpt /files/my_dataset/best.onnx --calib-path /files/my_dataset/dataset/dataset/images/ --yaml ./hailo_model_zoo/cfg/networks/yolov5m.yaml`
  - note that this quantizes and compiles into a HEF file. If you only want to quantize into a _.har_ file, run `python hailo_model_zoo/main.py quantize yolov5m --ckpt /files/my_dataset/best.onnx --calib-path /files/my_dataset/dataset/dataset/images/ --yaml ./hailo_model_zoo/cfg/networks/yolov5m.yaml`, which yields the _.har_ representation of the network
  - You have to provide a list of images for quantization (`--calib-path`)
  - also provide a quantization scheme (`--yaml`); hailo\_model\_zoo provides a variety of these
  - **There's no real reason not to use _compile_ directly, as far as I know.**
  - Note that the file locations in these examples differ from Hailo's. These worked, while Hailo's didn't.
- follow the getting started guide from [hailo model zoo / GETTING STARTED](https://github.com/hailo-ai/hailo_model_zoo/blob/master/docs/GETTING_STARTED.md), specifically the
_optimizing_ and _compile_ steps, if you want more information on quantization parameters and how to proceed from there.
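To build intuition for what the calibration images (`--calib-path`) are used for, here is a simplified sketch of post-training quantization: values observed on the calibration set determine an int8 scale and zero-point. This illustrates the general idea only; it is not Hailo's actual algorithm:

```python
import numpy as np

def calibrate(samples: np.ndarray):
    """Derive an asymmetric 8-bit quantization scheme from observed values."""
    lo, hi = float(samples.min()), float(samples.max())
    scale = (hi - lo) / 255.0
    zero_point = round(-lo / scale)
    return scale, zero_point

def quantize(x, scale, zero_point):
    return np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

# Pretend these are activation values observed while running the calibration images.
acts = np.random.default_rng(0).uniform(-1.0, 3.0, size=1000).astype(np.float32)
scale, zp = calibrate(acts)
err = np.abs(dequantize(quantize(acts, scale, zp), scale, zp) - acts).max()
# Round-trip error stays within about half a quantization step.
assert err <= scale / 2 + 1e-6
```

This is why the calibration set should resemble your real input data: a poor value-range estimate leads to a poor scale and larger quantization error.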
- this git repository includes an _inference.py_ file, which loads a specified _.hef_ file. Change that to your _.hef_ file location.
- If you use anything other than yolov5m, you will have to change the pre/post-processing steps; otherwise it won't work. Check out the Hailo Handler class and go from there.
- For yolov5m, Hailo provides a configuration yaml defining the quantization levels; for a custom network you will have to define your own yaml file. I liked using [netron](https://github.com/lutzroeder/Netron) to visualize the network for that.
- Anything the hailo\_model\_zoo provides a configuration file for (under `hailo_model_zoo/cfg/networks/`) can be easily trained and deployed using the process
described above.
- Also check out the [Hailo Tappas](https://hailo.ai/developer-zone/tappas-apps-toolkit/), which supply a variety of pre-trained _.hef_ files that you can run out of the box.
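As a rough illustration of the kind of pre-processing mentioned above, here is a dependency-light sketch of a yolov5-style "letterbox" resize: scale the image while preserving aspect ratio, then pad to the network's input size. The exact steps your network expects may differ, so treat this as an assumption-laden example and check _inference.py_ / the Hailo Handler class:

```python
import numpy as np

def letterbox(img: np.ndarray, new_shape=(640, 640), pad_value=114):
    h, w = img.shape[:2]
    r = min(new_shape[0] / h, new_shape[1] / w)    # uniform scale factor
    nh, nw = round(h * r), round(w * r)            # resized (unpadded) size

    # Nearest-neighbour resize via index arrays, to keep the sketch dependency-free.
    ys = (np.arange(nh) / r).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / r).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]

    # Pad symmetrically to the target shape.
    out = np.full((new_shape[0], new_shape[1], img.shape[2]), pad_value, dtype=img.dtype)
    top = (new_shape[0] - nh) // 2
    left = (new_shape[1] - nw) // 2
    out[top:top + nh, left:left + nw] = resized
    # scale and offsets are needed later to map detected boxes back to the original image
    return out, r, (top, left)

img = np.zeros((480, 640, 3), dtype=np.uint8)      # dummy 640x480 image
padded, r, (top, left) = letterbox(img)
assert padded.shape == (640, 640, 3)
```

On the post-processing side, the inverse mapping (subtract the offsets, divide by the scale) converts box coordinates from network space back to image space.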