+Implementation of paper - [YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors](https://arxiv.org/abs/2207.02696)
+- Integrated into [Huggingface Spaces 🤗](https://huggingface.co/spaces/akhaliq/yolov7) using Gradio. Try out the [Web Demo](https://huggingface.co/spaces/akhaliq/yolov7).
+
+## Performance
+
+MS COCO
+
+| Model | Test Size | AP<sup>test</sup> | AP<sub>50</sub><sup>test</sup> | AP<sub>75</sub><sup>test</sup> | batch 1 fps | batch 32 average time |
+| :-- | :-: | :-: | :-: | :-: | :-: | :-: |
+
+```
+ Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.83868
+```
+
+To measure accuracy, download the [COCO annotations for pycocotools](http://images.cocodataset.org/annotations/annotations_trainval2017.zip) and place the validation annotation file at `./coco/annotations/instances_val2017.json`.
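+
+For example, one way to fetch and place that file (a sketch, assuming `wget` and `unzip` are available and the `./coco` layout above):
+
+``` shell
+# download the 2017 annotations and extract only the val2017 instances file
+mkdir -p ./coco/annotations
+wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
+unzip -j annotations_trainval2017.zip annotations/instances_val2017.json -d ./coco/annotations/
+```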
+
+## Training
+
+Data preparation
+
+``` shell
+bash scripts/get_coco.sh
+```
+
+* Download MS COCO dataset images ([train](http://images.cocodataset.org/zips/train2017.zip), [val](http://images.cocodataset.org/zips/val2017.zip), [test](http://images.cocodataset.org/zips/test2017.zip)) and [labels](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/coco2017labels-segments.zip). If you have previously used a different version of YOLO, we strongly recommend that you delete the `train2017.cache` and `val2017.cache` files and redownload the [labels](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/coco2017labels-segments.zip), e.g. as sketched below.
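+
+A minimal sketch of clearing those stale caches, assuming the default `./coco` dataset location used by `get_coco.sh`:
+
+``` shell
+# remove stale label caches so they are rebuilt from the freshly downloaded labels
+rm -f ./coco/train2017.cache ./coco/val2017.cache
+```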
+
+## Export
+
+**Pytorch to CoreML (and inference on MacOS/iOS)** <a href="https://colab.research.google.com/github/WongKinYiu/yolov7/blob/main/tools/YOLOv7CoreML.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
+
+**Pytorch to ONNX with NMS (and inference)** <a href="https://colab.research.google.com/github/WongKinYiu/yolov7/blob/main/tools/YOLOv7onnx.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
+
+**Pytorch to TensorRT with NMS (and inference)** <a href="https://colab.research.google.com/github/WongKinYiu/yolov7/blob/main/tools/YOLOv7trt.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
+
+**Pytorch to TensorRT another way** <a href="https://colab.research.google.com/gist/AlexeyAB/fcb47ae544cf284eb24d8ad8e880d45c/yolov7trtlinaom.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
+
+## Deploy to Triton Inference Server
+
+Instructions to deploy YOLOv7 as a TensorRT engine to [Triton Inference Server](https://github.com/NVIDIA/triton-inference-server).
+
+Triton Inference Server takes care of model deployment with many out-of-the-box benefits, such as gRPC and HTTP interfaces, automatic scheduling on multiple GPUs, shared memory (even on GPU), dynamic server-side batching, health metrics, and memory resource management.
+
+No additional dependencies are needed to run this deployment, apart from a working Docker daemon with GPU support.
+
+## Export TensorRT
+
+See https://github.com/WongKinYiu/yolov7#export for more info.
+
+```bash
+# install onnx-simplifier, which is not listed in the general yolov7 requirements.txt
+pip3 install onnx-simplifier
+
+# Pytorch Yolov7 -> ONNX with grid, EfficientNMS plugin and dynamic batch size
+python export.py --weights ./yolov7.pt --grid --end2end --dynamic-batch --simplify --topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640
+```
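+
+The resulting ONNX model can then be serialized into a TensorRT engine with `trtexec`; the following is a sketch, where the input tensor name `images`, the 1–8 dynamic batch range, and the output filename are assumptions matching the dynamic-batch export above:
+
+```bash
+# build an FP16 TensorRT engine with a dynamic batch dimension of 1..8
+trtexec --onnx=yolov7.onnx \
+        --minShapes=images:1x3x640x640 \
+        --optShapes=images:8x3x640x640 \
+        --maxShapes=images:8x3x640x640 \
+        --fp16 \
+        --saveEngine=yolov7-fp16-1x8x8.engine
+```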
+See [Triton Model Repository Documentation](https://github.com/triton-inference-server/server/blob/main/docs/model_repository.md#model-repository) for more info.
+See [Triton Model Configuration Documentation](https://github.com/triton-inference-server/server/blob/main/docs/model_configuration.md#model-configuration) for more info.
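+
+Triton expects each model in a versioned model repository. A minimal sketch of the layout assumed by the configuration below (the version directory `1` and the `model.plan` filename follow Triton conventions; the engine filename is the assumption from the export sketch above):
+
+```bash
+# create the model repository and place the serialized engine as version 1
+mkdir -p triton-deploy/models/yolov7/1
+cp yolov7-fp16-1x8x8.engine triton-deploy/models/yolov7/1/model.plan
+touch triton-deploy/models/yolov7/config.pbtxt
+```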
+
+A minimal configuration for `triton-deploy/models/yolov7/config.pbtxt` is sketched below.
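+
+This is an illustrative sketch, not the repository's exact file; the model name, maximum batch size, and dynamic batching settings are assumptions consistent with the dynamic-batch engine built above:
+
+```
+name: "yolov7"
+platform: "tensorrt_plan"
+max_batch_size: 8
+dynamic_batching { }
+```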
+See [Triton Model Analyzer Documentation](https://github.com/triton-inference-server/server/blob/main/docs/model_analyzer.md#model-analyzer) for more info.
+
+Throughput for 16 clients with batch size 1 is the same as for a single thread running the engine locally at batch size 16, thanks to the Triton [Dynamic Batching Strategy](https://github.com/triton-inference-server/server/blob/main/docs/model_configuration.md#dynamic-batcher). Results without dynamic batching (disabled in the model configuration) are considerably worse.
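+
+For completeness, a sketch of serving this model repository with the Triton container before running the client below; the image tag is a placeholder, and the port and mount choices are assumptions:
+
+```bash
+# serve ./triton-deploy/models on the default HTTP (8000), gRPC (8001) and metrics (8002) ports
+docker run --gpus all --rm \
+  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
+  -v $(pwd)/triton-deploy/models:/models \
+  nvcr.io/nvidia/tritonserver:<xx.yy>-py3 \
+  tritonserver --model-repository=/models
+```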
+
+The example client accepts the following arguments:
+
+```
+positional arguments:
+  {dummy,image,video}   Run mode. 'dummy' will send an empty buffer to the server to test if inference works. 'image' will process an image. 'video' will process a video.
+  input                 Input file to load from in image or video mode
+
+optional arguments:
+  -h, --help            show this help message and exit
+  -m MODEL, --model MODEL
+                        Inference model name, default yolov7
+  --width WIDTH         Inference model input width, default 640
+  --height HEIGHT       Inference model input height, default 640
+  -u URL, --url URL     Inference server URL, default localhost:8001
+  -o OUT, --out OUT     Write output into file instead of displaying it
+```
+
+These options are defined with `argparse` in the client:
+
+```python
+import argparse
+
+parser = argparse.ArgumentParser()
+parser.add_argument('mode',
+                    choices=['dummy', 'image', 'video'],
+                    help='Run mode. \'dummy\' will send an empty buffer to the server to test if inference works. \'image\' will process an image. \'video\' will process a video.')
+parser.add_argument('input',
+                    type=str,
+                    nargs='?',
+                    help='Input file to load from in image or video mode')
+parser.add_argument('-m',
+                    '--model',
+                    type=str,
+                    required=False,
+                    default='yolov7',
+                    help='Inference model name, default yolov7')
+parser.add_argument('--width',
+                    type=int,
+                    required=False,
+                    default=640,
+                    help='Inference model input width, default 640')
+parser.add_argument('--height',
+                    type=int,
+                    required=False,
+                    default=640,
+                    help='Inference model input height, default 640')
+parser.add_argument('-u',
+                    '--url',
+                    type=str,
+                    required=False,
+                    default='localhost:8001',
+                    help='Inference server URL, default localhost:8001')
+parser.add_argument('-o',
+                    '--out',
+                    type=str,
+                    required=False,
+                    default='',
+                    help='Write output into file instead of displaying it')
+parser.add_argument('-f',
+                    '--fps',
+                    type=float,
+                    required=False,
+                    default=24.0,
+                    help='Video output fps, default 24.0 FPS')
+parser.add_argument('-i',
+                    '--model-info',
+                    action="store_true",
+                    required=False,
+                    default=False,
+                    help='Print model status, configuration and statistics')
+parser.add_argument('-v',
+                    '--verbose',
+                    action="store_true",
+                    required=False,
+                    default=False,
+                    help='Enable verbose client output')
+parser.add_argument('-t',
+                    '--client-timeout',
+                    type=float,
+                    required=False,
+                    default=None,
+                    help='Client timeout in seconds, default no timeout')
+parser.add_argument('-s',
+                    '--ssl',
+                    action="store_true",
+                    required=False,
+                    default=False,
+                    help='Enable SSL encrypted channel to the server')
+args = parser.parse_args()
+```
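+
+For instance, assuming the script above is saved as `client.py` and the server sketched earlier is running, processing a single image might look like this (file names are illustrative):
+
+```bash
+# run detection on one image and write the annotated result to a file
+python3 client.py image data/dog.jpg -m yolov7 -u localhost:8001 -o result.jpg
+```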
+
+## Reparameterization
+
+Reparameterization is used to fold the trainable bag-of-freebies (BoF) modules into the deploy model for fast inference, for example merging BN into conv and merging YOLOR into conv. The reparameterized model (`cfg/deploy`) is the one used for deployment.
+
+### Steps required for model conversion
+
+1. Train a custom model to obtain your own weights, e.g. `custom_weight.pt`, or use an available pretrained weight such as `yolov7_training.pt`.
+2. Convert this weight using the reparameterization method.
+3. The trained model (`cfg/training`) and the reparameterized model (`cfg/deploy`) produce the same prediction results; however, before reparameterization the model has more parameters and a higher computation cost.
+4. Convert the reparameterized weight to ONNX and TensorRT.
+
+The key line of that conversion copies only the checkpoint weights whose names and shapes match the deploy model:
+
+```python
+# keep only entries that exist in the deploy model, are not excluded, and have matching shapes
+intersect_state_dict = {k: v for k, v in state_dict.items() if k in model.state_dict() and not any(x in k for x in exclude) and v.shape == model.state_dict()[k].shape}
+```
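+
+Putting step 2 together, a minimal sketch of the weight copy, assuming the YOLOv7 repo layout (`models.yolo.Model`, `cfg/deploy/yolov7.yaml`) and an 80-class model; the repo's full reparameterization additionally folds the implicit layers into the detection head:
+
+```python
+import torch
+from models.yolo import Model  # assumption: run from the yolov7 repo root
+
+# load the trained (cfg/training) checkpoint and take its weights
+ckpt = torch.load('custom_weight.pt', map_location='cpu')
+state_dict = ckpt['model'].float().state_dict()
+
+# build the deploy architecture (cfg/deploy) with a matching class count
+model = Model('cfg/deploy/yolov7.yaml', ch=3, nc=80)
+
+# copy only entries that exist in the deploy model with matching shapes
+exclude = []
+intersect_state_dict = {k: v for k, v in state_dict.items()
+                        if k in model.state_dict()
+                        and not any(x in k for x in exclude)
+                        and v.shape == model.state_dict()[k].shape}
+model.load_state_dict(intersect_state_dict, strict=False)
+
+# save the result for the later ONNX / TensorRT export (step 4)
+torch.save({'model': model}, 'custom_weight_reparameterized.pt')
+```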