In the README.md it is mentioned that the inference speed on the TX2 is around 10-11 ms. Is this speed measured around the whole `doInference` function here: https://github.com/tjuskyzhang/yolov4-tiny-tensorrt/blob/bc49483e49e4de698fd88b878799b8b0a979e88f/yolov4-tiny.cpp#L496, or only around the `context.enqueue` call here: https://github.com/tjuskyzhang/yolov4-tiny-tensorrt/blob/bc49483e49e4de698fd88b878799b8b0a979e88f/yolov4-tiny.cpp#L392?
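For context, here is a minimal sketch of how the narrower measurement might look, assuming the TensorRT 7-style `enqueue(batchSize, bindings, stream, ...)` API that this repo appears to use (the helper name `timeEnqueueMs` is purely illustrative). Since `enqueue` is asynchronous, the stream has to be synchronized before stopping the clock, otherwise only the kernel-launch overhead is captured:

```cpp
#include <chrono>
#include <cuda_runtime_api.h>
#include "NvInfer.h"

// Times only the GPU inference itself (no host<->device copies).
// enqueue() returns immediately, so cudaStreamSynchronize() is needed
// before reading the end timestamp.
float timeEnqueueMs(nvinfer1::IExecutionContext& context, void** buffers,
                    cudaStream_t stream, int batchSize) {
    auto t0 = std::chrono::high_resolution_clock::now();
    context.enqueue(batchSize, buffers, stream, nullptr);
    cudaStreamSynchronize(stream);
    auto t1 = std::chrono::high_resolution_clock::now();
    return std::chrono::duration<float, std::milli>(t1 - t0).count();
}
```

If `doInference` follows the usual pattern of performing the input/output `cudaMemcpyAsync` transfers inside the function, timing around the whole call would additionally include those copies, so the two numbers could differ noticeably.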
Thank you for sharing this project.