Track human poses in real-time with Tinker Edge T
- Tinker Edge T board
- Image version >= Tinker_Edge_T-Mendel-Chef-V1.0.0-20200221 [link]
- USB camera

We have built Google Coral PoseNet into the Tinker Edge T image. The following walkthrough shows how to use it.

First of all, connect a USB camera to the Tinker Edge T, such as the one shown below.

[Image: iRlM5Ef.png]

Then power on the Tinker Edge T and launch the terminal console by clicking the icon highlighted by the red box in the following figure.

[Image: oYjyly1.png]

In the terminal console, issue the following commands to go to the directory /usr/share/project-posenet and run pose_camera.py, a simple camera example that streams the camera image through PoseNet and draws the detected pose on top as an overlay.

$ cd /usr/share/project-posenet
$ python3 pose_camera.py --videosrc=/dev/video2

The --videosrc argument specifies which device to stream the camera image from; here /dev/video2 is the node for the USB camera we just connected. To see all available arguments, run the script with -h as follows.
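If you are not sure which /dev/videoN node the USB camera was assigned (the number varies with the hardware and the order devices are detected), you can list the video nodes first. This is a generic Linux check, not specific to the Tinker Edge T:

```shell
# List all V4L2 video device nodes; the USB camera is typically the
# newest (highest-numbered) node that appears after plugging it in.
ls /dev/video* 2>/dev/null || echo "no video devices found"
```

Whichever node belongs to the USB camera is the value to pass to --videosrc.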

$ python3 pose_camera.py -h

usage: pose_camera.py [-h] [--mirror] [--model MODEL]
                      [--res {480x360,640x480,1280x720}] [--videosrc VIDEOSRC]
                      [--h264]

optional arguments:
  -h, --help            show this help message and exit
  --mirror              flip video horizontally (default: False)
  --model MODEL         .tflite model path. (default: None)
  --res {480x360,640x480,1280x720}
                        Resolution (default: 640x480)
  --videosrc VIDEOSRC   Which video source to use (default: /dev/video0)
  --h264                Use video/x-h264 input (default: False)
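Combining the flags above, a fuller invocation could look like the following. The flag values here are illustrative only; adjust the video node and resolution to your own setup:

```shell
# Run the pose overlay from the USB camera at 720p, with the image
# mirrored horizontally (a selfie-style view).
cd /usr/share/project-posenet
python3 pose_camera.py --videosrc=/dev/video2 --res 1280x720 --mirror
```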

In the end, you should see something like the following, which means you have successfully run the example.

[Image: xX9aUTr.png]

[Image: AzE1fY0.png]  [Image: eVzgGZ8.png]

For more information, please refer to the google-coral/project-posenet repository.
Which pre-trained model is used in this case, MobileNet or ResNet?
Can anyone kindly let me know how to check and change the pre-trained model?
