This is an app that continuously detects body parts in the frames seen by your device's camera. These instructions walk you through building and running the demo on an Android device. Camera captures are discarded immediately after use; nothing is stored or saved.
The app demonstrates how to use three models:

- PoseNet
- MoveNet Lightning
- MoveNet Thunder
See this blog post for a comparison between these models.
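The three models trade off speed against accuracy, and each expects a different input resolution. As a rough sketch, they could be described in code like this (the input sizes are assumptions taken from the models' public descriptions, not values read from this repo):

```java
// Sketch of the three bundled models and their nominal input sizes.
// Sizes are assumptions from the published model descriptions:
// PoseNet 257x257, MoveNet Lightning 192x192, MoveNet Thunder 256x256.
enum PoseModel {
    POSENET(257, 257),
    MOVENET_LIGHTNING(192, 192),
    MOVENET_THUNDER(256, 256);

    final int inputWidth;
    final int inputHeight;

    PoseModel(int width, int height) {
        this.inputWidth = width;
        this.inputHeight = height;
    }
}
```

Lightning targets latency-critical use cases, while Thunder spends a larger input (and more compute) on accuracy.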
- If you don't have it already, install Android Studio 3.2 or later, following the instructions on the website.
- You need an Android device and an Android development environment with minimum API 21.
- Open Android Studio, and from the **Welcome** screen, select **Open an existing Android Studio project**.
- From the **Open File or Project** window that appears, navigate to and select the `lite/examples/pose_estimation/android` directory from wherever you cloned the `tensorflow/examples` GitHub repo. Click **OK**.
- If it asks you to do a **Gradle Sync**, click **OK**.
- You may also need to install various platforms and tools if you get errors like `Failed to find target with hash string 'android-21'` and similar.
- Click the **Run** button (the green arrow) or select **Run > Run 'android'** from the top menu. You may need to rebuild the project using **Build > Rebuild Project**.
- If it asks you to use **Instant Run**, click **Proceed Without Instant Run**.
- You also need to have an Android device plugged in with developer options enabled at this point. See here for more details on setting up developer devices.
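The minimum API 21 requirement from the steps above corresponds to the module's Gradle configuration. A minimal sketch of the relevant fragment (the actual `build.gradle` in the repo may set additional versions and options):

```gradle
android {
    defaultConfig {
        // API 21 = Android 5.0 (Lollipop); the demo will not install
        // on devices running anything older.
        minSdkVersion 21
    }
}
```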
Downloading, extracting, and placing the models in the assets folder is handled automatically by `download.gradle`.

If you want to download the models manually, you can download them from here:

Please do not delete the contents of the assets folder. If you do delete the files, choose **Build > Rebuild Project** from the menu to re-download the deleted model files into the assets folder.
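Once the `.tflite` files are in place, the app hands each model to a TensorFlow Lite `Interpreter` as a read-only `MappedByteBuffer`. A minimal sketch of that loading step (on Android the bytes would come from the assets folder via `AssetManager`; plain-file I/O is used here so the sketch stays self-contained, and `ModelLoader` is an illustrative name, not a class from this repo):

```java
// Sketch: load a .tflite model file as a read-only MappedByteBuffer,
// the form org.tensorflow.lite.Interpreter accepts.
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

class ModelLoader {
    static MappedByteBuffer loadModel(String path) throws IOException {
        try (RandomAccessFile file = new RandomAccessFile(path, "r");
             FileChannel channel = file.getChannel()) {
            // Map the whole file; the mapping remains valid after the
            // channel is closed.
            return channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
        }
    }
}
```

Memory-mapping avoids copying the model weights into the Java heap, which matters for the larger Thunder model.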