This is a sample project that lets users match voice commands to 3D models and load them dynamically into the augmented world. Voice commands are processed with StanfordNLP, the matching models are fetched and loaded dynamically from the Google Poly API, and the AR interaction is implemented with ARCore and Sceneform.
In addition, Cloud Anchors make the augmented experience persistent, so the user can add models to a 3D room and retrieve them at any time. The 3D room stores details such as asset IDs (which correspond to Google Poly assets) and Cloud Anchor IDs, allowing the app to recover the state of several models throughout the room.
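Conceptually, the persisted room reduces to a list of (Poly asset ID, Cloud Anchor ID) pairs. Below is a minimal sketch of such a data model; the class and field names (`RoomState`, `PlacedModel`) are illustrative and not taken from the project:

```java
// Minimal sketch of the 3D-room state described above: each placed model
// pairs a Google Poly asset ID with the Cloud Anchor ID hosted for it.
// Names are illustrative, not taken from the project sources.
import java.util.ArrayList;
import java.util.List;

public class RoomState {

    /** One model placed in the room. */
    public static class PlacedModel {
        public final String polyAssetId;   // Google Poly asset ID
        public final String cloudAnchorId; // ARCore Cloud Anchor ID

        public PlacedModel(String polyAssetId, String cloudAnchorId) {
            this.polyAssetId = polyAssetId;
            this.cloudAnchorId = cloudAnchorId;
        }
    }

    private final List<PlacedModel> models = new ArrayList<>();

    public void add(String polyAssetId, String cloudAnchorId) {
        models.add(new PlacedModel(polyAssetId, cloudAnchorId));
    }

    /** On a later session: resolve each anchor, then load its asset at that pose. */
    public List<PlacedModel> getModels() {
        return models;
    }
}
```

Serializing this list locally is what lets the app rebuild the room in a later session.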
- Download or clone the repository
- Add your own Google Poly API key in the `AndroidManifest.xml` file; you can create a Poly API key by following the official documentation. A sketch of a request that uses this key follows this list:

  ```xml
  <meta-data android:name="com.google.android.ar.API_KEY" android:value="YOUR_KEY_HERE" />
  ```
- Make sure you run the app on a physical device that supports ARCore; you can check the list of supported devices here. A runtime support-check sketch also follows this list.
- Install the app and grant the Audio permission when prompted (a permission-request sketch follows this list).
- Enjoy!
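As referenced in the steps above, here is a minimal sketch of the kind of keyword query the app can issue against the Poly REST endpoint with the key you configured. It uses plain `HttpURLConnection`; the project itself may use a different HTTP client, and the `listAssets` helper is hypothetical:

```java
// Minimal sketch of a keyword query against the Poly REST API.
// Run this off the main thread (e.g. on a background executor).
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class PolyQuery {
    public static String listAssets(String keywords, String apiKey) throws Exception {
        URL url = new URL("https://poly.googleapis.com/v1/assets"
                + "?keywords=" + URLEncoder.encode(keywords, "UTF-8")
                + "&format=GLTF2"
                + "&key=" + apiKey);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            StringBuilder json = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                json.append(line);
            }
            // The response contains an "assets" array; each entry's "name"
            // field ("assets/<id>") is the asset ID stored in the room state.
            return json.toString();
        } finally {
            conn.disconnect();
        }
    }
}
```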
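Besides consulting the device list, ARCore support can also be verified at runtime with ARCore's `ArCoreApk.checkAvailability`; a minimal sketch:

```java
// Sketch of a runtime ARCore support check, complementing the device list above.
import android.app.Activity;
import com.google.ar.core.ArCoreApk;

public class ArSupportCheck {
    /** Returns true if this device supports ARCore. */
    public static boolean isArSupported(Activity activity) {
        ArCoreApk.Availability availability =
                ArCoreApk.getInstance().checkAvailability(activity);
        // UNKNOWN_CHECKING means the result isn't ready yet; ARCore's docs
        // recommend re-checking shortly afterwards in that case.
        return availability.isSupported();
    }
}
```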
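Because the voice-command feature records audio, the `RECORD_AUDIO` permission must be granted at runtime on Android 6.0+. A minimal sketch using the standard AndroidX helpers (the `ensureGranted` wrapper and request code are hypothetical):

```java
// Sketch of requesting the microphone permission at runtime (API 23+),
// which the voice-command feature needs.
import android.Manifest;
import android.app.Activity;
import android.content.pm.PackageManager;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

public class AudioPermission {
    private static final int REQUEST_RECORD_AUDIO = 42; // arbitrary request code

    public static void ensureGranted(Activity activity) {
        if (ContextCompat.checkSelfPermission(activity, Manifest.permission.RECORD_AUDIO)
                != PackageManager.PERMISSION_GRANTED) {
            ActivityCompat.requestPermissions(
                    activity,
                    new String[] {Manifest.permission.RECORD_AUDIO},
                    REQUEST_RECORD_AUDIO);
        }
    }
}
```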
- 24-hour persistence limit for anchors, imposed by the Cloud Anchors API (a host/resolve sketch follows this list)
- Persistence of the 3D room is local to the device only
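For context on the first item: with the Cloud Anchors API used here, an anchor is hosted once, its ID is stored in the local room state, and it can be resolved again only within that 24-hour window. A minimal sketch of the host/resolve cycle, using ARCore's `Session.hostCloudAnchor`/`resolveCloudAnchor` (the helper class is illustrative):

```java
// Sketch of the Cloud Anchor host/resolve cycle behind the room persistence.
import com.google.ar.core.Anchor;
import com.google.ar.core.Anchor.CloudAnchorState;
import com.google.ar.core.Session;

public class CloudAnchorHelper {
    /** Starts hosting; poll getCloudAnchorState() each frame until it settles. */
    public static Anchor host(Session session, Anchor localAnchor) {
        return session.hostCloudAnchor(localAnchor);
    }

    /** Once hosting succeeds, this ID is what gets stored in the local 3D room. */
    public static String idIfHosted(Anchor cloudAnchor) {
        return cloudAnchor.getCloudAnchorState() == CloudAnchorState.SUCCESS
                ? cloudAnchor.getCloudAnchorId()
                : null;
    }

    /** In a later session, re-create the anchor from a stored ID. */
    public static Anchor resolve(Session session, String cloudAnchorId) {
        return session.resolveCloudAnchor(cloudAnchorId);
    }
}
```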