Simplifying Android Augmented Images- Android ARCore
- Prakhar Srivastava
- Aug 31, 2020
- 3 min read

Android has come a long way over the past few years. With augmented reality coming to Android, the whole ecosystem has gained a new dimension to play with. Though the ARCore SDK has a limited feature set, it is fascinating.
In this tutorial, we will see how to detect an augmented image and show information using a ViewRenderable, or show a 3D model.
The Requirement
We had a requirement to show the pressure flowing through a pipe in a mobile application. The AR version of this was: a person should be able to see the pressure levels at certain points along a pipe.
We implemented this with ARCore and a ViewRenderable that shows the information on a card, and it looks pretty cool.

The full source code can be found on my github:
There are three steps to achieve this:
1. Create an augmented image.
2. Match the image in the camera frame with the augmented image.
3. If matched, place an AR object.
In our scenario, we had to identify the pressure sensor. Unfortunately, the Android ARCore SDK does not support 3D object recognition, so we were not able to identify the sensor itself. Instead, we placed an image on top of each sensor carrying its ‘Sensor Id’, and that image is what we used as the augmented image. Let’s go through the steps in detail:
Create an Augmented Image
We copy the image (which we want to detect) to the ‘assets’ directory.
private fun setupAugmentedImageDb(config: Config): Boolean {
    // Load the reference bitmap from assets; bail out if it cannot be read.
    val augmentedImageBitmap = loadAugmentedImage() ?: return false
    // Build a database containing our single reference image.
    val augmentedImageDatabase = AugmentedImageDatabase(session)
    augmentedImageDatabase.addImage("snsr_image", augmentedImageBitmap)
    config.augmentedImageDatabase = augmentedImageDatabase
    return true
}
Here we have created an Augmented Image Database.
The `addImage` method takes a String name and a Bitmap as arguments, so we add the name ‘snsr_image’ and the bitmap of our image, loaded using the following function:
private fun loadAugmentedImage(): Bitmap? {
    try {
        // Decode the reference image straight from the assets directory.
        assets.open("snsr_image.jpg").use { `is` -> return BitmapFactory.decodeStream(`is`) }
    } catch (e: IOException) {
        Log.e(TAG, "IO exception loading augmented image bitmap.", e)
    }
    return null
}
Now we are ready to use this AugmentedImage with the real world camera image.
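For completeness, the database only takes effect once the Config is applied to the Session. Here is a minimal sketch of the wiring, assuming session and arSceneView are created elsewhere; the focus and update modes are optional choices of mine, not requirements.
// Apply the config holding our image database, then hand the session to the
// ArSceneView. `session` and `arSceneView` are assumed to be created elsewhere.
val config = Config(session)
config.focusMode = Config.FocusMode.AUTO                   // optional
config.updateMode = Config.UpdateMode.LATEST_CAMERA_IMAGE  // optional
if (!setupAugmentedImageDb(config)) {
    Log.e(TAG, "Could not add the augmented image to the database")
}
session.configure(config)
arSceneView.setupSession(session)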
Compare the Camera Image with the Augmented Image
When we initialize our ArSceneView, we set an update callback which notifies us of every frame update from the camera:
arSceneView.scene.setOnUpdateListener(this::onUpdateFrame)
And when the frame is updated, we compare the images in the frame with the augmented images in the DB.
private fun onUpdateFrame(frameTime: FrameTime) {
    // arFrame can be null before the session is ready, so bail out early.
    val frame = arSceneView.arFrame ?: return
    // All augmented images that were updated in this frame.
    val updatedAugmentedImages = frame.getUpdatedTrackables(AugmentedImage::class.java)
    for (augmentedImage in updatedAugmentedImages) {
        if (augmentedImage.trackingState == TrackingState.TRACKING) {
            // Check whether the camera image matches our reference image.
            if (augmentedImage.name == "snsr_image") {
                // The image has matched; do the work here.
            }
        }
    }
}
Here, on every frame update, we get all the updated trackables of type AugmentedImage and compare each of them against the DB. When the name matches, we have found our image. Once the image is matched, we need to place an info card.
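One practical detail, and purely an assumption on my part rather than something from the original sample: onUpdateFrame fires on every camera frame, so without a guard the same image would keep spawning new cards. A small helper like the hypothetical shouldPlaceCardFor below can make sure each image is handled only once.
// A minimal sketch, not from the original sample: remember which images have
// already been handled so a card is placed only once per detected image.
private val placedImages = mutableSetOf<String>()

private fun shouldPlaceCardFor(augmentedImage: AugmentedImage): Boolean {
    // Set.add returns false when the name is already present, so this is true
    // only the first time a given image reaches the TRACKING state.
    return augmentedImage.trackingState == TrackingState.TRACKING &&
            placedImages.add(augmentedImage.name)
}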
Placing an Info Card
When we get the correct match from our augmented images, we pass it to a method:
for (augmentedImage in updatedAugmentedImages) {
    if (augmentedImage.trackingState == TrackingState.TRACKING) {
        // Check camera image matches our reference image
        if (augmentedImage.name == "snsr_image") {
            createRenderable(augmentedImage, "snsr_002")
        }
    }
}
Here, in the example, I have passed a hardcoded value. In the sample on GitHub, the original functionality is implemented: once the image is matched, we scan the QR code in the image, which contains the sensor id, and pass that value to the function below:
private fun createRenderable(augmentedImage: AugmentedImage, name: String) {
    var renderable: ViewRenderable? = null
    try {
        // dataView is a CompletableFuture<ViewRenderable>; get() returns the built renderable.
        renderable = dataView.get()
    } catch (e: InterruptedException) {
        e.printStackTrace()
    } catch (e: ExecutionException) {
        e.printStackTrace()
    }
    renderable ?: return
    val node = Node()
    try {
        // Anchor the node at the centre of the detected image.
        val anchorNode = AnchorNode(arSceneView.session.createAnchor(augmentedImage.centerPose))
        // Offset the card slightly in front of the image centre.
        val pose = Pose.makeTranslation(0.0f, 0.0f, 0.12f)
        node.localPosition = Vector3(pose.tx(), pose.ty(), pose.tz())
        node.renderable = renderable
        node.setParent(anchorNode)
        node.localRotation = Quaternion(pose.qx(), 90f, -90f, pose.qw())
        arSceneView.scene.addChild(anchorNode)
        setNodeData(renderable, name)
        sensorsMap[name] = renderable
        makeInfoView()
    } catch (e: Exception) {
        Log.e(TAG, "Could not place the info card.", e)
    }
}
Here we are using a ViewRenderable. We get the view to be inflated into the view renderable with:
renderable = dataView.get()
Here, dataView is built from an XML layout in the layout folder:
dataView = ViewRenderable.builder().setView(this, R.layout.layout_bg).build()
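Since build() returns a CompletableFuture<ViewRenderable> rather than the renderable itself, dataView is a future, which is why createRenderable calls dataView.get(). A sketch of how the field could be declared and built early follows; the exceptionally handler is my own addition for logging failures.
// dataView is a future; building it early (e.g. in onCreate) means the view is
// usually ready by the time an image is detected.
private lateinit var dataView: CompletableFuture<ViewRenderable>

private fun buildRenderable() {
    dataView = ViewRenderable.builder()
        .setView(this, R.layout.layout_bg)
        .build()
    // My own addition: log a failure instead of letting it surface later in get().
    dataView.exceptionally { throwable ->
        Log.e(TAG, "Unable to build ViewRenderable", throwable)
        null
    }
}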
Once we have the view renderable, we set up the anchor where we will place it. The anchor is placed at the center of the augmented image:
AnchorNode(arSceneView.session.createAnchor(augmentedImage.centerPose))
And when we have our anchor ready, we create a node, set the renderable on it, and set our anchor node as the parent of the node.
And DONE. We have successfully placed our AR object in front of the augmented image.
The last step is to set the data on the inflated view.
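The post does not show setNodeData itself, so here is a purely hypothetical sketch of what it could look like; the view IDs and placeholder text are assumptions, not taken from the layout_bg layout in the sample.
// Hypothetical sketch: tv_sensor_id and tv_pressure are assumed view IDs, not
// taken from the original layout_bg layout.
private fun setNodeData(renderable: ViewRenderable, sensorId: String) {
    val view = renderable.view
    view.findViewById<TextView>(R.id.tv_sensor_id).text = sensorId
    view.findViewById<TextView>(R.id.tv_pressure).text = "Reading…"
}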
You can get the full source code on my GitHub here:
Let me know in the comments if this helped you. :) Thanks!