In Part 1, we set up our Android Studio workspace and project to use the OpenCV4Android SDK. But what if some of our OpenCV code is not in Java but in C/C++? We need to ensure that the C/C++ code is also loaded as part of the APK and linked correctly. We already set up the NDK in Part 1, so now we can use it to accomplish the task at hand. Here's how:
Create a directory jni inside app -> src -> main. This is where our native C++ source code will live. Go ahead and create three files there: a C++ source file named native-lib.cpp, and two build files named Android.mk and Application.mk.
Okay, so we have native C++ code and a way to build it. Finally, we just need to load this native code from Android and call its methods. We first load the native library statically, using System.loadLibrary. Our static block will now look like this:
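A sketch of what that static block might look like, assuming the opencv_java3 library from Part 1's setup (the name native-lib matches the module we'll define in Android.mk):

```java
// Inside MainActivity.java: runs once when the class is loaded
static {
    System.loadLibrary("opencv_java3"); // OpenCV4Android native bindings
    System.loadLibrary("native-lib");   // our own library, built from src/main/jni
}
```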
Next, declare the native method in MainActivity.java, just before the closing brace of the class.
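The declaration might look like the following; the method name nativeCanny is a placeholder of ours, and the signature matches native code that takes a single Mat address and modifies it in place:

```java
// Implemented in native-lib.cpp; the long is the address of an OpenCV Mat
public native void nativeCanny(long matAddr);
```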
Finally, all we have to do is call the native method we've defined. Replace the existing onCameraFrame method with the following:
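A sketch of the replacement, again using our placeholder method name nativeCanny:

```java
@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    Mat gray = inputFrame.gray();          // grayscale frame from the camera
    nativeCanny(gray.getNativeObjAddr());  // native code writes the edges back into the same Mat
    return gray;                           // returned Mat is rendered on screen
}
```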
Now run the app. If the build succeeds, you should see libnative-lib.so added to the jniLibs directory under the architecture you're building for, and once the app is pushed to your device, it shows you real-time edge detection.
That's it! We now have Android + the OpenCV4Android SDK + native C/C++ code working together to perform real-time computer vision tasks. There are ways to further improve JNI performance, but let's talk about that some other time.
codex pulchra est
- Open up app -> build.gradle and add the following just before buildTypes:
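A sketch of that configuration, using the custom-ndkBuild-task pattern common with Android Gradle Plugin 2.x (the era this tutorial targets); paths follow the layout described below:

```groovy
// app/build.gradle, inside android { ... }, just before buildTypes
sourceSets.main {
    jni.srcDirs = []                 // disable the plugin's automatic ndk-build
    jniLibs.srcDir 'src/main/jniLibs'
}

task ndkBuild(type: Exec, description: 'run ndk-build manually') {
    def ndkDir = android.ndkDirectory
    commandLine "$ndkDir/ndk-build",
            'NDK_PROJECT_PATH=build/intermediates/ndk',
            'NDK_LIBS_OUT=src/main/jniLibs',
            'APP_BUILD_SCRIPT=src/main/jni/Android.mk',
            'NDK_APPLICATION_MK=src/main/jni/Application.mk'
}

tasks.withType(JavaCompile) {
    compileTask -> compileTask.dependsOn ndkBuild
}
```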
- Basically, this overrides the default ndkBuild task and asks it to build from src/main/jni instead. It also specifies a build script (Android.mk) and an NDK application Makefile (Application.mk), and writes its output into src/main/jniLibs. Remember jniLibs? That's where we previously added the architecture-specific opencv_java3.so files. It turns out that .so files placed in that directory are included in the final build for a given architecture. So we'll build our code into .so files and put them in the jniLibs directory for the target architecture.
Add the following content into Android.mk, Application.mk and native-lib.cpp respectively.
- You'll have to provide the path to the OpenCV4Android SDK on line 7 of Android.mk
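A minimal Android.mk along these lines should work; the OpenCV.mk path is a placeholder you must replace with your own SDK location, and your exact line numbers may differ:

```makefile
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

OPENCV_INSTALL_MODULES := on
OPENCV_CAMERA_MODULES := off
include /path/to/OpenCV-android-sdk/sdk/native/jni/OpenCV.mk  # <-- your SDK path here

LOCAL_MODULE    := native-lib
LOCAL_SRC_FILES := native-lib.cpp
LOCAL_LDLIBS    += -llog

include $(BUILD_SHARED_LIBRARY)
```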
- Also, in Application.mk, we specify which architecture we are building for (in my case, armeabi-v7a). You can change it to match your architecture.
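An Application.mk for this setup might look like the following (a sketch for the NDK toolchains of that period; adjust APP_ABI and APP_PLATFORM to your device):

```makefile
APP_STL := gnustl_static        # C++ standard library for the NDK build
APP_CPPFLAGS := -frtti -fexceptions
APP_ABI := armeabi-v7a          # change to match your device's architecture
APP_PLATFORM := android-16
```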
- The code in native-lib.cpp simply takes an OpenCV Mat object (referenced by its address), runs Canny edge detection on it, and writes the result back to the same address.
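A sketch of what native-lib.cpp could contain; the package com.example.opencvdemo, the class MainActivity, the method name nativeCanny, and the Canny thresholds are all placeholders of ours, not the article's originals:

```cpp
#include <jni.h>
#include <opencv2/opencv.hpp>

using namespace cv;

extern "C"
// Function name encodes package com.example.opencvdemo, class MainActivity,
// and method nativeCanny -- substitute your own package and activity name.
JNIEXPORT void JNICALL
Java_com_example_opencvdemo_MainActivity_nativeCanny(JNIEnv *env, jobject /* this */,
                                                     jlong matAddr) {
    // The jlong holds the address of the Java-side Mat; work on it in place
    Mat &mat = *(Mat *) matAddr;
    Canny(mat, mat, 50, 150);  // example thresholds; tune for your scene
}
```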
- If you want to know why the methods in native-lib.cpp have such esoteric names, it's because JNI requires methods to be named in that format. Read this to learn more.
- Note that the names of the methods in Java and C++ are strongly correlated. C++ functions are named Java_packagename_classname_methodname, with the dots in the package name replaced by underscores. Make sure you follow this convention, otherwise things WILL break.
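To make the mapping concrete, here is a tiny standalone Java program that builds the C++ symbol name from a package, class, and method name (it covers the basic case only; real JNI additionally escapes characters like underscores in names):

```java
public class JniNameDemo {
    // Java_<package with '.' replaced by '_'>_<class>_<method>
    static String jniName(String pkg, String cls, String method) {
        return "Java_" + pkg.replace('.', '_') + "_" + cls + "_" + method;
    }

    public static void main(String[] args) {
        // e.g. a native method nativeCanny in com.example.opencvdemo.MainActivity
        System.out.println(jniName("com.example.opencvdemo", "MainActivity", "nativeCanny"));
        // prints: Java_com_example_opencvdemo_MainActivity_nativeCanny
    }
}
```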
- As can be seen, this code reads a frame from the camera, converts it to grayscale, passes it to the native C++ code (explained above), and then returns the result to the display.
|libnative-lib.so added to jniLibs|
|Running the app. The thumbs up says it all!|