Life Of Navin

Random Musings, Random Bullshit.


Collaborative Neural Network Mona Lisa


Just give it a couple of seconds to get set up (you'll need a modern browser).

You are now part of a collaborative neural network painting project. What you see is a neural network's approximation of Leonardo da Vinci's masterpiece. While you're on this page, the network uses your browser's resources to further train itself. The network is saved as a checkpoint after a predetermined number of training steps, and anyone visiting the page starts off from the last saved checkpoint.

The network has been set up to train itself very slowly, so it should not eat up too many resources, but in case it does, I apologize in advance. :)

This experiment is powered by ConvNetJS

OpenCV on Android with Java SDK and JNI - Part 2

In Part 1, we set up our Android Studio workspace and project to use the OpenCV4Android SDK. But what if some of your OpenCV code is not in Java but in C/C++? We need to ensure that the C/C++ code is also built into the APK and linked correctly. We already set up the NDK in Part 1, so now we can use it to accomplish the task at hand. Here's how:
  1. Open up app -> build.gradle and add the following just before buildTypes.
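    Something along these lines works (a sketch; the task name, ndk-build arguments and paths are reconstructions and may need adjusting for your project):

      // inside android { }, just before buildTypes { }
      sourceSets.main {
          jni.srcDirs = []                          // disable Gradle's automatic ndk-build call
          jniLibs.srcDirs = ['src/main/jniLibs']    // where we'll drop our .so files (next to opencv_java3.so)
      }

      // run ndk-build ourselves, against our own Android.mk / Application.mk
      project.task('ndkBuild', type: Exec) {
          def ndkDir = android.ndkDirectory
          commandLine "$ndkDir/ndk-build",
                  'NDK_PROJECT_PATH=' + file('src/main').absolutePath,
                  'APP_BUILD_SCRIPT=' + file('src/main/jni/Android.mk').absolutePath,
                  'NDK_APPLICATION_MK=' + file('src/main/jni/Application.mk').absolutePath,
                  'NDK_LIBS_OUT=' + file('src/main/jniLibs').absolutePath
      }

      // make sure the native code is built before the Java code is compiled
      project.tasks.withType(JavaCompile) {
          it.dependsOn 'ndkBuild'
      }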

    • Basically this overrides the default ndkBuild task and instead asks it to build from src/main/jni. It also defines a build script (Android.mk) and an NDK application Makefile (Application.mk), and outputs the result into src/main/jniLibs. Remember jniLibs? That's where we had previously added the architecture-specific opencv_java3.so files. It turns out that .so files written to that directory are included in the final build for a given architecture. So what we'll do is build our code into .so files and add them into the jniLibs directory for the given architecture.
  2. Create a directory jni inside app -> src -> main. This location should contain our native C++ source code. Go ahead and create three files here: a C++ file named native-lib.cpp, and two Makefiles named Android.mk and Application.mk.

  3. Add the following content into Android.mk, Application.mk and native-lib.cpp respectively.
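    Sketches of the three files follow; the OpenCV path placeholder, the com.example.opencvapp package in the JNI function name, and the Canny thresholds are stand-ins you should replace with your own.

    Android.mk (sketch):

      LOCAL_PATH := $(call my-dir)

      include $(CLEAR_VARS)

      # Point this include at your OpenCV4Android SDK
      include <OpenCV for Android Unzipped Path>/sdk/native/jni/OpenCV.mk

      LOCAL_MODULE    := native-lib
      LOCAL_SRC_FILES := native-lib.cpp
      LOCAL_LDLIBS    += -llog

      include $(BUILD_SHARED_LIBRARY)

    Application.mk (sketch):

      APP_STL := gnustl_static
      APP_CPPFLAGS := -frtti -fexceptions
      APP_ABI := armeabi-v7a
      APP_PLATFORM := android-15

    native-lib.cpp (sketch; the package name in the function name is hypothetical):

      #include <jni.h>
      #include <opencv2/core/core.hpp>
      #include <opencv2/imgproc/imgproc.hpp>

      using namespace cv;

      // Name format is Java_<package>_<class>_<method>; com_example_opencvapp is a stand-in package
      extern "C"
      JNIEXPORT void JNICALL
      Java_com_example_opencvapp_MainActivity_nativeCanny(JNIEnv *env, jobject instance,
                                                          jlong matAddr) {
          // The Java side passes the Mat's native address; run Canny (50/150 are example
          // thresholds) and write the edge map back into the same Mat.
          Mat &img = *(Mat *) matAddr;
          Mat edges;
          Canny(img, edges, 50, 150);
          edges.copyTo(img);
      }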
    • You'll have to provide the path to the OpenCV4Android SDK in Android.mk (the include line that pulls in OpenCV.mk).
    • Also, in Application.mk, we mention which architecture we are building for (in my case, armeabi-v7a). You can change it as per your architecture.
    • The code in native-lib.cpp simply takes an OpenCV Mat object (referenced by its address), runs Canny edge detection on it, and saves the result back at the same address.
    • If you're wondering about the esoteric naming of the methods in native-lib.cpp, it's because JNI requires methods to be exposed in that format. Read this to know more.
  4. Okay, so we have native C++ code and a way to build it. Finally, we just need to load this native code in Android and call its methods. We first load the native library statically, using System.loadLibrary. Our static block will now look like this:
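    Building on the static block from Part 1, roughly like so ("native-lib" matches LOCAL_MODULE in the Android.mk sketch above; TAG is the log tag from Part 1):

      static {
          if (OpenCVLoader.initDebug()) {
              Log.d(TAG, "OpenCV loaded");
              // Now also load our own native library (libnative-lib.so)
              System.loadLibrary("native-lib");
          } else {
              Log.d(TAG, "OpenCV not loaded");
          }
      }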

  5. We do this by first declaring the native method in MainActivity.java, just before the class's closing brace:
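    With the sketch above, the declaration would be (nativeCanny is the stand-in name used in native-lib.cpp):

      // Implemented in native-lib.cpp; takes the native address of an OpenCV Mat
      public native void nativeCanny(long matAddr);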
    • Note that the names of the method in Java and C++ are strongly correlated: the C++ function is named Java_packagename_classname_methodname. Make sure you follow this convention, otherwise things WILL break.
  6. Finally, all we have to do is make the call to the native method we've defined. Replace the onCameraFrame method we have with the following:
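    Roughly like so, assuming the nativeCanny declaration above:

      @Override
      public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
          // Grab the current frame in greyscale and hand its native address to C++
          Mat gray = inputFrame.gray();
          nativeCanny(gray.getNativeObjAddr());
          return gray;  // now contains the edge map written back by the native code
      }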

    • As can be seen, this code reads an image from the camera, converts it to greyscale, passes it to the native C++ code (explained above), and then returns the result to the display.
  7. Now simply run MainActivity. If the build goes correctly, you should see libnative-lib.so added to the jniLibs directory under the architecture you're building for, and when the app is pushed to your device, it will show you realtime edge detection.
    libnative-lib.so added to jniLibs
    Running the app. The thumbs up says it all!
That's it! So now we have Android + OpenCV4Android SDK + native C/C++ code working together to perform realtime computer vision tasks. In fact, there are ways in which you can further improve performance of JNI, but let's talk about that some other time.

codex pulchra est

OpenCV on Android with Java SDK and JNI - Part 1

Anyone who has even the slightest understanding of computer vision has at some point worked with OpenCV. It's a super cool (and fast) general-purpose CV library. At a recent hackathon, our hack required getting OpenCV running on an Android device, but because the documentation for doing this is very vague, we spent a lot of time just on the setup itself. What further complicated everything was that we had some code in C++ and some in Java, and we needed to bring all of this together; rewriting the code from one language into the other was not an option. We succeeded in the end, but we knew that our build flow was super hacky. We're engineers and we don't like hacky code (not even at hackathons), so I thought it would be good to document how to do it the right way. So this is how you go about running OpenCV on Android:

Step 1: The Setup

  1. Download the following: Android Studio and the OpenCV for Android SDK zip.
  2. On first launch of Android Studio, it'll download all the latest SDK components (assuming you downloaded it without the SDK), platform-tools, build-tools etc. These are the components it offered to download for me. Note the build-tools version (mine is 24.0.2, yours may differ); you'll need it later on.
  3. Start a new project. Uncheck the checkbox that says "include C++ support". Next.
  4. On Target Android Devices, choose Phone and Tablet with Minimum SDK Version as 15 (That's Ice Cream Sandwich!!). Next.
  5. Add an Empty Activity. Next. On the Customize the Activity screen, just click Next.
  6. Your workspace is now ready! Go to Android -> SDK Manager.  Under SDK Platforms, check Android 6.0 Marshmallow, which is API Level 23 [1]. Under SDK Tools, check NDK (My version shows 12.1.2977051). Click on apply and it'll download the components.
    Open up the SDK Manager
    Download NDK
  7. Once this is done, go to File -> Project Settings and click on Select default NDK for the Android NDK location. This will populate the text field with the path to the NDK you just downloaded.
  8. Unzip the OpenCV for Android zip file into some location.

Step 2: Adding OpenCV for Android SDK

  1. Go to File -> New -> Import Module and then add <OpenCV for Android Unzipped Path>/sdk/java as a module. 
    • You will probably get an error. Let's fix that. In the left menu listing the files of the project, you can switch from Android view to Project view (using the dropdown right at the top).  Open the build.gradle file for the module you just imported (openCVLibrary310 -> build.gradle) and edit values to what's shown below. Make sure the buildToolsVersion is the same as what you noted down in step 2 of setting up. Then select Sync Now and all errors should disappear.
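    The values follow from the setup choices earlier (compile/target SDK 23, build-tools 24.0.2, minimum SDK 15); a sketch of the edited file:

      apply plugin: 'com.android.library'

      android {
          compileSdkVersion 23
          buildToolsVersion "24.0.2"   // match the build-tools version you noted earlier

          defaultConfig {
              minSdkVersion 15
              targetSdkVersion 23
          }
      }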
  2. Right click on app in the left menu, and select Open Module Settings. Select the module app, click on Dependencies, and then add the openCVLibrary310 as a module dependency onto app.
    Open module settings for app
    After adding the dependency.
  3. Create a directory named jniLibs inside app -> src -> main. In this directory we add the architecture-dependent .so files for OpenCV. You'll find all the supported architectures in <OpenCV for Android Unzipped Path>/sdk/native/libs/. Choose your architecture (usually armeabi-v7a) and copy that directory into jniLibs. We only need the libopencv_java3.so file, so go ahead and delete all the other files.
    How it should look

Step 3: Test it out!

  1. Let's test if OpenCV loads properly in our Android app. Go to MainActivity.java and add the following code segment right at the start of the class:
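    Something like the following (imports needed: org.opencv.android.OpenCVLoader and android.util.Log; the tag string is what we'll filter on in a moment):

      private static final String TAG = "OpenCV::Main";

      static {
          if (OpenCVLoader.initDebug()) {
              Log.d(TAG, "OpenCV loaded");
          } else {
              Log.d(TAG, "OpenCV not loaded");
          }
      }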
  2. Now just connect your phone in USB Debugging mode and click on the Run icon at the top to build and push your application to your device [2].
  3. The application builds and pushes to the device and opens up the Android Monitor. In the filter, filter by OpenCV::Main and you should see OpenCV loaded. If instead you see OpenCV not loaded, it may be because you've used the wrong architecture library in step 3 of Adding OpenCV.
    OpenCV loaded. Yay!

Step 4: Let's Code!

We're all set to write our OpenCV code in our Android application. Let's write a simple application which takes camera input, converts it to greyscale, performs a Gaussian blur on it, and returns it to the screen.
  1. Create a file named camera.xml in src -> main -> res -> layout. Add the following in it.
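    A minimal layout along these lines (the camera_view id is a stand-in; it just has to match what MainActivity looks up):

      <?xml version="1.0" encoding="utf-8"?>
      <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
          xmlns:opencv="http://schemas.android.com/apk/res-auto"
          android:layout_width="match_parent"
          android:layout_height="match_parent">

          <!-- OpenCV's camera preview view; the id is referenced from MainActivity -->
          <org.opencv.android.JavaCameraView
              android:id="@+id/camera_view"
              android:layout_width="match_parent"
              android:layout_height="match_parent"
              opencv:camera_id="any"
              opencv:show_fps="true" />

      </FrameLayout>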

  2. Open up src -> main -> AndroidManifest.xml and add the following lines just before the </manifest>
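    Roughly these lines (the camera permission plus the camera hardware features):

      <uses-permission android:name="android.permission.CAMERA"/>

      <uses-feature android:name="android.hardware.camera" android:required="false"/>
      <uses-feature android:name="android.hardware.camera.autofocus" android:required="false"/>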

    • Also change the android:theme to @style/Theme.AppCompat.DayNight.NoActionBar to ensure no bar is seen at the top of the application
  3. Go to src -> main -> java and open MainActivity.java. Replace its contents with:
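    A sketch of what MainActivity.java ends up looking like (the package name and the blur kernel size are stand-ins; the layout and view ids match the camera.xml above; the runtime permission request is needed when targeting API 23):

      package com.example.opencvapp;  // stand-in package name; use your own

      import android.Manifest;
      import android.os.Bundle;
      import android.support.v4.app.ActivityCompat;
      import android.support.v7.app.AppCompatActivity;
      import android.util.Log;
      import android.view.SurfaceView;

      import org.opencv.android.CameraBridgeViewBase;
      import org.opencv.android.OpenCVLoader;
      import org.opencv.core.Mat;
      import org.opencv.core.Size;
      import org.opencv.imgproc.Imgproc;

      public class MainActivity extends AppCompatActivity
              implements CameraBridgeViewBase.CvCameraViewListener2 {

          private static final String TAG = "OpenCV::Main";
          private CameraBridgeViewBase mCameraView;

          static {
              if (OpenCVLoader.initDebug()) {
                  Log.d(TAG, "OpenCV loaded");
              } else {
                  Log.d(TAG, "OpenCV not loaded");
              }
          }

          @Override
          protected void onCreate(Bundle savedInstanceState) {
              super.onCreate(savedInstanceState);
              setContentView(R.layout.camera);

              // On API 23+ the CAMERA permission must also be granted at runtime
              ActivityCompat.requestPermissions(this,
                      new String[]{Manifest.permission.CAMERA}, 1);

              mCameraView = (CameraBridgeViewBase) findViewById(R.id.camera_view);
              mCameraView.setVisibility(SurfaceView.VISIBLE);
              mCameraView.setCvCameraViewListener(this);
          }

          @Override
          protected void onResume() {
              super.onResume();
              // OpenCV was loaded statically above, so we can enable the camera view directly
              mCameraView.enableView();
          }

          @Override
          protected void onPause() {
              super.onPause();
              if (mCameraView != null) mCameraView.disableView();
          }

          @Override
          public void onCameraViewStarted(int width, int height) { }

          @Override
          public void onCameraViewStopped() { }

          @Override
          public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
              // The two lines of OpenCV: greyscale frame, then a Gaussian blur
              Mat gray = inputFrame.gray();
              Imgproc.GaussianBlur(gray, gray, new Size(15, 15), 0);
              return gray;
          }
      }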

    • The OpenCV code is actually all in the onCameraFrame method. As you can see, it takes a total of 2 lines of code to do what we want!
  4. Click the run icon to rebuild and push the application to your device. It should ask for Camera Permissions, which we just added. Give it access, and you should see a realtime Grayscale Gaussian-ed view of whatever the Camera sees!
    Success!
That's the first bit. In the second part of this post, I'll show how we can write native C++ OpenCV code to run on the device using JNI (which is what the NDK we installed initially is for). The advantages of using JNI are both speed and flexibility. I'll also show how you can mix the Android OpenCV code and JNI code if you so wish. Until then...

Codex vincit omnia

----

[1] I usually download a couple of earlier SDKs as well for testing, as you can see (Why?). Nougat (API level 24) should also work just fine, but I use Marshmallow (API level 23).
[2] If you're new to all of this, read this tutorial!

Deep Learning Setup QuickStart on AWS EC2

TL;DR: Use this gist and you're all set! The remainder of this post is a step-by-step guide for a newbie to Deep Learning/AWS.

The Deep Learning ecosystem has matured tremendously over the last few months or so. I've been playing around with some of these applications over the same time period, and it's amazing how much the field has moved ahead in such a short time. Today, terms like Word2Vec, CNN and LSTM have become part of the lingo of nearly every researcher, hobbyist or otherwise.



And of course, AWS makes running deep-learning applications possible at almost no cost. Especially if you lack a good GPU on your personal system, AWS is a godsend. I've been working with some of these amazing frameworks, so I thought it might be good to automate the process of setting up a brand new EC2 instance with everything required to get up and running with deep learning. This includes (broadly) the NVIDIA/CUDA GPU stack along with Theano, TensorFlow and Keras.
So, let's get started:

1) Get yourself an AWS EC2 instance.
Go ahead and get yourself a new instance with an Ubuntu AMI (14.04/16.04 should be fine). Since we obviously want to use the GPU instances, pick either the g2.2xlarge or g2.8xlarge instance type. (Lost?)

2) SSH into the machine and either:
a) Download this zip file and extract it into the machine using:
$ curl "http://link_to_zip" -o dl_setup.zip 
and unzip the file using
$ unzip dl_setup.zip
b) Simply create the 3 files and copy the contents from the gist using your favorite editor. The 3 files are:

i) deep_learning_bootstrap.sh

This is a shell script which installs all the required dependencies and libraries required.


ii) test_install.sh

This is a shell script which tests to ensure that the install is successful.


iii) theano_test.py

This is a small Python script which tests to ensure Theano is using the GPU for its computations.
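The gist has the exact script; a typical check (adapted from the Theano documentation, and not necessarily identical to the author's file) looks something like this:

# Times a simple elementwise computation and reports whether it ran on the GPU
import time
import numpy
from theano import function, config, shared, tensor

vlen = 10 * 30 * 768
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], tensor.exp(x))

t0 = time.time()
for i in range(iters):
    r = f()
print('Looping %d times took %f seconds' % (iters, time.time() - t0))

# If any op in the compiled graph is a plain (non-GPU) Elemwise, we ran on the CPU
if numpy.any([isinstance(node.op, tensor.Elemwise) and
              ('Gpu' not in type(node.op).__name__)
              for node in f.maker.fgraph.toposort()]):
    print('Used the CPU')
else:
    print('Used the GPU')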


3) Next up, run the following to ensure the shell scripts are executable:
chmod +x *.sh
4) Run the bootstrap script using:
./deep_learning_bootstrap.sh
 This will take some time (~5-10 minutes) to download and install all the dependencies.

If a pink screen pops up mentioning "A new version of /boot/grub/menu.lst is available", choose "Keep local version" and select OK.

Once everything is done, you should see the following message:
Reboot System (sudo shutdown -r 0) and run ./test_install.sh
5) Follow the instruction and run:
sudo shutdown -r 0
Your SSH connection will disconnect while your machine reboots. Wait for ~30 seconds and SSH back into the machine. You should see the prompt change to:
ubuntu@D33P_L34RN $
6) Now run the test script to ensure everything has correctly installed:
./test_install.sh
Just look for the messages in green and ensure the output matches it.

And... that's it! You're all set to start developing your own deep learning applications on AWS EC2. Here's a set of examples you can get started with.

Happy coding! :)


Notes: Why both Theano and Tensorflow? Because choice. Honestly, thanks to Keras, using either as a backend is as simple as a string change. Also, most early to mid level posts in the field use one of the two with no strong monopoly, so it simply made sense to have both of them.  

Prologue

Finally after all these years, here's to the beginning of what was there, what is there and hopefully what will remain!! So here are my thoughts & words -Online!!
