Sunday, January 30, 2011

Cross-Compiling OpenCV 2.1 for ARM CPUs

It's unbelievable, but it has been exactly one year since I wrote my last post on this blog. It has been a busy year and I have worked almost every day with the new C++ interface of OpenCV. It has greatly improved with the last two versions, 2.1 and 2.2, but there are still some issues that need fixing.

One thing I was particularly interested in last year was compiling OpenCV 2.1 for ARM CPUs, which lets you develop applications for mobile devices without needing to compile against the iOS toolchain or anything like that. When it comes to compiling OpenCV 2.1 with the CodeSourcery g++ compiler there are a few errors that need addressing first, although many known bugs from the previous version are already fixed. Since we run our ARM (TI OMAP35x) without a Unix-like OS, most of the problems are OS-related. Below you'll find a list of all the errors I encountered, each with a hint for a quick fix.

One of the biggest issues when compiling OpenCV 2.1 for ARM is the "cxrand.cpp" file, which implements a random number generator that uses threads. Because "pthreads" are a problem when you don't have a suitable OS that takes care of threading, I had to remove the code in order to compile the library successfully. So if you want random numbers you may have to implement them on your own. Since I'm not entirely sure which OpenCV functions use this random number generator, be careful!
Error:  
cxcore/cxrand.cpp:594: error: 'pthread_key_t' does not name a type
Solution:
Replace lines 634-668 with:

RNG& theRNG()
{
    CV_Assert(false); // runtime stop if this function is actually used!
    RNG* rng = NULL;
    return *rng;      // never reached; keeps the signature intact
}
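
If you do need random numbers on the target, a simpler alternative to removing the functionality entirely is possible, as long as only one thread ever calls into OpenCV. The following is just a sketch of mine (untested on the OMAP board), not part of the original sources:

// Hedged alternative sketch: one static cv::RNG instead of the pthread-based
// thread-local storage. Only valid if a single thread uses OpenCV.
RNG& theRNG()
{
    static RNG rng;  // default-constructed, fixed initial seed
    return rng;
}
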
There is also a strange error that seems to come from the CodeSourcery G++ itself. When compiling 'cv/cvsmooth.cpp' the compiler breaks with
Error:
cv/cvsmooth.cpp:1149: internal compiler error: Segmentation fault
Maybe a newer version of the compiler will help, but there's also a simple workaround: just use the compiler flag "-verbose" and the file will compile successfully.
Solution:
use compiler flag '-verbose' for gcc/g++
There is also an error in the highgui part. Since it is not (easily) possible to exclude the whole highgui module from the compilation, which would seem reasonable since most of its functionality can't be used on the target anyway, we need to address the issue. The solution is quite simple: just add '#include <unistd.h>'.

Error:

highgui/loadsave.cpp: In function 'void* cv::imdecode_(const cv::Mat&, int, int, cv::Mat*)':
highgui/loadsave.cpp:333: error: 'unlink' was not declared in this scope

Solution:

#include <unistd.h>

The next error isn't far away; this time it only shows up once the linker runs. The problem is again missing OS functionality ('mkdir' has no implementation on our target). But instead of simply including a header, replace the following lines in 'cvcommon.cpp'. Doing this means that you need to define NO_OS before compiling!
Error: 
../../lib/libcvhaartraining.a(cvcommon.obj): In function `icvMkDir(char const*)':
cvcommon.cpp:(.text._Z8icvMkDirPKc+0x90): undefined reference to `mkdir'
Solution:
[...]
#else /* _WIN32 */
#ifdef NO_OS
    assert(false); /* no file system available on the target */
#else
    if( stat( path, &st ) != 0 )
    {
        if( mkdir( path, mode ) != 0 ) return 0;
    }
#endif
#endif /* _WIN32 */

The next error occurs when compiling 'traincascade/imagestorage.h'; here, too, it's only a missing include:
Error:
/apps/traincascade/features.h:4, from /apps/traincascade/cascadeclassifier.h:5, from /apps/traincascade/traincascade.cpp:2:
/apps/traincascade/imagestorage.h:27: error: ISO C++ forbids declaration of 'FILE' with no type
/apps/traincascade/imagestorage.h:27: error: expected ';' before '*' token
Solution:
#include <stdio.h>

This should help you get OpenCV 2.1 up and running on your ARM machine. Some of these bugs are already in the official OpenCV bug tracker; I hope they are fixed by version 2.3, since most of them are still present in 2.2.

Next time I'll write about unit testing with OpenCV datatypes using the great GTest (googletest) framework, which comes with a BSD licence and can therefore be used commercially and free of charge. GoogleTest provides some nice features to test your code and has built-in support for compilation on other target systems like an ARM CPU. I hope it won't take another year to write that post... ;)
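
As a little preview, here is a minimal sketch of what such a test could look like. This isn't from my test suite; the test name is made up and I assume the OpenCV 2.2 header layout:

#include <gtest/gtest.h>
#include <opencv2/core/core.hpp>

// A tiny GoogleTest case for an OpenCV datatype: build a matrix, clone it
// and assert that both contain exactly the same values.
TEST(MatBasics, CloneIsIdentical)
{
    cv::Mat a = cv::Mat::eye(3, 3, CV_32F);
    cv::Mat b = a.clone();

    // (a != b) yields a mask of differing elements; zero non-zeros means equal
    EXPECT_EQ(0, cv::countNonZero(a != b));
}

int main(int argc, char** argv)
{
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}
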

Saturday, January 30, 2010

Thin-plate spline example

Hello again,
Since there have been some questions about an example, I will show you how to use the CThinPlateSpline class. First of all we have to load an image from disk. The image can be of any size, depth or color format. Just call


// load a nice picture
cv::Mat img = cv::imread("C:\\lena512color.jpg");




Now that we have an image, we should set some reference points on which the spline algorithm will evaluate the distortion. Normally you would use an interest point detector, but it was easier to just add some generic points to the dataset.
 
// generate some generic points
// usually you would use an interest point detector such as SURF or SIFT

std::vector<cv::Point> iP, iiP;

// push some points into the vector for the source image
iP.push_back(cv::Point(50,50));
iP.push_back(cv::Point(400,50));
iP.push_back(cv::Point(50,400));
iP.push_back(cv::Point(400,400));
iP.push_back(cv::Point(256,256));
iP.push_back(cv::Point(150,256));

// push some points into the vector for the dst image
iiP.push_back(cv::Point(70,70));
iiP.push_back(cv::Point(430,60));
iiP.push_back(cv::Point(60,410));
iiP.push_back(cv::Point(430,420));
iiP.push_back(cv::Point(220,280));
iiP.push_back(cv::Point(180,240));



Now that the tedious part is done, let's create a CThinPlateSpline object that will do all the work for you.

 // create thin plate spline object and put the vectors into the constructor

CThinPlateSpline tps(iP,iiP);

The only thing we have to do now is to call the internal warping function and set the correct parameters.

// warp the image to dst
Mat dst;
tps.warpImage(img,dst,0.01,INTER_CUBIC,BACK_WARP);


To see the results, just call OpenCV's imshow function:

// show images
cv::imshow("original",img);
cv::imshow("distorted",dst);
cv::waitKey(0);


Because it's quite a tradition, I used our good old gal "Lena" to show you the advantages of the spline algorithm. Just take a look.

Thin-plate spline interpolation scheme

A few days have passed since the last post. Since then I have been busy implementing my new CThinPlateSpline class, which wraps the functionality I intended to provide you with. To give you all better access, I started a project on code.google.com where you can check out the sources and use them.

svn checkout http://ipwithopencv.googlecode.com/svn/trunk/

This link gives you access to the svn repository, where you can find the actual source code in the trunk folder. I did not have the time to check the code intensively for bugs, so I hope you will send me bug reports if you find something.

The code is not really in its final state, since I want to add more TPS approximations and improvements. But as it is now, you should get acquainted with the code really fast, since it's kind of self-explanatory. Nonetheless I will try to post a simple code example so everyone knows how to use the class.

I also took the time to comment the code, at least the header file. It was done in doxygen style, so there is a small but nice documentation included in the project; you can find it in the "doc" folder next to the sources. If you are really trying to use my code base, take a look there.

Since time was of the essence, I simply uploaded the VisualStudio2010 (beta2) project to the server. I know this isn't the nicest way to provide you with my code, but know this: you really only need two files to get going with your own project. Simply check out the following files:

CThinPlateSpline.h
CThinPlateSpline.cpp

All functionality is provided by those two files. Just add them to your own project and link against the OpenCV 2.0 libraries. In the coming week I will try to provide you with a more convenient approach: I'm planning to wrap the sources into a library or a DLL (for Windows users), so you only have to link against it instead of compiling the code yourself.

Saturday, January 23, 2010

Non-rigid image transformations with Thin-Plate-Spline interpolation scheme

It has been several months since I wrote anything, but because there were many questions about my OpenCV implementation of Bookstein's Thin-Plate-Spline algorithm (Bookstein-ThinPlateSpline [pdf]), I decided to write a few words about it and show you my implementation, so that everyone who might need the capabilities of the algorithm can use it.
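
For those who haven't read the paper: the heart of the method is an interpolation function that combines an affine part with a weighted sum of radial basis functions centred on the landmark points. Roughly in Bookstein's notation (my summary, not a quote from the paper):

f(x, y) = a_1 + a_x x + a_y y + \sum_{i=1}^{n} w_i \, U\left( \lVert P_i - (x, y) \rVert \right),
\qquad U(r) = r^2 \log r^2

The weights w_i and the affine coefficients are found by solving a linear system built from the landmark correspondences, and the term that gets regularized during warping is the bending energy, a quadratic form in the w_i.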

Since my implementation was written over a year ago, it is based on the old OpenCV 1.1pre release. Since the new C++ interface introduced with OpenCV 2.0 has more capabilities and is much more convenient to use, I think it is best to port the TPS code to this version.

Over the next few days I will rework my functions into an easy-to-use C++ class and show you my implementation, which is somewhat improved over the originally introduced spline algorithm. I will start today to wrap my source into that class and write several posts to show you the steps, the results, and the process of working with the thin plate spline.

Monday, November 2, 2009

Using OpenCV with MacOS 10.6 Snow Leopard - Part II

Since I found out that it is quite unintuitive to work with the OpenCV framework created with the script I introduced in my first post, I'll show you how to compile OpenCV 2.0 using CMake and ffmpeg for video in- and output.

I recently tested the video capability with my Logitech QuickCam Communicate STX using the "macam" drivers. It worked like a charm, but I didn't get a good framerate - this should be a camera problem though!
 
1. GET OPENCV SOURCES 

So let's start. The first thing we need is the OpenCV source code. Just follow the previous guide and use an svn client to get the current sources.

>> svn co https://opencvlibrary.svn.sourceforge.net/svnroot/opencvlibrary/trunk


2. INSTALL FFMPEG

We also have to install "ffmpeg", which is done using MacPorts. Just type

>> sudo port -v install ffmpeg

This should take a while. Meanwhile you can get acquainted with CMake. Now we have to solve a little problem: since OpenCV looks for the ffmpeg header files only in "/opt/local/include", we have to copy them there to make sure that the CMake script will find them. I know that isn't the cleanest solution, but I couldn't think of something more "advanced", so if you have another solution feel free to post it.

So to get everything to work just search for the folders
  • libavcodec
  • libavdevice
  • libavfilter
  • libavformat
  • libavutil
Copy them to the include folder and go on with the next steps.

3. INSTALL CMAKE 

The next thing we have to do is get a copy of CMake. You can get the newest release on the project homepage "http://www.cmake.org/". Install it and open the CMake GUI.

Start the application, look for the "Where is the source code" line and browse to the OpenCV folder where your svn client put the data. In the following I'll call this directory OPENCV_DIR. Create a folder, say "_make", in OPENCV_DIR and put its path into the "Where to build the binaries" line in CMake. Now hit the "Configure" button and CMake will ask you which compiler you want to use. Just use the default one, which should be XCode if you have installed the IDE already. After configuring you will see a red colored window with many compile options.

In the text output you should find an entry telling you "libavformat - found", and in the ffmpeg section of the results table there should be a line that says "gentoo-style:     1". If this is the case everything worked out just fine and we can go on to configure OpenCV.

4. CONFIGURING OPENCV USING CMAKE
This is something individual, but my settings should work for most of you out there - everyone who needs a different setting should know what to do when reading the parameter list. So do the following:
 
Uncheck:
  • BUILD_EXAMPLES
  • BUILD_TEST
This should do the trick. Just hit "Configure" again, everything should turn white and the "Generate" button will appear. Hit it and wait. In the OPENCV_DIR/_make/ folder you should now find an XCode project file which you can double-click. Just compile it in Release and/or Debug mode. The libs will be located in OPENCV_DIR/_make/lib/. You will have to add them to the XCode library path or copy them into the project by drag-and-dropping them into the project window (worked like a charm for me).

Make sure you are in debug mode and compile your OpenCV program. The debug mode will work out just fine. I haven't had luck with the release mode though - XCode doesn't seem to find the libraries anymore and I have absolutely no clue why...

Sunday, October 4, 2009

Using OpenCV2.0 with MacOS 10.6 and XCode 3.2 - PART I

First of all, I'm completely new to programming under MacOS. Some time ago I tried to develop some iPhone applications but didn't find the time to finish my tasks. With the new OpenCV release a few days ago I decided to try to install the library on my MacMini. As I posted yesterday, the compilation went just fine, but including the library in XCode wasn't as easy as I thought it would be - after some hours of trial and error I did eventually get the system up and running.

Here is what I've done so far. First of all you should get the newest OpenCV snapshot from the svn server. This is quite simple if you've already installed an svn client on your Mac. If this isn't the case you should first install 'fink' and/or 'port' to get native Unix applications on your system. After installing one of these packaging systems you have to install a few more tools. These are:
  • subversion
  • libjpeg
  • libpng
  • libtiff
To install them just open the 'Terminal' and type: 'sudo port -v install subversion libjpeg libpng libtiff'

After all of these libraries are installed you should check out the OpenCV svn folder. This is really simple but takes a while. First create a folder 'OpenCV' and browse there with your terminal. Then all you have to do is type: 'svn co https://opencvlibrary.svn.sourceforge.net/svnroot/opencvlibrary/trunk'

Once everything is synchronized between your folder and the svn server there should be a folder 'trunk/opencv' in your path. There you'll find a shell script "make_frameworks.sh" which will configure and compile the OpenCV library as a private framework to work with XCode. Just type in the terminal: './make_frameworks.sh'.


When the script finally finishes, there should be a file in your folder with the name "OpenCV.framework". This private framework can now easily be included in XCode.

This is as far as I will go today. Next time I'll describe how to successfully integrate the framework into XCode.

Saturday, October 3, 2009

OpenCV 2.0a Released!

This is my very first try at getting the new OpenCV 2.0 version running on Mac OS 10.6 (Snow Leopard). While I'm writing this, the library is still compiling, and I'm hoping I will get OpenCV to work with the new XCode delivered with Snow Leopard.

I built OpenCV yesterday on a Windows machine using Visual Studio 2008 and CMake. The build process worked out just fine, but there were some problems concerning the test environment. When executing "cvtest.exe" it crashed right after the "hist_backprojection" test. I read on the mailing list that I'm not the only person with this problem, so I added a bug report. Let's wait and see where the problem is.

Despite the failing test, the library works just fine under Windows with my own built libs. The new C++ interface is great and lets you code more easily than with the old ANSI-C style.
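
Just to illustrate what I mean, here is a small sketch of my own (not code from the library documentation; the file names are made up) showing why the C++ interface feels nicer - no manual memory management anymore:

#include <cv.h>       // OpenCV 2.0 headers also expose the C++ interface
#include <highgui.h>

int main()
{
    // Old C style for comparison:
    //   IplImage* img = cvLoadImage("lena.jpg");
    //   ... cvReleaseImage(&img);   // easy to forget
    cv::Mat img = cv::imread("lena.jpg");
    if (img.empty())
        return 1;

    cv::Mat gray;
    cv::cvtColor(img, gray, CV_BGR2GRAY); // cv::Mat manages its own memory
    cv::imwrite("lena_gray.jpg", gray);
    return 0;                             // no manual release needed
}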