Building OpenCV for iPhone
2011-10-09 12:54
OpenCV (Open Source Computer Vision) is a library of programming functions for real-time computer vision, containing a huge number of algorithms. OpenCV supports the Windows and Linux platforms (and Android starting from version 2.2). But, unfortunately, there is no official iOS platform support at the moment. In this post I will show that you can build OpenCV for this platform and run it on your iPhone or iPad.
Preparation
So, we will need:
Xcode (with a developer profile, to be able to debug on a device)
CMake
A fresh OpenCV checkout
The SVN command-line tool or any GUI SVN client
Getting the latest version of OpenCV is pretty easy - just check it out from the public SVN repository:
mkdir opencv-library
cd opencv-library
svn co https://code.ros.org/svn/opencv/trunk/opencv
I suggest using the most recent trunk revision rather than the official releases, because the WillowGarage developers usually commit several bug fixes per week.
Building
Now that we have a copy of the OpenCV source code, we need to generate an Xcode workspace to build it. Run CMake (I prefer the CMake GUI, but the command-line tool works too). Select the correct source code directory, specify the folder where the generated workspace should be placed, and hit the "Configure" button. Choose the Xcode generator in the dialog window.
In the CMake window, we should change a few flags:
BUILD_SHARED_LIBS = NO
BUILD_NEW_PYTHON_SUPPORT = NO
BUILD_EXAMPLES = NO
ENABLE_SSE* = NO (all of them)
WITH_EIGEN2 = NO
WITH_PVAPI = NO
WITH_OPENEXR = NO
WITH_QT = NO
WITH_QUICKTIME = NO
And one more important option, which defines the folder where the library headers will be placed:
CMAKE_INSTALL_PREFIX = <Your path here>
In this article I use the following directory structure:
opencv-library/opencv – Sources from SVN repository
opencv-library/build – Build directory
opencv-library/install-dir – Install directory
Hit "Configure" and then "Generate". We are ready to build OpenCV.
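For reference, the same configuration can be done without the GUI. This is a sketch only: it assumes it is run from the empty build directory, and that the flag names shown above match your OpenCV revision (they change between versions).

```shell
# Hypothetical command-line equivalent of the GUI steps above,
# run from inside the empty opencv-library/build directory.
cmake -G Xcode \
  -D BUILD_SHARED_LIBS=NO \
  -D BUILD_NEW_PYTHON_SUPPORT=NO \
  -D BUILD_EXAMPLES=NO \
  -D ENABLE_SSE=NO -D ENABLE_SSE2=NO -D ENABLE_SSE3=NO \
  -D WITH_EIGEN2=NO \
  -D WITH_PVAPI=NO \
  -D WITH_OPENEXR=NO \
  -D WITH_QT=NO \
  -D WITH_QUICKTIME=NO \
  -D CMAKE_INSTALL_PREFIX=../install-dir \
  ../opencv
```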
Locate OpenCV.xcodeproj in your build directory and load it into Xcode. Here we also need to apply a couple of tricks.
1) Change the SDK from the Mac OS SDK to the iOS SDK. This is important, because it tells Xcode the correct architecture (armv6/armv7).
2) Disable "Compile for Thumb" for all projects. Disabling Thumb doubles the speed of floating-point operations [1]!
Notice:
Disabling the "Compile for Thumb" option is only relevant for the iPhone 3G and older models. If you are targeting the iPhone 3GS and newer models with a modern CPU (armv7 architecture), you don't need to disable "Compile for Thumb". Thanks to Shervin Emami, who pointed out this nuance.
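If you drive the build from a script rather than from the Xcode UI, the same toggle can be applied on the command line. Treat this as a sketch: GCC_THUMB_SUPPORT is, to my knowledge, the build setting behind "Compile for Thumb" in this generation of Xcode, but verify the name against your toolchain.

```shell
# Sketch: batch-build the Release configuration for the device with Thumb
# disabled (relevant for armv6-era devices only, per the notice above).
# GCC_THUMB_SUPPORT is assumed to be the setting behind "Compile for Thumb".
xcodebuild -project OpenCV.xcodeproj -configuration Release \
  -sdk iphoneos GCC_THUMB_SUPPORT=NO
```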
Finally, we are ready to build something! We have to build both the Debug and Release configurations, for both the device and the simulator platforms. But when we start building the library, we get a lot of compilation errors in the highgui project.
This happens because OpenCV uses QTKit for camera capture, and that framework is missing from iOS. There are two ways out: remove the problematic files from the project (which requires some knowledge of CMake syntax), or simply not use highgui in your project.
This means you won't be able to use some very convenient functions: cvLoadImage, cvShowImage, cvCreateCaptureFromCam, cvCreateCaptureFromFile, etc. But it's actually not a big problem: the iOS API provides all the methods you need to do such things.
The inelegant but working solution is to simply remove all projects that use the highgui library from the workspace (and highgui itself). After these manipulations, all projects should compile without problems. Now we have four versions of statically linked libraries, and I suggest using the lipo tool to create fat binaries for the OpenCV libraries. It is very convenient to have one lib file for both the Simulator (i386 architecture) and the Device (armv6/armv7).
Bringing it all together
Here is a bash script that will merge them together:

# Create armv7 + i386 OpenCV library
mkdir -p build/lib/universal
lipo -create build/lib/Release-iphoneos/libopencv_calib3d.a build/lib/Release-iphonesimulator/libopencv_calib3d.a -output build/lib/universal/libopencv_calib3d.a
lipo -create build/lib/Release-iphoneos/libopencv_contrib.a build/lib/Release-iphonesimulator/libopencv_contrib.a -output build/lib/universal/libopencv_contrib.a
lipo -create build/lib/Release-iphoneos/libopencv_core.a build/lib/Release-iphonesimulator/libopencv_core.a -output build/lib/universal/libopencv_core.a
lipo -create build/lib/Release-iphoneos/libopencv_features2d.a build/lib/Release-iphonesimulator/libopencv_features2d.a -output build/lib/universal/libopencv_features2d.a
lipo -create build/lib/Release-iphoneos/libopencv_gpu.a build/lib/Release-iphonesimulator/libopencv_gpu.a -output build/lib/universal/libopencv_gpu.a
lipo -create build/lib/Release-iphoneos/libopencv_imgproc.a build/lib/Release-iphonesimulator/libopencv_imgproc.a -output build/lib/universal/libopencv_imgproc.a
lipo -create build/lib/Release-iphoneos/libopencv_legacy.a build/lib/Release-iphonesimulator/libopencv_legacy.a -output build/lib/universal/libopencv_legacy.a
lipo -create build/lib/Release-iphoneos/libopencv_ml.a build/lib/Release-iphonesimulator/libopencv_ml.a -output build/lib/universal/libopencv_ml.a
lipo -create build/lib/Release-iphoneos/libopencv_objdetect.a build/lib/Release-iphonesimulator/libopencv_objdetect.a -output build/lib/universal/libopencv_objdetect.a
lipo -create build/lib/Release-iphoneos/libopencv_video.a build/lib/Release-iphonesimulator/libopencv_video.a -output build/lib/universal/libopencv_video.a
lipo -create build/lib/Release-iphoneos/libopencv_flann.a build/lib/Release-iphonesimulator/libopencv_flann.a -output build/lib/universal/libopencv_flann.a
lipo -create build/3rdparty/lib/Release-iphoneos/libopencv_lapack.a build/3rdparty/lib/Release-iphonesimulator/libopencv_lapack.a -output build/lib/universal/libopencv_lapack.a
lipo -create build/3rdparty/lib/Release-iphoneos/liblibjpeg.a build/3rdparty/lib/Release-iphonesimulator/liblibjpeg.a -output build/lib/universal/liblibjpeg.a
lipo -create build/3rdparty/lib/Release-iphoneos/liblibpng.a build/3rdparty/lib/Release-iphonesimulator/liblibpng.a -output build/lib/universal/liblibpng.a
lipo -create build/3rdparty/lib/Release-iphoneos/libzlib.a build/3rdparty/lib/Release-iphonesimulator/libzlib.a -output build/lib/universal/libzlib.a
lipo -create build/lib/Debug-iphoneos/libopencv_calib3d.a build/lib/Debug-iphonesimulator/libopencv_calib3d.a -output build/lib/universal/libopencv_calib3dd.a
lipo -create build/lib/Debug-iphoneos/libopencv_contrib.a build/lib/Debug-iphonesimulator/libopencv_contrib.a -output build/lib/universal/libopencv_contribd.a
lipo -create build/lib/Debug-iphoneos/libopencv_core.a build/lib/Debug-iphonesimulator/libopencv_core.a -output build/lib/universal/libopencv_cored.a
lipo -create build/lib/Debug-iphoneos/libopencv_features2d.a build/lib/Debug-iphonesimulator/libopencv_features2d.a -output build/lib/universal/libopencv_features2dd.a
lipo -create build/lib/Debug-iphoneos/libopencv_gpu.a build/lib/Debug-iphonesimulator/libopencv_gpu.a -output build/lib/universal/libopencv_gpud.a
lipo -create build/lib/Debug-iphoneos/libopencv_imgproc.a build/lib/Debug-iphonesimulator/libopencv_imgproc.a -output build/lib/universal/libopencv_imgprocd.a
lipo -create build/lib/Debug-iphoneos/libopencv_legacy.a build/lib/Debug-iphonesimulator/libopencv_legacy.a -output build/lib/universal/libopencv_legacyd.a
lipo -create build/lib/Debug-iphoneos/libopencv_ml.a build/lib/Debug-iphonesimulator/libopencv_ml.a -output build/lib/universal/libopencv_mld.a
lipo -create build/lib/Debug-iphoneos/libopencv_objdetect.a build/lib/Debug-iphonesimulator/libopencv_objdetect.a -output build/lib/universal/libopencv_objdetectd.a
lipo -create build/lib/Debug-iphoneos/libopencv_video.a build/lib/Debug-iphonesimulator/libopencv_video.a -output build/lib/universal/libopencv_videod.a
lipo -create build/lib/Debug-iphoneos/libopencv_flann.a build/lib/Debug-iphonesimulator/libopencv_flann.a -output build/lib/universal/libopencv_flannd.a
lipo -create build/3rdparty/lib/Debug-iphoneos/libopencv_lapack.a build/3rdparty/lib/Debug-iphonesimulator/libopencv_lapack.a -output build/lib/universal/libopencv_lapackd.a
lipo -create build/3rdparty/lib/Debug-iphoneos/liblibjpeg.a build/3rdparty/lib/Debug-iphonesimulator/liblibjpeg.a -output build/lib/universal/liblibjpegd.a
lipo -create build/3rdparty/lib/Debug-iphoneos/liblibpng.a build/3rdparty/lib/Debug-iphonesimulator/liblibpng.a -output build/lib/universal/liblibpngd.a
lipo -create build/3rdparty/lib/Debug-iphoneos/libzlib.a build/3rdparty/lib/Debug-iphonesimulator/libzlib.a -output build/lib/universal/libzlibd.a
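Maintaining one hand-written lipo line per library is error-prone. As a sketch, the command list can be generated from the library names instead; merge_cmds is a hypothetical helper (not part of the original script), and its output should be reviewed before piping it to sh.

```shell
# Hypothetical helper: print one "lipo -create" command per library name,
# merging the device and simulator builds into a universal output folder.
merge_cmds() {
  local dev="$1" sim="$2" out="$3"
  shift 3
  for name in "$@"; do
    echo "lipo -create $dev/$name $sim/$name -output $out/$name"
  done
}

# Example: generate the Release merge commands for two of the libraries,
# then inspect (or pipe to sh) the printed commands.
merge_cmds build/lib/Release-iphoneos build/lib/Release-iphonesimulator \
  build/lib/universal libopencv_core.a libopencv_imgproc.a
```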
The universal binaries will be placed in the build/lib/universal folder. "build" is the name of the directory you entered in the first stage, when generating the workspace in CMake.
The include headers can be obtained by running the install target in Xcode; they end up under build/include. My approach does not involve creating a private framework; you are responsible for manually adding the include and link directories to your project. If you have experience making a private framework, write me an email and I will add it to the article.
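To run the install target from a script rather than from the Xcode UI, something along these lines should work. The project and target names are assumptions taken from the generated workspace; verify them against your build directory.

```shell
# Sketch: invoke the generated "install" target from the command line
# instead of the Xcode UI, so the headers are copied out of the build tree.
xcodebuild -project OpenCV.xcodeproj -target install -configuration Release
```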
Bonus
Here is a precompiled version of the OpenCV library. I've used revision 4771.
Download: Precompiled OpenCV library
Also, I've published an automated build script that can build OpenCV for iPhone in one click.
Download: OpenCV Build Script
References
1. Break That Thumb For Best iPhone Performance
Tagged as: Apple, CMake, iOS, iPhone, OpenCV, Xcode
Comments
Shervin Emami
December 19th, 2010 - 18:42
Nice article. But the article that told you to disable Thumb is misleading, because it was about an old iPhone (probably the iPhone 2G), whereas all the recent iPhones (iPhone 3G, 3GS, iPhone 4, iPad, and all recent iPod Touches) have a different CPU for which it is usually best to turn Thumb on! To be more specific: if you compile for ARMv6 (such as for the iPhone 3G or older) and mainly do floating-point work, then you should disable Thumb, whereas if you mainly use integer operations (which is common for image processing such as OpenCV) or you compile for ARMv7 (for the iPhone 3GS or newer), then Thumb will make your app download faster and usually run faster too.
(More details are on the website of ARM, who make the Cortex-A8 CPU in iPhones & iPads.)
EKhvedchenya
December 19th, 2010 - 19:14
Thanks a lot for the clarification! I've added your comments to this post.
EKhvedchenya
December 21st, 2010 - 20:46
I noticed that when building OpenCV for ARM processors, the SIFT feature detector is excluded from compilation via macros. But there is one use of the SIFT detector left in OpenCV that causes link-time errors.
Posted a ticket: https://code.ros.org/trac/opencv/ticket/772.
Hope they will fix it sometime.
Tom
December 27th, 2010 - 12:36
I can compile the OpenCV code, but couldn't get the install target to work, as it had problems with the absence of the highgui libs. However, I just used your download to get the include directory. After adding the libs to the frameworks of an existing iPhone project and setting the header path and library include path manually, I hit a snag: I get multiple compilation errors in the OpenCV header files.
Do I have to change some compilation setting in the project?
EKhvedchenya
December 27th, 2010 - 14:22
Hi! This depends on which settings you are using now. To get the OpenCV headers I used the "Install" target from the Mac OS X build, so the OpenCV headers in my package are unchanged with respect to the originals.
Since the WillowGarage guys change the include convention from version to version, it's hard to give an all-in-one solution. But for my project I use the following include-folder configuration:
Added include search path:
$(ExternalLibs)/opencv_2.2.0/include <- into this folder I copied both the "opencv" and "opencv2" folders from the OpenCV package.
And include OpenCV like this:
#include <opencv2/opencv.hpp>
Tom
December 27th, 2010 - 16:06
Hmm, I did all that, and get this error on the include directory:
/Documents/opencv-svn/opencv-install/include/opencv2/core/core.hpp:523: error: statement-expressions are allowed only inside functions
which is this line:
typedef Vec<_Tp, MIN(m, n)> diag_type;
I guess the compiler is complaining about the MIN function. Seems I'm not the only one with the problem:
http://stackoverflow.com/questions/3810800/how-to-include-opencv-in-cocoa-application
Anyway, I appreciate all the help and the excellent article, but I really can't get it to work, and my Objective-C/Xcode knowledge isn't good enough to solve it :s
EKhvedchenya
December 27th, 2010 - 16:23
Ah, I see the problem. I solved it by moving all OpenCV-dependent code outside the Objective-C files.
However, I believe a correct solution exists. If I find it, you will be the first to know about it.
Tom
December 27th, 2010 - 17:15
Thanks! That's great. I'll use your workaround for the time being.
Michael
January 7th, 2011 - 18:08
Thanks for the article. I am having the same issue as Tom and have tried a lot for 2 hours. Now I am kind of stuck and cannot find a solution. Any new hints to solve the problem?
EKhvedchenya
January 9th, 2011 - 16:27
Hi Michael!
I can recommend the following:
1) Include the OpenCV headers at the beginning of your .mm files. You are using .mm (this extension tells the compiler to build the code as Objective-C++), aren't you?
2) Try to separate the OpenCV code from the Objective-C code. There are several reasons:
- Objective-C is slower than C++ code, so writing any processing logic in it is a bad idea.
- A kind of single responsibility principle: let Objective-C be responsible for all the UI-related tasks, as it should be, and move all data-processing logic into C++ code. That way you will 100% avoid having to include OpenCV headers in .mm files.
Aziz
January 10th, 2011 - 13:12
Hello,
Thanks a lot for the article. I have been successfully using the OpenCV C API within Objective-C code. However, when I try to switch to C++ I get the same problem as Michael. What I did is:
1) Added #include to the view controller implementation.
2) Renamed my view controller implementation to .mm
If I keep the view controller in .m format, everything compiles fine and I can use the OpenCV C API. I read in several places (http://stackoverflow.com/questions/3890552/problem-when-import-c-header-file-in-iphone-ipad-project) that when including a C++ header in a .mm file, one should wrap it in:
#ifdef __cplusplus
#include "mycppheader.h"
#endif
This does not seem to help though.
Could you please elaborate on how you got around the problem?
Also, I've noticed that OpenCV is pretty slow on iPhone 4. It takes me around 500 ms to detect a face in a downsampled 512x512 image. Do you have similar results?
Thanks in advance!
Aziz
EKhvedchenya
January 11th, 2011 - 19:25
Hi!
Thanks for the comment.
About Objective-C + OpenCV: I will share tips on how to make them live together as soon as I have enough time, because I am currently busy developing a new type of feature descriptor.
As for OpenCV's speed on the iPhone, there are two reasons for the slowdown:
1) The iPhone CPU is 2-3 times slower than a PC processor.
2) OpenCV has a lot of low-level optimizations for the x86 architecture (SSE intrinsics, TBB). When targeting the ARM architecture they do not apply, and generic C code is used instead.
So if you really want fast processing, the best way is to learn assembler and the ARM processor architecture, and use the ARM NEON SIMD engine for fast data processing.
I will show how to improve the performance of several typical image-processing operations in future posts.
Aziz
January 13th, 2011 - 16:10
Thanks a lot! Looking forward to it.
Amine Jaidi
January 22nd, 2011 - 01:29
Hey,
I found your article quite interesting. It's a good workaround for the iPhone, as you use CMake to compile for x86 and then switch to ARM. Your script didn't work for me, however, so what I did was compile the armv6 libraries in Debug and the x86 simulator libraries in Release, and combine them afterwards with a different script. But where are the other libraries, such as libcv and libcvaux?
Thanks
EKhvedchenya
January 24th, 2011 - 08:17
Those modules were removed starting from OpenCV 2.0. If you have to use older versions, you will have to adapt my scripts. However, I suggest using the latest OpenCV in all cases.
Claudio
January 25th, 2011 - 20:24
Hi, this article was very useful for me, but I have the same problem with the C++ include files. I read that some of you resolved this by keeping all the CV stuff outside of Objective-C. Would someone please explain how you did this? And how can you use your functions/classes inside the viewcontroller.m, for example when you tap a button?
I'd really appreciate your help, because I'm trying to make software for visually impaired people using the iPhone...
Thanks
Claudio (Italy)
Aziz
January 26th, 2011 - 13:08
Hi Claudio,
First of all, try renaming all of your .m files (including both the AppDelegate and the ViewController) to .mm, and their .h headers to .hpp. Check if your errors are gone. I have done the code separation as follows:
1) MyOpenCVCode.h is included in the ViewController, has no includes, and contains a struct like this:
typedef struct
{
char *imageData;
int imageSize;
int width;
int height;
int depth;
int nChannels;
int widthStep;
} ImgData;
And a function declaration that receives something of this type, i.e.:
void gotFrame(ImgData* data);
2) MyOpenCVCode.cpp includes all the necessary OpenCV headers and has a function:
void convertToIplImage(const ImgData& data, IplImage*& capture)
This allows me to convert the above-mentioned struct, received by the gotFrame function, to an OpenCV IplImage and do further processing on it. If you adjust the struct, you can probably convert directly to the newer Mat structure used in opencv2; I have not looked into that, though. For more information on conversion between IplImage and UIImage see: http://niw.at/articles/2009/03/14/using-opencv-on-iphone/en
3) Finally, I have a method in my ViewController that converts the ImgData struct back to a UIImage. Again, see Yoshimasa Niwa's site for how to do that in Objective-C. I draw into the same pixel buffer that I pass to my OpenCV code in the ImgData struct, so I do not need to return anything from my OpenCV code. It might be different in your case.
Most probably this is not the best method, but it works and lets me concentrate on the actual OpenCV coding.
Aziz
January 26th, 2011 - 13:14
I forgot to mention that I also have a function in my ViewController that converts a UIImage to an ImgData struct. In fact, I use the ImgData struct to pass data back and forth between the iOS code and the OpenCV code. This allows me to eliminate the OpenCV includes from my iOS code.
trezor
January 29th, 2011 - 14:37
Hello, I successfully followed the tutorial, but when I tried to use FeatureDetector I got this linking error:
Undefined symbols:
"_gzputs", referenced from:
icvPuts(CvFileStorage*, char const*) in libopencv_core.a(persistence.o)
ld: symbol(s) not found
collect2: ld returned 1 exit status
I also append an excerpt from my code:
using namespace cv;
// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
FeatureDetector *detector = new StarFeatureDetector();
... and so on.
Does anybody know a solution for this?
Thank you.
trezor
January 29th, 2011 - 14:54
OK, never mind, I forgot to link libzlib.a.
Jordan
February 7th, 2011 - 22:02
I compiled the libraries and adapted the sample app written by Yoshimasa Niwa (http://niw.at/articles/2009/03/14/using-opencv-on-iphone/en) to use the library I generated. I thought I had a home run when the app worked in the simulator in both Debug and Release mode, and on the device in Debug mode. However, when I build the Release version for the device, I get:
"___restore_vfp_d8_d15_regs", referenced from:
(a few hundred lines from libopencv_core.a and libopencv_imgproc.a)
"___save_vfp_d8_d15_regs", referenced from:
(a few hundred lines from libopencv_core.a and libopencv_imgproc.a)
ld: symbol(s) not found
Any ideas on how to fix this? I was so excited when it worked for the first three configurations.
Thanks.
EKhvedchenya
February 7th, 2011 - 22:07
Hello!
I think this topic can be helpful: http://stackoverflow.com/questions/2804953/xcode-linking-error-when-targeting-armv7
KM Senthil Kumar
February 10th, 2011 - 16:16
Steps I followed:
a) Used the CMake GUI to generate the OpenCV Xcode project - successfully got the Xcode project by following the steps mentioned above.
b) Opened the Xcode project, set the Base SDK to iPhone Device 4.0, then removed the 5 highgui-related projects. Now when I click Build, it builds, but it throws an assertion-failed error as below:
File: /SourceCache/DevToolsBase/DevToolsBase-1691/pbxcore/PBXContainerItemProxy.m
Line: 277
Object:
Method: _containerPortal
Assertion failed: _containerPortal == [self container] || ([_containerPortal isKindOfClass:[PBXFileReference class]] && [(PBXFileReference *)_containerPortal container] == [self container])
Can anybody help me with this?
EKhvedchenya
February 10th, 2011 - 20:06
Hi! Yep, I experience this bug too. I don't really know where the problem is. As far as I noticed, you can ignore it and continue working with the project.
It's also a good idea to comment out the inclusion of the highgui module in the CMake files. If you succeed, please share with us how you did it.
Aziz
February 15th, 2011 - 15:31
Proper editing of the CMake files fixes the mentioned problem. Here is a diff:
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 79a3125..3600b1b 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -227,7 +227,6 @@ else()
set(OPENCV_BUILD_3RDPARTY_LIBS FALSE CACHE BOOL "Build 3rd party libraries")
endif()
-include(OpenCVPCHSupport.cmake REQUIRED)
include(OpenCVModule.cmake REQUIRED)
if(UNIX)
@@ -341,23 +340,6 @@ macro(CHECK_MODULE module_name define)
set(ALIAS_LIBRARIES ${ALIAS}_LIBRARIES)
PKG_CHECK_MODULES(${ALIAS} ${module_name})
-
- if (${ALIAS_FOUND})
- set(${define} 1)
- foreach(P "${ALIAS_INCLUDE_DIRS}")
- if (${P})
- list(APPEND HIGHGUI_INCLUDE_DIRS ${${P}})
- endif()
- endforeach()
-
- foreach(P "${ALIAS_LIBRARY_DIRS}")
- if (${P})
- list(APPEND HIGHGUI_LIBRARY_DIRS ${${P}})
- endif()
- endforeach()
-
- list(APPEND HIGHGUI_LIBRARIES ${${ALIAS_LIBRARIES}})
- endif()
endif()
endmacro()
diff --git a/modules/CMakeLists.txt b/modules/CMakeLists.txt
index ce50df3..ebd1395 100644
--- a/modules/CMakeLists.txt
+++ b/modules/CMakeLists.txt
@@ -14,7 +14,6 @@ if(MSVC OR MINGW)
endif()
add_subdirectory(ts)
-add_subdirectory(highgui)
add_subdirectory(imgproc)
add_subdirectory(legacy)
add_subdirectory(contrib)
@@ -27,11 +26,6 @@ endif()
add_subdirectory(video)
-if(NOT ANDROID)
-add_subdirectory(haartraining)
-add_subdirectory(traincascade)
-add_subdirectory(gpu)
-endif()
Gordon
February 24th, 2011 - 03:10
Hey man,
I tried using your precompiled version in hopes of saving myself some headache. I get errors when including cv.h, as opencv2/flann/flann.hpp, opencv2/legacy/compat.hpp and the corresponding libraries are missing from your zip file.
Also, when I compile for the device and the simulator, the library files just overwrite one another in the build/lib folder, instead of going into the paths you've mentioned. Is this an Xcode setting? For now I'll just move the files out when I recompile in a different configuration.
Regardless, thanks for the tutorial; this is the best thing I've found on the web regarding an OpenCV port for iOS.
EKhvedchenya
February 24th, 2011 - 16:50
Hi man!
Come back in two or three days. I'm working right now on a build script which will do all the dirty work. In just one click you will get the static libs for OpenCV.
I will also update the precompiled package with all the missing libs and headers.
EKhvedchenya
February 26th, 2011 - 17:17
Hi all. Check this silver bullet: http://computer-vision-talks.com/2011/02/building-opencv-for-iphone-in-one-click/
nic
May 2nd, 2011 - 17:53
Hello, I have the same problem as Tom (December 27th, 2010 - 16:06:
/Documents/opencv-svn/opencv-install/include/opencv2/core/core.hpp:523: error: statement-expressions are allowed only inside functions
which is this line:
typedef Vec<_Tp, MIN(m, n)> diag_type;
I guess the compiler is complaining about the MIN function. Seems I'm not the only one with the problem)
with a sample that I found here:
http://code.google.com/p/morethantechnical/source/browse/#svn/trunk/FaceDetector-iPhone
Have you found a solution to this problem? Thanks!
EKhvedchenya
May 4th, 2011 - 11:10
Do your best to separate the pure C/C++ code from the Objective-C code, and ensure that the include files go in the correct order.
tyua
May 19th, 2011 - 15:15
Compared to Android, OpenCV is much slower on the iPhone.
I wonder if not having OpenMP enabled on iOS slows the lib down quite a bit, and I also wonder what else slows OpenCV down so much on iOS.
Any clue?
EKhvedchenya
May 23rd, 2011 - 12:26
How did you measure performance?
tyua
May 23rd, 2011 - 14:05
I benchmarked OpenCV SURF on both, over several resolutions and images. The Android version is boosted by the NEON + OpenMP compile options and is actually much faster on Android (Desire) than on the iPhone 4 (250 ms average for Android, 2 s average on Android).
tyua
May 23rd, 2011 - 14:06
Meaning 2 s average on the iPhone 4. The iPad 2 is quite a bit faster, though.
EKhvedchenya
May 23rd, 2011 - 15:15
Well, here is the answer: OpenCV for iOS has been built without intrinsics and OpenMP. AFAIK, WillowGarage decided to bring official iOS support to OpenCV, so I hope they will add iOS-optimized versions of all the heavy algorithms.
tyua
May 23rd, 2011 - 16:32
Do you know where I might find some information about the current status of OpenCV for iPhone, and notably whether GPGPU solutions are scheduled (not CUDA, but OpenGL ES 2.0)?
Thank you.
EKhvedchenya
May 23rd, 2011 - 16:47
Well, watch the recent changes at http://opencv.willowgarage.com/wiki/RecentChanges
and do a brief review of their commits: https://code.ros.org/trac/opencv/timeline
This is where I get all the major news updates (the official press release is usually made 1-2 weeks later or so).
As far as I know, there are no plans to add OpenCL support.
Ashrad
May 26th, 2011 - 16:17
Hi,
can anybody tell me how to call FastDescriptor? I can't find it, and whenever I write it, Xcode tells me it's not declared in that scope.
I'm quite new to Xcode and iPhone programming.
EKhvedchenya
May 26th, 2011 - 16:20
Exactly like on other platforms:
#include <opencv2/opencv.hpp>
cv::FastFeatureDetector detector;
detector.detect(...)
Ashrad
May 26th, 2011 - 16:28
But when I try to write it this way, I get these errors:
"cv::FeatureDetector::~FeatureDetector()", referenced from:
cv::FastFeatureDetector::~FastFeatureDetector() in MyAVController.o
"vtable for cv::FastFeatureDetector", referenced from:
__ZTVN2cv19FastFeatureDetectorE$non_lazy_ptr in MyAVController.o
(maybe you meant: __ZTVN2cv19FastFeatureDetectorE$non_lazy_ptr)
"cv::FastFeatureDetector::FastFeatureDetector(int, bool)", referenced from:
-[MyAVController captureOutput:didOutputSampleBuffer:fromConnection:] in MyAVController.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
EKhvedchenya
May 26th, 2011 - 16:37
You didn't pass the OpenCV libs to the linker input. See http://computer-vision-talks.com/2011/01/using-opencv-in-objective-c-code/ for how to pass the necessary libs.
And add ALL the OpenCV libraries to the linker input (I assume you did build OpenCV for iOS), with debug libraries going to the debug target and release libraries to the release target.
Good luck!
Ashrad
May 26th, 2011 - 16:56
Thanks for your help. The project built without errors, but autocomplete doesn't work for the detector, so I wanted to know whether that is normal or something is wrong.
tobiasz
June 9th, 2011 - 13:02
Hey,
so is there no option to use the SIFT algorithm on the iPhone?
When I try to compile the line with the SIFT detector:
GridAdaptedFeatureDetector detector(new SiftFeatureDetector( 0.006, 3.0, 4, 3, -1, 0), DESIRED_FTRS, 4, 4);
I get these errors:
Undefined symbols:
"cv::SIFT::getDetectorParams() const", referenced from:
cv::SiftFeatureDetector::write(cv::FileStorage&) const in libopencv_features2d.a(detectors.o)
"cv::SIFT::getCommonParams() const", referenced from:
cv::SiftFeatureDetector::write(cv::FileStorage&) const in libopencv_features2d.a(detectors.o)
EKhvedchenya
June 10th, 2011 - 22:22
Well, AFAIK, it is available in trunk.
At which revision did this happen?
Chris
July 29th, 2011 - 15:34
I was able to link against these libraries for arch=armv7 (the iOS device), but I got exactly the same error when linking for the simulator (arch=i386). Has anyone had any luck resolving these symbols for a simulator build?
Thanks, Chris