Building your own Qt project with OpenPose in QtCreator
2017-10-31 15:59
For a recent project I needed OpenPose, a display UI built with Qt, and the Hikvision camera SDK to capture images.
Build environment: Ubuntu 14.04 x64, Qt 5.8
1. First install and build OpenPose following the tutorial in its GitHub repository; the build process itself is not covered here.
2. After the build finishes, run make distribute, which generates a distribute folder. Copy the library files from its lib/ subfolder into your project directory, then copy the library files from 3rdparty/caffe/distribute/lib/ into the project directory as well.
3. Modify the project's .pro file as follows. The paths (e.g. /home/xxx/... and /home/neu-lu/...) must be adapted to your own setup; with this in place you can call OpenPose code to implement whatever functionality you need.
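The build-and-copy step above can be sketched as shell commands. The OpenPose checkout path and the Qt project path below are placeholders, not the author's actual directories; adjust both to your own setup:

```shell
# Placeholder paths -- adjust to your own setup
OPENPOSE_ROOT=$HOME/openpose
PROJECT_DIR=$HOME/qt_project/pose_test

cd "$OPENPOSE_ROOT"
make distribute                          # generates the distribute/ folder

# Copy the OpenPose and bundled Caffe shared libraries into the project directory
cp distribute/lib/*.so*                "$PROJECT_DIR"/
cp 3rdparty/caffe/distribute/lib/*.so* "$PROJECT_DIR"/
```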
QT += core gui
greaterThan(QT_MAJOR_VERSION, 4): QT += widgets

TARGET = pose_test
TEMPLATE = app

# The following define makes your compiler emit warnings if you use
# any feature of Qt which has been marked as deprecated (the exact warnings
# depend on your compiler). Please consult the documentation of the
# deprecated API in order to know how to port your code away from it.
DEFINES += QT_DEPRECATED_WARNINGS

# You can also make your code fail to compile if you use deprecated APIs.
# In order to do so, uncomment the following line.
# You can also select to disable deprecated APIs only up to a certain version of Qt.
#DEFINES += QT_DISABLE_DEPRECATED_BEFORE=0x060000    # disables all the APIs deprecated before Qt 6.0.0

SOURCES += main.cpp \
    mainwindow.cpp

HEADERS += mainwindow.h \
    openpose_test.h

FORMS += mainwindow.ui

# Add header include paths
INCLUDEPATH += /home/xxx/opencv-2.4.13/include/opencv
INCLUDEPATH += /home/xxx/opencv-2.4.13/include/opencv2
INCLUDEPATH += /home/xxx/dlib
INCLUDEPATH += /usr/include
INCLUDEPATH += /usr/local/include
INCLUDEPATH += /usr/local/cuda-8.0/include
INCLUDEPATH += /home/xxx/openpose/include
INCLUDEPATH += /home/xxx/openpose/3rdparty/caffe/include

# Add library search paths
LIBS += -L/usr/lib/
LIBS += -L/usr/local/lib/
LIBS += -L/usr/lib/x86_64-linux-gnu
LIBS += -L/usr/local/cuda-8.0/lib64/
LIBS += -L/usr/local/cuda/lib/
LIBS += -L/usr/lib/nvidia-375/
LIBS += -lcurand -lcublas -lcublas_device -lcudnn -lcudart_static
LIBS += -L/home/neu-lu/opencv-2.4.13/build/lib \
    -lopencv_core \
    -lopencv_imgproc \
    -lopencv_highgui \
    -lopencv_ml \
    -lopencv_video \
    -lopencv_features2d \
    -lopencv_calib3d \
    -lopencv_objdetect \
    -lopencv_contrib \
    -lopencv_legacy \
    -lopencv_flann
LIBS += -lGLU -lGL -lglut
LIBS += -lcudnn -lglog -lgflags -lboost_system -lboost_filesystem -lm -lboost_thread
LIBS += -pthread -fPIC -std=c++11 -fopenmp
LIBS += -L/home/neu-lu/qt_project/pose_test \
    -lopenpose -lcaffe
4. Example program: this program uses its own method to read images and convert them to cv::Mat, then uses the OpenPose model to extract and display the human pose. It is fairly simple; for convenience, the video is read and processed directly with OpenCV rather than through OpenPose's built-in image-reading interface. The code is as follows:
void MainWindow::on_pushButton_clicked()
{
    //openPoseDemo();
    //openPoseTutorialPose1();
    op::log("OpenPose Library Tutorial - Example 1.", op::Priority::High);
    // ------------------------- INITIALIZATION -------------------------
    // Step 1 - Set logging level
    //     - 0 will output all the logging messages
    //     - 255 will output nothing
    op::check(0 <= FLAGS_logging_level && FLAGS_logging_level <= 255,
              "Wrong logging_level value.", __LINE__, __FUNCTION__, __FILE__);
    op::ConfigureLog::setPriorityThreshold((op::Priority)FLAGS_logging_level);
    op::log("", op::Priority::Low, __LINE__, __FUNCTION__, __FILE__);
    // Step 2 - Read Google flags (user defined configuration)
    // outputSize
    const auto outputSize = op::flagsToPoint(FLAGS_output_resolution, "-1x-1");
    // netInputSize
    const auto netInputSize = op::flagsToPoint(FLAGS_net_resolution, "-1x368");
    // poseModel
    const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
    // Check no contradictory flags enabled
    if (FLAGS_alpha_pose < 0. || FLAGS_alpha_pose > 1.)
        op::error("Alpha value for blending must be in the range [0,1].",
                  __LINE__, __FUNCTION__, __FILE__);
    if (FLAGS_scale_gap <= 0. && FLAGS_scale_number > 1)
        op::error("Incompatible flag configuration: scale_gap must be greater than 0 or scale_number = 1.",
                  __LINE__, __FUNCTION__, __FILE__);
    // Enabling Google Logging
    const bool enableGoogleLogging = true;
    // Logging
    op::log("", op::Priority::Low, __LINE__, __FUNCTION__, __FILE__);
    // Step 3 - Initialize all required classes
    op::ScaleAndSizeExtractor scaleAndSizeExtractor(netInputSize, outputSize,
                                                    FLAGS_scale_number, FLAGS_scale_gap);
    op::CvMatToOpInput cvMatToOpInput;
    op::CvMatToOpOutput cvMatToOpOutput;
    op::PoseExtractorCaffe poseExtractorCaffe{poseModel, FLAGS_model_folder, FLAGS_num_gpu_start,
                                              {}, op::ScaleMode::ZeroToOne, enableGoogleLogging};
    op::PoseCpuRenderer poseRenderer{poseModel, (float)FLAGS_render_threshold,
                                     !FLAGS_disable_blending, (float)FLAGS_alpha_pose};
    op::OpOutputToCvMat opOutputToCvMat;
    op::FrameDisplayer frameDisplayer{"OpenPose Tutorial - Example 1", outputSize};
    // Step 4 - Initialize resources on desired thread
    // (in this case single thread, i.e. we init resources here)
    poseExtractorCaffe.initializationOnThread();
    poseRenderer.initializationOnThread();
    //cv::Mat inputImage = cv::imread("/home/neu-lu/openpose/examples/media/COCO_val2014_000000000257.jpg");
    //cv::imshow("test",inputImage);
    cv::VideoCapture cap;
    cap.open("video.avi");
    cv::Mat frame;
    while (cap.isOpened())
    {
        cap >> frame;
        cv::Mat inputImage = frame;
        if (inputImage.empty())
            op::error("Could not open or find the image: ");
            //op::error("Could not open or find the image: " + FLAGS_image_path, __LINE__, __FUNCTION__, __FILE__);
        const op::Point<int> imageSize{inputImage.cols, inputImage.rows};
        // Step 2 - Get desired scale sizes
        std::vector<double> scaleInputToNetInputs;
        std::vector<op::Point<int>> netInputSizes;
        double scaleInputToOutput;
        op::Point<int> outputResolution;
        std::tie(scaleInputToNetInputs, netInputSizes, scaleInputToOutput, outputResolution)
            = scaleAndSizeExtractor.extract(imageSize);
        // Step 3 - Format input image to OpenPose input and output formats
        const auto netInputArray = cvMatToOpInput.createArray(inputImage, scaleInputToNetInputs, netInputSizes);
        auto outputArray = cvMatToOpOutput.createArray(inputImage, scaleInputToOutput, outputResolution);
        // Step 4 - Estimate poseKeypoints
        poseExtractorCaffe.forwardPass(netInputArray, imageSize, scaleInputToNetInputs);
        const auto poseKeypoints = poseExtractorCaffe.getPoseKeypoints();
        // Step 5 - Render poseKeypoints
        poseRenderer.renderPose(outputArray, poseKeypoints, scaleInputToOutput);
        // Step 6 - OpenPose output format to cv::Mat
        auto outputImage = opOutputToCvMat.formatToCvMat(outputArray);
        // ------------------------- SHOWING RESULT AND CLOSING -------------------------
        // Step 1 - Show results
        //frameDisplayer.displayFrame(outputImage, 0); // + cv::waitKey(0)
        cv::imshow("test", outputImage);
        // Step 2 - Logging information message
        //cv::imshow("test",frame);
        cv::waitKey(1);
    }
}