faster-rcnn Study Notes

2017-08-05  信步闲庭v

Environment Setup

Microsoft/caffe-master + Windows

My machine is rather weak; its discrete GPU's compute capability is below the requirement, so I did not use GPU + cuDNN acceleration. First, following the instructions in README.md, configure the Python and MATLAB environments in Makefile.config. When setting up the Miniconda2 environment, you need to add E:\ProgramData\Miniconda2 and E:\ProgramData\Miniconda2\Scripts to the system PATH before pip can be used to install numpy and protobuf. The install commands are:

pip install numpy
conda install --yes numpy scipy matplotlib scikit-image pip
pip install protobuf

Once the environment is configured, edit the CommonSettings.props file under windows/, then build the Caffe for Windows source. Remember to add roi_pooling_layer and smooth_l1_loss_layer to the solution; these two layers are not included by default. If you only use the CPU, simply comment out the GPU-related code when running the faster-rcnn code.

BVLC/caffe + Ubuntu

First, download the Caffe source: git clone git://github.com/BVLC/caffe.git
For setting up the third-party dependencies, see this post: www.cnblogs.com/yaoyaoliu/p/5850993.html

Installing cuDNN

Download from developer.nvidia.com/cudnn (registration required). Make sure the cuDNN version is compatible with your CUDA version; I used cuda7.5 & cudnn-7.5-linux-x64-v5.1. After extracting, copy the headers and libraries into the CUDA install path so they can be found at link time:

sudo tar xvf cudnn-7.5-linux-x64-v5.1.tgz
cd cuda
sudo cp include/*.h /usr/local/cuda/include/
sudo cp lib64/lib* /usr/local/cuda/lib64
cd /usr/local/cuda/lib64
sudo chmod +r libcudnn.so.5.1.10
sudo ln -sf libcudnn.so.5.1.10 libcudnn.so.5
sudo ln -sf libcudnn.so.5 libcudnn.so
sudo ldconfig

Editing the Configuration File

# cuDNN acceleration switch (uncomment to build with cuDNN).
USE_CUDNN := 1
# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1
USE_OPENCV := 1
USE_LEVELDB := 1
USE_LMDB := 1
OPENCV_VERSION := 3
CUDA_DIR := /usr/local/cuda
CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
-gencode arch=compute_20,code=sm_21 \
-gencode arch=compute_30,code=sm_30 \
-gencode arch=compute_35,code=sm_35 \
-gencode arch=compute_50,code=sm_50 \
-gencode arch=compute_52,code=sm_52 \
-gencode arch=compute_52,code=compute_52
BLAS := atlas
MATLAB_DIR := /usr/local/MATLAB/R2014a
PYTHON_INCLUDE := /usr/include/python2.7 \
/usr/lib/python2.7/dist-packages/numpy/core/include
PYTHON_LIB := /usr/lib
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib
BUILD_DIR := build
DISTRIBUTE_DIR := distribute
TEST_GPUID := 0
Q ?= @

Building

In the caffe directory, run make all -j8 to build the source. Problems may show up when building matcaffe; if it is a g++ version issue, it can be fixed by adding the following lines to the Makefile:

CXXFLAGS += -MMD -MP
CXXFLAGS += -std=c++11

If the build complains that caffe.pb.h is missing, generate it with protoc caffe.proto --cpp_out=./ and copy it into caffe/include/caffe/proto/. Even after a successful build, this Caffe still errors out under faster-rcnn because the roi_pooling and smooth_l1_loss layers are missing, so the two layers have to be added to the source. caffe.proto also needs editing: add the following to both LayerParameter and V1LayerParameter:

optional ROIPoolingParameter roi_pooling_param = 150;
optional SmoothL1LossParameter smooth_l1_loss_param = 151;

In addition, the parameter messages for the two layers need to be declared:

message ROIPoolingParameter {
  optional uint32 pooled_h = 1 [default = 0]; // The pooled output height
  optional uint32 pooled_w = 2 [default = 0]; // The pooled output width
  optional float spatial_scale = 3 [default = 1]; // Maps ROI coordinates onto the feature map
}
message SmoothL1LossParameter {
  optional float sigma = 1 [default = 1]; // Where the loss switches from quadratic to linear
}
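
For intuition about the sigma field: it controls where the smooth L1 loss switches from quadratic to linear. A minimal MATLAB sketch of the standard formula (my own helper, not the layer's actual C++/CUDA implementation):

% smooth L1 loss, elementwise; x = prediction - target
function loss = smooth_l1(x, sigma)
    s2 = sigma ^ 2;
    quad = abs(x) < 1 / s2;                    % quadratic region near zero
    loss = zeros(size(x));
    loss(quad)  = 0.5 * s2 * x(quad) .^ 2;
    loss(~quad) = abs(x(~quad)) - 0.5 / s2;    % linear in the tails
end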

Even with this configuration, Caffe still fails inside faster-rcnn, with errors such as the dropout scale_train parameter not being declared and Caffe lacking the member function reshape_as_input. Clearly the faster-rcnn fork of Caffe diverges substantially from BVLC Caffe, so I decided to rebuild using the Caffe code provided by shaoqingren.

shaoqingren/caffe + Ubuntu

The Caffe provided by the author is built on cuda6.5, which may not match the CUDA version you have installed; all the cuDNN-related code then runs into compatibility problems. I did not want to reinstall CUDA, so I tried rebuilding with the caffe-master code instead.

Microsoft/caffe-master + Ubuntu

This went fairly smoothly: just remove the file box_annotator_ohem_layer.cpp and use the Makefile.config modified earlier. Done! But I celebrated too soon: this version of Caffe can only run the demo, and training still errors out, so to train on your own data you must build the Caffe source provided by shaoqingren after all.

Code Walkthrough

Overall Pipeline

  1. faster-rcnn feeds the whole image into a CNN to extract features
  2. The RPN generates region proposals, roughly 300 per image
  3. The fast-rcnn network maps each proposal onto the CNN's last convolutional feature map
  4. The ROI pooling layer turns each ROI into a fixed-size feature map (see the sketch after this list)
  5. The fast-rcnn network trains the classification probabilities and the bounding-box regression jointly, with a softmax loss and a smooth L1 loss
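
To make step 4 concrete, here is a minimal MATLAB sketch of ROI max pooling for a single ROI already projected into feature-map coordinates. The names and the rounding are mine; the real layer also handles the spatial_scale projection and degenerate bins:

% feat: h x w x c feature map; roi = [x1 y1 x2 y2] in feature-map coords
function pooled = roi_pool(feat, roi, pooled_h, pooled_w)
    pooled = zeros(pooled_h, pooled_w, size(feat, 3));
    % split the ROI into a pooled_h x pooled_w grid of bins
    xs = round(linspace(roi(1), roi(3) + 1, pooled_w + 1));
    ys = round(linspace(roi(2), roi(4) + 1, pooled_h + 1));
    for i = 1:pooled_h
        for j = 1:pooled_w
            bin = feat(ys(i):ys(i+1)-1, xs(j):xs(j+1)-1, :);
            % max over each bin, independently per channel
            pooled(i, j, :) = max(max(bin, [], 1), [], 2);
        end
    end
end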

Network Architecture

The RPN network:

layer                       input        kernel  pad  stride  output
conv1                       3@800×600    7×7     3    2       96@401×301
pool1                       96@401×301   3×3     1    2       96@201×152
conv2                       96@201×152   5×5     2    2       256@101×77
pool2                       256@101×77   3×3     1    2       256@51×39
conv3                       256@51×39    3×3     1    1       384@51×39
conv4                       384@51×39    3×3     1    1       384@51×39
conv5                       384@51×39    3×3     1    1       256@51×39
conv_proposal1              256@51×39    3×3     1    1       256@51×39
proposal_bbox_pred          256@51×39    1×1     0    1       36@51×39
proposal_cls_score          256@51×39    1×1     0    1       18@51×39
proposal_cls_score_reshape  18@51×39     -       -    -       2@51×351
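
The spatial sizes in the table follow the usual convolution arithmetic, output = (input + 2·pad − kernel)/stride + 1, with the division rounded (Caffe floors it for convolutions and ceils it for pooling, which accounts for the odd sizes above). A quick MATLAB check of a few rows where the division is exact:

% spatial output size of a conv layer (pooling layers use ceil instead)
out_size = @(in, k, pad, stride) floor((in + 2*pad - k) / stride) + 1;

out_size(51, 3, 1, 1)     % conv3/conv4/conv5: 51 -> 51
out_size(101, 3, 1, 2)    % pool2 height: 101 -> 51
out_size(77, 3, 1, 2)     % pool2 width:  77 -> 39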

The fast-rcnn network:

[figure: fast-rcnn network structure]

Training Code

The parameters to configure are model, dataset, conf_proposal, and conf_fast_rcnn. model contains the image mean, the pre-trained network, stage1_rpn, stage1_fast_rcnn, stage2_rpn, stage2_fast_rcnn, and pre_trained_net_file. dataset organizes the training data into imdb and roidb form; an imdb file is a MATLAB table structure in which each row is one image, holding its path, id, size, ground truth, and so on. conf_proposal and conf_fast_rcnn configure the basic parameters of the RPN and fast-rcnn networks.

model = Model.ZF_for_Faster_RCNN_VOC0712;
dataset = Dataset.voc0712_trainval(dataset, 'train', use_flipped);
dataset = Dataset.voc2007_test(dataset, 'test', false);
conf_proposal = proposal_config('image_means', model.mean_image, 'feat_stride', model.feat_stride);
conf_fast_rcnn = fast_rcnn_config('image_means', model.mean_image);

First, the mapping from input image size to the size of the conv5_3 feature map is generated; then the 9 base anchors are produced.

[conf_proposal.anchors, conf_proposal.output_width_map, conf_proposal.output_height_map] ...
    = proposal_prepare_anchors(conf_proposal, model.stage1_rpn.cache_name, model.stage1_rpn.test_net_def_file);
[figure: the 9 base anchors]
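
The 9 base anchors are the combinations of 3 aspect ratios and 3 scales, centered on one feature-map cell. A sketch of the idea in MATLAB (the repository's proposal_generate_anchors differs in its exact centering and rounding):

% 9 base anchors = 3 aspect ratios x 3 scales on a 16x16 base cell
% (16 is the feat_stride of the conv feature map)
base = 16; ratios = [0.5 1 2]; scales = [8 16 32];
anchors = zeros(numel(ratios) * numel(scales), 4);
k = 1;
for r = ratios
    for s = scales
        w = base * s / sqrt(r);      % ratio = height / width
        h = base * s * sqrt(r);
        anchors(k, :) = [-w/2, -h/2, w/2, h/2] + base / 2;  % [x1 y1 x2 y2]
        k = k + 1;
    end
end

With these values the anchor areas are 128², 256², and 512² pixels, matching the scales in the paper.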

Training is done stage-wise, in four stages: train the RPN; train the fast-rcnn network on the proposals extracted by that RPN; retrain the RPN, initialized from the network obtained in the previous stage; and finally train the fast-rcnn network, initialized from the stage-two network, on the proposals extracted by the retrained RPN.

%%  stage one proposal
% train
model.stage1_rpn            = Faster_RCNN_Train.do_proposal_train(conf_proposal, dataset, model.stage1_rpn, opts.do_val);
% test
dataset.roidb_train         = cellfun(@(x, y) Faster_RCNN_Train.do_proposal_test(conf_proposal, model.stage1_rpn, x, y), dataset.imdb_train, dataset.roidb_train, 'UniformOutput', false);
dataset.roidb_test          = Faster_RCNN_Train.do_proposal_test(conf_proposal, model.stage1_rpn, dataset.imdb_test, dataset.roidb_test);

%%  stage one fast rcnn
% train
model.stage1_fast_rcnn      = Faster_RCNN_Train.do_fast_rcnn_train(conf_fast_rcnn, dataset, model.stage1_fast_rcnn, opts.do_val);
% test
opts.mAP                    = Faster_RCNN_Train.do_fast_rcnn_test(conf_fast_rcnn, model.stage1_fast_rcnn, dataset.imdb_test, dataset.roidb_test);

%%  stage two proposal
% net proposal
% train
model.stage2_rpn.init_net_file = model.stage1_fast_rcnn.output_model_file;
model.stage2_rpn            = Faster_RCNN_Train.do_proposal_train(conf_proposal, dataset, model.stage2_rpn, opts.do_val);
% test
dataset.roidb_train         = cellfun(@(x, y) Faster_RCNN_Train.do_proposal_test(conf_proposal, model.stage2_rpn, x, y), dataset.imdb_train, dataset.roidb_train, 'UniformOutput', false);
dataset.roidb_test          = Faster_RCNN_Train.do_proposal_test(conf_proposal, model.stage2_rpn, dataset.imdb_test, dataset.roidb_test);

%%  stage two fast rcnn
% train
model.stage2_fast_rcnn.init_net_file = model.stage1_fast_rcnn.output_model_file;
model.stage2_fast_rcnn      = Faster_RCNN_Train.do_fast_rcnn_train(conf_fast_rcnn, dataset, model.stage2_fast_rcnn, opts.do_val);

The do_proposal_train code is shown below. During sampling, not every position in the image takes part in the gradient back-propagation; instead, random sampling decides which foreground and background rois participate in training. Sampled rois get weight 1, the rest weight 0, which keeps the numbers of foreground and background examples from becoming too lopsided.

    % Preparing training data
    [image_roidb_train, bbox_means, bbox_stds]...
                            = proposal_prepare_image_roidb(conf, opts.imdb_train, opts.roidb_train);

    while (iter_ < max_iter)
        caffe_solver.net.set_phase('train');

        % generate minibatch training data
        [shuffled_inds, sub_db_inds] = generate_random_minibatch(shuffled_inds, image_roidb_train, conf.ims_per_batch);        
        [net_inputs, scale_inds] = proposal_generate_minibatch_fun(conf, image_roidb_train(sub_db_inds));
        
        caffe_solver.net.reshape_as_input(net_inputs);

        % one iter SGD update
        caffe_solver.net.set_input_data(net_inputs);
        caffe_solver.step(1);
        iter_ = caffe_solver.iter();
    end
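
A hedged sketch of that sampling step (the names are mine; in the repository it happens inside the minibatch generation, driven by config fields such as the batch size and the foreground fraction):

% labels: 1 = foreground anchor/roi, 0 = background, -1 = ignored
% returns loss weights: 1 for the sampled examples, 0 elsewhere
function weights = sample_fg_bg(labels, batch_size, fg_fraction)
    fg = find(labels == 1);
    bg = find(labels == 0);
    n_fg = min(round(batch_size * fg_fraction), numel(fg));
    n_bg = min(batch_size - n_fg, numel(bg));
    % random subsample so foreground and background stay balanced
    fg = fg(randperm(numel(fg), n_fg));
    bg = bg(randperm(numel(bg), n_bg));
    weights = zeros(size(labels));
    weights([fg; bg]) = 1;
end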

Detection Code

The demo first loads the final models and constructs the RPN and fast-rcnn networks; the excerpts that follow show the RPN forward pass and then the fast-rcnn stage.

model_dir = fullfile(pwd, 'output', 'faster_rcnn_final', 'faster_rcnn_VOC0712_ZF'); 
proposal_detection_model    = load_proposal_detection_model(model_dir);
proposal_detection_model.conf_proposal.test_scales = opts.test_scales;
proposal_detection_model.conf_detection.test_scales = opts.test_scales;
rpn_net = caffe.Net(proposal_detection_model.proposal_net_def, 'test');
rpn_net.copy_from(proposal_detection_model.proposal_net);
fast_rcnn_net = caffe.Net(proposal_detection_model.detection_net_def, 'test');
fast_rcnn_net.copy_from(proposal_detection_model.detection_net);
    [im_blob, im_scales] = get_image_blob(conf, im);
    im_size = size(im);
    scaled_im_size = round(im_size * im_scales);
    
    im_blob = im_blob(:, :, [3, 2, 1], :); % from RGB to BGR
    im_blob = permute(im_blob, [2, 1, 3, 4]);
    im_blob = single(im_blob);

    net_inputs = {im_blob};

    caffe_net.reshape_as_input(net_inputs);
    output_blobs = caffe_net.forward(net_inputs);

    box_deltas = output_blobs{1};
    featuremap_size = [size(box_deltas, 2), size(box_deltas, 1)];
    box_deltas = permute(box_deltas, [3, 2, 1]);
    box_deltas = reshape(box_deltas, 4, [])';
    
    anchors = proposal_locate_anchors(conf, size(im), conf.test_scales, featuremap_size);
    pred_boxes = fast_rcnn_bbox_transform_inv(anchors, box_deltas);
    
    scores = output_blobs{2}(:, :, end);
    scores = reshape(scores, size(output_blobs{1}, 1), size(output_blobs{1}, 2), []);
    scores = permute(scores, [3, 2, 1]);
    scores = scores(:);

    [scores, scores_ind] = sort(scores, 'descend');
    pred_boxes = pred_boxes(scores_ind, :);
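
    % fast-rcnn stage: copy the shared conv features into the net's data
    % blob, then feed the sampled proposals in as the second input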
    caffe_net.blobs('data').copy_data_from(conv_feat_blob);
    net_inputs = {[], sub_rois_blob};

    % Reshape net's input blobs
    caffe_net.reshape_as_input(net_inputs);
    output_blobs = caffe_net.forward(net_inputs);
    
    pred_boxes = fast_rcnn_bbox_transform_inv(boxes, box_deltas);
    pred_boxes = clip_boxes(pred_boxes, size(im, 2), size(im, 1));
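
fast_rcnn_bbox_transform_inv above applies the standard R-CNN box regression inverse: the deltas [dx dy dw dh] shift each box's center and rescale its width and height. A sketch of the formula (the repository's version differs only in details):

% boxes: n x 4 [x1 y1 x2 y2]; deltas: n x 4 [dx dy dw dh]
function pred = bbox_transform_inv(boxes, deltas)
    w  = boxes(:, 3) - boxes(:, 1) + 1;
    h  = boxes(:, 4) - boxes(:, 2) + 1;
    cx = boxes(:, 1) + 0.5 * w;
    cy = boxes(:, 2) + 0.5 * h;
    pcx = deltas(:, 1) .* w + cx;     % shift the center
    pcy = deltas(:, 2) .* h + cy;
    pw  = exp(deltas(:, 3)) .* w;     % rescale width and height
    ph  = exp(deltas(:, 4)) .* h;
    pred = [pcx - 0.5 * pw, pcy - 0.5 * ph, ...
            pcx + 0.5 * pw, pcy + 0.5 * ph];
end

After clipping, the demo applies per-class non-maximum suppression to the scored boxes before drawing the detections.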

