Flow-Guided Feature Aggregation

2018-08-03  晓智AI

Research Background

A video object detection algorithm, studied in preparation for a competition.

References

GitHub code
mxnet

Environment Setup

python 2.7
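
For reference, the virtual environment could be created roughly like this (a sketch; the name flow1 matches the listing below, and the pip packages shown are the ones that appear in it):

conda create -n flow1 python=2.7
source activate flow1
pip install Cython easydict opencv-python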

# packages in environment at /home/ouc/anaconda3/envs/flow1:
#
# Name                    Version                   Build  Channel
blas                      1.0                         mkl  
ca-certificates           2018.03.07                    0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
certifi                   2018.4.16                py27_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cudatoolkit               8.0                           3  
cudnn                     7.0.5                 cuda8.0_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
Cython                    0.28.4                    <pip>
dill                      0.2.8.2                   <pip>
easydict                  1.4                      py27_0    auto
easydict                  1.6                       <pip>
freetype                  2.9.1                h8a8886c_0  
intel-openmp              2018.0.3                      0  
jpeg                      9b                   h024ee3a_2  
libedit                   3.1                  heed3624_0  
libffi                    3.2.1                hd88cf55_4  
libgcc-ng                 7.2.0                hdf63c60_3  
libgfortran-ng            7.2.0                hdf63c60_3  
libopenblas               0.2.20               h9ac9557_7  
libpng                    1.6.34               hb9fc6fc_0  
libprotobuf               3.5.2                h6f1eeef_0  
libstdcxx-ng              7.2.0                hdf63c60_3  
libtiff                   4.0.9                he85c1e1_1  
mkl                       2018.0.3                      1  
mkl_fft                   1.0.4            py27h4414c95_1  
mkl_random                1.0.1            py27h4414c95_1  
mxnet                     0.10.0                    <pip>
ncurses                   6.0                  h9df7e31_2  
numpy                     1.15.0           py27h1b885b7_0  
numpy-base                1.15.0           py27h3dfced4_0  
olefile                   0.45.1                   py27_0  
openblas                  0.2.20                        4  
openblas-devel            0.2.20                        7  
opencv                    2.4.11                 nppy27_0    menpo
opencv-python             3.2.0.6                   <pip>
openssl                   1.0.2o               h14c3975_1    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pillow                    5.2.0            py27heded4f4_0  
pip                       10.0.1                   py27_0  
protobuf                  3.5.2            py27hf484d3e_1  
python                    2.7.14              h89e7a4a_22  
readline                  7.0                  ha6073c6_4  
setuptools                40.0.0                    <pip>
setuptools                3.3                      py27_0    auto
six                       1.11.0                   py27_1  
sqlite                    3.23.1               he433501_0  
tk                        8.6.7                hc745277_3  
wheel                     0.31.1                   py27_0  
xz                        5.2.4                h14c3975_4  
zlib                      1.2.11               ha838bed_2  

Running the Demo

Traceback (most recent call last):
  File "setup_linux.py", line 63, in <module>
    CUDA = locate_cuda()
  File "setup_linux.py", line 58, in locate_cuda
    for k, v in cudaconfig.iteritems():
AttributeError: 'dict' object has no attribute 'iteritems'

Check the virtual environment: dict.iteritems() was removed in Python 3, so this error means setup_linux.py is running under Python 3.6; switch the environment from Python 3.6 to Python 2.7.
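
If you would rather keep a Python 3 environment, an alternative minimal patch (my own sketch, not part of the repo) is to change the loop in locate_cuda(), since items() behaves the same here on both interpreters; the function and variable names are taken from the traceback above:

Before (Python 2 only):   for k, v in cudaconfig.iteritems():
After (Python 2 and 3):   for k, v in cudaconfig.items():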

(flow) ouc@ouc-yzb:~/LiuHongzhi/Flow-Guided-Feature-Aggregation-new/Flow-Guided-Feature-Aggregation/incubator-mxnet$ make -j4
Makefile:35: mshadow/make/mshadow.mk: No such file or directory
Makefile:36: /home/ouc/LiuHongzhi/Flow-Guided-Feature-Aggregation-new/Flow-Guided-Feature-Aggregation/incubator-mxnet/dmlc-core/make/dmlc.mk: No such file or directory
Makefile:131: /home/ouc/LiuHongzhi/Flow-Guided-Feature-Aggregation-new/Flow-Guided-Feature-Aggregation/incubator-mxnet/ps-lite/make/ps.mk: No such file or directory
make: *** No rule to make target '/home/ouc/LiuHongzhi/Flow-Guided-Feature-Aggregation-new/Flow-Guided-Feature-Aggregation/incubator-mxnet/ps-lite/make/ps.mk'.  Stop.

Note that even after running the following two steps, the mshadow directory is still empty:

(flow) ouc@ouc-yzb:~/LiuHongzhi/Flow-Guided-Feature-Aggregation-new/Flow-Guided-Feature-Aggregation$ git clone --recursive https://github.com/apache/incubator-mxnet.git


(flow) ouc@ouc-yzb:~/LiuHongzhi/Flow-Guided-Feature-Aggregation-new/Flow-Guided-Feature-Aggregation/incubator-mxnet$ git submodule update

Solution: update the submodules recursively:

git submodule update --init --recursive

This fixes the problem of git clone not pulling down all of the submodules.
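
Before re-running make, a quick way to confirm the submodules are actually populated (a sketch; the paths match the Makefile errors above):

cd incubator-mxnet
git submodule status                           # no line should start with '-', which marks an uninitialized submodule
ls mshadow/make dmlc-core/make ps-lite/make    # the .mk files the Makefile complained about should now be present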

git checkout v0.10.0
git submodule update

cp -r ../fgfa_rfcn/operator_cxx/* src/operator/contrib

make -j $(nproc) USE_OPENCV=1 USE_BLAS=openblas USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=0
conda install -c https://conda.binstar.org/menpo opencv

Reference: installing OpenCV2 for Python with Anaconda2.
When the following problem appears, the PKG_CONFIG_PATH environment variable needs to be set:

Package opencv was not found in the pkg-config search path.
Perhaps you should add the directory containing `opencv.pc'
to the PKG_CONFIG_PATH environment variable
No package 'opencv' found

An example of setting the PKG_CONFIG_PATH environment variable:
Find the directory containing opencv.pc, e.g. /home/ouc/anaconda3/envs/flow/lib/pkgconfig/,
and export it:

export PKG_CONFIG_PATH=/home/ouc/anaconda3/envs/flow/lib/pkgconfig/:$PKG_CONFIG_PATH
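
With the variable exported, pkg-config should be able to resolve OpenCV again; a quick check:

pkg-config --modversion opencv        # prints the OpenCV version instead of "No package 'opencv' found"
pkg-config --cflags --libs opencv     # the include/library flags the build will pick up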

Reference: Package OpenCV not found? Let’s Find It.

In file included from src/operator/tensor/././sort_op.h:85:0,
from src/operator/tensor/./indexing_op.h:24,
from src/operator/tensor/indexing_op.cu:8:
src/operator/tensor/./././sort_op-inl.cuh:10:44: fatal error: cub/device/device_radix_sort.cuh: No such file or directory
#include <cub/device/device_radix_sort.cuh>
^
compilation terminated.
make: *** [build/src/operator/tensor/indexing_op_gpu.o] Error 1

The fix is to git clone this new submodule into the mxnet directory, overwriting the existing cub folder.
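
A sketch of that step (the repository URL is an assumption, since the link from the original post is not preserved here; upstream CUB lives at NVlabs/cub):

cd incubator-mxnet
rm -rf cub
git clone https://github.com/NVlabs/cub.git cub   # assumed upstream; use the repo linked in the source post if it differs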

/tmp/ccOS1IcD.o: In function `main':
im2rec.cc:(.text.startup+0x2f0f): undefined reference to `cv::imencode(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, cv::_InputArray const&, std::vector<unsigned char, std::allocator<unsigned char> >&, std::vector<int, std::allocator<int> > const&)'
collect2: error: ld returned 1 exit status
Makefile:264: recipe for target 'bin/im2rec' failed
make: *** [bin/im2rec] Error 1

Reference link: 深度学习主机软件环境平台安装小记 (notes on installing a software environment for a deep-learning host).

Traceback (most recent call last):
  File "./fgfa_rfcn/demo.py", line 21, in <module>
    from utils.image import resize, transform
  File "/home/ouc/LiuHongzhi/Flow-Guided-Feature-Aggregation/fgfa_rfcn/../lib/utils/image.py", line 12, in <module>
    from PIL import Image
ImportError: No module named PIL

Solution: conda install Pillow

(flow) ouc@ouc-yzb:~/LiuHongzhi/Flow-Guided-Feature-Aggregation-new/Flow-Guided-Feature-Aggregation/incubator-mxnet$ git submodule update --init --recursive

fatal: reference is not a tree: 89de7ab20167909bc2c4f8acd397671c47cf3c0d
Unable to checkout '89de7ab20167909bc2c4f8acd397671c47cf3c0d' in submodule path 'cub'

Solution (from a post about git submodule errors, using the Sourcetree tool; the submodule in that example is ReactiveCocoa):
1. Go into the corresponding submodule, ReactiveCocoa (Submodule):

cd /Users/zhanglizhi/Desktop/项目_hh/ReactiveCocoa

2. Check its status:

git status

3. Return to the main branch:

git checkout master

4. Then the submodule can be updated:

git submodule update --remote
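
Mapped onto this repo, the same recipe would look roughly as follows (the cub path is inferred from the error message above; note that --remote moves the submodule to its branch tip, which may differ from the commit mxnet v0.10.0 pins):

cd incubator-mxnet/cub
git status
git checkout master
cd ..
git submodule update --remote cub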

Traceback (most recent call last):
  File "./fgfa_rfcn/demo.py", line 257, in <module>
    main()
  File "./fgfa_rfcn/demo.py", line 159, in main
    arg_params=arg_params, aux_params=aux_params)
  File "/home/ouc/LiuHongzhi/Flow-Guided-Feature-Aggregation-new/Flow-Guided-Feature-Aggregation/fgfa_rfcn/core/tester.py", line 37, in __init__
    self._mod.bind(provide_data, provide_label, for_training=False)
  File "/home/ouc/LiuHongzhi/Flow-Guided-Feature-Aggregation-new/Flow-Guided-Feature-Aggregation/fgfa_rfcn/core/module.py", line 844, in bind
    for_training, inputs_need_grad, force_rebind=False, shared_module=None)
  File "/home/ouc/LiuHongzhi/Flow-Guided-Feature-Aggregation-new/Flow-Guided-Feature-Aggregation/fgfa_rfcn/core/module.py", line 401, in bind
    state_names=self._state_names)
  File "/home/ouc/LiuHongzhi/Flow-Guided-Feature-Aggregation-new/Flow-Guided-Feature-Aggregation/fgfa_rfcn/core/DataParallelExecutorGroup.py", line 191, in __init__
    self.bind_exec(data_shapes, label_shapes, shared_group)
  File "/home/ouc/LiuHongzhi/Flow-Guided-Feature-Aggregation-new/Flow-Guided-Feature-Aggregation/fgfa_rfcn/core/DataParallelExecutorGroup.py", line 277, in bind_exec
    shared_group))
  File "/home/ouc/LiuHongzhi/Flow-Guided-Feature-Aggregation-new/Flow-Guided-Feature-Aggregation/fgfa_rfcn/core/DataParallelExecutorGroup.py", line 571, in _bind_ith_exec
    grad_req=self.grad_req, shared_exec=shared_exec)
  File "/home/ouc/anaconda3/envs/flow1/lib/python2.7/site-packages/mxnet-0.10.0-py2.7.egg/mxnet/symbol.py", line 1407, in bind
    ctypes.byref(handle)))
  File "/home/ouc/anaconda3/envs/flow1/lib/python2.7/site-packages/mxnet-0.10.0-py2.7.egg/mxnet/base.py", line 84, in check_call
    raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [21:04:00] src/operator/custom/custom.cc:180: GPU is not enabled

Solution
Find the config.mk file under /home/ouc/LiuHongzhi/Flow-Guided-Feature-Aggregation-new/Flow-Guided-Feature-Aggregation/incubator-mxnet/make and change the corresponding lines as follows:

USE_CUDA = 1                       # was: USE_CUDA = 0

USE_CUDA_PATH = /usr/local/cuda    # was: USE_CUDA_PATH = 0

(The same problem can also surface as: src/ndarray/ndarray.cc:347: GPU is not enabled)

Then the build steps need to be repeated:

cd /home/ouc/LiuHongzhi/Flow-Guided-Feature-Aggregation-new/Flow-Guided-Feature-Aggregation/incubator-mxnet

make -j $(nproc) USE_OPENCV=1 USE_BLAS=openblas USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=0

cd /home/ouc/LiuHongzhi/Flow-Guided-Feature-Aggregation-new/Flow-Guided-Feature-Aggregation/incubator-mxnet/python

python setup.py install
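
Before re-running the demo, a quick sanity check that the rebuilt mxnet really has CUDA enabled (a sketch; it raises the same "GPU is not enabled" error if the config change did not take effect):

python -c "import mxnet as mx; print(mx.nd.zeros((2, 2), ctx=mx.gpu(0)).asnumpy())"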

python /home/ouc/LiuHongzhi/Flow-Guided-Feature-Aggregation-new/Flow-Guided-Feature-Aggregation/fgfa_rfcn/demo.py

Once demo.py runs correctly, the following output appears:

{'CLASS_AGNOSTIC': True,
 'MXNET_VERSION': '',
 'SCALES': [(600, 1000)],
 'TEST': {'BATCH_IMAGES': 1,
          'CXX_PROPOSAL': True,
          'HAS_RPN': True,
          'KEY_FRAME_INTERVAL': 9,
          'NMS': 0.3,
          'RPN_MIN_SIZE': 0,
          'RPN_NMS_THRESH': 0.7,
          'RPN_POST_NMS_TOP_N': 300,
          'RPN_PRE_NMS_TOP_N': 6000,
          'SEQ_NMS': False,
          'max_per_image': 300,
          'test_epoch': 2},
 'TRAIN': {'ASPECT_GROUPING': True,
           'BATCH_IMAGES': 1,
           'BATCH_ROIS': -1,
           'BATCH_ROIS_OHEM': 128,
           'BBOX_MEANS': [0.0, 0.0, 0.0, 0.0],
           'BBOX_NORMALIZATION_PRECOMPUTED': True,
           'BBOX_REGRESSION_THRESH': 0.5,
           'BBOX_STDS': [0.1, 0.1, 0.2, 0.2],
           'BBOX_WEIGHTS': array([1., 1., 1., 1.]),
           'BG_THRESH_HI': 0.5,
           'BG_THRESH_LO': 0.0,
           'CXX_PROPOSAL': True,
           'ENABLE_OHEM': True,
           'END2END': True,
           'FG_FRACTION': 0.25,
           'FG_THRESH': 0.5,
           'FLIP': True,
           'MAX_OFFSET': 9,
           'MIN_OFFSET': -9,
           'RESUME': False,
           'RPN_BATCH_SIZE': 256,
           'RPN_BBOX_WEIGHTS': [1.0, 1.0, 1.0, 1.0],
           'RPN_CLOBBER_POSITIVES': False,
           'RPN_FG_FRACTION': 0.5,
           'RPN_MIN_SIZE': 0,
           'RPN_NEGATIVE_OVERLAP': 0.3,
           'RPN_NMS_THRESH': 0.7,
           'RPN_POSITIVE_OVERLAP': 0.7,
           'RPN_POSITIVE_WEIGHT': -1.0,
           'RPN_POST_NMS_TOP_N': 300,
           'RPN_PRE_NMS_TOP_N': 6000,
           'SHUFFLE': True,
           'begin_epoch': 0,
           'end_epoch': 2,
           'lr': 0.00025,
           'lr_factor': 0.1,
           'lr_step': '1.333',
           'model_prefix': 'fgfa_rfcn_vid',
           'momentum': 0.9,
           'warmup': False,
           'warmup_lr': 0,
           'warmup_step': 0,
           'wd': 0.0005},
 'dataset': {'NUM_CLASSES': 31,
             'dataset': 'ImageNetVID',
             'dataset_path': './data/ILSVRC2015',
             'enable_detailed_eval': True,
             'image_set': 'DET_train_30classes+VID_train_15frames',
             'motion_iou_path': './lib/dataset/imagenet_vid_groundtruth_motion_iou.mat',
             'proposal': 'rpn',
             'root_path': './data',
             'test_image_set': 'VID_val_videos'},
 'default': {'frequent': 100, 'kvstore': 'device'},
 'gpus': '0',
 'network': {'ANCHOR_MEANS': [0.0, 0.0, 0.0, 0.0],
             'ANCHOR_RATIOS': [0.5, 1, 2],
             'ANCHOR_SCALES': [8, 16, 32],
             'ANCHOR_STDS': [0.1, 0.1, 0.4, 0.4],
             'FGFA_FEAT_DIM': 3072,
             'FIXED_PARAMS': ['conv1', 'res2', 'bn'],
             'IMAGE_STRIDE': 0,
             'NORMALIZE_RPN': True,
             'NUM_ANCHORS': 9,
             'PIXEL_MEANS': array([103.06, 115.9 , 123.15]),
             'RCNN_FEAT_STRIDE': 16,
             'RPN_FEAT_STRIDE': 16,
             'pretrained': '',
             'pretrained_epoch': 0,
             'pretrained_flow': ''},
 'output_path': './output/fgfa_rfcn/imagenet_vid',
 'symbol': ''}
get-predictor
testing 0.JPEG 80.7054s
testing 1.JPEG 40.4755s
testing 2.JPEG 27.0630s
testing 3.JPEG 20.3575s
testing 4.JPEG 16.3345s
testing 5.JPEG 13.6521s
testing 6.JPEG 11.7366s
testing 7.JPEG 10.3003s
testing 8.JPEG 9.1831s
testing 9.JPEG 8.2892s
testing 10.JPEG 7.5579s
testing 11.JPEG 6.9479s
testing 12.JPEG 6.4320s
testing 13.JPEG 5.9896s
testing 14.JPEG 5.6064s
testing 15.JPEG 5.2716s
testing 16.JPEG 4.9757s
testing 17.JPEG 4.7130s
testing 18.JPEG 4.4784s
testing 19.JPEG 4.2671s
testing 20.JPEG 4.0756s
testing 21.JPEG 3.9015s
testing 22.JPEG 3.7424s
testing 23.JPEG 3.5965s
testing 24.JPEG 3.4625s
testing 25.JPEG 3.3386s
testing 26.JPEG 3.2240s
testing 27.JPEG 3.1177s
testing 28.JPEG 3.0187s
testing 29.JPEG 2.9261s
testing 30.JPEG 2.8397s
testing 31.JPEG 2.7584s
testing 32.JPEG 2.6821s
testing 33.JPEG 2.6103s
testing 34.JPEG 2.5426s
testing 35.JPEG 2.4787s
testing 36.JPEG 2.4183s
testing 37.JPEG 2.3611s
testing 38.JPEG 2.3067s
testing 39.JPEG 2.2552s
testing 40.JPEG 2.2060s
testing 41.JPEG 2.1592s
testing 42.JPEG 2.1148s
testing 43.JPEG 2.0723s
testing 44.JPEG 2.0318s
testing 45.JPEG 1.9931s
testing 46.JPEG 1.9562s
testing 47.JPEG 1.9208s
testing 48.JPEG 1.8869s
testing 49.JPEG 1.8540s
testing 50.JPEG 1.8225s
testing 51.JPEG 1.7920s
testing 52.JPEG 1.7628s
testing 53.JPEG 1.7348s
testing 54.JPEG 1.7078s
testing 55.JPEG 1.6817s
testing 56.JPEG 1.6566s
testing 57.JPEG 1.6325s
testing 58.JPEG 1.6092s
testing 59.JPEG 1.5865s
testing 60.JPEG 1.5646s
testing 61.JPEG 1.5433s
testing 62.JPEG 1.5229s
testing 63.JPEG 1.5030s
testing 64.JPEG 1.4838s
testing 65.JPEG 1.4652s
testing 66.JPEG 1.4471s
testing 67.JPEG 1.4296s
testing 68.JPEG 1.4125s
testing 69.JPEG 1.3960s
testing 70.JPEG 1.3797s
testing 71.JPEG 1.3640s
testing 72.JPEG 1.3488s
testing 73.JPEG 1.3338s
testing 74.JPEG 1.3193s
testing 75.JPEG 1.3053s
testing 76.JPEG 1.2914s
testing 77.JPEG 1.2782s
testing 78.JPEG 1.2652s
testing 79.JPEG 1.2527s
testing 80.JPEG 1.2403s
testing 81.JPEG 1.2282s
testing 82.JPEG 1.2164s
testing 83.JPEG 1.2049s
testing 84.JPEG 1.1937s
testing 85.JPEG 1.1827s
testing 86.JPEG 1.1720s
testing 87.JPEG 1.1614s
testing 88.JPEG 1.1513s
testing 89.JPEG 1.1412s
testing 90.JPEG 1.1314s
testing 91.JPEG 1.1217s
testing 92.JPEG 1.1124s
testing 93.JPEG 1.1031s
testing 94.JPEG 1.0940s
testing 95.JPEG 1.0852s
testing 96.JPEG 1.0766s
testing 97.JPEG 1.0682s
testing 98.JPEG 1.0599s
testing 99.JPEG 1.0517s
testing 100.JPEG 1.0437s
testing 101.JPEG 1.0359s
testing 102.JPEG 1.0283s
testing 103.JPEG 1.0208s
testing 104.JPEG 1.0134s
testing 105.JPEG 1.0061s
testing 106.JPEG 0.9990s
testing 107.JPEG 0.9920s
testing 108.JPEG 0.9851s
testing 109.JPEG 0.9784s
testing 110.JPEG 0.9717s
testing 111.JPEG 0.9653s
testing 112.JPEG 0.9590s
testing 113.JPEG 0.9528s
testing 114.JPEG 0.9467s
testing 115.JPEG 0.9407s
testing 116.JPEG 0.9349s
testing 117.JPEG 0.9290s
testing 118.JPEG 0.9234s
testing 119.JPEG 0.9177s
testing 120.JPEG 0.9122s
testing 121.JPEG 0.9067s
testing 122.JPEG 0.9013s
testing 123.JPEG 0.8960s
testing 124.JPEG 0.8908s
testing 125.JPEG 0.8856s
testing 126.JPEG 0.8806s
testing 127.JPEG 0.8756s
testing 128.JPEG 0.8707s
testing 129.JPEG 0.8659s
testing 130.JPEG 0.8611s
testing 131.JPEG 0.8565s
testing 132.JPEG 0.8518s
testing 133.JPEG 0.8473s
testing 134.JPEG 0.8428s
testing 135.JPEG 0.8381s
testing 136.JPEG 0.8335s
testing 137.JPEG 0.8289s
testing 138.JPEG 0.8244s
testing 139.JPEG 0.8200s
testing 140.JPEG 0.8157s
testing 141.JPEG 0.8114s
testing 142.JPEG 0.8072s
testing 143.JPEG 0.8030s
done

(flow1) ouc@ouc-yzb:~/LiuHongzhi/Flow-Guided-Feature-Aggregation-new/Flow-Guided-Feature-Aggregation-2018$ python experiments/fgfa_rfcn/fgfa_rfcn_end2end_train_test.py --cfg experiments/fgfa_rfcn/cfgs/resnet_v1_101_flownet_imagenet_vid_rfcn_end2end_ohem.yaml
Traceback (most recent call last):
  File "experiments/fgfa_rfcn/fgfa_rfcn_end2end_train_test.py", line 16, in <module>
    import train_end2end
  File "experiments/fgfa_rfcn/../../fgfa_rfcn/train_end2end.py", line 52, in <module>
    from utils.load_data import load_gt_roidb, merge_roidb, filter_roidb
  File "experiments/fgfa_rfcn/../../fgfa_rfcn/../lib/utils/load_data.py", line 10, in <module>
    from dataset import *
  File "experiments/fgfa_rfcn/../../fgfa_rfcn/../lib/dataset/__init__.py", line 2, in <module>
    from imagenet_vid import ImageNetVID
  File "experiments/fgfa_rfcn/../../fgfa_rfcn/../lib/dataset/imagenet_vid.py", line 23, in <module>
    from imagenet_vid_eval_motion import vid_eval_motion
  File "experiments/fgfa_rfcn/../../fgfa_rfcn/../lib/dataset/imagenet_vid_eval_motion.py", line 15, in <module>
    import scipy.io as sio
ImportError: No module named scipy.io

Solution: install scikit-image, which pulls in scipy (the package that provides scipy.io). Installing scipy directly with pip install scipy would also work:

(flow1) ouc@ouc-yzb:~/LiuHongzhi/Flow-Guided-Feature-Aggregation-new/Flow-Guided-Feature-Aggregation-2018$ pip install scikit-image
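
A quick check that the import now resolves:

python -c "import scipy.io; print(scipy.io.__name__)"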