2019-05-10 Replacing VGG-19 with ResNet-50

The code in this post builds on CF2, swapping in a residual network (ResNet-50) as the feature extractor for object tracking. It reads best alongside the previous post. The concrete steps follow.

The net behind global net is initialized in the following code, as this MATLAB error trace shows:

Error in initial_net (line 5)
net = load(fullfile('model', 'imagenet-vgg-verydeep-19.mat'));

Error in get_features (line 9)
    initial_net();

Swapping VGG-19 for ResNet

Step into initial_net from get_features and change imagenet-vgg-verydeep-19 to imagenet-resnet-50-dag.

Next, figure out vl_simplenn_tidy: it appears to be a MatConvNet helper that fills in defaults and normalizes a loaded SimpleNN network. It does nothing useful for a DAG model like ResNet-50, so I commented it out.
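For context, this is roughly how the two model formats are handled in MatConvNet; dagnn.DagNN.loadobj is the DAG-side counterpart of tidying a SimpleNN struct (a sketch, assuming a standard MatConvNet install with the model files in place):

% SimpleNN model (e.g. VGG-19): load the struct, then tidy it.
net = load(fullfile('model', 'imagenet-vgg-verydeep-19.mat'));
net = vl_simplenn_tidy(net);

% DAG model (e.g. ResNet-50): wrap the raw struct in a dagnn.DagNN
% object instead; vl_simplenn_tidy does not apply to DAG models.
net = load(fullfile('model', 'imagenet-resnet-50-dag.mat'));
net = dagnn.DagNN.loadobj(net);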

Changes:

1. Cut the network initialization down to a single load line

function initial_net()
% INITIAL_NET: Loading ResNet-50

global net;
net = load(fullfile('model', 'imagenet-resnet-50-dag.mat'));

% Remove the fully connected layers and classification layer
%net.layers(37+1:end) = [];

% % Switch to GPU mode
% global enableGPU;
% if enableGPU
%     net = vl_simplenn_move(net, 'gpu');
% end
% 
% net=vl_simplenn_tidy(net);

end
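A note on the commented-out GPU block: vl_simplenn_move only understands SimpleNN models. If GPU mode were needed with the DAG model, it would go through the dagnn wrapper instead, roughly like this (a sketch, assuming the net was wrapped with dagnn.DagNN.loadobj as in the snippet further up):

% Sketch: GPU transfer for a MatConvNet DAG model.
global enableGPU;
if enableGPU
    net.move('gpu');  % dagnn.DagNN method; DAG counterpart of vl_simplenn_move
end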

2. get_features needs quite a few changes; they are walked through below, and a sketch of the resulting function appears near the end of the post.
3. In DAGNetwork.m, the activationsBuffer at Line 335 holds every layer's output. I changed the function so that it returns activationsBuffer instead of Y, and commented out Y because it is no longer needed. Below is the modified predict function that produces the network's outputs:

        function activationsBuffer = predict(this, X)
            
            % Wrap X in cell if needed
            X = iWrapInCell(X);
            
            % Apply any transforms
            X = this.applyTransformsForInputLayers(X);            

            % Allocate space for the activations.
            activationsBuffer = cell(this.NumActivations,1);
            
            % Loop over topologically sorted layers to perform forward
            % propagation. The buffer-clearing call below is commented
            % out so that every layer's activations are retained.
            for i = 1:this.NumLayers
                if isa(this.Layers{i},'nnet.internal.cnn.layer.ImageInput')
                    [~, currentInputLayer] = find(this.InputLayerIndices == i);
                    outputActivations = this.Layers{i}.predict(X{currentInputLayer});
                else
                    XForThisLayer = iGetTheseActivationsFromBuffer( ...
                        activationsBuffer, ...
                        this.ListOfBufferInputIndices{i});
                    outputActivations = this.Layers{i}.predict(XForThisLayer);
                end
                
                activationsBuffer = iAssignActivationsToBuffer( ...
                    activationsBuffer, ...
                    this.ListOfBufferOutputIndices{i}, ...
                    outputActivations);
                
%                 activationsBuffer = iClearActivationsFromBuffer( ...
%                     activationsBuffer, ...
%                     this.ListOfBufferIndicesForClearingForward{i});
            end
            
            % (Disabled) original return of only the output layers' activations:
%             Y = { activationsBuffer{ ...
%                     [this.ListOfBufferOutputIndices{this.OutputLayerIndices}] ...
%                     } };
        end

More changes in DAGNetwork.m: at Line 298, YBatch becomes the output, and Lines 299 through 310 are commented out. At Line 183 the output variable is renamed to YBatch, i.e. function YBatch = predict(this, X, varargin). At Line 363, scores becomes every layer's output, and Lines 366 through 388 are commented out because the predicted labels are not needed. Then Line 313 is changed as follows:

function [labelsToReturn, scoresToReturn] = classify(this, X, varargin)
becomes
function scores = classify(this, X, varargin)
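Put together, the hacked classify reduces to something like the sketch below. This is not the toolbox's exact internal code; the real method carries extra argument handling that is omitted here:

function scores = classify(this, X, varargin)
    % With the label computation (Lines 366 through 388) commented out,
    % classify just forwards to predict and hands back the per-layer
    % activations buffer.
    scores = this.predict(X, varargin{:});
end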

Finally, change Line 36 of get_features to:

% [label,scores] = classify(net,img);
res = classify(net,img);

And at Line 47 of get_features, swap how the result is indexed:

 % x = res(layers(ii)).x;
 x = res{1,1}{(layers{ii}),1};
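For reference, here is a minimal sketch of what the modified get_features ends up looking like. The preprocessing (input size, mean removal) is an assumption based on the original CF2 code, and imResample is CF2's resizing helper; only the classify call and the res indexing are taken verbatim from the edits above:

function feat = get_features(im, cos_window, layers)
% GET_FEATURES: sketch of the ResNet-50 version

global net;
if isempty(net)
    initial_net();
end

% Preprocess the patch: single precision, network input size, rough
% mean removal (the original CF2 code subtracts the model's average image).
img = single(im);
img = imResample(img, [224 224]);
img = img - mean(img(:));

% Forward pass: the hacked classify returns every layer's activations.
res = classify(net, img);

feat = cell(1, length(layers));
for ii = 1:length(layers)
    x = res{1,1}{(layers{ii}),1};             % the requested layer's output
    x = imResample(x, size(cos_window));      % match the filter window size
    feat{ii} = bsxfun(@times, x, cos_window); % apply the cosine window
end
end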

And then it ran successfully. The tracking results are not great, but hey, I'm a genius.

