
Caffe learning series (10): How to test a model trained with Caffe

2017-05-17 15:26
Method 1:

Caffe ships with a built-in test command. Just run the following from the Caffe root directory:

~/caffe-master$ ./build/tools/caffe test \
    -model examples/mnist/lenet_train_test.prototxt \
    -weights examples/mnist/lenet_iter_10000.caffemodel \
    -iterations 100

Explanation of the command:

./build/tools/caffe test — run prediction only (forward computation), without updating parameters (no backward computation)

-model examples/mnist/lenet_train_test.prototxt — specify the model definition prototxt file (you can change the data source path in it to test with your own data)

-weights examples/mnist/lenet_iter_10000.caffemodel — specify the pre-trained weights file for the model

-iterations 100 — specify the number of test iterations. The number of samples tested is iterations × batch_size, where batch_size is defined in the model prototxt; with batch_size set to 100, 100 iterations exactly cover the 10,000 MNIST test samples.

Output of Method 1:

I0517 13:46:57.771152 39600 caffe.cpp:234] Use CPU.

I0517 13:46:58.591327 39600 net.cpp:322] The NetState phase (1) differed from the phase (0) specified by a rule in layer mnist

I0517 13:46:58.591553 39600 net.cpp:49] Initializing net from parameters: 

name: "LeNet"

state {

  phase: TEST

}

layer {

  name: "mnist"

  type: "Data"

  top: "data"

  top: "label"

  include {

    phase: TEST

  }

  transform_param {

    scale: 0.00390625

  }

  data_param {

    source: "examples/mnist/mnist_test_lmdb"

    batch_size: 100

    backend: LMDB

  }

}

layer {

  name: "conv1"

  type: "Convolution"

  bottom: "data"

  top: "conv1"

  param {

    lr_mult: 1

  }

  param {

    lr_mult: 2

  }

  convolution_param {

    num_output: 20

    kernel_size: 5

    stride: 1

    weight_filler {

      type: "xavier"

    }

    bias_filler {

      type: "constant"

    }

  }

}

layer {

  name: "pool1"

  type: "Pooling"

  bottom: "conv1"

  top: "pool1"

  pooling_param {

    pool: MAX

    kernel_size: 2

    stride: 2

  }

}

layer {

  name: "conv2"

  type: "Convolution"

  bottom: "pool1"

  top: "conv2"

  param {

    lr_mult: 1

  }

  param {

    lr_mult: 2

  }

  convolution_param {

    num_output: 50

    kernel_size: 5

    stride: 1

    weight_filler {

      type: "xavier"

    }

    bias_filler {

      type: "constant"

    }

  }

}

layer {

  name: "pool2"

  type: "Pooling"

  bottom: "conv2"

  top: "pool2"

  pooling_param {

    pool: MAX

    kernel_size: 2

    stride: 2

  }

}

layer {

  name: "ip1"

  type: "InnerProduct"

  bottom: "pool2"

  top: "ip1"

  param {

    lr_mult: 1

  }

  param {

    lr_mult: 2

  }

  inner_product_param {

    num_output: 500

    weight_filler {

      type: "xavier"

    }

    bias_filler {

      type: "constant"

    }

  }

}

layer {

  name: "relu1"

  type: "ReLU"

  bottom: "ip1"

  top: "ip1"

}

layer {

  name: "ip2"

  type: "InnerProduct"

  bottom: "ip1"

  top: "ip2"

  param {

    lr_mult: 1

  }

  param {

    lr_mult: 2

  }

  inner_product_param {

    num_output: 10

    weight_filler {

      type: "xavier"

    }

    bias_filler {

      type: "constant"

    }

  }

}

layer {

  name: "accuracy"

  type: "Accuracy"

  bottom: "ip2"

  bottom: "label"

  top: "accuracy"

  include {

    phase: TEST

  }

}

layer {

  name: "loss"

  type: "SoftmaxWithLoss"

  bottom: "ip2"

  bottom: "label"

  top: "loss"

}

I0517 13:46:58.592334 39600 layer_factory.hpp:76] Creating layer mnist

I0517 13:46:58.593997 39600 net.cpp:106] Creating Layer mnist

I0517 13:46:58.594102 39600 net.cpp:411] mnist -> data

I0517 13:46:58.594207 39600 net.cpp:411] mnist -> label

I0517 13:46:58.595083 39606 db_lmdb.cpp:38] Opened lmdb examples/mnist/mnist_test_lmdb

I0517 13:46:58.595214 39600 data_layer.cpp:41] output data size: 100,1,28,28

I0517 13:46:58.596074 39600 net.cpp:150] Setting up mnist

I0517 13:46:58.596159 39600 net.cpp:157] Top shape: 100 1 28 28 (78400)

I0517 13:46:58.596189 39600 net.cpp:157] Top shape: 100 (100)

I0517 13:46:58.596199 39600 net.cpp:165] Memory required for data: 314000

I0517 13:46:58.596215 39600 layer_factory.hpp:76] Creating layer label_mnist_1_split

I0517 13:46:58.596242 39600 net.cpp:106] Creating Layer label_mnist_1_split

I0517 13:46:58.596256 39600 net.cpp:454] label_mnist_1_split <- label

I0517 13:46:58.596278 39600 net.cpp:411] label_mnist_1_split -> label_mnist_1_split_0

I0517 13:46:58.596297 39600 net.cpp:411] label_mnist_1_split -> label_mnist_1_split_1

I0517 13:46:58.596323 39600 net.cpp:150] Setting up label_mnist_1_split

I0517 13:46:58.596335 39600 net.cpp:157] Top shape: 100 (100)

I0517 13:46:58.596344 39600 net.cpp:157] Top shape: 100 (100)

I0517 13:46:58.596352 39600 net.cpp:165] Memory required for data: 314800

I0517 13:46:58.596361 39600 layer_factory.hpp:76] Creating layer conv1

I0517 13:46:58.596385 39600 net.cpp:106] Creating Layer conv1

I0517 13:46:58.596395 39600 net.cpp:454] conv1 <- data

I0517 13:46:58.596420 39600 net.cpp:411] conv1 -> conv1

I0517 13:46:58.596591 39600 net.cpp:150] Setting up conv1

I0517 13:46:58.596632 39600 net.cpp:157] Top shape: 100 20 24 24 (1152000)

I0517 13:46:58.596648 39600 net.cpp:165] Memory required for data: 4922800

I0517 13:46:58.596684 39600 layer_factory.hpp:76] Creating layer pool1

I0517 13:46:58.596748 39600 net.cpp:106] Creating Layer pool1

I0517 13:46:58.596767 39600 net.cpp:454] pool1 <- conv1

I0517 13:46:58.596786 39600 net.cpp:411] pool1 -> pool1

I0517 13:46:58.596827 39600 net.cpp:150] Setting up pool1

I0517 13:46:58.596850 39600 net.cpp:157] Top shape: 100 20 12 12 (288000)

I0517 13:46:58.596865 39600 net.cpp:165] Memory required for data: 6074800

I0517 13:46:58.596880 39600 layer_factory.hpp:76] Creating layer conv2

I0517 13:46:58.596900 39600 net.cpp:106] Creating Layer conv2

I0517 13:46:58.596916 39600 net.cpp:454] conv2 <- pool1

I0517 13:46:58.596938 39600 net.cpp:411] conv2 -> conv2

I0517 13:46:58.597240 39600 net.cpp:150] Setting up conv2

I0517 13:46:58.597268 39600 net.cpp:157] Top shape: 100 50 8 8 (320000)

I0517 13:46:58.597296 39600 net.cpp:165] Memory required for data: 7354800

I0517 13:46:58.597318 39600 layer_factory.hpp:76] Creating layer pool2

I0517 13:46:58.597337 39600 net.cpp:106] Creating Layer pool2

I0517 13:46:58.597355 39600 net.cpp:454] pool2 <- conv2

I0517 13:46:58.597381 39600 net.cpp:411] pool2 -> pool2

I0517 13:46:58.597414 39600 net.cpp:150] Setting up pool2

I0517 13:46:58.597457 39600 net.cpp:157] Top shape: 100 50 4 4 (80000)

I0517 13:46:58.597472 39600 net.cpp:165] Memory required for data: 7674800

I0517 13:46:58.597486 39600 layer_factory.hpp:76] Creating layer ip1

I0517 13:46:58.597515 39600 net.cpp:106] Creating Layer ip1

I0517 13:46:58.597535 39600 net.cpp:454] ip1 <- pool2

I0517 13:46:58.597559 39600 net.cpp:411] ip1 -> ip1

I0517 13:46:58.601647 39600 net.cpp:150] Setting up ip1

I0517 13:46:58.601673 39600 net.cpp:157] Top shape: 100 500 (50000)

I0517 13:46:58.601688 39600 net.cpp:165] Memory required for data: 7874800

I0517 13:46:58.601711 39600 layer_factory.hpp:76] Creating layer relu1

I0517 13:46:58.601732 39600 net.cpp:106] Creating Layer relu1

I0517 13:46:58.601748 39600 net.cpp:454] relu1 <- ip1

I0517 13:46:58.601766 39600 net.cpp:397] relu1 -> ip1 (in-place)

I0517 13:46:58.601785 39600 net.cpp:150] Setting up relu1

I0517 13:46:58.601801 39600 net.cpp:157] Top shape: 100 500 (50000)

I0517 13:46:58.601815 39600 net.cpp:165] Memory required for data: 8074800

I0517 13:46:58.601830 39600 layer_factory.hpp:76] Creating layer ip2

I0517 13:46:58.601855 39600 net.cpp:106] Creating Layer ip2

I0517 13:46:58.601871 39600 net.cpp:454] ip2 <- ip1

I0517 13:46:58.601888 39600 net.cpp:411] ip2 -> ip2

I0517 13:46:58.601971 39600 net.cpp:150] Setting up ip2

I0517 13:46:58.601991 39600 net.cpp:157] Top shape: 100 10 (1000)

I0517 13:46:58.602005 39600 net.cpp:165] Memory required for data: 8078800

I0517 13:46:58.602025 39600 layer_factory.hpp:76] Creating layer ip2_ip2_0_split

I0517 13:46:58.602041 39600 net.cpp:106] Creating Layer ip2_ip2_0_split

I0517 13:46:58.602056 39600 net.cpp:454] ip2_ip2_0_split <- ip2

I0517 13:46:58.602072 39600 net.cpp:411] ip2_ip2_0_split -> ip2_ip2_0_split_0

I0517 13:46:58.602092 39600 net.cpp:411] ip2_ip2_0_split -> ip2_ip2_0_split_1

I0517 13:46:58.602111 39600 net.cpp:150] Setting up ip2_ip2_0_split

I0517 13:46:58.602128 39600 net.cpp:157] Top shape: 100 10 (1000)

I0517 13:46:58.602144 39600 net.cpp:157] Top shape: 100 10 (1000)

I0517 13:46:58.602157 39600 net.cpp:165] Memory required for data: 8086800

I0517 13:46:58.602172 39600 layer_factory.hpp:76] Creating layer accuracy

I0517 13:46:58.602192 39600 net.cpp:106] Creating Layer accuracy

I0517 13:46:58.602207 39600 net.cpp:454] accuracy <- ip2_ip2_0_split_0

I0517 13:46:58.602223 39600 net.cpp:454] accuracy <- label_mnist_1_split_0

I0517 13:46:58.602244 39600 net.cpp:411] accuracy -> accuracy

I0517 13:46:58.602270 39600 net.cpp:150] Setting up accuracy

I0517 13:46:58.602288 39600 net.cpp:157] Top shape: (1)

I0517 13:46:58.602303 39600 net.cpp:165] Memory required for data: 8086804

I0517 13:46:58.602315 39600 layer_factory.hpp:76] Creating layer loss

I0517 13:46:58.602335 39600 net.cpp:106] Creating Layer loss

I0517 13:46:58.602349 39600 net.cpp:454] loss <- ip2_ip2_0_split_1

I0517 13:46:58.602365 39600 net.cpp:454] loss <- label_mnist_1_split_1

I0517 13:46:58.602382 39600 net.cpp:411] loss -> loss

I0517 13:46:58.602432 39600 layer_factory.hpp:76] Creating layer loss

I0517 13:46:58.602481 39600 net.cpp:150] Setting up loss

I0517 13:46:58.602501 39600 net.cpp:157] Top shape: (1)

I0517 13:46:58.602515 39600 net.cpp:160]     with loss weight 1

I0517 13:46:58.602550 39600 net.cpp:165] Memory required for data: 8086808

I0517 13:46:58.602563 39600 net.cpp:226] loss needs backward computation.

I0517 13:46:58.602579 39600 net.cpp:228] accuracy does not need backward computation.

I0517 13:46:58.602593 39600 net.cpp:226] ip2_ip2_0_split needs backward computation.

I0517 13:46:58.602607 39600 net.cpp:226] ip2 needs backward computation.

I0517 13:46:58.602622 39600 net.cpp:226] relu1 needs backward computation.

I0517 13:46:58.602634 39600 net.cpp:226] ip1 needs backward computation.

I0517 13:46:58.602648 39600 net.cpp:226] pool2 needs backward computation.

I0517 13:46:58.602663 39600 net.cpp:226] conv2 needs backward computation.

I0517 13:46:58.602675 39600 net.cpp:226] pool1 needs backward computation.

I0517 13:46:58.602689 39600 net.cpp:226] conv1 needs backward computation.

I0517 13:46:58.602704 39600 net.cpp:228] label_mnist_1_split does not need backward computation.

I0517 13:46:58.602718 39600 net.cpp:228] mnist does not need backward computation.

I0517 13:46:58.602731 39600 net.cpp:270] This network produces output accuracy

I0517 13:46:58.602746 39600 net.cpp:270] This network produces output loss

I0517 13:46:58.602774 39600 net.cpp:283] Network initialization done.

I0517 13:46:58.608233 39600 caffe.cpp:240] Running for 100 iterations.

I0517 13:46:58.681567 39600 caffe.cpp:264] Batch 0, accuracy = 0.99

I0517 13:46:58.681614 39600 caffe.cpp:264] Batch 0, loss = 0.0132719

I0517 13:46:58.752292 39600 caffe.cpp:264] Batch 1, accuracy = 1

I0517 13:46:58.752326 39600 caffe.cpp:264] Batch 1, loss = 0.00619175

I0517 13:46:58.822971 39600 caffe.cpp:264] Batch 2, accuracy = 0.99

I0517 13:46:58.823009 39600 caffe.cpp:264] Batch 2, loss = 0.0110308

I0517 13:46:58.893909 39600 caffe.cpp:264] Batch 3, accuracy = 0.99

I0517 13:46:58.893947 39600 caffe.cpp:264] Batch 3, loss = 0.0289481

I0517 13:46:58.964637 39600 caffe.cpp:264] Batch 4, accuracy = 0.98

I0517 13:46:58.964673 39600 caffe.cpp:264] Batch 4, loss = 0.0613811

I0517 13:46:59.035308 39600 caffe.cpp:264] Batch 5, accuracy = 0.99

I0517 13:46:59.035342 39600 caffe.cpp:264] Batch 5, loss = 0.053343

I0517 13:46:59.105906 39600 caffe.cpp:264] Batch 6, accuracy = 0.98

I0517 13:46:59.105940 39600 caffe.cpp:264] Batch 6, loss = 0.0658264

I0517 13:46:59.176535 39600 caffe.cpp:264] Batch 7, accuracy = 0.99

I0517 13:46:59.176570 39600 caffe.cpp:264] Batch 7, loss = 0.0457008

I0517 13:46:59.247864 39600 caffe.cpp:264] Batch 8, accuracy = 1

I0517 13:46:59.247896 39600 caffe.cpp:264] Batch 8, loss = 0.00715946

I0517 13:46:59.318927 39600 caffe.cpp:264] Batch 9, accuracy = 0.98

I0517 13:46:59.318960 39600 caffe.cpp:264] Batch 9, loss = 0.039076

I0517 13:46:59.389575 39600 caffe.cpp:264] Batch 10, accuracy = 0.98

I0517 13:46:59.389608 39600 caffe.cpp:264] Batch 10, loss = 0.0447998

I0517 13:46:59.460206 39600 caffe.cpp:264] Batch 11, accuracy = 0.97

I0517 13:46:59.460240 39600 caffe.cpp:264] Batch 11, loss = 0.0377561

I0517 13:46:59.531414 39600 caffe.cpp:264] Batch 12, accuracy = 0.95

I0517 13:46:59.531450 39600 caffe.cpp:264] Batch 12, loss = 0.139456

I0517 13:46:59.602504 39600 caffe.cpp:264] Batch 13, accuracy = 0.98

I0517 13:46:59.602553 39600 caffe.cpp:264] Batch 13, loss = 0.0485627

I0517 13:46:59.674017 39600 caffe.cpp:264] Batch 14, accuracy = 1

I0517 13:46:59.674057 39600 caffe.cpp:264] Batch 14, loss = 0.0111801

I0517 13:46:59.744745 39600 caffe.cpp:264] Batch 15, accuracy = 0.99

I0517 13:46:59.744781 39600 caffe.cpp:264] Batch 15, loss = 0.0553036

I0517 13:46:59.818766 39600 caffe.cpp:264] Batch 16, accuracy = 0.99

I0517 13:46:59.818814 39600 caffe.cpp:264] Batch 16, loss = 0.0177639

I0517 13:46:59.890465 39600 caffe.cpp:264] Batch 17, accuracy = 0.99

I0517 13:46:59.890509 39600 caffe.cpp:264] Batch 17, loss = 0.0335771

I0517 13:46:59.961452 39600 caffe.cpp:264] Batch 18, accuracy = 0.99

I0517 13:46:59.961514 39600 caffe.cpp:264] Batch 18, loss = 0.0210356

I0517 13:47:00.032574 39600 caffe.cpp:264] Batch 19, accuracy = 0.99

I0517 13:47:00.032610 39600 caffe.cpp:264] Batch 19, loss = 0.0472622

I0517 13:47:00.103441 39600 caffe.cpp:264] Batch 20, accuracy = 0.96

I0517 13:47:00.103476 39600 caffe.cpp:264] Batch 20, loss = 0.0901533

I0517 13:47:00.174660 39600 caffe.cpp:264] Batch 21, accuracy = 0.97

I0517 13:47:00.174692 39600 caffe.cpp:264] Batch 21, loss = 0.0619079

I0517 13:47:00.245530 39600 caffe.cpp:264] Batch 22, accuracy = 0.99

I0517 13:47:00.245566 39600 caffe.cpp:264] Batch 22, loss = 0.0284561

I0517 13:47:00.316332 39600 caffe.cpp:264] Batch 23, accuracy = 0.98

I0517 13:47:00.316366 39600 caffe.cpp:264] Batch 23, loss = 0.0232634

I0517 13:47:00.387085 39600 caffe.cpp:264] Batch 24, accuracy = 0.98

I0517 13:47:00.387120 39600 caffe.cpp:264] Batch 24, loss = 0.0506237

I0517 13:47:00.458309 39600 caffe.cpp:264] Batch 25, accuracy = 0.99

I0517 13:47:00.458343 39600 caffe.cpp:264] Batch 25, loss = 0.0991881

I0517 13:47:00.529395 39600 caffe.cpp:264] Batch 26, accuracy = 0.99

I0517 13:47:00.529428 39600 caffe.cpp:264] Batch 26, loss = 0.0867655

I0517 13:47:00.600162 39600 caffe.cpp:264] Batch 27, accuracy = 0.99

I0517 13:47:00.600198 39600 caffe.cpp:264] Batch 27, loss = 0.0189236

I0517 13:47:00.670883 39600 caffe.cpp:264] Batch 28, accuracy = 0.99

I0517 13:47:00.670922 39600 caffe.cpp:264] Batch 28, loss = 0.0424343

I0517 13:47:00.741680 39600 caffe.cpp:264] Batch 29, accuracy = 0.96

I0517 13:47:00.741715 39600 caffe.cpp:264] Batch 29, loss = 0.149228

I0517 13:47:00.812865 39600 caffe.cpp:264] Batch 30, accuracy = 0.99

I0517 13:47:00.812901 39600 caffe.cpp:264] Batch 30, loss = 0.0291813

I0517 13:47:00.883566 39600 caffe.cpp:264] Batch 31, accuracy = 1

I0517 13:47:00.883601 39600 caffe.cpp:264] Batch 31, loss = 0.0026461

I0517 13:47:00.954175 39600 caffe.cpp:264] Batch 32, accuracy = 0.99

I0517 13:47:00.954208 39600 caffe.cpp:264] Batch 32, loss = 0.0194136

I0517 13:47:01.028168 39600 caffe.cpp:264] Batch 33, accuracy = 1

I0517 13:47:01.028209 39600 caffe.cpp:264] Batch 33, loss = 0.00502277

I0517 13:47:01.100853 39600 caffe.cpp:264] Batch 34, accuracy = 0.99

I0517 13:47:01.100890 39600 caffe.cpp:264] Batch 34, loss = 0.0488627

I0517 13:47:01.173538 39600 caffe.cpp:264] Batch 35, accuracy = 0.96

I0517 13:47:01.173576 39600 caffe.cpp:264] Batch 35, loss = 0.106914

I0517 13:47:01.245472 39600 caffe.cpp:264] Batch 36, accuracy = 1

I0517 13:47:01.245512 39600 caffe.cpp:264] Batch 36, loss = 0.00538039

I0517 13:47:01.320181 39600 caffe.cpp:264] Batch 37, accuracy = 0.99

I0517 13:47:01.320220 39600 caffe.cpp:264] Batch 37, loss = 0.0438456

I0517 13:47:01.391701 39600 caffe.cpp:264] Batch 38, accuracy = 1

I0517 13:47:01.391734 39600 caffe.cpp:264] Batch 38, loss = 0.0138935

I0517 13:47:01.462843 39600 caffe.cpp:264] Batch 39, accuracy = 0.98

I0517 13:47:01.462882 39600 caffe.cpp:264] Batch 39, loss = 0.0561068

I0517 13:47:01.533682 39600 caffe.cpp:264] Batch 40, accuracy = 0.98

I0517 13:47:01.533718 39600 caffe.cpp:264] Batch 40, loss = 0.0594499

I0517 13:47:01.604460 39600 caffe.cpp:264] Batch 41, accuracy = 0.98

I0517 13:47:01.604496 39600 caffe.cpp:264] Batch 41, loss = 0.0886764

I0517 13:47:01.675865 39600 caffe.cpp:264] Batch 42, accuracy = 0.98

I0517 13:47:01.675900 39600 caffe.cpp:264] Batch 42, loss = 0.0390902

I0517 13:47:01.747081 39600 caffe.cpp:264] Batch 43, accuracy = 0.99

I0517 13:47:01.747123 39600 caffe.cpp:264] Batch 43, loss = 0.0130302

I0517 13:47:01.818629 39600 caffe.cpp:264] Batch 44, accuracy = 1

I0517 13:47:01.818677 39600 caffe.cpp:264] Batch 44, loss = 0.0167423

I0517 13:47:01.891489 39600 caffe.cpp:264] Batch 45, accuracy = 0.99

I0517 13:47:01.891525 39600 caffe.cpp:264] Batch 45, loss = 0.0288195

I0517 13:47:01.962338 39600 caffe.cpp:264] Batch 46, accuracy = 1

I0517 13:47:01.962373 39600 caffe.cpp:264] Batch 46, loss = 0.0122629

I0517 13:47:02.033588 39600 caffe.cpp:264] Batch 47, accuracy = 0.99

I0517 13:47:02.033622 39600 caffe.cpp:264] Batch 47, loss = 0.0206735

I0517 13:47:02.104370 39600 caffe.cpp:264] Batch 48, accuracy = 0.98

I0517 13:47:02.104404 39600 caffe.cpp:264] Batch 48, loss = 0.0693706

I0517 13:47:02.174890 39600 caffe.cpp:264] Batch 49, accuracy = 1

I0517 13:47:02.174923 39600 caffe.cpp:264] Batch 49, loss = 0.00863102

I0517 13:47:02.245613 39600 caffe.cpp:264] Batch 50, accuracy = 1

I0517 13:47:02.245648 39600 caffe.cpp:264] Batch 50, loss = 0.000288643

I0517 13:47:02.317011 39600 caffe.cpp:264] Batch 51, accuracy = 0.99

I0517 13:47:02.317044 39600 caffe.cpp:264] Batch 51, loss = 0.00836202

I0517 13:47:02.387931 39600 caffe.cpp:264] Batch 52, accuracy = 1

I0517 13:47:02.387965 39600 caffe.cpp:264] Batch 52, loss = 0.00275417

I0517 13:47:02.458679 39600 caffe.cpp:264] Batch 53, accuracy = 1

I0517 13:47:02.458711 39600 caffe.cpp:264] Batch 53, loss = 0.000484625

I0517 13:47:02.529466 39600 caffe.cpp:264] Batch 54, accuracy = 1

I0517 13:47:02.529500 39600 caffe.cpp:264] Batch 54, loss = 0.00313282

I0517 13:47:02.600678 39600 caffe.cpp:264] Batch 55, accuracy = 1

I0517 13:47:02.600713 39600 caffe.cpp:264] Batch 55, loss = 0.000175331

I0517 13:47:02.671911 39600 caffe.cpp:264] Batch 56, accuracy = 1

I0517 13:47:02.671957 39600 caffe.cpp:264] Batch 56, loss = 0.0103118

I0517 13:47:02.743280 39600 caffe.cpp:264] Batch 57, accuracy = 0.99

I0517 13:47:02.743316 39600 caffe.cpp:264] Batch 57, loss = 0.011138

I0517 13:47:02.814064 39600 caffe.cpp:264] Batch 58, accuracy = 1

I0517 13:47:02.814100 39600 caffe.cpp:264] Batch 58, loss = 0.00176995

I0517 13:47:02.884955 39600 caffe.cpp:264] Batch 59, accuracy = 0.97

I0517 13:47:02.884989 39600 caffe.cpp:264] Batch 59, loss = 0.0855831

I0517 13:47:02.955776 39600 caffe.cpp:264] Batch 60, accuracy = 1

I0517 13:47:02.955807 39600 caffe.cpp:264] Batch 60, loss = 0.00808355

I0517 13:47:03.026746 39600 caffe.cpp:264] Batch 61, accuracy = 0.99

I0517 13:47:03.026779 39600 caffe.cpp:264] Batch 61, loss = 0.017528

I0517 13:47:03.097520 39600 caffe.cpp:264] Batch 62, accuracy = 1

I0517 13:47:03.097553 39600 caffe.cpp:264] Batch 62, loss = 1.76275e-05

I0517 13:47:03.168239 39600 caffe.cpp:264] Batch 63, accuracy = 1

I0517 13:47:03.168272 39600 caffe.cpp:264] Batch 63, loss = 0.000123199

I0517 13:47:03.239184 39600 caffe.cpp:264] Batch 64, accuracy = 1

I0517 13:47:03.239220 39600 caffe.cpp:264] Batch 64, loss = 0.000735786

I0517 13:47:03.310091 39600 caffe.cpp:264] Batch 65, accuracy = 0.94

I0517 13:47:03.310125 39600 caffe.cpp:264] Batch 65, loss = 0.128581

I0517 13:47:03.380687 39600 caffe.cpp:264] Batch 66, accuracy = 0.98

I0517 13:47:03.380719 39600 caffe.cpp:264] Batch 66, loss = 0.0857973

I0517 13:47:03.451447 39600 caffe.cpp:264] Batch 67, accuracy = 0.99

I0517 13:47:03.451481 39600 caffe.cpp:264] Batch 67, loss = 0.0340182

I0517 13:47:03.522209 39600 caffe.cpp:264] Batch 68, accuracy = 0.99

I0517 13:47:03.522244 39600 caffe.cpp:264] Batch 68, loss = 0.00933298

I0517 13:47:03.593348 39600 caffe.cpp:264] Batch 69, accuracy = 1

I0517 13:47:03.593380 39600 caffe.cpp:264] Batch 69, loss = 0.00282775

I0517 13:47:03.664039 39600 caffe.cpp:264] Batch 70, accuracy = 1

I0517 13:47:03.664073 39600 caffe.cpp:264] Batch 70, loss = 0.000840758

I0517 13:47:03.734781 39600 caffe.cpp:264] Batch 71, accuracy = 1

I0517 13:47:03.734815 39600 caffe.cpp:264] Batch 71, loss = 0.000306738

I0517 13:47:03.805531 39600 caffe.cpp:264] Batch 72, accuracy = 0.99

I0517 13:47:03.805567 39600 caffe.cpp:264] Batch 72, loss = 0.0109394

I0517 13:47:03.879171 39600 caffe.cpp:264] Batch 73, accuracy = 1

I0517 13:47:03.879221 39600 caffe.cpp:264] Batch 73, loss = 7.22083e-05

I0517 13:47:03.949936 39600 caffe.cpp:264] Batch 74, accuracy = 1

I0517 13:47:03.949970 39600 caffe.cpp:264] Batch 74, loss = 0.00236448

I0517 13:47:04.020737 39600 caffe.cpp:264] Batch 75, accuracy = 1

I0517 13:47:04.020771 39600 caffe.cpp:264] Batch 75, loss = 0.00125433

I0517 13:47:04.091593 39600 caffe.cpp:264] Batch 76, accuracy = 1

I0517 13:47:04.091629 39600 caffe.cpp:264] Batch 76, loss = 0.00020058

I0517 13:47:04.162919 39600 caffe.cpp:264] Batch 77, accuracy = 1

I0517 13:47:04.162981 39600 caffe.cpp:264] Batch 77, loss = 0.000176043

I0517 13:47:04.234006 39600 caffe.cpp:264] Batch 78, accuracy = 1

I0517 13:47:04.234041 39600 caffe.cpp:264] Batch 78, loss = 0.000697597

I0517 13:47:04.304982 39600 caffe.cpp:264] Batch 79, accuracy = 1

I0517 13:47:04.305014 39600 caffe.cpp:264] Batch 79, loss = 0.00192746

I0517 13:47:04.375728 39600 caffe.cpp:264] Batch 80, accuracy = 0.99

I0517 13:47:04.375761 39600 caffe.cpp:264] Batch 80, loss = 0.0236264

I0517 13:47:04.446709 39600 caffe.cpp:264] Batch 81, accuracy = 1

I0517 13:47:04.446743 39600 caffe.cpp:264] Batch 81, loss = 0.00273019

I0517 13:47:04.517841 39600 caffe.cpp:264] Batch 82, accuracy = 1

I0517 13:47:04.517875 39600 caffe.cpp:264] Batch 82, loss = 0.000858938

I0517 13:47:04.588673 39600 caffe.cpp:264] Batch 83, accuracy = 0.99

I0517 13:47:04.588707 39600 caffe.cpp:264] Batch 83, loss = 0.0106569

I0517 13:47:04.659397 39600 caffe.cpp:264] Batch 84, accuracy = 0.99

I0517 13:47:04.659433 39600 caffe.cpp:264] Batch 84, loss = 0.0272491

I0517 13:47:04.730412 39600 caffe.cpp:264] Batch 85, accuracy = 1

I0517 13:47:04.730444 39600 caffe.cpp:264] Batch 85, loss = 0.00337841

I0517 13:47:04.801645 39600 caffe.cpp:264] Batch 86, accuracy = 1

I0517 13:47:04.801677 39600 caffe.cpp:264] Batch 86, loss = 5.54074e-05

I0517 13:47:04.872436 39600 caffe.cpp:264] Batch 87, accuracy = 1

I0517 13:47:04.872469 39600 caffe.cpp:264] Batch 87, loss = 4.36915e-05

I0517 13:47:04.943171 39600 caffe.cpp:264] Batch 88, accuracy = 1

I0517 13:47:04.943203 39600 caffe.cpp:264] Batch 88, loss = 4.06601e-05

I0517 13:47:05.014057 39600 caffe.cpp:264] Batch 89, accuracy = 1

I0517 13:47:05.014101 39600 caffe.cpp:264] Batch 89, loss = 7.38562e-05

I0517 13:47:05.085383 39600 caffe.cpp:264] Batch 90, accuracy = 0.98

I0517 13:47:05.085417 39600 caffe.cpp:264] Batch 90, loss = 0.0930051

I0517 13:47:05.156461 39600 caffe.cpp:264] Batch 91, accuracy = 1

I0517 13:47:05.156493 39600 caffe.cpp:264] Batch 91, loss = 5.29384e-05

I0517 13:47:05.227131 39600 caffe.cpp:264] Batch 92, accuracy = 1

I0517 13:47:05.227165 39600 caffe.cpp:264] Batch 92, loss = 0.000267582

I0517 13:47:05.297791 39600 caffe.cpp:264] Batch 93, accuracy = 1

I0517 13:47:05.297823 39600 caffe.cpp:264] Batch 93, loss = 0.000255183

I0517 13:47:05.368868 39600 caffe.cpp:264] Batch 94, accuracy = 1

I0517 13:47:05.368901 39600 caffe.cpp:264] Batch 94, loss = 0.000404982

I0517 13:47:05.439965 39600 caffe.cpp:264] Batch 95, accuracy = 1

I0517 13:47:05.439996 39600 caffe.cpp:264] Batch 95, loss = 0.00341813

I0517 13:47:05.510735 39600 caffe.cpp:264] Batch 96, accuracy = 0.99

I0517 13:47:05.510767 39600 caffe.cpp:264] Batch 96, loss = 0.0376019

I0517 13:47:05.581421 39600 caffe.cpp:264] Batch 97, accuracy = 0.98

I0517 13:47:05.581457 39600 caffe.cpp:264] Batch 97, loss = 0.115846

I0517 13:47:05.651897 39600 caffe.cpp:264] Batch 98, accuracy = 1

I0517 13:47:05.651933 39600 caffe.cpp:264] Batch 98, loss = 0.00771821

I0517 13:47:05.723466 39600 caffe.cpp:264] Batch 99, accuracy = 1

I0517 13:47:05.723521 39600 caffe.cpp:264] Batch 99, loss = 0.0128566

I0517 13:47:05.723539 39600 caffe.cpp:269] Loss: 0.0289755

I0517 13:47:05.723582 39600 caffe.cpp:281] accuracy = 0.9906

I0517 13:47:05.723620 39600 caffe.cpp:281] loss = 0.0289755 (* 1 = 0.0289755 loss)
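The final accuracy = 0.9906 and loss = 0.0289755 lines are simply the averages of the per-batch values printed above. A small log-parsing sketch in pure Python (the regex and the sample lines mirror the caffe.cpp:264 output format in this log):

```python
import re

# Matches lines like "... caffe.cpp:264] Batch 0, accuracy = 0.99"
LINE = re.compile(r"Batch \d+, (accuracy|loss) = ([0-9.eE+-]+)")

def average_metrics(log_lines):
    """Average the per-batch accuracy and loss from `caffe test` output."""
    sums, counts = {}, {}
    for line in log_lines:
        m = LINE.search(line)
        if m:
            name, value = m.group(1), float(m.group(2))
            sums[name] = sums.get(name, 0.0) + value
            counts[name] = counts.get(name, 0) + 1
    return {name: sums[name] / counts[name] for name in sums}

sample = [
    "I0517 13:46:58.681567 39600 caffe.cpp:264] Batch 0, accuracy = 0.99",
    "I0517 13:46:58.681614 39600 caffe.cpp:264] Batch 0, loss = 0.0132719",
    "I0517 13:46:58.752292 39600 caffe.cpp:264] Batch 1, accuracy = 1",
    "I0517 13:46:58.752326 39600 caffe.cpp:264] Batch 1, loss = 0.00619175",
]
print(average_metrics(sample))  # accuracy = (0.99 + 1) / 2 = 0.995
```

Feeding all 100 batches from the log above into this function reproduces the 0.9906 / 0.0289755 summary that Caffe prints at the end.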

Method 2:

Caffe's official source tree already includes classification.cpp as a demo.

If you create your own project, you first need to add all the dependency libraries.

These include: caffe itself (obviously), plus boost, glog, gflags, hdf5, OpenBLAS, OpenCV, and protobuf (the latter are Caffe's own dependencies).

For Caffe, add its include folder and libcaffe.lib to the project.

The other libraries have already been downloaded by NuGet and are added the same way.

1. Add to the include directories:

../caffelib/include;

../caffelib/include/caffe/proto;

H:\NugetPackages\boost.1.59.0.0\lib\native\include;

H:\NugetPackages\gflags.2.1.2.1\build\native\include;

H:\NugetPackages\glog.0.3.3.0\build\native\include;

H:\NugetPackages\protobuf-v120.2.6.1\build\native\include;

H:\NugetPackages\OpenBLAS.0.2.14.1\lib\native\include;

H:\NugetPackages\OpenCV.2.4.10\build\native\include;

2. Add to the library directories:

../caffelib/lib;

H:\NugetPackages\boost_date_time-vc120.1.59.0.0\lib\native\address-model-64\lib;

H:\NugetPackages\boost_filesystem-vc120.1.59.0.0\lib\native\address-model-64\lib;

H:\NugetPackages\boost_system-vc120.1.59.0.0\lib\native\address-model-64\lib;

H:\NugetPackages\boost_thread-vc120.1.59.0.0\lib\native\address-model-64\lib;

H:\NugetPackages\boost_chrono-vc120.1.59.0.0\lib\native\address-model-64\lib;

H:\NugetPackages\glog.0.3.3.0\build\native\lib\x64\v120\Release\dynamic;

H:\NugetPackages\protobuf-v120.2.6.1\build\native\lib\x64\v120\Release;

H:\NugetPackages\OpenBLAS.0.2.14.1\lib\native\lib\x64;

H:\NugetPackages\hdf5-v120-complete.1.8.15.2\lib\native\lib\x64;

H:\NugetPackages\OpenCV.2.4.10\build\native\lib\x64\v120\Release;

H:\NugetPackages\gflags.2.1.2.1\build\native\x64\v120\dynamic\Lib;

3. Add to the additional dependencies:

libcaffe.lib

libopenblas.dll.a

libprotobuf.lib

opencv_highgui2410.lib

opencv_core2410.lib

opencv_imgproc2410.lib

libglog.lib

gflags.lib

hdf5.lib

hdf5_hl.lib

Then build and run classification.cpp as the demo. Of course, you can also modify this code to suit your needs; it is not difficult.
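At prediction time, classification.cpp boils down to three steps: preprocess the input image, run a forward pass through the net, and take the label with the highest softmax probability. The last step can be sketched language-agnostically in pure Python (the logit values below are made up for illustration; in the real demo they come from the net's final InnerProduct layer, ip2 in the LeNet model above):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits, labels):
    """Return the top-1 label and its probability, as classification.cpp does."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return labels[best], probs[best]

# Hypothetical 10-way output for a digit image; index 3 dominates
logits = [0.1, -1.2, 0.3, 8.5, 0.0, -0.7, 1.1, 0.2, -2.0, 0.4]
labels = [str(d) for d in range(10)]
print(predict(logits, labels))  # -> ('3', 0.99...)
```

The demo additionally resizes the image to the net's input geometry and subtracts a mean image before the forward pass; those details are worth reading directly in classification.cpp.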

References:
1. MNIST model testing
2. Writing a test program using a Caffe-trained model