
Installing TensorFlow on Windows 2012

2017-09-19  董春磊

Official installation guide:

https://www.tensorflow.org/install/install_windows

1.pip

2.Anaconda

3. Building from source: http://www.jianshu.com/p/d0a5fa97fcc8

Note: the prebuilt TensorFlow binaries installed via pip or Anaconda are not compiled with AVX2 instruction-set acceleration, so building it yourself can make better use of your hardware (including the GPU). If you have neither AVX support nor a GPU, however, building from source offers almost no advantage.
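
If you want to check whether a source build would even pay off on your machine, you can inspect the CPU's instruction-set flags first. The sketch below relies on the third-party py-cpuinfo package (pip install py-cpuinfo); both the package and this check are my own suggestion, not part of the original guide.

# Check whether the CPU reports AVX / AVX2 support (requires: pip install py-cpuinfo)
import cpuinfo

flags = cpuinfo.get_cpu_info().get('flags', [])
print('AVX :', 'avx' in flags)
print('AVX2:', 'avx2' in flags)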

At the moment, official build support covers only Ubuntu and Mac OS X. On Windows you can build with either Bazel or CMake, but this path is only "highly experimental" and you may run into all kinds of errors.

Other installation write-ups worth consulting:

http://blog.csdn.net/wx7788250/article/details/60877166

http://blog.csdn.net/JerryZhang__/article/details/60763161

Getting started

Prerequisites

To install TensorFlow on Windows, the Python version must be 3.5.x or 3.6.x, and it must be the 64-bit (x64) build.

Microsoft Visual C++ 2015 Redistributable Update 3 must also be installed, otherwise TensorFlow will fail to load with a "missing MSVCP140.dll" error.
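
Before installing anything, the following quick check (my own addition, meant to be run on the target Windows machine) verifies the Python version, the 64-bit interpreter, and the presence of MSVCP140.dll:

# Prerequisite check: Python 3.5.x/3.6.x, 64-bit interpreter, MSVCP140.dll present
import ctypes
import struct
import sys

print('Python version :', '.'.join(map(str, sys.version_info[:3])))   # should be 3.5.x or 3.6.x
print('64-bit build   :', struct.calcsize('P') * 8 == 64)             # must be True
try:
    ctypes.WinDLL('msvcp140.dll')   # shipped with the VC++ 2015 Redistributable Update 3
    print('MSVCP140.dll   : found')
except OSError:
    print('MSVCP140.dll   : missing - install the VC++ 2015 Redistributable first')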

Download links:

VC++ redistributable: https://www.microsoft.com/en-us/download/details.aspx?id=53587

Python: https://www.python.org/downloads/release/python-362/

Bazel installation: https://docs.bazel.build/versions/master/install-windows.html

Chocolatey installation: https://chocolatey.org/install

Setting up the local environment

1. Download the VC++ redistributable:

https://www.microsoft.com/en-us/download/details.aspx?id=53587

2. Install Python 3.6.x:

https://www.python.org/downloads/release/python-362/

3. Install CUDA and cuDNN

Google provides CPU and GPU builds of TensorFlow. Training with the GPU build requires an NVIDIA graphics card plus CUDA and cuDNN; if you are using the CPU build, skip this step.

CUDA download: https://developer.nvidia.com/cuda-downloads

Just follow the installer prompts.

cuDNN download: https://developer.nvidia.com/cudnn

This step requires registering an account and filling out a short survey before you can download. After downloading cuDNN, extract it, add [yourPath]\cuda and [yourPath]\cuda\bin to the PATH environment variable, and copy the files as follows (a scripted version of these copies follows the list):

[yourPath]\cuda\bin\cudnn64_5.dll —> (copy to) [yourPath]\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin

[yourPath]\cuda\include\cudnn.h —> (copy to) [yourPath]\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\include

[yourPath]\cuda\lib\x64\cudnn.lib —> (copy to) [yourPath]\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\lib\x64
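
If you prefer to script the three copies rather than doing them by hand, a minimal sketch is below. Both paths are assumptions: adjust cudnn_dir to where you extracted the cuDNN archive and cuda_dir to your actual CUDA v8.0 installation directory.

# Copy the extracted cuDNN files into the CUDA v8.0 installation (paths are examples - adjust them)
import os
import shutil

cudnn_dir = r'C:\cudnn\cuda'                                            # where cuDNN was extracted (assumption)
cuda_dir = r'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0'   # default CUDA 8.0 install path (assumption)

for src, dst in [(r'bin\cudnn64_5.dll', 'bin'),
                 (r'include\cudnn.h', 'include'),
                 (r'lib\x64\cudnn.lib', r'lib\x64')]:
    shutil.copy2(os.path.join(cudnn_dir, src), os.path.join(cuda_dir, dst))
    print('copied', src, '->', dst)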

4. Check the CUDA version

Check the CUDA 8 version at a command prompt:

C:\Users\Administrator.chenbo-ovr097b6>nvcc -V

nvcc: NVIDIA (R) Cuda compiler driver

Copyright (c) 2005-2016 NVIDIA Corporation

Built on Mon_Jan__9_17:32:33_CST_2017

Cuda compilation tools, release 8.0, V8.0.60

5. Check the GPU device information

Run deviceQuery.exe to inspect the GPU:

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\extras\demo_suite>deviceQuery.exe

deviceQuery.exe Starting...

CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "Tesla M60"

CUDA Driver Version / Runtime Version          9.0 / 8.0

CUDA Capability Major/Minor version number:    5.2

Total amount of global memory:                8108 MBytes (8501460992 bytes)

(16) Multiprocessors, (128) CUDA Cores/MP:    2048 CUDA Cores

GPU Max Clock rate:                            1178 MHz (1.18 GHz)

Memory Clock rate:                            2505 Mhz

Memory Bus Width:                              256-bit

L2 Cache Size:                                2097152 bytes

Maximum Texture Dimension Size (x,y,z)        1D=(65536), 2D=(65536, 65536),3D=(4096, 4096, 4096)

Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers

Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers

Total amount of constant memory:              65536 bytes

Total amount of shared memory per block:      49152 bytes

Total number of registers available per block: 65536

Warp size:                                    32

Maximum number of threads per multiprocessor:  2048

Maximum number of threads per block:          1024

Max dimension size of a thread block (x,y,z): (1024, 1024, 64)

Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)

Maximum memory pitch:                          2147483647 bytes

Texture alignment:                            512 bytes

Concurrent copy and kernel execution:          Yes with 2 copy engine(s)

Run time limit on kernels:                    No

Integrated GPU sharing Host Memory:            No

Support host page-locked memory mapping:      Yes

Alignment requirement for Surfaces:            Yes

Device has ECC support:                        Disabled

CUDA Device Driver Mode (TCC or WDDM):        TCC (Tesla Compute Cluster Driver)

Device supports Unified Addressing (UVA):      Yes

Device PCI Domain ID / Bus ID / location ID:  0 / 0 / 21

Compute Mode:

< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = Tesla M60

Result = PASS

6. Install TensorFlow with pip

6.1 Installation

pip is Python's package manager and makes it easy to install packages. pip was installed automatically along with Python, so TensorFlow can be installed by running the following commands directly in CMD.

CPU build: pip install tensorflow

GPU build: pip install tensorflow-gpu

Note: a freshly installed Python may not come with pip. In that case, install pip first and then install TensorFlow.

Installing pip and TensorFlow:

Install pip: python -m ensurepip

CPU build: python -m pip install tensorflow

GPU build: python -m pip install tensorflow-gpu
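
Once the install finishes, a quick version check (a convenience step of my own, not from the original article) confirms what pip actually installed:

python -m pip show tensorflow          (use tensorflow-gpu here if you installed the GPU build)
python -c "import tensorflow as tf; print(tf.__version__)"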

6.2 Verification

python

>>> import tensorflow as tf

>>> hello = tf.constant('Hello, TensorFlow!')

>>> sess = tf.Session()  # the GPU build prints GPU memory information here

>>> print(sess.run(hello))
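
If you installed the GPU build, you can also ask TensorFlow which devices it registered. This uses the TF 1.x device_lib helper, consistent with the tf.Session-style API above; the exact device names depend on your machine.

>>> from tensorflow.python.client import device_lib
>>> [d.name for d in device_lib.list_local_devices()]   # a CPU device always appears; the GPU build should also list a GPU device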

7. Install TensorFlow with Anaconda

7.1 Install Anaconda

Download the latest Anaconda release: https://www.anaconda.com/download/

7.2 Open the conda prompt and create a conda environment

C:> conda create -n tensorflow python=3.6

7.3 Activate the conda environment

C:> activate tensorflow

7.4 Install TensorFlow

CPU build: (tensorflow)C:> pip install --ignore-installed --upgrade tensorflow

GPU build: (tensorflow)C:> pip install --ignore-installed --upgrade tensorflow-gpu

7.5 Verify inside the conda environment

python

>>> import tensorflow as tf

>>> hello = tf.constant('Hello, TensorFlow!')

>>> sess = tf.Session()  # the GPU build prints GPU memory information here

>>> print(sess.run(hello))
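
As a slightly larger smoke test than the hello-world snippet, the example below (my own addition, written against the same TF 1.x graph-and-session API used above) multiplies two small matrices and enables log_device_placement so you can see whether the op is placed on the CPU or the GPU:

>>> import tensorflow as tf
>>> a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
>>> b = tf.constant([[5.0], [6.0]])
>>> product = tf.matmul(a, b)                      # (2x2) x (2x1) -> (2x1)
>>> sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
>>> print(sess.run(product))                       # [[17.] [39.]]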

8. Done
