TensorFlow can be installed with pip, Docker, Virtualenv, Anaconda, or built from source; this guide uses pip.
Few tutorials cover the pip route well, and the official Chinese documentation has some steps out of order and some ambiguous wording, so it is easy to go wrong by following it. Hence this write-up.
Note: Ubuntu 16.04 is recommended.
Setup:
Linux distribution: Ubuntu 16.04, 64-bit
CPU: Intel Core i5-6300HQ
GPU: GTX 960M
Python: 2.7
I dual-boot with UEFI and had not disabled Secure Boot, which caused some problems, so:
1. Reboot into the BIOS and disable Secure Boot.
2. System Settings -> Software & Updates -> Additional Drivers

First open a terminal and run:

sudo apt-get update

Then switch the graphics driver to the proprietary NVIDIA driver (the change takes a while to apply).
3. Install CUDA 8
- Go to the CUDA Toolkit 8.0 download page and, as shown in the figure, choose the deb (local) installer. Some posts insist you must use the runfile, but the deb installer works just as well and is simpler.
- Note: CUDA 7.5 only ships packages for Ubuntu 15.04 and 14.04, so use CUDA 8.0.
(Figure 1)
- After the download finishes, run the commands shown in the figure to complete the installation.
Once installed, CUDA lives in /usr/local/cuda-8.0/ by default. Next, open your profile:

vim ~/.profile

Note: many other distributions use .bash_profile; Ubuntu does not have one and uses .profile instead.
Set the environment variables (append them to the end of the file; they take effect automatically on each login):
export PATH="$PATH:/usr/local/cuda-8.0/bin"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda-8.0/lib64"
After saving, continue in the terminal:

source ~/.profile   # apply the updated environment variables immediately
nvidia-smi          # check that the driver is configured correctly

If you see output similar to the following, the configuration succeeded:
(Figure 2)
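The append step above can be sketched with a small guard so the export lines are not duplicated if you run it twice. The CUDA 8.0 prefix is the default path assumed above, and a scratch file stands in for ~/.profile so the sketch is safe to run anywhere:

```shell
# Append the CUDA export lines only if they are not already present.
# A temp file stands in for ~/.profile so this sketch never touches real config.
PROFILE=$(mktemp)
CUDA_HOME=/usr/local/cuda-8.0   # default install prefix (assumption)
if ! grep -qF "$CUDA_HOME" "$PROFILE"; then
  printf 'export PATH="$PATH:%s/bin"\n' "$CUDA_HOME" >> "$PROFILE"
  printf 'export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:%s/lib64"\n' "$CUDA_HOME" >> "$PROFILE"
fi
cat "$PROFILE"
```

Replace the temp file with ~/.profile for real use; the guard makes the step idempotent.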
4. Downgrade gcc
Ubuntu 16.04 ships gcc 5.4.0, but CUDA 8.0 rejects compilers newer than 5.0, so downgrade to gcc 4.9:
sudo apt-get install g++-4.9
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.9 20
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-5 10
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-4.9 20
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-5 10
sudo update-alternatives --install /usr/bin/cc cc /usr/bin/gcc 30
sudo update-alternatives --set cc /usr/bin/gcc
sudo update-alternatives --install /usr/bin/c++ c++ /usr/bin/g++ 30
sudo update-alternatives --set c++ /usr/bin/g++
gcc --version
If the reported version is 4.9, the switch succeeded.
5. Install the cuDNN deep learning library
Be sure to download cuDNN 5.1! At least right now, v6.0 causes problems later on: with 6.0 installed, importing tensorflow kept raising errors, and I found reports on GitHub that 6.0 "does not work" while 5.1 does. After falling back to 5.1, everything worked.
- Before downloading, register an NVIDIA Developer account and answer three short survey questions. Choose "cuDNN v5.1 Library for Linux" as shown:
(Figure 3)
- After downloading, extract the archive and copy the cuDNN files into the CUDA Toolkit 8.0 install path. Assuming CUDA Toolkit 8.0 is installed in /usr/local/cuda-8.0 (the default), run the following (if /usr/local/cuda-8.0/include does not exist, create it first):
tar xvzf cudnn-8.0-linux-x64-v5.1.tgz
sudo cp cuda/include/cudnn.h /usr/local/cuda-8.0/include
sudo cp cuda/lib64/libcudnn* /usr/local/cuda-8.0/lib64
sudo chmod a+r /usr/local/cuda-8.0/include/cudnn.h /usr/local/cuda-8.0/lib64/libcudnn*
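A common way to confirm which cuDNN version ended up under the CUDA tree is to read the version macros out of cudnn.h. The sketch below runs the check against a mock header so it works anywhere; on a real install, point HDR at /usr/local/cuda-8.0/include/cudnn.h instead:

```shell
# Read the cuDNN version macros the way you would from the installed header.
HDR=$(mktemp)   # stand-in for /usr/local/cuda-8.0/include/cudnn.h
printf '#define CUDNN_MAJOR 5\n#define CUDNN_MINOR 1\n#define CUDNN_PATCHLEVEL 10\n' > "$HDR"
# Join the three macro values into a dotted version string.
awk '/CUDNN_MAJOR|CUDNN_MINOR|CUDNN_PATCHLEVEL/ { v = v ? v "." $3 : $3 } END { print v }' "$HDR"
```

On the mock header this prints 5.1.10; seeing a 5.1.x version on the real header confirms the right library was copied.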
6. Install and upgrade pip
- First make sure pip is installed and up to date:
sudo apt-get install python-pip python-dev
pip install --upgrade pip
7. Install the GPU-enabled TensorFlow for Python 2.7
pip install --upgrade tensorflow-gpu
Test it:
$ python
Python 2.7.12 (default, Nov 19 2016, 06:48:10)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
2017-08-25 14:34:54.825013: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-25 14:34:54.825065: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-25 14:34:54.825081: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-08-25 14:34:54.825093: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-25 14:34:54.825105: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-08-25 14:34:55.071951: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:893] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2017-08-25 14:34:55.072542: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties:
name: GeForce GTX 960M
major: 5 minor: 0 memoryClockRate (GHz) 1.176
pciBusID 0000:01:00.0
Total memory: 1.95GiB
Free memory: 1.31GiB
2017-08-25 14:34:55.072632: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0
2017-08-25 14:34:55.072695: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0:   Y
2017-08-25 14:34:55.072730: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 960M, pci bus id: 0000:01:00.0)
>>> print(sess.run(hello))
Hello, TensorFlow!

Now check which devices TensorFlow can use:
- "/cpu:0": The CPU of your machine.
- "/gpu:0": The GPU of your machine, if you have one.
- "/gpu:1": The second GPU of your machine, etc.
>>> import tensorflow as tf
>>> a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
>>> b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
>>> c = tf.matmul(a, b)
>>> sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
>>> print(sess.run(c))
MatMul: (MatMul): /job:localhost/replica:0/task:0/gpu:0
2017-08-09 09:47:39.461702: I tensorflow/core/common_runtime/simple_placer.cc:847] MatMul: (MatMul)/job:localhost/replica:0/task:0/gpu:0
b: (Const): /job:localhost/replica:0/task:0/gpu:0
2017-08-09 09:47:39.461942: I tensorflow/core/common_runtime/simple_placer.cc:847] b: (Const)/job:localhost/replica:0/task:0/gpu:0
a: (Const): /job:localhost/replica:0/task:0/gpu:0
2017-08-09 09:47:39.461976: I tensorflow/core/common_runtime/simple_placer.cc:847] a: (Const)/job:localhost/replica:0/task:0/gpu:0
[[ 22.  28.]
 [ 49.  64.]]
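As a sanity check on the arithmetic itself, independent of TensorFlow and the GPU, the same matmul can be reproduced with plain NumPy:

```python
# Reproduce the 2x3 * 3x2 matmul from the session above using NumPy only.
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0]).reshape(2, 3)
b = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0]).reshape(3, 2)
c = a.dot(b)
print(c)  # expected: [[22. 28.] [49. 64.]]
```

If the GPU session prints the same matrix, the device placement changed where the work ran but not the result.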