Oct 28, 2020
How to Install CUDA 11.1 + PyTorch 1.7
=================================================
Install CUDA 11.1
=================================================
Download: https://developer.download.nvidia.com/compute/cuda/11.1.0/local_installers/cuda-repo-rhel7-11-1-local-11.1.0_455.23.05-1.x86_64.rpm
Documentation: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
===== Step One:
Pre-installation Actions [https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#pre-installation-actions]
!!! Important:
You may need to update the kernel if `yum install kernel-devel-$(uname -r) kernel-headers-$(uname -r)` fails.
How to update the kernel: `yum install kernel kernel-tools kernel-tools-libs` [https://www.golinuxcloud.com/how-to-update-kernel-rhel-centos-7-yum-linux/]
===== Step Two:
Package Manager Installation [https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#package-manager-installation]
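A hedged sketch of the local-RPM flow from that guide, using the repo package linked above; double-check the exact commands on the NVIDIA download page, since they change between CUDA releases:
rpm -i cuda-repo-rhel7-11-1-local-11.1.0_455.23.05-1.x86_64.rpm
yum clean all
yum -y install cuda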
===== Step Three:
Post-installation Actions [https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions]
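The heart of the post-installation actions is putting the toolkit on PATH (and the library path). A sketch assuming the default install location /usr/local/cuda-11.1, persisted to /etc/profile in the same style as the rest of this note:
echo 'export PATH=/usr/local/cuda-11.1/bin:$PATH' >> /etc/profile
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-11.1/lib64:$LD_LIBRARY_PATH' >> /etc/profile
source /etc/profile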
===== Step Four: Install the proper driver for your graphics card
[https://www.server-world.info/en/note?os=CentOS_7&p=nvidia]
===== Step Five: Compile samples
# cuda-install-samples-11.1.sh [dir]
# make -k -j
Verification commands:
nvidia-smi
lspci | grep -i nvi
./bin/x86_64/linux/release/deviceQuery
Errors you may run into along the way, and how to handle them:
1. CentOS 7 yum error "This system is not registered with an entitlement server" [https://blog.csdn.net/whatday/article/details/106106767]
2. cudaNvSci.h: fatal error: nvscibuf.h: No such file or directory [https://cloud.tencent.com/developer/article/1584631]
Other references:
1. https://www.server-world.info/en/note?os=CentOS_7&p=cuda&f=4 (testing the samples)
2. https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#abstract (Official installation guide)
=================================================
Install PyTorch 1.7
=================================================
https://pytorch.org/get-started/locally/
Using conda to install PyTorch is recommended; refer to the link above:
conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch
For pip, run this command: pip install torch==1.7.0+cu110 torchvision==0.8.1+cu110 torchaudio===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
Test code after a successful installation, from [https://varhowto.com/install-pytorch-cuda-10-2/]:
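A minimal sketch of that check, assuming the conda or pip command above finished without errors:
python - <<'EOF'
import torch
print(torch.__version__)           # expect 1.7.0
print(torch.version.cuda)          # expect 11.0, the toolkit the wheels were built against
print(torch.cuda.is_available())   # True if the driver and CUDA runtime are working
print(torch.rand(5, 3))
EOF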
How to Install CUDA 10.2 + PyTorch 1.6 on CentOS 7
=================================================
Install CUDA 10.2
=================================================
Download: https://developer.nvidia.com/cuda-10.2-download-archive?target_os=Linux&target_arch=x86_64&target_distro=CentOS&target_version=7&target_type=rpmnetwork
Documentation: https://docs.nvidia.com/cuda/archive/10.2/cuda-installation-guide-linux/index.html
===== Step One:
Pre-installation Actions [https://docs.nvidia.com/cuda/archive/10.2/cuda-installation-guide-linux/index.html#pre-installation-actions]
!!! Important:
You may need to update the kernel if `yum install kernel-devel-$(uname -r) kernel-headers-$(uname -r)` fails.
How to update the kernel: `yum install kernel kernel-tools kernel-tools-libs` [https://www.golinuxcloud.com/how-to-update-kernel-rhel-centos-7-yum-linux/]
===== Step Two:
Package Manager Installation [https://docs.nvidia.com/cuda/archive/10.2/cuda-installation-guide-linux/index.html#package-manager-installation]
===== Step Three:
Post-installation Actions [https://docs.nvidia.com/cuda/archive/10.2/cuda-installation-guide-linux/index.html#post-installation-actions]
===== Step Four: Install the proper driver for your graphics card
[https://www.server-world.info/en/note?os=CentOS_7&p=nvidia]
===== Step Five: Compile samples
# cuda-install-samples-10.2.sh [directory]
# make -k -j
Errors you may run into along the way, and how to handle them:
1. CentOS 7 yum error "This system is not registered with an entitlement server" [https://blog.csdn.net/whatday/article/details/106106767]
2. cudaNvSci.h: fatal error: nvscibuf.h: No such file or directory [https://cloud.tencent.com/developer/article/1584631]
Other references:
1. https://www.server-world.info/en/note?os=CentOS_7&p=cuda&f=4 (testing the samples)
2. https://docs.nvidia.com/cuda/archive/10.2/cuda-installation-guide-linux/index.html#post-installation-actions (Official installation guide)
=================================================
Install PyTorch 1.6
=================================================
------- First: Install Python 3.8.1 (follow the build-from-source guide [https://phoenixnap.com/kb/how-to-install-python-3-centos-7]) ---------
------- Second: Install PyTorch with CUDA 10.2, following https://varhowto.com/install-pytorch-cuda-10-2/ ---------
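A hedged sketch following that guide, assuming the stock PyPI wheels (the PyTorch 1.6 wheels were built against CUDA 10.2) and that the Python 3.8.1 built above provides python3/pip3; adjust the binary names to your install:
pip3 install torch==1.6.0 torchvision==0.7.0
python3 - <<'EOF'
import torch
print(torch.__version__, torch.version.cuda)   # expect 1.6.0 and 10.2
print(torch.cuda.is_available())
EOF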
Please refer to "Create a Logical Volume larger than 2 TB and format it in Linux/RHEL":
"You cannot create a Linux partition larger than 2 TB using the fdisk command. Using fdisk you could not create partitions larger than 2 TB. It is fine for desktop and laptop users, but for servers, you need large partitions like 2TB, 3TB, 4TB etc.
Root Cause
- The fdisk command only supports the legacy MBR partition table format (also known as msdos partition table)
- MBR partition tables use data fields that have a maximum of 32 bit sector numbers, and with 512 bytes/sector that means a maximum of 2^(32+9) bytes per disk or partition is supported
- MBR partition tables cannot support accessing data on disks past 2.19 TB due to the above limitation
- Note that some older versions of fdisk may permit a larger size to be created but the resulting partition table will be invalid.
- The parted command can create disk labels using MBR (msdos), GUID Partition Table (GPT), SUN disk labels and many more types.
- The GPT disk label overcomes many of the limitations of the DOS MBR including restrictions on the size of the disk, the size of any one partition and the overall number of partitions.
- Note that booting from a GPT labelled volume requires firmware support and this is not commonly available on non-EFI platforms (including x86 and x86_64 architectures).
For more details please see the solution on Red Hat Customer Portal"
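For reference, a hedged sketch of the GPT + LVM route for a disk larger than 2 TB; the device /dev/sdb and the bigvg/biglv names are made up for illustration, and the Red Hat solution above remains the authoritative procedure:
parted /dev/sdb mklabel gpt                        # GPT label instead of MBR/msdos
parted -a optimal /dev/sdb mkpart primary 0% 100%
pvcreate /dev/sdb1
vgcreate bigvg /dev/sdb1
lvcreate -l 100%FREE -n biglv bigvg
mkfs.xfs /dev/bigvg/biglv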
Oct 22, 2020
!!! Recommendation: you can also compile the latest Octopus with GCC 8 using the same procedure.
================= How to Compile and Install Octopus 9.1 =====================
Note!!!: use mpif90 as the FC compiler when compiling the blas, lapack, and libxc libraries,
according to "Normally it is necessary to compile all Fortran libraries with the same compiler. If you have trouble, try to look for help in the Octopus mailing list." [Testing_your_build]
References:
1. Octopus installation - 算盘 (Abacus)
2. Installing Octopus
3. wiki Manual:Building_from_scratch
4. wiki Manual:Installation#Testing_your_build
5. https://octopus-code.org/wiki/Manual:Installation
The software is installed under the /opt/ directory, as shown below:
[root@mgt lapack-3.9.0]# ls -F /opt/ | more
BLAS-3.8.0/
fftw-3.3.8/
gsl-2.4/
HOW-Octopus-9.1-Was-Installed.txt
lapack-3.9.0/
libxc-4.3.4/
octopus-9.1/
openmpi-4.0.5/
Install some required tool packages:
---------------------
yum install -y autoconf automake libtool
--------------------
Environment variables added to /etc/profile:
--------------------
export LD_LIBRARY_PATH=/opt/gsl-2.4/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/opt/openmpi-4.0.5/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/opt/libxc-4.3.4//lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/opt/fftw-3.3.8//lib:$LD_LIBRARY_PATH
export PATH=/opt/gsl-2.4/bin:$PATH
export PATH=/opt/openmpi-4.0.5/bin:$PATH
export PATH=/opt/octopus-9.1/bin:$PATH
export PATH=/opt/libxc-4.3.4//bin:$PATH
export PATH=/opt/fftw-3.3.8//bin:$PATH
=== 1. Build and install openmpi-4.0.5 =======
For a tutorial, refer to https://www.jianshu.com/p/7309d3b1c735
Reference:
- openmpi building#easy-build
wget https://download.open-mpi.org/release/open-mpi/v4.0/openmpi-4.0.5.tar.bz2
tar xjf openmpi-4.0.5.tar.bz2
cd openmpi-4.0.5/
./configure --prefix=/opt/openmpi-4.0.5
make -j
make -j install
cd examples/
make
./hello_c
echo 'export LD_LIBRARY_PATH=/opt/openmpi-4.0.5/lib:$LD_LIBRARY_PATH' >> /etc/profile
echo 'export PATH=/opt/openmpi-4.0.5/bin:$PATH' >> /etc/profile
=== 2. Build and install BLAS-3.8.0 ===
[root@mgt BLAS-3.8.0]# grep -v '^#' make.inc
SHELL = /bin/sh
PLAT = _LINUX
FORTRAN = mpif90
OPTS = -O3
DRVOPTS = $(OPTS)
NOOPT =
LOADER = mpif90
LOADOPTS =
ARCH = ar
ARCHFLAGS= cr
RANLIB = ranlib
BLASLIB = blas$(PLAT).a
make          # produces blas_LINUX.a
mkdir -pv /opt/BLAS-3.8.0/
cp blas_LINUX.a /opt/BLAS-3.8.0/libblas.a
=== 3. Build and install lapack-3.9.0 ===
mv make.inc.example make.inc
### EDIT make.inc like below
[root@mgt lapack-3.9.0]# grep FC make.inc
# Modify the FC and FFLAGS definitions to the desired compiler
FC = mpif90
make -j
cd ..
tar cvf - lapack-3.9.0/*.a | tar xvf - -C /opt/
=== Optional, but installed in this tutorial: build and install scalapack-2.1.0 ======
For a tutorial, refer to "Installing scalapack"
wget https://github.com/Reference-ScaLAPACK/scalapack/archive/v2.1.0.tar.gz
[root@mgt scalapack-2.1.0]# grep -v '^#' SLmake.inc | grep -E 'BLAS|LAPACK'
SCALAPACKLIB = libscalapack.a
BLASLIB = /opt/BLAS-3.8.0/blas_LINUX.a
LAPACKLIB = /opt/lapack-3.9.0/liblapack.a
LIBS = $(LAPACKLIB) $(BLASLIB)
make -j
cd ..
tar cvf - scalapack-2.1.0 | tar xvf - -C /opt/
=== 4. Build and install libxc-4.3.4 ===
[root@mgt libxc-4.3.4]#
autoreconf -i
./configure --prefix=/opt/libxc-4.3.4 CC=gcc CXX=g++ FC=mpif90
make -j
make -j install
echo 'export LD_LIBRARY_PATH=/opt/libxc-4.3.4//lib:$LD_LIBRARY_PATH' >> /etc/profile
echo 'export PATH=/opt/libxc-4.3.4//bin:$PATH' >> /etc/profile
=== 5. Build and install gsl-2.4 =======
wget http://ftp.gnu.org/gnu/gsl/gsl-2.4.tar.gz
tar xvzf gsl-2.4.tar.gz
cd gsl-2.4
[root@mgt gsl-2.4]#
mkdir build-gsl
cd build-gsl/
../configure --prefix=/opt/gsl-2.4
make -j
make -j install
echo 'export LD_LIBRARY_PATH=/opt/gsl-2.4/lib:$LD_LIBRARY_PATH' >> /etc/profile
echo 'export PATH=/opt/gsl-2.4/bin:$PATH' >> /etc/profile
==== 6. Build and install fftw-3.3.8 ==========
wget http://fftw.org/fftw-3.3.8.tar.gz
tar xzvf fftw-3.3.8.tar.gz
cd fftw-3.3.8/
ls
less README
./configure --prefix=/opt/fftw-3.3.8
make -j
make install
ls /opt/fftw-3.3.8/
echo 'export LD_LIBRARY_PATH=/opt/fftw-3.3.8//lib:$LD_LIBRARY_PATH' >> /etc/profile
echo 'export PATH=/opt/fftw-3.3.8//bin:$PATH' >> /etc/profile
======== FINAL: Build and install Octopus 9.1 =============
wget https://octopus-code.org/download/9.1/octopus-9.1.tar.gz
tar xzf octopus-9.1.tar.gz
cd octopus-9.1/
ls
./configure --prefix=/opt/octopus-9.1 --with-libxc-prefix=/opt/libxc-4.3.4/ --with-gsl-prefix=/opt/gsl-2.4/ --with-blas=/opt/BLAS-3.8.0/libblas.a --with-lapack="-L/opt/lapack-3.9.0 -ltmglib -llapack" --with-scalapack="-L/opt/scalapack-2.1.0 -lscalapack" --with-fftw-prefix=/opt/fftw-3.3.8/ --enable-mpi
make -j
make install
echo 'export PATH=/opt/octopus-9.1/bin:$PATH' >> /etc/profile
# Test octopus 9.1 using a helloworld example
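A hedged sketch of such a test, based on the hydrogen-atom example from the Octopus getting-started tutorials; check the Octopus 9.1 manual for the exact input syntax:
mkdir -p ~/octopus-hello && cd ~/octopus-hello
cat > inp <<'EOF'
CalculationMode = gs
FromScratch = yes
%Coordinates
 'H' | 0 | 0 | 0
%
EOF
mpirun -np 4 /opt/octopus-9.1/bin/octopus > out.log 2>&1   # add --allow-run-as-root if testing as root with Open MPI
grep -i total static/info          # the ground-state results are summarized in static/info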
Set difference:
SELECT t1.id, t1.name, t1.age
FROM t1
LEFT JOIN t2
ON t1.id = t2.id
WHERE t1.name != t2.name
OR t1.age != t2.age;
id name age
2 小宋 20
3 小白 30
Intersection:
SELECT id, NAME, age, COUNT(*)
FROM (SELECT id, NAME, age
FROM t1
UNION ALL
SELECT id, NAME, age
FROM t2
) a
GROUP BY id, NAME, age
HAVING COUNT(*) > 1
id NAME age COUNT(*)
1 小王 10 2
4 hello 40 2
Here is my views.py based on the link above (its code is also on GitHub):
from django.shortcuts import render
from rest_framework.decorators import api_view
from rest_framework.response import Response
from rest_framework import status
from django.core.cache import cache
from django.conf import settings
from django.core.cache.backends.base import DEFAULT_TIMEOUT
import string, random

CACHE_TTL = getattr(settings, 'CACHE_TTL', DEFAULT_TIMEOUT)

from .models import Product

# Create your views here.
def id_generator(size=6, chars=string.ascii_uppercase + string.digits):
    return ''.join(random.choice(chars) for _ in range(size))

def price_generator(size=3, chars=string.digits):
    return ''.join(random.choice(chars) for _ in range(size))

def create_product(name, desc='', price=0):
    product = Product(name=name, description=desc, price=price)
    product.save()

@api_view(['GET'])
def view_books(request):
    products = Product.objects.all()
    # top the table up to 1000 rows with random demo products
    if len(products) < 1000:
        i = 0
        while i < 1000 - len(products):
            create_product(id_generator(), desc=3*id_generator(), price=int(price_generator()))
            i += 1
    results = [product.to_json() for product in products]
    return Response(results, status=status.HTTP_201_CREATED)

@api_view(['GET'])
def view_cached_books(request):
    if 'product' in cache:
        # get results from cache
        products = cache.get('product')
        return Response(products, status=status.HTTP_201_CREATED)
    else:
        products = Product.objects.all()
        results = [product.to_json() for product in products]
        # store data in cache
        cache.set('product', results, timeout=CACHE_TTL)
        return Response(results, status=status.HTTP_201_CREATED)
Here is requirements.txt (note the djangorestframework version):
$ pip freeze
asgiref==3.2.10
Django==1.9
django-redis==4.12.1
djangorestframework==3.6.3
pytz==2020.1
redis==3.5.3
sqlparse==0.4.1
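The views above rely on CACHE_TTL and the default cache being defined in settings.py with a django-redis backend, so a reachable Redis server is assumed. A hedged sketch for bringing the stack up locally against a default Redis on 127.0.0.1:6379:
pip install -r requirements.txt
redis-cli ping                     # expect PONG; django-redis needs a running Redis server
python manage.py migrate
python manage.py runserver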