[PYTHON] Running GPU computations with MinPy
https://minpy.readthedocs.io/en/latest/tutorial/numpy_under_minpy.html
GPU Support
But we do not stop here: we want MinPy to be not only friendly to use, but also fast. To this end, MinPy leverages the GPU's parallel computing ability. The code below shows our GPU support and a set of APIs that let you freely change the running context (i.e. run on CPU or GPU). You can refer to Select Context for MXNet for more details.
I installed minpy in Colab, following this answer:
https://stackoverflow.com/questions/51342408/how-do-i-install-python-packages-in-googles-colab
How do I install Python packages in Google's Colab?
In a project, I have e.g. two different packages. How can I use setup.py to install these two packages in Google Colab, so that I can import the packages?
!python setup.py install
!pip install minpy
# prefixing a command with ! runs it as a shell command, so packages can be installed from inside the notebook
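As a side note, the `!` prefix hands the line to a shell. A minimal sketch of the programmatic equivalent from plain Python (the `--version` flag is used here only so the sketch has no side effects):

```python
import subprocess
import sys

# "!pip install <pkg>" in a notebook is roughly equivalent to invoking
# pip as a module with the current interpreter:
result = subprocess.run(
    [sys.executable, "-m", "pip", "--version"],
    capture_output=True, text=True,
)
print(result.stdout.strip())
```

Using `sys.executable -m pip` (rather than a bare `pip`) guarantees the package lands in the same environment the notebook kernel is running in.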
Installing minpy, it tells me to install mxnet as well?
On import, a warning message like this appears:
W0730 02:24:07 120 minpy.dispatch.registry:register:47] Type MXNet for name reshape has already existed
Running things through minpy produces errors like this. Why is that? I don't know.
Setting it aside for now.
Let's try the example.
How do I install a library permanently in Colab?
In Google Colaboratory, I can install a new library using !pip install package-name. But when I open the notebook again tomorrow, I need to re-install it every time. Is there a way to install a li...
import os, sys
from google.colab import drive
drive.mount('/content/drive')
nb_path = '/content/notebooks'
os.symlink('/content/drive/My Drive/Colab Notebooks', nb_path)
sys.path.insert(0,nb_path)
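The symlink trick works because anything on sys.path is importable, so packages installed into the Drive-backed folder survive across sessions. A minimal, self-contained illustration of that mechanism, using a temp directory as a hypothetical stand-in for the Drive folder:

```python
import os
import sys
import tempfile

# Hypothetical stand-in for the Drive-backed notebooks folder:
# any directory on sys.path behaves the same way.
pkg_dir = tempfile.mkdtemp()
with open(os.path.join(pkg_dir, "mymod.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.insert(0, pkg_dir)
import mymod  # found via the directory we just inserted

print(mymod.VALUE)  # modules placed in that path are now importable
```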
GPU Support
import minpy.numpy as np
import minpy.numpy.random as random
from minpy.context import cpu, gpu
import time

n = 100

with cpu():
    x_cpu = random.rand(1024, 1024) - 0.5
    y_cpu = random.rand(1024, 1024) - 0.5

    # dry run
    for i in range(10):
        z_cpu = np.dot(x_cpu, y_cpu)
    z_cpu.asnumpy()

    # real run
    t0 = time.time()
    for i in range(n):
        z_cpu = np.dot(x_cpu, y_cpu)
    z_cpu.asnumpy()
    t1 = time.time()

with gpu(0):
    x_gpu0 = random.rand(1024, 1024) - 0.5
    y_gpu0 = random.rand(1024, 1024) - 0.5

    # dry run
    for i in range(10):
        z_gpu0 = np.dot(x_gpu0, y_gpu0)
    z_gpu0.asnumpy()

    # real run
    t2 = time.time()
    for i in range(n):
        z_gpu0 = np.dot(x_gpu0, y_gpu0)
    z_gpu0.asnumpy()
    t3 = time.time()

print("run on cpu: %.6f s/iter" % ((t1 - t0) / n))
print("run on gpu: %.6f s/iter" % ((t3 - t2) / n))
The code doesn't run for me, though: everything under with gpu(0): fails to execute.
Even for a simple dot product, the GPU is much faster --> because the GPU processes the work in parallel across many cores.
A GPU has thousands of cores that specialize in just addition, subtraction, multiplication, and division.
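Since MinPy mirrors NumPy's API, the CPU half of the benchmark above can be sanity-checked with plain NumPy even when minpy/mxnet won't install. A reduced sketch (smaller matrices and fewer iterations so it runs quickly; the numbers are illustrative only, not a CPU-vs-GPU comparison):

```python
import time
import numpy as np

n = 10
x = np.random.rand(256, 256) - 0.5
y = np.random.rand(256, 256) - 0.5

# dry run to warm up caches and BLAS threads
for _ in range(3):
    z = np.dot(x, y)

# real run
t0 = time.time()
for _ in range(n):
    z = np.dot(x, y)
t1 = time.time()

print("run on cpu: %.6f s/iter" % ((t1 - t0) / n))
```

Note that plain NumPy needs no asnumpy() call: the result is already an ndarray, and the computation is synchronous.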
The asnumpy() call is somewhat mysterious, implying z_cpu is not NumPy's ndarray type. Indeed this is true. For fast execution, MXNet maintains its own data structure, NDArray. This call re-syncs z_cpu into a NumPy array.
As you can see, there is a gap between the speeds of matrix multiplication on CPU and GPU. That's why we set the default policy mode to PreferMXNetPolicy, which means MinPy will dispatch operators to MXNet as much as possible for you, and achieve transparent fallback when there is no MXNet implementation. MXNet operations run on GPU, whereas the fallbacks run on CPU.