GPU acceleration in PyTorch is now available on Apple silicon as well. Taking this opportunity, I want to summarize how to enable GPU acceleration in the frameworks I use.
PyTorch
As of 2022-05-20.
Install PyTorch 1.12. MPS acceleration only works in the nightly build for now.
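For reference, the nightly build can be installed with pip; a command along these lines (the exact index URL may have changed since) is roughly what the PyTorch install page suggested for macOS at the time:

pip3 install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu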
import torch
import torchvision.models as models
from torchsummary import summary

print(torch.__version__)
mps_device = torch.device("mps")
print(mps_device)

# Create a Tensor directly on the mps device
x = torch.ones((1, 3, 224, 224), device=mps_device)
print(x.shape)

# Move your model to mps just like any other device
model = models.resnet18()
summary(model, (3, 224, 224), device="cpu")  # torchsummary defaults to CUDA, which is not available here
model.to(mps_device)

# Now every call runs on the GPU
pred = model(x)
print(pred, pred.shape)
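Before allocating tensors on "mps", it can be worth checking that the backend is actually usable; a minimal sketch using torch.backends.mps, which ships with the 1.12 builds:

import torch

# is_built(): the binary was compiled with MPS support
# is_available(): the macOS version and hardware actually expose an MPS device
if torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    if not torch.backends.mps.is_built():
        print("this PyTorch build was not compiled with MPS support")
    device = torch.device("cpu")
print(device)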
HuggingFace
Since it cannot be installed with pip or conda, it has to be built from source. The Rust tokenizer was used.
# install rust on arm terminal
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# install tokenizers
git clone https://github.com/huggingface/tokenizers
cd tokenizers/bindings/python
pip install setuptools_rust
python setup.py install

# install transformers
pip install git+https://github.com/huggingface/transformers

# install datasets
pip install git+https://github.com/huggingface/datasets
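Assuming the builds above succeeded, a quick sanity check is to import the packages and print their versions:

python -c "import transformers, tokenizers, datasets; print(transformers.__version__, tokenizers.__version__, datasets.__version__)"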
device = "mps"sentence = 'Hello World!'tokenizer = AutoTokenizer.from_pretrained('bert-large-uncased', use_fast=True)model = BertModel.from_pretrained('bert-large-uncased')
inputs = tokenizer(sentence, return_tensors="pt").to(device)model = model.to(device)outputs = model(**inputs)print(outputs)
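The outputs stay on the MPS device; to get NumPy arrays for downstream processing, move the tensors back to the CPU first. A minimal sketch, using the last_hidden_state field of the model output:

# last_hidden_state has shape (1, seq_len, 1024) for bert-large-uncased
hidden = outputs.last_hidden_state
print(hidden.device)                        # mps:0
hidden_np = hidden.detach().cpu().numpy()   # copy back to CPU before converting to NumPy
print(hidden_np.shape)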