GPU acceleration for PyTorch is now available on Apple Silicon. I wanted to document how to use GPU acceleration in the frameworks I work with.
PyTorch
As of 2022-05-20.
Install PyTorch 1.12. It only works with the nightly build.
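Before running anything on the GPU, it can be worth checking that the MPS backend is actually usable; a minimal sketch, assuming a PyTorch build (1.12 nightly or later) that exposes `torch.backends.mps`:

```python
import torch

# is_available() returns False if PyTorch was built without MPS support
# or if the macOS version is too old for the Metal backend.
if torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

print(device)
```

Falling back to `"cpu"` keeps the same script runnable on machines without Apple Silicon.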
import torch
import torchvision.models as models
from torchsummary import summary

print(torch.__version__)
mps_device = torch.device("mps")
print(mps_device)

# Create a Tensor directly on the mps device
x = torch.ones((1, 3, 224, 224), device=mps_device)
print(x.shape)

# Move your model to mps just like any other device
model = models.resnet18()
summary(model, (3, 224, 224))
model.to(mps_device)

# Now every call runs on the GPU
pred = model(x)
print(pred, pred.shape)

HuggingFace
It can't be installed via pip or conda, so you have to build it from source. I used the Rust tokenizer.
# install rust on arm terminal
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# install tokenizers
git clone https://github.com/huggingface/tokenizers
cd tokenizers/bindings/python
pip install setuptools_rust
python setup.py install

# install transformers
pip install git+https://github.com/huggingface/transformers

# install datasets
pip install git+https://github.com/huggingface/datasets

from transformers import AutoTokenizer, BertModel

device = "mps"
sentence = 'Hello World!'
tokenizer = AutoTokenizer.from_pretrained('bert-large-uncased', use_fast=True)
model = BertModel.from_pretrained('bert-large-uncased')

inputs = tokenizer(sentence, return_tensors="pt").to(device)
model = model.to(device)
outputs = model(**inputs)
print(outputs)
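One caveat worth remembering: like CUDA tensors, tensors on the mps device have to be moved back to the CPU before converting to NumPy. A minimal sketch, with a CPU fallback so it also runs on machines without MPS:

```python
import torch

# Allocate on mps when available, otherwise on the CPU.
dev = "mps" if torch.backends.mps.is_available() else "cpu"
t = torch.ones(2, 2, device=dev)

# .numpy() on an mps tensor raises; .cpu() copies it back first.
arr = t.cpu().numpy()
print(arr.shape)
```

This is the usual pattern for pulling model outputs (e.g. `outputs.last_hidden_state`) back to the host for post-processing.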