```python
>>> from transformers import pipeline

# Allocate a pipeline for sentiment-analysis
>>> classifier = pipeline('sentiment-analysis')
>>> classifier('We are very happy to include pipeline into the transformers repository.')
[{'label': 'POSITIVE', 'score': 0.9978193640708923}]
```
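A pipeline can also be applied to a list of sentences in a single call. A minimal sketch reusing the `classifier` defined above (the second sentence is only illustrative):

```python
# Classify several sentences at once; the pipeline returns one
# {'label': ..., 'score': ...} dict per input sentence.
>>> results = classifier([
...     'We are very happy to include pipeline into the transformers repository.',
...     'The documentation could be clearer.'
... ])
```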
```python
>>> from transformers import pipeline

# Allocate a pipeline for question-answering
>>> question_answerer = pipeline('question-answering')
>>> question_answerer({
...     'question': 'What is the name of the repository ?',
...     'context': 'Pipeline have been included in the huggingface/transformers repository'
... })
{'score': 0.5135612454720828, 'start': 35, 'end': 59, 'answer': 'huggingface/transformers'}
```
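The question-answering pipeline also accepts the question and context as keyword arguments instead of a dict. A brief sketch reusing the `question_answerer` defined above:

```python
# Same query passed as keyword arguments; the result is a dict with
# 'score', 'start', 'end' and 'answer' keys, as in the example above.
>>> result = question_answerer(
...     question='What is the name of the repository ?',
...     context='Pipeline have been included in the huggingface/transformers repository'
... )
>>> result['answer']
'huggingface/transformers'
```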
To download and use any of the pretrained models for your given task, you just need these three lines of code (PyTorch version):
```python
>>> from transformers import AutoTokenizer, AutoModel

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModel.from_pretrained("bert-base-uncased")
```
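Once the tokenizer and model are loaded, running a sentence through the model takes only a couple more lines. A minimal sketch, assuming a PyTorch backend (`return_tensors="pt"`); the example sentence is only for illustration:

```python
# Tokenize a sentence into PyTorch tensors and run a forward pass;
# outputs.last_hidden_state holds one hidden vector per input token.
>>> inputs = tokenizer("Hello world!", return_tensors="pt")
>>> outputs = model(**inputs)
```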