Nat Taylor · Blog, AI, Product Management & Tinkering

Test Drive: Transformers

Published on .

Today I’m test driving Hugging Face Transformers (“State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow”). I’ve used the library many times, but never deliberately on its own, and I’m trying the Serverless Inference API too. My task is simple sentiment classification. The process is:

  1. pip install transformers
  2. Go to huggingface.co and pick some models
  3. Implement

Here is the code. On my M1 it runs in a few seconds.

"""Use huggingface locally and Severless API"""

import huggingface_hub
from transformers import pipeline

pipes = {
    'smol': pipeline("text-generation", model="HuggingFaceTB/SmolLM-135M-Instruct", device='mps'),
    'qwen': pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct", max_new_tokens=500, device='mps'),
    'api': lambda messages: huggingface_hub.InferenceClient().chat.completions.create(model="meta-llama/Llama-3.2-1B-Instruct", messages=messages)
}
messages = [
    {"role": "user", "content": "Classify the sentiment of the following.  ONLY OUTPUT positive OR negative !!!\nThis vacuum really sucks"},
]

for name, pipe in pipes.items():
    print(name)
    print(pipe(messages))

# smol
# [{'generated_text': [{'role': 'user', 'content': 'Classify the sentiment of the following.  ONLY OUTPUT positive OR negative !!!\nThis vacuum really sucks'}, {'role': 'assistant', 'content': 'Here\'s a possible classification of the sentiment of the given text:\n\n**Positive Sentiment:**\n\n* "I love this new restaurant"\n* "I\'m so excited to try this'}]}]
# qwen
# [{'generated_text': [{'role': 'user', 'content': 'Classify the sentiment of the following.  ONLY OUTPUT positive OR negative !!!\nThis vacuum really sucks'}, {'role': 'assistant', 'content': 'negative'}]}]
# api
# ChatCompletionOutput(choices=[ChatCompletionOutputComplete(finish_reason='stop', index=0, message=ChatCompletionOutputMessage(role='assistant', content='Negative', tool_calls=None), logprobs=None)], created=1730317449, id='', model='meta-llama/Llama-3.2-1B-Instruct', system_fingerprint='2.3.1-sha-a094729', usage=ChatCompletionOutputUsage(completion_tokens=2, prompt_tokens=55, total_tokens=57))
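As the outputs above show, the three pipes return two different shapes: the local pipelines return a list whose `generated_text` holds the full chat transcript, while the Serverless API returns an OpenAI-style `ChatCompletionOutput` object. If you just want the label, a small normalizer works; a minimal sketch (`extract_reply` is a name I'm introducing here, not part of either library):

```python
def extract_reply(result):
    """Return the assistant's text from either output shape.

    - Local text-generation pipelines: a list of dicts whose
      'generated_text' is the chat transcript; the last turn is
      the assistant's reply.
    - Serverless API: a ChatCompletionOutput with .choices, mirroring
      the OpenAI chat completions response shape.
    """
    if isinstance(result, list):  # local pipeline output
        return result[0]["generated_text"][-1]["content"]
    return result.choices[0].message.content  # InferenceClient output
```

With that, the loop body becomes `print(name, extract_reply(pipe(messages)))`, which prints just `negative` (or, for smol, whatever it rambled) per model.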
