I have not done much deep learning research for about a year, because I have been focused on development and deep learning model deployment in Unreal Engine 5.
These days, ChatGPT has gained a great deal of popularity, and there is a real technical need to learn about text-based human-computer interaction.
I did deep learning research on character animation for several years. What interests me most is the possibility of extending ChatGPT-like models to more applications, such as character animation generation and other real-time 3D topics. I have also worked with NLP-related APIs such as ASR and TTS, and with integrating them.
I have research and development experience with RNNs, LSTMs, and GRUs for character animation generation, so the Transformer model is a good place for me to start. Now let's dive into the basics and explore!
Install Dependencies
My test platform:
- Windows 10
- NVIDIA GeForce RTX 2070 SUPER
- CUDA 11.8
- cuDNN 8.7
- Python 3.9
`transformers` is a Python library from Hugging Face. Install it with:
pip install "transformers[sentencepiece]"
Install PyTorch 2.0.0:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
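After installing, it is worth a quick sanity check that the CUDA build of PyTorch is actually being picked up; a minimal sketch (the printed device name should match your GPU):

```python
# Quick sanity check of the installation.
import torch
import transformers

print(transformers.__version__)
print(torch.__version__)
print(torch.cuda.is_available())       # should be True for the cu118 build
print(torch.cuda.get_device_name(0))   # e.g. NVIDIA GeForce RTX 2070 SUPER
```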
Explore Some Code
`transformers.pipeline` is a factory function that wraps pre-processing, model inference, and post-processing into a single callable object.
Some of the available pipelines (the string you pass to `pipeline()`):
- sentiment-analysis
- zero-shot-classification
- text-generation
- fill-mask
- ner (named entity recognition)
- question-answering
- summarization
- translation
- feature-extraction
Downloaded models are cached in `~/.cache/huggingface/`.
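Each task downloads a default checkpoint and runs on CPU unless told otherwise; you can pin a specific model from the Hub and move inference to the GPU. A minimal sketch (the checkpoint name here is just an example of a compatible Hub model id):

```python
from transformers import pipeline

# Pin an explicit checkpoint instead of the task default, and run on GPU 0.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # any compatible Hub model id
    device=0,  # -1 = CPU (default), 0 = first CUDA device
)
print(classifier("Running this one on the GPU."))
```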
Sentiment Analysis
```python
from transformers import pipeline
import time


def sentiment_analysis():
    classifier = pipeline("sentiment-analysis")
    t0 = time.time()
    result = classifier("I've been waiting for a HuggingFace course my whole life.")
    print(result)
    result = classifier(
        ["I've been waiting for a HuggingFace course my whole life.", "I hate this so much!"]
    )
    print(result)
    print(f'elapsed time: {time.time() - t0}')


if __name__ == '__main__':
    sentiment_analysis()
```
Output:
```
[{'label': 'POSITIVE', 'score': 0.9598049521446228}]
[{'label': 'POSITIVE', 'score': 0.9598049521446228}, {'label': 'NEGATIVE', 'score': 0.9994558691978455}]
elapsed time: 0.03801226615905762
```
Zero-shot Classification
```python
from transformers import pipeline
import time


def zero_shot_classification():
    classifier = pipeline("zero-shot-classification")
    t0 = time.time()
    result = classifier(
        "This is a course about the Transformers library",
        candidate_labels=["education", "politics", "business"],
    )
    print(result)
    result = classifier(
        "I am a happy bear running in the grass, and I want to play around.",
        candidate_labels=["story", "education", "politics", "business"],
    )
    print(result)
    print(f'elapsed time: {time.time() - t0}')


if __name__ == '__main__':
    zero_shot_classification()
```
Output:
```
{'sequence': 'This is a course about the Transformers library', 'labels': ['education', 'business', 'politics'], 'scores': [0.8445993065834045, 0.11197393387556076, 0.043426718562841415]}
{'sequence': 'I am a happy bear running in the grass, and I want to play around.', 'labels': ['story', 'business', 'education', 'politics'], 'scores': [0.7858290672302246, 0.10075385868549347, 0.06901882588863373, 0.04439830780029297]}
elapsed time: 0.819998025894165
```
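The scores above sum to 1 because the candidate labels are treated as mutually exclusive. If that assumption does not hold, recent library versions accept a multi_label flag that scores each label independently; a small sketch:

```python
from transformers import pipeline


def zero_shot_multi_label():
    classifier = pipeline("zero-shot-classification")
    # With multi_label=True each candidate label is scored on its own,
    # so the scores no longer need to sum to 1.
    result = classifier(
        "This is a course about the Transformers library",
        candidate_labels=["education", "politics", "business"],
        multi_label=True,
    )
    print(result)
```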
Text Generation
```python
from transformers import pipeline
import time


def text_generation():
    generator = pipeline("text-generation")
    t0 = time.time()
    result = generator("In this course, we will teach you how to")
    print(result)
    print(f'elapsed time: {time.time() - t0}')

    generator = pipeline("text-generation", model="distilgpt2")
    t0 = time.time()
    result = generator(
        "In this course, we will teach you how to",
        max_length=30,
        num_return_sequences=2,
    )
    print(result)
    print(f'elapsed time: {time.time() - t0}')


if __name__ == '__main__':
    text_generation()
```
Output:
```
[{'generated_text': 'In this course, we will teach you how to use a single-source software architecture to design and build web apps. We will build apps using PHP, Scala, Clojure and Dart. We will learn the fundamentals of REST; the fundamental concepts of programming'}]
elapsed time: 1.106968879699707
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
[{'generated_text': 'In this course, we will teach you how to write concise, easy-to-read, and complex code. This course aims at you to help'}, {'generated_text': 'In this course, we will teach you how to solve the problem that creates chaos and anxiety within real society. Our lesson of these two courses is that'}]
elapsed time: 0.3989686965942383
```
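The "Setting `pad_token_id` to `eos_token_id`" line is just an informational message from open-ended generation. As far as I know, generation keyword arguments are forwarded by the pipeline, so passing pad_token_id explicitly silences it; a sketch:

```python
from transformers import pipeline


def text_generation_quiet():
    generator = pipeline("text-generation", model="distilgpt2")
    # Passing pad_token_id explicitly avoids the informational message above.
    result = generator(
        "In this course, we will teach you how to",
        max_length=30,
        num_return_sequences=2,
        pad_token_id=generator.tokenizer.eos_token_id,
    )
    print(result)
```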
Mask Filling
```python
from transformers import pipeline
import time


def mask_filling():
    unmasker = pipeline("fill-mask")
    t0 = time.time()
    result = unmasker("This course will teach you all about <mask> models.", top_k=2)
    print(result)
    print(f'elapsed time: {time.time() - t0}')


if __name__ == '__main__':
    mask_filling()
```
Output:
```
[{'score': 0.1961977779865265, 'token': 30412, 'token_str': ' mathematical', 'sequence': 'This course will teach you all about mathematical models.'}, {'score': 0.04052717983722687, 'token': 38163, 'token_str': ' computational', 'sequence': 'This course will teach you all about computational models.'}]
elapsed time: 0.022998571395874023
```
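Note that the mask placeholder depends on the checkpoint: the default fill-mask model uses `<mask>`, while BERT-style models expect `[MASK]`. Reading the token from the pipeline's tokenizer avoids hard-coding it; a sketch assuming `bert-base-uncased`:

```python
from transformers import pipeline


def mask_filling_bert():
    unmasker = pipeline("fill-mask", model="bert-base-uncased")
    mask = unmasker.tokenizer.mask_token  # '[MASK]' for this checkpoint
    result = unmasker(f"This course will teach you all about {mask} models.", top_k=2)
    print(result)
```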
Named Entity Recognition
```python
from transformers import pipeline
import time


def named_entity_recognition():
    ner = pipeline("ner", grouped_entities=True)
    t0 = time.time()
    result = ner("My name is Sylvain and I work at Hugging Face in Brooklyn.")
    print(result)
    print(f'elapsed time: {time.time() - t0}')


if __name__ == '__main__':
    named_entity_recognition()
```
Output:
```
[{'entity_group': 'PER', 'score': 0.9981694, 'word': 'Sylvain', 'start': 11, 'end': 18}, {'entity_group': 'ORG', 'score': 0.9796019, 'word': 'Hugging Face', 'start': 33, 'end': 45}, {'entity_group': 'LOC', 'score': 0.9932106, 'word': 'Brooklyn', 'start': 49, 'end': 57}]
elapsed time: 0.09499835968017578
```
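In newer transformers releases, grouped_entities=True is, if I recall correctly, deprecated in favour of aggregation_strategy="simple", which produces the same grouped output; a sketch:

```python
from transformers import pipeline


def named_entity_recognition_v2():
    # aggregation_strategy="simple" merges sub-word tokens into whole entities,
    # equivalent to the older grouped_entities=True flag.
    ner = pipeline("ner", aggregation_strategy="simple")
    result = ner("My name is Sylvain and I work at Hugging Face in Brooklyn.")
    print(result)
```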
Question Answering
```python
from transformers import pipeline
import time


def question_answering():
    question_answerer = pipeline("question-answering")
    t0 = time.time()
    result = question_answerer(
        question="Where do I work?",
        context="My name is Sylvain and I work at Hugging Face in Brooklyn",
    )
    print(result)
    print(f'elapsed time: {time.time() - t0}')


if __name__ == '__main__':
    question_answering()
```
Output:
```
{'score': 0.6949772238731384, 'start': 33, 'end': 45, 'answer': 'Hugging Face'}
elapsed time: 0.01596975326538086
```
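If a single span is not enough, recent versions of the question-answering pipeline should also accept a top_k argument that returns several candidate answers; a sketch (parameter name assumed from recent releases):

```python
from transformers import pipeline


def question_answering_top_k():
    question_answerer = pipeline("question-answering")
    # top_k returns the best k candidate spans instead of only the single best one.
    result = question_answerer(
        question="Where do I work?",
        context="My name is Sylvain and I work at Hugging Face in Brooklyn",
        top_k=2,
    )
    print(result)
```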
Summarization
```python
from transformers import pipeline
import time


def summarization():
    summarizer = pipeline("summarization")
    t0 = time.time()
    result = summarizer(
        """
        America has changed dramatically during recent years. Not only has the number of
        graduates in traditional engineering disciplines such as mechanical, civil,
        electrical, chemical, and aeronautical engineering declined, but in most of the
        premier American universities engineering curricula now concentrate on and
        encourage largely the study of engineering science. As a result, there are
        declining offerings in engineering subjects dealing with infrastructure, the
        environment, and related issues, and greater concentration on high technology
        subjects, largely supporting increasingly complex scientific developments. While
        the latter is important, it should not be at the expense of more traditional
        engineering. Rapidly developing economies such as China and India, as well as
        other industrial countries in Europe and Asia, continue to encourage and advance
        the teaching of engineering. Both China and India, respectively, graduate six and
        eight times as many traditional engineers as does the United States. Other
        industrial countries at minimum maintain their output, while America suffers an
        increasingly serious decline in the number of engineering graduates and a lack of
        well-educated engineers.
        """
    )
    print(result)
    print(f'elapsed time: {time.time() - t0}')


if __name__ == '__main__':
    summarization()
```
Output:
```
[{'summary_text': ' America has changed dramatically during recent years . The number of engineering graduates in the U.S. has declined in traditional engineering disciplines such as mechanical, civil, electrical, chemical, and aeronautical engineering . Rapidly developing economies such as China and India, as well as other industrial countries, continue to encourage and advance the teaching of engineering .'}]
elapsed time: 3.961998224258423
```
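The length of the summary can be bounded with generation arguments, which the pipeline forwards to the underlying generate() call; a minimal sketch (pass it the same article text used above):

```python
from transformers import pipeline


def summarization_with_limits(text):
    summarizer = pipeline("summarization")
    # min_length / max_length are measured in tokens and bound the summary length.
    result = summarizer(text, min_length=30, max_length=60)
    print(result)
```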
Translation
```python
from transformers import pipeline
import time


def translation():
    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-zh-en")
    t0 = time.time()
    # Input (Chinese): "Hello, my name is He Yulong. I am a software development engineer."
    result = translator("你好,我的名字叫何雨龙。我是一名软件研发工程师。")
    print(result)
    print(f'elapsed time: {time.time() - t0}')


if __name__ == '__main__':
    translation()
```
Output:
```
[{'translation_text': "Hi, my name is He Yu Long. I'm a software development engineer."}]
elapsed time: 0.42996883392333984
```
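Helsinki-NLP publishes OPUS-MT checkpoints for many language pairs; I believe the reverse direction is available as opus-mt-en-zh, so swapping the model id should give English-to-Chinese translation:

```python
from transformers import pipeline


def translation_en_to_zh():
    # Assumes the Helsinki-NLP/opus-mt-en-zh checkpoint exists on the Hub
    # (the reverse direction of the model used above).
    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-zh")
    result = translator("Hello, my name is He Yulong. I am a software development engineer.")
    print(result)
```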
Feature Extraction
```python
from transformers import pipeline
import time


def feature_extraction():
    extractor = pipeline("feature-extraction", model="bert-base-uncased")
    t0 = time.time()
    result = extractor("This is a simple test.", return_tensors=True)
    print(result.shape)
    print(f'elapsed time: {time.time() - t0}')


if __name__ == '__main__':
    feature_extraction()
```
Output:
```
torch.Size([1, 8, 768])
elapsed time: 0.0279998779296875
```
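The shape is (batch, tokens, hidden_size): one 768-dimensional vector per token of the tokenized input, including the special tokens. If a single fixed-size sentence vector is needed, one common (if crude) option is to average over the token axis; a sketch:

```python
from transformers import pipeline


def sentence_embedding():
    extractor = pipeline("feature-extraction", model="bert-base-uncased")
    features = extractor("This is a simple test.", return_tensors=True)  # shape (1, 8, 768)
    # Mean-pool over the token axis to get one vector for the whole sentence.
    sentence_vector = features.mean(dim=1)
    print(sentence_vector.shape)  # torch.Size([1, 768])
```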