Hugging Face

  • Home (link)
  • Transformers (link)
  • Transformers - git (link)
  • Datasets (link)
  • Models (link)
  • Inside Hugging Face’s Accelerate! (link)
  • Introducing HF Accelerate (link)
  • Hugging Face on PyTorch / XLA TPUs: Faster and cheaper training (link)
  • Fine-Tune Wav2Vec2 for English ASR with HF-Transformers (link)
  • The Partnership: Amazon SageMaker and Hugging Face (link)
  • Fine-tuning a model on a text classification task - colab (link) (a minimal Trainer sketch follows this list)
  • Fine-tuning a model on a token classification task (link)
  • HF + Comet (link)
  • GerPT2-large (link)
  • Multilingual Serverless XLM RoBERTa with HuggingFace, AWS Lambda (link)
  • Transformers-based Encoder-Decoder Models (link)
  • The ultimate guide to Transformer-based Encoder-Decoder Models (colab) (link) (link) (link) (link)
  • Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT (link)
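
The text-classification entry above pairs well with a compact example. Below is a minimal, illustrative sketch of the standard Transformers Trainer loop; the dataset ("imdb") and model ("distilbert-base-uncased") are arbitrary choices for the sketch, not ones prescribed by the linked notebook.

```python
# Minimal text-classification fine-tuning sketch with Hugging Face
# Transformers; dataset and checkpoint names are illustrative only.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

dataset = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    train_dataset=dataset["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=dataset["test"].select(range(500)),
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```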

Attention, Transformer, BERT

  • What Have Language Models Learned? (link)
  • Simple Transformers (link)
  • Comparing Transformer Tokenizers (link)
  • Transformer Networks: A mathematical explanation why scaling the dot products leads to more stable gradients (link) (the derivation is sketched after this list)
  • 10 Things You Need to Know About BERT and the Transformer Architecture That Are Reshaping the AI Landscape (link)
  • Bert Inner Workings (link)
  • Summarization has gotten commoditized thanks to BERT (link)
  • Retrieval Augmented Generation with Huggingface Transformers and Ray (link)
  • How to Incorporate Tabular Data with HuggingFace Transformers (link)
  • Extractive Text Summarization using Contextual Embeddings (link)
  • How not to use BERT for Document Ranking (link)
  • Conversational Summarization with Natural Language Processing (link)
  • Transformers (link)
  • ELECTRA — Addressing the flaws of BERT’s pre-training process (link)
  • Encoder Decoder models in HuggingFace from (almost) scratch (link)
  • Beyond BERT (link)
  • Easy sentence similarity with BERT Sentence Embeddings using John Snow Labs NLU (link)
  • TinyBERT — Size does matter, but how you train it can be more important (link)
  • ELECTRA: Pre-Training Text Encoders as Discriminators rather than Generators (link)
  • Poor Man’s BERT — Exploring Pruning as an Alternative to Knowledge Distillation (link)
  • Data Extraction using Question Answering Systems (link)
  • Understanding LongFormer’s Sliding Window Attention Mechanism (link)
  • What Is The SMITH Algorithm? (link)
  • BERT: Working with Long Inputs (link)
  • XLNet outperforms BERT on several NLP Tasks (link)
  • Text-to-Text Transfer Transformer (link)
  • Transformer encoder - visualized (link)
  • Emergent linguistic structure in artificial neural networks trained by self-supervision (link)
  • Understanding Language using XLNet with autoregressive pre-training (link)
  • Speeding up BERT (link)
  • Pre-training BERT from scratch with cloud TPU (link)
  • Dissecting BERT Part 1: Understanding the Transformer (link)
  • Understanding BERT Part 2: BERT Specifics (link)
  • deepset - BERT (link)
  • deepset - FARM (link)
  • How GPT3 Works - Visualizations and Animations (link)
  • The Annotated GPT-2 (link)
  • The Illustrated GPT-2 (Visualizing Transformer Language Models) (link)
  • A Visual Guide to Using BERT for the First Time (link)
  • The Illustrated BERT, ELMo, and co. (link)
  • BERT - git google (link)
  • The Illustrated Word2vec (link)
  • The Annotated Encoder-Decoder with Attention (link)
  • Seq2seq Models With Attention (link)
  • How to code The Transformer in Pytorch (link)
  • The Annotated Transformer (link)
  • The Illustrated Transformer (link)
  • Transformers from scratch (link)
  • the transformer … “explained”? (link)
  • PyTorch-Transformers (link)
  • Comprehensive Language Model Fine Tuning, Part 1 (link)
  • Which flavor of BERT should you use for your QA task? (link)
  • Fastai with Transformers (BERT, RoBERTa, XLNet, XLM, DistilBERT) (link)
  • Using SimpleTransformers for Common NLP Applications (link)
  • minGPT - karpathy (link)
  • A Quick Demo of Andrej Karpathy’s minGPT Play Char Demo (link)
  • BERT Text Classification Using Pytorch (link)
  • The Reformer - Pushing the limits of language modeling (link)
  • Visual Paper Summary: ALBERT (A Lite BERT) (link)
  • GPT-2 and the Nature of Intelligence (link)
  • The Dark Secrets of BERT (link)
  • Encoder-decoders in Transformers: a hybrid pre-trained architecture for seq2seq (link)
  • Benchmarking Transformers: PyTorch and TensorFlow (link)
  • Transformers Hugging Face GitHub (link)
  • Transformers - A collection of resources to study Transformers in depth. (link)
  • XLNet, ERNIE 2.0, And RoBERTa: What You Need To Know About New 2019 Transformer Models (link)
  • spaCy meets PyTorch-Transformers: Fine-tune BERT, XLNet and GPT-2 (link)
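
The scaling entry flagged above rests on a one-line variance argument. As a quick reference (in the standard notation of Vaswani et al., 2017, not a substitute for the linked post):

```latex
% Scaled dot-product attention:
\[
  \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
\]
% If the components of q and k are independent with mean 0 and variance 1, then
\[
  \operatorname{Var}(q \cdot k) = \operatorname{Var}\!\Big(\sum_{i=1}^{d_k} q_i k_i\Big) = d_k,
\]
% so dividing by \sqrt{d_k} restores unit variance and keeps the softmax
% out of its saturated region, where gradients vanish.
```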

Misc

  • Ultimate Guide To Text Similarity With Python - NewsCatcher (link) (a TF-IDF similarity sketch follows this list)
  • Unsupervised Text Summarization using Sentence Embeddings (link)
  • Text Mining 101: A Stepwise Introduction to Topic Modeling using Latent Semantic Analysis (using Python) (link)
  • An Introduction to Text Summarization using the TextRank Algorithm (link)
  • The Language Interpretability Tool (LIT) (link)
  • A guide to language model sampling in AllenNLP (link)
  • Going Beyond SQuAD (Part 1) (link)
  • GEM Benchmark - for Natural Language Generation (link)
  • Learn Natural Language Processing the practical way (link)
  • GLUE Benchmark (link) (link)
  • Stanza – A Python NLP Package for Many Human Languages (link)
  • nlp-tutorial (link)
  • Self-host your HuggingFace Transformer NER model with Torchserve + Streamlit (link)
  • jiant is an NLP toolkit (link)
  • Transfer Learning for Natural Language Processing (Packt book) (link)
  • NER-Papers (link)
  • Zero-Shot Learning in Modern NLP (link)
  • NLP’s ImageNet moment has arrived (link)
  • NLP Year in Review — 2019 (link)
  • Current Issues with Transfer Learning in NLP (link)
  • 74 Summaries of Machine Learning and NLP Research (link)
  • Evaluation Metrics for Language Modeling (link)
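
The text-similarity guide flagged above compares several methods; here is a minimal baseline sketch using TF-IDF plus cosine similarity, assuming scikit-learn is installed (the sentences are illustrative):

```python
# Baseline text similarity: TF-IDF vectors compared by cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Transformers dominate modern NLP benchmarks.",
    "Modern NLP benchmarks are dominated by transformer models.",
    "CRF layers are popular for sequence labeling.",
]
tfidf = TfidfVectorizer().fit_transform(docs)  # sparse doc-term matrix
print(cosine_similarity(tfidf))                # pairwise similarity matrix
```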

NER

  • Named Entity Recognition — Clinical Data Extraction (link)
  • Training a spaCy NER Pipeline with Prodigy (link)
  • Existing Tools for Named Entity Recognition (link)
  • GermEval 2014 Named Entity Recognition Shared Task (link)
  • A Named Entity Recognition Shootout for German - pdf (link)
  • Named Entity Recognition and the Road to Deep Learning (link)
  • A Named-Entity Recognition Program based on Neural Networks and Easy to Use (link)
  • CRF Layer on the Top of BiLSTM 1 (link)
  • CRF Layer on the Top of BiLSTM 2 (link)
  • CRF Layer on the Top of BiLSTM 3 (link)
  • CRF Layer on the Top of BiLSTM 4 (link)
  • CRF Layer on the Top of BiLSTM 5 (link)
  • CRF Layer on the Top of BiLSTM 6 (link)
  • CRF Layer on the Top of BiLSTM 7 (link)
  • CRF Layer on the Top of BiLSTM 8 (link) (link) (link)
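
The eight-part series above builds one objective step by step. For quick reference, this is the usual BiLSTM-CRF score in standard notation (which may differ slightly from the posts' own):

```latex
% Score of a tag sequence y for sentence x: BiLSTM emission scores P plus
% learned tag-transition scores A (with start/stop tags y_0 and y_{n+1}):
\[
  s(x, y) = \sum_{i=1}^{n} P_{i,\, y_i} + \sum_{i=0}^{n} A_{y_i,\, y_{i+1}}
\]
% Training maximizes the log-probability of the gold sequence,
\[
  p(y \mid x) = \frac{\exp s(x, y)}{\sum_{y'} \exp s(x, y')},
\]
% where the partition sum over all sequences y' is computed with the
% forward algorithm, and decoding uses Viterbi.
```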

Other

  • Haystack — Neural Question Answering At Scale (link)
  • 5 NLP Libraries Everyone Should Know (link)
  • The NLP Pandect (link)
  • Text Summary Papers (link)
  • Transfer Learning in NLP - slides by Thomas Wolf, Hugging Face (link)
  • SOTA NLP (link)
  • Ruder NLP Newsletter (link)
  • Shuffling Paragraphs: Using Data Augmentation in NLP to Increase Accuracy (link)
  • The Conversational Intelligence Challenge 2 (ConvAI2) (link)
  • Workshop for Natural Language Processing Open Source Software (link)
  • How to Train your Own Model with NLTK and Stanford NER Tagger? (for English, French, German…) (link)
  • Supervised Word Vectors from Scratch in Rasa NLU (link)
  • An overview of the NLP ecosystem in R (link)
  • SNLI-decomposable-attention (link)
  • A Review of the Neural History of Natural Language Processing (link)
  • Eisenstein's NLP book (link)
  • Holy NLP! Understanding Part of Speech Tags, Dependency Parsing, and Named Entity Recognition (link)
  • NLP Architect by Intel AI LAB (link)
  • TutorialBank: Learning NLP Made Easier (link)
  • Comparing Sentence Similarity Methods (link)
  • Text Embedding Models Contain Bias. Here’s Why That Matters (link)
  • The Natural Language Decathlon (link)
  • Joey NMT (link)

spaCy

  • How to Fine-Tune BERT Transformer with spaCy 3 (link)
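
As orientation for the tutorial above, a minimal sketch of running a transformer-backed spaCy 3 pipeline of the kind it fine-tunes; it assumes the spacy-transformers extra and the en_core_web_trf package are installed. The fine-tuning itself is driven by spaCy's config system via `python -m spacy train`.

```python
# Minimal sketch: load spaCy 3's RoBERTa-based English pipeline and read
# out named entities. Assumes: pip install spacy[transformers] and
# python -m spacy download en_core_web_trf
import spacy

nlp = spacy.load("en_core_web_trf")
doc = nlp("Hugging Face was founded in New York City.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. "Hugging Face" ORG, "New York City" GPE
```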