
Releases: huggingface/transformers

v4.30.1: Patch release

09 Jun 15:58
65a1ec0

v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone

08 Jun 18:07
fe861e5

100k

Transformers has just reached 100k stars on GitHub. To celebrate, we wanted to highlight 100 projects in the vicinity of transformers, so we created an awesome-transformers page to do just that.

We accept PRs to add projects to the list!

4-bit quantization and QLoRA

By leveraging the bitsandbytes library by @TimDettmers, we add 4-bit support to transformers models!
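Concretely, a model can be loaded in 4-bit with a quantization config. Below is a minimal sketch; the checkpoint name and the NF4/bfloat16 settings are example choices, and bitsandbytes plus accelerate need to be installed:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# NF4 quantization with bfloat16 compute, as popularized by QLoRA
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_id = "facebook/opt-350m"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    device_map="auto",
)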

Agents

The Agents framework has been improved and continues to be stabilized. Alongside bug fixes, here are the important new features that were added:

  • Local agent capabilities, to load a generative model directly from transformers instead of relying on APIs.
  • Prompts are now hosted on the Hub, which means that anyone can fork the prompts, update them with their own, and let other community contributors re-use them.
  • We add an AzureOpenAiAgent class to support Azure OpenAI agents.
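As a rough sketch of the local agent capability (the StarCoder checkpoint is only an example, and running it locally requires substantial hardware):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, LocalAgent

checkpoint = "bigcode/starcoder"  # example code-generation model
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# The agent generates tool-calling code with the local model instead of relying on an API
agent = LocalAgent(model, tokenizer)
agent.run("Draw me a picture of rivers and lakes.")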

Safetensors

The safetensors library is a safe serialization framework for machine learning tensors. It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).

It has now become a core dependency of transformers.
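In practice, opting into safetensors serialization is a single flag on save; a minimal sketch with an example checkpoint:

from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")  # example checkpoint
# Writes model.safetensors instead of pytorch_model.bin
model.save_pretrained("./bert-safetensors", safe_serialization=True)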

New models

Swiftformer

The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.

Autoformer

This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.

MobileViTv2

MobileViTV2 is the second version of MobileViT, constructed by replacing the multi-headed self-attention in MobileViT with separable self-attention.

PerSAM

PerSAM proposes a minimal modification to SAM to allow DreamBooth-like personalization, enabling segmentation of concepts in new images from just one example.

Timm backbone

We add support for loading timm weights within the AutoBackbone API in transformers. timm models can be instantiated through the TimmBackbone class, and then used with any vision model that needs a backbone.
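A rough sketch of what this can look like (resnet50 is just an example timm model name, and the exact configuration arguments shown here are assumptions rather than a definitive reference):

import torch
from transformers import TimmBackbone, TimmBackboneConfig

# Wrap a timm model so it exposes the standard transformers backbone interface
config = TimmBackboneConfig(backbone="resnet50")  # example timm model name
backbone = TimmBackbone(config)

pixel_values = torch.randn(1, 3, 224, 224)
outputs = backbone(pixel_values)
feature_maps = outputs.feature_maps  # per-stage feature maps, usable by detection/segmentation heads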

Image to text pipeline conditional support

We add conditional text generation to the image-to-text pipeline, allowing the model to continue an initial text prompt conditioned on an image.

  • [image-to-text pipeline] Add conditional text support + GIT by @NielsRogge in #23362
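Roughly, this looks as follows (the GIT checkpoint, image URL, and prompt are example values; the prompt argument is the new addition):

from transformers import pipeline

captioner = pipeline("image-to-text", model="microsoft/git-base-coco")  # example checkpoint
# The model continues the initial prompt, conditioned on the image
captioner(
    "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
    prompt="a photography of",
)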

TensorFlow implementations

Accelerate Migration

A major rework of the Trainer internals is underway, leveraging accelerate instead of redefining them in transformers. This should unify both frameworks, increase interoperability, and make development more efficient.

Bugfixes and improvements


v4.29.2: Patch release

16 May 19:47
ba70545

Fixes the package so non-Python files (like CUDA kernels) are properly included.

v4.29.1: Patch release

11 May 20:45
118e981

  • Reverts a regression in the FSDP integration.
  • Adds pip install transformers["agent"] to install all the dependencies that agents rely on.
  • Fixes the documentation about agents.

v4.29.0: Transformers Agents, SAM, RWKV, FocalNet, OpenLLaMa

10 May 21:55
15f260a

Transformers Agents

Transformers Agents is a new API that lets you use the library and Diffusers by prompting an agent (which is a large language model) in natural language. That agent then outputs code using a set of predefined tools, leveraging the appropriate (and state-of-the-art) models for the task the user wants to perform. It is fully multimodal and extensible by the community. Learn more in the docs.
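For instance, with a remote agent backed by the Inference API (the StarCoder endpoint is the one used in the documentation at the time; the task is just an example):

from transformers import HfAgent

# Use a hosted LLM as the agent; it writes and executes code that calls the predefined tools
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
agent.run("Draw me a picture of rivers and lakes.")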

SAM

SAM (Segment Anything Model) was proposed in Segment Anything by Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick.

The model can be used to predict segmentation masks of any object of interest given an input image.
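A sketch of the prompted-segmentation usage, close to what the model documentation shows (the checkpoint, image URL, and point prompt are example values):

import requests
import torch
from PIL import Image
from transformers import SamModel, SamProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = SamModel.from_pretrained("facebook/sam-vit-base").to(device)
processor = SamProcessor.from_pretrained("facebook/sam-vit-base")

url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]]  # a single 2D point prompt on the object of interest

inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model(**inputs)

# Resize the predicted masks back to the original image resolution
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()
)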

RWKV

RWKV suggests a tweak to the traditional Transformer attention to make it linear. This way, the model can be used as a recurrent network: passing inputs for timestamp 0 and timestamp 1 together is the same as passing inputs at timestamp 0, then inputs at timestamp 1 along with the state of timestamp 0 (see the example below).

This can be more efficient than a regular Transformer and can deal with sentences of any length (even if the model uses a fixed context length for training).
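The example referenced above goes along these lines (the checkpoint is the one used in the RWKV documentation):

import torch
from transformers import AutoTokenizer, RwkvModel

model = RwkvModel.from_pretrained("sgugger/rwkv-430M-pile")
tokenizer = AutoTokenizer.from_pretrained("sgugger/rwkv-430M-pile")

inputs = tokenizer("This is an example.", return_tensors="pt")

# Feed everything to the model in one pass
outputs = model(inputs["input_ids"])
output_whole = outputs.last_hidden_state

# Feed the first two tokens, then the rest along with the returned state
outputs = model(inputs["input_ids"][:, :2])
output_one = outputs.last_hidden_state
outputs = model(inputs["input_ids"][:, 2:], state=outputs.state)
output_two = outputs.last_hidden_state

# Both approaches yield (numerically close to) the same hidden states
torch.allclose(torch.cat([output_one, output_two], dim=1), output_whole, atol=1e-5)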

FocalNet

The FocalNet model was proposed in Focal Modulation Networks by Jianwei Yang, Chunyuan Li, Xiyang Dai, Lu Yuan, Jianfeng Gao. FocalNets completely replace self-attention (used in models like ViT and Swin) by a focal modulation mechanism for modeling token interactions in vision. The authors claim that FocalNets outperform self-attention based models with similar computational costs on the tasks of image classification, object detection, and segmentation.

OpenLLaMa

The Open-Llama model was proposed in Open-Llama project by community developer s-JoL.

The model is mainly based on LLaMA with some modifications, incorporating memory-efficient attention from xFormers, stable embedding from BLOOM, and shared input-output embedding from PaLM. It is pre-trained on both Chinese and English, which gives it better performance on Chinese language tasks.

Assisted Generation

Assisted generation is a new technique that lets you speed up generation with large language models by using a smaller model as an assistant. The assistant model performs the multiple forward passes, while the LLM merely validates the tokens proposed by the assistant. This can lead to speed-ups of up to 10x!

  • Generate: Add assisted generation by @gante in #22211
  • Generate: assisted generation with sample (take 2) by @gante in #22949
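In terms of API, this boils down to passing an assistant_model to generate; a minimal sketch (the Pythia checkpoints are example choices, and both models must share the same tokenizer):

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1.4b-deduped")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-1.4b-deduped")
assistant_model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-160m-deduped")

inputs = tokenizer("Alice and Bob", return_tensors="pt")
# The small assistant drafts tokens; the large model only validates them
outputs = model.generate(**inputs, assistant_model=assistant_model, max_new_tokens=20)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))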

Code on the Hub from another repo

To avoid duplicating the model code in multiple repos when using the code on the Hub feature, loading such models now saves in their config the repo in which the code lives. This way there is a single source of truth for code on the Hub models.

Breaking changes

This release has three breaking changes compared to version v4.28.0.

The first one focuses on fixing training issues for Pix2Struct. This slightly affects the results, but should result in the model training much better.

  • 🚨🚨🚨 [Pix2Struct] Attempts to fix training issues 🚨🚨🚨 by @younesbelkada in #23004

The second one aligns the ignore index of the LUKE model with the other models in the library. This breaks the convention that models should stick to their original implementation, but it was necessary in order to stay consistent with the rest of the library.

Finally, the third breaking change aims to harmonize the training procedure for most of the recent additions in transformers: it is now the user's responsibility to fill or mask the padding tokens of the labels with the correct value. This addresses an issue that had been raised for other architectures such as LUKE or Pix2Struct.

Bugfixes and improvements


v4.28.1: Patch release

14 Apr 16:57
04ab560

Fixes a regression for DETA models.

v4.28.0: LLaMa, Pix2Struct, MatCha, DePlot, MEGA, NLLB-MoE, GPTBigCode

13 Apr 18:40
9417c92

LLaMA

The LLaMA model was proposed in LLaMA: Open and Efficient Foundation Language Models. It is a collection of foundation language models ranging from 7B to 65B parameters. You can request access to the weights here, then use the conversion script to generate a checkpoint compatible with Hugging Face.
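Once converted, the checkpoint loads like any other model; a sketch assuming the converted weights live in a hypothetical ./llama-7b-hf directory:

from transformers import LlamaForCausalLM, LlamaTokenizer

model_path = "./llama-7b-hf"  # hypothetical output of the conversion script
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(model_path)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))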

Pix2Struct, MatCha, DePlot

Pix2Struct is a pretrained image-to-text model for purely visual language understanding, which can be finetuned on tasks containing visually-situated language. Pix2Struct has been fine-tuned on various tasks and datasets, ranging from image captioning and visual question answering (VQA) over different inputs (books, charts, science diagrams) to captioning UI components, and others.

Mega

MEGA proposes a new approach to self-attention with each encoder layer having a multi-headed exponential moving average in addition to a single head of standard dot-product attention, giving the attention mechanism stronger positional biases. This allows MEGA to perform competitively to Transformers on standard benchmarks including LRA while also having significantly fewer parameters. MEGA’s compute efficiency allows it to scale to very long sequences, making it an attractive option for long-document NLP tasks.

GPTBigCode

GPTBigCode is an optimized GPT-2 model with support for Multi-Query Attention.

  • Add GPTBigCode model (Optimized GPT2 with MQA from Santacoder & BigCode) by @jlamypoirier in #22575

NLLB-MoE

The mixture of experts version of the NLLB release has been added to the library.

Serializing 8bit models

You can now push 8-bit models to the Hub and/or load 8-bit models directly from it, saving memory and loading your 8-bit models faster! An example repo is available here.
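A minimal sketch of the round trip (the BLOOM checkpoint and the repo/directory names are example values; bitsandbytes and accelerate are required):

from transformers import AutoModelForCausalLM

# Load an example checkpoint in 8-bit
model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-560m", load_in_8bit=True, device_map="auto"
)

# The quantized weights can now be saved locally or pushed to the Hub directly
model.save_pretrained("./bloom-560m-8bit")
# model.push_to_hub("my-username/bloom-560m-8bit")  # hypothetical repo name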

Breaking Changes

Ordering of height and width for the BLIP image processor

Notes from the PR:

The BLIP image processor incorrectly passed in the dimensions to resize in the order (width, height). This is reordered to be correct.

In most cases, this won't have an effect as the default height and width are the same. However, this is not backwards compatible for custom configurations with different height and width settings, or for direct calls to the resize method with different height and width values.

  • 🚨🚨🚨 Fix ordering of height, width for BLIP image processor by @amyeroberts in #22466

Prefix tokens for the NLLB tokenizer

The main problem was with the prefix and suffix tokens of the NLLB tokenizer: the source language code (here eng_Latn) is now prepended to the input ids instead of appended.

Previous behaviour:

>>> from transformers import NllbTokenizer
>>> tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
>>> tokenizer("How was your day?").input_ids
[13374, 1398, 4260, 4039, 248130, 2, 256047]
>>> # 2: '</s>'
>>> # 256047 : 'eng_Latn'

New behaviour:

>>> from transformers import NllbTokenizer
>>> tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
>>> tokenizer("How was your day?").input_ids
[256047, 13374, 1398, 4260, 4039, 248130, 2]
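>>> # 256047 : 'eng_Latn' (now first)
>>> # 2: '</s>'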

In case you have pipelines that were relying on the old behavior, here is how you would enable it once again:

>>> from transformers import NllbTokenizer
>>> tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M", legacy_behaviour=True)

TensorFlow ports

The BLIP model is now available in TensorFlow.

Export TF Generate with a TF tokenizer

This PR adds the possibility to export TF generate together with a TF-native tokenizer, i.e. the full generation pipeline in a single TF graph.

  • Generate: Export TF generate with a TF tokenizer by @gante in #22310

Task guides

A new task guide has been added, focusing on depth estimation.

Bugfixes and improvements


v4.27.4: Patch release

29 Mar 17:08
4e9f6fc

This patch fixes a regression with FlauBERT and XLM models.

  • Revert "Error (also in original) model, scaling only q matrix not qk.T dot product (qk.T/sqrt(dim_per_head)) (#21627) in #22444 by @sgugger

v4.27.3: Patch release

23 Mar 19:01
5e3b19a

Enforce max_memory for device_map strategies by @sgugger in #22311

v4.27.2: Patch release

20 Mar 17:42
6828768