Aligning pretrained language models with instruction data generated by themselves.
Open-source Self-Instruction Tuning Code LLM
Summaries of papers related to the alignment problem in NLP
Instruction Tuning with GPT-4
A curated list of awesome instruction tuning datasets, models, papers and repositories.
🌱 DreamerGPT (梦想家): instruction fine-tuning of Chinese large language models
This repository contains the code to train flan t5 with alpaca instructions and low rank adaptation.
Awesome Multimodal Assistant is a curated list of multimodal chatbots/conversational assistants that utilize various modes of interaction, such as text, speech, images, and videos, to provide a seamless and versatile user experience.
Domain generalization on the Aspect-Based Sentiment Analysis (ABSA) task using a noisy-student architecture.
CodeUp: A Multilingual Code Generation Llama2 Model with Parameter-Efficient Instruction-Tuning on a Single RTX 3090
Vision Large Language Models trained on M3IT instruction tuning dataset
Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback
Crosslingual Generalization through Multitask Finetuning
The ParroT framework enhances and regulates translation abilities during chat, building on open-source LLMs (e.g., LLaMA-7b, Bloomz-7b1-mt) and human-written translation and evaluation data.
An open-source conversational language model developed by the Knowledge Works Research Laboratory at Fudan University.
Code base for the paper "Instruction Tuned Models are Quick Learners".
Research Trends in LLM-guided Multimodal Learning.
CIKM2023 Best Demo Paper Award. HugNLP is a unified and comprehensive NLP library built on HuggingFace Transformers. Start hugging for NLP now! 😊
DISC-FinLLM: a Chinese financial large language model (LLM) designed to provide users with professional, intelligent, and comprehensive financial consulting services in financial scenarios.
Kosy🍵llama: a simple way to apply the Random Noisy Embeddings fine-tuning method to Korean LLMs.