Ovis is a novel Multimodal Large Language Model (MLLM) architecture designed to structurally align visual and textual embeddings. For a comprehensive introduction, please refer to the Ovis paper.
Ovis has been tested with Python 3.10, Torch 2.1.0, Transformers 4.41.1, and DeepSpeed 0.14.0. For a comprehensive list of package dependencies, please consult the `requirements.txt` file. Before training or inference, install Ovis as follows:
```bash
git clone git@github.com:AIDC-AI/Ovis.git
conda create -n ovis python=3.10 -y
conda activate ovis
cd Ovis
pip install -r requirements.txt
pip install -e .
```
Ovis can be instantiated with popular LLMs (e.g., Qwen, Llama3). We provide the following pretrained Ovis MLLMs:
| Ovis MLLMs | ViT | LLM | Download | MMStar | MMB-EN | MMB-CN | MMMU-Val | MMMU-Test | MathVista-Mini | MME | HallusionBench | RealWorldQA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Ovis-Clip-Qwen1.5-7B | Clip | Qwen1.5-7B-Chat | Huggingface | 44.3 | 75.1 | 70.2 | 39.7 | 37.7 | 41.4 | 1882 | 56.4 | 60.0 |
| Ovis-Clip-Llama3-8B | Clip | Llama3-8B-Instruct | Huggingface | 49.5 | 77.4 | 72.8 | 44.7 | 39.0 | 40.8 | 2009 | 61.1 | 57.9 |
| Ovis-Clip-Qwen1.5-14B | Clip | Qwen1.5-14B-Chat | Huggingface | 48.5 | 78.4 | 76.6 | 46.7 | 40.7 | 43.4 | 1961 | 57.6 | 62.7 |
We evaluate Ovis on a range of multimodal benchmarks using VLMEvalKit. The results show that Ovis outperforms open-source MLLMs in the same parameter tier across these benchmarks, and that Ovis-Clip-Qwen1.5-14B also surpasses the high-resource proprietary model Qwen-VL-Plus overall.
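If you would rather load a downloaded checkpoint directly than go through the inference wrapper described below, a sketch along these lines should work, assuming the Hugging Face checkpoints ship a custom model class loadable via `trust_remote_code` (check each model card; `MODEL_PATH` is a placeholder for a local or Hugging Face path):

```python
import torch
from transformers import AutoModelForCausalLM

# Assumption: the checkpoint registers its Ovis model class for
# trust_remote_code loading; MODEL_PATH is a placeholder.
model = AutoModelForCausalLM.from_pretrained(
    'MODEL_PATH',
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).cuda().eval()
```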
All training datasets are summarized in the JSON file located at `ovis/train/dataset_info.json`. Each dataset entry includes the following attributes:

- `meta_file`: contains a collection of samples, where each sample consists of text and (optionally) an image. The text is embedded directly within the `meta_file`, while an image is referenced by its filename, which points to an image file located in `image_dir`.
- `image_dir`: the directory where the images are stored.
- `data_format`: specifies the format of the data, which determines the dataset class used to process the dataset.
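For concreteness, a single entry in `ovis/train/dataset_info.json` might look like the following. This is a hypothetical illustration built from the three attributes above; the dataset name, paths, and `data_format` value are placeholders, not the actual schema:

```json
{
  "llava-pretrain-558k": {
    "meta_file": "mllm_datasets/meta_files/llava-pretrain-558k.json",
    "image_dir": "mllm_datasets/images/llava_pretrain",
    "data_format": "caption"
  }
}
```

The paths mirror the example folder structure shown below.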
We provide the `meta_file` for each training dataset at Huggingface. The images can be downloaded from their respective sources listed below.
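As an illustration of what a sample inside a `meta_file` can look like, here is a hypothetical conversation-style entry in the format popularized by LLaVA; the actual fields depend on each dataset's `data_format`:

```json
{
  "id": "000000001",
  "image": "00000/000000001.jpg",
  "conversations": [
    {"from": "human", "value": "<image>\nDescribe the image briefly."},
    {"from": "gpt", "value": "A dog leaps to catch a frisbee in a park."}
  ]
}
```

The `image` value is a filename resolved against the dataset's `image_dir`, as described above.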
Below is an example of the folder structure consistent with `ovis/train/dataset_info.json`. You can alter the folder structure as needed and modify `ovis/train/dataset_info.json` accordingly.
```
|-- mllm_datasets
    |-- meta_files
        |-- coyo-10m.parquet
        |-- llava-pretrain-558k.json
        |-- sharegpt4v-pretrain-82k.json
        |-- allava-caption-laion-4v-485k.json
        ...
    |-- images
        |-- coyo_10m
        |-- llava_pretrain
        |-- sharegpt4v
        |-- allava_laion
        ...
```
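Because the layout is configurable, a quick sanity check can catch path mismatches before training. The snippet below is a sketch that assumes `ovis/train/dataset_info.json` maps dataset names to entries carrying the `meta_file` and `image_dir` attributes described above; adjust it to the actual schema:

```python
import json
import os

with open('ovis/train/dataset_info.json') as f:
    dataset_info = json.load(f)

# Assumption: a mapping from dataset name to an entry with the
# `meta_file` and `image_dir` attributes described above.
for name, entry in dataset_info.items():
    if not os.path.isfile(entry['meta_file']):
        print(f"[{name}] missing meta_file: {entry['meta_file']}")
    if 'image_dir' in entry and not os.path.isdir(entry['image_dir']):
        print(f"[{name}] missing image_dir: {entry['image_dir']}")
```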
Ovis is trained in three stages, with each stage's training scripts located in the `scripts` directory. Before starting the training, make sure the `ROOT` variable in the scripts is set properly. Below are the commands to train Ovis-Clip-Qwen1.5-7B:
```bash
bash scripts/v1/Ovis-Clip-Qwen1.5-7B-S1.sh
bash scripts/v1/Ovis-Clip-Qwen1.5-7B-S2.sh
bash scripts/v1/Ovis-Clip-Qwen1.5-7B-S3.sh
```
We provide an inference wrapper in `ovis/serve/runner.py`, which can be used as follows:
```python
from PIL import Image

from ovis.serve.runner import RunnerArguments, OvisRunner

# Input image and prompt (placeholders).
image = Image.open('IMAGE_PATH')
text = 'PROMPT'

# Build the runner from a pretrained Ovis checkpoint.
runner_args = RunnerArguments(model_path='MODEL_PATH')
runner = OvisRunner(runner_args)

# Generate a response conditioned on the image and prompt.
generation = runner.run(image, text)
```
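Since model loading dominates startup time, the runner is worth reusing across inputs. A short sketch, assuming `OvisRunner` keeps no per-request state between `run` calls (the directory and prompt are placeholders):

```python
from pathlib import Path

from PIL import Image

from ovis.serve.runner import RunnerArguments, OvisRunner

# Load the model once, then reuse the runner for every image.
runner = OvisRunner(RunnerArguments(model_path='MODEL_PATH'))

for image_path in sorted(Path('IMAGE_DIR').glob('*.jpg')):
    image = Image.open(image_path)
    generation = runner.run(image, 'Describe this image.')
    print(image_path.name, generation)
```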
Based on Gradio, Ovis can also be accessed via a web user interface:

```bash
python ovis/serve/server.py --model_path MODEL_PATH --port PORT
```
If you find Ovis useful, please cite the paper:

```bibtex
@article{lu2024ovis,
  title={Ovis: Structural Embedding Alignment for Multimodal Large Language Model},
  author={Shiyin Lu and Yang Li and Qing-Guo Chen and Zhao Xu and Weihua Luo and Kaifu Zhang and Han-Jia Ye},
  year={2024},
  journal={arXiv:2405.20797}
}
```
This work is a collaborative effort by the MarcoVL team. We would also like to provide links to the following MLLM papers from our team:
- Parrot: Multilingual Visual Instruction Tuning
- Wings: Learning Multimodal LLMs without Text-only Forgetting
The project is licensed under the Apache 2.0 License and is restricted to uses that comply with the license agreements of Qwen, Llama3, and Clip.