- I'm currently an M.Phil. candidate at Peking University.
- Before that, I received a B.E. (Honours) from HUST.
- ❤️‍🔥 Now, I am interested in multi-modal learning, especially Multimodal Large Language Models (MLLMs).
- 🔥 In summer 2023, I took part in the OSPP (Open Source Promotion Plan) summer camp, where I had the honor of contributing a prompt-based classifier to MMPretrain.
- 🔥 2023.10: I implemented MiniGPT4Qwen, a toy model that aligns MiniGPT-4 with the Qwen-Chat LLM. It uses only 18.8k high-quality, bilingual instruction-tuning samples (selected from the MiniGPT-4 and LLaVA data) and fine-tunes only the projection layer (3M trainable parameters), yet the model supports both Chinese and English! (Repo: MiniGPT4Qwen; see the projection-only fine-tuning sketch after this list.)
- 🔥 2024.2: I extended MiniGPT4Qwen to MPP-Qwen14B (Multimodal Pipeline Parallel), scaling up both the LLM (to Qwen-14B-Chat) and the pre-training data (using the LLaVA pre-training data). I also unfreeze the whole LLM during the SFT stage. All training runs on RTX 3090/4090 GPUs: to prevent poverty (24GB of VRAM) from limiting imagination, I implemented the MLLM on top of DeepSpeed pipeline parallelism (a sketch of that idea also follows this list). Pre-training finishes in 22 hours on 2x RTX 4090s; SFT needs 6x RTX 4090s (because the whole LLM is unfrozen) but, thanks to the small amount of data, takes only several hours. (Repo: MPP-Qwen14B)
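
The MiniGPT4Qwen recipe above boils down to freezing the vision encoder and the LLM and training only a small projection that maps vision features into the LLM's embedding space. Below is a minimal sketch of that idea; the `vision_encoder`, `llm`, and dimension arguments are placeholders, and the HF-style `inputs_embeds` call is an assumption, not the actual MiniGPT4Qwen code.

```python
import torch
import torch.nn as nn

class ToyMLLM(nn.Module):
    """Sketch: frozen vision encoder + frozen LLM + trainable projection."""

    def __init__(self, vision_encoder: nn.Module, llm: nn.Module,
                 vision_dim: int, llm_dim: int):
        super().__init__()
        self.vision_encoder = vision_encoder
        self.llm = llm
        # The only trainable piece: maps vision features into the LLM's
        # embedding space (the "3M trainable parameters" mentioned above).
        self.projection = nn.Linear(vision_dim, llm_dim)

        # Freeze everything except the projection layer.
        for p in self.vision_encoder.parameters():
            p.requires_grad = False
        for p in self.llm.parameters():
            p.requires_grad = False

    def forward(self, images: torch.Tensor, text_embeds: torch.Tensor):
        with torch.no_grad():
            vision_feats = self.vision_encoder(images)    # (B, N, vision_dim)
        vision_tokens = self.projection(vision_feats)     # (B, N, llm_dim)
        # Prepend projected image tokens to the text embeddings; assumes the
        # LLM accepts an HF-style `inputs_embeds` argument.
        inputs = torch.cat([vision_tokens, text_embeds], dim=1)
        return self.llm(inputs_embeds=inputs)

# Only the projection parameters reach the optimizer:
# model = ToyMLLM(vision_encoder, llm, vision_dim=1408, llm_dim=4096)
# optim = torch.optim.AdamW(
#     (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```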
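For MPP-Qwen14B, the trick that fits a 14B model onto 24GB cards is pipeline parallelism: each GPU builds and stores only its own slice of the layer stack. Here is a toy sketch using DeepSpeed's `PipelineModule`/`LayerSpec` API; the MLP layers, dimensions, and config values are placeholders, not the real Qwen-14B stack.

```python
import deepspeed
import torch
import torch.nn as nn
from deepspeed.pipe import PipelineModule, LayerSpec

ds_config = {
    "train_batch_size": 16,
    "train_micro_batch_size_per_gpu": 2,  # implies gradient accumulation
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-4}},
}

# LayerSpec delays construction, so each pipeline stage materializes only
# its own slice of the model -- the key to staying under 24GB per GPU.
layers = [LayerSpec(nn.Linear, 4096, 4096) for _ in range(32)]
model = PipelineModule(
    layers=layers,
    num_stages=2,          # e.g. split the stack across 2 GPUs
    loss_fn=nn.MSELoss(),
)

engine, _, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=[p for p in model.parameters() if p.requires_grad],
    config=ds_config,
)

def data_iter():
    # Yields (input, label) pairs; an identity target keeps the toy runnable.
    while True:
        x = torch.randn(2, 4096)
        yield x, x

# DeepSpeed schedules the micro-batches across pipeline stages.
# Launch with: deepspeed --num_gpus 2 this_file.py
loss = engine.train_batch(data_iter())
```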
Pinned
- open-mmlab/mmpretrain: OpenMMLab Pre-training Toolbox and Benchmark
- MiniGPT4Qwen: Personal project: MPP-Qwen14B (Multimodal Pipeline Parallel, Qwen-14B). Don't let poverty limit your imagination! Train your own 14B LLaVA-like MLLM on RTX 3090/4090 24GB GPUs.