add LLM training scripts #49

Open · wants to merge 1 commit into main
165 changes: 165 additions & 0 deletions tasks/generative-ai/text-to-text/training/.gitignore
@@ -0,0 +1,165 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/

tmp
*.tar.gz
*.bin
*.pt
37 changes: 37 additions & 0 deletions tasks/generative-ai/text-to-text/training/README.md
@@ -0,0 +1,37 @@
# Distributed Training of Japanese LLMs on SageMaker

This sample code uses the [SageMaker Model Parallel Library](https://docs.aws.amazon.com/sagemaker/latest/dg/model-parallel.html) to run pretraining, fine-tuning, and instruction tuning of Japanese large language models on SageMaker.

## Notebooks

- smp-train-jp-gpt-neox-sharded-data-parallel.ipynb
  - Pretraining of Japanese GPT-NeoX
- smp-finetune-jp-gpt-neox-sharded-data-parallel.ipynb
  - Fine-tuning of Japanese GPT-NeoX (Rinna 3.6B)
- smp-instruct-jp-gpt-neox-sharded-data-parallel.ipynb
  - Instruction tuning of Japanese GPT-NeoX (Rinna 3.6B)
- smp-train-gpt-neox-sharded-data-parallel.ipynb
  - Pretraining of GPT-NeoX (a lightly modified version of the original notebook)
- convert-hf.ipynb
  - Converts a model saved by the distributed training scripts into Hugging Face format
- data-preprocess.ipynb
  - Sample code that preprocesses a dataset for pretraining and stores it on FSx for Lustre
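The pretraining preprocessing step concatenates tokenized documents and cuts the stream into fixed-length blocks, which is why every example in a pretraining batch has the same token length. A minimal sketch of that chunking idea (a hypothetical helper for illustration, not the notebook's exact code):

```python
def chunk_token_stream(token_lists, block_size):
    """Concatenate tokenized documents and cut them into equal-sized blocks.

    token_lists: list of token-id lists, one per document (illustrative input).
    Any trailing tokens that do not fill a complete block are dropped.
    """
    # Flatten all documents into one continuous token stream
    stream = [tok for doc in token_lists for tok in doc]
    n_blocks = len(stream) // block_size
    return [stream[i * block_size:(i + 1) * block_size] for i in range(n_blocks)]
```

Because every block has exactly `block_size` tokens, no padding is needed when these examples are batched for pretraining.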

## Changes from the Original Sample Code

These notebooks are based on the [GPT-NeoX pretraining sample for SageMaker distributed training](https://github.com/aws/amazon-sagemaker-examples/tree/main/training/distributed_training/pytorch/model_parallel/gpt-neox), with Japanese-language support, fine-tuning, and instruction tuning added.

The main changes are:

- Japanese-language support
- Added fine-tuning and instruction-tuning notebooks
- Implemented a padding collator
  - During pretraining, the text is chunked, so token lengths within a batch already match and padding is not an issue. During instruction tuning, however, token length differs per instruction, so padding is required. Based on the [Transformers collator](https://github.com/huggingface/transformers/blob/main/src/transformers/data/data_collator.py#L402) implementation, we added left padding to each batch; pretraining, where token lengths already match, is unaffected.
- Various small changes to make experimentation easier
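The left-padding idea described above can be sketched as follows. This is a simplified, dependency-free illustration, not the repository's actual collator (which follows the Transformers implementation and returns PyTorch tensors):

```python
class LeftPaddingCollator:
    """Pad every example in a batch on the left to the batch's max length.

    Illustrative sketch: a real collator would return PyTorch tensors and
    also pad labels; here we return plain lists to keep it self-contained.
    """

    def __init__(self, pad_token_id):
        self.pad_token_id = pad_token_id

    def __call__(self, features):
        # features: list of dicts, each with a variable-length "input_ids" list
        max_len = max(len(f["input_ids"]) for f in features)
        input_ids, attention_mask = [], []
        for f in features:
            ids = list(f["input_ids"])
            pad_len = max_len - len(ids)
            # Left padding: pad tokens go *before* the real tokens, so the
            # ends of all sequences (where generation continues) are aligned.
            input_ids.append([self.pad_token_id] * pad_len + ids)
            attention_mask.append([0] * pad_len + [1] * len(ids))
        return {"input_ids": input_ids, "attention_mask": attention_mask}
```

The attention mask zeroes out the pad positions so they are ignored by the model; pretraining batches, whose sequences are already equal length, pass through with no padding added.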

## Recommended Resources

- [Best practices for training large language models on Amazon SageMaker](https://aws.amazon.com/jp/blogs/news/training-large-language-models-on-amazon-sagemaker-best-practices/)
- [SageMaker distributed model parallelism best practices](https://docs.aws.amazon.com/ja_jp/sagemaker/latest/dg/model-parallel-best-practices.html)
- [AWS blog on distributed training of GPT-J](https://aws.amazon.com/blogs/machine-learning/fine-tune-gpt-j-using-an-amazon-sagemaker-hugging-face-estimator-and-the-model-parallel-library/)
- [Amazon SageMaker distributed training documentation](https://docs.aws.amazon.com/ja_jp/sagemaker/latest/dg/distributed-training.html)