
Add StableLM2 pre-tokenizer #7349

Merged
3 commits merged into ggerganov:master on May 19, 2024

Conversation

@aahouzi (Contributor) commented May 17, 2024

Type of Change

StableLM2:

  "pre_tokenizer": {
    "type": "Sequence",
    "pretokenizers": [
      {
        "type": "Split",
        "pattern": {
          "Regex": "(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}| ?[^\\s\\p{L}\\p{N}]+[\r\n]*|\\s*[\r\n]+|\\s+(?!\\S)|\\s+"
        },
        "behavior": "Removed",
        "invert": true
      },
      {
        "type": "ByteLevel",
        "add_prefix_space": false,
        "trim_offsets": true,
        "use_regex": false
      }
    ]
  },

Qwen2:

  "pre_tokenizer": {
    "type": "Sequence",
    "pretokenizers": [
      {
        "type": "Split",
        "pattern": {
          "Regex": "(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
        },
        "behavior": "Isolated",
        "invert": false
      },
      {
        "type": "ByteLevel",
        "add_prefix_space": false,
        "trim_offsets": false,
        "use_regex": false
      }
    ]
  }
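The two configs differ only in how the regex escapes newlines and in the Split behavior ("Removed" with invert=true vs. "Isolated" with invert=false), which yield the same pieces when the pattern covers the entire input. As a rough illustration of what the split regex does, here is an ASCII-only approximation runnable with Python's stdlib `re` (the real pattern uses the Unicode classes \p{L}/\p{N} and a case-insensitive contraction group, which stdlib `re` cannot express, so letters/digits are narrowed to [A-Za-z]/[0-9] and contractions matched lowercase-only — a sketch, not the actual tokenizer code):

```python
import re

# ASCII-only approximation of the StableLM2/Qwen2 pre-tokenizer split regex.
# Alternatives are tried left to right, mirroring the original pattern's order.
PRETOK = re.compile(
    r"'s|'t|'re|'ve|'m|'ll|'d"       # common contractions
    r"|[^\r\nA-Za-z0-9]?[A-Za-z]+"   # optional leading symbol/space + letters
    r"|[0-9]"                        # digits are split one at a time
    r"| ?[^\sA-Za-z0-9]+[\r\n]*"     # punctuation runs, optional leading space
    r"|\s*[\r\n]+"                   # newline runs
    r"|\s+(?!\S)"                    # trailing whitespace
    r"|\s+"                          # any remaining whitespace
)

def pre_tokenize(text: str) -> list[str]:
    """Return the pre-token pieces; concatenating them recovers the input."""
    return PRETOK.findall(text)

print(pre_tokenize("Hello world, it's 2024!"))
# -> ['Hello', ' world', ',', ' it', "'s", ' ', '2', '0', '2', '4', '!']
```

Because the pattern matches every character of the input, keeping the matches ("Isolated", invert=false) and removing the non-matches ("Removed", invert=true) produce identical splits, which is why the two configs behave the same.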

Description

  • Adds pre-tokenizer for StableLM2 models.

How has this PR been tested?

  • Example from StableLM2-12B:
............................................................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =   100.00 MiB
llama_new_context_with_model: KV self size  =  100.00 MiB, K (f16):   50.00 MiB, V (f16):   50.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.38 MiB
llama_new_context_with_model:        CPU compute buffer size =   206.00 MiB
llama_new_context_with_model: graph nodes  = 1408
llama_new_context_with_model: graph splits = 1

system_info: n_threads = 32 / 64 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
sampling:
        repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
        top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
        mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order:
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature
generate: n_ctx = 512, n_batch = 2048, n_predict = 512, n_keep = 0


<|endoftext|>Who is Elon Musk ? How does he work ? What are his projects? In this article, of course we will see who is the new CEO of Tesla and what are his projects.
What is the secret of Elon Musk?
Elon Musk is the CEO of Tesla, but what is the secret of his success? What is his motivation?
This is not the first time this question has been asked, but we will try to give you a simple and concise answer.
How much is Elon Musk worth?
Elon Musk is currently the richest man in the world. His fortune is estimated at $244.4 billion dollars. He is also the CEO of Tesla and SpaceX. He has a degree in Physics and Computer Science from the University of Pennsylvania and a Master’s degree in Physics from Stanford University.
How does Elon Musk work?
Elon Musk is known as the Elon Musk of the 21st century. He is a billionaire, an entrepreneur, an investor and a futurist. He was born in South Africa and has lived in the United States since he was a child. He is currently the CEO of Tesla and SpaceX. He has been involved in many different projects throughout his life, including PayPal, Tesla, SpaceX, SolarCity, Neuralink and Hyperloop One.
How much money does Elon Musk make per month?
Elon Musk is a billionaire entrepreneur, inventor, investor, engineer, and philanthropist. He is the founder, CEO, and chief product architect of SpaceX; CEO and chief engineer of Tesla; co-founder of PayPal; co-founder of the online payment company X.com; co-founder of OpenAI, an artificial intelligence research company; and co-founder of Neuralink, a neurotechnology startup. He is the chairman of SolarCity; co-founder of The Boring Company; co-founder of the Musk Foundation; and the owner of the electric car company Tesla.
What does Elon Musk have to say?
Elon Musk is the CEO of Tesla and SpaceX and is also an engineer and entrepreneur. He is known for his innovative work in electric cars, space travel, and artificial intelligence. He was born in South Africa in 1971 and moved to Canada in 1989. He attended Queen’s University in Kingston, Ontario. He later moved to the United States to study at the University of Pennsylvania.
How much does Elon Musk make?
Elon Musk is the CEO of SpaceX and Tesla. He has a net worth of $22 billion, making him one of the richest people in the world. He was born in Pret
llama_print_timings:        load time =    2903.05 ms
llama_print_timings:      sample time =      40.35 ms /   512 runs   (    0.08 ms per token, 12690.23 tokens per second)
llama_print_timings: prompt eval time =     223.17 ms /     6 tokens (   37.20 ms per token,    26.88 tokens per second)
llama_print_timings:        eval time =   54297.05 ms /   511 runs   (  106.26 ms per token,     9.41 tokens per second)
llama_print_timings:       total time =   54838.64 ms /   517 tokens
Log end
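The throughput figures in the `llama_print_timings` lines are simply run counts divided by elapsed time; a quick sanity check of the eval rate (plain arithmetic, not llama.cpp code):

```python
# Reproduce a llama.cpp timing line: tokens/s = runs / (elapsed_ms / 1000).
def tokens_per_second(elapsed_ms: float, runs: int) -> float:
    return runs / (elapsed_ms / 1000.0)

# eval time from the log above: 54297.05 ms over 511 runs
print(round(tokens_per_second(54297.05, 511), 2))  # ~9.41 tok/s, matching the log
```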

github-actions bot commented May 17, 2024

📈 llama.cpp server for bench-server-baseline on Standard_NC4as_T4_v3 for phi-2-q4_0: 534 iterations 🚀

Details (performance-related PRs only):
  • Concurrent users: 8, duration: 10m
  • HTTP request: avg=8772.08ms p(95)=20523.81ms fails=, finish reason: stop=482 truncated=52
  • Prompt processing (pp): avg=99.57tk/s p(95)=434.44tk/s
  • Token generation (tg): avg=45.49tk/s p(95)=45.25tk/s
  • ggml-org/models/phi-2/ggml-model-q4_0.gguf parallel=8 ctx-size=16384 ngl=33 batch-size=2048 ubatch-size=256 pp=1024 pp+tg=2048 branch=add_stablelm-pre_tokenizer commit=ab0910b15fd38c5ff05de848ac695259b3c1ca5d

[Four time-series benchmark charts — prompt_tokens_seconds, predicted_tokens_seconds, kv_cache_usage_ratio, requests_processing — titled "llama.cpp bench-server-baseline on Standard_NC4as_T4_v3, duration=10m, 534 iterations"; raw per-sample chart data omitted.]

@mofosyne added the labels model (Model specific) and review complexity : low (Trivial changes to code that most beginner devs, or those who want a break, can tackle, e.g. a UI fix) on May 18, 2024
@ggerganov (Owner) left a comment

Fix trailing spaces - see the CI

@github-actions github-actions bot added the python python script changes label May 18, 2024
@aahouzi (Contributor, Author) commented May 18, 2024

@mofosyne it's ready to merge.

@mofosyne mofosyne merged commit 6aade19 into ggerganov:master May 19, 2024
71 checks passed
@aahouzi aahouzi deleted the add_stablelm-pre_tokenizer branch May 19, 2024 13:24
Labels
  • model — Model specific
  • python — python script changes
  • review complexity : low — Trivial changes to code that most beginner devs (or those who want a break) can tackle, e.g. UI fix
Projects
None yet
Development

Successfully merging this pull request may close these issues.

None yet

3 participants