
Runtime error when running clip-inference using "open_clip:xlm-roberta-large-ViT-H-14" #313

Open
simran-khanuja opened this issue Sep 16, 2023 · 1 comment
Labels: bug (Something isn't working)

@simran-khanuja

Hi! Thanks for your amazing work!

I am trying to obtain image-text embeddings for "open_clip:xlm-roberta-large-ViT-H-14", but I get the following error. The same code works with "open_clip:ViT-H-14"; I haven't changed anything except the model name. Do you have any insight? Thanks!

 File "/home/<user>/miniconda3/envs/demo/lib/python3.10/site-packages/clip_retrieval/clip_inference/distributor.py", line 17, in __call__
    worker(
  File "/home/<user>/miniconda3/envs/demo/lib/python3.10/site-packages/clip_retrieval/clip_inference/worker.py", line 122, in worker
    runner(task)
  File "/home/<user>/miniconda3/envs/demo/lib/python3.10/site-packages/clip_retrieval/clip_inference/runner.py", line 29, in __call__
    reader = self.reader_builder(sampler)
  File "/home/<user>/miniconda3/envs/demo/lib/python3.10/site-packages/clip_retrieval/clip_inference/worker.py", line 52, in reader_builder
    _, preprocess = load_clip(
  File "/home/<user>/miniconda3/envs/demo/lib/python3.10/site-packages/clip_retrieval/load_clip.py", line 85, in load_clip
    model, preprocess = load_clip_without_warmup(clip_model, use_jit, device, clip_cache_path)
  File "/home/<user>/miniconda3/envs/demo/lib/python3.10/site-packages/clip_retrieval/load_clip.py", line 74, in load_clip_without_warmup
    model, preprocess = load_open_clip(clip_model, use_jit, device, clip_cache_path)
  File "/home/<user>/miniconda3/envs/demo/lib/python3.10/site-packages/clip_retrieval/load_clip.py", line 49, in load_open_clip
    model, _, preprocess = open_clip.create_model_and_transforms(
  File "/home/<user>/miniconda3/envs/demo/lib/python3.10/site-packages/open_clip/factory.py", line 308, in create_model_and_transforms
    model = create_model(
  File "/home/<user>/miniconda3/envs/demo/lib/python3.10/site-packages/open_clip/factory.py", line 228, in create_model
    load_checkpoint(model, checkpoint_path)
  File "/home/<user>/miniconda3/envs/demo/lib/python3.10/site-packages/open_clip/factory.py", line 104, in load_checkpoint
    incompatible_keys = model.load_state_dict(state_dict, strict=strict)
  File "/home/<user>/miniconda3/envs/demo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1671, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for CustomTextCLIP:
	Unexpected key(s) in state_dict: "text.transformer.embeddings.position_ids".
@simran-khanuja simran-khanuja changed the title Runtime error when running clip-inference "open_clip:xlm-roberta-large-ViT-H-14" Runtime error when running clip-inference using "open_clip:xlm-roberta-large-ViT-H-14" Sep 16, 2023
@fabiozappo (Contributor)

This looks related to mlfoundations/open_clip#594. One fix that worked for me is downgrading transformers to an earlier version where the model's state_dict keys still match:

pip install -U transformers==4.30.2
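If pinning transformers isn't an option, another workaround discussed in the linked open_clip issue is to drop the unexpected `position_ids` buffer from the checkpoint before calling `load_state_dict`. A minimal sketch — `strip_position_ids` is a hypothetical helper, not part of clip-retrieval, and the example keys below are placeholders standing in for a real checkpoint loaded with `torch.load(...)`:

```python
def strip_position_ids(state_dict):
    """Return a copy of the state_dict without `position_ids` buffers,
    which newer transformers versions no longer register (hypothetical helper)."""
    return {
        k: v for k, v in state_dict.items()
        if not k.endswith("position_ids")
    }

# Placeholder keys; a real checkpoint would come from torch.load(checkpoint_path)
state_dict = {
    "text.transformer.embeddings.position_ids": "buffer",
    "text.transformer.embeddings.word_embeddings.weight": "weights",
}
cleaned = strip_position_ids(state_dict)
print("text.transformer.embeddings.position_ids" in cleaned)  # → False
```

You would then pass `cleaned` to `model.load_state_dict(...)` in place of the original dict. This is the same idea open_clip later applied upstream; downgrading transformers remains the simpler route if your environment allows it.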

@rom1504 rom1504 added the bug Something isn't working label Jan 6, 2024