Fix RWTHDBIS inference device mismatch and add accelerate #302

Kleinpenny wants to merge 4 commits into sciknoworg:dev from
Conversation
Dear @Kleinpenny, thanks for your contribution. To proceed, can you update your branch with the latest version of the `dev` branch? Moreover, I can certainly understand why you have modified these files. Thank you. I am looking forward to your next commits.
Dear Hamed, thanks for your suggestions and review! I will edit this PR and follow your instructions to avoid conflicts. I'm looking into the trainer of our approach, and I will make another PR soon. Hopefully I can get back to you within this week.
Co-authored-by: HamedBabaei <26560419+HamedBabaei@users.noreply.github.com>
This can be closed because the changes have been applied in PR #306.
Summary
- Fix the RWTHDBIS inference device mismatch in the term-typing and taxonomy-discovery learners.
- Add the `accelerate` dependency required by `transformers.Trainer`.
- Run the examples with `device="cuda"` by default.
- Ignore `examples/results/` and `results/`.

Background
Running the RWTHDBIS examples on a GPU triggered a runtime error during inference: `Expected all tensors to be on the same device`, because inputs were sent to `self.device` while the model had already been moved to the GPU by the trainer.
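The fix follows the usual pattern of deriving the target device from the model's own parameters rather than from a stored attribute. Below is a minimal, self-contained sketch of that pattern; the checkpoint and variable names are illustrative, not the repository's actual code:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative small checkpoint; not the model used by the RWTHDBIS learners.
name = "prajjwal1/bert-tiny"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# transformers.Trainer moves the model to the GPU when one is available;
# simulate that placement here.
if torch.cuda.is_available():
    model = model.cuda()

encoded = tokenizer(["a sample term"], return_tensors="pt")

# Buggy pattern: moving inputs to a device attribute stored at construction
# time (e.g. self.device) can disagree with where the trainer actually put
# the model, raising the "same device" error above.

# Fixed pattern: ask the model where it lives and move the inputs there.
model_device = next(model.parameters()).device
inputs = {k: v.to(model_device) for k, v in encoded.items()}

with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)
```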
Changes
- `ontolearner/learner/term_typing/rwthdbis.py`: move inference inputs to `model_device`.
- `ontolearner/learner/taxonomy_discovery/rwthdbis.py`: same device-alignment fix.
- Add `accelerate>=0.26.0` to `requirements.txt`, `pyproject.toml`, and `setup.py` (see the version check sketched after this list).
- `examples/llm_learner_rwthdbis_term_typing.py`: set `device="cuda"`.
- `examples/llm_learner_rwthdbis_taxonomy_discovery.py`: set `device="cuda"`.
- `.gitignore`: ignore `examples/results/` and `results/`.
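As a quick way to confirm the new dependency pin is satisfied before running the trainer, the installed version can be checked programmatically. This is a convenience sketch, not part of the PR:

```python
from importlib.metadata import version

from packaging.version import Version

# transformers.Trainer refuses to run without accelerate>=0.26.0, which is
# why this PR pins it; verify the installed version meets that floor.
installed = Version(version("accelerate"))
assert installed >= Version("0.26.0"), f"accelerate {installed} is too old"
print(f"accelerate {installed} OK")
```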
Impact

Test plan
- `python examples/llm_learner_rwthdbis_term_typing.py`
- `python examples/llm_learner_rwthdbis_taxonomy_discovery.py`