
Questions about the paper #16

Open
ResonWang opened this issue Jul 14, 2023 · 3 comments


@ResonWang

Dear authors,
I have some questions about the paper content:
(1) What are MedVicuna and RadVicuna in Table 1? I cannot find them in the paper or on the Internet.
(2) According to Figure 1, it seems only the Linear Transformation Layer is trained in the whole framework, so why do you state in the contributions that "The LLM (Vicuna) is fine-tuned on medical data"?
(3) In your work, is only the Linear Transformation Layer trained, while the LLM and MedCLIP are both frozen?

@OmkarThawakar (Collaborator)

Dear @ResonWang

Thanks for your interest in our work.

MedVicuna and RadVicuna are versions of Vicuna fine-tuned on the medical and radiology conversation data provided.
We fine-tuned the LLM (Vicuna) on the medical and radiology conversation data separately, prior to training the linear layer.
We kept the LLM and the image encoder frozen while training the linear layer.
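In PyTorch terms, the setup described above amounts to freezing the backbone modules and leaving only the projection layer trainable. A minimal sketch, assuming a PyTorch-style training script; `image_encoder`, `llm`, and all dimensions are small placeholders for the frozen MedCLIP encoder and the fine-tuned Vicuna, not the authors' actual code:

```python
import torch.nn as nn

# Tiny stand-ins for the real modules (hypothetical names/dimensions):
image_encoder = nn.Linear(512, 768)   # placeholder for the MedCLIP image encoder
llm = nn.Linear(4096, 4096)           # placeholder for the fine-tuned Vicuna LLM
projection = nn.Linear(768, 4096)     # the only module that receives gradients

# Freeze everything except the linear projection layer.
for module in (image_encoder, llm):
    for p in module.parameters():
        p.requires_grad = False

# Sanity checks: backbones frozen, projection trainable.
frozen = all(not p.requires_grad
             for p in list(image_encoder.parameters()) + list(llm.parameters()))
trainable = all(p.requires_grad for p in projection.parameters())
```

With this arrangement, backpropagation still flows through the frozen modules, but only the projection layer's weights are updated.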

@awaisahmednuces

Hi dear authors,

Did you train the LLM or the image encoder in training stage-1 or stage-2?

@OmkarThawakar (Collaborator) commented Aug 8, 2023

Dear @awaisahmednuces,
We train the LLM separately on medical and radiology conversation data.
In our method, in both stage-1 and stage-2 training, we train the projection layer between the image encoder and the LLM on MIMIC and OpenI data.
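Since only the projection layer is trained in both stages, the optimizer would receive just that layer's parameters. A hedged sketch, assuming PyTorch; the dimensions and learning rate are illustrative, not taken from the authors' code:

```python
import torch
import torch.nn as nn

# Only the projection layer's parameters are passed to the optimizer in
# stage-1 and stage-2; the frozen encoder and LLM contribute nothing here.
# 768 -> 4096 is a hypothetical image-feature -> LLM-embedding mapping.
projection = nn.Linear(768, 4096)
optimizer = torch.optim.AdamW(projection.parameters(), lr=1e-4)

# The trainable parameter count is just the projection's weight + bias.
n_trainable = sum(p.numel() for p in projection.parameters())
```

This keeps the trainable footprint small compared with the full LLM, which is the point of freezing the backbones during alignment training.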
