I am conducting PhD research on abstractive summarization of biomedical documents. In my first year, I ran into a challenge: biomedical documents are often very long, and I lack the GPU resources to fine-tune transformer models on them. Even on Google Colab, I hit GPU memory limits because of the document length. Given this constraint, and since my supervisor wants us to publish an article, what contributions can I make if I don't have the material resources to train or fine-tune deep learning models on long documents? Any advice would be greatly appreciated; I am truly at a standstill.
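
For context, here is roughly the kind of single fine-tuning step I have been attempting on Colab. The model name, sequence lengths, learning rate, and placeholder data below are illustrative assumptions, not my exact configuration:

```python
# A simplified sketch of a fine-tuning step on a long document that exhausts Colab's GPU memory.
# Model, lengths, and toy data are placeholders; any long-document encoder-decoder would do.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "allenai/led-base-16384"   # long-document model used here as an example, not necessarily my exact model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to("cuda")
model.gradient_checkpointing_enable()   # trade extra compute for lower activation memory

document = "lorem ipsum " * 3000        # placeholder standing in for a long biomedical article
summary = "short reference summary"     # placeholder reference summary

inputs = tokenizer(document, max_length=8192, truncation=True, return_tensors="pt").to("cuda")
labels = tokenizer(text_target=summary, max_length=256, truncation=True,
                   return_tensors="pt").input_ids.to("cuda")

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
outputs = model(**inputs, labels=labels)  # forward pass over the full-length input
outputs.loss.backward()                   # the backward pass at this length is where memory runs out for me
optimizer.step()
```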

Thank you in advance
