
Browsing by Author "Dagim Melkie"

Now showing 1 - 1 of 1
  • Item
    Advancing Amharic Text Summarization with a Tailored Parameter-Efficient Fine-Tuning Technique
    (Addis Ababa University, 2025-08) Dagim Melkie; Fantahun Bogale (PhD)
    While recent progress in Large Language Models (LLMs) has revolutionized the field of Natural Language Processing (NLP), applying these models to low-resource languages such as Amharic presents considerable difficulties. Key obstacles include the scarcity of available data and the intensive computational cost associated with conventional fine-tuning methods. To overcome these issues, this thesis introduces a specialized parameter-efficient fine-tuning (PEFT) framework developed specifically for Amharic text summarization. This framework combines a dynamic low-rank adaptation component (DyLoRA-Amharic) with an adaptive activation method (AdaptAmharic), which work together to improve the model’s flexibility and optimize its resource allocation during training. The methodology involves injecting these custom modules into the mT5-small encoder–decoder architecture, allowing dynamic adjustment of DyLoRA-Amharic ranks and AdaptAmharic activation levels based on gradient signals. A joint optimization objective incorporating regularization terms for both rank and activation was employed to manage model complexity and ensure training stability. Comparative experiments were conducted against standard PEFT LoRA and Houlsby Adapter baselines on a curated Amharic summarization dataset. Experimental results demonstrate that the proposed DyLoRA-Amharic and AdaptAmharic framework significantly outperforms the baselines across ROUGE, BLEU, and BERTScore metrics, achieving the lowest evaluation loss. Specifically, it improved ROUGE-L by 30.5% and BLEU by 52.4% over the strongest baseline. This superior performance validates the efficacy of a densely injected, dynamic, and regularized architecture, challenging the conventional emphasis on maximal sparsity in PEFT. While the framework uses a higher proportion of trainable parameters (13.42%) than the baselines, this trade-off is justified by the substantial performance gains. This research contributes to advancing PEFT methodologies for low-resource NLP, providing a robust and adaptable solution for Amharic text summarization. The findings lay a foundation for developing more efficient and effective LLMs for diverse and linguistically underrepresented communities.
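
The abstract describes injecting dynamic low-rank adapters into a frozen mT5-small encoder–decoder and training them with a joint objective that regularizes both adapter rank and activations. The thesis code is not reproduced here; the sketch below is only a minimal illustration of that general idea in PyTorch, with hypothetical names (DynamicLoRALinear, max_rank, rank_penalty, act_penalty) and without the thesis-specific DyLoRA-Amharic/AdaptAmharic mechanics.

```python
# Minimal sketch (not the thesis implementation): a LoRA-style adapter whose
# effective rank can be truncated per step, wrapped around a frozen linear layer.
from typing import Optional

import torch
import torch.nn as nn


class DynamicLoRALinear(nn.Module):
    """Frozen base linear layer plus a low-rank update B @ A whose rank can vary."""

    def __init__(self, base: nn.Linear, max_rank: int = 16, alpha: float = 32.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # only the adapter factors are trained
        self.max_rank = max_rank
        self.scaling = alpha / max_rank
        # A is small-random, B is zero, so the initial update is zero (standard LoRA init).
        self.lora_a = nn.Parameter(torch.randn(max_rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, max_rank))

    def forward(self, x: torch.Tensor, rank: Optional[int] = None) -> torch.Tensor:
        r = self.max_rank if rank is None else min(rank, self.max_rank)
        # Truncate both factors to the first r rank-one components (dynamic-rank idea).
        update = (x @ self.lora_a[:r].T) @ self.lora_b[:, :r].T
        return self.base(x) + self.scaling * update


def joint_loss(task_loss: torch.Tensor, module: DynamicLoRALinear,
               activations: Optional[torch.Tensor] = None,
               rank_penalty: float = 1e-4, act_penalty: float = 1e-4) -> torch.Tensor:
    """Task loss plus penalties on adapter magnitude and activation size,
    echoing the abstract's joint objective with rank and activation terms."""
    reg = rank_penalty * (module.lora_a.norm() + module.lora_b.norm())
    if activations is not None:
        reg = reg + act_penalty * activations.abs().mean()
    return task_loss + reg
```

A wrapped layer could then replace, for example, a query projection inside mT5-small (module paths follow the Hugging Face T5/mT5 implementation); only the adapter parameters remain trainable:

```python
from transformers import MT5ForConditionalGeneration

model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")
attn = model.encoder.block[0].layer[0].SelfAttention
attn.q = DynamicLoRALinear(attn.q, max_rank=16)
```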
