School of Information Technology and Engineering
Permanent URI for this collection
Browse
Recent Submissions
Item A Multimodal Security Information and Event Management Solution Empowered by Deep Learning and Alert Fusion (Addis Ababa University, 2024-11) Behailu Adugna; Sileshi Demisie (PhD)

The cybersecurity threat landscape is marked by a growing number of increasingly complex and sophisticated attacks affecting organizations across various sectors. In response, solutions like SIEM systems are essential for providing centralized threat detection, real-time analysis, and compliance support, making them integral to modern cybersecurity strategies. One reason is that SIEM solutions collect and aggregate log data from across an organization's IT infrastructure, providing a single pane of glass for monitoring security events. This centralized approach is essential for identifying threats that span multiple systems and environments, detecting patterns indicative of attacks such as privilege escalation and polymorphic malware, and proactively flagging signs of unusual data access or exfiltration before significant damage occurs. Furthermore, SIEM solutions support compliance by maintaining detailed audit logs and providing preconfigured reporting tools. However, SIEM systems usually encounter significant challenges in effectively identifying and responding to sophisticated cyberattacks. Because they rely heavily on predefined rules (even complex correlation rules) and signatures, they struggle to adapt to novel attack techniques that do not match predefined patterns. They often lack sophisticated analytics capabilities such as deep learning and behavioral analysis, which limits their effectiveness at detecting advanced threats. Furthermore, they frequently produce an overwhelming volume of alerts, many of which are irrelevant or false positives. This leads to alert fatigue, causing cybersecurity analysts to become desensitized to alerts and increasing the risk of overlooking critical incidents.
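The alert-fatigue problem described above is often tackled by fusing related alerts before they reach an analyst. A toy sketch of such time-window correlation (all field names and thresholds are hypothetical, not taken from the thesis):

```python
from collections import defaultdict

def fuse_alerts(alerts, window=60):
    """Group alerts by (host, rule) and merge those within `window` seconds.

    Each alert is a dict with 'host', 'rule', 'ts' (epoch seconds), 'score'.
    Merged alerts keep the maximum score and a count of collapsed alarms.
    """
    groups = defaultdict(list)
    for a in sorted(alerts, key=lambda a: a["ts"]):
        bucket = groups[(a["host"], a["rule"])]
        # Merge into the most recent group if it is close enough in time.
        if bucket and a["ts"] - bucket[-1]["ts"] <= window:
            bucket[-1]["count"] += 1
            bucket[-1]["score"] = max(bucket[-1]["score"], a["score"])
            bucket[-1]["ts"] = a["ts"]
        else:
            bucket.append({**a, "count": 1})
    return [g for bs in groups.values() for g in bs]

alerts = [
    {"host": "web1", "rule": "brute-force", "ts": 0,  "score": 0.4},
    {"host": "web1", "rule": "brute-force", "ts": 30, "score": 0.9},
    {"host": "db1",  "rule": "exfiltration", "ts": 10, "score": 0.7},
]
fused = fuse_alerts(alerts)
```

Here the two brute-force alerts on `web1` collapse into one, so three raw alarms become two fused ones.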
This research proposes a multimodal SIEM architecture designed to overcome current limitations in threat detection by integrating diverse data sources, including network traffic and event logs. The solution utilizes advanced neural networks to analyze intricate relationships within network connection features and their temporal dependencies. By further employing alert fusion, it combines alerts from different sources to provide a more comprehensive and complementary understanding of potential threats, addressing the issue of false positives.

Item Attribution Methods for Explainability of Predictive and Deep Generative Diffusion Models (Addis Ababa University, 2025-06) Debela Desalegen; Beakal Gizachew (PhD)

As machine learning models grow in complexity and their deployment in high-stakes domains becomes more common, the demand for transparent and faithful explainability methods has become increasingly urgent. However, most existing attribution techniques remain fragmented, targeting either predictive or generative models, and lack a hybrid approach that offers coherent interpretability across both domains. While predictive modeling faces challenges such as faithfulness, sparsity, stability, and reliability, generative diffusion models introduce additional complexity due to their temporal dynamics, token-to-region interactions, and diverse architectural designs. This work presents a hybrid attribution method designed to improve explainability for both predictive black-box models and generative diffusion models. We propose two novel methods: FIFA (Firefly-Inspired Feature Attribution), an optimization-based approach for sparse and faithful attribution in tabular models; and DiffuSAGE (Diffusion Shapley Attribution with Gradient Explanations), a temporally and spatially grounded method that attributes generated image content to individual prompt tokens using Aumann-Shapley values, Integrated Gradients, and cross-attention maps.
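One of the ingredients named above, Integrated Gradients, can be sketched numerically: attributions are the input-minus-baseline difference scaled by the gradient averaged along a straight-line path. This toy uses a hypothetical linear "model" so the result can be checked by hand; a real diffusion pipeline is far more involved.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Approximate IG: average gradients along the path from `baseline`
    to `x`, then scale elementwise by (x - baseline)."""
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.mean([grad_fn(baseline + a * (x - baseline)) for a in alphas],
                    axis=0)
    return (x - baseline) * grads

# Toy model f(x) = w . x, whose gradient is the constant vector w.
w = np.array([1.0, -2.0, 0.5])
x = np.array([2.0, 1.0, 4.0])
ig = integrated_gradients(lambda x_: w, x, np.zeros(3))
```

For a linear model, IG reduces exactly to `w * x`, and the attributions sum to `f(x) - f(baseline)` (the completeness axiom).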
FIFA was applied to Random Forest, XGBoost, CatBoost, and TabNet models on three benchmark datasets (Adult Income, Breast Cancer, and Diabetes), outperforming SHAP and LIME on key metrics: +6.24% sparsity, +9.15% Insertion AUC, -8.65% Deletion AUC, and +75% stability. DiffuSAGE was evaluated on Stable Diffusion v1.5 trained on the LAION-5B dataset, yielding a 12.4% improvement in Insertion AUC and a 9.1% reduction in Deletion AUC compared to DF-RISE and DF-CAM. A qualitative user study further validated DiffuSAGE's alignment with human perception. Overall, these contributions establish the first hybrid attribution methods for both predictive and generative models, addressing fundamental limitations in current XAI approaches and enabling more interpretable, robust, and human-aligned AI systems.

Item Optimizing Intrusion Detection Systems with Ensemble Deep Learning: A Comparative Study of RNN and LSTM Architectures (Addis Ababa University, 2024-10) Admasu Awash; Henock Mulugeta (PhD)

Given the complexity and severity of modern security attacks on computer networks, attackers can use a variety of methods to access, modify, or delete crucial organizational data. The rise in cyberattacks has made it necessary to create reliable and effective intrusion detection systems (IDS) that can instantly recognize malicious activity. IDS, which can automatically and quickly detect and categorize cyberattacks at host and network levels, have made substantial use of machine learning techniques. Although ML techniques such as K-Nearest Neighbors and Support Vector Machines have been used to build IDSs, those systems still suffer from high false alarm rates and poor accuracy. Many security researchers are therefore integrating different machine learning approaches to protect the data and reputation of organizations.
Deep learning algorithms have emerged as a powerful instrument in this field and can detect attacks with better precision than conventional techniques. Recently, deep learning has become more prominent in network-based intrusion detection systems, enhancing their efficiency in safeguarding hosts and computer networks. Within deep learning, ensemble learning has emerged as a potent method that improves on single models by combining several of them. The present study employed two recurrent neural network (RNN) architectures, namely simple RNNs and long short-term memory (LSTM) networks, to investigate the applicability of ensemble learning in intrusion detection systems. RNNs are suited to modeling sequential data in IDS by identifying temporal relations in network traffic. LSTMs, a kind of RNN, handle long-term dependencies well and help avoid the vanishing gradient problem, which is important for identifying complicated intrusion patterns. The performance of the designed model and the IDS was evaluated on the publicly available LITNET-2020 dataset using standard evaluation metrics. In multiclass classification the ensemble model fared better than the LSTM alone, yielding accuracy and precision of 99.981% and 99.965%, respectively, whereas the LSTM achieved accuracy and precision of 99.638% and 99.451%, respectively. The suggested ensemble approach also produced superior multiclass results across the various types of intrusions.

Item A Cyber Insurance Framework for Ethiopia: Key Components and Recommendations (Addis Ababa University, 2024-11) Ephrem Baheru; Sileshi Demesie (PhD)

The exponential rise in cyber threats such as ransomware, identity theft, and other forms of cybercrime has driven many organizations to seek cyber insurance as an extra layer of protection.
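The ensemble idea in the intrusion-detection study above is commonly realized as soft voting: averaging per-class probabilities from the member networks and taking the argmax. A minimal sketch, with made-up probability vectors standing in for trained RNN and LSTM outputs:

```python
import numpy as np

def soft_vote(prob_rnn, prob_lstm, weights=(0.5, 0.5)):
    """Average per-class probabilities from two models, return fused
    probabilities and the predicted class per sample."""
    fused = weights[0] * prob_rnn + weights[1] * prob_lstm
    return fused, fused.argmax(axis=1)

# Probabilities over 3 attack classes for 2 network flows (illustrative).
p_rnn  = np.array([[0.7, 0.2, 0.1], [0.1, 0.5, 0.4]])
p_lstm = np.array([[0.6, 0.3, 0.1], [0.2, 0.2, 0.6]])
fused, labels = soft_vote(p_rnn, p_lstm)
```

With equal weights, the first flow is assigned class 0 and the second class 2; unequal weights would let the stronger model dominate.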
Cyber insurance has emerged as a means of mitigating residual risks that remain after implementing various cyber risk mitigation strategies. Cyber-attacks in Ethiopia have been rising steadily each year, driven by a surge in digital transformation initiatives across various sectors, including government, financial institutions, and other critical infrastructure. This highlights the urgent need for cyber insurance services in the country, as they could help organizations manage financial losses and recover more effectively from cyber incidents. This study reveals that no insurance provider in the country currently offers cyber insurance services. The research aims to promote cyber insurance practice in Ethiopia by developing a cyber insurance framework that can be used by public and private organizations. To develop the framework, data was collected through face-to-face interviews with insurers, potential insureds, and regulatory bodies, and analyzed using a qualitative approach. We also studied global best practices and trends in cyber insurance. The framework is designed to help Ethiopian organizations manage cyber risks and recover effectively from cyber incidents and reputational damage. It includes key components such as stakeholder engagement, insurance coverage, risk assessment and underwriting, premium calculation, risk mitigation and loss prevention, incident response and claims processing, regulatory compliance, awareness and education, review and iteration, and collaboration and information sharing. A case study is used to demonstrate how a company can successfully implement the framework.

Item Cybersecurity Governance Framework for Ethiopian National Identification Program (Addis Ababa University, 2025-06) Selwa Nurye; Henock Mulugeta (PhD)

Ethiopia launched its digital transformation strategy, Digital Ethiopia 2025, in 2020 to build a sustainable digital economy.
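For the premium-calculation component named in the insurance framework above, a common actuarial starting point is expected annual loss plus a loading factor. This is a toy illustration under assumed inputs, not the thesis's actual pricing method:

```python
def cyber_premium(asset_value, annual_breach_prob, loss_fraction, loading=0.3):
    """Expected-loss pricing: P(breach) * expected loss given breach,
    marked up by `loading` to cover expenses and a risk margin.
    All parameter values here are illustrative."""
    expected_loss = annual_breach_prob * asset_value * loss_fraction
    return expected_loss * (1 + loading)

# A firm with ETB 1,000,000 of insured assets, a 5% annual breach
# probability, and 40% of asset value lost per breach.
premium = cyber_premium(asset_value=1_000_000,
                        annual_breach_prob=0.05,
                        loss_fraction=0.4)
```

Real underwriting would condition the breach probability on the risk-assessment findings (controls in place, sector, incident history) rather than use a flat rate.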
One of the key priorities of this strategy is to implement digital identification for all citizens and residents. Digitalizing government services and businesses requires a secure, electronic representation of individuals and entities, proving their identity and reliability during transactions or interactions, both online and in person. However, the increasing interconnectivity of the digital world poses ongoing cybersecurity challenges. Digital IDs, while crucial to enabling the digital economy, are vulnerable to the same cyber risks that affect other widely used digital technologies. Although global efforts to develop national digital identity systems aim to enhance security and convenience, they also face significant technical, ethical, and security challenges. These systems are vital for achieving the Sustainable Development Goals (SDGs), but they often grapple with issues such as privacy, data management, enrollment processes, and costs. As a result, effective cybersecurity governance is essential, and the governance activities of the body responsible for overseeing these programs must align closely with the strategy's objectives. This study employed a qualitative research methodology, including in-depth interviews and document analysis, to collect the necessary data. Thematic analysis was used to process the data, leading to conclusions from which recommendations were derived. Based on the findings and insights from the reviewed literature, we developed a cybersecurity governance framework, validated through hypothetical cyber incident scenarios to show that it mitigates those incidents.
In addition, key performance indicators were prepared to assess the effectiveness of the framework in real-world scenarios.

Item Assessing Cybersecurity Readiness in Ethiopia Fintech Sector (Addis Ababa University, 2024-10) Teklehymanot Meheret; Elefelious Getachew (PhD)

The Ethiopian fintech sector has brought a significant transformation to financial transactions and the payment instrument business. This change, however, raises concerns among various stakeholders about the country's ability to protect the business and to mitigate the risks posed by bad actors exploiting its vulnerabilities. The research aims to investigate the cybersecurity readiness and preparedness of fintech companies, and how well their practices meet international standards, by answering three research questions. Regulators and fintech companies, the sector's major stakeholders, were engaged to obtain the relevant information. The research identified governance, resilience, and competency as the core variables for evaluating the readiness of the sector, mapped closely to international standards including the NIST CSF, ISO/IEC 27001, and FFIEC. The study prepared two separate questionnaires to capture the current cybersecurity practice of the two participant groups. Analysis of the collected data revealed a clear gap and lack of readiness: according to the findings, the sector lacks a comprehensive framework that meets international standards. There was limited practice of backups, business continuity planning, and incident response planning, which undermines the resilience of the sector. The research also identified an inadequate supply of skilled cybersecurity experts and low awareness levels, which limit the competency of the fintech ecosystem to raise awareness and build a cybersecurity culture. The research developed a cybersecurity assessment framework that helps the sector protect its critical assets through proper evaluation and assessment of its risks and weaknesses.
The proposed framework went through a validation process to ensure its relevance to the challenges identified in the research and its alignment with basic global standards. The research concludes with recommendations to enhance cybersecurity practice and collaboration, and proposes a tailored cybersecurity framework for continuous improvement.

Item Cybersecurity Maturity Assessment Framework: The Case of Ethiopian Banks (Addis Ababa University, 2024-10) Yafet Ashebir; Elefelious Getachew (PhD)

As the banking sector becomes a key player in globalized cyberspace, with increasing reliance on digital services, it is prone to a wide range of emerging cybersecurity risks. Cybersecurity can only be achieved through a well-organized set of controls, yet existing cybersecurity maturity frameworks, while comprehensive, are often vague and fail to address the unique challenges faced by Ethiopian banks. The literature review found no study proposing a cybersecurity maturity assessment framework for the Ethiopian banking sector. This study aims to propose a customized framework by reviewing multiple cybersecurity maturity assessment frameworks to identify their weaknesses and strengths. After a thorough assessment, we identified the major limitations of the existing frameworks: they are not easy to understand, are expensive to implement, require intensive and well-equipped human resources, and are not tailored to the banking sector's operational challenges. Moreover, to assess existing cybersecurity maturity practice in banks, data was collected from 9 selected governmental and private banks, and a thematic analysis approach was applied to the qualitative data. The findings reveal that none of the selected banks has a proper cybersecurity maturity assessment framework, and adoption of international standards is improper.
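Maturity assessments of the kind discussed above are often summarized as a weighted average of per-domain scores on a 0-5 scale. A minimal sketch; the domains and weights here are illustrative, not those of the proposed framework:

```python
def maturity_score(domain_scores, weights):
    """Weighted average of per-domain maturity levels (0-5 scale).
    `domain_scores` and `weights` must cover the same domains."""
    assert set(domain_scores) == set(weights)
    total_w = sum(weights.values())
    return sum(domain_scores[d] * weights[d] for d in domain_scores) / total_w

# Hypothetical assessment of one bank across three domains.
scores  = {"governance": 3, "risk_management": 2, "incident_response": 4}
weights = {"governance": 0.4, "risk_management": 0.3, "incident_response": 0.3}
overall = maturity_score(scores, weights)
```

The per-domain scores, not just the overall number, are what drive remediation priorities in practice.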
To address the identified weaknesses, a customized cybersecurity maturity assessment framework is proposed to enable banks to identify their security posture and manage their security risks. The proposed framework comprises components such as regulatory requirements, personal data protection, supply chain security, awareness and culture development, cyber governance, cyber risk management, business continuity and disaster recovery, incident response planning, and information sharing and collaboration, and incorporates international best practices such as the General Data Protection Regulation (GDPR). The framework was evaluated through expert review. It contributes to both academic literature and industry practice by providing a customized means for banks to assess and improve their cybersecurity maturity.

Item Ensemble Learning with Attention and Audio for Robust Video Classification (Addis Ababa University, 2025-06) Dereje Tadesse; Beakal Gizachew (PhD)

The classification of video scenes is a fundamental task for many applications, such as content recommendation, indexing, and broadcast monitoring. Current methods often depend on annotation-dependent object detection models, restricting their generalizability across different types of broadcast content, particularly where visual cues like logos or brands are not clearly defined or present. This thesis addresses these problems with a two-stage classification framework that integrates visual and audio information to improve classification accuracy and robustness. The first stage uses a detection model built on pretrained object detection models with enhanced spatial attention to detect explicit visual markers (such as program logos or branded intro sequences) in video content.
However, individual visual indicators are sometimes not robust enough on their own, especially in content such as situational comedies where logos may be absent. The second stage therefore applies an early-fusion ensemble of convolutional neural network-based visual features and recurrent neural network-based audio features. The two modalities contribute complementary properties and can thus be combined for more robust classification. Experiments were conducted on a dataset of approximately 19 hours of content from 13 TV programs across three channels, focused on intro, credit, and outro segments. The visual-only model achieved 96.83% accuracy, while the audio-only model achieved 90.91%. The proposed early-fusion ensemble achieved 94.13% accuracy and proved more robust in difficult situations where visual data quality was low or ambiguous. Ablation studies contrasting different ensemble methods confirmed the utility of early fusion and its ability to capture cross-modal interactions. The system is also designed to be computationally efficient, allowing deployment in broadcast media settings. This work fills a significant gap in scalable and generalizable video classification through the integration of multimodal learning, especially where large amounts of uncontrolled annotations have previously been a hurdle for more conventional models.

Item Identification and Classification of Illegal Dark Web Activities in East Africa Region (Addis Ababa University, 2024-08) Tariku Eshetu; Fitsum Assamnew (PhD)

Online criminal activity manifests in various forms across the Surface, Deep, and Dark Web layers of the Internet. The darknet environment is notorious for various illegal activities, including financial crimes, hacking, recruitment for terrorism and extremism, child pornography, human organ trafficking, drug trafficking, and illegal arms trading. Law enforcement faces significant challenges in identifying specific criminal websites due to the ineffectiveness of traditional investigative techniques. In East Africa, the growth of technology has created economic and social opportunities, but it has also led to increased internet penetration and connectivity, making the region an attractive target for cybercriminals. Compounding the issue are the insufficient readiness of security organizations and a lack of user awareness, which further facilitate cybercrime. This thesis investigates the landscape of cybercrime on the Dark Web, focusing specifically on East African Internet Protocol (IP) address spaces, an area that has been largely under-researched in the existing literature. The research seeks to address a pronounced gap in knowledge regarding the types of illegal activities and associated protocols on the Dark Web, particularly given existing studies' inadequacies in contextualizing research within East African socio-political frameworks. The research pivots around two key questions: (1) What types of protocols operate through the Dark Web in East African IP address spaces?
and (2) What illegal activities are conducted through these protocols? The objectives of this study are multifaceted: to develop a robust methodology for data collection and analysis from Tor exit nodes within the East African region, to classify the prevalent communication protocols, and to categorize the diverse illegal activities identified. Through thorough examination of Tor network traffic, the study reveals crucial patterns, including a dominance of TCP and TLS, smaller shares of other protocols such as DATA, Bitcoin, HTTP, DNS, and SSH, and illicit activities significantly associated with drugs, violence, and software piracy. The findings underscore the pressing need for tailored law enforcement strategies, informed policymaking, and collaborative regional approaches to manage the escalating threats. By integrating advanced data analytics techniques and multithreaded computing, this thesis provides a framework for ongoing cybercrime analysis, enhancing situational awareness for stakeholders and facilitating more effective monitoring of the Dark Web. The implications of this research extend beyond academic inquiry: it offers practical resources for law enforcement agencies, policymakers, and researchers in mitigating cyber threats, thereby contributing to a safer digital environment in East Africa.

Item Deep Learning-Based Amharic Keyword Extraction for Open-Source Intelligence Analysis (Addis Ababa University, 2025-06) Alemayehu Gutema; Henok Mulugeta (PhD)

In today's digital age, information overload has become a pressing concern, especially in the field of open-source intelligence (OSINT). With vast amounts of data available on the internet, it is challenging to separate relevant and credible information from the noise. OSINT involves gathering intelligence from publicly available sources.
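The protocol-classification step in the Dark Web study above ultimately reduces to tallying a protocol label per captured flow and reporting each protocol's share of traffic. A sketch with made-up flow records:

```python
from collections import Counter

def protocol_distribution(flows):
    """Return each protocol's share of the captured flows, as percentages,
    ordered from most to least common."""
    counts = Counter(f["protocol"] for f in flows)
    total = sum(counts.values())
    return {proto: 100.0 * n / total for proto, n in counts.most_common()}

# Illustrative labels standing in for dissected Tor exit-node traffic.
flows = [{"protocol": p} for p in
         ["TCP", "TLS", "TCP", "DNS", "TCP", "TLS", "HTTP", "TCP"]]
dist = protocol_distribution(flows)
```

On this toy capture, TCP dominates at 50% with TLS at 25%, mirroring the kind of TCP/TLS dominance the study reports.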
However, with the increasing volume and diversity of online content, it has become difficult to extract actionable intelligence from enormous amounts of data. Deep learning can help identify patterns in large amounts of data and automate decision-making processes. Despite these advances, the problem of information overload persists. One approach to addressing it is to develop an effective deep learning model to extract the relevant information; leveraging machine and deep learning algorithms together with natural language processing (NLP) can automatically classify and categorize information. The purpose of this study is to design a deep learning model for keyword extraction that can surface intelligence from a large Amharic dataset. Keyword extraction is the process of identifying important words or phrases that capture the essence of a given piece of text. This task is critical for many natural language processing applications, including document summarization, information retrieval, and search engine optimization. In recent years, deep learning algorithms have shown great promise in this field, largely due to their ability to learn from vast amounts of data and extract complex patterns. In this work, we propose a novel keyword extraction approach based on deep learning methods. We explore different architectures, such as recurrent neural networks (RNNs) and transformer models, to learn relevant features from the input text and predict the most salient keywords. We evaluate the proposed method on datasets containing Amharic content and show that it outperforms state-of-the-art methods.
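A common non-neural baseline for the keyword-extraction task described above is TF-IDF scoring, which favors terms frequent in one document but rare across the corpus. A minimal sketch using a smoothed idf; toy English tokens stand in for tokenized Amharic text:

```python
import math
from collections import Counter

def tfidf_keywords(doc_tokens, corpus, top_k=2):
    """Score each term in `doc_tokens` by tf * idf over `corpus` (a list
    of token lists) and return the top_k highest-scoring terms."""
    n_docs = len(corpus)
    tf = Counter(doc_tokens)
    scores = {}
    for term, count in tf.items():
        df = sum(1 for d in corpus if term in d)
        idf = math.log((1 + n_docs) / (1 + df)) + 1.0  # smoothed idf
        scores[term] = (count / len(doc_tokens)) * idf
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]]

corpus = [["security", "breach", "report"],
          ["weather", "report", "today"],
          ["security", "alert", "network", "security"]]
keywords = tfidf_keywords(corpus[2], corpus)
```

Neural extractors like the RNN and transformer models mentioned above are typically evaluated against exactly this kind of frequency-based baseline.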
Our results suggest that deep learning-based approaches have the potential to significantly improve keyword extraction accuracy and scalability in real-world applications.

Item Multimodal Unified Bidirectional Cross-Modal Audio-Visual Saliency Prediction (Addis Ababa University, 2025-06) Tadele Melesse; Natnael Argaw (PhD); Beakal Gizachew (PhD)

Human attention in dynamic environments is inherently multimodal, shaped by the interplay of auditory and visual cues. Although existing saliency prediction methods predominantly focus on visual semantics, they neglect audio as a critical modulator of gaze behavior. Recent audiovisual approaches attempt to address this gap but remain limited by temporal misalignment between modalities and inadequate retention of spatio-temporal information, which is key to resolving both the location and timing of salient events, ultimately yielding suboptimal performance. Inspired by recent breakthroughs in cross-attention transformers with convolutions for joint global-local representation learning, and in conditional denoising diffusion models for progressive refinement, we introduce a novel multimodal framework for efficient bidirectional audiovisual saliency prediction. It employs dual-stream encoders to process video and audio independently, coupled with separate efficient cross-modal attention pathways that model mutual modality influence: one pathway aligns visual features with audio features, while the other adjusts audio embeddings to visual semantics. Critically, these pathways converge into a unified latent space, ensuring coherent alignment of transient audiovisual events through iterative feature fusion. To preserve fine-grained details, residual connections propagate multiscale features across stages.
For saliency generation, a conditional diffusion decoder iteratively denoises a noise-corrupted ground-truth map, conditioned at each timestep on the fused audiovisual features through a hierarchical decoder that enforces spatio-temporal coherence via multiscale refinement. Extensive experiments demonstrate that our model outperforms state-of-the-art methods, achieving improvements of up to 11.52% (CC), 20.04% (SIM), and 3.79% (NSS) over DiffSal on the AVAD dataset.

Item A Structured Framework for Email Forensic Investigations (Addis Ababa University, 2025) Biruk Bekele; Henok Mulugeta (PhD)

Email forensic investigations have become vital in legal, cybersecurity, and corporate settings. However, most existing frameworks suffer from inefficiency, data integrity problems, and the complexity of handling diverse data sources, including encrypted emails and metadata. This thesis applied the Design Science Methodology to develop a structured framework that enhances efficiency and effectiveness in email forensic investigations, specifically addressing data quality, diversity in data management, and the integrity of evidence. The framework comprises key phases: case management, governance, identification, preservation, classification, analysis, presentation, and compliance. Case management forms the core of the proposed framework, organizing and tracking the investigation from start to finish to ensure that evidence is handled properly and all phases are executed systematically.
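The CC and NSS metrics cited for the saliency model above have simple definitions: CC is the linear correlation between predicted and ground-truth maps, and NSS is the mean of the z-scored prediction at human fixation points. A minimal sketch on tiny toy maps:

```python
import numpy as np

def nss(sal_map, fixations):
    """Normalized Scanpath Saliency: mean of the z-scored saliency map
    sampled at the given fixation coordinates."""
    z = (sal_map - sal_map.mean()) / sal_map.std()
    return z[fixations].mean()

def cc(sal_map, gt_map):
    """Linear correlation coefficient between two saliency maps."""
    return np.corrcoef(sal_map.ravel(), gt_map.ravel())[0, 1]

# Toy 2x2 maps and two fixation coordinates (row indices, column indices).
pred = np.array([[0.1, 0.9], [0.2, 0.8]])
gt   = np.array([[0.0, 1.0], [0.1, 0.9]])
fix  = (np.array([0, 1]), np.array([1, 1]))
```

Higher is better for both; since `pred` tracks `gt` closely and the fixations land on high-saliency cells, both scores come out high here.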
The framework integrates open-source tools, varied case studies, and best practices to remain relevant to different real-world scenarios. Its effectiveness is demonstrated in practical application, with performance measured in terms of investigation speed, data quality, accuracy, and user satisfaction, among other metrics. This research shows that the suggested framework decreases investigation time, reduces error rates, improves the quality of data management, and guarantees effective access to various data sources. The thesis contributes on both practical and theoretical levels, comprehensively guiding practitioners and researchers in digital forensics toward more efficient, accountable, and adaptable email forensic investigations.

Item Cybersecurity Incident Management Framework for Smart Grid Systems in Ethiopia (Addis Ababa University, 2024-06) Getinet Admassu; Henock Mulugeta (PhD)

Merging OT and IT into smart grid systems has brought new advantages. Smart grids can use this amalgamation to manage energy generation and transmission with minimal loss, resulting in high efficiency. Integrating IT and OT into the smart grid also enables real-time infrastructure monitoring. On the other hand, this digital change has exposed smart grids to many cybersecurity threats, and securing these key infrastructures requires developing and implementing robust cybersecurity incident management systems. Based on evidence from existing literature and expert judgment, this work enumerates the principal challenges power utilities face in managing cybersecurity incidents and then outlines a comprehensive cybersecurity incident management framework. This framework enables power utilities to take an active role in handling cybersecurity incidents.
The model also ensures that cybersecurity is addressed systematically across all strategic, engineering, procurement, construction, and operational aspects, involving all concerned parties and resources. A qualitative design science approach facilitated the development of the framework. It organizes sophisticated threat detection techniques and counter-threat strategies and correlates them with risk management, threat analysis, security controls, operational models, and management. These techniques include real-time monitoring of network traffic and system logs, anomaly detection algorithms, and intrusion detection and prevention systems, with which power utilities can significantly improve their ability to detect and respond to cybersecurity events. Threat scenarios, including coordinated DDoS and ransomware attacks against the various components of the proposed framework, illustrate how it can be used to develop effective responses to cybersecurity incidents. The framework's recommendations target particular challenge areas within the electric power industry and underpin its cybersecurity posture, with a view to keeping critical energy infrastructure reliable and dependable. This research encourages sustainable development and social welfare by strengthening the cybersecurity resilience of smart grid systems.
Item Framework for PKI Implementation: Optimizing Project Management in Ethiopia(Addis Ababa University, 2024-09) Binyam Ayele; Henock Mulugeta (PhD) In today's increasingly digital world, the security of online communications and transactions is paramount. Public Key Infrastructure (PKI) has emerged as a cornerstone technology for ensuring secure, authenticated, and confidential digital interactions.
However, implementing PKI projects remains challenging due to inherent complexities, including certificate management, key distribution, system integration, contradictions and limitations in national legal frameworks, and lack of interoperability. The absence of a standardized implementation framework further exacerbates these challenges, leading to inconsistent and often flawed deployments that fail to realize the full potential of PKI. This study investigates how to optimize a PKI project implementation framework that supports the establishment of a PKI at the national or organizational level, developing a comprehensive framework that mitigates PKI project implementation challenges. It addresses the critical need for a framework that can guide organizations through the complexities of PKI deployment. The problem under investigation is the absence of a standardized, generic framework and best practices for PKI implementation, which has resulted in varied levels of security and effectiveness across sectors. The study aims to develop a framework adaptable to diverse organizational contexts, ensuring that PKI systems are implemented in a manner that is both secure and scalable. To achieve this goal, a systematic literature review (SLR) was employed as the primary research method. The SLR systematically identified, evaluated, and synthesized existing research on PKI implementation, focusing on the challenges, best practices, and solutions proposed in the literature. By analyzing a wide range of studies, it provides a comprehensive understanding of the current state of PKI implementation and identifies gaps that the proposed framework can address, ensuring a rigorous, evidence-based approach to the framework's development.
This research focused on developing a PKI implementation framework that assists PKI project management. A case study and Key Performance Indicators (KPIs) are incorporated to evaluate the proposed framework. As a direct outcome of this study, stakeholders who plan to implement PKI in Ethiopia or other countries will gain a proactive understanding of the implementation considerations that should be taken into account.
Item Leveraging Intel SGX and Hybrid Design for Secure National ID Systems(Addis Ababa University, 2025-01) Tesfalem Fekadu; Sileshi Demesie (PhD) Globally, 1.1 billion individuals, including 21 million refugees, lack proof of legal identity, disproportionately affecting children and women in rural areas of Asia and Africa. Without official identification, access to essential services such as education, healthcare, banking, and public distribution systems becomes nearly impossible. The increasing reliance on digital identity management systems demands robust security measures to safeguard sensitive personal data. The Modular Open-Source Identity Platform (MOSIP) is a widely adopted solution due to its flexibility and scalability. However, protecting sensitive data during National ID enrollment, registration, and authentication remains a significant challenge. In particular, decrypting biometric data before feature comparison in server environments exposes that data to critical vulnerabilities, increasing the risk of attack. Reliance on software-based Software Development Kits (SDKs) for biometric matching exacerbates the issue, as these SDKs often operate alongside other software modules, expanding the attack surface. Software-based approaches are inherently risky due to the high likelihood of exploitable bugs, which attackers can use to compromise data integrity or gain unauthorized access.
This study addresses these security challenges by integrating Trusted Execution Environments (TEEs) to protect data during processing. A hybrid architecture is proposed, incorporating an SGX-based solution named SGX-BioShield to improve security while preserving performance. A prototype of the proposed solution has been developed and tested, demonstrating that SGX-BioShield significantly reduces the risk of unauthorized access and data breaches by isolating sensitive operations within a hardware-protected environment. Intel SGX ensures that data remains secure even if the operating system or hypervisor is compromised. This research contributes to the field of identity management by presenting a novel approach to securing platforms like MOSIP, providing practical insights into improving data security and overall system performance through a hybrid architecture for digital identity systems.
Item Modular Federated Learning for Non-IID Data(Addis Ababa University, 2025-06) Samuel Hailemariam; Beakal Gizachew (PhD) Federated Learning (FL) promises privacy-preserving collaboration across distributed clients but is hampered by three key challenges: severe accuracy degradation under non-IID data, high communication and computational demands on edge devices, and a lack of built-in explainability for debugging, user trust, and regulatory compliance. To bridge this gap, we propose two modular FL pipelines, SPATL-XL and SPATL-XLC, that integrate SHAP-driven pruning with, in the latter, dynamic client clustering. SPATL-XL applies SHAP-based pruning to the largest layers, removing low-impact parameters to both reduce model size and sharpen interpretability, whereas SPATL-XLC further groups clients via lightweight clustering to reduce communication overhead and smooth convergence in low-bandwidth, high-client settings.
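The pruning step described above can be sketched as follows. Computing real SHAP attributions requires a model and background data, so per-weight importance scores are taken as given here, standing in for mean |SHAP| values; this is a simplified illustration, not the exact SPATL-XL procedure:

```python
import numpy as np

def shap_prune(weights: np.ndarray, importance: np.ndarray,
               prune_frac: float) -> np.ndarray:
    """Zero out the prune_frac least-important weights.

    `importance` stands in for per-weight mean |SHAP| attributions.
    Returns a pruned copy; the original array is left untouched.
    """
    assert weights.shape == importance.shape
    k = int(prune_frac * weights.size)
    if k == 0:
        return weights.copy()
    # k-th smallest importance value serves as the pruning threshold
    threshold = np.partition(importance.ravel(), k - 1)[k - 1]
    mask = importance > threshold
    return weights * mask
```

Keeping only high-attribution parameters both shrinks the update that must be communicated each round and leaves a model whose remaining weights are, by construction, the ones the explanations consider influential.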
In experiments on CIFAR-10 and Fashion-MNIST over 200 communication rounds under IID and Dirichlet non-IID splits, our pipelines lower per-round communication to 13.26 MB, speed up end-to-end training by 1.13×, raise explanation fidelity from 30–50% to 89%, match or closely approach SCAFFOLD's 70.64% top-1 accuracy (SPATL-XL: 70.36%), and maintain stable clustering quality (Silhouette, CHI, DBI) even when only 40–70% of clients participate. These results demonstrate that combining explainability-driven pruning with adaptive clustering yields practical, communication-efficient, and regulation-ready FL pipelines that simultaneously address non-IID bias, resource constraints, and transparency requirements.
Item Optimizing Explainable Deep Q-Learning via SHAP, LIME, & Policy Visualization(Addis Ababa University, 2025-06) Tesfahun Yemisrach; Beakal Gizachew (PhD); Natnael Argaw (PhD), Co-Advisor Reinforcement learning (RL) has demonstrated remarkable promise in sequential decision-making tasks; however, its interpretability issues remain a hindrance in high-stakes domains that demand regulatory compliance, transparency, and trust. Recent research has investigated post-hoc explainability using techniques such as SHAP and LIME, but these methods are frequently isolated from the training process and lack cross-domain evaluation. To fill this gap, we propose an explainable Deep Q-Learning (DQL) framework that incorporates explanation-aligned reward shaping and model-agnostic explanation techniques into the agent's learning pipeline. The framework is tested in both financial settings and classical control environments, demonstrating broad applicability. Experimental findings show that the explainable agent consistently outperforms the baseline in explanation fidelity, average reward, and convergence speed.
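The abstract does not give the exact shaping rule, so the sketch below uses one plausible form: a bonus proportional to how faithfully a local surrogate (e.g. a LIME fit to the agent's Q-values) reproduces the agent's behavior, shown here with a tabular Q-learning update for simplicity rather than a deep network:

```python
import numpy as np

def shaped_reward(env_reward: float, fidelity: float, lam: float = 0.1) -> float:
    """Explanation-aligned reward shaping (hypothetical form):
    add a bonus proportional to surrogate-explanation fidelity,
    assumed to lie in [0, 1]."""
    return env_reward + lam * fidelity

def q_update(q: np.ndarray, state: int, action: int, reward: float,
             next_state: int, alpha: float = 0.1, gamma: float = 0.99) -> np.ndarray:
    """One tabular Q-learning step using the (shaped) reward."""
    target = reward + gamma * np.max(q[next_state])
    q[state, action] += alpha * (target - q[state, action])
    return q
```

Because the fidelity bonus only adds to the environment reward, the agent is nudged toward policies whose decisions a local surrogate can reproduce, without otherwise altering the Q-learning update.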
In CartPole, the agent obtained a LIME fidelity score of 87.2% versus 63.5% and an average reward of 190 versus 130 for the baseline. In the financial domain, it produced an 89.10% win ratio, a Sharpe ratio of 0.4782, and a return of 154.32%. These results show that incorporating explainability into RL enhances interpretability as well as stability and performance across domains, aiding the development of transparent and reliable reinforcement learning systems.
Item Provenance Blockchain with Predictive Auditing Framework for Mitigating Cloud Manufacturing Risks in Industry 4.0(Addis Ababa University, 2025-06) Mifta Ahmed; Gouveia, Luis Borges (PhD); Elefelious Getachew (PhD) Cloud manufacturing is an evolving concept that enables manufacturers to connect and address shared demand streams regardless of geographical location. Although this transformation facilitates operational flexibility and resource optimization, it also introduces critical challenges related to continuous visibility, traceability, and proactive security management within Industrial Internet of Things (IIoT)-enabled cloud manufacturing environments. Notably, the absence of real-time insight into device states and operational behaviors increases susceptibility to unauthorized access, latent security breaches, and operational disruption, whereas existing blockchain-based solutions predominantly emphasize initial authentication and transactional integrity but lack mechanisms for ongoing device verification and continuous provenance tracking. Simultaneously, artificial intelligence (AI)-driven predictive auditing techniques have evolved in isolation, without harnessing the immutability, accountability, and policy-enforcement capabilities afforded by blockchain technology. This fragmentation results in limited traceability and weakened system integrity, particularly in dynamic IIoT ecosystems where timely, data-driven decision making is imperative.
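Continuous provenance tracking of device events can be illustrated with a minimal hash-chained log, where each record commits to its predecessor so any later tampering is detectable. This is a simplified stand-in for blockchain-based provenance logging, not the thesis's implementation; the device name and event fields are hypothetical:

```python
import hashlib
import json

def append_event(chain: list, device_id: str, event: dict) -> dict:
    """Append an IIoT device event to a hash-chained provenance log."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"device": device_id, "event": event, "prev": prev_hash}
    # the block's hash commits to its content and the previous block
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    block = dict(body, hash=digest)
    chain.append(block)
    return block

def verify_chain(chain: list) -> bool:
    """Recompute every hash and link; False if any record was altered."""
    prev = "0" * 64
    for block in chain:
        body = {k: block[k] for k in ("device", "event", "prev")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != digest:
            return False
        prev = block["hash"]
    return True
```

Editing any earlier record changes its recomputed hash, breaking the link to every subsequent block, which is the property that gives provenance logs their tamper evidence.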
This study aims to address these gaps through three primary objectives: (i) optimize blockchain architectures to support continuous monitoring, traceability, and visibility in IIoT environments; (ii) develop and integrate AI-based predictive auditing mechanisms with blockchain to proactively detect and mitigate security risks in IIoT-based cloud manufacturing; and (iii) evaluate the effectiveness of the integrated blockchain and predictive auditing framework in addressing security, traceability, and real-time visibility challenges while maintaining operational continuity. Adopting a Design Science Research Methodology (DSRM), the study develops and rigorously evaluates an integrated framework that combines dynamic blockchain-based provenance logging with AI-driven anomaly detection. The experimental evaluation used a scenario-based setup in a cloud-simulated multizone warehouse environment involving IIoT-enabled forklifts operating under three behavioral scenarios: fully compliant, partially compliant, and rogue. Key evaluation metrics included validation accuracy (94%), prediction precision (up to 99.7%), F1 score (90%), traceability rate (82–85%), average system latency (3.95 seconds), transaction rejection rate (100% for rogue inputs), and operational uptime (100%, with no downtime). The results substantiate the framework's ability to provide real-time responsiveness, robust security, and continuous traceability while maintaining operational continuity, even under adversarial or non-compliant conditions. This study contributes to the body of knowledge by bridging the gap between blockchain technology and AI in IIoT-enabled cloud-manufacturing security.
These findings have practical implications for the secure deployment of IIoT technologies across smart manufacturing ecosystems.
Item Collatz Sequence-Based Weight Initialization for Enhanced Convergence and Gradient Stability in Neural Networks(Addis Ababa University, 2025-06) Zehara Eshetu; Beakal Gizachew (PhD); Adane Letta (PhD) Deep neural networks have achieved state-of-the-art performance in tasks ranging from image classification to regression. However, their training dynamics remain highly sensitive to weight initialization, a fundamental factor that influences both convergence speed and model performance. Traditional initialization methods such as Xavier and He rely on fixed statistical distributions and often underperform across diverse architectures and datasets. This study introduces Collatz Sequence-Based (CSB) Weight Initialization, a novel deterministic approach that leverages the structured chaos of Collatz sequences to generate initial weights. CSB applies systematic transformations and scaling strategies to improve gradient flow and enhance training stability. It is evaluated against seven baseline initialization techniques using a CNN on the CIFAR-10 dataset and an MLP on the California Housing dataset. Results show that CSB consistently outperforms conventional methods in both convergence speed and final performance. Specifically, CSB converges up to 55.03% faster than Xavier and 18.49% faster than He on a 1,000-sample subset, and maintains a 20.64% speed advantage over Xavier on the full CIFAR-10 dataset. On the MLP, CSB shows a 58.12% improvement in convergence speed over He. Beyond convergence, CSB achieves a test accuracy of 78.12% on CIFAR-10, outperforming Xavier by 1.53% and He by 1.34%. On the California Housing dataset, CSB attains an R² score of 0.7888, a 2.35% improvement over Xavier.
Gradient analysis reveals that CSB-initialized networks maintain balanced L2 norms across layers, effectively reducing vanishing- and exploding-gradient issues. This stability contributes to more reliable training dynamics and improved generalization. However, the study is limited by its focus on shallow architectures and lacks a robustness analysis across diverse hyperparameter settings.
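As an illustration of the idea, the sketch below derives deterministic weights from Collatz stopping times, centered and rescaled to He-style variance. The particular transformation and scaling are assumptions for illustration, since the abstract does not state the exact CSB formulas:

```python
import numpy as np

def collatz_length(n: int) -> int:
    """Number of steps for n to reach 1 under the Collatz map."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

def collatz_init(fan_in: int, fan_out: int) -> np.ndarray:
    """Deterministic weight initialization from Collatz stopping times
    (illustrative scheme, not the thesis's exact transformation).
    Stopping times are standardized, then rescaled so the weight
    variance matches the He target of 2 / fan_in."""
    size = fan_in * fan_out
    lengths = np.array([collatz_length(i) for i in range(1, size + 1)],
                       dtype=float)
    standardized = (lengths - lengths.mean()) / (lengths.std() + 1e-12)
    return (standardized * np.sqrt(2.0 / fan_in)).reshape(fan_in, fan_out)
```

Unlike random schemes, the same layer shape always yields the same weights, which makes runs exactly reproducible while the erratic growth of Collatz stopping times supplies the variation a random draw would normally provide.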