Telecommunication Engineering
Recent Submissions
Item Comparative Analysis of Machine Learning Models for Prediction of BTS Power System Failure (Addis Ababa University, 2025-06) Biruktayt Fentabil; Dereje Hailemariam (PhD)
Loss of power system integrity in Base Transceiver Stations (BTS) can have significant impacts on communication networks, causing service outages and loss of revenue. This study focused on developing and testing machine learning models that predict BTS power system failures so that maintenance can be performed before failures occur. The models tested were Hidden Markov Models (HMM), Long Short-Term Memory (LSTM), and Random Forest (RF). The data used for this research comprised 46,943 records from ethiotelecom's (ET) monitoring system. ET provided information on the monitored variables that influence power system failure, including environmental, load, battery, and other BTS-related metrics. The data were pre-processed: Z-score normalization was applied to standardize the data, after which principal component analysis (PCA) was used for feature analysis. K-means clustering was then applied to categorize the hidden states ('Normal', 'Degraded', and 'Failure') and to group the observable sequences. The HMM was trained using the Baum-Welch algorithm, and the Viterbi algorithm was used for state prediction. To enhance performance, a range of hyperparameter tuning approaches was applied to the RF and LSTM models. With a 97.72% F1-score, 98.04% accuracy, 98.08% precision, and 98.04% recall, the HMM outperformed the other two models when load-related parameters were used as observations. With an accuracy of 97.81% and an F1-score of 96.74%, the LSTM model ranked second, capturing temporal dependencies in the data. Although robust, RF performed slightly worse, with an F1-score of 93% and an accuracy of 95%. The study concludes that the HMM is the best model for forecasting BTS power system failures. Owing to its precision and reliability, ethiotelecom can use it to enhance network performance by facilitating proactive maintenance, reducing downtime, and increasing user satisfaction.

Item Machine Learning for Improved Root Cause Analysis of Data Center Energy Inefficiency (Addis Ababa University, 2025-06) Elsa Abreha; Dereje Hailemariam (PhD)
This study addresses the critical issue of energy inefficiency in data centers by developing a machine learning-based framework for root cause analysis (RCA) of high power usage effectiveness (PUE) values. Focusing on the Nefas Silk Data Center, the research leverages a 1D Convolutional Neural Network (CNN) model to classify PUE efficiency and employs SHapley Additive exPlanations (SHAP) to interpret the contributions of key operational features. Analysis of the dataset, comprising 6,586 hourly measurements, identifies air conditioning systems as the primary driver of inefficiency, followed by UPS losses during power conversion and distribution and by rectifier performance. The proposed 1D CNN model demonstrates outstanding performance, achieving an accuracy, sensitivity, and F1 score of 99.99%, outperforming the comparative LSTM and RNN architectures. By integrating global and local interpretability methods, the framework provides recommendations to optimize energy consumption, reduce operational costs, and improve sustainability.
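As an illustration of the kind of pipeline described in the data center RCA abstract above, the following is a minimal sketch of a 1D CNN efficiency classifier with SHAP attribution. It assumes TensorFlow/Keras and the shap package; the window length, feature count, synthetic data, and architecture are placeholder assumptions, not the thesis's actual dataset or model.

```python
import numpy as np
import tensorflow as tf
import shap

# Synthetic stand-in for hourly operational features (hypothetical shape):
# 24-hour windows of 6 features (e.g., cooling load, UPS loss, rectifier efficiency).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 24, 6)).astype("float32")
y = (X[:, :, 0].mean(axis=1) > 0).astype("int32")  # toy "inefficient (high PUE)" label

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(24, 6)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Per-feature, per-time-step attribution toward the "inefficient" class.
# GradientExplainer works with Keras models, though shap API details vary by version.
explainer = shap.GradientExplainer(model, X[:100])
shap_values = explainer.shap_values(X[:10])
```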
The findings underscore the potential of machine learning to transform data center energy management, offering a scalable solution for improving efficiency in similar infrastructures. Future work will aim to improve energy efficiency in data centers by strengthening root cause analysis through the integration of historical data from all data centers at the core site, and will include comparative studies to assess regional factors that influence performance. These initiatives seek to create a robust framework for sustainable energy management in various environments.

Item Root Cause Analysis of Insufficient Effective Throughput in 4G Core Network using Long Short-Term Memory Networks (Addis Ababa University, 2025-06) Simon Getahun; Yihenew Wondie
In today's telecommunications landscape, the 4G Core Network (4G CN) is a critical enabler of high-speed data connectivity for mobile internet users. Unfortunately, many telecom operators continue to struggle with the persistent issue of insufficient Effective Throughput (ET), which degrades service quality and user experience. Motivated by these issues, this thesis investigates the causes of ET degradation in various flows within the 4G CN, emphasizing a shift toward data-driven insight and proactive network management strategies. To tackle the issue, a Long Short-Term Memory (LSTM) network method is proposed for time-series classification of performance degradation patterns in the 4G CN. To improve model interpretability and transparency, SHapley Additive exPlanations (SHAP) is applied to quantify the contribution of individual features to the model's decisions. The methodology includes collecting real-world network performance data, training the LSTM model, and applying SHAP to characterize how each performance metric influenced the behaviours contributing to throughput changes. The resulting LSTM model achieves good predictive accuracy of 92.10% and a ROC AUC score of 97.90%, confirming its effectiveness for identifying anomalies in ET. The SHAP analysis highlights Client-Side Long RTT, Uplink TCP Out-of-Order Rate, and network packet behaviours as important contributors to insufficient ET. As a result, telecom operators can focus their efforts on high-priority performance indicators for targeted optimization.

Item Optimizing Energy Consumption in WDM Optical Networks through a Combined Sleep Mode and Traffic Grooming Approach: A Case Study in Ethio Telecom Optical Backbone Network (Addis Ababa University, 2025-06) Danayit Girma; Yalemzewd Negash (PhD)
In recent years, advancing technologies and rising traffic demand have increased the need to deploy capable, yet power-intensive, telecom networks. To support this demand, optical backbone networks have been deployed with high resource provisioning. The aggressively growing traffic demand, and the need for networks to handle it, poses a significant challenge for the telecommunications sector. This research focuses on optimizing energy utilization in the optical transport network through a combined traffic grooming and sleep mode approach, with Ethio Telecom's optical transport network as the case study. Traffic grooming aggregates sub-rate traffic onto higher-rate lines so that the idled lines can be put into sleep mode.
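To make the grooming-plus-sleep idea concrete, here is a toy Python sketch under invented assumptions (a single link, 100 Gb/s line cards, first-fit packing, illustrative power figures, and one card per demand as the ungroomed baseline); the thesis's actual model, which also weighs latency and blocking probability, is far richer than this.

```python
# Toy illustration: groom sub-rate demands onto line cards, sleep the rest.
# All capacities, demands, and power figures below are illustrative assumptions.
LINE_RATE_GBPS = 100
ACTIVE_POWER_W, SLEEP_POWER_W = 150.0, 10.0

demands_gbps = [10, 10, 25, 40, 10, 50, 25, 10, 40, 10]  # sub-rate demands on one link

def first_fit_grooming(demands, capacity):
    """Pack demands onto as few line cards as possible using first-fit decreasing."""
    residual = []  # remaining capacity of each active card
    for d in sorted(demands, reverse=True):
        for i, free in enumerate(residual):
            if d <= free:
                residual[i] -= d
                break
        else:
            residual.append(capacity - d)  # open a new active card
    return len(residual)

cards_baseline = len(demands_gbps)                       # ungroomed: one card per demand
cards_groomed = first_fit_grooming(demands_gbps, LINE_RATE_GBPS)
sleeping = cards_baseline - cards_groomed

p_baseline = cards_baseline * ACTIVE_POWER_W
p_groomed = cards_groomed * ACTIVE_POWER_W + sleeping * SLEEP_POWER_W
print(f"active cards: {cards_groomed}, sleeping: {sleeping}, "
      f"energy saving: {100 * (1 - p_groomed / p_baseline):.1f}%")
```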
The combined approach is formulated through a mathematical model that accounts for the latency and blocking probability introduced by implementing the combined method. A multi-objective optimization problem is developed that expresses the conflicting nature of power versus latency and blocking probability through weights. The model optimizes energy utilization along with other optical resources such as line cards and wavelengths. The combined framework achieves an average energy reduction of 50% across network nodes compared to the baseline configuration. The energy savings at the nodal level vary and are analyzed under different weightings of power, latency, and blocking probability. The savings are translated into monetary terms by quantifying the reductions in energy bills and the idle cards freed for future scalability. The proposed method optimizes energy utilization while maintaining QoS.

Item Co-Change Recommendation Using Dependency Graph and Concept Lattice (Addis Ababa University, 2025-06) Tewodros Meheretu; Sosina Mengestu (PhD)
Software change requests, prompted by the need for new functionality, bug fixes, or enhanced performance, are immediate drivers of software evolution. Resolving such emergent issues can be extremely difficult because source code entities are highly interconnected. Entities that must change together to resolve a change request are called co-changes. Identifying and fixing bugs typically involves tedious, manual processes; to address this, bug localization techniques have emerged to help developers. Bug localization seeks to find the source code entities relevant to a given bug report. However, bug localization alone provides no means of extracting co-changing entities from the ranked list when applying a bug fix. This research attempts to identify co-changing source code entities during bug fixing using a source-code-only approach, based on a method-call dependency graph and a concept lattice of conceptual relationships. The lattice supports a concept-lattice-based ranking that recommends the top ten co-change candidates. The study evaluates whether fusing the two structures improves performance and compares the results to those of a machine learning approach using a Siamese network trained with contrastive loss. The Siamese model is evaluated with both its native similarity-based ranking and an Approximate Nearest Neighbor (ANN) ranking strategy. The comparative study revealed considerable differences in the success rates of the proposed approaches. The combined approach, which uses both the concept lattice and the dependency-graph-based dependencies, achieved a success rate of 90.33%, followed by the concept-lattice-based method at 84.05%. The dependency-graph-based approach in isolation lagged behind with a success rate of 39.6%. The Siamese network models varied with the inclusion of context: without context, the Siamese network achieved a success rate of only 24.63%; with contextual embeddings from GraphCodeBERT, performance increased to 65.21%; and ANN search over the embeddings achieved a success rate of 71.01%.
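The contrastive-loss Siamese ranking mentioned above can be sketched as follows, assuming PyTorch and pre-computed code-entity embeddings (for example GraphCodeBERT-style vectors); the dimensions, names, and random training batch are illustrative, not the thesis's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    """Maps pre-computed code-entity embeddings into a shared ranking space."""
    def __init__(self, in_dim=768, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, emb_dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def contrastive_loss(z1, z2, label, margin=1.0):
    # label = 1 for entities that historically co-changed, 0 otherwise
    d = F.pairwise_distance(z1, z2)
    return (label * d.pow(2) + (1 - label) * F.relu(margin - d).pow(2)).mean()

# One toy training step on random stand-in embeddings.
encoder = SiameseEncoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
a, b = torch.randn(32, 768), torch.randn(32, 768)
labels = torch.randint(0, 2, (32,)).float()
loss = contrastive_loss(encoder(a), encoder(b), labels)
loss.backward()
optimizer.step()
# At recommendation time, candidates are ranked by embedding distance to the
# changed entity, either exhaustively or via an approximate-nearest-neighbour index.
```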
Lastly, the combined and concept-lattice-based approaches showed the highest accuracy, with the Siamese network performing competitively only when enriched with contextual information.

Item Detection of SIM-BOX Fraud Using Deep Learning (Addis Ababa University, 2025-06) Haile Welay; Tsegamlak Terefe (PhD)
In underdeveloped countries, the telecommunications infrastructure is often subsidized by the high cost of incoming international calls. However, this situation has led to an increase in SIM box fraud, where attackers use VoIP-GSM gateways, known as "SIM boxes", to illegally route international calls through local wired data connections. The research presented here developed models for the classification of Call Detail Records (CDRs) in order to identify fraudulent subscribers with higher accuracy. Three classification techniques, namely Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and autoencoder models, were combined with three user-aggregation datasets (4-hour, daily, and monthly) to build the models. The results show that LSTM performed best among the three algorithms, with an accuracy of 99.81% and a lower false positive rate on the monthly aggregated dataset.

Item Assessment and Optimization of Infrastructure and Radio Resource Utilization Efficiency for LTE/LTE-A Cellular Networks, the Case of Ethio Telecom (Addis Ababa University, 2024-12) Elias Hagos; Yalemzewd Negash (PhD)
Customers' rapid requests for a variety of primary services (basic and required, such as calls, SMS/MMS, and data) and secondary services (luxury and business-class services, such as high-bandwidth video, video and camera surveillance, and online systematic monitoring and access) over mobile networks necessitate thoughtful, timely responses from suppliers. The existence of many standards makes it difficult for 3G mobile networks to roam and interoperate with other mobile networks. LTE, however, is a global standard that provides global mobility and service portability without binding customers to proprietary hardware; moreover, it is primarily a synthesis of multiple previous technologies rather than a completely new standard. As the most recent and enhanced service, LTE/LTE-A/LTE-A Pro can satisfy consumer demands for high-speed data, low latency, bandwidth efficiency, multimedia broadcast and unicast, and high-quality cellular broadband services with affordable prices and revenue structures. This is achieved by utilizing key technologies including orthogonal frequency-division multiplexing (OFDM), RF analysis and carrier aggregation, adaptive modulation, multiple-input multiple-output (MIMO), HetNets and CoMP, and others. MIMO, for instance, can significantly increase channel and spectrum efficiency. Physical-layer parameters, including modulation, cyclic prefix, frequency, bandwidth, and coding rate, all contribute to the high throughput. To increase spectral efficiency, the system uses OFDMA as the access mechanism in the downlink and Single-Carrier Frequency-Division Multiple Access (SC-FDMA) in the uplink. The signal detection methods employed include the Sphere Decoding (SD) detector, the QR decomposition with M-algorithm maximum likelihood detector (QRM-MLD), the Maximum Likelihood (ML) detector, the Zero-Forcing (ZF) detector, and the Minimum Mean Square Error (MMSE) detector.
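For reference, the linear detectors named above are commonly written as follows for an M-stream MIMO link with received vector y = Hx + n, noise variance sigma^2, and unit-average-energy symbols drawn from a constellation X; this is the textbook form, not notation taken from the thesis.

```latex
\begin{align}
  \hat{\mathbf{x}}_{\mathrm{ML}}   &= \arg\min_{\mathbf{x}\in\mathcal{X}^{M}} \lVert \mathbf{y}-\mathbf{H}\mathbf{x}\rVert^{2}, \\
  \hat{\mathbf{x}}_{\mathrm{ZF}}   &= \mathcal{Q}\!\left[(\mathbf{H}^{H}\mathbf{H})^{-1}\mathbf{H}^{H}\mathbf{y}\right], \\
  \hat{\mathbf{x}}_{\mathrm{MMSE}} &= \mathcal{Q}\!\left[(\mathbf{H}^{H}\mathbf{H}+\sigma^{2}\mathbf{I})^{-1}\mathbf{H}^{H}\mathbf{y}\right],
\end{align}
```

where Q[.] denotes symbol-wise quantization to the constellation; sphere decoding and QRM-MLD approximate the exhaustive ML search at reduced complexity.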
Even the ideal signal detector exhibits a higher bit-error rate (BER) in a correlated MIMO-OFDM scenario, even as suppliers test and research possible future R&D optimizations and service enhancements. To upgrade toward LTE 4.5G and 5G and offer premium services with high bandwidth, high QoS, low cost, low jitter, and error-free, customer-acceptable quality, operators and service providers are now innovating, analyzing, and optimizing their LTE/LTE-A systems. As a result, consumers around the globe are enjoying the newest innovations, and engineers and technologists are researching and presenting their results and breakthroughs through theses, research, and seminars. This thesis discusses LTE/LTE-A in this context, presenting evaluations, visualizations, and analyses that use various research techniques and algorithms to trace availability, and then draws conclusions from the analysis. Based on the analysis, it is feasible to optimize the existing infrastructure and available resources to deliver fast, dependable, and accessible services such as SMS/MMS, calls, and high-bandwidth data at low cost, with very little jitter and delay, and with seamless system integration. In the framework, market analysis and LTE-Advanced radio network link budget dimensioning are conducted using the COST-231 Hata model; existing LTE traffic, standards, and demographic studies are conducted for both macro and small cells; and the required system bandwidth is estimated and computed. The thesis then offers critiques, suggestions, and discussion of the outcomes, followed by reports and presentations of the work. Building on this study, it becomes feasible to conduct a detailed survey, observation, and optimization of the efficiency of LTE/LTE-A infrastructure and radio resource consumption in Ethio Telecom's cellular mobile networks, and to analyze optimizations for LTE/LTE-A upgrades toward LTE-A Pro and 5G. In addition, the thesis demonstrates how LTE/LTE-A resource optimization can serve as a technical reference and as a means of disseminating engineering research and findings.

Item Performance Evaluation of Optimization Algorithms for BGP Slow Convergence Problem (Addis Ababa University, 2022-06) Mihret Gebre; Yalemzewd Negash (PhD)
Border Gateway Protocol (BGP) is an inter-domain routing protocol that provides routing and reachability information within and between Autonomous Systems (ASes). The existing inter-domain routing architecture faces a major challenge due to slow convergence during network failures; slow routing convergence results in intermittent loss of connectivity, increased packet loss, and latency. The Minimum Route Advertisement Interval (MRAI) timer limits the number of messages propagated by BGP speakers, and convergence time can be characterized in terms of MRAI rounds. Thus, an optimal MRAI timer setting plays a vital role in improving BGP convergence time. This thesis evaluates and analyzes the performance of optimization algorithms for determining an optimum MRAI timer value that minimizes the convergence time without affecting the number of update messages. The dataset was gathered from a network topology simulated with different MRAI values using Graphical Network Simulator-3 (GNS3).
An Adaptive Neuro-Fuzzy Inference System (ANFIS) is trained with the dataset to generate a model. Three optimization algorithms, namely Artificial Bee Colony (ABC), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO), are then applied to the model to obtain the optimum MRAI timer value. The implementation was performed in MATLAB. The results indicate that the PSO model outperforms ABC and GA, reducing the running time and the convergence effort required to complete the search process. The experimental results and analysis also show that, with the optimized MRAI value, BGP convergence time improves by 55% and packet loss by 19% compared to default BGP.

Item Machine Learning Based Soft Failure Detection by Exploiting Optical Channel Forward Error Correction Data (Addis Ababa University, 2023-10) Dawit Tadesse; Yalemzewd Negash (PhD)
The performance of optical channels degrades because of soft failures such as filter failures, laser drift, and system aging. If such degradation is handled correctly and promptly, soft failures will not affect services, so soft failure detection is a crucial element of protection against failures in optical channels. Traditional approaches, however, struggle with this task because of their limited ability to adapt to the dynamic behaviour of soft failures, their need for expert manual intervention, and the vastly increased volume of optical performance data. The aim of this thesis is to detect soft failures with machine learning by exploiting optical channel forward error correction (FEC) data. Soft failure detection is explored using three ML algorithms: support vector machines (SVM), artificial neural networks (multilayer perceptron), and random forests (RF). The input to the ML algorithms is the pre-FEC bit error rate (Pre-FEC BER) captured from FEC data on real optical channels. Feature labelling and extraction were implemented based on the behaviour of a time-series window. A stratified shuffle split cross-validation approach is used to optimize and validate algorithm performance in terms of the confusion matrix, accuracy, and model building time. RF with the significant features is the best method, with a validation accuracy of 99.2% and a standard deviation of 0.49%; it also has lower computational complexity, using only 12 features, and a building time of 17.5 ms.

Item User Behavior-Based Insider Threat Detection Using Few Shot Learning (Addis Ababa University, 2023-08) Eden Teklemariam; Fitsum Assamnew (PhD)
Insider threats are among the most difficult cyber threats to counteract, since they originate from an organization's own trusted employees, who know its structure and systems, and they often cause significant losses. The problem of insider threat detection has long been researched in both the security and data mining communities, yet existing studies face challenges from the lack of labeled datasets, imbalanced classes, and difficult feature representation. Machine learning approaches depend on manual feature engineering, which is time-consuming and requires expertise, while deep learning approaches depend on large amounts of labeled and balanced training data. In insider threat detection, the number of malicious users compared to normal ones is severely imbalanced in both real-world scenarios and the available insider threat datasets.
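Class imbalance of this kind is also why the optical soft-failure study above validates with a stratified shuffle split, which preserves the class ratio in every train/test split. A minimal sketch, assuming scikit-learn, with synthetic data standing in for the windowed Pre-FEC BER features:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedShuffleSplit, cross_val_score

# Synthetic, heavily imbalanced stand-in for windowed Pre-FEC BER features:
# roughly 5% of windows are labelled as soft failures.
X, y = make_classification(n_samples=2000, n_features=12, weights=[0.95, 0.05],
                           random_state=0)

cv = StratifiedShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)

# F1 is more informative than plain accuracy when the failure class is rare.
scores = cross_val_score(clf, X, y, cv=cv, scoring="f1")
print(f"F1: {scores.mean():.3f} +/- {scores.std():.3f}")
```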
Insider threat data also often includes many features, which makes it high-dimensional and makes the relevant features difficult to represent. This work proposes a novel approach in which a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network act independently on the publicly available CERT (Computer Emergency Response Team) release 4.2 insider threat dataset for feature extraction, combined with a few-shot-learning-based detector for insider threat detection. We concatenate the features extracted by both deep learning models and apply feature selection to retain the most relevant features. We use a Siamese neural network (SNN), the 'twin' network of few-shot learning, to detect malicious insiders. Experiments use three datasets: CNN-extracted features, LSTM-extracted features, and selected features (obtained after feature concatenation and selection), and we compare our model with baseline models such as RNN, isolation forest, and XGBoost. On the CNN-extracted features, the proposed model performs best, with an F1 score of 68%, 18% better than the worst performer, isolation forest, and a false negative rate (FNR) of 0.006. On the LSTM-extracted features, the SNN achieves an F1 score of 69%, 11% better than the isolation forest, which performs worst with an F1 score of 58% and an FNR of 0.2. On the selected-features dataset, the proposed model achieves the highest F1 score of 87%, 10% higher than the weakest baseline (XGBoost) at 77%. In terms of FNR, the lowest value obtained is 0.11 with the CNN-extracted features datasets.

Item E-service Impact Assessment Framework: Case of Digital Government of Ethiopia (E-IAF: CDGE) (Addis Ababa University, 2023-02) Eshetu Tekle; Mesfin Kifle (PhD)
Computer applications known as "E-services" are developed to offer effective electronic access to services. The Ethiopian government provides a variety of services to its people: some are fully automated, some partially automated, and others manual. Because of public dissatisfaction with government services, the government works to support those services through ICT in order to reduce the time citizens spend finding out about them and to make them easily accessible everywhere, at any time. Eventually, the manual and partially automated services will become E-services, offering users many benefits such as removing time and location barriers and reducing costs. For the past five years, MiNT has been implementing E-services in Ethiopia. The issue is that there is no mechanism to assess the outcomes of these E-services and their impact on the country's economy, politics, and social life. The objective of this study is to prepare an impact assessment methodology and create an adapted E-Service Impact Assessment Framework. The methodology uses statistical techniques to evaluate and analyze quantitative data.
From among the various studies analyzed, the framework "Towards the Development of a Citizen-Centric Framework for Evaluating the Impact of E-Government: A Case Study of Developing Countries" was chosen as the basis for adapting the E-Service Impact Assessment Framework to the Ethiopian context, because it emphasizes the citizen's point of view and was created in the context of developing countries. However, that model does not fully fit the Ethiopian scenario, for example with respect to the governmental, technological, and social readiness of the country. The researcher therefore identified the additional characteristics required to assess the impact of the E-services introduced during the past five years in Ethiopia, weighed the importance of each additional attribute for the Ethiopian context, and finally prepared an E-service Impact Assessment Framework for the case of the Digital Government of Ethiopia.

Item Deep Learning Based User Mobility Prediction (Addis Ababa University, 2023-09) Feleku Mulu; Tsegamlak Terefe (PhD)
Telecommunication service providers mainly emphasize providing uninterrupted network access with the maximum attainable quality of service. With this in mind, service providers often monitor and utilize information acquired from user mobility patterns to manage network resources effectively and to predict users' future locations. For instance, user mobility information is used to reduce the cost of paging, manage bandwidth resources, and support efficient planning. With the current increase in the number of devices connected to mobile networks, telecom service providers are expected to carefully monitor and utilize user mobility patterns in order to improve the quality of service provided to their customers. With this understanding, this thesis proposes using neural networks to predict user mobility, which helps increase the performance of mobility analysis in cellular networks and, in turn, is expected to improve the study and understanding of user mobility. In general, we intend to provide useful insights into how users migrate across various geographic areas and how they interact with the network infrastructure supplied by ethiotelecom by constructing neural-network-based user mobility prediction models. To meet this objective, we used mobility data obtained from Call Detail Records (CDR) to forecast the future mobility of users (devices) as a sequential time series. Our experimental results suggest that a neural network based on one-dimensional convolutional layers combined with LSTM layers (Conv-LSTM) is an effective tool for user mobility analysis using datasets extracted from CDRs. Indeed, Conv-LSTM networks take advantage both of an LSTM's ability to capture long- and short-term dependencies in time-series data and of the convolutional layers' strength in extracting localized features from complicated, non-linear datasets.

Item Machine Learning for Improved Root Cause Analysis of LTE Network Accessibility and Integrity Degradation (Addis Ababa University, 2023-09) Fikreaddis Tazeb; Dereje Hailemariam (PhD)
Long Term Evolution (LTE) networks are essential for enabling high-speed, reliable communication and data transmission.
However, the accessibility and integrity of LTE networks can degrade due to a variety of factors, such as congestion, coverage, and configuration problems. Root cause analysis (RCA) is a process for identifying the underlying causes of degradation, but it can be time-consuming and labor-intensive. Machine learning can enhance RCA by identifying patterns and trends in data that point to the root causes of problems, yet limited work exists on machine-learning-enabled RCA for LTE networks. This thesis proposes a machine-learning-enabled approach, specifically a Convolutional Neural Network (CNN) combined with SHapley Additive exPlanations (SHAP), for RCA of LTE network performance degradation. The approach was evaluated using key performance indicator (KPI) and counter data collected from the LTE network of ethio telecom, a major operator in Ethiopia. The main causes of reduced network accessibility are failures caused by the Mobility Management Entity (MME), the average number of users, and handover failures. Similarly, the underlying causes of degraded accessibility at the cell level are failures caused by the MME, control channel element (CCE) utilization, and paging utilization. For network integrity, which is measured by user throughput, the main causes of degradation are a high number of active users, high downlink Physical Resource Block (PRB) utilization, poor Channel Quality Indicator (CQI), and coverage issues. At the cell level, the main factors are downlink PRB utilization, unfavorable CQI values, and a high downlink block error rate. For the given data, the model's sensitivity for network accessibility and integrity at the cell level is 82.8% and 95.5%, respectively. These results demonstrate the potential of the proposed approach to accurately identify degradation instances. The proposed deep learning and SHAP approach offers reusability, support for high-dimensional data, geographic scalability, and fine time resolution for improved performance analysis in networks of all sizes. Network operators can improve network performance by identifying and addressing the root causes of degradation.

Item Infrastructure and Spectrum Sharing for Coverage and Capacity Enhancement in Multi-Operator Networks (Addis Ababa University, 2023-08-31) Mahider Abera; Dereje Hailemariam (PhD)
The rapid growth of mobile data traffic is putting a strain on wireless networks. Infrastructure and spectrum sharing are two promising ways to alleviate this strain. Infrastructure sharing refers to the co-deployment and operation of network infrastructure, such as base stations and other radio access equipment, by different mobile network operators (MNOs); spectrum sharing refers to the use of the same spectrum by multiple MNOs; and full sharing involves sharing both infrastructure and spectrum among MNOs. Sharing between operators is nowadays used for cost optimization and technology refresh in developed markets and for coverage and capacity enhancement in emerging markets. In Ethiopia, the country's two MNOs are not meeting the quality of service (QoS) targets set by the national communications authority, and most customers are dissatisfied with the poor QoS, especially for mobile data services. Infrastructure and spectrum sharing are effective and efficient ways to achieve the required QoS.
This thesis presents an analytical model for the infrastructure, spectrum, and full sharing scenarios and investigates the performance of the three scenarios in terms of coverage probability and mean user rate. The results show that infrastructure sharing provides superior coverage compared with the spectrum and full sharing scenarios, whereas full sharing and spectrum sharing give the highest mean user rate. The findings can therefore help MNOs decide on the best way to share infrastructure and spectrum in order to meet their capacity and coverage requirements.

Item Performance and Cost-Benefit Analysis of Software Defined Network over Conventional Network at the National Datacenter - Ethiopia (Addis Ababa University, 2023-08) Maru Seyoum; Mesfin Kifle (PhD)
With the continuous expansion of data center services and the aging of equipment, there is a growing need to enhance network services and development. Despite their widespread use, traditional IP networks exhibit limitations in network management efficiency and cost-effectiveness, presenting challenges in configuring networks according to established protocols and in promptly addressing changes in traffic load and network issues. Adding complexity, current networks often have a vertically integrated structure. To address these issues, the emerging concept of Software Defined Networking (SDN) has gained traction in network design and administration: SDN offers the ability to allocate network resources dynamically, in contrast with the limitations of conventional approaches. This study examines the performance challenges faced by the existing national data center network and demonstrates the potential improvements achievable through the adoption of SDN. It also employs a modeling methodology to assess the value of these network enhancements, considering factors such as capital and operational expenses and overall ownership costs in the two scenarios. The techno-economic assessment (TEA) model, implemented using tools such as Mininet, Observium, and MS Excel, comprises dedicated sections for cost-benefit modeling. The integration of SDN in Ethiopia's national data center is a strategic move that aligns with both social and economic benefits, and the consideration of downtime costs adds a crucial dimension to its implementation. From a social standpoint, the adoption of SDN reflects the government's commitment to enhanced public service delivery and inclusivity: by optimizing network performance, SDN enables citizens to access crucial services efficiently, fostering a sense of equal opportunity. This technological advancement supports good governance by enabling effective resource allocation and timely service provision, ultimately enhancing citizens' trust in the government's dedication to meeting their needs. The study thus emphasizes the significant potential of SDN adoption for strengthening both the technical and social aspects of national data center networks.

Item Genetic Algorithm-Based Small Cell Switch-Off at Low Load Traffic for Energy-Efficient Heterogeneous Network (Addis Ababa University, 2023-09) Na'ol Getu; Yalemzewd Negash (PhD)
The capacity of macro base stations (MBS) is insufficient to handle the growing traffic, particularly in densely populated cities like Addis Ababa, Ethiopia.
To address this, small cells are deployed and the cellular network becomes a heterogeneous cellular network (HetNet). HetNets combine MBSs with low-power small cells to enhance system capacity. However, the increasing energy consumption of cellular networks poses a challenge, leading to growing interest in energy efficiency among network operators. To mitigate this, a technique called small base station (SBS) on/off switching is employed to conserve energy and improve network efficiency: by turning off or putting unused cells into sleep mode during periods of low traffic, power consumption can be reduced, and when traffic or user demand increases, these cells can be switched back on to accommodate the higher demand. This thesis proposes a switch-on/off technique driven by traffic load fluctuations using a Genetic Algorithm (GA). The technique balances energy savings against network performance, returning base stations to operational status when traffic or demand increases. A MATLAB (R2021a) simulation shows the algorithm can save 24% of energy consumption, enhancing system efficiency, and the resulting performance improvement of the small cells is evaluated.

Item ILP Based Optimal BBU Pool Planning for Cloud-RAN Deployment: in the Context of Ethiotelecom (Addis Ababa University, 2023-08) Ruhama Girma; Yihenew Wondie (PhD)
Cloud radio access network (C-RAN) is a novel mobile network architecture that decouples baseband units (BBUs) from their corresponding cell sites and moves baseband processing to a virtualized, shared central location. This increases the scalability and manageability of wireless networks and significantly reduces both the capital and operational costs of mobile network operators (MNOs). Although BBU centralization enables power savings, it imposes much higher bandwidth requirements on the fronthaul network. Since the distributed radio access network (D-RAN) incurs higher construction, maintenance, and operational costs, a RAN architectural change is needed; in addition to previous research in this area, this thesis considers the impact of wireless interface delay on BBU centralization. We investigate the optimal planning of BBU pools for C-RAN deployment in the context of Ethiotelecom, show how to determine the required number of BBU pools for different use cases using an integer linear programming (ILP) optimization problem, and calculate the budget required to deploy a single BBU pool. We also determine which functional split is cost-effective for the fronthaul network while still providing the centralization benefits. Simulation results show that the wireless interface delay has an impact on BBU centralization and should therefore be considered in the BBU pool planning process. The number of BBU pools required varies with the use case: for the chosen area, ultra-reliable low-latency communication (URLLC) services require 5 BBU pools, real-time functions such as voice over LTE (VoLTE) require 3 BBU pools, and for latency-insensitive applications such as email, 2 BBU pools are sufficient to meet the performance requirements. Furthermore, functional split option 7.2 was selected.

Item CDR Based Recommender System for Mobile Package Service Users (Addis Ababa University, 2023-09) Saba Mulugeta; Sosina Mengistu (PhD)
Due to increased competition, telecom operators are continually introducing new products and services.
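Returning to the ILP-based BBU pool planning abstract above, the style of formulation it describes can be sketched with the PuLP package as below; the sites, candidate pool locations, delays, and latency budget are all invented for illustration and are not the thesis's data.

```python
import pulp

# Toy instance: every number below is an illustrative assumption.
sites = ["s1", "s2", "s3", "s4", "s5", "s6"]
pools = ["p1", "p2", "p3"]
delay_us = {  # one-way fronthaul delay from candidate pool to cell site, microseconds
    "p1": {"s1": 90, "s2": 120, "s3": 300, "s4": 260, "s5": 80, "s6": 310},
    "p2": {"s1": 280, "s2": 100, "s3": 110, "s4": 95, "s5": 290, "s6": 120},
    "p3": {"s1": 150, "s2": 240, "s3": 130, "s4": 310, "s5": 140, "s6": 90},
}
MAX_DELAY_US = 250  # example latency budget for a delay-sensitive use case

prob = pulp.LpProblem("bbu_pool_planning", pulp.LpMinimize)
open_pool = pulp.LpVariable.dicts("open", pools, cat="Binary")
assign = pulp.LpVariable.dicts("assign", [(p, s) for p in pools for s in sites], cat="Binary")

prob += pulp.lpSum(open_pool[p] for p in pools)              # minimise opened pools
for s in sites:
    prob += pulp.lpSum(assign[(p, s)] for p in pools) == 1   # each site served by one pool
for p in pools:
    for s in sites:
        prob += assign[(p, s)] <= open_pool[p]               # only opened pools may serve
        if delay_us[p][s] > MAX_DELAY_US:
            prob += assign[(p, s)] == 0                      # respect the latency budget

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({p: int(open_pool[p].value()) for p in pools})
```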
To make their services easy to use, to meet customer requirements, and to satisfy customers' needs in terms of payment, operators have launched various telecommunication packages. Operators offer so many packages that customers are unaware of some of them; even useful packages may go unnoticed. To overcome this, a recommender system is needed that directly notifies customers based on their interests. Most existing research addresses recommendation systems for web service users based on user ratings. This work proposes a mobile package service recommendation system for telecom customers. The proposed system has two phases. The first phase creates a relationship between customer usage and mobile packages by grouping customers based on their usage with the k-means clustering algorithm; the elbow method is used to determine the number of clusters for each service. The second phase builds a classification model that recommends mobile packages to users; two months of CDR data were used to train classification models with the random forest (RF) and K-nearest neighbor (KNN) algorithms. The evaluation shows that KNN outperformed RF for the weekly and monthly data usage plans, with F1 scores of 90.4% and 96%, respectively, whereas RF outperformed KNN for the daily data plan with an F1 score of 86.9%. On the other hand, RF outperformed KNN with F1 scores of 95.10% and 99.60% for the daily and monthly voice usage plans, respectively, while KNN performed better than RF on the weekly voice usage plan with an F1 score of 94.30%. Overall, the strengths of each algorithm differ across usage scenarios within the voice and data service domains.

Item Mobile Network Backup Power Supply Battery Remaining Useful Time Prediction Using Machine Learning Algorithms (Addis Ababa University, 2023-08) Abel Hirpo; Dereje Hailemariam (PhD)
Base transceiver stations (BTSs) in mobile cellular systems are critical infrastructure for providing reliable service to mobile users. However, BTSs can be disrupted by electric power supply interruptions, which can lead to degraded quality of service (QoS) and quality of experience (QoE) for users. The reliability of the batteries used in BTSs is affected by a number of factors, including instability of the primary power supply, temperature fluctuations, battery aging, the number of charging and discharging cycles (CDC), and the depth of discharge (DOD). These factors degrade a battery's state of health (SOH), which in turn affects its remaining useful time (RUT). This can lead to service disruptions for mobile users, as the BTS may not have power available to operate during an outage. To address this issue, supervised machine learning (ML) models were developed to predict the RUT of lithium iron phosphate (LFP) batteries installed in BTSs, trained on data extracted from the power and environment (P&E) monitoring tool NetEco (the iManager NetEco data center infrastructure management system). The models can then be used to predict the RUT of a battery, helping to ensure that batteries are replaced before they fail to deliver their designed capacity. In this study, three ML models were evaluated: linear regression, random forest regression, and support vector regression.
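A rough sketch of that model comparison is given below, assuming scikit-learn; the synthetic features (temperature, cycle count, depth of discharge) and the RUT target are invented stand-ins for the NetEco-derived data, not the thesis's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in: [temperature C, cycle count, depth of discharge] -> RUT (hours)
rng = np.random.default_rng(0)
X = rng.uniform([15, 0, 0.1], [45, 2000, 1.0], size=(1500, 3))
y = 8.0 - 0.002 * X[:, 1] - 2.0 * X[:, 2] - 0.03 * (X[:, 0] - 25) + rng.normal(0, 0.2, 1500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
models = {
    "linear regression": LinearRegression(),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "support vector regression": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    err = mean_absolute_percentage_error(y_te, model.predict(X_te))
    print(f"{name}: test error {100 * err:.2f}%")
```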
The support vector regression model provided the best overall prediction performance, with a test error of 4.85%, suggesting that support vector regression is a promising tool for predicting the RUT of LFP batteries used in BTSs.

Item Prediction of LTE Cell Degradation Using Hidden Markov Model (Addis Ababa University, 2023-08) Abera Dibaba; Dereje Hailemariam (PhD)
Long-Term Evolution (LTE) networks play a crucial role in providing high-speed wireless communication services. However, operators often have incomplete awareness of the overall state of their LTE networks due to the vast number of cells, the dynamic nature of LTE network operations, complex interference scenarios, and the huge number of key performance indicators (KPIs). This thesis presents a novel approach to predicting LTE cell degradation levels using Hidden Markov Models (HMM). HMMs are a class of probabilistic models that can capture the dynamic nature of LTE networks: they model the sequential occurrence of cell degradation events, providing network operators with statistical insight into the future state of cells based on historical data. To develop the prediction model, KPIs such as average traffic volume, the number of Reference Signal Received Power (RSRP) measurement reports, and the number of outgoing handover requests were used as observation data. These KPIs are clustered into six unique observation sequences, which form the basis for model training. The Baum-Welch algorithm is then applied to train the HMM and obtain the parameters for modeling cell degradation. The results demonstrate the performance of the HMM prediction model: with an average observation length of 23, the HMM achieved an average accuracy of 93.12%, an F1 score of 91.81%, and a precision of 92.82%. These metrics illustrate the effectiveness of the proposed HMM approach in predicting LTE cell degradation levels. This research addresses the challenges of monitoring and analyzing LTE cell degradation events by proposing a comprehensive methodology for LTE cell degradation prediction using HMMs and KPIs. Timely predictions enable operators to proactively identify and address potential network issues, optimizing network performance and enhancing quality of service. The main limitations of this study are that it was conducted on a small number of cells and with only four degradation states; future work should test the approach on a larger number of cells with various KPIs and more complex states, using different types of HMMs.
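To illustrate the Baum-Welch training and Viterbi decoding workflow used in this abstract (and, in a similar form, in the BTS power-failure study at the top of this list), the sketch below assumes the hmmlearn package with discrete observation symbols; the state count, symbol count, and synthetic sequences are illustrative only.

```python
import numpy as np
from hmmlearn import hmm

# Synthetic stand-in: discrete observation symbols 0..5 (e.g., K-means cluster
# indices of KPI vectors), arranged as three per-cell sequences of length 100.
rng = np.random.default_rng(0)
obs = rng.integers(0, 6, size=(300, 1))      # shape (n_samples, 1)
lengths = [100, 100, 100]

# Baum-Welch (EM) training of a 3-state HMM over categorical observations.
# Older hmmlearn releases expose this model as MultinomialHMM rather than CategoricalHMM.
model = hmm.CategoricalHMM(n_components=3, n_iter=200, random_state=0)
model.fit(obs, lengths)

# Viterbi decoding: most likely hidden-state path for a new observation window.
window = rng.integers(0, 6, size=(23, 1))
states = model.predict(window)               # predict() uses the Viterbi decoder by default
print(states)
```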