Browsing by Author "Dereje Hailemariam (PhD)"
Now showing 1 - 13 of 13
Item: Comparative Study of Machine Learning Techniques for Path Loss Prediction (Addis Ababa University, 2023-11). Ademe Wondimneh; Dereje Hailemariam (PhD).
Path loss describes the reduction in signal strength between a transmitter and a receiver. Predicting this loss is a crucial task in wireless and mobile communication, providing input for resource allocation and network planning. Deterministic and empirical models are the two fundamental classes of propagation model used to calculate path loss, and they present a trade-off between accuracy and computational complexity. Machine learning models face the same classic tension between accuracy and complexity, yet have significant potential for path loss prediction because they can learn complicated non-linear relationships between input features and target values. This study investigates the application of machine learning techniques for path loss prediction in Addis Ababa LTE networks. Artificial neural networks (ANNs), random forest regression (RFR), and multiple linear regression (MLR) are employed as machine learning models and compared with the widely used COST 231 empirical model. Data for training and testing are obtained through measurements from Addis Ababa LTE networks. The performance of the proposed models is evaluated using statistical metrics such as root mean squared error (RMSE), mean absolute error (MAE), and R-squared (R2). The results demonstrate that the RFR model outperforms the other models in prediction accuracy, achieving an MAE of 3.48, an RMSE of 5.35, and an R2 of 0.77. The ANN model also exhibits satisfactory performance, with an MAE of 4.19, an RMSE of 5.78, and an R2 of 0.71. The COST 231 model, on the other hand, exhibits lower prediction accuracy. In terms of computational complexity, ANNs are found to be the most computationally intensive, MLR is the simplest of the evaluated machine learning models, and RFR falls between the two.
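As an illustration of the abstract's random forest setup, the following is a minimal sketch of a path loss regressor evaluated with MAE, RMSE, and R2. The feature set (distance, carrier frequency, antenna height) and the synthetic data are hypothetical stand-ins for the drive-test measurements, not the thesis dataset.

```python
# Illustrative sketch: random forest path loss regression with MAE/RMSE/R^2 evaluation.
# Feature names and the synthetic data below are hypothetical stand-ins for measurements.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(0)
n = 2000
distance_km = rng.uniform(0.05, 5.0, n)        # Tx-Rx separation
freq_mhz = rng.choice([1800.0, 2600.0], n)     # LTE carrier frequency
h_bs = rng.uniform(20.0, 45.0, n)              # base-station antenna height
# Synthetic log-distance path loss with noise, only for demonstration.
path_loss_db = (128.1 + 37.6 * np.log10(distance_km) + 0.01 * freq_mhz
                - 0.1 * h_bs + rng.normal(0, 6, n))

X = np.column_stack([distance_km, freq_mhz, h_bs])
X_train, X_test, y_train, y_test = train_test_split(
    X, path_loss_db, test_size=0.2, random_state=42)

model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
pred = model.predict(X_test)

print("MAE :", mean_absolute_error(y_test, pred))
print("RMSE:", np.sqrt(mean_squared_error(y_test, pred)))
print("R2  :", r2_score(y_test, pred))
```

The same split and metrics can be reused to compare an MLR or ANN baseline against the forest, mirroring the comparison reported in the abstract.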
Item: Deep Learning-based Cell Performance Degradation Prediction (Addis Ababa University, 2022-07). Betelehem Dagnaw; Dereje Hailemariam (PhD).
In light of rapid developments in the telecommunications sector, there is a growing volume of generated data as well as high customer expectations regarding both cost and Quality of Service (QoS). For Mobile Network Operators (MNOs), the changing dynamics of radio networks pose challenges in coping with the increased number of network faults and outages, both of which lead to performance degradation and increased operational expenditures (OPEX). Human expertise is required to diagnose, identify, and fix faults and outages; however, the increasing density of mobile cells and the diversification of cell types are making this approach less feasible, both financially and technically. In this paper, relying on the power of deep learning and the availability of large radio network data at MNOs, we propose a system that predicts the performance degradation of cells using key performance indicators (KPIs). Data collected from the Universal Mobile Telecommunications Service (UMTS) network of an operator located in Addis Ababa, Ethiopia, is used to build the models in the system. The proposed system consists of a multivariate time series forecasting model, which forecasts KPIs in advance, and a cell performance degradation detection model, which detects anomalous records in the KPI data based on the forecasting model outputs. Convolutional Long Short-Term Memory (ConvLSTM) and LSTM Autoencoders are cascaded for prediction and degradation detection. The results show that the system is capable of predicting KPIs with a Root Mean Square Error (RMSE) of 0.896 and a Mean Absolute Error (MAE) of 0.771, and of detecting degradation with 98% accuracy. This research can therefore contribute significantly to improving network failure management systems by predicting the impact of upcoming cell performance degradations on network service before they occur.
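A minimal Keras sketch of one ingredient of the cascade described above: an LSTM autoencoder that flags anomalous KPI windows by reconstruction error (the ConvLSTM forecaster is omitted for brevity). The window shape, threshold rule, and random data are hypothetical.

```python
# Illustrative sketch: LSTM autoencoder that flags anomalous KPI windows by reconstruction error.
# Window length, KPI count, threshold rule, and the random data are hypothetical.
import numpy as np
from tensorflow.keras import layers, models

timesteps, n_kpis = 24, 6                      # e.g., 24 hourly samples of 6 KPIs
x_train = np.random.rand(512, timesteps, n_kpis).astype("float32")  # placeholder "normal" windows

model = models.Sequential([
    layers.Input(shape=(timesteps, n_kpis)),
    layers.LSTM(32),                           # encoder: compress the window
    layers.RepeatVector(timesteps),            # repeat latent vector per timestep
    layers.LSTM(32, return_sequences=True),    # decoder
    layers.TimeDistributed(layers.Dense(n_kpis)),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, x_train, epochs=5, batch_size=64, verbose=0)

# Reconstruction error per window; windows above a percentile threshold are treated as degraded.
recon = model.predict(x_train, verbose=0)
errors = np.mean((recon - x_train) ** 2, axis=(1, 2))
threshold = np.percentile(errors, 99)
print("flagged windows:", int(np.sum(errors > threshold)))
```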
Item: Design and Performance Evaluation of Power-aware Routing Protocols for Wireless Sensor Networks – GAICH and GCH (Addis Ababa University, 2011-10). Seifemichael Bekele; Dereje Hailemariam (PhD).
In recent years, advances in wireless communications and electronics have enabled the development of low-cost, low-power, multifunctional wireless sensor networks (WSNs). As nodes in sensor networks are equipped with a limited power source, efficient utilization of power is a very important issue for extending the network lifetime. It is for these reasons that researchers are currently focusing on the design of power-aware protocols and algorithms for sensor networks. In this thesis, two routing protocols that provide efficient energy management for WSNs are proposed. The first protocol, GAICH (Genetic Algorithm Inspired Clustering Hierarchy), uses a genetic algorithm to create clusters that are optimal in terms of energy consumption. The other, GCH (Grid Clustering Hierarchy), creates clusters by forming virtual grids in which nodes share the role of cluster head in a round-robin fashion. These protocols have been implemented in MATLAB using a standard radio energy dissipation model for WSN simulation. Performance comparisons have been made with two existing routing protocols, LEACH and Direct Transmission, on different performance metrics. Simulation results show that GAICH and GCH outperform LEACH in terms of total packets sent to the base station and network lifetime. Moreover, different techniques for optimizing energy consumption in WSNs are suggested.
Item: Evaluation of Traffic Load-based Multi-Objective Optimization Techniques of Carrier Components for Throughput Improvement in LTE-Advanced Networks (Addis Ababa University, 2023-09). Umerulfaruqe Shehseid; Dereje Hailemariam (PhD).
The rapid growth of global data traffic is putting a strain on Long Term Evolution (LTE) networks, which are struggling to keep up with demand. Carrier aggregation (CA) is a promising technique for improving the throughput of LTE networks by combining multiple carrier components to create a wider bandwidth. However, CA can also lead to traffic imbalance between the carrier components, which can degrade throughput. This thesis proposes a multi-objective optimization technique based on a genetic algorithm to balance the load across carrier components in LTE-Advanced networks. The proposed technique is evaluated using simulations in MATLAB and WinProp, a software tool for radio propagation analysis. The data used in this study were collected from ethio telecom and include engineering parameters such as azimuth, electrical and mechanical tilt, default transmit power, and cell-level and user-level traffic. A hotspot area in Addis Ababa around Bole, comprising 22 eNodeBs and 66 cells, was selected for the study. The results show that the joint Tx Power with Electrical Tilt technique provides, over the Tx Power technique, a 2.41 bps/Hz improvement in spectral efficiency, a 103.9% improvement in cell-edge throughput, and a 12.5% reduction in load imbalance in the cell-edge case. In the cell-mid case, the improvements are 61.34%, 14.53%, and 8.3%, respectively, and in the cell-center case, 18.84%, 2.25%, and 1.5%, respectively. The Signal-to-Interference and Noise Ratio (SINR) also improves by 105.72% and 63.51% for the cell-edge and cell-center cases, respectively. The limitations of the study are that the uplink direction, configurations with more than three carrier components, and load balancing between non-CA and CA users were not considered. The findings of this thesis can be used to improve the design and implementation of LTE-Advanced networks. The study recommends that the operator deploy the proposed technique, which significantly improves spectral efficiency, average throughput, and load imbalance, especially in the cell-edge case; it is worthwhile to use existing spectrum resources effectively rather than adding more carrier components.
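A toy genetic-algorithm loop in the spirit of the multi-objective optimization described above, searching per-cell transmit power and electrical tilt against a scalarized load-imbalance-plus-throughput objective. The fitness surrogate, parameter bounds, and weights are hypothetical placeholders rather than the thesis's WinProp-based model.

```python
# Illustrative sketch: a simple genetic algorithm over per-cell (Tx power, electrical tilt) settings.
# The fitness surrogate, parameter ranges, and weights are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_cells = 66
POP, GENS = 40, 60
P_RANGE, TILT_RANGE = (30.0, 46.0), (0.0, 10.0)   # dBm, degrees (assumed bounds)

def fitness(ind):
    power, tilt = ind[:n_cells], ind[n_cells:]
    # Placeholder surrogate: penalize load imbalance (spread of power) and reward moderate tilt.
    load_imbalance = np.std(power)
    throughput_proxy = np.mean(power) - 0.3 * np.mean((tilt - 5.0) ** 2)
    return throughput_proxy - 2.0 * load_imbalance   # weighted scalarization of the objectives

def random_individual():
    return np.concatenate([rng.uniform(*P_RANGE, n_cells), rng.uniform(*TILT_RANGE, n_cells)])

pop = [random_individual() for _ in range(POP)]
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    parents = [pop[i] for i in np.argsort(scores)[-POP // 2:]]    # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = rng.choice(len(parents), 2, replace=False)
        child = np.where(rng.random(2 * n_cells) < 0.5, parents[a], parents[b])  # uniform crossover
        child = child + rng.normal(0, 0.2, child.shape)                           # Gaussian mutation
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("best fitness:", round(fitness(best), 3))
```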
Item: Infrastructure and Spectrum Sharing for Coverage and Capacity Enhancement in Multi-Operator Networks (Addis Ababa University, 2023-08-31). Mahider Abera; Dereje Hailemariam (PhD).
The rapid growth of mobile data traffic is putting a strain on wireless networks. Infrastructure and spectrum sharing are two promising ways to alleviate this challenge. Infrastructure sharing refers to the co-deployment and operation of network infrastructure, such as base stations and other radio access equipment, by different mobile network operators (MNOs); spectrum sharing refers to the use of the same spectrum by multiple MNOs; and full sharing involves sharing both infrastructure and spectrum among MNOs. Sharing between operators is nowadays used for cost optimization and technology refreshment in developed markets and for coverage and capacity enhancement in emerging markets. In Ethiopia, the country's two MNOs are not meeting the quality of service (QoS) targets set by the national communications authority, and most customers are dissatisfied with the poor QoS, especially for mobile data services. Infrastructure and spectrum sharing are effective and efficient ways to fulfill the required QoS. This research presents an analytical model for the infrastructure, spectrum, and full sharing scenarios and investigates the performance of the three scenarios in terms of probability of coverage and mean user rate. The results show that infrastructure sharing provides superior coverage compared to the spectrum and full sharing scenarios, whereas full sharing and spectrum sharing give the highest mean user rates. The findings of this thesis can therefore help MNOs decide on the best way to share infrastructure and spectrum in order to meet their capacity and coverage requirements.
Item: Machine Learning for Improved Root Cause Analysis of LTE Network Accessibility and Integrity Degradation (Addis Ababa University, 2023-09). Fikreaddis Tazeb; Dereje Hailemariam (PhD).
Long Term Evolution (LTE) networks are essential for enabling high-speed, reliable communication and data transmission. However, the accessibility and integrity of LTE networks can degrade due to a variety of factors, such as congestion, coverage, and configuration problems. Root cause analysis (RCA) is a process for identifying the underlying causes of degradation, but it can be time-consuming and labor-intensive. Machine learning can enhance RCA by identifying patterns and trends in data that point to the root causes of problems, yet limited work exists on machine learning-enabled RCA for LTE networks. This thesis proposes a machine learning-enabled approach, specifically a Convolutional Neural Network (CNN) combined with SHapley Additive exPlanations (SHAP), for RCA of LTE network performance degradation. The approach was evaluated using key performance indicator (KPI) and counter data collected from the LTE network of ethio telecom, a major operator in Ethiopia. The main causes of reduced network accessibility are failures caused by the Mobility Management Entity (MME), the average number of users, and handover failures. Similarly, the underlying causes of degraded accessibility at the cell level are failures caused by the MME, control channel element (CCE) utilization, and paging utilization. For network integrity, which is measured by user throughput, the main causes of degradation are a high number of active users, high downlink Physical Resource Block (PRB) utilization, poor Channel Quality Indicator (CQI) values, and coverage issues. At the cell level, the main factors are downlink PRB utilization, unfavorable CQI values, and a high downlink block error rate. For the given data, the model's sensitivity for network accessibility and integrity at the cell level is 82.8% and 95.5%, respectively. These results demonstrate the potential of the proposed approach to accurately identify degradation instances. The proposed approach using deep learning and SHAP offers reusability, high-dimensionality support, geographic scalability, and time resolution for improved performance analysis in networks of all sizes. Network operators can improve network performance by identifying and addressing the root causes of degradation.
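A minimal sketch of ranking KPI counters as candidate root causes with SHAP attributions, in the spirit of the approach above, except that a gradient-boosted classifier stands in for the thesis's CNN for brevity. Feature names, labels, and data are hypothetical, and the SHAP output shape may vary slightly across library versions.

```python
# Illustrative sketch: rank KPI counters as candidate root causes with SHAP attributions.
# A gradient-boosted classifier stands in for the thesis's CNN; feature names and data are made up.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["mme_failures", "cce_utilization", "paging_utilization",
            "dl_prb_utilization", "avg_cqi", "active_users"]
X = pd.DataFrame(rng.random((1000, len(features))), columns=features)
# Hypothetical label: 1 = degraded accessibility, driven here mostly by MME failures and CCE load.
y = ((0.6 * X["mme_failures"] + 0.3 * X["cce_utilization"]
      + 0.1 * rng.random(1000)) > 0.55).astype(int)

clf = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X)          # per-sample, per-counter attributions (log-odds)
importance = np.abs(shap_values).mean(axis=0)   # mean |SHAP| per counter as a root-cause ranking

for name, score in sorted(zip(features, importance), key=lambda t: -t[1]):
    print(f"{name:20s} {score:.3f}")
```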
Item: Machine Learning for Power Failure Prediction in Base Transceiver Stations: A Multivariate Approach (Addis Ababa University, 2023-10). Sofia Ahmed; Dereje Hailemariam (PhD).
The proliferation of mobile cellular networks has had a transformative impact on economic and social activities. Base transceiver stations (BTSs) play a critical role in delivering wireless services to mobile users. However, power failures in BTSs can pose significant challenges to maintaining uninterrupted mobile services, leading to inconveniences for users and financial losses for service providers. This thesis introduces a novel approach to mitigating power system interruptions in BTSs using a machine learning-based power failure prediction framework. The framework leverages multivariate time-series data collected from the BTS power and environmental monitoring system. The methodology aims to preemptively predict power failures using three advanced machine learning techniques, namely Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM), and CNN-LSTM networks, which excel at capturing the complex temporal relationships inherent in time-series data. All three algorithms reasonably capture the temporal patterns in the data. However, the LSTM model consistently outperforms the other two, with an MSE of 0.001 and a MAPE of 1.194, albeit with a longer training time of more than three hours. The CNN-LSTM model stands out for its efficient training process, taking notably less time than the LSTM model, around two hours, and yielding an MSE of 0.001 and a MAPE of 2.528. The CNN model takes the least time to compute, with a prediction performance of 0.223 MSE and 2.843 MAPE. It is essential to highlight that this study concentrates on the predictive aspect, contributing a robust and effective predictive model tailored specifically to BTS power systems. By enabling timely maintenance actions and minimizing downtime, the proposed methodology has the potential to significantly improve the reliability of telecommunications infrastructure, ultimately leading to better user experiences and streamlined service provider operations.
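A minimal Keras sketch of the CNN-LSTM architecture family compared above, framed as one-step-ahead forecasting of a multivariate power and environment series. Layer sizes, the window length, and the random training data are hypothetical.

```python
# Illustrative sketch: a small CNN-LSTM forecaster for multivariate power/environment series.
# Window length, layer sizes, and the random training data are hypothetical.
import numpy as np
from tensorflow.keras import layers, models

window, n_features = 48, 5                       # e.g., 48 past samples of 5 monitored signals
X = np.random.rand(1000, window, n_features).astype("float32")
y = np.random.rand(1000, n_features).astype("float32")   # next-step values (placeholder)

model = models.Sequential([
    layers.Input(shape=(window, n_features)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),  # local temporal patterns
    layers.MaxPooling1D(2),
    layers.LSTM(64),                                      # longer-range dependencies
    layers.Dense(n_features),                             # predict the next multivariate sample
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))           # [MSE, MAE]
```

Dropping the convolutional front end gives the plain LSTM variant, and replacing the LSTM with a flatten-and-dense head gives the pure CNN variant, so the three models in the comparison share one data pipeline.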
Item: Machine Learning-Based Spectrum Utilization Prediction for Dynamic Spectrum Sharing (Addis Ababa University, 2023-09). Tewodros Abebe; Dereje Hailemariam (PhD).
Dynamic Spectrum Sharing (DSS) is a promising technology for improving the performance of heterogeneous wireless networks. DSS allows Fifth Generation New Radio (5G NR) to be deployed in the same frequency bands as Fourth Generation (4G) Long Term Evolution (LTE), which can help to increase cell capacity and improve overall network performance. Machine Learning (ML) can improve the efficiency of DSS by helping to predict future spectrum utilization and allocate resources accordingly. ML algorithms can be trained on historical data to identify patterns in spectrum usage and learn the behavior of different users; this information can then be used to predict future spectrum utilization and allocate resources in a way that minimizes interference and maximizes throughput. This work proposes an ML-based approach to dynamically distribute spectrum resources between 4G LTE and 5G NR users in a way that meets the traffic requirements of each user and optimizes link-level performance at varying Signal-to-Noise Ratio (SNR) points. Two ML models, namely Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) models, are developed for spectrum utilization prediction. The Resource Element (RE)-level rate-matching DSS technique is evaluated using 24-hour sample data from the ML prediction results; this assessment encompasses throughput and spectral efficiency at various SNR points. The proposed model's performance is compared with that of the static spectrum-sharing technique. The results show that the CNN-based model can serve as the best input for the DSS controller to distribute spectrum resources optimally between the two technologies. The model can predict the next six hours of eNodeB (eNB) spectrum utilization with a Root Mean Square Error (RMSE) of 1.3. Based on the prediction results, the average 4G LTE and 5G NR throughputs per day are 8.7 and 107.7216 Mbps, respectively. Furthermore, the overall cell spectral efficiency is increased to 5.82 bits/sec/Hz. LTE performance is not affected by DSS when compared to an existing non-sharing network and Static Spectrum Sharing (SSS); however, NR experiences a 32.54% performance degradation. The proposed ML-based DSS technique can significantly improve the performance of DSS by dynamically allocating spectrum resources to LTE and 5G NR users, and the CNN-based model is shown to be the best model for spectrum utilization prediction.
Item: Mobile Network Backup Power Supply Battery Remaining Useful Time Prediction Using Machine Learning Algorithms (Addis Ababa University, 2023-08). Abel Hirpo; Dereje Hailemariam (PhD).
Base transceiver stations (BTSs) in mobile cellular systems are critical infrastructure for providing reliable service to mobile users. However, BTSs can be disrupted by electric power supply interruptions, which can lead to degraded quality of service (QoS) and quality of experience (QoE) for users. The reliability of the backup batteries used in BTSs is affected by a number of factors, including the instability of the primary power supply, temperature fluctuations, battery aging, the number of charging and discharging cycles (CDC), and the depth of discharge (DOD). These factors degrade the state of health (SOH) of a battery, which in turn affects its remaining useful time (RUT). This can lead to service disruptions for mobile users, as the BTS may not have power available to operate during an outage. To address this issue, supervised machine learning (ML) models were developed to predict the RUT of lithium iron phosphate (LFP) batteries installed in BTSs, trained on data extracted from the power and environment (P&E) monitoring tool NetEco (iManager NetEco data center infrastructure management system). The ML models can then be used to predict the RUT of a battery, helping to ensure that batteries are replaced before they fail to deliver their designed capacity. In this study, three ML models were evaluated: linear regression, random forest regression, and support vector regression. The support vector regression model provided the best overall prediction performance, with a test error of 4.85%, suggesting that it is a promising tool for predicting the RUT of LFP batteries used in BTSs.
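A minimal sketch of the support vector regression setup reported to perform best above, predicting battery remaining useful time from monitoring-derived features. The feature names and synthetic data are hypothetical stand-ins for NetEco records, not the thesis dataset.

```python
# Illustrative sketch: support vector regression for battery remaining-useful-time prediction.
# Feature names and the synthetic data are hypothetical stand-ins for monitoring records.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
n = 1500
cycles = rng.uniform(0, 2000, n)            # charge/discharge cycles so far
dod = rng.uniform(0.2, 0.9, n)              # depth of discharge
temp = rng.uniform(15, 45, n)               # ambient temperature (C)
soh = rng.uniform(0.6, 1.0, n)              # state of health
# Synthetic RUT (hours) with a simple dependence on the factors above, plus noise.
rut_hours = 10.0 * soh - 0.001 * cycles - 1.5 * dod - 0.02 * temp + rng.normal(0, 0.3, n)

X = np.column_stack([cycles, dod, temp, soh])
X_tr, X_te, y_tr, y_te = train_test_split(X, rut_hours, test_size=0.2, random_state=42)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X_tr, y_tr)
print("test MAPE:", mean_absolute_percentage_error(y_te, model.predict(X_te)))
```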
Item: Prediction of Base Transceiver Station Power Supply System Failure Indicators using Deep Neural Networks for Multi-Time Variant Time Series (Addis Ababa University, 2023-11). Jalene Bekuma; Dereje Hailemariam (PhD).
The uninterrupted operation of wireless communication services relies heavily on the stability of the power supply systems of Base Transceiver Stations (BTSs). This study is dedicated to predicting potential failure indicators in BTS power systems using deep neural network architectures, such as recurrent and convolutional neural networks. The study integrates principal component analysis (PCA) for dimensionality reduction and addresses challenges related to power system failures caused by environmental factors, power fluctuations, and equipment malfunctions within the ethio telecom BTS system. The dataset spans four weeks of data from multiple sites, sampled at 5-minute intervals and obtained from the ethio telecom NetEco power monitoring system. The study details the data preprocessing steps for time series analysis, encompassing consolidation, cleaning, scaling, and dimensionality reduction using PCA, and then presents the implementation of CNN, LSTM, and CNN-LSTM models for time series prediction, thoroughly evaluating their performance and convergence. The experimental results indicate that the CNN-LSTM model surpasses both the LSTM and CNN models in predicting BTS power system failure indicators, achieving the lowest error values of 0.036 MSE, 0.189 RMSE, and 0.112 MAE. These findings show the potential of deep neural network architectures, particularly the CNN-LSTM model, to accurately predict BTS power system failure indicators thirty minutes ahead. The significance of accurate prediction models in proactively detecting failures and minimizing their impact is highlighted, contributing to the reliability and stability of BTS power supply systems for wireless communication services.
Item: Prediction of LTE Cell Degradation Using Hidden Markov Model (Addis Ababa University, 2023-08). Abera Dibaba; Dereje Hailemariam (PhD).
Long-Term Evolution (LTE) networks play a crucial role in providing high-speed wireless communication services. However, operators often have incomplete awareness of the overall state of their LTE networks due to the vast number of cells, the dynamic nature of LTE network operations, complex interference scenarios, and the huge number of key performance indicators (KPIs). This thesis presents a novel approach to predicting LTE cell degradation levels using Hidden Markov Models (HMMs). HMMs are a class of probabilistic models that can capture the dynamic nature of LTE networks by modeling the sequential occurrence of cell degradation events, giving network operators statistical insight into the future state of cells based on historical data. To develop the prediction model, KPIs such as average traffic volume, the number of Reference Signal Received Power (RSRP) measurement reports, and the number of outgoing handover requests are used as observation datasets. These KPIs are clustered into six unique observation sequences, which form the basis for model training; the Baum-Welch algorithm is then applied to estimate the HMM parameters for modeling cell degradation. The results demonstrate the performance of the HMM prediction model: with an average observation length of 23, the HMM achieved an average accuracy of 93.12%, an F1 score of 91.81%, and a precision of 92.82%. These metrics illustrate the effectiveness of the proposed HMM approach for predicting LTE cell degradation levels. This research addresses the challenges of monitoring and analyzing LTE cell degradation events by proposing a comprehensive methodology for LTE cell degradation prediction using HMMs and KPIs. The timely provision of predictions enables operators to proactively identify and address potential network issues, optimizing network performance and enhancing quality of service. The main limitations of this study are that it was conducted on a small number of cells and with only four degradation states; future work should test the approach on a larger number of cells with various KPIs and more complex states using different types of HMMs.
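A minimal sketch of fitting a discrete-observation HMM with the Baum-Welch algorithm (the EM routine behind hmmlearn's fit) to a sequence of clustered KPI symbols and then decoding the hidden degradation states. The state count mirrors the abstract's four levels, but the symbol sequence is synthetic, and the class name assumes a recent hmmlearn release (older releases expose the same behavior as MultinomialHMM).

```python
# Illustrative sketch: fit a discrete-observation HMM to clustered KPI symbols and decode states.
# Uses hmmlearn's CategoricalHMM (named MultinomialHMM in older releases); data are synthetic.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
n_states, n_symbols = 4, 6                        # 4 degradation levels, 6 KPI observation clusters
obs = rng.integers(0, n_symbols, size=(500, 1))   # placeholder symbol sequence

model = hmm.CategoricalHMM(n_components=n_states, n_iter=100, random_state=0)
model.fit(obs)                                    # Baum-Welch (EM) estimation of A, B, and pi

hidden = model.predict(obs)                       # Viterbi decoding of the most likely state path
print("transition matrix:\n", np.round(model.transmat_, 2))
print("last 10 decoded degradation states:", hidden[-10:])
```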
Item: Quality of Experience Modeling for Fixed Broadband Internet Using Machine Learning Algorithms (Addis Ababa University, 2024-04). Abayneh Mekonnen; Dereje Hailemariam (PhD).
As the demand for dependable fixed broadband internet services continues to grow, ensuring an excellent Quality of Experience (QoE) for end-users is essential. This thesis centers on QoE modeling, employing advanced machine learning techniques, specifically Support Vector Machine (SVM) and Random Forest algorithms. The study utilizes subjective assessments and Quality of Service (QoS) metrics, including latency, upload speed, download speed, uptime, packet loss, and jitter, to comprehensively understand and model the factors influencing user satisfaction. The research incorporates an exhaustive feature selector to extract pertinent features from the dataset, enhancing the precision of the models, and hyperparameter optimization is carried out through a grid search approach to fine-tune the models for optimal performance. To assess the models, a robust cross-validation methodology is implemented. The results indicate that SVM surpasses Random Forest in QoE modeling for Virtual Internet Service Providers (vISPs) such as Websprix and ZERGAW Cloud, with average accuracy scores of 92% and 70%, respectively. Conversely, Random Forest proves to be the more suitable model for predicting QoE in the case of the national ISP, ethio telecom, with an average accuracy of 88%. This comparative performance analysis offers valuable insights into the distinct strengths of each model for different service providers. The findings also indicate that combining subjective assessments and QoS metrics to model user QoE yields superior model performance and predictive outcomes compared to relying on either alone. These findings contribute to the ongoing discussion on QoE enhancement in fixed broadband internet services, providing practical recommendations for service providers based on the observed model performances. The application of machine learning, feature selection, and hyperparameter optimization techniques underscores the importance of these methodologies in customizing QoE models to specific service contexts, ultimately enhancing user satisfaction in diverse fixed broadband internet environments.
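A minimal sketch of the grid-searched SVM described above, mapping QoS metrics to a subjective QoE class with cross-validated hyperparameter tuning. Feature names, the 1-5 label scale, and the random data are hypothetical.

```python
# Illustrative sketch: grid-searched SVM mapping QoS metrics to a QoE class, with cross-validation.
# Feature names, the 1-5 label scale, and the random data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
features = ["latency_ms", "download_mbps", "upload_mbps",
            "packet_loss_pct", "jitter_ms", "uptime_pct"]
X = pd.DataFrame(rng.random((800, len(features))), columns=features)
y = rng.integers(1, 6, 800)                      # placeholder MOS-style QoE labels (1-5)

pipe = Pipeline([("scale", StandardScaler()), ("svm", SVC())])
param_grid = {"svm__C": [0.1, 1, 10],
              "svm__gamma": ["scale", 0.1, 0.01],
              "svm__kernel": ["rbf"]}

search = GridSearchCV(pipe, param_grid, cv=5, scoring="accuracy")
search.fit(X, y)
print("best params:", search.best_params_)
print("cross-validated accuracy:", round(search.best_score_, 3))
```

Swapping `SVC` for `RandomForestClassifier` and adjusting the parameter grid reproduces the Random Forest side of the comparison on the same pipeline.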
Item: Traffic-Aware Band-Level Cells On Off for Energy Saving in LTE-Advanced Networks with Inter-Band Carrier Aggregation (Addis Ababa University, 2023-09). Yilma Melaku; Dereje Hailemariam (PhD).
Mobile network operators employ Carrier Aggregation (CA) in Long Term Evolution (LTE)-Advanced (LTE-A) networks to meet the demand for high-rate mobile data from smartphones. CA allows users to use multiple LTE carriers, including fragmented component carriers (CCs) in different bands (inter-band CA), which increases the available bandwidth. However, inter-band CA requires more radio frequency (RF) chains, which rely on inefficient power amplifiers. Our survey of a real LTE-A network operated by ethio telecom, Ethiopia, found that although mains power comes from renewable sources, outages lead to a significant reliance on non-green diesel generation. Additionally, the high daily traffic load variance observed in the survey indicates the need for adaptive solutions to realize the potential power savings. Previous works have focused on switching cells on/off separately and transferring users to active cells (a non-CA scenario), or on deactivating/activating CCs at the user level, and are thus limited in considering both the network and end-user devices. This thesis proposes a novel traffic load adaptive band-level cells on/off (BLCOO) approach for the CA scenario. BLCOO optimizes the number of serving CCs to save power for RF units and user devices during off-peak hours. The energy-saving problem is formulated as a Markov decision process (MDP) with uncertain network conditions. Deep reinforcement learning algorithms, specifically Deep Q-Networks and Proximal Policy Optimization, are trained to solve the MDP for discrete and continuous evolved Node B (eNB) load states. Operator data, including resource block usage, energy consumption, and CA configuration, are used to build RF power consumption models and a custom simulation environment. The proposed algorithms are evaluated over a one-day hourly profile. The results show that, on average, 72.0% of CCs are sufficient to meet the actual traffic demand, resulting in a maximum reduction of 18.71% and an average reduction of 14.62% in RF power consumption.
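A toy tabular Q-learning loop over a drastically simplified band-level on/off MDP, standing in for the Deep Q-Network and PPO agents described above. The load trace, state discretization, and reward weights are hypothetical placeholders, not the thesis's operator-data-driven environment.

```python
# Illustrative sketch: tabular Q-learning on a toy band-level cells-on/off MDP.
# The load trace, state discretization, and reward weights are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
N_CC = 3                                   # component carriers that can be switched on/off
LOAD_BINS = 4                              # discretized eNB load levels
ACTIONS = N_CC                             # action = number of active CCs (1..N_CC)

hourly_load = 0.5 + 0.4 * np.sin(np.linspace(0, 2 * np.pi, 24))   # synthetic daily load profile

def reward(load, active_cc):
    capacity = active_cc / N_CC
    unserved = max(0.0, load - capacity)   # penalty when demand exceeds active capacity
    power = active_cc / N_CC               # proxy for RF power consumption
    return -5.0 * unserved - 1.0 * power

Q = np.zeros((LOAD_BINS, ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1

for episode in range(2000):
    for h in range(24):
        load = hourly_load[h]
        s = min(int(load * LOAD_BINS), LOAD_BINS - 1)
        a = rng.integers(ACTIONS) if rng.random() < eps else int(np.argmax(Q[s]))
        r = reward(load, a + 1)
        next_load = hourly_load[(h + 1) % 24]
        s2 = min(int(next_load * LOAD_BINS), LOAD_BINS - 1)
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])   # Q-learning update

policy = np.argmax(Q, axis=1) + 1
print("active CCs per load bin (low to high):", policy)
```

The learned policy keeps fewer CCs active in low-load bins and all CCs in the highest bin, which is the qualitative behavior the BLCOO agents exploit during off-peak hours.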