Telecommunication Engineering

Recent Submissions

  • Item
    Machine Learning Based Soft Failure Detection by Exploiting Optical Channel Forward Error Correction Data
    (Addis Ababa University, 2023-10) Dawit Tadesse; Yalemzewd Negash (PhD)
    The performance of an optical channel degrades because of soft failures such as filter failures, laser drift, and system aging. If such degradation is handled correctly and promptly, soft failures will not affect services. A crucial element in protecting optical channels against failures is soft failure detection. Traditional approaches, however, find this task difficult because of their limitations in adapting to the dynamic behavior of soft failures, their requirements for expert manual intervention, and the vast growth of optical performance data. The aim of this thesis is to detect soft failures using machine learning by exploiting optical channel forward error correction data. Soft failure detection is explored using three ML algorithms: support vector machines (SVM), artificial neural networks (multilayer perceptron), and random forests (RF). The input to the ML algorithms is the pre-forward error correction bit error rate (Pre-FEC BER) captured from forward error correction data on real optical channels. We implemented feature labelling and extraction based on the behavior of a time-series window. We used the stratified shuffle split method for cross-validation to optimize and validate algorithm performance in terms of confusion matrix, accuracy, and building time. As a result, RF with significant features, which has a validation accuracy of 99.2% and a standard deviation of 0.49%, is the best method. Besides, a lower computational complexity of 12 features and a building time of 17.5 ms were obtained.
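    A minimal sketch of the windowed-feature pipeline described above, assuming scikit-learn, synthetic Pre-FEC BER windows, and simple summary-statistic features (the thesis's exact 12 features are not reproduced here):

```python
# Illustrative sketch: windowed Pre-FEC BER features + Random Forest with
# stratified shuffle-split cross-validation (synthetic data, hypothetical features).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedShuffleSplit, cross_val_score

rng = np.random.default_rng(0)

def window_features(ber_window):
    """Summary statistics over one time-series window of Pre-FEC BER samples."""
    return [ber_window.mean(), ber_window.std(), ber_window.min(),
            ber_window.max(), np.ptp(ber_window), np.median(ber_window)]

# Synthetic stand-in for labelled windows: 0 = healthy, 1 = soft failure (drifting BER).
healthy = [rng.normal(1e-4, 1e-5, 60) for _ in range(300)]
failed = [rng.normal(1e-4, 1e-5, 60) + np.linspace(0, 5e-4, 60) for _ in range(300)]
X = np.array([window_features(w) for w in healthy + failed])
y = np.array([0] * 300 + [1] * 300)

cv = StratifiedShuffleSplit(n_splits=10, test_size=0.3, random_state=42)
clf = RandomForestClassifier(n_estimators=100, random_state=42)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"validation accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```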
  • Item
    User Behavior-Based Insider Threat Detection Using Few Shot Learning
    (Addis Ababa University, 2023-08) Eden Teklemariam; Fitsum Assamnew (PhD)
    Insider threats are among the most difficult cyber threats to counteract since they emerge from an organization's own trusted employees, who know its organizational structure and systems and can cause significant losses. The problem of insider threat detection has been researched for a long time in both the security and data mining communities. Existing studies face challenges due to a lack of labeled datasets, imbalanced classes, and feature representation. Machine learning approaches depend on manual feature engineering, which takes time and requires expertise, while deep learning approaches depend on a huge amount of labeled, balanced training data. In insider threat detection, the number of malicious users is significantly smaller than the number of normal users, both in real-world scenarios and in public insider threat datasets. Insider threat data also includes many features, which makes the data high-dimensional and the relevant features difficult to represent. In this paper, we propose a novel approach in which a CNN (Convolutional Neural Network) and an LSTM (Long Short-Term Memory) network act independently over the publicly available CERT (Computer Emergency Response Team) release 4.2 insider threat dataset for feature extraction, followed by a few-shot learning based detector for insider threat detection. We concatenate the features extracted by both deep learning models and apply feature selection to retain the most relevant features. We use a Siamese neural network (SNN), the 'twin' network of few-shot learning, to detect malicious insiders. We experiment with three datasets: CNN-extracted features (obtained by feature extraction with CNN), LSTM-extracted features (obtained by feature extraction with LSTM), and Selected features (obtained after applying feature concatenation and selection). We compare our model with baseline models such as RNN, isolation forest, and XGBoost. The experimental results show that, with the CNN-extracted features, the proposed model performs best with an F1 score of 68%, which is 18% better than the isolation forest (the worst performer), and an FNR (False Negative Rate) of 0.006. In the second experiment, using the LSTM-extracted features, the SNN achieves an F1 score of 69%, which is 11% better than the isolation forest (the worst performer, with an F1 score of 58%), and an FNR of 0.2. In the last experiment, using the Selected features, the proposed model achieves the highest F1 score of 87%, which is 10% higher than the weakest baseline (XGBoost, at 77%). In terms of FNR, the lowest value of 0.11 was obtained with the CNN-extracted features datasets.
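    A minimal PyTorch sketch of the Siamese ("twin") detector idea, assuming pre-extracted feature vectors of dimension 64; the layer sizes, pairing scheme, and toy data are illustrative assumptions, not the thesis architecture:

```python
# Minimal sketch of a Siamese ("twin") network for few-shot insider-threat
# detection, assuming pre-extracted feature vectors (e.g. CNN/LSTM outputs).
# Feature dimension and architecture are illustrative, not the thesis model.
import torch
import torch.nn as nn

class TwinEncoder(nn.Module):
    """Shared encoder applied to both inputs of the Siamese pair."""
    def __init__(self, in_dim=64, emb_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, emb_dim))
    def forward(self, x):
        return self.net(x)

class SiameseNet(nn.Module):
    """Scores pair similarity via the absolute difference of embeddings."""
    def __init__(self, in_dim=64):
        super().__init__()
        self.encoder = TwinEncoder(in_dim)
        self.head = nn.Linear(32, 1)
    def forward(self, a, b):
        ea, eb = self.encoder(a), self.encoder(b)
        return torch.sigmoid(self.head(torch.abs(ea - eb)))

model = SiameseNet()
loss_fn = nn.BCELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One toy training step on random pairs: label 1 = same class, 0 = different.
a, b = torch.randn(16, 64), torch.randn(16, 64)
labels = torch.randint(0, 2, (16, 1)).float()
opt.zero_grad()
loss = loss_fn(model(a, b), labels)
loss.backward()
opt.step()
print(f"toy pair-similarity loss: {loss.item():.4f}")
```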
  • Item
    E-service Impact Assessment Framework: Case of Digital Government of Ethiopia (E-IAF: CDGE)
    (Addis Ababa University, 2023-02) Eshetu Tekle; Mesfin Kifle (PhD)
    Computer applications known as "E-services" are developed to offer effective access to services electronically. The Ethiopian government provides a variety of government services to its people; some of these services are fully automated, some are partially automated, and others are manual. Because of public dissatisfaction with government services, the government works to support those services through ICT in order to reduce the time citizens spend finding out about them and to make them easily accessible everywhere, at any time. At a certain point, all those manual and partially automated services will become E-services, offering their users a large number of benefits, such as removing time and location barriers and reducing costs. For the past five years, MiNT has been implementing E-services in Ethiopia. The issue is that there is no mechanism to assess the outcomes of E-services and their impact on the country's economy, politics, and social life. The objective of this study is to prepare an impact assessment methodology and create an adapted E-Service Impact Assessment Framework. The methodology for this study uses statistical techniques to evaluate and analyze quantitative data. Among the studies reviewed, the framework from "Towards the Development of a Citizen-Centric Framework for Evaluating the Impact of E-Government: A Case Study of Developing Countries" was chosen as the basis for adapting the E-Service Impact Assessment Framework to the Ethiopian context, because it emphasizes the citizen's point of view and was created in the context of developing countries. However, that model does not fully fit the Ethiopian scenario, for example with respect to the governmental, technological, and social readiness of the country. By identifying the additional attributes required to assess the impact of the E-services introduced during the past five years in Ethiopia, the researcher developed an E-service impact assessment framework, weighted the importance of each additional attribute in the Ethiopian context, and finally prepared the E-service Impact Assessment Framework for the Digital Government of Ethiopia.
  • Item
    Deep Learning Based User Mobility Prediction
    (Addis Ababa University, 2023-09) Feleku Mulu; Tsegamlak Terefe (PhD)
    Telecommunication service providers mainly emphasize providing uninterrupted network access with the maximum attainable quality of service. With this in mind, service providers often monitor and utilize information acquired from user mobility patterns to perform effective management of network resources and to predict users' future locations. For instance, information associated with user mobility is used to reduce the cost of paging, manage bandwidth resources, and support efficient planning. Overall, with the current increase in the number of devices connected to mobile networks, telecom service providers are expected to carefully monitor and utilize user mobility patterns in order to improve the quality of service provided to their customers. With this understanding, in this thesis we propose to utilize neural networks to predict user mobility, which helps to increase the performance of mobility analysis in cellular networks and, in turn, to improve the study and understanding of user movement. In general, we intend to provide useful insights into how users migrate across various geographic areas and how they interact with the network infrastructure supplied by Ethio telecom by constructing neural network based user mobility prediction models. To meet this objective, we used mobility data obtained from Call Detail Records (CDR) to forecast the future mobility of users (devices) as a sequential time series. Our experimental outcome suggests that a neural network based on one-dimensional convolutions combined with LSTM layers (Conv-LSTM) is an effective tool for user mobility analysis using datasets extracted from CDR. In effect, Conv-LSTM networks take advantage of both the LSTM's ability to capture long- and short-term dependencies in time-series data and the strength of convolutional layers to extract localized features from complicated, non-linear datasets.
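    A small PyTorch sketch of a Conv-LSTM style next-cell predictor of the kind described, assuming CDR-derived cell-ID sequences; the number of cells, sequence length, and layer sizes are illustrative assumptions:

```python
# Illustrative sketch: a 1-D convolution + LSTM ("Conv-LSTM") network that
# predicts a user's next serving cell from a CDR-derived location sequence.
# Sequence length, number of cells and layer sizes are assumptions.
import torch
import torch.nn as nn

NUM_CELLS, SEQ_LEN, EMB, HID = 200, 24, 32, 64

class ConvLSTMPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CELLS, EMB)                   # cell-ID -> vector
        self.conv = nn.Conv1d(EMB, EMB, kernel_size=3, padding=1)   # local movement patterns
        self.lstm = nn.LSTM(EMB, HID, batch_first=True)             # long/short dependencies
        self.out = nn.Linear(HID, NUM_CELLS)                        # next-cell logits

    def forward(self, seq):                                 # seq: (batch, SEQ_LEN) of cell IDs
        x = self.embed(seq).transpose(1, 2)                 # (batch, EMB, SEQ_LEN)
        x = torch.relu(self.conv(x)).transpose(1, 2)        # back to (batch, SEQ_LEN, EMB)
        _, (h, _) = self.lstm(x)
        return self.out(h[-1])                              # (batch, NUM_CELLS)

model = ConvLSTMPredictor()
seqs = torch.randint(0, NUM_CELLS, (8, SEQ_LEN))            # toy CDR location sequences
next_cell = torch.randint(0, NUM_CELLS, (8,))
loss = nn.CrossEntropyLoss()(model(seqs), next_cell)
loss.backward()
print(f"toy next-cell prediction loss: {loss.item():.3f}")
```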
  • Item
    Machine Learning for Improved Root Cause Analysis of LTE Network Accessibility and Integrity Degradation
    (Addis Ababa University, 2023-09) Fikreaddis Tazeb; Dereje Hailemariam (PhD)
    Long Term Evolution (LTE) networks are essential for enabling high-speed, reliable communication and data transmission. However, the accessibility and integrity of LTE networks can degrade due to a variety of factors, such as congestion, coverage, and configuration problems. Root cause analysis (RCA) is a process for identifying the underlying causes of degradation, but it can be time-consuming and labor-intensive. Machine learning can enhance RCA by identifying patterns and trends in data that point to the root causes of problems, yet limited work exists on machine learning-enabled RCA for LTE networks. This thesis proposes a machine learning-enabled approach, specifically Convolutional Neural Networks (CNN) and SHapley Additive exPlanations (SHAP), for RCA of LTE network performance degradation. The approach was evaluated using key performance indicator (KPI) and counter data collected from the LTE network of ethio telecom, a major operator in Ethiopia. The main causes of reduced network accessibility are failures caused by the Mobility Management Entity (MME), the average number of users, and handover failures. Similarly, the underlying causes of degraded accessibility at the cell level are failures caused by the MME, control channel element (CCE) utilization, and paging utilization. For network integrity, which is measured by user throughput, the main causes of degradation are a high number of active users, high downlink Physical Resource Block (PRB) utilization, poor Channel Quality Indicator (CQI), and coverage issues. At the cell level, the main factors are downlink PRB utilization, unfavorable CQI values, and a high downlink block error rate. For the given data, the model's sensitivity for network accessibility and integrity at the cell level is 82.8% and 95.5%, respectively. These results demonstrate the potential of the proposed approach to accurately identify degradation instances. The proposed approach using deep learning and SHAP offers reusability, support for high-dimensional data, geographic scalability, and fine time resolution for improved performance analysis in networks of all sizes. Network operators can improve network performance by identifying and addressing the root causes of degradation.
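    An illustrative sketch of the SHAP attribution step, with a random-forest regressor standing in for the thesis's CNN for simplicity; the KPI/counter names and synthetic data are assumptions (requires the shap package):

```python
# Illustrative sketch of SHAP-based root-cause attribution: rank which
# KPIs/counters contribute most to predicted accessibility degradation.
# A random-forest regressor stands in for the thesis's CNN; feature names are assumed.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
features = ["mme_failures", "avg_users", "ho_failures", "cce_util", "paging_util", "dl_prb_util"]

# Synthetic KPI/counter samples; degradation driven mainly by MME and handover failures.
X = rng.normal(size=(500, len(features)))
y = 2.0 * X[:, 0] + 1.0 * X[:, 2] + 0.2 * rng.normal(size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)              # shape: (samples, features)

importance = np.abs(shap_values).mean(axis=0)       # mean |SHAP| per feature
for name, score in sorted(zip(features, importance), key=lambda t: -t[1]):
    print(f"{name:>14s}: {score:.3f}")
```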
  • Item
    Infrastructure and Spectrum Sharing for Coverage and Capacity Enhancement in Multi-Operator Networks
    (Addis Ababa University, 2023-08-31) Mahider Abera; Dereje Hailemariam (PhD)
    The rapid growth of mobile data traffic is putting a strain on wireless networks. Infrastructure and spectrum sharing are two promising ways to alleviate this challenge. Infrastructure sharing refers to the co-deployment and operation of network infrastructure, such as base stations and other radio access equipment, by different mobile network operators (MNOs). Spectrum sharing refers to the use of the same spectrum by multiple MNOs. Full sharing involves sharing both infrastructure and spectrum among MNOs. Sharing between operators is nowadays used for cost optimization and technology refreshment in developed markets and for coverage and capacity enhancement in emerging markets. In Ethiopia, the country's two MNOs are not meeting the quality of service (QoS) targets set by the nation's communications authority, and most customers are dissatisfied with the poor QoS, especially for mobile data services. Infrastructure and spectrum sharing are effective and efficient ways to fulfill the required QoS. The research presented in this thesis develops an analytical model for the infrastructure, spectrum, and full sharing scenarios and investigates the performance of the three scenarios in terms of coverage probability and mean user rate. The results show that infrastructure sharing provides superior coverage compared to the spectrum and full sharing scenarios, whereas full sharing and spectrum sharing give the highest mean user rates. Therefore, the findings of this thesis can help MNOs decide on the best way to share infrastructure and spectrum in order to meet their capacity and coverage requirements.
  • Item
    Performance and Cost-Benefit Analysis of Software Defined Network over Conventional Network at the National Datacenter -Ethiopia
    (Addis Ababa University, 2023-08) Maru Seyoum; Mesfin Kifle (PhD)
    With the continuous expansion of data center services and the aging of equipment, there is a growing need to enhance network services and development. Despite their widespread use, traditional IP networks exhibit limitations in network management efficiency and cost-effectiveness, presenting challenges in configuring networks according to established protocols and promptly addressing changes in traffic load and network issues. Adding complexity, current networks often have a vertically integrated structure. To address these issues, an emerging concept known as Software Defined Networking (SDN) has gained traction in network design and administration. SDN offers the ability to dynamically allocate network resources, in contrast with the limitations of conventional approaches. This paper delves into the performance challenges faced by the existing national data center network and demonstrates the potential improvements that can be achieved through the adoption of SDN. Additionally, the study employs a modeling methodology to assess the value of these network enhancements, considering factors such as capital and operational expenses and overall ownership costs in the two scenarios. The techno-economic assessment (TEA) model, implemented using tools such as Mininet, Observium, and MS Excel, comprises dedicated sections for cost-benefit modeling. The integration of Software-Defined Networking in Ethiopia's national data center is a strategic move that aligns with both social and economic benefits, and the consideration of downtime costs adds a crucial dimension to its implementation. From a social standpoint, the adoption of SDN reflects the government's commitment to enhanced public service delivery and inclusivity. By optimizing network performance, SDN enables citizens to access crucial services efficiently, fostering a sense of equal opportunity. This technological advancement supports good governance by enabling effective resource allocation and timely service provision, ultimately enhancing citizens' trust in the government's dedication to meeting their needs. The paper thus emphasizes the significant potential of SDN adoption for bolstering both the technical and social aspects of national data center networks.
  • Item
    Genetic Algorithm-Based Small Cell Switch-Off at Low Load Traffic for Energy-Efficient Heterogeneous Network
    (Addis Ababa University, 2023-09) Na’ol Getu; Yalemzewd Negash (PhD)
    The capacity of macro base stations (MBS) is insufficient to handle the growing traffic, particularly in densely populated cities like Addis Ababa, Ethiopia. To address this, small cells are deployed, turning the cellular network into a heterogeneous cellular network (HetNet). HetNets combine MBSs with low-power small cells to enhance system capacity. However, the increasing energy consumption of cellular networks poses a challenge, leading to a growing interest in energy efficiency among network operators. To mitigate this, a technique called small base station (SBS) on/off switching is employed to conserve energy and improve network efficiency. By turning off or putting unused cells into sleep mode during periods of low traffic, power consumption can be reduced; when traffic or user demand increases, these cells can be switched back on to accommodate the higher demand. This thesis proposes a switch-on/off technique based on traffic load fluctuations using a Genetic Algorithm (GA). It balances energy savings and network performance, returning base stations to operational status when traffic or demand increases. A simulation using Matlab (2021a) shows the algorithm can save 24% of energy consumption, enhancing system efficiency, and the resulting performance improvement of the small cells is evaluated.
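    A toy genetic-algorithm sketch of the switch-off idea, assuming a binary on/off chromosome per small cell and illustrative power and capacity figures (not the thesis's system model):

```python
# Toy genetic-algorithm sketch for low-traffic small-cell switch-off: a binary
# chromosome marks each SBS on (1) or off (0); fitness rewards switched-off
# cells but penalises traffic that the remaining cells cannot absorb.
# Power/capacity figures and GA parameters are assumptions for illustration.
import random

random.seed(0)
N_SBS, SBS_POWER, SBS_CAPACITY = 20, 30.0, 50.0            # watts, Mbps (illustrative)
traffic = [random.uniform(2, 20) for _ in range(N_SBS)]    # low-load offered traffic

def fitness(chrom):
    saved = sum(SBS_POWER for bit in chrom if bit == 0)
    offloaded = sum(t for bit, t in zip(chrom, traffic) if bit == 0)
    spare = sum(SBS_CAPACITY - t for bit, t in zip(chrom, traffic) if bit == 1)
    penalty = max(0.0, offloaded - spare) * 10.0            # unserved traffic is costly
    return saved - penalty

def evolve(pop_size=40, generations=60, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_SBS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                      # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_SBS)
            child = a[:cut] + b[cut:]                       # one-point crossover
            child = [1 - g if random.random() < p_mut else g for g in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("cells switched off:", best.count(0), "fitness:", round(fitness(best), 1))
```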
  • Item
    ILP Based Optimal BBU Pool Planning for Cloud-RAN Deployment: in the Context of Ethiotelecom
    (Addis Ababa University, 2023-08) Ruhama Girma; Yihenew Wondie (PhD)
    Cloud radio access network (C-RAN) is a novel mobile network architecture that decouples baseband units (BBUs) from their corresponding cell sites and moves the baseband processing to a virtualized and shared central location. This increases the scalability and manageability of wireless networks and significantly reduces both the capital and operational costs of mobile network operators (MNOs). Although BBU centralization enables power savings, it imposes much higher bandwidth requirements on the fronthaul network. Since the distributed radio access network (D-RAN) incurs higher building, maintenance, and operational costs, a RAN architectural change is needed; in addition to previous research in this area, this thesis considers the impact of wireless interface delay on BBU centralization. We investigate the optimal planning of BBU pools for C-RAN deployment in the context of Ethiotelecom, show how to optimally determine the required number of BBU pools for different use cases using an integer linear programming (ILP) optimization problem, and calculate the budget required to deploy a single BBU pool. We also determine which functional split is cost-effective for the fronthaul network while providing the centralization benefits. Simulation results show that the wireless interface delay has an impact on BBU centralization and should therefore be considered in the BBU pool planning process. The number of BBU pools required varies depending on the use case: for the chosen area, ultra-reliable and low-latency communication (URLLC) services require 5 BBU pools and real-time functions such as voice over long-term evolution (VoLTE) require 3 BBU pools, whereas for latency-insensitive applications such as email, 2 BBU pools are sufficient to meet the performance requirements. Furthermore, split option 7.2 was selected as the cost-effective functional split for the fronthaul network.
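    A minimal facility-location style ILP sketch of the BBU pool planning step, using PuLP with synthetic fronthaul delays and an assumed latency budget; the thesis's actual constraints and cost terms are not reproduced:

```python
# Minimal ILP sketch (facility-location style) for BBU pool placement: choose
# the fewest pool sites such that every cell site is assigned to a pool whose
# fronthaul delay meets the latency budget. Delays and the budget are synthetic.
import random
import pulp

random.seed(0)
sites = range(12)          # cell sites needing a BBU
candidates = range(5)      # candidate BBU pool locations
LATENCY_BUDGET = 0.25      # ms, illustrative (tighter budgets suit URLLC-like services)
delay = {(i, j): random.uniform(0.05, 0.30) for i in sites for j in candidates}

prob = pulp.LpProblem("bbu_pool_planning", pulp.LpMinimize)
open_pool = pulp.LpVariable.dicts("open", candidates, cat="Binary")
assign = pulp.LpVariable.dicts("assign", [(i, j) for i in sites for j in candidates], cat="Binary")

prob += pulp.lpSum(open_pool[j] for j in candidates)               # minimise pools opened
for i in sites:
    prob += pulp.lpSum(assign[i, j] for j in candidates) == 1      # each site served once
    for j in candidates:
        prob += assign[i, j] <= open_pool[j]                       # only to opened pools
        if delay[i, j] > LATENCY_BUDGET:
            prob += assign[i, j] == 0                              # latency-infeasible links

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("status:", pulp.LpStatus[prob.status])
print("pools needed:", int(pulp.value(prob.objective)))
```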
  • Item
    CDR Based Recommender System for Mobile Package Service Users
    (Addis Ababa University, 2023-09) Saba Mulugeta; Sosina Mengistu (PhD)
    Due to increased competition, telecom operators are continually introducing new products and services. To make their services easy to use, to meet customer requirements, and to satisfy their customers' needs in terms of payment, operators have launched various telecommunication packages. Telecom operators have so many packages that customers are unaware of them all; some packages may go unnoticed even if they are useful. To overcome such problems, we need a recommender system that directly notifies customers based on their interests. Most research has been conducted on recommendation systems for web service users based on user ratings. In this paper, we propose a mobile package service recommendation system for customers. The proposed recommendation system has two phases. The first is creating a relationship between customer usage and mobile packages by grouping customers based on their usage; to create this relationship, we used the k-means clustering algorithm, with the elbow method used to determine the number of clusters for each service. The second phase is building a classification model that recommends mobile packages to users. Two months of CDR data were used to build the classification model using random forest (RF) and K-nearest neighbor (KNN) classifier algorithms. The evaluation result shows KNN outperformed RF for weekly and monthly data usage plans with F1 scores of 90.4% and 96%, respectively, whereas RF outperformed KNN for daily plans with an F1 score of 86.9%. On the other hand, RF outperformed KNN with F1 scores of 95.10% and 99.60% for daily and monthly voice usage plans, respectively, while KNN showed better performance than RF on the weekly voice usage plan with an F1 score of 94.30%. Generally, the strengths of each algorithm differ across usage scenarios within the voice and data service domains.
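    A compact sketch of the two-phase approach (k-means with the elbow method, then a classifier that recommends a package), assuming scikit-learn and synthetic usage features:

```python
# Sketch of the two-phase recommender: (1) cluster users by usage and inspect
# the elbow curve, (2) train a classifier that maps usage features to the
# recommended package (here, the cluster label). Data and k range are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
usage = rng.gamma(shape=2.0, scale=200.0, size=(1000, 3))    # e.g. daily MB, minutes, SMS

# Phase 1: elbow method over candidate cluster counts.
for k in range(2, 7):
    inertia = KMeans(n_clusters=k, n_init=10, random_state=0).fit(usage).inertia_
    print(f"k={k}: inertia={inertia:,.0f}")

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(usage)
package = kmeans.labels_                                     # usage group ~ package class

# Phase 2: classifier recommends a package for new/unseen usage records.
X_tr, X_te, y_tr, y_te = train_test_split(usage, package, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("macro F1:", round(f1_score(y_te, clf.predict(X_te), average="macro"), 3))
```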
  • Item
    Mobile Network Backup Power Supply Battery Remaining Useful Time Prediction Using Machine Learning Algorithms
    (Addis Ababa University, 2023-08) Abel Hirpo; Dereje Hailemariam (PhD)
    Base transceiver stations (BTSs) in mobile cellular systems are critical infrastructure for providing reliable service to mobile users. However, BTSs can be disrupted by electric power supply interruptions, which can lead to degraded quality of service (QoS) and quality of experience (QoE) for users. The reliability of the battery used in BTSs is affected by a number of factors, including the instability of the primary power supply, temperature fluctuations, battery aging, the number of charging and discharging cycles (CDC), and the depth of discharge (DOD). As a result of these factors, the state of health (SOH) of a battery is impacted, which in turn affects the remaining useful time (RUT) of the battery. This can lead to service disruptions for mobile users, as the BTS may not have power available to operate during a power outage. To address this issue, supervised machine learning (ML) techniques were developed to predict the RUT of lithium iron phosphate (LFP) batteries installed in BTSs, using models trained on data extracted from the power and environment (P&E) monitoring tool NetEco (iManager NetEco data center infrastructure management system). The ML models can then be used to predict the RUT of a battery, helping to ensure that batteries are replaced before they fail to deliver the designed capacity. In this study, three ML models were evaluated: linear regression, random forest regression, and support vector regression. The support vector regression model provided the best overall prediction performance, with a test error of 4.85%. This suggests that support vector regression is a promising tool for predicting the RUT of LFP batteries used in BTSs.
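    An illustrative support-vector-regression sketch for RUT prediction, assuming scikit-learn and a synthetic relationship between cycle count, depth of discharge, temperature, and RUT (not NetEco data):

```python
# Illustrative support-vector-regression sketch for battery remaining-useful-time
# prediction from monitoring features (cycle count, depth of discharge, temperature).
# The synthetic relationship and feature set are assumptions, not NetEco data.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
n = 800
cycles = rng.uniform(0, 2000, n)
dod = rng.uniform(0.2, 1.0, n)
temp = rng.uniform(15, 45, n)
X = np.column_stack([cycles, dod, temp])
# Toy ground truth: RUT shrinks with cycle count, deep discharges, and heat.
rut_hours = 8.0 - 0.002 * cycles - 2.0 * dod - 0.05 * (temp - 25) + rng.normal(0, 0.2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, rut_hours, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05)).fit(X_tr, y_tr)
err = mean_absolute_percentage_error(y_te, model.predict(X_te)) * 100
print(f"test MAPE: {err:.2f}%")
```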
  • Item
    Prediction of LTE Cell Degradation Using Hidden Markov Model
    (Addis Ababa University, 2023-08) Abera Dibaba; Dereje Hailemariam (PhD)
    Long-Term Evolution (LTE) networks play a crucial role in providing high-speed wireless communication services. However, operators often have incomplete awareness of the overall state of their LTE networks due to the vast number of cells, the dynamic nature of LTE network operations, complex interference scenarios, and the huge number of key performance indicators (KPIs). This thesis presents a novel approach to predicting LTE cell degradation levels using Hidden Markov Models (HMMs). HMMs are a class of probabilistic models that can capture the dynamic nature of LTE networks: they model the sequential occurrence of cell degradation events, giving network operators statistical insight into the future state of cells based on historical data. To develop our prediction model, we used KPIs such as average traffic volume, the number of Reference Signal Received Power (RSRP) measurement reports, and the number of outgoing handover requests as observation datasets. These KPIs are clustered into six unique observation sequences, which form the basis for model training. The Baum-Welch algorithm is then applied to estimate the HMM parameters for modeling cell degradation. The results of the study convincingly demonstrate the performance of the HMM prediction model: with an average observation length of 23, the HMM achieved an average accuracy of 93.12%, an F1 score of 91.81%, and a precision of 92.82%. These metrics illustrate the effectiveness of the proposed HMM approach in predicting LTE cell degradation levels. This research addresses the challenges of monitoring and analyzing LTE cell degradation events by proposing a comprehensive methodology for LTE cell degradation prediction using HMMs and KPIs. Timely predictions enable operators to proactively identify and address potential network issues, optimizing network performance and enhancing quality of service. The main limitations of this study are that it was conducted on a small number of cells and with only four degradation states. Future work should test the approach on a larger number of cells with various KPIs and more complex states using different types of HMMs.
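    A minimal HMM sketch using hmmlearn (CategoricalHMM, hmmlearn >= 0.3 assumed): Baum-Welch training on a synthetic discretized observation sequence, then state decoding and a one-step forecast. The four hidden states and six observation symbols mirror the thesis setup; everything else is assumed:

```python
# Minimal HMM sketch: hidden states model cell degradation levels, observations
# are discretized KPI cluster indices (six symbols). Data here is synthetic.
import numpy as np
from hmmlearn.hmm import CategoricalHMM   # hmmlearn >= 0.3 assumed

rng = np.random.default_rng(0)
n_states, n_symbols = 4, 6                # degradation levels / KPI observation symbols

# Synthetic observation sequence standing in for clustered KPI data.
obs = rng.integers(0, n_symbols, size=(500, 1))

# Baum-Welch (EM) training of transition and emission probabilities.
model = CategoricalHMM(n_components=n_states, n_iter=100, random_state=0)
model.fit(obs)

# Decode the most likely degradation-state sequence and forecast one step ahead.
states = model.predict(obs)
next_state_dist = model.transmat_[states[-1]]
print("current state:", states[-1], "next-state probabilities:", np.round(next_state_dist, 2))
```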
  • Item
    Content Based Multi-Class Amharic Short Message Service Classification using Machine Learning
    (Addis Ababa University, 2023-08-31) Alem Tedla; Rosa Tsegaye (PhD)
    Short message services (SMS) are one of the most common communication methods. With the increased usage of SMS, a variety of Amharic-content spam and smishing messages have also increased. Spam SMS messages are unwanted messages received on a mobile phone, such as advertisements, promotions, and information from organizations, whereas smishing messages critically harm users and service providers; free fees, rewards, fake lottery tickets, and malicious links are among the types of smishing messages. Both types of SMS cause poor customer experiences and reduce operator revenue. Therefore, many studies have been conducted to classify short message services using foreign-language SMS datasets in order to keep and win customers and avoid revenue loss. However, the features they used are not relevant for Amharic SMS classification due to the diversity of SMS characteristics. This paper studies a model that classifies Amharic SMS using a machine learning technique. The model classifies Amharic SMS texts into three classes: ham, spam, and smishing. The model was trained on 1844 labeled messages. The features were prepared as follows: two relevant features were selected from English spam detection approaches, three new features were created, and 162 keywords were extracted from the dataset and vectorized using TF-IDF. The Random Forest (RF) classifier was then trained on the prepared features using 10-fold cross-validation. Finally, the prepared features with RF outperformed the existing approaches developed for foreign languages by 6% and achieved a 0.99 F1 score in classifying Amharic SMS.
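    A small scikit-learn pipeline sketch of the TF-IDF plus Random Forest step, with a handful of illustrative Amharic-style examples and labels standing in for the 1844-message dataset (the texts, labels, and vectorizer settings are assumptions):

```python
# Sketch of the keyword/TF-IDF + Random Forest classification step for
# ham / spam / smishing labels. The tiny Amharic examples are illustrative only.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

texts = [
    "እንኳን ደስ አለዎት ሽልማት አሸንፈዋል ሊንኩን ይጫኑ",   # smishing-style: fake prize + link
    "ነፃ ካርድ ለማግኘት አሁን ይደውሉ",                 # smishing-style
    "የቅናሽ ማስታወቂያ አዳዲስ ጥቅሎች ይግዙ",             # spam-style promotion
    "ልዩ ቅናሽ ዛሬ ብቻ",                            # spam-style
    "ነገ ስብሰባው ላይ እንገናኝ",                       # ham
    "እባክህ ስትደርስ ደውልልኝ",                       # ham
] * 10                                             # repeated to allow cross-validation
labels = ["smishing", "smishing", "spam", "spam", "ham", "ham"] * 10

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer()),                  # keyword vectorization
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
scores = cross_val_score(pipeline, texts, labels, cv=5, scoring="f1_macro")
print(f"macro F1 (5-fold CV): {scores.mean():.2f}")
```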
  • Item
    Multi-level Interval Based Load Balancing for Multi-controller SDN
    (Addis Ababa University, 2023-10) Amanuel Bekele; Amanuel Bekele (PhD)
    Software Defined Networking (SDN) is a new approach to managing and programming networks through a centralized controller. For a larger network, a single-controller architecture is inefficient because the increase in network traffic causes congestion and transmission delays. To tackle this issue, SDN architectures with multiple controllers have been introduced in recent research. However, this strategy introduces a new problem of unequal load distribution across the distributed SDN controllers. Several load balancing approaches have been developed to address this issue, but most of them are domain specific and have relatively high migration cost, communication overhead, and packet loss, with a low load balancing rate. The Improved Switch Migration Decision Algorithm (ISMDA) is one algorithm that addresses SDN load imbalance when the incoming traffic load is high (greater than 500 p/s). Its load balancing module selects the underloaded controller from the controller group by estimating the variance and average load status of the controllers, and it improves controller throughput, migration cost, and response time by running load judgment, switch selection, and controller selection modules. However, it splits the load into only two intervals, overloaded and underloaded, which limits its ability to distinguish the controller load status accurately. This paper proposes a Multi-level Interval Based Load Balancing (MIBLB) algorithm that modifies ISMDA by introducing multi-level load intervals (idle, normal, overloaded, and highly overloaded) and applying a dynamic threshold for switch migration. Dividing the load status into multi-level intervals enables accurate migration by prioritizing highly overloaded controllers, and avoids congestion and packet loss. The efficiency is evaluated in terms of network performance parameters, namely throughput, response time, and the average number of migrations. The experimental results show that MIBLB reduces the number of migrations and the controller response time of ISMDA by 6.43% and 3.53%, respectively, and increases controller throughput over ISMDA by 6.82% while sustaining an efficient load balancing rate.
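    A toy sketch of the multi-level interval idea: classify controller load into idle, normal, overloaded, or highly overloaded using thresholds derived from the mean load, then pick a migration source and destination. The loads and threshold factors are illustrative assumptions, not the MIBLB specification:

```python
# Toy sketch of multi-level load classification and switch-migration selection
# for a distributed SDN control plane. Loads and thresholds are illustrative.
import statistics

controllers = {"C1": 820.0, "C2": 310.0, "C3": 95.0, "C4": 540.0}   # packet-in rate (p/s)

def classify(loads):
    """Assign each controller a load level relative to the current mean load."""
    mean = statistics.mean(loads.values())
    levels = {}
    for name, load in loads.items():
        if load < 0.5 * mean:
            levels[name] = "idle"
        elif load < 1.0 * mean:
            levels[name] = "normal"
        elif load < 1.5 * mean:
            levels[name] = "overloaded"
        else:
            levels[name] = "highly overloaded"
    return levels

levels = classify(controllers)
print("load levels:", levels)

# Prioritise migration away from the most heavily loaded controller.
src = max(controllers, key=controllers.get)
dst = min(controllers, key=controllers.get)
if levels[src] in ("overloaded", "highly overloaded"):
    print(f"migrate one switch from {src} ({levels[src]}) to {dst} ({levels[dst]})")
```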
  • Item
    Software Defined Networks Based Seamless MPLS QoS
    (Addis Ababa University, 2023-12-12) Asheber Bekele; Sosina Mengistu (PhD)
    Internet service providers (ISPs) need more advanced MPLS networks for improved performance. The MPLS protocol provides best-effort service, but it does not guarantee traffic priority or quality. Seamless MPLS (S-MPLS) technology improves performance by integrating both the aggregation and core networks into one MPLS domain. However, it remains hard to control bandwidth usage and the path that traffic takes when the network is heavily loaded. Implementing QoS in S-MPLS can further enhance ISP network performance; however, QoS in Seamless MPLS lets headend routers in an AS optimize traffic routing without considering the needs of routers in other ASes. An SDN-based Seamless MPLS QoS approach can see the entire network topology across different domains, monitor link status, add or remove forwarding information, and guarantee end-to-end quality of service. The motivation of this work is to examine and analyze the impact of implementing QoS in seamless MPLS and in SDN-based seamless MPLS. This study examines the effects on Quality of Service (QoS) parameters through the analysis of four distinct scenarios, Seamless MPLS, QoS Seamless MPLS, SDN-based Seamless MPLS, and QoS SDN-based Seamless MPLS, in which several different domains are combined into one large MPLS domain. Simulation tools such as GNS3, Ostinato, and Wireshark are used to compare how well the four scenarios perform. The analysis results show that in QoS SDN-based Seamless MPLS, throughput is improved by 14.7%, latency by 12.8%, and packet loss by 18% compared to best-effort Seamless MPLS. Based on the research findings, it is evident that the adoption and implementation of QoS Software Defined Network (SDN) based Seamless Multi-Protocol Label Switching (MPLS) can confer various benefits in a service provider core network.
  • Item
    Techno-Economic Feasibility of Hybrid Energy System Versus Grid for a Different Level of Grid Reliability at Cellular Base Stations: A case of Ethio telecom
    (Addis Ababa University, 2023-09) Belaynesh Getachew; Bisrat Derebssa (PhD)
    The rapid growth of cellular technology demands significant attention to energy consumption in cellular networks. This is especially crucial in developing countries like Ethiopia, where the electric supply and grid power distribution are unreliable. Ethio telecom uses the grid as the primary energy source for its communication infrastructure: approximately 70% of its Base Transceiver Stations (BTS) are connected to the grid. Some BTSs operate with the grid and backup batteries, while others have standby diesel generators and backup batteries. Due to frequent grid outages, some sites run on diesel generators for long periods while service at other sites is disrupted, leading to increased operational costs and fuel consumption, degraded Quality of Service (QoS) and Quality of Experience (QoE), and reduced revenue. This study focuses on the techno-economic feasibility of a grid-connected PV hybrid energy system (HES) to provide a reliable and cost-efficient energy solution for BTSs. The sites are classified based on grid outage frequency, maintenance time, and climatic zone, and 54 representative sites are studied. Input parameters are obtained from Ethio telecom's Network Ecosystem (NetEco) and Power Engineering Department, as well as solar radiation and temperature data from the National Aeronautics and Space Administration (NASA). To design the optimal hybrid energy system and its components, optimization techniques and energy control strategies are employed using the Hybrid Optimization of Multiple Energy Resources (HOMER) Pro software tool. The aim is to minimize the Net Present Cost (NPC) and Cost of Energy (COE) of the hybrid energy system by considering different power configurations. The results indicate that the proposed Grid/PV/BSS and Grid/PV/DG/BSS configurations have lower NPC and COE than the conventional power system. The Grid/PV/DG/BSS configuration is feasible for sites with frequent grid interruptions and long grid maintenance times, and the proposed hybrid power system yields fuel savings of 5,393 to 61,843 L/yr per site.
  • Item
    Techno-Economic Analysis of Elastic Optical Network Deployment, in case of Ethio Telecom
    (Addis Ababa University, 2023-10-11) Biniyam Kiflemariam; Yalemzewd Negash (PhD)
    This thesis offers a thorough techno-economic analysis of the deployment of elastic optical networks (EONs). EONs have emerged as a promising alternative for avoiding higher operational expenditure and creating flexible communication networks. This study examines the technical and financial elements of EON deployment, assessing its viability, affordability, and possible advantages in the case of the Ethio Telecom network. To create a complete techno-economic model, a number of input factors are taken into account, including traffic demand, network topologies, transmission methods, spectrum allocation, equipment costs, operating expenses, revenue models, discount rate, and economic indicators. The techno-economic study examines the effects of many parameters on the effectiveness and financial sustainability of EON deployment using the proposed model. The thesis also discusses the difficulties and constraints of EON implementation, including system adaptability, upgradeability, and compatibility with changing network requirements. The findings help decision-makers make well-informed decisions about network infrastructure expenditures by advancing knowledge of the techno-economic elements of EON deployment.
  • Item
    Techno-economic analysis of open RAN deployment scenario: in the case of Ethiotelecom
    (Addis Ababa University, 2023-09) Seada Endris; Beneyam Berehanu (PhD)
    The telecommunications industry is characterized by the collaboration of multiple vendors to provide mobile network communication services. However, the heterogeneous integration of hardware and software from multiple vendors presents a considerable challenge, so operators typically rely on a single vendor for their radio access network (RAN) requirements. Furthermore, the increasing demand for data services adds to the complexity. In response to these challenges, the open Radio Access Network (Open RAN) emerges as a solution, offering an ecosystem that is open, flexible, intelligent, efficient, and cost-effective. In the context of Ethiotelecom, an assessment of the viability of implementing Cloud RAN, with Layer 1 (L1) and Layer 2 (L2) functionalities kept as physical network functions (PNF) and Layer 3 (L3) functionalities virtualized, has been conducted; in C-RAN, the hardware and software components are all proprietary. This study instead focuses on virtualizing all baseband functionalities based on Open RAN commercial off-the-shelf (COTS) hardware and software, and assesses the viability of deploying an Open RAN network architecture in Addis Ababa, Ethiopia. A thorough techno-economic analysis (TEA) over a study period of five years was conducted using a modified TERA model. The analysis investigated deployment scenarios in which the Open RAN Distributed Unit (DU) is pooled to an edge cloud and the Central Unit (CU) is pooled to a regional cloud. The selection of optimal locations for the vDU and vCU components considered factors including available bandwidth, fiber accessibility, and latency. Inputs such as cell site data, network configurations, hardware and software specifications, and financial parameters were used. The analysis involved network dimensioning, cost modeling, and assessment using economic metrics for each architecture. Metrics such as Net Present Value (NPV), Payback Period (PP), and Internal Rate of Return (IRR) were employed as decision-making tools and were computed using MS-Excel and MATLAB. The results indicate that D-RAN was more favorable than Open RAN, with payback periods of 2.4 and 3.5 years, respectively, because the D-RAN CAPEX of the initial investment is taken considering the depreciation rate of the existing RAN components. Both architectural approaches exhibited positive NPV and an IRR exceeding the assumed discount rate of 10%. The findings of this study indicate the profitability of the existing RAN architecture within the study period.
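    A plain-Python sketch of the economic metrics used (NPV, payback period, IRR), with illustrative cash flows rather than the thesis's Ethiotelecom inputs:

```python
# Toy techno-economic evaluation sketch: NPV, simple payback period and IRR
# (by bisection) for a five-year cash-flow profile. All figures are illustrative;
# the thesis computed the same metrics in MS-Excel and MATLAB.
def npv(rate, cashflows):
    """cashflows[0] is the (negative) initial investment at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_period(cashflows):
    cumulative = 0.0
    for t, cf in enumerate(cashflows):
        prev = cumulative
        cumulative += cf
        if cumulative >= 0:
            return t - 1 + (-prev / cf)     # linear interpolation within the year
    return None                             # investment never recovered

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-6):
    """Rate where NPV = 0, found by bisection (assumes NPV decreases with rate)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-10_000_000, 3_000_000, 3_500_000, 4_000_000, 4_200_000, 4_500_000]
print(f"NPV @10%: {npv(0.10, flows):,.0f}")
print(f"payback period: {payback_period(flows):.2f} years")
print(f"IRR: {irr(flows):.1%}")
```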
  • Item
    LTE Network Coverage Hole Detection Technique using Random Forest Classifier Machine Learning Algorithm
    (Addis Ababa University, 2023-10) Serkadis Bahiru; Yihenew Wondie (PhD)
    The performance of cellular networks and the evaluation of quality of service (QoS) are among the most significant concerns for a mobile operator. Hence, Long-Term Evolution (LTE) network monitoring and measurement are necessary for determining mobile network statistics and optimizing network performance in a particular area. Measuring networks is an effective way to qualify and quantify how networks are being used and how they behave, and it helps to find voids or uncovered areas, i.e., coverage holes, in the LTE network. A coverage hole is an area in which a client is unable to receive a wireless network signal. Traditionally, coverage holes are detected through drive tests, but these cost resources and time. In order to identify coverage holes in the LTE network, this study proposes a random forest classifier Machine Learning (ML) approach. The data was gathered from User Equipment (UE) at low cost via the Minimization of Drive Tests (MDT) functionality, which measures Reference Signal Received Power (RSRP), Reference Signal Received Quality (RSRQ), and Signal to Interference and Noise Ratio (SINR). RSRP and RSRQ measure the strength and quality of the reference signal received at the device's antenna, whereas SINR assesses the ratio of the desired signal strength to the total power of all interfering signals and noise. The results show that the employed model's accuracy was 96.3%. Applying hyperparameter optimization (HPO), namely random search and grid search cross-validation (CV), increased the model's accuracy to 97.7% and 99.2%, respectively.
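    A short scikit-learn sketch of the coverage-hole classifier with grid-search hyperparameter optimization over synthetic RSRP/RSRQ/SINR samples; the labelling rule and parameter grid are simplifying assumptions:

```python
# Sketch of a random-forest coverage-hole classifier with grid-search
# hyperparameter optimisation over MDT-style measurements (RSRP, RSRQ, SINR).
# The labelling rule and parameter grid are simplifying assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
n = 2000
rsrp = rng.uniform(-130, -70, n)     # dBm
rsrq = rng.uniform(-20, -3, n)       # dB
sinr = rng.uniform(-10, 30, n)       # dB
X = np.column_stack([rsrp, rsrq, sinr])
# Assumed labelling: a sample is a coverage hole if RSRP and SINR are both very poor.
y = ((rsrp < -110) & (sinr < 0)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
grid = {"n_estimators": [100, 200], "max_depth": [None, 10, 20]}
search = GridSearchCV(RandomForestClassifier(random_state=0), grid, cv=5, scoring="accuracy")
search.fit(X_tr, y_tr)
print("best params:", search.best_params_)
print("test accuracy:", round(search.score(X_te, y_te), 3))
```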
  • Item
    Machine Learning for Power Failure Prediction in Base Transceiver Stations: A Multivariate Approach
    (Addis Ababa University, 2023-10) Sofia Ahmed; Dereje Hailemariam (PhD)
    The proliferation of mobile cellular networks has had a transformative impact on economic and social activities. Base transceiver stations (BTSs) play a critical role in delivering wireless services to mobile users. However, power failures in BTSs can pose significant challenges to maintaining uninterrupted mobile services, leading to inconveniences for users and financial losses for service providers. This thesis introduces a novel approach to mitigating power system interruptions in BTSs using a machine learning-based power failure prediction framework. The framework leverages multivariate time-series data collected from the BTS power and environmental monitoring system. The methodology aims to preemptively predict power failures using three advanced machine learning techniques: Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, and CNN-LSTM networks. These methods excel at capturing the complex temporal relationships inherent in time-series data, and all three reasonably capture the temporal patterns in the data. However, the LSTM model consistently outperforms the other two models, with an MSE of 0.001 and a MAPE of 1.194, albeit with a longer training time of more than three hours. The CNN-LSTM model stands out for its efficient training process, which takes notably less time than the LSTM model (around two hours) while achieving an MSE of 0.001 and a MAPE of 2.528. The CNN model takes notably less time to compute than the other two models, with a prediction performance of 0.223 MSE and 2.843 MAPE. It is essential to highlight that this study concentrates on the predictive aspect, contributing to the field a robust and effective predictive model tailored specifically for BTS power systems. By enabling timely maintenance actions and minimizing downtime, the proposed methodology has the potential to significantly improve the reliability of telecommunications infrastructure, ultimately leading to better user experiences and streamlined service provider operations.
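    An illustrative PyTorch CNN-LSTM sketch for multivariate time-series prediction of the kind described, assuming a 48-step window of six monitoring features; the window size, feature count, layer sizes, and toy data are assumptions:

```python
# Illustrative CNN-LSTM sketch for multivariate power-system time-series
# forecasting (predicting the next sensor reading from a window of past readings).
import torch
import torch.nn as nn

N_FEATURES, WINDOW = 6, 48            # e.g. voltage, current, temperature, ...

class CNNLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(N_FEATURES, 32, kernel_size=3, padding=1)  # local patterns
        self.lstm = nn.LSTM(32, 64, batch_first=True)                    # temporal memory
        self.head = nn.Linear(64, 1)                                     # next-step value

    def forward(self, x):             # x: (batch, WINDOW, N_FEATURES)
        z = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        _, (h, _) = self.lstm(z)
        return self.head(h[-1]).squeeze(-1)

model = CNNLSTM()
windows = torch.randn(16, WINDOW, N_FEATURES)      # toy monitoring windows
targets = torch.randn(16)                          # next-step reading to predict
loss = nn.MSELoss()(model(windows), targets)
loss.backward()
print(f"toy MSE: {loss.item():.3f}")
```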