Communication Engineering

Recent Submissions

Now showing 1 - 20 of 119
  • Item
    Quality of Experience Modeling for Fixed Broadband Internet Using Machine Learning Algorithms
    (Addis Ababa University, 2024-04) Abayneh Mekonnen; Dereje Hailemariam (PhD)
    As the demand for dependable fixed broadband internet services continues to grow, ensuring an excellent Quality of Experience (QoE) for end-users is essential. This thesis centers on QoE modeling, employing advanced machine learning techniques, specifically Support Vector Machine (SVM) and Random Forest algorithms. The study utilizes subjective assessments and Quality of Service (QoS) metrics, including latency, upload speed, download speed, uptime, packet loss, and jitter, to comprehensively understand and model the factors influencing user satisfaction. The research incorporates an exhaustive feature selector to extract pertinent features from the dataset, enhancing the precision of the models. Hyperparameter optimization is carried out through a Grid Search approach to fine-tune the models for optimal performance. To assess the models, a robust cross-validation methodology is implemented. The results indicate that SVM surpasses Random Forest in QoE modeling for Virtual Internet Service Providers (vISPs) such as Websprix and ZERGAW Cloud, with average accuracy scores of 92% and 70%, respectively. Conversely, Random Forest proves to be the more suitable model for predicting QoE in the case of the national ISP, ethio telecom, with an average accuracy of 88%. This comparative performance analysis offers valuable insights into the distinct strengths of each model for different service providers. The findings also indicate that combining subjective assessments and QoS metrics to model user QoE yields superior model performance and predictive outcomes compared to relying on either in isolation. These findings contribute to the ongoing discussion on QoE enhancement in fixed broadband internet services, providing practical recommendations for service providers based on observed model performances. The application of machine learning, feature selection, and hyperparameter optimization techniques underscores the importance of these methodologies in customizing QoE models to specific service contexts, ultimately enhancing user satisfaction in diverse fixed broadband internet environments.
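    A minimal sketch of the modeling pipeline described in this abstract, assuming synthetic stand-in data, hypothetical column names (latency, jitter, etc.) and a three-level QoE label; it illustrates Grid Search tuning and cross-validation of SVM and Random Forest classifiers, not the author's actual code.
    ```python
    # Illustrative sketch: QoE classification from QoS metrics with SVM and
    # Random Forest, tuned via Grid Search and assessed with cross-validation.
    # The data, column names and label construction are assumed placeholders.
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import GridSearchCV, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n = 400
    df = pd.DataFrame({                        # synthetic stand-in for the QoS data
        "latency": rng.normal(40, 15, n),
        "upload_speed": rng.normal(10, 3, n),
        "download_speed": rng.normal(50, 15, n),
        "uptime": rng.uniform(95, 100, n),
        "packet_loss": rng.exponential(0.5, n),
        "jitter": rng.exponential(2.0, n),
    })
    # Assumed 3-level QoE label derived from download speed and latency.
    score = df["download_speed"] - 0.5 * df["latency"] + rng.normal(0, 5, n)
    df["qoe_label"] = pd.qcut(score, 3, labels=["poor", "fair", "good"])

    X, y = df.drop(columns="qoe_label"), df["qoe_label"]

    svm_grid = GridSearchCV(make_pipeline(StandardScaler(), SVC()),
                            {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01]}, cv=5)
    rf_grid = GridSearchCV(RandomForestClassifier(random_state=0),
                           {"n_estimators": [100, 300], "max_depth": [None, 10]}, cv=5)

    for name, model in [("SVM", svm_grid), ("Random Forest", rf_grid)]:
        acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
        print(f"{name}: mean accuracy = {acc.mean():.2f}")
    ```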
  • Item
    Deep Learning-based Cell Performance Degradation Prediction
    (Addis Ababa University, 2022-07) Betelehem Dagnaw; Dereje Hailemariam (PhD)
    In light of rapid developments in the telecommunications sector, there is a growing volume of generated data as well as high customer expectations regarding both cost and Quality of Service (QoS). For Mobile Network Operators (MNOs), the changing dynamics of radio networks pose challenges in coping with the increased number of network faults and outages, which both lead to performance degradation and increased operational expenditures (OPEX). Human expertise is required to diagnose, identify and fix faults and outages. However, the increasing density of mobile cells and diversification of cell types are making this approach less feasible, both financially and technically. In this work, relying on the power of deep learning and the availability of large radio network data at MNOs, we propose a system that predicts the performance degradation of cells using key performance indicators (KPIs). Data collected from the Universal Mobile Telecommunications Service (UMTS) network of an operator located in Addis Ababa, Ethiopia, is used to build models in the system. The proposed system consists of a multivariate time series forecasting model, which forecasts KPIs in advance, and a cell performance degradation detection model, which detects anomalous records in the KPI data based on the forecasting model outputs. Convolutional Long Short-Term Memory (ConvLSTM) and LSTM Autoencoders are cascaded for prediction and degradation detection. The results show that the system is capable of predicting KPIs with a Root Mean Square Error (RMSE) of 0.896 and a Mean Absolute Error (MAE) of 0.771, and detecting degradation with 98% accuracy. This research can therefore contribute significantly to improving network failure management systems by predicting the impact of upcoming cell performance degradations on network service before they occur.
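    As a rough illustration of the forecast-then-detect idea described above, the sketch below substitutes a naive persistence forecast for the ConvLSTM and a simple residual threshold for the LSTM autoencoder; the synthetic KPI series, the threshold factor and the calibration split are all assumptions.
    ```python
    # Sketch of forecast-then-detect: RMSE/MAE of a KPI forecast and a simple
    # threshold on the forecast residual as a stand-in for autoencoder-based
    # anomaly detection. Data and threshold are synthetic/assumed.
    import numpy as np

    rng = np.random.default_rng(0)
    kpi = 50 + np.cumsum(rng.normal(0, 0.5, 600))   # synthetic KPI series
    kpi[550:] -= 15                                  # injected degradation

    forecast = np.roll(kpi, 1)[1:]                   # naive one-step persistence forecast
    actual = kpi[1:]
    residual = actual - forecast

    rmse = np.sqrt(np.mean(residual ** 2))
    mae = np.mean(np.abs(residual))
    print(f"RMSE={rmse:.3f}  MAE={mae:.3f}")

    # Flag records whose absolute residual exceeds k standard deviations,
    # mimicking a reconstruction-error threshold.
    k = 4.0
    threshold = k * residual[:500].std()             # calibrate on the "healthy" part
    anomalies = np.where(np.abs(residual) > threshold)[0]
    print("degraded samples:", anomalies[:10])
    ```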
  • Item
    Real-time Feature Extraction in a Distributed Acoustic Sensor Based on Phase Demodulation With Fast Hilbert Transform
    (Addis Ababa University, 2024-03) Semira Mohammed; Yonas Seifu (PhD); Bisrat Derebssa (PhD)
    Phase-sensitive optical Time Domain Reflectometry (Φ-OTDR) is the most common implementation of a Distributed Acoustic Sensor (DAS) system. It employs the observation of speckles resulting from Rayleigh back-scattering from coherent pulses in an optical fiber [1]. Since they are sensitive to local disturbances altering the intensity and phase of light, perturbations induced by events cause changes in the speckle pattern whose precise measurement provides information on the amplitude and frequency of vibrations distributed along the fiber. Demodulation of the local phase change is key to the precise measurement of events since it is more linearly related to the strain applied to the fiber. One of the key issues in distributed sensing is that phase demodulation schemes usually require additional post-processing algorithm runs for each spatial location, which introduces delays, and hence reductions in dynamic sensing capability when scaled along the whole sensing distance. In this research, we analyze the impact of the post-processing in different phase demodulation techniques employing Phase-Generated Carrier (PGC) on the bandwidth of distributed feature extraction in a typical DAS system by quantifying the total computation time needed for a benchmark 10-km sensing range at meter-scale and sub-meter-scale spatial resolutions. We then design, implement, and analyze a signal processing scheme for phase extraction in Φ-OTDR enabling real-time dynamic measurements based on a Fast Hilbert Transform (FHT). Particular focus is given to the choice of this demodulation scheme for optimizing the bandwidth of distributed feature extraction, as it enables the use of parallel processing of adjacent blocks in such a way that the overall throughput of spatially resolved concurrent demodulation allows dynamic vibration sensing at speeds relevant to most distributed monitoring applications. Our analysis shows that, on average, reductions of three orders of magnitude in computation time are achieved when employing the Fast Hilbert Transform for demodulation compared to the commonly used PGC-arctan algorithm, while a three-fold reduction is achieved compared to the PG-DCM and PG-DMS algorithms.
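    A compact sketch of Hilbert-transform-based phase extraction, the core operation behind the demodulation scheme discussed above: the analytic signal of a carrier-band trace is formed with scipy's FFT-based Hilbert transform and the unwrapped instantaneous phase is recovered; the synthetic test signal, sampling rate and carrier frequency are assumptions.
    ```python
    # Sketch: instantaneous phase recovery via an FFT-based Hilbert transform.
    # The synthetic interference-like trace and its parameters are assumptions.
    import numpy as np
    from scipy.signal import hilbert

    fs = 100_000                                  # sampling rate in Hz (assumed)
    t = np.arange(0, 0.05, 1 / fs)
    f_c = 10_000                                  # carrier/beat frequency in Hz (assumed)
    vib = 0.8 * np.sin(2 * np.pi * 200 * t)       # vibration-induced phase, in radians
    trace = np.cos(2 * np.pi * f_c * t + vib)     # interference-like raw trace

    analytic = hilbert(trace)                     # FFT-based analytic signal
    phase = np.unwrap(np.angle(analytic))         # instantaneous phase
    demod = phase - 2 * np.pi * f_c * t           # remove the carrier phase ramp

    err = demod - vib
    err -= err.mean()                             # ignore the constant phase offset
    print("max interior deviation (rad):", np.abs(err[200:-200]).max())
    ```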
  • Item
    Deep Learning-Powered Equalization with Autoencoders for Improved 5G Communication
    (Addis Ababa University, 2024-05) Adomeas Asfaw; Tsegamlak Terefe (PhD)
    The Fifth Generation (5G) wireless technology offers significant advancements in communication speed, capacity and latency, revolutionizing various industries and enabling transformative applications. However, these benefits are challenged by the complexities of the wireless environment, characterized by multipath propagation, fading, and interference. This thesis addresses the challenge of mitigating errors within 5G communication systems. The multipath propagation and fading present in wireless channels often lead to Inter Symbol Interference (ISI) and other forms of distortion. To mitigate these effects, an autoencoder-based equalizer tailored for 5G communication systems is proposed and thoroughly evaluated. Leveraging the power of deep learning, the autoencoder architecture is adept at extracting complex features from received signals, thus enabling equalization in the presence of channel impairments. Our focus is on mitigating errors within the context of the International Telecommunication Union (ITU) 2020 channel model and Quadrature Amplitude Modulation (QAM) schemes (16-QAM and 64-QAM). Through simulation, the performance of the proposed equalizer is assessed using constellation plots, Symbol Error Rate (SER), Bit Error Rate (BER) and convergence rate. Results indicate that the designed autoencoder achieved an SER of approximately 10^-4 and a BER of 10^-5 for 16-QAM, and an SER of approximately 10^-3 and a BER of 10^-4 for 64-QAM. Our comparative analysis reveals the efficacy and competitiveness of the autoencoder-based equalizer in mitigating the effects of the channel for a 5G downlink outdoor-to-indoor communication system.
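    The sketch below shows how SER and BER can be computed for a square-QAM link, here over a plain AWGN channel with nearest-neighbor detection standing in for the autoencoder equalizer and the ITU channel model; the modulation order, SNR and labeling are assumed example values.
    ```python
    # Sketch: SER/BER evaluation of 16-QAM over AWGN with minimum-distance
    # detection. This is a baseline illustration, not the autoencoder equalizer.
    import numpy as np

    rng = np.random.default_rng(1)
    M, n_sym, snr_db = 16, 100_000, 18            # assumed parameters
    m = int(np.sqrt(M))

    # Square-QAM constellation (natural binary labeling for simplicity, not Gray).
    levels = 2 * np.arange(m) - (m - 1)
    const = np.array([complex(i, q) for i in levels for q in levels])
    const /= np.sqrt(np.mean(np.abs(const) ** 2))  # unit average symbol energy

    tx_idx = rng.integers(0, M, n_sym)
    tx = const[tx_idx]
    noise_std = np.sqrt(10 ** (-snr_db / 10) / 2)
    rx = tx + noise_std * (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym))

    # Nearest-constellation-point detection.
    rx_idx = np.abs(rx[:, None] - const[None, :]).argmin(axis=1)

    ser = np.mean(rx_idx != tx_idx)
    bits_tx = (tx_idx[:, None] >> np.arange(4)) & 1   # 4 bits per 16-QAM symbol
    bits_rx = (rx_idx[:, None] >> np.arange(4)) & 1
    ber = np.mean(bits_tx != bits_rx)
    print(f"SER={ser:.2e}  BER={ber:.2e}")
    ```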
  • Item
    Performance Analysis of Downlink Linear Precoding for Multi-Cell Massive MIMO under Correlated Rayleigh Fading Channels
    (Addis Ababa University, 2022-09) Habte Aregawi; Murad Ridwan
    The Fifth Generation (5G) networks have performance targets of high Spectral Efficiency (SE), decreased latency, energy savings, cost reduction, high system capacity, and huge device connections. To increase the SE of networks, researchers deal with increasing the transmit power, obtaining the array gain, using Space Division Multiple Access (SDMA), and deploying massive numbers of antennas at the Base Station (BS). A Multi User-Multiple Input Multiple Output (MU-MIMO) technology that combines SDMA with Time Division Duplex (TDD) to limit the Channel State Information (CSI) acquisition overhead and a massive number of antennas at the BS is known as Massive-Multiple Input Multiple Output (M-MIMO). For efficient use of massive antennas at the BS, the channel characteristics between User Equipments (UEs) and the BS must be known. Practical channels are known to be spatially correlated due to sampling at the BS, environmental orientations, and polarization effects. Estimation of spatially correlated channels in a multi-cell M-MIMO system degrades due to reuse of pilot signals among UEs, which cannot be addressed by increasing the number of BS antennas. Alleviating the impact of pilot contamination in multi-cell cellular systems has been addressed in various studies. However, describing pilot contamination effects on channel estimation based on UE positions is not addressed in most of these studies. In this research, the effect of UE positions on channel estimation and on the ability to obtain favourable channels is investigated under correlated Rayleigh fading channels. Using blind estimation of precoded channels, the performance of different linear precoding schemes is examined using the MATLAB simulation platform. The pilot contamination effect is negligible under more correlated channels if the angles of arrival (positions of UEs) are slightly different. The Minimum Mean Square Error (MMSE) precoding scheme has better performance than Regularized Zero Forcing (RZF), Zero Forcing (ZF), and Maximum Ratio Transmission (MRT). RZF has better performance than ZF when the effective Signal to Noise Ratio (SNR) is low or the number of antennas at the BS is small; otherwise, they have the same level of performance.
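    The linear precoders compared above can be written down compactly; the following sketch forms MRT, ZF and RZF precoding matrices for a single cell from a known channel matrix and reports a rough per-user SINR-based sum spectral efficiency (dimensions, regularization and noise power are assumptions, and MMSE precoding is omitted for brevity).
    ```python
    # Sketch: MRT, ZF and RZF downlink precoders for one cell with an M-antenna
    # BS and K single-antenna UEs. Dimensions and the RZF regularization term
    # are assumed for illustration.
    import numpy as np

    rng = np.random.default_rng(2)
    M, K, alpha = 64, 8, 0.1                        # antennas, users, regularization

    # i.i.d. Rayleigh channel; a spatial correlation matrix could be applied here.
    H = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2)

    W_mrt = H.conj().T                                               # MRT: matched filter
    W_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)                # ZF: channel inversion
    W_rzf = H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))  # RZF

    for name, W in [("MRT", W_mrt), ("ZF", W_zf), ("RZF", W_rzf)]:
        W = W / np.linalg.norm(W, axis=0, keepdims=True)             # unit-power beams
        G = np.abs(H @ W) ** 2                                       # effective channel gains
        # Per-user SINR with an assumed noise power of 0.01.
        sinr = np.diag(G) / (G.sum(axis=1) - np.diag(G) + 1e-2)
        print(name, "sum SE =", np.log2(1 + sinr).sum().round(2), "bit/s/Hz")
    ```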
  • Item
    Performance Comparison of Multi-Mode Modulation Techniques for SDR Using FPGA
    (Addis Ababa University, 2023-11) Sisay Bogale; Yihenew Wondie (PhD)
    Radio devices that were previously built in hardware have been replaced in recent years by reconfigurable software defined radio (SDR) systems. Conventional hardware-based radios have restricted multi-functionality and are physically changeable only. This leads to an increase in production expenses and a reduction in the number of waveform standards that can be supported. A rapid and affordable answer to this issue is provided by software-defined radio technology, which enables software upgrades for multi-mode, multi-band, and multi-functional wireless devices. In SDR, different modulation techniques are used to achieve efficient communication over a radio channel. Multi-mode modulation is an approach that allows the use of multiple modulation schemes in a single system, which can enhance the flexibility and resilience of communication systems. This thesis presents the design and implementation of multi-mode modulation techniques for SDR using an FPGA and analyzes the performance based on FPGA resource utilization. It combines six modulation schemes, QASK, QPSK, QAM, AM, PM and FM, to create a multi-mode modulation system. The performance of this multi-mode modulation system is evaluated in terms of FPGA resource utilization, such as total computational power, total number of Look-Up Tables (LUTs) or memory used, Flip Flops (FF) and Input/Output (IO) port usage. Xilinx Vivado System Generator for DSP with MATLAB/Simulink is used to design, simulate and verify the multi-mode modulator, which is then implemented on Xilinx Zedboard FPGA hardware. A total of 0.225 W of power, 844 LUTs and 1 IO port are utilized by the implemented design. The main achievement of this research is the saving in computational power: the design saves 1.572 W and 1.134 W compared to two previous studies.
  • Item
    Design and Performance Evaluation of Power-aware Routing Protocols for Wireless Sensor Networks – GAICH and GCH
    (Addis Ababa University, 2011-10) Seifemichael Bekele; Dereje Hailemariam (PhD)
    In recent years, the advancements in wireless communications and electronics have enabled the development of low-cost, low-power and multifunctional wireless sensor networks (WSNs). As nodes in sensor networks are equipped with a limited power source, efficient utilization of power is a very important issue in order to extend the network lifetime. It is for these reasons that researchers are currently focusing on the design of power-aware protocols and algorithms for sensor networks. In this thesis, two routing protocols that provide efficient energy management for WSNs are proposed. The first protocol, GAICH (Genetic Algorithm Inspired Clustering Hierarchy), makes use of a genetic algorithm to create optimum clusters in terms of energy consumption. The other one, GCH (Grid Clustering Hierarchy), creates clusters by forming virtual grids, where nodes share the role of cluster head in a round-robin fashion. These protocols have been implemented in MATLAB using a standard radio energy dissipation model that is used for the simulation of WSNs. Performance comparison has been made with two of the existing routing protocols, LEACH and Direct Transmission, on different performance metrics. Simulation results show that GAICH and GCH are better than LEACH in the total packets sent to the base station and network lifetime. Moreover, different techniques for optimizing energy consumption in WSNs are suggested.
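    Protocols of this kind are typically evaluated with the standard first-order radio energy dissipation model; a brief sketch of that model is given below, with commonly used but here assumed parameter values rather than those of the thesis.
    ```python
    # Sketch: first-order radio energy dissipation model commonly used in WSN
    # clustering simulations (LEACH-style). Parameter values are typical
    # assumptions, not necessarily those used in the thesis.
    E_ELEC = 50e-9        # electronics energy per bit (J/bit)
    EPS_FS = 10e-12       # free-space amplifier energy (J/bit/m^2)
    EPS_MP = 0.0013e-12   # multipath amplifier energy (J/bit/m^4)
    D0 = (EPS_FS / EPS_MP) ** 0.5   # distance threshold between the two models

    def tx_energy(k_bits: int, d: float) -> float:
        """Energy to transmit k_bits over distance d (meters)."""
        if d < D0:
            return E_ELEC * k_bits + EPS_FS * k_bits * d ** 2
        return E_ELEC * k_bits + EPS_MP * k_bits * d ** 4

    def rx_energy(k_bits: int) -> float:
        """Energy to receive k_bits."""
        return E_ELEC * k_bits

    # Example: cost of a 4000-bit packet relayed via a cluster head 40 m away.
    print(tx_energy(4000, 40.0) + rx_energy(4000))
    ```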
  • Item
    Prediction of Base Transceiver Station Power Supply System Failure Indicators using Deep Neural Networks for Multi-Time Variant Time Series
    (Addis Ababa University, 2023-11) Jalene Bekuma; Dereje Hailemariam (PhD)
    The uninterrupted operation of wireless communication services relies heavily on the stability of power supply systems for Base Transceiver Stations (BTS). This study is dedicated to predicting potential failure indicators in BTS power systems using deep neural network architectures, such as recurrent and convolutional neural networks. The study integrates principal component analysis (PCA) for data dimensionality reduction and addresses challenges related to power system failures caused by environmental factors, power fluctuations, and equipment malfunctions within the Ethio telecom BTS system. The dataset utilized in this study spans four weeks of data from multiple sites, with observations sampled at 5-minute intervals, obtained from the ET NetEcho power monitoring system. The study meticulously explores the data preprocessing steps for time series analysis, encompassing consolidation, cleaning, scaling, and dimensionality reduction using PCA. Furthermore, it delves into the detailed implementation of CNN, LSTM, and CNN-LSTM models for time series prediction, thoroughly evaluating their performance and convergence. The experimental results clearly indicate that the CNN-LSTM model surpasses both the LSTM and CNN models in predicting BTS power system failure indicators, achieving the lowest error values of 0.036 MSE, 0.189 RMSE and 0.112 MAE. These findings show the potential of deep neural network architectures, particularly the CNN-LSTM model, in accurately predicting BTS power system failure indicators for the next thirty minutes. The significance of accurate prediction models in proactively detecting failures and minimizing their impact is highlighted, contributing to the reliability and stability of BTS power supply systems for wireless communication services.
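    A small sketch of the preprocessing chain described above: scaling, PCA for dimensionality reduction, and sliding-window construction of input/target pairs for a sequence model such as CNN-LSTM; the synthetic data, window lengths and component count are assumptions.
    ```python
    # Sketch: scale -> PCA -> sliding windows, the preprocessing chain a
    # CNN/LSTM/CNN-LSTM predictor would consume. The data here is synthetic.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(3)
    raw = rng.normal(size=(2016, 12))      # one week at 5-min sampling, 12 signals (assumed)

    X = StandardScaler().fit_transform(raw)
    X = PCA(n_components=4).fit_transform(X)   # keep 4 principal components (assumed)

    def make_windows(series, in_len=24, out_len=6):
        """Build (past window, future targets) pairs for sequence prediction."""
        xs, ys = [], []
        for i in range(len(series) - in_len - out_len + 1):
            xs.append(series[i:i + in_len])
            ys.append(series[i + in_len:i + in_len + out_len])
        return np.stack(xs), np.stack(ys)

    X_in, y_out = make_windows(X)          # 24 steps (2 h) in, 6 steps (30 min) out
    print(X_in.shape, y_out.shape)
    ```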
  • Item
    Optimization of Millimeter Wave Microstrip Antenna for Wireless Application Using Genetic Algorithm
    (Addis Ababa University, 2023-12) Arebu Dejene
    In the telecommunications industry, wireless communications have progressed very rapidly in the last two decades. The requirement for high data rates and the paucity of spectrum in existing wireless communication drive next-generation communication technology to mm-wave frequencies, which also require adequate and efficient antenna technology for successful operation. These signals, however, have a high path loss and are susceptible to blocking. These mm-wave signal propagation challenges can be overcome by using high-directivity, wide-band, and multi-band antennas. Nonetheless, creating an antenna with high performance in every respect is a challenging endeavor. This dissertation addresses the modeling, optimization, and synthesis of a rectangular microstrip patch antenna with dual-band and multi-band service for mm-wave communication using a binary-coded genetic algorithm to improve the directivity and bandwidth. The algorithm iteratively creates new models of patch surfaces by employing an iterative combination of HFSS and MATLAB software, and then returns the best antenna model. Accordingly, the dissertation exhibits improvements in the directivity, bandwidth, and multi-functionality of a single microstrip antenna. With patch geometry optimization, a dual-band antenna was optimized and resonated at 28.0 GHz and 46.6 GHz with acceptable performance. Another optimization was carried out on a single microstrip antenna for triple-band operation and directivity improvement. The optimized antenna resonated at three distinct frequency bands centered at 28.0 GHz, 40.0 GHz, and 47.0 GHz, and demonstrates broadside radiation patterns with peak directivities of 7.7 dB, 12.1 dB, and 8.2 dB, respectively. On the other hand, bandwidth improvement was achieved by a genetically optimized quad-band antenna, which resonated at four frequencies centered at 28.3 GHz, 38.1 GHz, 46.6 GHz, and 60.0 GHz, with a total operating bandwidth of 11.5 GHz. The dissertation also presents a penta-band mm-wave antenna for wearable applications. The proposed antenna is designed on a PTFE fabric substrate and resonates at five distinct frequencies: 27.8 GHz, 30.3 GHz, 40.1 GHz, 47.2 GHz, and 56.7 GHz. In free space, the antenna achieves wide bandwidths of 0.69, 2.32, 2.22, 1.76, and 8.11 GHz and improved broadside directivities of 10.3, 8.5, 7.8, 9.6, and 8.9 dB, respectively. Overall, the optimized antennas' performances were suitable for multi-functional mm-wave applications.
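    A schematic of a binary-coded genetic algorithm loop of the kind described above; here the fitness function is a toy placeholder (the dissertation scores each candidate patch geometry through HFSS simulations), and all GA parameters are assumed.
    ```python
    # Sketch: binary-coded GA (selection, crossover, mutation). In the thesis the
    # fitness of each bit-string (a patch-surface model) comes from an HFSS run;
    # here a toy placeholder fitness is used instead.
    import numpy as np

    rng = np.random.default_rng(4)
    N_BITS, POP, GENS, P_MUT = 64, 30, 40, 0.02   # assumed GA parameters

    def fitness(bits: np.ndarray) -> float:
        # Placeholder: reward alternating bit patterns. A real run would export
        # the geometry, simulate it in HFSS, and score directivity/bandwidth.
        return float(np.sum(bits[:-1] != bits[1:]))

    pop = rng.integers(0, 2, size=(POP, N_BITS))
    for _ in range(GENS):
        scores = np.array([fitness(ind) for ind in pop])
        # Tournament selection of parents.
        idx = rng.integers(0, POP, size=(POP, 2))
        parents = pop[np.where(scores[idx[:, 0]] >= scores[idx[:, 1]],
                               idx[:, 0], idx[:, 1])]
        # Single-point crossover between consecutive parents.
        children = parents.copy()
        for i in range(0, POP - 1, 2):
            cut = rng.integers(1, N_BITS)
            children[i, cut:], children[i + 1, cut:] = (
                parents[i + 1, cut:].copy(), parents[i, cut:].copy())
        # Bit-flip mutation.
        flip = rng.random(children.shape) < P_MUT
        pop = np.where(flip, 1 - children, children)

    best = pop[np.argmax([fitness(ind) for ind in pop])]
    print("best fitness:", fitness(best))
    ```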
  • Item
    Comparative Study of Machine Learning Techniques for Path Loss Prediction
    (Addis Ababa University, 2023-11) Ademe Wondimneh; Dereje Hailemariam (PhD)
    Path loss is the term used to describe the difference in signal strength between the transmitted and received signals. Predicting this loss is a crucial task in wireless and mobile communication to gather data for resource allocation and network planning. Deterministic and empirical models are the two fundamental propagation models that are used to calculate path loss. There is a trade-off between accuracy and computing complexity between these models. Machine learning models offer a balance in this classic conflict between accuracy and complexity and have significant potential in path loss prediction because they can learn complicated non-linear correlations between input features and target values. This study investigates the application of machine learning techniques for path loss prediction in Addis Ababa LTE networks. Artificial neural networks (ANNs), random forest regression (RFR), and multiple linear regression (MLR) are employed as machine learning models and compared with the widely used COST 231 empirical model. Data for training and testing is obtained through measurements from Addis Ababa LTE networks. The performance of the proposed models is evaluated using statistical metrics such as root mean squared error (RMSE), mean absolute error (MAE), and R-squared (R2). The results demonstrate that the RFR model outperforms the other models in terms of prediction accuracy, achieving an MAE of 3.48, an RMSE of 5.35, and an R2 of 0.77. The ANN model also exhibits satisfactory performance with an MAE of 4.19, an RMSE of 5.78, and an R2 of 0.71. The COST 231 model, on the other hand, exhibits lower prediction accuracy. In terms of computational complexity, ANNs are found to be the most computationally intensive, while MLR is the simplest model among the evaluated machine learning models. RFR falls between ANNs and MLR in terms of computational complexity.
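    The empirical baseline referred to above is the COST 231 (Hata) model; a brief sketch of that formula is shown below, with assumed example values for frequency, antenna heights and distance.
    ```python
    # Sketch: COST 231 Hata path-loss model (the empirical baseline mentioned
    # above). f in MHz (valid roughly 1500-2000 MHz), d in km, heights in meters.
    # The example parameter values are assumptions for illustration.
    import math

    def cost231_hata(f_mhz, d_km, h_base, h_mobile, metropolitan=True):
        a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile \
               - (1.56 * math.log10(f_mhz) - 0.8)          # mobile antenna correction
        c_m = 3.0 if metropolitan else 0.0                  # clutter correction (dB)
        return (46.3 + 33.9 * math.log10(f_mhz)
                - 13.82 * math.log10(h_base) - a_hm
                + (44.9 - 6.55 * math.log10(h_base)) * math.log10(d_km)
                + c_m)

    # Example: 1800 MHz LTE, 30 m BS antenna, 1.5 m handset, 2 km link (assumed).
    print(f"{cost231_hata(1800, 2.0, 30, 1.5):.1f} dB")
    ```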
  • Item
    Power Control and Resource Allocation for Performance Optimization in D2D Underlaid Massive MIMO System
    (Addis Ababa University, 2022-09) Abi Abate; Anna Förster (Prof.); Yihenew Wondie (PhD)
    The desire to support the ever-increasing demand for wireless broadband service and a broad range of Internet of Things (IoT) applications, where fourth-generation (4G) wireless networks are struggling to deliver, necessitates a new fifth-generation (5G) network architecture. The emerging 5G network should employ multiple advanced networking solutions to overcome the challenges posed by dynamic service quality requirements. As a single technology cannot achieve the diverse set of 5G requirements, the challenges and benefits of integrating multiple technologies in one system are worth investigating. On the other hand, the need to communicate with low latency requires a fundamental shift from centralized resource management and interference coordination toward distributed approaches, where intelligent devices can rapidly make resource management decisions. In this thesis, we propose a distributed resource management and interference coordination scheme to optimize network performance for the coexistence of two technologies that have been identified as competent candidates for achieving the challenging 5G system performance criteria, namely, massive multiple-input multiple-output (MIMO) and network-assisted device-to-device (D2D) communications. First, we formulated a two-tier heterogeneous network with two different user types. The first tier serves the cellular users in the uplink by a multi-antenna Base Station (BS). The second tier serves D2D users, which exploit their proximity and transmit simultaneously with the uplink cellular users while bypassing the multi-antenna BS. Then, we formulated the throughput and energy efficiency optimization problems as nonlinear optimization problems. To realize a distributed solution, we modeled each optimization problem as a matching game and proposed a resource allocation scheme based on the concept of matching theory. The analysis reveals that the implementation of the proposed distributed resource assignment and interference coordination scheme can achieve more than 88% of the ASR and 85% of the EE performance of the optimal result. Next, we proposed a three-stage distributed solution to enhance the sum rate and analyze the impact of joint channel assignment and power control. We model the channel assignment problem in the first stage as a matching game. During this stage, each D2D pair sends its preferred channel request to the BS, and the BS accepts the most preferred request. In the second stage, we model the power allocation problem as a non-cooperative game. Each D2D pair optimizes its utility value according to its side-link quality and interference channel gain to limit the D2D-to-cellular interference. Finally, in the third stage, the algorithm considers the peer effect, searching for blocking pairs until a stable matching is established. The performances of the proposed schemes are investigated as a function of the number of BS antennas and cellular and D2D users and compared with random and optimal counterparts. The numerical results show that the joint optimization of channel assignment and power control can enhance the sum rate performance by 16% compared to a channel assignment scheme with binary power allocation, where D2D pairs are either turned on with full power or turned off completely. In general, the extra degrees of freedom resulting from having multiple antennas at the BS are highly desirable in the design of future D2D-enabled massive MIMO networks, as many side-link users can be multiplexed and inter-user interference can be controlled.
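    The first-stage channel assignment can be pictured as a simple matching game; the sketch below runs a deferred-acceptance-style procedure in which each D2D pair proposes to its preferred channel and the BS keeps the strongest proposal per channel (the utilities and problem sizes are synthetic assumptions, not the thesis formulation).
    ```python
    # Sketch: matching-game channel assignment. Each D2D pair proposes to its
    # best remaining channel; the BS keeps only the highest-utility proposal per
    # channel. Utilities here are random placeholders for side-link quality.
    import numpy as np

    rng = np.random.default_rng(5)
    n_pairs, n_channels = 6, 4
    utility = rng.random((n_pairs, n_channels))   # pair-vs-channel preference score

    unmatched = list(range(n_pairs))
    assignment = {}                                # channel -> D2D pair
    rejected = [set() for _ in range(n_pairs)]     # channels that rejected each pair

    while unmatched:
        pair = unmatched.pop(0)
        choices = [c for c in range(n_channels) if c not in rejected[pair]]
        if not choices:
            continue                               # pair stays unassigned
        best = max(choices, key=lambda c: utility[pair, c])
        holder = assignment.get(best)
        if holder is None or utility[pair, best] > utility[holder, best]:
            assignment[best] = pair
            if holder is not None:
                rejected[holder].add(best)
                unmatched.append(holder)           # displaced pair proposes again
        else:
            rejected[pair].add(best)
            unmatched.append(pair)

    print(assignment)
    ```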
  • Item
    Side Lobe Reduction in Equally Spaced Linear Antenna Arrays using Antenna Thinning Technique
    (Addis Ababa University, 2023-11-22) Yonas Techale; Murad Ridwan (PhD)
    In antenna array design, the radiation pattern is a fundamental performance metric. It is a mathematical or graphical representation of the spatial distribution of radiated energy of an antenna array as a function of directional space coordinates. Array antennas can vary their directivity patterns through amplitude and phase control. One of the most important aspects of an antenna array is reducing interference and radiation power waste. A reduced side lobe level also avoids false target indication. Thinning is a technique for reducing the total number of active elements in an antenna array while maintaining system performance. This study aims to improve antenna performance by lowering the side lobe level using antenna thinning with a genetic algorithm (GA). A genetic algorithm reaches an optimal solution by simulating the natural selection process. It starts with randomly selected candidates as the first generation. In the beginning, we studied radiation patterns of equally spaced and non-equally spaced linear antenna arrays, and radiation patterns for uniformly spaced, non-uniformly spaced, and non-uniformly spaced arrays with rotated elements for N=20. The results demonstrate that non-uniform spacing and rotated elements can significantly improve the directivity and reduce side lobes compared to uniformly spaced arrays. In addition, it is observed in the beam pattern resulting from one typical first-generation candidate that the sidelobe level is lower in the azimuth direction but higher in the elevation direction compared to the full array. The sidelobe level and fill rate of this array are around 8.7 dB and 71.75%, respectively. This means that 71.75% of the array elements are active and the sidelobe level is approximately 9 dB. It needs to be suppressed further by applying the genetic algorithm over 30 generations. The results show that the sidelobe level and fill rate of the array after applying the GA with 30 generations are around 17.38 dB and 76.5%, respectively. Compared to the first-generation candidate, it uses 5% more active elements while achieving an additional 9 dB of sidelobe suppression. Compared to the full array, the resulting thinned array can save the cost of implementing T/R switches behind dummy elements, which in turn leads to a roughly 25% saving in consumed power. Even though the thinned array uses fewer elements, the beamwidth is close to what could be achieved with a full array.
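    A short sketch of how the peak sidelobe level of a thinned, equally spaced linear array can be evaluated from its array factor; the element count, half-wavelength spacing and random thinning mask are assumptions, and a GA would search over such masks rather than drawing one at random.
    ```python
    # Sketch: array factor and peak sidelobe level (SLL) of a thinned, uniformly
    # spaced linear array with half-wavelength spacing. The on/off mask here is
    # random; a GA would search over such masks to minimize the SLL.
    import numpy as np

    rng = np.random.default_rng(6)
    N, d = 20, 0.5                                 # elements, spacing in wavelengths
    mask = (rng.random(N) < 0.75).astype(float)    # ~75% fill rate (assumed)

    theta = np.linspace(-np.pi / 2, np.pi / 2, 4001)
    n = np.arange(N)
    # Broadside array factor: sum over active elements of exp(j*2*pi*d*n*sin(theta)).
    af = np.abs(np.exp(1j * 2 * np.pi * d * np.outer(np.sin(theta), n)) @ mask)
    af_db = 20 * np.log10(af / af.max() + 1e-12)

    # Find the main-beam peak, walk out to the first nulls, then take the highest
    # remaining lobe as the peak sidelobe level.
    peak = int(np.argmax(af_db))
    left, right = peak, peak
    while left > 0 and af_db[left - 1] < af_db[left]:
        left -= 1
    while right < len(af_db) - 1 and af_db[right + 1] < af_db[right]:
        right += 1
    side = np.concatenate([af_db[:left], af_db[right + 1:]])
    print(f"fill rate = {mask.mean():.0%}, peak sidelobe level = {side.max():.1f} dB")
    ```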
  • Item
    Performance Analysis of Linear Precoding for Multiuser Multiple-Input and Multiple-Output Broadcast Channels
    (Addis Ababa University, 2023-09) Worku, Tamene; Murad, Ridwan (PhD)
    Multiuser Multiple Input and Multiple Output (MU-MIMO) is an antenna technology for wireless communication in which a number of users or wireless terminals, each with one or more antennas, communicate with each other. Precoding in multiuser MIMO systems is important to minimize or mitigate multiuser interference. As a consequence, the design of suitable precoding algorithms with low computational complexity and good overall performance is challenging when the system dimensions are high. Linear precoding techniques such as regularized channel inversion (minimum mean square error), channel inversion (zero forcing), and block diagonalization for multi-user multiple-input multiple-output broadcast channels are able to mitigate multiuser interference under per-antenna or sum power constraints. This thesis demonstrates enhanced performance from such techniques. In addition, analysis of MU-MIMO with a smaller number of antennas may reduce antenna cost and some of the complexity of large antenna systems. Previous research on multiuser MIMO has mostly considered single-antenna receivers and Rayleigh channel conditions. In this work, the performance of multiuser MIMO linear precoding is investigated under different channel conditions together with receivers having two or more antennas. The performance of linear precoding in multiuser MIMO under Rayleigh, Rician and deterministic channel conditions is illustrated with different performance metrics such as data rate, channel capacity and spectral efficiency. Linear precoding in multiuser MIMO with two-antenna users performs well due to the combined effect of the antennas. In addition, the Rician channel achieves a lower bit error rate than the Rayleigh and deterministic channels. Furthermore, this study provides a detailed comparative analysis for larger numbers of users, which is directly relevant when a congested number of users is involved.
  • Item
    Techno-economic Comparison of Mid-band 5G Fixed Wireless Access and GPON-based Optical Distribution Networks
    (Addis Ababa University, 2023-06) Tujuma, Bayisa; Dereje, Hailemariam (PhD)
    The popularity of broadband Internet services has increased significantly over the past few years. Similarly, the development of mobile network technologies has seen rapid growth. Due to these trends, Fifth Generation (5G) Fixed Wireless Access (FWA) networks have been proposed as a potential competitor to other broadband access technologies, such as the Optical Distribution Network (ODN). However, technological advancement itself cannot show the performance, acceptance, or economic viability of an investment without a detailed technical and economic feasibility assessment of possible broadband deployment alternatives. This thesis conducts a techno-economic comparison between 5G FWA in the mid-band frequency range and Gigabit Passive Optical Network (GPON) based ODN to provide broadband services for residential users. It presents a techno-economic analysis of four possible deployment scenarios: two scenarios (Sc-1 and Sc-2) based on 5G FWA using new and existing infrastructure, respectively, and two scenarios (Sc-3 and Sc-4) based on GPON-based ODN using new and existing infrastructure, respectively. These scenarios are evaluated in the context of the capital city of Ethiopia, Addis Ababa, around an area called Tulu Dimtu. Data collected from the operator, ethio telecom, serves as the main source of information. For the evaluation, the most popular and widely used techno-economic tool, called Techno-economic Results from the Advanced Communications Technology and Services (TERA), is modified and implemented, including network dimensioning, revenue modeling, cost modeling, and economic analysis. For all analyses, a 10-year study period and a 10% discount rate are considered. The analyses were evaluated using standard economic indicators such as Net Present Value (NPV), Internal Rate of Return (IRR), and Payback Period (PBP). MATLAB and Microsoft Excel are used for the implementation. The results show that the PBPs of the scenarios are 4.48, 3.75, 4.63 and 4.37 years for Sc-1, Sc-2, Sc-3, and Sc-4, respectively. All scenarios have a positive NPV over the study period and an IRR greater than the defined discount rate. Sensitivity analysis shows that revenue is the most sensitive parameter among the considered parameters. The findings indicate that all scenarios are deployable, but they should be deployed based on requirements.
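    A compact sketch of the economic indicators used in the comparison above: NPV at a 10% discount rate, IRR obtained by bisection, and a simple payback period, computed over an assumed illustrative cash-flow series rather than the thesis data.
    ```python
    # Sketch: NPV, IRR (bisection) and payback period for a deployment cash flow.
    # The cash-flow numbers are illustrative assumptions, not the thesis results.
    import numpy as np

    cash_flows = [-1000.0] + [180.0] * 10      # year-0 CAPEX, then 10 years of net revenue
    rate = 0.10                                # discount rate used in the thesis

    def npv(r, cfs):
        return sum(cf / (1 + r) ** t for t, cf in enumerate(cfs))

    def irr(cfs, lo=-0.99, hi=1.0, tol=1e-6):
        # Bisection on NPV(r) = 0; assumes a single sign change in the cash flow.
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if npv(lo, cfs) * npv(mid, cfs) <= 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2

    def payback_period(cfs):
        cum = np.cumsum(cfs)
        year = np.argmax(cum >= 0)             # first year cumulative cash turns positive
        prev = cum[year - 1]
        return year - 1 + (-prev) / cfs[year]  # linear interpolation within that year

    print(f"NPV = {npv(rate, cash_flows):.1f}")
    print(f"IRR = {irr(cash_flows):.1%}")
    print(f"PBP = {payback_period(cash_flows):.2f} years")
    ```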
  • Item
    Hybrid Microwave and Free Space Optics Network for Mobile Backhaul Capacity and Availability Improvement
    (Addis Ababa University, 2023-06) Mulugeta, Semu; Dereje, Hailemariam (PhD)
    With the growing demand for high-speed mobile data and the increasing use of smart devices, the existing microwave (MW) or radio frequency (RF) backhaul network is going to be a bottleneck for end users' data volume requirements. Additionally, the performance of the MW link is significantly degraded by bad weather conditions, such as rain. To mitigate these limitations, free space optics (FSO) becomes a promising backhaul technology due to its large bandwidth and use of a different carrier frequency that is not impacted by rain. However, FSO is exposed to link loss or failure under foggy weather conditions, whereas MW links are robust to fog. Given this complementary advantage of FSO and RF, using hybrid FSO/RF networks is a preferred solution to improve the availability of the link and the capacity of backhaul networks. In this work, an adaptive switching hybrid FSO/RF system is used to improve the performance of the hard switching scheme, which is exposed to link flapping due to short-term changes in weather conditions. The switching thresholds of the FSO and RF links and multi-rate switching on each link are determined, and the availability and capacity performance of the hybrid system are investigated based on received signal-to-noise ratio (SNR) values. To meet the objective, the methodology followed includes data collection, system and channel modeling, and RF, FSO, and hybrid performance comparison. The system uses the gamma-gamma distribution for the FSO channel and the Rician model for the RF channel. Simulation results are obtained using MATLAB. The effects of rain and fog on the RF and FSO links, respectively, are simulated and discussed. The availability of the system in terms of outage probability shows that the hybrid system significantly reduces the required SNR, to 14 dB, to achieve 99.99% link availability, which is not achieved by an RF-only or FSO-only link. The results also show that the adaptive switching mode has a better bit error rate (BER) than the hard switching mode, since the switching of links between FSO and RF and the switching between multiple rates on each link are based on maintaining a target BER. To maintain good quality of service (QoS), the target BER of the system is set, and the system gradually lowers its modulation order to the maximum possible data rate based on received SNR values.
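    The adaptive switching logic can be sketched as a small decision function: the active link and its rate are chosen from the measured SNRs against per-link thresholds, with hysteresis to avoid flapping; all threshold and rate values below are illustrative assumptions.
    ```python
    # Sketch: adaptive FSO/RF link selection with hysteresis and multi-rate
    # switching based on received SNR. Threshold and rate values are illustrative
    # assumptions, not the thesis parameters.
    FSO_RATES = [(22.0, "FSO 2.5 Gb/s"), (17.0, "FSO 1.25 Gb/s"), (14.0, "FSO 622 Mb/s")]
    RF_RATES = [(20.0, "RF 400 Mb/s"), (15.0, "RF 200 Mb/s"), (10.0, "RF 100 Mb/s")]
    HYSTERESIS_DB = 2.0     # extra margin required before switching back to FSO

    def pick_rate(snr_db, table):
        """Highest rate whose SNR threshold is met, else None."""
        for thr, label in table:
            if snr_db >= thr:
                return label
        return None

    def select_link(snr_fso_db, snr_rf_db, active="FSO"):
        """Return (link, rate) given current SNRs and the currently active link."""
        fso_needed = FSO_RATES[-1][0] + (HYSTERESIS_DB if active == "RF" else 0.0)
        if snr_fso_db >= fso_needed:
            return "FSO", pick_rate(snr_fso_db, FSO_RATES)
        rate = pick_rate(snr_rf_db, RF_RATES)
        return ("RF", rate) if rate else ("OUTAGE", None)

    print(select_link(25.0, 18.0))          # clear air: FSO at top rate
    print(select_link(9.0, 18.0))           # fog: fall back to RF
    print(select_link(14.5, 18.0, "RF"))    # FSO barely recovered: hysteresis keeps RF
    ```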
  • Item
    Performance Analysis of Energy Efficiency Ensuring Techniques In Massive Mimo For 5G Communication Networks
    (Addis Ababa University, 2023-09) Haile, Araya; Yihenew, Wondie (PhD)
    Wireless communication technology continues to advance to satisfy the needs of customers. With the emergence of new technologies, energy consumption is one of the most important performance metrics. According to the requirements of the 5th generation wireless communication system, energy consumption should not increase from the level of the current networks (4G), even though the amount of data is expected to be significantly higher. Therefore, energy efficiency has been set as one of the major objectives for recent cellular networks. Massive multiple-input-multiple-output (M-MIMO) is the key technology for providing higher energy efficiency (EE) and data throughput in 5G wireless communication systems. This thesis focuses on the performance analysis of techniques that ensure energy efficiency gains outpace energy consumption in 5G networks using massive multiple-input multiple-output (M-MIMO), minimizing the power consumption per user equipment (UE) while increasing throughput. The main design parameters used are the power consumption per user equipment (PC), the data rate of the system (R), and the massive numbers of antennas (M) and user equipments (K). Energy efficiency is defined as the system throughput per unit of power consumption as a function of the number of antennas and users. The performance analysis and comparison use precoding and combining schemes such as multi-cell minimum mean square error (M-MMSE), zero-forcing (ZF/RZF), and maximum ratio combining (MRC). MATLAB tools are used to analyze and demonstrate numerical results. The results show that the gain in energy efficiency (EE) outweighs the increase in energy consumption in massive MIMO for 5G wireless communication systems. The overall simulation results show that multi-cell minimum mean square error (M-MMSE) is the best precoding technique for maximizing energy efficiency (EE) rather than total energy consumption in massive MIMO for 5G wireless networks. However, MRC achieves the lowest performance and energy efficiency as the number of antennas increases in massive MIMO for 5G cellular communication networks.
  • Item
    On the Performances of User Association Enhancements in Dense Wireless Heterogeneous Networks
    (Addis Ababa University, 2023-03) Dinkisa, Aga; Hamalainen, Jyri (Prof.); Yihenew, Wondie (PhD)
    User Association (UA) plays a significant role in radio resource management of wireless communication systems. Currently, network densification and heterogeneity have already been identified as a feasible solution for the exponentially expanding data service demand. Hence, UA methods must meet different requirements in dense and ultra-dense Heterogeneous Networks (HetNets). The load imbalance due to transmit power differences between tiers and interference coordination challenges, the effect of serving node intensity on load sharing and achievable throughputs, and the effort to satisfy certain users with high data rate demands are a few of these problems. Furthermore, the interconnected and complicated problems of service delivery are posed by the spatio-temporal dynamics in service demand and the mobility of User Equipment (UE). This thesis takes a step-by-step approach to solving UA problems in dense and ultra-dense HetNets. This research uses stochastic geometry tools, system level simulations, and realistic test case deployment simulations. Models were created for each scenario based on load balancing, interference coordination, varied densification levels, heterogeneity, and user mobility. The work's first contribution is a solution to the problem of load imbalance and interference coordination. The proposed method is simple to integrate into an existing HetNets network, and the results demonstrate effective load-aware association and adaptive interference coordination. A cell clustering-based load-aware offsetting and an adaptive Low Power Subframe (LPS) approach were developed. The solution allows the separation of UA functions at the UE and network server such that users can make a simple cell-selection decision similar to that in the Maximum Received Signal Strength (max-RSS) based UA scheme, where the network server computes the load-aware offsetting and required LPS periods based on the load conditions of the system. The proposed solution was evaluated using system level simulations, wherein the results correspond to performance changes in different service regions. Results show that the method effectively solves the offloading and interference coordination problems in dense HetNets. The second contribution of the research is on coupled and decoupled User Association. It can be used as a guide for network operators to select the appropriate UA scheme for their network. The concepts of Poisson random networks were used to analytically obtain the relative densification levels for which offloading, decoupled or coupled UA is needed, and to validate the analysis with numerical and system level simulation of a realistic network. The association window, where users choose to use decoupled association, is derived in terms of the relative intensity, the transmit powers at each tier and the Path Loss Exponent (PLE) of the propagation environment. Further, ergodic rate expressions, which can be computed numerically, are formulated in order to study throughput performance in different densification regions. To validate the theoretical analysis, numerical, system level simulation and realistic network analyses were used. The analytical, simulation, and realistic test case results provide insights for operators about the densification ranges where coupled or decoupled association should be used. Finally, the research work focused on solutions for UEs with high data rate demands and on mobility management. With Multiple Association (MA), user-centric clustering, control- and user-plane split usages were designed and investigated. Mobility management approaches in Long Term Evolution Advanced (LTE-A)/Fifth Generation (5G) and MA were used. The scheme attempts to treat UEs separately based on their speed by setting some predefined thresholds. In addition, a clustering approach, which produces virtual cells with which UEs get associated, was developed. Combining MA with clustering enhances cooperation between the most appropriate cells to serve a given UE. The findings indicate that the issues were addressed in an efficient and effective manner.
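    The load-aware offsetting idea in the first contribution amounts to biased cell selection; the sketch below contrasts plain max-RSS association with association using a per-tier offset (the received powers and the small-cell bias are assumed example numbers).
    ```python
    # Sketch: max-RSS association vs. biased (offset) association in a two-tier
    # HetNet. Received powers and the small-cell bias are assumed example values.
    RX_POWER_DBM = {"macro-1": -78.0, "pico-3": -84.0, "pico-7": -90.0}
    TIER_OFFSET_DB = {"macro": 0.0, "pico": 9.0}    # load-aware bias toward small cells

    def tier(cell):
        return cell.split("-")[0]

    def associate(rx_dbm, offsets=None):
        """Pick the serving cell by (biased) received signal strength."""
        offsets = offsets or {}
        return max(rx_dbm, key=lambda c: rx_dbm[c] + offsets.get(tier(c), 0.0))

    print("max-RSS :", associate(RX_POWER_DBM))                   # macro wins
    print("biased  :", associate(RX_POWER_DBM, TIER_OFFSET_DB))   # pico-3 offloads the UE
    ```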
  • Item
    Peak Hour Mobile Core Network Data Traffic Analysis to Improve Network Quality Using Flow Based Method: The Case of Ethio-Telecom
    (Addis Ababa University, 2021-10) Mahlet, Merid; Yihenew, Wondie (PhD)
    It is known that the telecom industry is one of the core areas in a country's sustainability and growth, so it is important that great emphasis be given to deploying the necessary infrastructure in different areas, maintaining the existing available resources, and upgrading the already existing networks as necessary. Once the basic layout is done, it is also equally important that the necessary follow-up is done to give solutions to problems raised by customers from time to time. One of the biggest reasons that lead to customer complaints is poor quality of service, which results in dissatisfaction of customers' needs. One way to give a solution to this is to carry out a network traffic analysis. In this thesis, a data traffic analysis is done on the Ethio Telecom core network. Data captured from its network is used as an input in order to firstly identify the peak hour during the day, because this is the time when there is the most communication and transmission. The peak hours of each day are recorded and then the average is taken for the purpose of this study. In general, over the sampled data, the peak hour is found to be at 21:06. For this work, identification of the peak hour is necessary because this thesis focuses on the traffic analysis during the peak hour, and for the work to be thorough and confirmed, identification of the busiest hour of the day is needed first. After that, by filtering out the data at the peak hour, the Key Performance Indicators, Packet Loss Ratio in percentage (%) and throughput (packets/sec), are studied from the captured data in order to see how exactly the system is working. To do so, two approaches are used. First, the cumulative distribution functions of the data are fitted against different traffic analysis distribution models. Out of the selected distribution models, it is seen that our data best fits the Normal and Gamma distributions. For better accuracy, the RMSE (Root Mean Square Error) is calculated for each of them. Second, the KPIs for the peak hour and the slow hour are compared. From the sampled data, for both PLR and throughput, the number of packets being lost is higher during the peak hour compared to that of the slow hour by 37%. Despite this, the Packet Loss Ratio recorded for both the peak hour and the slow hour is less than 1%, which is within the acceptable threshold range. Similarly, for the number of packets received per second, during the peak hour the minimum downlink and uplink throughputs exceed those of the slow hour by 15.4% and 11.9% respectively, and for the uplink throughput by 16% and 12.5% respectively. Finally, from the analysis results, it is seen that the network works fine with only a very minor glitch, which is expected from a real-life operating network.
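    A short sketch of the distribution-fitting step described above: the empirical CDF of a synthetic peak-hour metric is compared against fitted Normal and Gamma CDFs using RMSE; the sample data is simulated, not the captured traffic.
    ```python
    # Sketch: fit Normal and Gamma distributions to a traffic metric and compare
    # their CDFs to the empirical CDF via RMSE. The sample data is synthetic.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    sample = rng.gamma(shape=4.0, scale=2.0, size=2000)   # synthetic metric values

    x = np.sort(sample)
    ecdf = np.arange(1, len(x) + 1) / len(x)              # empirical CDF

    fits = {
        "Normal": stats.norm(*stats.norm.fit(sample)),
        "Gamma": stats.gamma(*stats.gamma.fit(sample)),
    }
    for name, dist in fits.items():
        rmse = np.sqrt(np.mean((dist.cdf(x) - ecdf) ** 2))
        print(f"{name:6s} RMSE = {rmse:.4f}")
    ```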
  • Item
    Performance Analysis of Spectral Efficiency for 5G Enhanced Mobile Broadband Network with Massive MIMO
    (Addis Ababa University, 2022-01) Kahsay, Nguse; Yihenew, Wondie (PhD)
    Due to the increase in the number of users and applications, technology continues to improve. Wireless mobile communications need high data rates and capacity at the same time. Future generation wireless communication will have to deal with some basic requirements for serving a large number of users with high throughput. The fifth generation (5G) network needs to evolve in order to increase capacity beyond that of fourth-generation networks by 2025. In practice, inter-user interference in multi-cell networks has an impact when more users access the wireless network and reduces the performance of the system. This thesis explains the basic motivations behind Massive MIMO technology in application to the 5G enhanced mobile broadband network, and provides an analysis of spectral efficiency in different propagation environments. First, lower-bound SE expressions are derived to enable efficient system-level analysis under LoS and NLoS propagation environments, under the assumption that channel state information is acquired by using pilot sequences (reused across the network), with densification of BSs so as to improve the SE for UEs. Simulations are used to show what happens to the SE for different path loss models, numbers of BS antennas M, and numbers of UEs K under these propagation environments. The numerical analysis shows that the SE as a function of BS density achieves its maximum for a relatively small density of BSs, irrespective of the processing scheme used. This is different from the distance-independent path loss model, in which the SE is a non-decreasing function of BS density. ZF processing is found to be a good compromise between complexity and spectral efficiency performance, and is then used to optimize, for a given BS density, the pilot reuse factor and the numbers of BS antennas and UEs.
  • Item
    Performance Evaluation of Precoding Techniques for 5G Massive MIMO Downlink System
    (Addis Ababa University, 2021-06) Beza Shewanega; Yihenew, Wondie (PhD)
    Massive multiple-input-multiple-output (MIMO) systems use a few hundred antennas to simultaneously serve many wireless broadband terminals. Using sophisticated coding at the transmitter and substantial signal processing at the receiver, the MIMO channel can be provisioned for higher data rates, resistance to multipath fading, lower delays, and support for multiple users. In multi-user MIMO, a multi-antenna transmitter communicates simultaneously with multiple receivers (each having one or multiple antennas). This is known as space-division multiple access (SDMA), and precoding algorithms are essential for supporting multi-stream (or multi-layer) transmission in multi-antenna wireless communications. Since the research aim is to find key options to increase the performance of the upcoming 5G wireless system, this work focuses on one of these options, namely downlink precoding techniques for the massive MIMO system, assuming that both the base station and the user terminals are equipped with an antenna array. Precoding algorithms for SDMA systems can be sub-divided into linear and nonlinear precoding types. The capacity-achieving algorithms are nonlinear, but linear precoding approaches usually achieve reasonable performance with much lower complexity. This work presents a comparative study of different linear precoding techniques for massive MIMO wireless systems. The performance of the precoding schemes is evaluated and compared with an iterative precoding scheme designed to provide a maximum achievable rate gain by exploiting the expanded spatial degrees of freedom.