Browsing by Author "Yihenew Wondie (PhD)"
Now showing 1 - 8 of 8
Item Enhancement of Detection using Hybrid of Energy and Cyclostationary Detection Algorithm (Addis Ababa University, 2022-12) Ruth Shiferaw; Yihenew Wondie (PhD)

The growing demand for bandwidth-hungry wireless technologies has led to spectrum scarcity, while fixed spectrum assignment leaves the spectrum underutilized: a great portion of the licensed bands is not effectively used. Cognitive radio technology promises a solution by allowing unlicensed users opportunistic access to the licensed bands. A prime component of cognitive radio is spectrum sensing, and many sensing techniques have been developed to detect the presence of a licensed user. Among them, the energy detector is a well-known technique that compares the received signal energy against a threshold to decide whether the licensed user is occupying the channel; at low SNR, however, its performance is poor. This thesis evaluates the performance of a hybrid cyclostationary and energy spectrum sensing technique in noisy and fading environments. The energy detection technique was evaluated using detection probability versus SNR, receiver operating characteristic (ROC), complementary ROC (CROC), and detection probability versus false-alarm probability curves over additive white Gaussian noise (AWGN) and Rayleigh fading channels. The detection time of the proposed hybrid detector was also analyzed. Results show that the hybrid technique outperforms energy detection on all evaluation metrics in both AWGN and fading channel models; the cost is detection time, which increases as the probability of detection increases.
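As a rough illustration of the two-stage idea behind such a hybrid detector, the following Python sketch applies a cheap energy test first and falls back to a toy correlation feature, a crude stand-in for a true cyclostationary test, only when the energy statistic is inconclusive. The thresholds, signal model, and fallback rule are assumptions made for the sketch, not the detector design used in the thesis:

    import numpy as np

    rng = np.random.default_rng(0)

    def energy_statistic(x):
        # Average received energy, the classic energy-detector statistic.
        return np.mean(np.abs(x) ** 2)

    def correlation_feature(x, lag=1):
        # Toy stand-in for a cyclostationary feature: samples within one
        # symbol of the primary signal are correlated, pure noise is not.
        return np.abs(np.mean(x[:-lag] * np.conj(x[lag:])))

    def hybrid_detect(x, e_thresh=1.0, c_thresh=0.05):
        e = energy_statistic(x)
        if e > 1.5 * e_thresh:    # clearly occupied: energy alone decides
            return True
        if e < 0.5 * e_thresh:    # clearly idle
            return False
        return correlation_feature(x) > c_thresh  # ambiguous: second stage

    # BPSK-like primary signal (8 samples per symbol) in complex AWGN at -5 dB SNR.
    n, sps = 4096, 8
    noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    symbols = np.repeat(rng.choice([-1.0, 1.0], n // sps), sps)
    rx = 10 ** (-5 / 20) * symbols + noise
    print("occupied channel detected:", hybrid_detect(rx))     # True with this seed
    print("idle channel detected:   ", hybrid_detect(noise))   # False with this seed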
Item Game Theoretic Framework Based Energy Efficient RAN Infrastructure Sharing in Multi-Operator Mobile Networks (Addis Ababa University, 2023-10) Tewodros Misgan; Yihenew Wondie (PhD)

Recently, cellular base station (BS) switch-off and infrastructure sharing techniques have been introduced as capable solutions for energy and cost reduction. The rising number of base stations results in redundant energy usage, particularly during low-traffic periods when BS capacity is underutilized. In this thesis, we study a roaming-based infrastructure sharing problem among three mobile network operators (MNOs): during low-traffic periods, these operators switch off their BSs and roam their traffic to active neighboring BSs operated by other MNOs in a specified geographical area. Our goal is to save energy and increase the system capacity of the MNOs while maintaining QoS. Throughout the thesis we propose mathematical models based on novel non-cooperative game-theoretic strategies to model and evaluate the competitive interactions among MNOs. The strategy is expressed as an integrated cost-based objective function, applied under different scenarios to determine which base stations should remain active; in this method, operators share infrastructure and dynamically roam their traffic to maximize throughput. We implement the Tit-for-Tat (TFT) algorithm so that each operator shares a portion of its network, optimize it for three MNOs operating in the same area with varying traffic loads and distances (D) between UEs and BSs, and tune the TFT algorithm's hyperparameters. A MATLAB simulation tool was employed to evaluate the performance of the proposed scheme. Simulation results for different scenarios quantify the financial benefits of the energy savings: compared to the baseline approach, the proposed scheme offers MNOs a BS switch-off probability of up to 50% and throughput gains of up to 100 Mbps under roaming-based traffic load variations.
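The tit-for-tat logic can be sketched as a toy round-based simulation. The thesis models three MNOs with an integrated cost-based objective; the two-operator setup, energy figures, load threshold, and cooperation rule below are illustrative assumptions only:

    import random

    random.seed(1)

    ACTIVE, SLEEP = 1.0, 0.1   # assumed per-round BS energy units
    LOW_LOAD = 0.3             # below this, an operator wants to switch off

    hosted_last = {"A": True, "B": True}   # TFT opens cooperatively
    energy = {"A": 0.0, "B": 0.0}

    for _ in range(24):                    # 24 hourly rounds
        load = {op: random.random() for op in energy}
        # Switch off only when traffic is low AND the peer cooperated last
        # round, so its BS can be trusted to stay up and host roamed users.
        off = {op: load[op] < LOW_LOAD and hosted_last[peer]
               for op, peer in (("A", "B"), ("B", "A"))}
        for op in energy:
            energy[op] += SLEEP if off[op] else ACTIVE
            hosted_last[op] = not off[op]  # staying on counts as cooperating

    print({op: round(e, 1) for op, e in energy.items()},
          "(always-on baseline: 24.0 each)")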
Item ILP Based Optimal BBU Pool Planning for Cloud-RAN Deployment: in the Context of Ethiotelecom (Addis Ababa University, 2023-08) Ruhama Girma; Yihenew Wondie (PhD)

Cloud radio access network (C-RAN) is a novel mobile network architecture that decouples baseband units (BBUs) from their corresponding cell sites and moves baseband processing to a virtualized, shared central location. This increases the scalability and manageability of wireless networks and significantly reduces both the capital and operational costs of mobile network operators (MNOs). Although BBU centralization enables power savings, it imposes much higher bandwidth demands on the fronthaul network. Because a distributed radio access network (D-RAN) incurs higher building, maintenance, and operational costs, a RAN architectural change is needed; in addition to previous research in this area, this thesis examines the impact of wireless interface delay on BBU centralization. We investigate the optimal planning of BBU pools for C-RAN deployment in the context of Ethiotelecom, show how to determine the optimal number of BBU pools for different use cases with an integer linear programming (ILP) optimization problem, and calculate the budget required to deploy a single BBU pool. We also identify which functional split is cost-effective for the fronthaul network while preserving the benefits of centralization. Simulation results show that wireless interface delay affects BBU centralization and should therefore be considered in the BBU pool planning process. The number of BBU pools required also varies with the use case: for the chosen area, ultra-reliable and low latency communication (URLLC) services require 5 BBU pools and real-time functions such as voice over long-term evolution (VoLTE) require 3, while for latency-insensitive applications such as email, 2 BBU pools are sufficient to meet the required performance. Furthermore, split option 7.2 was selected as the cost-effective functional split for the fronthaul.
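A minimal facility-location style sketch of such a pool-planning ILP, written with the PuLP library, is shown below. The instance is an assumed toy (distance in km as a crude proxy for fronthaul delay, invented capacities); the thesis's full model, which also covers functional splits and budget, is not reproduced:

    from pulp import (LpBinary, LpMinimize, LpProblem, LpVariable,
                      PULP_CBC_CMD, lpSum, value)

    # Assumed toy instance: 6 cell sites, 3 candidate pool locations.
    DIST = [
        [2, 12, 15],
        [4, 9, 14],
        [8, 3, 11],
        [12, 2, 9],
        [15, 6, 4],
        [13, 11, 3],
    ]
    SITES, POOLS = range(len(DIST)), range(len(DIST[0]))
    CAPACITY = 3      # max sites one pool can host (assumed)
    DELAY_KM = 10     # delay budget expressed as a max distance (assumed)

    prob = LpProblem("bbu_pool_planning", LpMinimize)
    open_pool = [LpVariable(f"open_{p}", cat=LpBinary) for p in POOLS]
    assign = [[LpVariable(f"x_{s}_{p}", cat=LpBinary) for p in POOLS]
              for s in SITES]

    prob += lpSum(open_pool)                        # minimise pools deployed
    for s in SITES:
        prob += lpSum(assign[s]) == 1               # every site gets one pool
        for p in POOLS:
            prob += assign[s][p] <= open_pool[p]    # only to an opened pool
            if DIST[s][p] > DELAY_KM:
                prob += assign[s][p] == 0           # respect the delay budget
    for p in POOLS:
        prob += lpSum(assign[s][p] for s in SITES) <= CAPACITY * open_pool[p]

    prob.solve(PULP_CBC_CMD(msg=False))
    print("pools opened:", int(value(prob.objective)))
    for s in SITES:
        for p in POOLS:
            if assign[s][p].value() == 1:
                print(f"site {s} -> pool {p}")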
Item LTE Network Coverage Hole Detection Technique using Random Forest Classifier Machine Learning Algorithm (Addis Ababa University, 2023-10) Serkadis Bahiru; Yihenew Wondie (PhD)

For a mobile operator, the most significant tasks are assessing cellular network performance and evaluating quality of service (QoS). Monitoring and measuring a Long-Term Evolution (LTE) network is therefore necessary to obtain the statistics used to optimize network performance in a particular area. Measurement is an effective way to qualify and quantify how the network is used and how it behaves, and it helps find voids, or uncovered areas, known as coverage holes in the LTE network. A coverage hole is an area in which a client is unable to receive a wireless network signal. Traditionally, coverage holes are detected through drive tests, which cost resources and time. To identify coverage holes in the LTE network, this study proposes a random forest classifier machine learning (ML) approach. The data was gathered at low cost from user equipment (UE) via the minimization of drive tests (MDT) functionality, which measures Reference Signal Received Power (RSRP), Reference Signal Received Quality (RSRQ), and Signal to Interference and Noise Ratio (SINR). RSRP and RSRQ measure the strength and quality of the reference signal received at the device's antenna, whereas SINR relates the desired signal strength to the total power of all interfering signals and noise. The trained model achieved an accuracy of 96.3%; applying hyperparameter optimization (HPO), namely random search and grid search cross-validation (CV), increased the accuracy to 97.7% and 99.2%, respectively.

Item Performance Comparison of Machine Learning-Based Dynamic Resource Allocation Methods for LTE-A Using Realistic Data (Addis Ababa University, 2023-09) Tiblets Kinfe; Yihenew Wondie (PhD)

In the last few years, fourth-generation (4G) mobile networks have become much more complex due to deployments of macro, micro, pico, and femtocells to boost network capacity and quality of service (QoS). The Long-Term Evolution Advanced (LTE-A) system is a 4G wireless communication technology that offers mobile devices high-speed data transmission, increased capacity, and improved network performance. While LTE-A offers improved capabilities, it also poses unique resource allocation problems, among them heterogeneous network deployment, diverse QoS requirements, and dynamic traffic patterns. Traditional resource management approaches usually rely on static strategies or predefined rules and are insensitive to current network demand. To allocate resources effectively based on current network conditions, resource allocation must be adaptive and dynamic, and machine learning and artificial intelligence make it possible to develop adaptive methods that can handle the dynamic nature of LTE-A networks. This thesis evaluates the performance of machine learning-based resource allocation strategies in LTE-A networks using realistic data. Four strategies are compared: Long Short-Term Memory (LSTM), a combined Convolutional Neural Network and Long Short-Term Memory (CNN-LSTM) model, Random Forest (RF), and K-Nearest Neighbors (KNN). Performance measures include resource allocation accuracy, Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and speed of convergence (running time). The results show that LSTM achieves the highest resource allocation accuracy, 98.56%, while random forest attains the lowest RMSE and MAE, 0.294 and 0.08; KNN has the shortest running time, 1.58 seconds.
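A minimal scikit-learn sketch in the spirit of the two machine-learning theses above: a random forest classifier tuned with grid-search cross-validation, the HPO step of the coverage-hole study, trained on synthetic stand-in features named after the MDT measurements. The real work uses measured data; the labelling rule here is invented purely to make the toy runnable:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import GridSearchCV, train_test_split

    rng = np.random.default_rng(42)

    # Synthetic stand-in for MDT measurements: RSRP (dBm), RSRQ (dB), SINR (dB).
    n = 2000
    rsrp = rng.uniform(-130, -70, n)
    rsrq = rng.uniform(-20, -3, n)
    sinr = rng.uniform(-10, 30, n)
    X = np.column_stack([rsrp, rsrq, sinr])
    # Assumed toy labelling rule: weak and noisy samples count as a coverage hole.
    y = ((rsrp < -110) & (sinr < 0)).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y)

    # Grid-search cross-validation over a small random-forest grid.
    search = GridSearchCV(
        RandomForestClassifier(random_state=0),
        param_grid={"n_estimators": [50, 100], "max_depth": [None, 5, 10]},
        cv=5,
    )
    search.fit(X_tr, y_tr)
    print("best params:", search.best_params_)
    print("test accuracy:", accuracy_score(y_te, search.predict(X_te)))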
Item Performance Comparison of Multi-Mode Modulation Techniques for SDR Using FPGA (Addis Ababa University, 2023-11) Sisay Bogale; Yihenew Wondie (PhD)

Radio devices that were previously built in fixed hardware have in recent years been replaced by reconfigurable software defined radio (SDR) systems. Conventional hardware-based radios offer restricted multi-functionality and can only be modified physically, which increases production expenses and reduces the number of waveform standards that can be supported. SDR technology provides a rapid and affordable answer to this issue by enabling software upgrades for multi-mode, multi-band, and multi-functional wireless devices. In SDR, different modulation techniques are used to achieve efficient communication over a radio channel, and multi-mode modulation allows multiple modulation schemes to be used in a single system, which enhances the flexibility and resilience of communication systems. This thesis presents the design and implementation of multi-mode modulation techniques for SDR on an FPGA and analyzes the performance in terms of FPGA resource utilization. Six modulation schemes are combined into a multi-mode modulation system: QASK, QPSK, QAM, AM, PM, and FM. Performance is evaluated in terms of FPGA resources such as total computational power, the number of look-up tables (LUTs) used, flip-flops (FFs), and input/output (IO) port usage. Xilinx Vivado System Generator for DSP with MATLAB/Simulink is used to design, simulate, and verify the multi-mode modulator, which is then implemented on Xilinx Zedboard FPGA hardware. The implemented design uses a total of 0.225 W of power, 844 LUTs, and 1 IO port. The main achievement of this work is the saving in computational power: the design consumes 1.572 W and 1.134 W less than two previous studies.

Item Positive Train Control with Headway Optimization on an Active Communication System - a case study of Addis Ababa Light Rail Transit (Addis Ababa University, 2023-08) Mohamed Ali Hussein; Yihenew Wondie (PhD)

Effectively reducing rail transit headway is of enormous practical value, since rail transit plays an increasingly significant part in the public transportation system. Safety, capacity, and timely schedules are among the most important goals in railroad operations, and the idea behind positive train control (PTC) is to use cutting-edge information technologies to increase the safety and effectiveness of those operations. Besides increasing the average waiting time, headway irregularity may also result in additional energy consumption and more delay time. Active communications and other information technologies enable the deployment of a dynamic headway, which can increase track capacity and dispatching effectiveness while also enhancing safety. Minimum headway, the shortest interval at which a following train can safely track the lead train, is considered one of the main factors restricting operational capacity; it must leave the station enough time to carry out the requisite train arrival and departure operations while keeping trains operating in the area safe. Better headways can be achieved without increasing operational speed: the Addis Ababa LRT currently uses a 15-minute headway at average speeds of 20-70 km/h, and the calculated results show that this significant headway can be reduced by up to 4 minutes without slowing down operations. Dynamic headway control, however, cannot be represented using existing modeling techniques that assume fixed headways, especially when the dynamic headway comprises only a small portion of a single block.
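As a back-of-the-envelope illustration of why a fixed headway leaves slack, the sketch below computes a moving-block minimum headway from braking distance, a safety margin, and dwell time. All parameter values are assumptions for the sketch, not figures from the thesis's PTC model:

    # Minimal moving-block headway sketch under assumed parameters.
    def min_headway_s(v_kmh, brake_ms2=1.0, train_len_m=60.0,
                      margin_m=50.0, dwell_s=30.0, react_s=5.0):
        """Time for a follower to stay one braking distance plus a safety
        margin behind the leader, plus dwell and reaction allowances."""
        v = v_kmh / 3.6                          # convert to m/s
        brake_dist = v ** 2 / (2 * brake_ms2)    # stopping distance
        spacing = brake_dist + margin_m + train_len_m
        return spacing / v + dwell_s + react_s

    for v in (20, 40, 70):                       # LRT's quoted speed range
        print(f"{v} km/h -> {min_headway_s(v):.0f} s minimum headway")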
Item Power Control and Resource Allocation for Performance Optimization in D2D Underlaid Massive MIMO System (Addis Ababa University, 2022-09) Abi Abate; Anna Förster (Prof.); Yihenew Wondie (PhD)

The desire to support the ever-increasing demand for wireless broadband service and a broad range of Internet of Things (IoT) applications, which fourth-generation (4G) wireless networks are struggling to deliver, necessitates a new fifth-generation (5G) network architecture. The emerging 5G network should employ multiple advanced networking solutions to overcome the challenges posed by dynamic service quality requirements; since no single technology can achieve the diverse set of 5G requirements, the challenges and benefits of integrating multiple technologies in one system are worth investigating. The need to communicate with low latency also requires a fundamental shift from centralized resource management and interference coordination toward distributed approaches, in which intelligent devices can rapidly make resource management decisions. In this thesis, we propose a distributed resource management and interference coordination scheme that optimizes network performance for the coexistence of two technologies identified as competent candidates for meeting the challenging 5G performance criteria: massive multiple-input multiple-output (MIMO) and network-assisted device-to-device (D2D) communication. First, we formulate a two-tier heterogeneous network with two user types: the first tier serves cellular users in the uplink through a multi-antenna base station (BS), while the second tier serves D2D users, which exploit their proximity and transmit simultaneously with the uplink cellular users, bypassing the multi-antenna BS. We then formulate the throughput and energy efficiency optimization problems as nonlinear optimization problems. To realize a distributed solution, we model each optimization problem as a matching game and propose a resource allocation scheme based on matching theory. The analysis reveals that the proposed distributed resource assignment and interference coordination scheme achieves more than 88% of the ASR and 85% of the EE performance of the optimal result. Next, we propose a three-stage distributed solution to enhance the sum rate and analyze the impact of joint channel assignment and power control. In the first stage, we model the channel assignment problem as a matching game: each D2D pair sends its preferred channel request to the BS, and the BS accepts the most preferred request. In the second stage, we model the power allocation problem as a non-cooperative game in which each D2D pair optimizes its utility according to its sidelink quality and interference channel gain, limiting D2D-to-cellular interference. Finally, in the third stage, the algorithm considers the peer effect, searching for blocking pairs until a stable matching is established. The performance of the proposed schemes is investigated as a function of the number of BS antennas and of cellular and D2D users, and compared with random and optimal counterparts. The numerical results show that the joint optimization of channel assignment and power control enhances the sum-rate performance by 16% over channel assignment with a binary power allocation scheme, in which D2D pairs either transmit at full power or are turned off completely. In general, the extra degrees of freedom resulting from multiple antennas at the BS are highly desirable in the design of future D2D-enabled massive MIMO networks, as many sidelink users can be multiplexed and inter-user interference can be controlled.
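The first-stage channel assignment can be sketched as a simple proposal-based matching between D2D pairs and channels. In the toy Python sketch below, pairs propose channels in order of achievable rate and the BS keeps whichever pair inflicts less interference on the cellular tier; the rate and interference values are random stand-ins, and the thesis's power-control and blocking-pair stages are omitted:

    import numpy as np

    rng = np.random.default_rng(7)

    n_pairs, n_channels = 4, 4
    rate = rng.uniform(1, 10, (n_pairs, n_channels))          # pair's rate per channel
    interference = rng.uniform(0, 1, (n_pairs, n_channels))   # harm to cellular tier

    # Each D2D pair prefers channels by achievable rate, best first.
    prefs = [list(np.argsort(-rate[d])) for d in range(n_pairs)]

    match = {}                     # channel -> accepted D2D pair
    free = list(range(n_pairs))    # pairs still proposing
    while free:
        d = free.pop(0)
        for c in prefs[d]:
            if c not in match:                 # free channel: take it
                match[c] = d
                break
            rival = match[c]
            if interference[d, c] < interference[rival, c]:
                match[c] = d                   # BS keeps the gentler pair
                free.append(rival)             # displaced rival re-proposes
                break

    for c, d in sorted(match.items()):
        print(f"channel {c} -> D2D pair {d} (rate {rate[d, c]:.1f})")

Each channel's occupant can only change to a pair with strictly lower interference on that channel, so the loop terminates in a stable assignment for this toy instance.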