Computer Engineering
Browsing Computer Engineering by Title
Now showing 1 - 20 of 219
Item A Video Coding Scheme Based on Bit Depth Enhancement With CNN (Addis Ababa University, 2023-06) Daniel Getachew; Bisrat Derebssa (PhD)
Raw or uncompressed videos consume substantial storage and bandwidth. Video compression algorithms reduce the size of a video, and many have been proposed over the years. Video coding schemes have also been proposed that work on top of existing compression algorithms by down-sampling before encoding and restoring the video to its original form after decoding, for further bitrate reduction. Down-sampling can be applied to spatial resolution or bit depth. This paper presents a new video coding scheme based on bit-depth down-sampling before encoding, with a CNN restoring the bit depth at the decoder. Unlike previous approaches, the proposed approach exploits the temporal correlation between consecutive frames of a video sequence by dividing the frames into key frames and non-key frames and applying bit-depth down-sampling only to the non-key frames. At the decoder, the non-key frames are reconstructed by a CNN that takes both the key frames and the non-key frames as input. Experimental results showed that the proposed bit-depth enhancement CNN model improved the quality of the restored non-key frames by an average of 1.6 dB PSNR over the previous approach, before integration into the video coding scheme. When integrated into the video coding scheme, the proposed approach achieved better coding gain, with an average of -18.7454% in Bjøntegaard Delta measurements.

Item Acceleration of Preprocessors of the Snort Network Intrusion Detection System Using General Purpose Graphics Processing Unit (Addis Ababa University, 2015-04) Yihunie, Simegnew; Assamnew, Fitsum
Advances in networking technologies enable interactions and communications at high speeds and large data volumes. However, securing data and the infrastructure has become a major issue.
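The bit-depth down-sampling step described in the video-coding entry above can be sketched in a few lines. This is a minimal illustration, not the thesis's scheme: the shift-based restoration here is only a stand-in for the CNN that reconstructs the non-key frames at the decoder, and the toy 10-bit frame is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
frame = rng.integers(0, 1024, size=(4, 4), dtype=np.uint16)  # toy 10-bit frame

def downsample_bit_depth(frame, bits=2):
    # Drop the low-order bits before encoding (e.g. 10-bit -> 8-bit).
    return frame >> bits

def restore_bit_depth(frame, bits=2):
    # Placeholder for the CNN restoration at the decoder: a plain shift back.
    return frame << bits

restored = restore_bit_depth(downsample_bit_depth(frame))
```

The shift discards the two least-significant bits, so the round trip loses at most 3 code values per pixel; the thesis's contribution is replacing the naive shift-back with a learned restoration that also exploits the neighboring key frames.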
Intrusion Detection Systems such as Snort play an important role in securing the network. Intrusion detection systems are used to monitor networks for unauthorized access. Snort has a packet decoder, a preprocessor, a detection engine, and an alerting system. The detection engine is the most compute-intensive part, followed by the preprocessor. Previous work has shown how general-purpose graphics processing units (GP-GPU) can be used to accelerate the detection engine. This work focused on the preprocessors of Snort, specifically the stream5 preprocessor, as profiling revealed it to be the most time-consuming of the preprocessors. The analysis shows that the individual implementation of stream5 using the Compute Unified Device Architecture (CUDA) achieved up to five times speedup over the baseline. An overall 15.5 percent speedup on the Defense Advanced Research Projects Agency (DARPA) intrusion detection dataset was also observed when integrated into Snort. Keywords: Intrusion Detection System, Snort, Graphics Processing Unit, CUDA, Parallelization, Porting, Preprocessor.

Item Acceleration and Energy Reduction of Object Detection on Mobile Graphics Processing Unit (Addis Ababa University, 2019-06) Fitsum, Assamnew; Jonathan, Rose (Prof.); Dereje, Hailemariam (PhD)
The evolution of high-performance computing in today's smartphones is enabling their use in compute-intensive applications. As the compute requirement increases, the energy required cannot increase in proportion, because the cost of providing and cooling that energy would become prohibitive. An alternative, potentially power-reducing approach is to use graphics processing units or special accelerator cores. Today's smartphones are equipped with system-on-chip (SoC) devices that house many cores, such as graphics processing units, digital signal processors, and special multimedia encoder/decoder hardware, alongside multi-core central processing units.
Their inclusion enables applications that require greater computational power, such as real-time computer vision. In this work, we study the capability of the recently introduced general-purpose graphics processing unit (GPU) in a smartphone SoC to enable energy-efficient object detection. This includes understanding the architecture of the GPUs used (the Adreno 320 and Adreno 420 from Qualcomm), implementing and optimizing the object detection algorithm from the Open Computer Vision library (OpenCV) on these GPUs, and measuring the energy consumption of the implementation. We implemented Viola-Jones-based object detection on the GPU of an Android tablet. The implementation is on average 35% faster than the same algorithm running on the CPU of the same device, and it reduces average energy consumption by 68% compared to that CPU. An application that uses the object detector on the mobile GPU to detect Ringworm skin disease was developed; a classifier trained for this application has an accuracy of 75%.

Item Acceleration of Convolutional Neural Network Training using Field Programmable Gate Arrays (Addis Ababa University, 2022-01) Guta, Tesema; Fitsum, Assamnew (PhD)
Convolutional neural network (CNN) training often necessitates a considerable amount of computational resources. In recent years, several studies have proposed CNN inference and training accelerators, for which FPGAs have demonstrated good performance and energy efficiency. CNN processing demands substantial resources, including memory bandwidth, FPGA platform resources, time, and power. In addition, training a CNN requires large datasets and computational power, and it is constrained by the need for improved hardware acceleration to scale beyond existing data and model sizes.
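The Viola-Jones detector used in the object-detection entry above rests on integral images, which make any rectangular pixel sum a constant-time, four-lookup operation. A minimal NumPy sketch of that core idea (not the OpenCV/GPU implementation from the thesis; the toy image is an assumption):

```python
import numpy as np

def integral_image(img):
    # ii[y, x] = sum of img[:y+1, :x+1]; cumulative sum over rows then columns.
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    # Sum of img[top:bottom+1, left:right+1] from four corner lookups.
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = np.arange(16).reshape(4, 4)  # toy 4x4 image
ii = integral_image(img)
```

Haar-like features are differences of such rectangle sums, which is why the cascade can evaluate thousands of them per window in real time.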
In this study, we propose a procedure for energy-efficient CNN training with an FPGA-based accelerator. We employed optimizations such as quantization, a common model compression technique, to speed up the CNN training process. Additionally, a gradient accumulation buffer is used to ensure maximum operating efficiency while preserving the gradient descent of the learning algorithm. To validate our design, we implemented the AlexNet and VGG16 models on an FPGA board and on a laptop CPU and GPU. Our designs achieve 203.75 GOPS on the Terasic DE1-SoC with the AlexNet model and 196.50 GOPS with the VGG16 model on the same board. This, as far as we know, outperforms existing FPGA-based accelerators. Compared to the CPU and GPU, our design is 22.613x and 3.709x more energy efficient, respectively.

Item Adaptive Antenna Array Algorithms and Their Impact on Code Division Multiple Access Systems (CDMA) (Addis Ababa University, 2004-03) Hadgu, Dereje; Abdo, Mohammed (PhD)
In mobile communications there is a need to increase channel capacity. The increasing demand for mobile communication services without a corresponding increase in RF spectrum allocation motivates new techniques to improve spectrum utilization. CDMA and adaptive antenna arrays are two approaches that show real promise for increasing spectrum efficiency. This research focuses on the application of adaptive arrays to Code Division Multiple Access (CDMA) cellular systems. The adaptive antenna has an intelligent control unit, so the antenna can follow the user, direct the radiation pattern towards the desired user, adapt to varying channel conditions, and minimize interference. Therefore, several users can share the same channel in the same cell.
The driving force of this intelligent control unit is a special kind of algorithm, and we investigate the performance of different adaptive array algorithms in CDMA systems. Four such blind adaptive array algorithms are developed, and their performance under different test situations (e.g., an AWGN (Additive White Gaussian Noise) channel and a multipath environment) is studied. A MATLAB test bed is created to show their performance in these two test situations so that an optimum algorithm can be selected.

Item Adaptive modulation based cooperative MIMO in fading channel for future wireless technology (Addis Ababa University, 2017-03) Ahmed, Niema; Ridwan, Murad (PhD)
With the rapid growth of multimedia services, future generations of cellular communications require higher data rates and more reliable transmission links while keeping satisfactory quality of service. The data rate and reliability of wireless communication links can be improved by employing multiple antennas at both ends, creating Multiple-Input Multiple-Output (MIMO) channels. However, the use of multiple antennas in mobile terminals may not be very practical: there is limited space, and other implementation issues make this a challenging problem. Therefore, to harness the diversity gain offered by MIMO transmitter diversity techniques while maintaining a minimal number of antennas on each handset, cooperative diversity techniques have been proposed. The main drawback of cooperative diversity is the throughput loss due to the extra resources needed for relaying. Therefore, cooperative MIMO together with adaptive modulation is used to meet the demands for high data rate and transmission reliability. This thesis presents a performance analysis of cooperative MIMO schemes with adaptive modulation for different detection techniques in a Long Term Evolution network.
In this scheme, each link uses the MIMO Vertical Bell Labs Layered Space-Time architecture over Rayleigh flat-fading channels, and the cooperation strategy uses the amplify-and-forward protocol with one relay node. For cooperative and non-cooperative MIMO, the SNR criteria for switching from one modulation order to the next, attaining maximum spectral efficiency (SE) subject to a target bit-error rate, are determined. The simulation results show that the cooperative MIMO system with adaptive modulation not only compensates for the throughput loss but also achieves considerable throughput gain compared with fixed modulation at comparable complexity. The switching criterion of optimal schemes for adaptive modulation of a cooperative hybrid network with minimum mean square error (MMSE) detection, which has lower complexity than maximum likelihood (ML) detection, is also investigated. As an example, in the downlink scenario, adaptive-modulation-based cooperative and non-cooperative MIMO networks showed optimal SE performance while satisfying the target BER constraint. Keywords: Adaptive Modulation, Cooperative Diversity, MIMO, LTE, SNR, Spectral Efficiency

Item Addressing User Cold Start Problem in Amharic YouTube Advertisement Recommendation Using BERT (Addis Ababa University, 2024-06) Firehiwot Kebede; Fitsum Assamnew (PhD)
With the rapid growth of the internet and smart mobile devices, online advertising has become widely accepted across various social media platforms. These platforms employ recommendation systems to personalize advertisements for individual users. However, a significant challenge for these systems is the user cold-start problem: in a content-based recommendation system, recommending items to new users is difficult because of the lack of historical user preferences.
To address this issue, we propose an Amharic YouTube advertisement recommendation system for unsigned YouTube users, for whom no user information such as past preferences or personal data is available. The proposed system uses content-based filtering and leverages Sentence Bidirectional Encoder Representations from Transformers (SBERT) to establish semantic similarity between YouTube video titles and descriptions and advertisement titles. For this research, 4500 items were collected and preprocessed from YouTube via the YouTube API, along with 500 advertisement titles from advertising and promotional companies. Random samples from these datasets were annotated for evaluation purposes. Our approach achieved 70% accuracy in recommending semantically related Amharic advertisements (Ads) for the corresponding YouTube videos with respect to the annotated data. At a 95% confidence interval, the system demonstrated an accuracy of 58% to 76% in recommending Ads relevant to new users who have no prior interaction history with the Ads on the platform. This approach also enhances privacy by reducing the need for extensive data sharing.

Item Amharic Hateful Memes Detection on Social Media (Addis Ababa University, 2024-02) Abebe Goshime; Yalemzewud Negash (PhD)
A hateful meme is any expression that disparages an individual or a group on the basis of characteristics such as race, ethnicity, gender, sexual orientation, nationality, or religion. Hateful memes have grown into a significant issue for all social media platforms. Ethiopia's government has increasingly relied on the temporary closure of social media sites, but such measures cannot be a permanent solution, which motivates the design of an automatic detection system. These days there are many ways to communicate in chat spaces and on social media, including text, image, audio, text with image, and image with audio.
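The matching step in the cold-start recommendation entry above ranks advertisements by embedding similarity. A minimal cosine-similarity sketch over toy vectors (the SBERT embeddings themselves are assumed precomputed; the vectors below are placeholders, not real embeddings):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_ads(video_vec, ad_vecs):
    # Return ad indices sorted by descending similarity to the video embedding.
    scores = [cosine_similarity(video_vec, v) for v in ad_vecs]
    return sorted(range(len(ad_vecs)), key=lambda i: scores[i], reverse=True)

video = np.array([1.0, 0.0, 1.0])                                # toy video-title embedding
ads = [np.array([1.0, 0.1, 0.9]), np.array([0.0, 1.0, 0.0])]     # toy ad-title embeddings
```

Because cosine similarity needs only the item texts, the ranking works for a brand-new user with no interaction history, which is exactly the cold-start setting the entry addresses.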
Memes, a new and rapidly growing form of data on social media, blend words and images to convey ideas; the message can become ambiguous to the audience if either modality is absent. Previous research on the identification of hate speech in Amharic has focused primarily on textual content. We therefore design a deep learning model that automatically filters hateful memes in order to reduce hateful content on social media. Our model consists of two fundamental components: one for textual features and one for visual features. For textual features, text is extracted from memes using optical character recognition (OCR). The OCR-extracted text is pixel-wise, and the morphologically complex nature of the Amharic language affects the system's ability to handle incomplete or misspelled words, which could limit the detection of hateful memes. To work effectively with OCR-extracted text, we employed a word embedding method that captures the syntactic and semantic meaning of a word, and an LSTM for learning long-distance dependencies between word sequences in short texts. The visual data was encoded using an ImageNet-trained VGG-16 convolutional neural network. In the experiments, the input for the Amharic hateful meme detection classifier combines textual and visual data. The maximum precision was 80.01 percent. Compared to state-of-the-art approaches using memes as a feature on CNN-LSTM, an average F-score improvement of 2.9% was attained.

Item Amharic Named Entity Recognition Using Neural Word Embedding as a Feature (AAU, 2017-10) Dagimawi, Demissie; Surafel, Lemma (PhD)
In this paper, the Amharic Named Entity Recognition problem is addressed with a semi-supervised learning approach based on neural networks. The proposed approach aims to automate manual feature design and avoid dependency on other natural language processing tasks for classification features.
In this work, potential feature information represented as word vectors is generated with a neural network from unlabeled Amharic text files. These generated vectors are used as features for Amharic named entity classification. SVM, J48, random tree, IBk (instance-based learning with parameter k), attribute-selected, and OneR (one rule) classifiers are tested with word vector features. Additionally, BLSTM (bi-directional long short-term memory), LSTM (long short-term memory), and MLP (multi-layer perceptron) deep neural networks are tested to investigate the impact of the proposed approach. In the experiments, the highest F-score achieved was 95.5%, using the SVM classifier. Relative to state-of-the-art approaches (SVM and J48), an average F-score improvement of 3.95% was achieved. The results showed that automatically learned word features can substitute for manually designed features in Amharic named entity recognition, giving better performance while reducing the effort of manual feature design.

Item Amharic Parts-of-Speech Tagger using Neural Word Embeddings as Features (Addis Ababa University, 2019-01) Mequanent, Argaw; Surafel, Lemma (PhD)
Part-of-speech (POS) tagging for the Amharic language is not yet mature enough to be used as an important component of other natural language processing (NLP) applications. Previous studies on Amharic POS taggers used hand-crafted features to develop tagging models. In Amharic, prepositions and conjunctions are usually attached to other parts of speech. This forces the tags to represent more than one piece of basic information and also decreases the total number of instances in the training corpus. In addition, the manual design of features requires more time, more labor, and a linguistic background. In this study, automatically generated neural word embeddings, which are multi-dimensional vector representations of words, are used as features for the development of an Amharic POS tagger.
The vector representations capture syntactic and semantic information about words. As an additional aspect of this study, prepositions and conjunctions attached to other parts of speech are segmented using the HornMorpho morphological analyzer. State-of-the-art deep learning algorithms are used to develop the tagging models: Long Short-Term Memory (LSTM) recurrent neural networks and their bidirectional versions (Bi-LSTM RNNs). The best evaluation result observed is a 93.67% F-measure, obtained from the model developed with a Bi-LSTM recurrent neural network. From the results, it can be observed that word embeddings generated by neural networks can replace manually designed features, which is an important advantage. Segmenting prepositions and conjunctions attached to other parts of speech also improved the accuracy of the POS tagger by more than 5%, owing to the increased total number of instances and the decreased number of tags after segmentation.

Item Amharic Sign Language Recognition based on Amharic Alphabet Signs (Addis Ababa University, 2018-03-16) Nigus, Kefyalew; Menore, Tekeba (Mr.)
Sign language is a natural language mostly used by hearing-impaired persons to communicate with each other. At present, sign language interpreters are used to eliminate the language barriers between people who are hearing impaired and those who are not; however, they are very limited in number, so an automatic sign language recognition system can help narrow the communication gap between hearing-impaired and hearing people. This thesis deals with the development of an automatic Amharic sign language translator that translates Amharic alphabet signs into their corresponding text using digital image processing and machine learning.
The input to the system is video frames of Amharic alphabet signs, and the output is Amharic alphabets. The proposed system has four major components: preprocessing, segmentation, feature extraction, and classification. Preprocessing starts with cropping and enhancement of the frames. Segmentation is applied to isolate hand gestures. A total of thirty-four features are extracted from the shape, motion, and color of hand gestures to represent both the base and derived classes of Amharic sign characters. Finally, classification models are built using a Neural Network and a multi-class Support Vector Machine. The performance of the two models, Neural Network (NN) and Support Vector Machine (SVM), is compared on the combination of shape, motion, and color feature descriptors using ten-fold cross-validation. The system is trained and tested on a dataset prepared for this purpose, covering all base characters and some derived characters of Amharic. The recognition system recognizes these Amharic alphabet signs with 57.82% and 74.06% accuracy using the NN and SVM classifiers, respectively; the multi-class SVM classifier thus performed better than the NN classifier.

Item Amharic Speech Recognition System Using Joint Transformer and Connectionist Temporal Classification with External Language Model Integration (Addis Ababa University, 2023-06) Alemayehu Yilma; Bisrat Derebssa (PhD)
Sequence-to-sequence (S2S) attention-based models are deep neural network models that have demonstrated remarkable results in automatic speech recognition (ASR) research. Among these models, the Transformer architecture has been extensively employed to solve a variety of S2S transformation problems, such as machine translation and ASR.
This architecture does not use sequential computation, which distinguishes it from recurrent neural networks (RNNs) and gives it the benefit of a rapid iteration rate during training. However, according to the literature, the overall training speed (convergence) of the Transformer is slower than that of RNN-based ASR. Thus, to accelerate the convergence of the Transformer model, this research proposes a joint Transformer and connectionist temporal classification (CTC) Amharic speech recognition system. The research also investigates appropriate recognition units (characters, subwords, and syllables) for Amharic end-to-end speech recognition. The accuracy of character- and subword-based end-to-end systems is compared for the target language: the character-based model with a character-level language model (LM) achieves a best character error rate of 8.84%, and the subword-based model with a subword-level LM achieves a best word error rate of 24.61%. Furthermore, the syllable-based end-to-end model achieves a 7.05% phoneme error rate and a 13.3% syllable error rate without integrating any language model.

Item Analysis and Detection Mechanisms of SIM Box Fraud in The Case of Ethio Telecom (Addis Ababa University, 2017-12-12) Frehiwot, Mola; Yalemzewd, Negash (PhD)
Telecommunication fraud can be defined as theft of services (fixed telephone, mobile, data, etc.) or deliberate abuse of voice or data networks. Fraud is one of the most severe threats to revenue and quality of service in telecommunication networks. The advent of new technologies has provided fraudsters with new techniques to commit fraud. SIM box (subscriber identity module box) fraud is one such fraud; it targets international calls and has emerged with the use of VoIP technologies.
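The character, word, and syllable error rates quoted in the speech-recognition entry above are all edit-distance ratios: the minimum number of substitutions, insertions, and deletions needed to turn the hypothesis into the reference, divided by the reference length. A minimal token-level sketch (WER shown; the CER variant simply operates on characters instead of words):

```python
def word_error_rate(reference, hypothesis):
    # WER = (substitutions + insertions + deletions) / reference length.
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```

This is why character- and subword-based systems are not directly comparable on raw numbers: the same transcript yields different denominators and error granularities under each unit.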
In this thesis, call detail records (CDRs) from ethio telecom were organized in order to develop models of normal and fraudulent number behavior via data mining techniques. Four classification algorithms were used: decision trees, rule-based induction, neural networks, and a hybrid algorithm. We first analyzed the data set; for classification, we use nine selected features extracted from the customer database records. The experimental results help in understanding the SIM box fraud problem at ethio telecom and in clarifying the behavior of fraudulent and legitimate calls. The PART rule-based and hybrid (J48 and PART) algorithms performed best among the four: PART achieved an accuracy of 99.4906% with a 0.5094% false positive ratio, followed by the hybrid of J48 and PART with an accuracy of 99.4795% and a 0.5205% false positive ratio.

Item Analysis and Evaluation of Diversity-Multiplexing Tradeoff for Multiple-Antenna Systems in Ultra Wideband (UWB-MAS) and Rake Receiver (Addis Ababa University, 2012-07) Hailu, Gebremariam; Mokuria, Getahun (PhD)
Ultra-wideband (UWB) systems have recently attracted much research interest owing to their appealing features in short-range wireless communications, including high data rates, low power consumption, multiple-access communications, and precise positioning capabilities. On the other hand, multiple-antenna systems (MAS) and space-time coding (STC) techniques, such as space-time block coding (STBC), are well known for their great potential to play a significant role in the design of next-generation broadband wireless communications.
A Multiple-Input Multiple-Output (MIMO) system extends link reliability (spatial diversity, SD) and increases throughput (spatial multiplexing, SM). However, there is a fundamental diversity-multiplexing tradeoff (DMT) between how much of each type of gain any coding scheme can extract. In this thesis, the approach for the multi-antenna system is to obtain the tradeoff between SD and SM gains for UWB technology. A theoretical analysis is conducted to illuminate the DMT for UWB-MAS (UWB-MISO/SIMO), and the performance enhancement of the proposed scheme over the classic single-link scheme is evaluated at finite signal-to-noise ratios (SNRs). The tradeoff curves characterize the achievable SD and SM for a given space-time code at SNRs encountered in practice. A Rake receiver is employed that captures energy, from sequences transmitted from N transmit antennas at M receive antennas, in a subset of the resolvable multipath components. Exact diversity gain expressions are determined for orthogonal space-time block codes (OSTBC). It is shown that the asymptotic diversity gain has an infinite value even with single-antenna systems, and that multi-antenna techniques can be very beneficial in the practical range of signal-to-noise ratios. Comparisons with DMT results in the literature show that codes that are not optimal over Rayleigh fading channels are also not optimal over UWB channels. Keywords: DM, DMT, finite signal-to-noise ratio (SNR), MAS, OSTBC, outage probability, SM, UWB.

Item Analysis of the Key Exchange Method of SSH using Elliptic Curve Cryptography and a Public Key Infrastructure (Addis Ababa University, 2008-02) Hailu, Banchi; Roy, DP (Professor)
SSH, the Secure Shell, is a protocol that allows a user to log into another computer, execute commands on a remote machine, and move files from one machine to another securely over an insecure network.
It provides cryptographic authentication, encryption, and data integrity to secure network communications. Negotiation of the security parameters and authentication of the peers require public key cryptosystems, whose operations are generally slow. To improve the performance of the protocol and make it applicable in both powerful and resource-constrained environments, Elliptic Curve Cryptography is used. In addition, since SSH uses plain public keys to authenticate a remote server, the first-time authentication is always vulnerable to man-in-the-middle attacks. Using a public key certificate as the host key eliminates this vulnerability, but it requires a PKI (Public Key Infrastructure) to support the certificate approach. A PKI may impact the performance of the security protocol, and PKI path validation (certificate revocation status checking) needs more storage capacity, communication cost, and processing time, which makes it hard to scale to a large number of communicating nodes. In this thesis, SSH's key exchange handshake is implemented using Java and the Bouncy Castle cryptographic API. Performance with the RSA (Rivest-Shamir-Adleman) and ECDH_ECDSA (Elliptic Curve Diffie-Hellman with Elliptic Curve Digital Signature Algorithm) key exchange suites is compared for both PKI and non-PKI authentication. Client waiting time (key exchange latency), server key exchange throughput, and revocation status message size are measured for each key exchange suite. Simulation results show that ECC has better processing time and throughput than RSA. Response time and revocation status message size are minimal when Authenticated Dictionaries are used as the certificate status responder.
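The handshake analyzed in the SSH entry above rests on Diffie-Hellman key agreement. As a sketch of only the shared-secret principle, here is a toy classic Diffie-Hellman exchange over a small prime field; this is not the thesis's Bouncy Castle ECDH implementation (ECDH replaces modular exponentiation with scalar multiplication on an elliptic curve), and the parameters below are illustrative and insecure.

```python
import secrets

P = 0xFFFFFFFB  # small illustrative prime (far too small to be secure)
G = 5           # illustrative generator

def keypair():
    # Private scalar in [1, P-2]; public value is G^priv mod P.
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

a_priv, a_pub = keypair()        # client
b_priv, b_pub = keypair()        # server
# Each side combines its own private key with the peer's public value.
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
```

Both sides arrive at G^(ab) mod P without ever transmitting a private key, which is the property the SSH key exchange (and its faster elliptic-curve variant) builds on.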
Keywords: SSH, PKI, Elliptic Curve Cryptography, ECDH, ECDSA, certificate, certificate path validation, certificate revocation status checking, key exchange handshake, authentication, Authenticated Dictionaries, RSA.

Item Analyzing and Assessing Tradeoffs and Effects of Energy Related Parameters on Wireless Sensor Networks for Optimizing Sensor Network Lifetime (Addis Ababa University, 2014-06) Tesfaye, Bisratie; Negash, Yalemzewd (PhD)
Sensor technologies have become vital today for gathering information about nearby environments, and their use in wireless sensor networks is becoming more widespread every day. These networks are characterized by a number of sensor nodes deployed in the field for the observation of some phenomena. Due to the limited battery capacity of sensor nodes, energy efficiency is a major and challenging problem in such power-constrained networks. To extend the lifetime of wireless sensor networks and conserve power, several network parameters that play an important role in reducing power consumption have been considered: battery capacity, communication radius, node density, and query period. These parameters have a direct impact on the network's lifetime and must be chosen so that the network uses its energy resources efficiently. In this thesis we study how these parameters should be selected according to certain tradeoffs with respect to the network's lifetime; their tradeoff characteristics are investigated and illustrated in detail in various combinations. To achieve this goal, a special simulation tool for analyzing the effects of the parameters on sensor network lifetime was designed and implemented in OMNeT++, a discrete event simulator that provides the framework for the sensor network simulator's development.
Ultimately, results of extensive computational tests are presented, which may help guide the sensor network designer in optimally selecting the proposed parameters to improve the lifetime of the network.

Item Analyzing and Proposing A Solution for the Incompatibility Problem of TCP Vegas (Addis Ababa University, 2008-03) Wassie, Keria; Mamo, Mengesha (PhD)
TCP is a reliable, connection-oriented, in-sequence delivery protocol, used by all applications that require reliable delivery of data. TCP Reno and TCP Vegas are two TCP variants created to give reliable service in the network. According to different researchers and the results of our simulations, TCP Vegas in isolation performs better with respect to overall network utilization, stability, fairness, throughput, and packet loss, but its performance degrades when TCP Reno connections exist in the network. This thesis addresses this incompatibility problem of TCP Vegas when coexisting with TCP Reno and proposes a solution called Relaxed-Vegas. Using the NS2 simulator, we examine the detailed behavior of TCP Reno and TCP Vegas working independently and simultaneously, propose modifications to the current TCP Vegas algorithm, and compare the performance of the current TCP Vegas with the proposed Relaxed-Vegas. With Relaxed-Vegas, the number of received packets and the throughput (goodput) are raised by 56.87%, and packet drops are reduced by 17.0%.
Keywords: TCP, TCP Reno, TCP Vegas, Congestion control, Relaxed-Vegas

Item Ancient Ethiopic Manuscript Recognition Using Deep Learning Artificial Neural Network (Addis Ababa University, 2016-03) Getu, Siranesh; Tekeba, Menore
The recognition of handwritten documents, which aims at transforming written text into machine-encoded text, is considered one of the most challenging problems in the area of pattern recognition and remains an open research area. Ancient manuscripts in particular, like Ethiopic Geez scripts, differ from modern documents in various ways, such as writing style, morphological structure, and writing materials, which makes research on character recognition of these scripts necessary. Geez is an ancient language that has been used as a liturgical language in Ethiopia. Manuscripts written in this language contain much unexplored content that underlies the current Ethiopic scripts; however, only a few studies have been done on these valuable documents. A number of algorithms have been proposed for handwritten character recognition, such as support vector machines, hidden Markov models, and neural networks. In this research, the design and implementation of a character recognition system for ancient Ethiopic manuscripts using a deep neural network is presented. The deep network is trained with a Restricted Boltzmann Machine (RBM) in a greedy layer-wise unsupervised strategy. The complete system comprises image acquisition, preprocessing, character segmentation, and classification and recognition, with efficient and effective algorithms selected and implemented at each step. A dataset was prepared to train and test the system, consisting of the 24 base characters of the Geez alphabet with 100 samples each. Overall, a recognition accuracy of 93.75 percent was obtained using 3 hidden layers with 300 neurons.
Analysis of the results obtained from each step of the recognition process shows that the system can be extended and fine-tuned for practical application.
Keywords: Ancient Ethiopic Manuscript, Handwritten Recognition, Preprocessing, Segmentation, Deep Neural Network, Restricted Boltzmann Machine
Item Anomaly-Augmented Deep Learning for Adaptive Fraud Detection in Mobile Money Transactions(2024-06) Melat Kebede; Bisrat Derebssa (PhD)
Mobile money, a revolutionary technology, enables individuals to manage their bank accounts entirely via their mobile devices, allowing transactions such as bill payments with unmatched ease and efficiency. This innovation has significantly reshaped financial landscapes, particularly in developing countries with limited access to traditional banking, by promoting financial inclusion and driving economic opportunity. However, the rapid growth of mobile money services has introduced significant challenges, such as fraud, where unauthorized individuals manipulate the system through various scams, creating serious risks that lead to financial losses and undermine trust in the system. We propose a fraud detection model that integrates deep learning techniques to identify fraudulent transactions and adapt to the dynamic behaviours of fraudsters in mobile money transactions. Given the private nature of financial data, we utilized a synthetic dataset generated with the PaySim simulator, which is based on data from a company operating in Africa. We evaluated three deep learning architectures, namely the Restricted Boltzmann Machine (RBM), Probabilistic Neural Network (PNN), and Multi-Layer Perceptron (MLP), for fraud detection, emphasizing feature engineering and class distribution. The MLP achieved 95.70% accuracy, outperforming the RBM (89.91%) and PNN (73.36%) across various class ratios and on both the original and feature-engineered datasets.
Among the anomaly detection techniques evaluated, the Auto-Encoder consistently outperformed alternatives such as Isolation Forest and Local Outlier Factor, achieving an accuracy of 82.85%. Our hybrid model employed a feature augmentation approach, integrating prediction scores from an Autoencoder model as additional features; these scores were then fed into the Multi-Layer Perceptron (MLP) model along with the original dataset. This hybrid approach achieved 96.56% accuracy, 97.62% precision, 84.16% recall, and a 90.39% F1-score, outperforming the standalone MLP. The hybrid model achieved an accuracy of 73.33% on an unseen dataset, a 3.9% increase over the MLP model's 69.41%, demonstrating its enhanced ability to capture and adapt to evolving fraud patterns. This study finds that the hybrid model's improved performance highlights the significance of anomaly detection and feature engineering in fraud detection.
Item Application Layer DDoS Attack Detection in the Presence of Flash Crowds(Addis Ababa University, 2017-09) Biruk, Asmare; Yalemzewd, Negash (PhD)
Application layer DDoS attacks are growing at an alarming rate in both attack intensity and number of attacks. Attackers target websites of government agencies as well as private businesses for different motives. One particular research problem is distinguishing application layer DDoS attacks from flash crowds. Both cause denial of service: flash crowds arise from a sudden surge of legitimate requests, whereas application layer DDoS attacks are intentionally generated by attackers. Distinguishing between the two is important because the appropriate responses differ. Flash crowds are legitimate requests that should be serviced, while application layer DDoS attacks are malicious requests that should not be.
Furthermore, the sources of application layer DDoS attacks should be blocked from making further requests. In this research, a supervised machine learning approach is proposed to distinguish application layer DDoS attacks from flash crowds. Features that help separate application layer DDoS attacks from legitimate flash crowds were identified. Six supervised classifiers were evaluated using the World Cup 98 flash crowd dataset and an experimentally generated application layer DDoS attack dataset. Based on the evaluation results, we selected the decision tree as the classifier in our detection system; it achieved an F1 score of 99.45% and a false positive rate of 0.47%.
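The classification step above can be sketched with a decision tree over per-source request features. This is a minimal illustration, not the thesis's pipeline: the two features (request rate and URL entropy), their distributions, and the labels are synthetic stand-ins for the kind of behavioural features derived from web-server logs, chosen so that flash-crowd clients browse varied pages at moderate rates while attack bots hammer a few URLs at high rates.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(42)
n = 1000

# Synthetic per-source features (illustrative assumptions only):
# column 0: requests per second; column 1: entropy of requested URLs.
legit = np.column_stack([rng.normal(5, 2, n), rng.normal(3.0, 0.5, n)])
attack = np.column_stack([rng.normal(50, 10, n), rng.normal(0.5, 0.3, n)])
X = np.vstack([legit, attack])
y = np.array([0] * n + [1] * n)  # 0 = flash crowd, 1 = DDoS attack

# Train a shallow decision tree and evaluate with F1, the metric
# reported in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
f1 = f1_score(y_te, clf.predict(X_te))
print(f"F1 score: {f1:.3f}")
```

A decision tree is a natural fit here because its learned thresholds on interpretable features (e.g. "requests/second above X") can be inspected by an operator before blocking a source.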