Mood Based Hybrid Ethiopic Music Recommender



Addis Ababa University


Music is one of the most engaging and widely shared types of content on the Internet and plays an important role in our daily lives. This has created demand for services that make music navigation and discovery easier, and several music recommenders have been proposed to meet it. However, many research questions remain open. Some mood-based music recommenders exist, but none of them considers Ethiopic music; existing systems favor popular songs, lack awareness of the user's contextual situation, and demand considerable user effort. Here, we propose a mood-based, context-aware music recommender for smartphones with three main tasks: 1) constructing a mood-based Ethiopic song classifier trained with a linear SVM; 2) user modeling, including a user mood detection module that combines biometric (heart-rate) and textual mood-expression modalities using Dempster-Shafer theory; and 3) associating the user's contextual interest with songs to produce a list of recommendations. High Positive Affect, Low Positive Affect, Pleasantness, Strong Engagement, and Unpleasantness are the primary moods considered in this study. Using 600 Ethiopic songs and 25,800 mood sentences, the system achieved 65% accuracy in song classification and 95% accuracy in user mood detection, and received positive feedback from the subjects who participated in the overall evaluation of the recommender. In general, the study identifies an algorithm and audio features for detecting the mood of Ethiopic songs, as well as a new approach to user modeling for recommender systems. These results can be applied in music information retrieval, music streaming websites, media players, and other systems that involve user mood detection.
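The abstract does not detail how the heart-rate and text modalities are fused, but Dempster's rule of combination over the five moods can be sketched as follows. This is a minimal illustration, not the thesis implementation: the mass values and the `dempster_combine` helper are hypothetical, with each modality's evidence represented as a mass function over subsets of the mood frame.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Fuse two mass functions via Dempster's rule of combination.

    Each mass function is a dict mapping frozensets of moods (focal
    elements) to masses that sum to 1. Mass falling on disjoint focal
    elements is treated as conflict and normalized away.
    """
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # product mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: modalities cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Frame of discernment: the five moods considered in the study.
MOODS = frozenset({"HighPositiveAffect", "LowPositiveAffect",
                   "Pleasantness", "StrongEngagement", "Unpleasantness"})

# Hypothetical evidence: the heart-rate modality narrows the mood to two
# candidates, the text modality points at one; residual mass (ignorance)
# is assigned to the whole frame.
m_heart = {frozenset({"StrongEngagement", "HighPositiveAffect"}): 0.7,
           MOODS: 0.3}
m_text = {frozenset({"HighPositiveAffect"}): 0.8,
          MOODS: 0.2}

fused = dempster_combine(m_heart, m_text)
```

In this hypothetical case the two modalities agree, so combination concentrates mass on the singleton `{HighPositiveAffect}`, illustrating how agreement between sensors sharpens the detected mood while residual ignorance stays on the full frame.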



Music Mood, Recommender, Dempster-Shafer Theory, Mood Detection, Information Retrieval, Soft Clustering, Linear SVM