Browsing by Author "Kalkidan Behailu"

Now showing 1 - 1 of 1
    Imaginative and Contrastive Based Self Learning Agent
    (Addis Ababa University, 2024-06) Kalkidan Behailu; Menore Tekeba (PhD)
    Developing a reinforcement learning (RL) agent capable of performing complex control tasks directly from high-dimensional observations such as raw pixels remains challenging, largely because of poor sample efficiency. This paper explores an unsupervised learning framework that leverages imaginative and contrastive representations to improve sample efficiency in RL while working directly with raw pixels. The framework incorporates an imaginative module and applies contrastive learning to train a deep convolutional neural network encoder that extracts temporal and instance-level representations, yielding more sample-efficient RL. High-level features are extracted from raw pixels through a hybrid of contrastive and imaginative unsupervised representation learning, and off-policy control is performed on the extracted features, enabling the agent to imagine its future states and capture temporal dependencies. The agent's dynamic behavior can be interpreted through the learnable patterns it generates. Our method outperforms prior imaginative and contrastive pixel-based learning methods on complex tasks from the DeepMind Control Suite at the 100K environment-step and interaction-step benchmarks.
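
    The abstract does not give implementation details, so the following is only a minimal illustrative sketch of the general idea it describes: a pixel encoder trained jointly with a CURL-style contrastive (InfoNCE) objective and a latent forward model standing in for the "imaginative" module. PyTorch, all class names, shapes, and hyperparameters here are assumptions for illustration, not the thesis code.

    ```python
    # Sketch only: contrastive + imaginative auxiliary losses for a pixel encoder.
    # Assumes PyTorch; architecture and hyperparameters are illustrative.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PixelEncoder(nn.Module):
        """Deep convolutional encoder mapping raw pixel frames to a latent vector."""
        def __init__(self, obs_shape=(3, 84, 84), latent_dim=50):
            super().__init__()
            c = obs_shape[0]
            self.conv = nn.Sequential(
                nn.Conv2d(c, 32, 3, stride=2), nn.ReLU(),
                nn.Conv2d(32, 32, 3, stride=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, stride=1), nn.ReLU(),
                nn.Flatten(),
            )
            with torch.no_grad():
                n = self.conv(torch.zeros(1, *obs_shape)).shape[1]
            self.fc = nn.Linear(n, latent_dim)

        def forward(self, obs):
            return self.fc(self.conv(obs / 255.0))

    class ImaginativeContrastiveHead(nn.Module):
        """Bilinear contrastive similarity plus a latent forward model ('imagination')."""
        def __init__(self, latent_dim=50, action_dim=6):
            super().__init__()
            self.W = nn.Parameter(torch.rand(latent_dim, latent_dim))  # InfoNCE bilinear map
            self.forward_model = nn.Sequential(                        # predicts the next latent
                nn.Linear(latent_dim + action_dim, 256), nn.ReLU(),
                nn.Linear(256, latent_dim),
            )

        def contrastive_loss(self, z_anchor, z_positive):
            # InfoNCE: the matching row is the positive; all other rows act as negatives.
            logits = z_anchor @ self.W @ z_positive.t()
            logits = logits - logits.max(dim=1, keepdim=True).values   # numerical stability
            labels = torch.arange(logits.shape[0], device=logits.device)
            return F.cross_entropy(logits, labels)

        def imagination_loss(self, z_t, action, z_next):
            # "Imagine" the next latent state from the current latent and action.
            z_pred = self.forward_model(torch.cat([z_t, action], dim=1))
            return F.mse_loss(z_pred, z_next.detach())

    # Usage sketch: two augmented views of the same observation form the contrastive pair;
    # (obs_t, action_t, obs_t+1) tuples from a replay buffer train the forward model.
    encoder = PixelEncoder()
    head = ImaginativeContrastiveHead()
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)

    obs_a = torch.randint(0, 256, (8, 3, 84, 84)).float()    # augmented view 1 (e.g. random crop)
    obs_b = torch.randint(0, 256, (8, 3, 84, 84)).float()    # augmented view 2 of the same frames
    obs_next = torch.randint(0, 256, (8, 3, 84, 84)).float() # next observations
    actions = torch.randn(8, 6)                               # actions taken at time t

    z_a, z_b, z_next = encoder(obs_a), encoder(obs_b), encoder(obs_next)
    loss = head.contrastive_loss(z_a, z_b) + head.imagination_loss(z_a, actions, z_next)
    opt.zero_grad()
    loss.backward()
    opt.step()
    ```

    In this sketch the contrastive term encourages instance-level invariance across augmented views, while the latent forward model supplies the temporal, "imaginative" signal; an off-policy RL algorithm (e.g. an actor-critic) would then consume the encoder's latents as its state representation.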
