Imaginative and Contrastive Based Self Learning Agent
dc.contributor.advisor | Menore Tekeba (PhD) | |
dc.contributor.author | Kalkidan Behailu | |
dc.date.accessioned | 2025-06-19T14:35:13Z | |
dc.date.available | 2025-06-19T14:35:13Z | |
dc.date.issued | 2024-06 | |
dc.description.abstract | Developing a reinforcement learning (RL) agent capable of performing complex control tasks directly from high-dimensional observations such as raw pixels remains a challenge, as sample efficiency still needs improvement. This paper explores an unsupervised learning framework that leverages imaginative and contrastive-based representations to enhance sample efficiency in reinforcement learning directly from raw pixels. It incorporates an imaginative module and performs contrastive learning to train a deep convolutional neural network-based encoder that extracts temporal and instance-level representations, achieving greater sample efficiency for RL. It extracts high-level features from raw pixels using a hybrid of contrastive- and imaginative-based unsupervised representation learning, and performs off-policy control on the extracted features, enabling the agent to imagine its future states and capture temporal dependencies. The agent's dynamic behavior can be understood through the learnable patterns it generates. Our method outperforms prior imaginative and contrastive pixel-based learning methods on complex tasks in the DeepMind Control Suite at the 100K environment and interaction time-step benchmarks | |
dc.identifier.uri | https://etd.aau.edu.et/handle/123456789/5599 | |
dc.language.iso | en_US | |
dc.publisher | Addis Ababa University | |
dc.subject | Reinforcement Learning | |
dc.subject | Imaginative Learning | |
dc.subject | Contrastive Learning | |
dc.subject | Representation Learning | |
dc.title | Imaginative and Contrastive Based Self Learning Agent | |
dc.type | Thesis |