
Dynamic State Representation for Homeostatic Agents

Fredrik Mäkeläinen ; Hampus Torén
Göteborg: Chalmers tekniska högskola, 2018. 65 pp.
[Master's thesis]

In a reinforcement learning setting, an agent learns how to behave in an environment through interactions. For complex environments, the explorable state space can easily become unmanageable, and efficient approximations are needed. The Generic Animat model (GA model), heavily influenced by biology, takes an approach that utilises a dynamic graph to represent the state. This thesis is part of the Generic Animat research project at Chalmers, which develops the GA model. In this thesis, we identify and implement potential improvements to the GA model and compare it to standard Q-learning and deep Q-learning. With the improved GA model, we show substantial performance gains over the original model in a state space larger than 2^32 states.
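For context on the baseline mentioned in the abstract, the sketch below illustrates standard tabular Q-learning against an OpenAI Gym-style environment. It is an illustrative sketch only, not the GA model or the thesis implementation; the environment object and the hyperparameters (alpha, gamma, epsilon) are assumptions.

import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    # Q[(state, action)] -> estimated return, defaulting to 0.0 for unseen pairs.
    q = defaultdict(float)

    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Epsilon-greedy action selection over the environment's discrete actions.
            if random.random() < epsilon:
                action = random.randrange(env.action_space.n)
            else:
                action = max(range(env.action_space.n), key=lambda a: q[(state, a)])

            next_state, reward, done, _ = env.step(action)

            # Q-learning update:
            # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            best_next = max(q[(next_state, a)] for a in range(env.action_space.n))
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q

In tabular form the table size grows with the number of distinct states, which is exactly what becomes unmanageable for state spaces of the size discussed above and motivates approaches such as the GA model's dynamic graph representation.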

Keywords: animat, autonomous agents, reinforcement learning, adaptive architectures, OpenAI, policy discovery, state representation, DQN, homeostatic agent.



The publication was registered 2018-12-14. It was last modified 2018-12-14.

CPL ID: 256399
