The next class is the agent, whose role is to initialize and maintain the brain (which provides the Q-value function approximation) and the memory. It is also the agent that acts in the environment. Its initialization sets a series of parameters whose values are mostly fixed, based on our experience in tuning the learning for the Lunar Lander game. They can nevertheless be changed explicitly when the agent is first instantiated:
- epsilon = 1.0 is the initial value of the exploration-exploitation parameter. A value of 1.0 forces the agent to rely entirely on exploration, that is, on taking random actions.
- epsilon_min = 0.01 sets the minimum value of the exploration-exploitation parameter: a value of 0.01 means that there is a 1% chance ...