Reinforcement Learning: Navigation

Algorithm and Architecture

The DQN algorithm uses a neural network as the underlying function approximator for the action-value function Q, which is then used to select actions via an epsilon-greedy policy. A second Q network, known as the target network, is used when computing the loss in order to reduce the correlation between the predictions and the targets.
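As a rough illustration of the action-selection side, an epsilon-greedy policy driven by the local network can be sketched as follows; this is not our actual agent code, and names such as qnetwork_local, eps, and epsilon_greedy_action are placeholders:

```python
import random
import numpy as np
import torch

def epsilon_greedy_action(qnetwork_local, state, eps, action_size, device="cpu"):
    """Pick the greedy action from the local Q-network with probability 1 - eps,
    otherwise pick a uniformly random action (exploration).

    `state` is assumed to be a NumPy array of shape (state_size,)."""
    state_t = torch.from_numpy(state).float().unsqueeze(0).to(device)
    qnetwork_local.eval()
    with torch.no_grad():
        action_values = qnetwork_local(state_t)
    qnetwork_local.train()

    if random.random() > eps:
        return int(np.argmax(action_values.cpu().numpy()))  # exploit: best estimated action
    return random.choice(range(action_size))                # explore: random action
```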

DQN also utilizes a technique known as Replay Memory: experiences obtained from interacting with the environment are first stored in a buffer and later sampled uniformly at random for learning. This further minimizes correlation between consecutive samples and stabilizes the performance of the model.
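A minimal replay memory can be sketched as a fixed-size deque of experience tuples sampled uniformly at random; the class and field names below are illustrative rather than taken verbatim from our implementation:

```python
import random
from collections import deque, namedtuple

Experience = namedtuple("Experience", ["state", "action", "reward", "next_state", "done"])

class ReplayBuffer:
    """Fixed-size buffer that stores experience tuples and samples them uniformly at random."""

    def __init__(self, buffer_size, batch_size, seed=0):
        self.memory = deque(maxlen=buffer_size)  # oldest experiences are evicted automatically
        self.batch_size = batch_size
        random.seed(seed)

    def add(self, state, action, reward, next_state, done):
        self.memory.append(Experience(state, action, reward, next_state, done))

    def sample(self):
        # Uniform random sampling breaks the temporal correlation between consecutive steps.
        return random.sample(self.memory, k=self.batch_size)

    def __len__(self):
        return len(self.memory)
```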
In terms of architecture, the neural network consists of three linear layers: the input size matches the state space (37), the final output corresponds to the number of available actions (4), and the hidden layers in between have 64 neurons. We use the ReLU activation function between layers.
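A minimal PyTorch sketch consistent with that description, assuming two hidden layers of 64 units each (the layer names are illustrative), could look like this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QNetwork(nn.Module):
    """Maps a 37-dimensional state to Q-value estimates for the 4 available actions."""

    def __init__(self, state_size=37, action_size=4, hidden_size=64):
        super().__init__()
        self.fc1 = nn.Linear(state_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.fc3 = nn.Linear(hidden_size, action_size)

    def forward(self, state):
        x = F.relu(self.fc1(state))
        x = F.relu(self.fc2(x))
        return self.fc3(x)  # raw Q-value estimates, one per action
```

Both the local and the target network are instances of the same architecture; the target network's weights are periodically updated towards the local network's.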

Double DQN

We also experimented with an additional improvement on the original DQN algorithm known as Double DQN (DDQN), which in theory should prevent incidental high rewards that do not accurately reflect long-term returns from causing Q-values to explode in the early stages of learning. In our case, we simply modified the original model to choose actions using the local network and then evaluate the selected actions using the target network. This resulted in a slight performance bump, as shown below.
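Conceptually, the only change is in how the TD target is computed. The sketch below contrasts the two variants; the function and argument names are illustrative, and rewards and dones are assumed to be float tensors of shape (batch, 1):

```python
import torch

def dqn_targets(rewards, next_states, dones, gamma, qnet_local, qnet_target, double=True):
    """Compute TD targets for a batch of transitions.

    Vanilla DQN: the target network both selects and evaluates the next action.
    Double DQN:  the local network selects the next action, the target network evaluates it."""
    with torch.no_grad():
        if double:
            # Action selection with the local network...
            best_actions = qnet_local(next_states).argmax(dim=1, keepdim=True)
            # ...evaluation of those actions with the target network.
            q_next = qnet_target(next_states).gather(1, best_actions)
        else:
            # Vanilla DQN: max over the target network's own estimates.
            q_next = qnet_target(next_states).max(dim=1, keepdim=True)[0]
    return rewards + gamma * q_next * (1 - dones)
```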

Code

Q Network

Agent

DQN

Environment solved in 606 episodes! Average Score: 13.03
Episode #    Average Score
100          0.46
200          3.20
300          4.88
400          4.63
500          7.50
600          10.58
700          12.84
706          13.03
Vanilla DQN

DDQN

Environment solved in 596 episodes! Average Score: 13.02
Double DQN
Episode #    Average Score
100          0.32
200          1.31
300          5.75
400          7.11
500          8.96
600          11.09
696          13.02

Obstacles & Future improvements

The next step would involve extending the algorithm with techniques such as Dueling DQN and Prioritized Experience Replay, as well as trying our hand at implementing the pixels-to-action version of the project. We would also like to experiment with the neural network architecture by changing the number of layers and neurons, and examine how that affects the results.
