B for Behavior, Blackboard and Brain (3)

This is the final blog-post of the series (for the previous parts see here and here). As promised, here we will see the implementation of Reinforcement Learning through a Behavior Tree (in the native Unreal Engine environment). The method is fairly general and can be used to generate a BT corresponding to any RL-based algorithm.

The k-armed bandit algorithm (which in our case turns into shooting colored boxes) can be broken down into five tasks.

  1. Select a box (based on probabilities and estimates)
  2. Rotate towards the box (in more complex setups, locomotion may be involved)
  3. Shoot the box
  4. Wait (for reward assessment)
  5. Update the estimates

The tree built with this categorization of the tasks is shown in the figure below.

rlbt

The Engine also provides a powerful interface for Blueprints to interact with C++. This is a good way to unify the artistic and programming sides of a game's development. We will see that in action here.

Consider the BTTask_FindAppBox task node. Its Blueprint implementation is shown in the figure below.

taskfindbox

In the Engine, every BT task starts with the Event Receive Execute AI node. So we start with that and connect it to a Cast node, which yields the object corresponding to the class MAIPlayerController. Once that is done, we invoke the C++ method through the Look for App Box node, setting its target pin to the casted object. The C++ method is posted here.
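The linked method itself is not reproduced in this post, but a minimal sketch of how such a Blueprint-callable function could be declared is given below; the class and function names simply mirror the node names above, and the actual signature in the linked code may differ.

// MAIPlayerController.h (illustrative sketch, not the linked implementation)
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/PlayerController.h"
#include "MAIPlayerController.generated.h"

UCLASS()
class AMAIPlayerController : public APlayerController
{
    GENERATED_BODY()

public:
    // Exposed to Blueprints: this is what the "Look for App Box" node in
    // BTTask_FindAppBox calls on the casted controller object.
    UFUNCTION(BlueprintCallable, Category = "AI")
    void LookForAppBox();
};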

Similarly, the remaining three tasks (except the Engine's default Wait task) are implemented through this C++-Blueprint interface. Another example is BTTask_UpdateEstimates

estimates

with the corresponding C++ code posted here.
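Again, the linked code is not reproduced here, but conceptually the task only has to apply the incremental update from the algorithm described in the previous post. A rough sketch, with illustrative member names (Estimates, ActionCounts) that are not taken from the linked code:

// Illustrative sketch of the estimate update (not the linked implementation).
// Assumed members on the controller:
//   TArray<float> Estimates;     // Q(a), one entry per box/lever
//   TArray<int32> ActionCounts;  // N(a)
void AMAIPlayerController::UpdateEstimates(int32 Action, float Reward)
{
    ActionCounts[Action] += 1;
    // Incremental sample average: Q(A) <- Q(A) + (R - Q(A)) / N(A)
    Estimates[Action] += (Reward - Estimates[Action]) / ActionCounts[Action];
}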

 

B for Behavior, Blackboard and Brain (2)

This post is a continuation of this blog-post. Here we will understand the algorithm behind the emergent behavior, namely Reinforcement Learning (RL). The situation of shooting colored boxes is equivalent to the k-armed bandit problem, which can be stated as

You are faced repeatedly with a choice among k different options, or actions. After each choice you receive a numerical reward chosen from a stationary probability distribution that depends on the action you selected. Your objective is to maximize the expected total reward over some time period, for example, over 1000 action selections, or time steps.

The rewards for this game are: 100 for shooting a red box and -10 for shooting a blue box. The AI agent can access the scoreboard and take actions accordingly. Based on the rewards, the agent builds an estimate of the expected value associated with each action. This value is, in a way, the memory of all the rewards obtained when that particular action was performed. In more complicated systems, the rewards may even be probabilistic.
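In its simplest (sample-average) form, this estimate is just the mean of the rewards received so far for that action,

Q_t(a) = \frac{R_1 + R_2 + \dots + R_{N_t(a)}}{N_t(a)},

where N_t(a) counts how many times action a has been selected before time step t. The algorithm below computes exactly this average, but incrementally.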

For RL, it is not only important to take the actions that have a high expected value associated with them (exploiting the knowledge-base); once in a while, it is also necessary to pick a random action irrespective of its expected value (exploration). This makes sure that the system doesn’t get stuck in a local optimum. For instance, in the case of correlated rewards, some actions performed in a certain sequence might yield better rewards. Later in this post it is shown how exploring leads to knowledge of the globally optimal configuration!

For the problem at hand, the rewards are deterministic and static, so the RL algorithm is as follows (this algorithm can be found in Sutton’s book, section 2.4 (working link as of July 23, 2019)):

Initialize, for a = 1 to k:
    Q(a) \leftarrow 0
    N(a) \leftarrow 0
Repeat forever:
    A \leftarrow \text{arg max}_a \, Q(a) with probability 1-\epsilon, or a random action with probability \epsilon
    R \leftarrow \text{bandit}(A)
    N(A) \leftarrow N(A) + 1
    Q(A) \leftarrow Q(A) + \frac{1}{N(A)}\left[R - Q(A)\right]
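
For reference, here is a bare-bones C++ sketch of this loop, detached from any engine code; the actual implementation referenced below (UMAIBrainComponent::TickComponent) spreads the same logic over ticks and behavior-tree tasks, and the names used here are purely illustrative.

#include <algorithm>
#include <iterator>
#include <random>
#include <vector>

// Bare-bones epsilon-greedy bandit loop (illustrative sketch, not the engine code).
// Rewards holds the deterministic reward of each lever, e.g. {100, 100, -10, -10}.
void RunBandit(int k, double Epsilon, int Steps, const std::vector<double>& Rewards)
{
    std::vector<double> Q(k, 0.0); // value estimates
    std::vector<int>    N(k, 0);   // selection counts
    std::mt19937 Rng{std::random_device{}()};
    std::uniform_real_distribution<double> Coin(0.0, 1.0);
    std::uniform_int_distribution<int> RandomAction(0, k - 1);

    for (int t = 0; t < Steps; ++t)
    {
        // Explore with probability epsilon, otherwise exploit the best estimate.
        const int A = (Coin(Rng) < Epsilon)
            ? RandomAction(Rng)
            : static_cast<int>(std::distance(Q.begin(), std::max_element(Q.begin(), Q.end())));

        const double R = Rewards[A];   // bandit(A)
        N[A] += 1;
        Q[A] += (R - Q[A]) / N[A];     // incremental sample-average update
    }
}

With four levers, Rewards = {100, 100, -10, -10} and a small Epsilon such as 0.1, the estimates converge to the true rewards while the agent still occasionally samples the blue boxes.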

In Unreal Engine, the above algorithm can be easily implemented; the code is posted here (specifically, look at the member function UMAIBrainComponent::TickComponent). The game is played in the Unreal Editor, where the rendered arena looks like this

Arena

The first two red boxes on the left are levers 0 and 1, and the last two boxes on the right are levers 3 and 2.

When the game begins, control is passed to the AI agent which, following the logic in the Tick function, starts shooting the boxes according to the RL technique. The log of the first 22 iterations of the shootout, performed on July 23, 2019, can be found here.

Log Analysis:

We start with the 0th iteration. An action (lever) is chosen at random because all the estimates are set to 0. The reward obtained is 100 and the estimate of action 0 is updated. For the next 3 iterations, the action with the highest expected value is chosen by the agent, which results in the pawn firing at the first red box. But during the 5th iteration (marked as iteration number 4), the agent explores and randomly* chooses action 2 (the last blue box) despite its expected value being 0. As a result, the reward is -10. But then it again starts exploiting the knowledge-base and obtains higher rewards. Then again, during iteration number 11 it chooses action 3 (the second-to-last blue box) and gets a negative reward. Again, it continues with highest-estimate action selection (based on the knowledge-base).
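To make the first update concrete: after iteration 0 we have N(0) = 1 and Q(0) \leftarrow 0 + \frac{1}{1}(100 - 0) = 100, and after the exploratory shot at lever 2 we have N(2) = 1 and Q(2) \leftarrow 0 + \frac{1}{1}(-10 - 0) = -10, which is why the agent immediately goes back to exploiting lever 0.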

Finally, during iteration number 19, it chooses action 1 (the second red box) and gets reward 100. The estimates are updated (as can be seen from the next iteration’s maximum estimates) and now the agent has the globally high estimates in its knowledge-base, which couldn’t have been registered if the agent weren’t exploring.

In the next blog-post of this series, we will see the visual implementation of this logic (native to the UE codebase).

* Ignore the “1 – epsilon” typo in the log text.

 

B for Behavior, Blackboard and Brain (1)

The BlackBoard is a very efficient tool utilized by an AI agent for generating appropriate behavior. In this blog-post I shall demonstrate the Unreal Engine code, in action, using the BlackBoard class.

First we create the BlackBoard component with a call to NewObject, as shown below


void AMAIController::SetGameRep(AMAIGameState *GR)
{
    MAIGameRep = GR;

    // Set up the blackboard component if the game state doesn't have one yet.
    if (!MAIGameRep->GetBlackBoard())
    {
        UBlackboardComponent* BB = NewObject<UBlackboardComponent>(this, TEXT("BlackboardComponent"));
        if (BB)
        {
            BB->RegisterComponent();
            MAIGameRep->SetBlackBoard(BB);

            // Create the blackboard asset, add the PlayerScore key and
            // initialize the component with it.
            UBlackboardData* BlackboardAsset = NewObject<UBlackboardData>(BB);
            BlackboardAsset->UpdatePersistentKey<UBlackboardKeyType_Float>(FName("PlayerScore"));
            BB->InitializeBlackboard(*BlackboardAsset);
        }
    }
}

Then we register the component with RegisterComponent and create a UBlackboardData asset. Next we add the key PlayerScore to the UBlackboardData and finally initialize the BlackBoard with the asset. This is the code equivalent of the editor state shown below

bbeq

A BlackBoard key can then be read with the code


GetGameRep()->GetBlackBoard()->GetValueAsFloat(FName("PlayerScore"));
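
A key can be written back in the same way, for example (NewScore is a hypothetical variable holding the value to store):

GetGameRep()->GetBlackBoard()->SetValueAsFloat(FName("PlayerScore"), NewScore);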

The idea behind the BlackBoard is to provide a scratch workspace holding the data relevant for decision making. Furthermore, it is equipped with the ability to send notifications once data has been updated, and it can cache calculations, saving redundant CPU processing time and power.

A BlackBoard can be shared among several AI instances. It basically provides common ground for all the AI instances to operate in a collaborative manner. This gives rise to new collective behaviors, leading to more realistic gameplay.

Next, we want to focus our attention on Reinforcement Learning algorithms, which are more or less dormant in the game AI world (I really don’t have a clue why). The good news is that people are now gearing up to implement them in game engines (Unity seems to be the first one to include that). For more information watch this video. There have also been some white papers on RL in game AI, for instance Capture The Flag: the emergence of complex cooperative agents and Implementing Reinforcement Learning in Unreal Engine 4 with Blueprints.

Using the BlackBoard class, I applied Reinforcement Learning to an Unreal Engine AI controller, which is shown in the video below

The aim was to train the AI to shoot the red boxes without actually coding that behavior. This is called emergent behavior: it was not included in the compiled code, but based on the rewards, the AI learnt to shoot the red boxes! More information on how I implemented RL will follow in the sequel blog-post. Stay tuned.