B for Behavior, Blackboard and Brain (3)

This is the final blog-post of the series (for previous parts see here and here). As promised, here we will see the implementation of Reinforcement Learning through a Behavior Tree (in the natural Unreal Engine environment). The method described is fairly general and can be used to generate a BT corresponding to any RL-based algorithm.

The k-armed bandit algorithm (which in our case turns into shooting colored boxes) can be broken down into five tasks.

  1. Select a box (based on probabilities and estimates)
  2. Rotate towards the box (in more complex scenarios, locomotion may be involved)
  3. Shoot the box
  4. Wait (for reward assessment)
  5. Update the estimates

The tree built with this categorization of the tasks is shown in the figure below.

rlbt

The Engine also provides a powerful interface for Blueprints to interact with C++. This is a good way to unify the artistic and programming development of a game. We will show that in action here.

Consider the BTTask_FindAppBox task node. The Blueprint implementation is shown in the figure

taskfindbox

In the Engine, every BT task starts with the node Event Receive Execute AI. So we start with that and make a connection to a Cast node, yielding the object corresponding to the class MAIPlayerController. Once that is done, we invoke the C++ method through the node Look for App Box, setting its Target pin to the casted object. The C++ method is posted here.

Similarly, the rest of the three tasks (except the Engine default task Wait) are implemented through this C++-Blueprint interface. Another example is BTTask_UpdateEstimates

estimates

with the corresponding C++ code posted here.

 

B for Behavior, Blackboard and Brain (2)

This post is a continuation of this blog-post. Here we will understand the algorithm behind the emergent behavior, called Reinforcement Learning (RL). The situation of shooting colored boxes is equivalent to the k-armed bandit problem, which can be stated as

You are faced repeatedly with a choice among k different options, or actions. After each choice you receive a numerical reward chosen from a stationary probability distribution that depends on the action you selected. Your objective is to maximize the expected total reward over some time period, for example, over 1000 action selections, or time steps.

The reward for this game is: 100 for shooting a red box and -10 for shooting a blue box. The AI agent can access the scoreboard and take actions accordingly. Based on the rewards, the agent generates the expected value associated with an action. This value is, in a way, the memory of all the rewards obtained when that particular action was performed. In more complicated systems, the rewards may even be probabilistic.

For RL, it is not only important to take those actions which have a high expected value associated with them (exploiting the knowledge-base); once in a while, it is also necessary to pick a random action irrespective of the associated expected value (exploration). This makes sure that the system doesn't get stuck in a local optimum. For instance, in the case of correlated rewards, some actions, when performed in a certain sequence, might yield better rewards. Later in this post it is shown how exploring leads to knowledge of the globally optimal configuration!

For the problem at hand, the rewards are deterministic and static. So the RL algorithm is as follows (this algorithm can be found in Sutton’s book, section 2.4 (working link July 23, 2019))

Initialize, for a = 1 to k:
Q(a) \leftarrow 0
N(a) \leftarrow 0
Repeat forever:

A \leftarrow \text{arg } \text{max}_a Q(a) with probability 1-\epsilon, or a random action with probability \epsilon

R \leftarrow \text{bandit}(A)
N(A) \leftarrow N(A) + 1
Q(A) \leftarrow Q(A) + \frac{1}{N(A)}\left[R - Q(A)\right]
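The loop above can be sketched in plain C++ (outside the Engine) as a minimal epsilon-greedy bandit with the deterministic rewards of our game (100 for red, -10 for blue). The class and member names here are illustrative, not part of the MAI project:

```cpp
#include <array>
#include <random>

// Minimal epsilon-greedy k-armed bandit (k = 4 boxes).
// Levers 0,1 are red boxes (+100); levers 2,3 are blue boxes (-10).
struct Bandit
{
    static constexpr int K = 4;
    std::array<double, K> Q{};   // estimates Q(a), initialized to 0
    std::array<int, K>    N{};   // action counts N(a)
    double Epsilon = 0.1;
    std::mt19937 Rng{42};

    int SelectAction()
    {
        std::uniform_real_distribution<double> Coin(0.0, 1.0);
        if (Coin(Rng) < Epsilon)             // explore with probability epsilon
        {
            std::uniform_int_distribution<int> Any(0, K - 1);
            return Any(Rng);
        }
        int Best = 0;                        // exploit: arg max_a Q(a)
        for (int a = 1; a < K; ++a)
            if (Q[a] > Q[Best]) Best = a;
        return Best;
    }

    // Deterministic, static rewards, as in the game.
    static double Reward(int Action) { return Action < 2 ? 100.0 : -10.0; }

    void Update(int A, double R)
    {
        ++N[A];
        Q[A] += (R - Q[A]) / N[A];           // incremental mean update
    }
};
```

Running `SelectAction`/`Update` in a loop reproduces the behavior analyzed in the log below: estimates of pulled levers converge to their deterministic rewards.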

In Unreal Engine, the above algorithm can be easily implemented and is posted here (specifically, look at the member function UMAIBrainComponent::TickComponent). The game is played in the Unreal Editor, where the rendered arena image is

Arena

The first two red boxes on the left are levers 0 and 1, and the last two boxes on the right are levers 3 and 2 (in that order).

When the game begins, control is passed over to the AI agent who, as per the logic flowing through the Tick function, starts shooting the boxes using the RL technique. The log of the first 22 iterations of the shootouts, performed on July 23, 2019, can be found here.

Log Analysis:

We start with the 0th iteration. An action (lever) is chosen at random because all the estimates are set to 0. The reward obtained is 100, and the estimate of action 0 is updated. For the next 3 iterations, the action with the highest expected value is chosen by the agent, which results in the pawn firing at the first red box. But during the 5th iteration (marked with iteration number 4), the agent explores and randomly* chooses action 2 (the last blue box) in spite of its expected value being 0. As a result, the reward is -10. But then it again starts exploiting the knowledge-base and obtains higher rewards. Then again, during iteration number 11, it chooses action 3 (the second-last blue box) and gets a negative reward. Again, it continues with highest-reward action selection (based on the knowledge-base).

Finally, during iteration number 19, it chooses action 1 (the second red box) and gets reward 100. The estimates are updated (as can be seen from the next iteration's maximum estimates) and now the agent has globally high estimates in the knowledge-base, which couldn't have been registered if the agent weren't exploring.
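These log entries can be checked against the update rule by hand. For the first pull of action 0 (reward 100), the exploratory pull of action 2 (reward -10), and a repeat pull of action 0:

Q(0) \leftarrow 0 + \frac{1}{1}\left[100 - 0\right] = 100

Q(2) \leftarrow 0 + \frac{1}{1}\left[-10 - 0\right] = -10

Q(0) \leftarrow 100 + \frac{1}{2}\left[100 - 100\right] = 100

Since the rewards here are deterministic, each estimate jumps to its true value after a single pull and never moves again.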

In the next blog-post of this series, we will see the visual implementation of this logic (natural to UE codebase).

* Ignore the 1 – epsilon text typo in the log text.

 

B for Behavior, Blackboard and Brain (1)

Blackboard is a very efficient tool utilized by an AI agent for generating appropriate behavior. In this blog-post I shall demonstrate the Unreal Engine code, in action, using the Blackboard class.

First, we initialize the Blackboard object in the following fashion


void AMAIController::SetGameRep(AMAIGameState *GR)
{
    MAIGameRep = GR;

    // setup blackboard component
    if (!MAIGameRep->GetBlackBoard())
    {
        UBlackboardComponent* BB = NewObject<UBlackboardComponent>(this, TEXT("BlackboardComponent"));
        if (BB)
        {
            BB->RegisterComponent();
            MAIGameRep->SetBlackBoard(BB);

            UBlackboardData* BlackboardAsset = NewObject<UBlackboardData>(BB);
            BlackboardAsset->UpdatePersistentKey<UBlackboardKeyType_Float>(FName("PlayerScore"));
            BB->InitializeBlackboard(*BlackboardAsset);
        }
    }
}

Then we register the component with RegisterComponent and declare the UBlackboardData asset. Next we add the key PlayerScore to the UBlackboardData, and finally initialize the Blackboard with the asset. This is the code equivalent of the editor state shown below

bbeq

The BlackBoard keys can be accessed by the code


GetGameRep()->GetBlackBoard()->GetValueAsFloat(FName("PlayerScore"));

The idea behind the Blackboard is to provide a scratch work space with relevant data, required for decision-making purposes. Furthermore, it is equipped with the ability to send notifications once data has been updated, and to cache calculations, saving redundant CPU processing time and power.

A Blackboard can be shared among several AI instances. It basically provides a common ground for all the AI instances to operate in a collaborative manner. This gives rise to new collective behavior, leading to more realistic gameplay and recreation.
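Conceptually, a blackboard is just a shared key-value store with change notifications. A minimal standalone sketch of that idea (not the UE API; all names here are illustrative) that several agents could share might look like:

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// A toy blackboard: shared key-value store with update notifications.
// This only illustrates the concept, not Unreal's UBlackboardComponent.
class Blackboard
{
public:
    using Observer = std::function<void(const std::string&, float)>;

    void SetValueAsFloat(const std::string& Key, float Value)
    {
        Values[Key] = Value;
        for (auto& Fn : Observers) Fn(Key, Value); // notify every subscribed AI instance
    }

    float GetValueAsFloat(const std::string& Key) const
    {
        auto It = Values.find(Key);
        return It == Values.end() ? 0.f : It->second;
    }

    void Subscribe(Observer Fn) { Observers.push_back(std::move(Fn)); }

private:
    std::map<std::string, float> Values;   // the shared scratch space
    std::vector<Observer> Observers;       // interested AI instances
};
```

Two agents subscribing to the same `Blackboard` instance both see a `PlayerScore` update the moment any one of them writes it, which is the collaborative behavior described above.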

Next, we want to focus our attention on Reinforcement Learning algorithms, which are more or less dormant in the game AI world (I really don't have a clue why). But the good news is that people are now gearing towards its implementation in game engines (Unity seems to be the first one to include it). For more information watch this video. There have been some white papers on RL in game AI, for instance Capture The Flag: the emergence of complex cooperative agents and Implementing Reinforcement Learning in Unreal Engine 4 with Blueprints.

Using the BlackBoard class, I applied Reinforcement Learning on Unreal Engine AI controller which is shown in the video below

The aim was to train the AI to shoot red boxes without explicitly coding that behavior. This is called emergent behavior: it was not included in the compiled code, but, based on the rewards, the AI learnt to shoot red boxes! More information on how I implemented RL will follow in the sequel blog-post. Stay tuned.

Behavior Trees in Game Artificial Intelligence

Due to circumstances beyond my present control, my career trajectory has taken a sharp turn (quantified by a delta function). I hope Dirac would be proud of that! In order to work with the same passion and rigor, I have channelized my energy into Artificial Intelligence (AI) and parted ways with Physics. Of course it was painful and depressing, but I found that even such feelings have utility in catharsis. This blog-post is an attempt to show just that!

The gaming industry has played a pivotal role in reshaping modern networking architecture and graphics rendering (replication, realistic rendering and ray tracing). Therefore it is not unreal to expect the industry to push forward the AI realm. This can be estimated from the sheer number of players in games like Fortnite (250 million), Halo and GTA V, among others. Any breakthrough in the field of AI can be conveniently and collectively scrutinized by millions of players, facilitated by a streamlined workflow including developers, players and academics.

A behavior tree (BT) is an important mathematical structure which generates an appropriate series of tasks in a modular fashion; consider, for instance, a patrolling pawn in some evil fortress. Unreal Engine (UE) is one of the very first game engines to implement BTs in a very natural way (given the visual scripting structure of UE called Blueprints). I will demonstrate a BT in action using a UE project in this blog-post.

The BT can be pictured as

behaviortree

Here black nodes represent the “composites” (a form of flow control) and pink nodes represent “tasks”. I have used two categories of composites

  1. Selector: Executed in a left-to-right pattern. It stops traversing the subtrees once a successful execution branch is found.
  2. Sequence: Executed in a left-to-right pattern. It doesn’t stop executing subtrees until an unsuccessful execution branch is found.
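The two composites can be sketched in a few lines of standalone C++ (a conceptual sketch, not the Engine's implementation; the UE nodes also have a Running state, which is omitted here for simplicity):

```cpp
#include <functional>
#include <vector>

// A task returns true on success, false on failure.
using Task = std::function<bool()>;

// Selector: runs children left to right, stops at the first SUCCESS.
bool Selector(const std::vector<Task>& Children)
{
    for (const auto& Child : Children)
        if (Child()) return true;   // found a successful branch
    return false;                   // every branch failed
}

// Sequence: runs children left to right, stops at the first FAILURE.
bool Sequence(const std::vector<Task>& Children)
{
    for (const auto& Child : Children)
        if (!Child()) return false; // a branch failed, abort the sequence
    return true;                    // every branch succeeded
}
```

For example, a patrol tree would be `Selector({ Sequence({Spotted, Chase}), Patrol })`: if the enemy is not spotted the chase sequence fails and the selector falls through to patrolling.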

The entire BT is executed in a top-down pattern in a deterministic way. A next-level implementation could involve assigning a probability to each edge leading to a particular node, but we won’t talk about that here.

If you were to patrol, what would be the list of tasks you’d execute? It might include

  1. Spotting enemy
  2. Chasing enemy if spotted
  3. Else perform random patrol in arbitrary directions

The next step is to further divide the tasks into single elemental entities. For instance, the enemy-spotting task includes checking line-of-sight to actors and spinning towards the appropriate actor if found. Thus the hierarchy and placement of composites and tasks should be as shown in the figure above.

Now the BT in action corresponding to the chase is shown below.

Chase

One can clearly visualize the train of executing branches of the tree. Since Chase Player is a sequence node, we can deduce that the tasks “Rotate to face BB entry” and “BTT_ChasePlayer” have been executed, the “Move To” task is now in progress, and indeed that is what is happening in the Editor.

Next, the BT simulation of patrolling with the task “Move To” is

petrol_moveto

and “Wait” is

petrol_wait

The complete information to set up the project is detailed at https://docs.unrealengine.com/en-US/Engine/ArtificialIntelligence/BehaviorTrees/BehaviorTreeQuickStart/index.html. I encourage you to try it!

Finally, here is a teaser of the upcoming UE project: https://github.com/ravimohan1991/MAI

On-shell bosonic supersymmetric brane configuration

800px-calabi-yau

It has been, again, a long time since I last wrote a blog-post! It is not that I don’t want to write; it is just that I have been having so much fun (doing my research) and have been somewhat busy (changing my apartment and doing similar non-productive chores). Now that I am only a few steps away from my department, and have decided to spend the rest of my PhD days in this new apartment, I can devote more time to writing.

This post is about kappa symmetry, which is a tool to obtain supersymmetric brane configurations. Now, Susy (the heart of my research) is not only the most beautiful and difficult 😉 symmetry but also the strongest symmetry that I have ever encountered. For some theories, it turns out that a supersymmetric configuration automatically implies the equations of motion (the on-shell configuration)! Therefore, supersymmetric theories without a Lagrangian formalism can be probed and studied! Furthermore, there are usually alluring geometrical interpretations associated with the configurations.

Currently I am working out the solutions of some supersymmetric brane embeddings in a curved supergravity spacetime geometry (with the topology AdS_5\times\mathcal{C}\leftarrow S^1\times S^2\times S^1), which, according to the AdS/CFT correspondence, represent line defects (analogs of Wilson and ’t Hooft lines) in the mysterious (2,0) super-conformal field theories in d=6.

Consider any SUGRA with bosonic (\mathcal{B}) and fermionic (\mathcal{F}) degrees of freedom. It turns out that one can set \mathcal{F}=0 on-shell. I don’t clearly see why, but it seems that supersymmetry constrains the theory to such an extent that the equations of motion render \mathcal{F} non-dynamical! This also means setting \mathcal{F}=0 implies the equations of motion. Now the question whether the \mathcal{B} configuration preserves supersymmetry reduces to the question of what transformation parameters \epsilon exist for on-shell bosonic configurations. Symbolically, \delta\mathcal{B}|_{\mathcal{F}=0}=0 holds automatically, while \delta\mathcal{F}|_{\mathcal{F}=0}=0 must be imposed. The structure of local supersymmetry in SUGRA is given by \delta\mathcal{B}\propto\mathcal{F} and \delta\mathcal{F}\propto\mathcal{P}(\mathcal{B})\epsilon. Here \mathcal{P} is a Clifford-valued operator with at most first-order derivatives.

The previous statements imply \mathcal{P}(\mathcal{B})\epsilon = 0. Now we note a couple of points

  • The equation constrains the \mathcal{B} degrees of freedom via first-order (in some cases I know, linear) partial differential equations, which are much simpler than the second-order on-shell differential equations. Thus the complexity is greatly reduced!
  • The equation also constrains the transformation parameter \epsilon in accordance with the bosonic configuration. For SUGRA geometry, we have what are known as Killing spinors, which are solutions of the Killing equations corresponding to the bosonic degree of freedom known as the metric. For example, in d=11, \mathcal{N}=1 supergravity, \mathcal{F} consists of the gravitino \Psi_a, the supersymmetric partner of the graviton (the metric). Then \delta\Psi_a=\left(\partial_a+\frac{1}{4}\omega_a^{bc}\Gamma_{bc}\right)\epsilon-\frac{1}{288}\left(\Gamma_a^{bcde}-8\delta_a^b\Gamma^{cde}\right)G_{bcde}\epsilon= \mathcal{P}(\mathcal{B})\epsilon, where G is the four-form flux. Thus the equation \delta\Psi_a=0 gives the Killing spinor solution.

As of now, we don’t have a complete formulation of M-theory (a unification of the five superstring theories). We have a good idea of what M-theory should look like at low energies. In other words, we know the dynamical degrees of freedom with large wavelengths, and they make up a supergravity theory (that we know and understand) + M-branes. We even have a Lagrangian for the theory at that energy scale, which is given by

S\approx S_{SUGRA} + S_{\text{Brane}}

The first term corresponds to \mathcal{N}=2, d=10 type IIA/IIB supergravity or \mathcal{N}=1, d=11 supergravity. The second term describes both the brane excitations (giving rise to field theories) and their interactions with gravity. The action here is known as the brane effective action.

Now for my research purposes, I am supposed to find the placement of an M2 brane in the SUGRA background (mentioned above) such that there is a supersymmetric bosonic configuration. The placement of the brane is based on \mathcal{B}. Here again we set \mathcal{F}=\theta=0, which is compatible with the on-shell configuration (the brane equations of motion). To get the supersymmetric configuration

\delta\theta=\delta_\kappa\theta+\epsilon+\Delta\theta+\xi^\mu\partial_\mu\theta=0

where

  • \delta_\kappa\theta is the kappa symmetry transformation
  • \xi^\mu\partial_\mu\theta is a world-volume diffeomorphism
  • \Delta\theta is any other transformation besides the supersymmetry generated by \epsilon.

Now, again for reasons beyond me for now, the restrictions of these transformations to the bosonic configuration are

  • \delta_\kappa\theta|_{\mathcal{B}}=(1+\Gamma_\kappa|_\mathcal{B})\kappa
  • \Delta\theta|_\mathcal{B}=0 (makes sense since transformations by \epsilon are fermionic!)

Hence

\delta\theta=(1+\Gamma_\kappa|_\mathcal{B})\kappa+\epsilon

Now it turns out that not all the fermionic degrees of freedom in this theory are dynamical. This forces us to work at the intersection of the kappa symmetry gauge-fixing conditions and \theta=0. So we follow a two-step process

  1. Kappa symmetry invariance: \mathcal{P}\theta=0 where \mathcal{P} is field independent gauge fixing projector such that \theta = \mathcal{P}\theta+(1-\mathcal{P})\theta. And now the restriction of supersymmetric variation to bosonic configuration is
    \delta\mathcal{P}\theta|_{\mathcal{B}}=\mathcal{P}(1+\Gamma_{\kappa}|_\mathcal{B})\kappa+\mathcal{P}\epsilon. Equating this to 0 gives \kappa = \kappa(\epsilon), the compensating kappa transformation corresponding to the background spinor.
  2. Now we have the dynamical set of fermionic configuration given by (1-\mathcal{P})\theta|_{\mathcal{B}} which we set to 0.

Now from the above equations and a little bit of linear algebra, we finally have \Gamma_\kappa|_\mathcal{B}\epsilon=\epsilon, which is known as the kappa symmetry constraint.
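The linear algebra can be made explicit under the standard assumption that the kappa projector squares to the identity, \Gamma_\kappa^2 = 1. Acting on \delta\theta=(1+\Gamma_\kappa|_\mathcal{B})\kappa+\epsilon with (1-\Gamma_\kappa|_\mathcal{B}) gives

(1-\Gamma_\kappa|_\mathcal{B})\,\delta\theta = (1-\Gamma_\kappa^2|_\mathcal{B})\kappa + (1-\Gamma_\kappa|_\mathcal{B})\epsilon = (1-\Gamma_\kappa|_\mathcal{B})\epsilon

so the \kappa term drops out identically, and demanding \delta\theta=0 leaves (1-\Gamma_\kappa|_\mathcal{B})\epsilon=0, i.e. the constraint above.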

Geometrical representation of the Killing spinors preserving N=4 supersymmetry (I)

In the low energy limit, the mysterious M-theory boils down to the much more tractable d=11 supergravity theory (SUGRA). Therefore it is essential to understand the supersymmetric constraints of the theory, which have crucial applications in the field of holography.

Supersymmetry is essentially a (very awesome if you ask me!) symmetry which keeps the theory invariant under the bosonic and fermionic variations given by

\delta_\epsilon\Theta = \epsilon
\delta_\epsilon X^M = i\bar{\epsilon}\Gamma^M\Theta

Here \epsilon is a Killing spinor which satisfies the Killing equation

\nabla_X\epsilon=\lambda\, X\cdot\epsilon

It becomes covariantly constant for \lambda = 0. In curved solutions of SUGRA, supersymmetries are broken due to the non-trivial covariant derivative. In order to preserve SUSY, the solutions of the Killing equation play an essential role. We focus on those spinors which are invariant under the spin lift of the holonomy group of the appropriate manifold. For d=11 SUGRA, the Killing equation takes the following form

\nabla_M\epsilon+\frac{1}{288}\left(\Gamma_M^{NPQR}-8\delta^N_M\Gamma^{PQR}\right)G_{NPQR}\epsilon=0

Now, the notion of G-structures essentially classifies the special differential forms which arise in supersymmetric flux compactifications. As can be deduced from the Killing equation, the solutions tie the spin bundle of the supersymmetries to the metric of the manifold with a spin structure in a very intimate way.

Definition: A spin structure on a manifold (\mathcal{M},g) with signature (s,t) is a principal Spin(s,t)-bundle Spin(\mathcal{M})\to \mathcal{M} together with a bundle morphism \phi : Spin(\mathcal{M})\to SO(\mathcal{M}).

To define the G-structure, we associate the differential forms with the Killing spinors as follows

\Omega^{ij}_{\mu_1\mu_2\ldots\mu_k}=\bar{\epsilon}^i\Gamma_{\mu_1\mu_2\ldots\mu_k}\epsilon^j

The aim is to show that these differential forms obey a set of first-order differential equations as a natural consequence of the Killing equations. Now it can be shown that for Calabi-Yau manifolds, or manifolds with G_2 holonomy, one usually finds the Killing spinor bundles trivially defined by an algebraic projection, which is some differential form applied to the complete spin bundle.

So this seems like a good point to start and make an ansatz for the projection operator for the spin bundle structure in curved spacetime. These projections are essentially the differential forms defined above, which give rise to the notion of G_2 structures.

Here (for reasons beyond me right now), three projection operators \Pi_j, for j=0,1,2, are defined which break the 32 supersymmetries to four. Another fact is that if there is a holographic dual to the theory with a Coulomb branch, then there is a non-trivial moduli space for brane probes. This moduli space will be realized as a conformally Kähler section of the metric (for four supersymmetries). And it is on this section of the metric that the supersymmetries satisfy the projection conditions \Pi_j\epsilon=0, with \Pi_j=\frac{1}{2}(1+\Gamma^{\xi_j}), where \Gamma^{\xi_j} represents the product of gamma matrices parallel to the moduli space of the branes.

Now we can find the equations of motion of the theory by demanding that the fermionic variations vanish, implying the Killing equation! The solution we are considering here essentially has the topology of AdS_4\times S^7. Using orthonormal frames, https://arxiv.org/pdf/hep-th/0403006.pdf shows the presence of a Kähler structure on the brane-probe moduli space as a conformal multiple of

J_{\text{moduli}}=e^6\wedge e^9+e^7\wedge e^8-e^5\wedge e^{10}

I will continue from here in the next blog-post!

The Holometer

Physics is all about understanding the phenomena that occur in nature. We essentially want to write down the equations which describe these phenomena and can be used for the benefit of humanity. Black holes are naturally occurring mysterious objects in space. They have a very strong gravitational pull and are condensed in a very small region where quantum effects are appreciable. Therefore, we need to formulate a successful theory of quantum gravity, which will not only give us a better understanding of nature, but also provide powerful practical tools in the future.

The Holographic Principle is a physical principle of a successful theory of quantum gravity. Although we don’t quite understand quantum gravity, we can extrapolate notions from already well-established physical theories and cleverly deduce a pattern which should be manifest in quantum gravity. The pattern here is essentially the existence of a precise and very strong limit on the information content of spacetime. The holographic principle intimately connects the number of quantum mechanical states with a region of spacetime, and builds up the stage for a consistent theory of quantum mechanics, matter and gravity. String Theory is a theory of quantum gravity which has successfully realized this principle through the AdS/CFT conjecture. It is important to note that any experiment that directly validates the holographic principle does not necessarily validate string theory itself.

Recently, my friend Suzanne Jacobs introduced me to a project named Holometer at Fermilab which is an attempt to experimentally verify the holographic principle. I aim to explain the concept behind the working of the Holometer in this, hopefully, self-contained, blog-post.

Now, general relativity is a theory of spacetime and matter. It is a good theory for large length scales (a rough estimate is from the radius of the Earth to that of the Milky Way galaxy and beyond!). At these scales, matter can be appropriately described by classical mechanics and spacetime can be treated as a continuum. At length scales smaller than the radius of the Earth, we have Newtonian gravity, in which space and time are treated separately and matter is again governed by classical mechanics. When you go down to the length scale of an atom, gravity becomes weak and other forces, like the electromagnetic force, become dominant. This is the regime of quantum mechanics. So for all practical purposes, we can forget about gravity and classical mechanics, and just work with the Hilbert space of quantum mechanics. In a very loose sense, Hilbert space is the stage for quantum theory just as spacetime is the stage for general relativity.

Quantum theory is a very peculiar theory and one of its results can be stated as

To probe the smaller length scales, you need to apply more energy in the system.

This is known as the uncertainty principle, given by the simple formula \Delta x \Delta p \geq \hbar/2. Now this is fine. We have built particle accelerators capable of achieving very high energies and verifying the theory known as the Standard Model. But since we live in a universe with gravity, there is a theoretical upper bound on the amount of energy that we can put in a region of space without creating a black hole (about which we don’t know much). And at this point gravity (a perfectly understood concept in the classical domain) comes back to haunt us in the quantum regime. This length scale is known as the Planck scale and its numerical value is 1.6\times 10^{-35} meters. At this length scale we need to formulate a theory of quantum gravity.
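The quoted value follows from dimensional analysis: the Planck length is the unique combination of \hbar, c and G with dimensions of length,

\ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx \sqrt{\frac{(1.05\times 10^{-34})(6.67\times 10^{-11})}{(3\times 10^{8})^3}}\ \text{m} \approx 1.6\times 10^{-35}\ \text{m}

which mixes quantum mechanics (\hbar), relativity (c) and gravity (G) in a single scale.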

Many physicists believe that spacetime should be an emergent notion in quantum gravity. Based on this school of thought, and theoretical calculations like the covariant entropy bound, the Holographic Principle has a very nice interpretation. It basically associates a Hilbert space with each causal diamond in flat spacetime, as shown in the picture. Here we are considering a 1+1 dimensional spacetime manifold.

emergentspacetime

A causal diamond (here, the rhombi A1A2 and B1B2) is roughly the region of spacetime which is causally connected, characterized by the proper time parameter \tau (the distance along the time axes tA and tB). Of course, here we assume that the observers are at rest with respect to this coordinate system. The Holographic Principle assigns the Hilbert space \mathcal{H}_A to the causal diamond A1A2 and \mathcal{H}_B to the diamond B1B2. Essentially, these Hilbert spaces have states amongst which a causal connection can be established by definition. The intersection of the causal diamonds (shaded red) is the region of spacetime causally connected to both the points A2 and B2. The Hilbert space associated with this region is \mathcal{H}_{AB}. And now the holographic claims are

  • the Hilbert space \mathcal{H}_{AB} is completely determined by some mathematical manipulation of the spaces \mathcal{H}_{A} and \mathcal{H}_{B};
  • the geometry of the red shaded region is completely governed by \mathcal{H}_{AB}.

Now, in these Hilbert spaces the observables (experimentally detectable structures) are non-local. It simply means they don’t depend on the spacetime coordinates. In fact, as we have seen, spacetime, and hence locality, emerges from the holographic picture; it was not there in the quantum theory that we started with! So whenever I mention that something is non-local, it just means that it lives somewhere in the Hilbert space of the spacetime.

(The matter of this blog-post from here is based on the non peer-reviewed article https://arxiv.org/abs/1506.06808. I shouldn’t be held responsible for any inconsistencies in the subsequent paragraphs :))

Ok, now consider an observable denoted by \hat{x} in the Hilbert space \mathcal{H}, which holographically represents a set of world lines in the spacetime manifold. Since \hat{x} is a quantum mechanical object, the set of world lines it corresponds to should exhibit quantum behavior. \hat{x} is an entirely new degree of freedom and differs from the position variable in the classical spacetime (note that classical spacetime is different from the holographic spacetime we are talking about here). Also, we don’t know, a priori, the corresponding Hamiltonian and the conjugate observable. This is radically different from the String Theory treatment of quantum gravity!

Define a measure (a time-domain correlation function) to quantify the deviation of the quantum characteristic of \hat{x} from its classical counterpart \bar{x}: \sigma(\tau) = \langle\Delta x(t)\,\Delta x(t+\tau)\rangle_{t}. In other words, by the very definition of this function, \sigma(\tau)=0 if there is no quantum or holographic behavior in the evolution of the world lines! And if some experiment establishes this equality, we can then safely say that spacetime is perfectly classical and throw the holographic principle out of the window. A non-zero value of \sigma(\tau) represents the “jitter” or “fuzziness” that Fermilab’s Holometer is trying to detect.
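As a sketch, the time-domain correlation can be estimated from sampled deviations \Delta x(t_i) taken at equal time steps, as a discrete time average (the function name and sampling convention here are illustrative, not from the Holometer papers):

```cpp
#include <cstddef>
#include <vector>

// Discrete estimate of sigma(tau) = <dx(t) dx(t+tau)>_t from samples
// dx[0..n-1] of the deviation from the classical world line, with tau
// given in units of the sampling step (tau < n).
double Sigma(const std::vector<double>& Dx, std::size_t Tau)
{
    double Sum = 0.0;
    const std::size_t Terms = Dx.size() - Tau;
    for (std::size_t T = 0; T < Terms; ++T)
        Sum += Dx[T] * Dx[T + Tau];  // lagged product
    return Sum / Terms;              // time average over the overlap
}
```

A perfectly classical world line has \Delta x \equiv 0, so this estimate vanishes for every \tau, matching the statement above; any persistent jitter shows up as a non-zero value.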

Now, a quantum mechanical state decoheres (becomes more classical) with time. This effect can and will make the time-domain correlation function 0, which would destroy the entire purpose of the experiment. The condition to measure a non-zero \sigma(\tau) before decoherence kicks in gives the bounds on the dimensions of the experimental apparatus (the length and size of the mirrors in the interferometer).

Physicists at Fermilab have an interesting construction to fish out the holographic “jitter” using certain “models” and the details of the experiment can be found at https://holometer.fnal.gov/faq.html.

My observation as a graduate student

This seems to be a good project to uncover and understand the physics at Planck scales without having to achieve tremendously high energies. The results reported by the physicists at Fermilab, which are based on a particular model of correlated holographic noise (cHN), have been negative so far. But, as with all scientific research programs, we now know what is incorrect and can move on with new and better models to gauge the cHN.

Curiously enough, the arXiv papers I consulted to study these models are not peer-reviewed and contain several ambiguities (broken Lorentz invariance, for example). I am not really sure what to make of this, but again, these are just my personal views, and I would follow this research only if the articles get published in a good peer-reviewed journal.