The basic building block of phasespace is the GenParticle object. This object, which represents a particle, either stable or decaying, has only one mandatory argument, its name.

In most cases (except for the top particle of a decay), one also wants to specify its mass, which can be either a number, a tf.constant, or a function. Functions are used to specify the mass of particles such as resonances, whose mass is not fixed but varies according to a broad distribution. These mass functions take three arguments and must return a TensorFlow Tensor:

  • The minimum mass allowed by the decay chain, which will be of shape (n_events,).

  • The maximum mass available, which will be of shape (n_events,).

  • The number of events to generate.

This function signature makes it possible to handle threshold effects cleanly, since it provides enough information to produce only kinematically allowed decays (NB: phasespace will throw an error if a kinematically forbidden decay is requested).
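A minimal sketch of this contract, using NumPy as a lightweight stand-in for TensorFlow (a real mass function must return a tf.Tensor, and the fixed value here is purely illustrative):

```python
import numpy as np

# Schematic mass function: given per-event bounds and the event count,
# return one mass per event, with shape (n_events,).
def fixed_mass(min_mass, max_mass, n_events):
    # A real implementation must keep every mass within [min_mass, max_mass].
    return np.broadcast_to(895.81, (n_events,))

masses = fixed_mass(np.zeros(5), np.full(5, 4000.0), 5)
print(masses.shape)  # (5,)
```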

A simple example

With these considerations in mind, one can build a decay chain by using the set_children method of the GenParticle class (which returns the object itself, so calls can be chained). As an example, to build the \(B^{0}\to K^{*}\gamma\) decay in which \(K^*\to K\pi\) with a fixed \(K^*\) mass, we would write:

from phasespace import GenParticle

B0_MASS = 5279.58
KSTARZ_MASS = 895.81
PION_MASS = 139.57018
KAON_MASS = 493.677

pion = GenParticle('pi-', PION_MASS)
kaon = GenParticle('K+', KAON_MASS)
kstar = GenParticle('K*', KSTARZ_MASS).set_children(pion, kaon)
gamma = GenParticle('gamma', 0)
bz = GenParticle('B0', B0_MASS).set_children(kstar, gamma)

Phasespace events can be generated using the generate method, which takes the number of events to generate as input. The method returns:

  • The normalized weights of each event, as an array of dimension (n_events,).

  • The 4-momenta of the generated particles as values of a dictionary with the particle name as key. These momenta are expressed as arrays of dimension (n_events, 4).

N_EVENTS = 1000

weights, particles = bz.generate(n_events=N_EVENTS)

The generate method returns eager Tensors: these are essentially wrapped NumPy arrays, and can always be converted directly to NumPy arrays with Tensor.numpy() (if really needed).
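As a quick illustration with plain TensorFlow (the values are arbitrary stand-ins for the output of generate):

```python
import tensorflow as tf

# Eager tensors wrap a NumPy array; .numpy() exposes it directly.
weights = tf.constant([0.5, 1.0, 0.75])
weights_np = weights.numpy()
print(type(weights_np).__name__)  # ndarray
```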

Boosting the particles

The particles are generated in the rest frame of the top particle. To produce them at a given momentum of the top particle, one can pass these momenta with the boost_to argument of generate. This can be useful if the momentum of the top particle is itself generated according to some distribution, for example the kinematics of the LHC (see test_kstargamma_kstarnonresonant_lhc and test_k1gamma_kstarnonresonant_lhc in tests/ to see how this could be done).
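To see what such a boost does kinematically, here is an illustrative NumPy sketch (not phasespace's internal implementation) that boosts a four-momentum, stored as (px, py, pz, E), from the rest frame of the top particle into the lab frame:

```python
import numpy as np

# Boost p4 from the rest frame of a particle with lab momentum top_p4
# and mass top_mass into the lab frame.
def boost(p4, top_p4, top_mass):
    beta = top_p4[:3] / top_p4[3]   # velocity of the boosted frame
    gamma = top_p4[3] / top_mass    # Lorentz factor E / M
    bp = np.dot(beta, p4[:3])       # beta . p
    e_lab = gamma * (p4[3] + bp)
    p_lab = p4[:3] + beta * (gamma**2 / (gamma + 1.0) * bp + gamma * p4[3])
    return np.append(p_lab, e_lab)

B0_MASS = 5279.58
top = np.array([0.0, 0.0, 50000.0, np.hypot(50000.0, B0_MASS)])
at_rest = np.array([0.0, 0.0, 0.0, B0_MASS])
print(boost(at_rest, top, B0_MASS))  # recovers the top momentum itself
```

Boosting the top particle's own rest-frame four-momentum recovers its lab momentum, which is a useful sanity check for any such transformation.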


Additionally, it is possible to obtain the unnormalized weights by using the generate_unnormalized flag in generate. In this case, the method returns the unnormalized weights, the per-event maximum weight and the particle dictionary.

The particles dictionary from the example above looks like this:

{'K*': <tf.Tensor: shape=(1000, 4), dtype=float64, numpy=
array([[ 1968.15556109, -1634.11426486,  -170.47998772,  2715.78804872],
       [ 1643.25763924, -1589.41473005, -1160.3854172 ,  2715.78804872],
       [ -284.60786058,   708.41965684,  2447.48220115,  2715.78804872],
       [  296.22135402,  2504.03331924,   463.78789874,  2715.78804872],
       [ -170.02567056,  -454.61056153,  2517.42918037,  2715.78804872],
       [ -252.60368774,   258.80293968, -2538.15712374,  2715.78804872]])>, 'gamma': <tf.Tensor: shape=(1000, 4), dtype=float64, numpy=
array([[-1968.15556109,  1634.11426486,   170.47998772,  2563.79195128],
       [-1643.25763924,  1589.41473005,  1160.3854172 ,  2563.79195128],
       [  284.60786058,  -708.41965684, -2447.48220115,  2563.79195128],
       [ -296.22135402, -2504.03331924,  -463.78789874,  2563.79195128],
       [  170.02567056,   454.61056153, -2517.42918037,  2563.79195128],
       [  252.60368774,  -258.80293968,  2538.15712374,  2563.79195128]])>, 'pi-': <tf.Tensor: shape=(1000, 4), dtype=float64, numpy=
array([[ 1195.59030713,  -976.04751639,   107.75131779,  1553.44620711],
       [ 1136.66028881, -1000.72674284,  -579.40873263,  1627.4658032 ],
       [ -319.70854298,   560.85188279,  1386.79649944,  1536.05102549],
       [   34.88843975,   304.7472619 ,  -144.32462252,   366.60268504],
       [ -107.50672197,   141.17523812,   796.62987132,   828.00189018],
       [    7.06809631,   264.71712812,  -364.34725979,   471.54414085]])>, 'K+': <tf.Tensor: shape=(1000, 4), dtype=float64, numpy=
array([[  772.56525395,  -658.06674847,  -278.2313055 ,  1162.3418416 ],
       [  506.59735043,  -588.68798721,  -580.97668457,  1088.32224551],
       [   35.10068241,   147.56777405,  1060.68570171,  1179.73702323],
       [  261.33291426,  2199.28605734,   608.11252127,  2349.18536368],
       [  -62.51894858,  -595.78579966,  1720.79930905,  1887.78615854],
       [ -259.67178405,    -5.91418844, -2173.80986395,  2244.24390787]])>}

It is worth noting that the generation graph is cached even when using generate, so iterative generation can be performed using normal Python loops without loss of performance:

for i in range(5):
    weights, particles = bz.generate(n_events=100)
    # ...
    # (do something with weights and particles)
    # ...

Resonances with variable mass

To generate the mass of a resonance, we need to give a function as its mass instead of a fixed number. This function takes as input the per-event lower mass allowed, the per-event upper mass allowed and the number of events, and must return a tf.Tensor of shape (n_events,) with the generated masses. Well suited for this task are the TensorFlow Probability distributions or, for more customized mass shapes, the zfit PDFs (currently an experimental feature is needed; contact the zfit developers to learn more).

Following the same example as above, and approximating the resonance shape by a Gaussian, we could write the \(B^{0}\to K^{*}\gamma\) decay chain as (more details can be found in tests/helpers/):

import tensorflow as tf
import tensorflow_probability as tfp
from phasespace import GenParticle

B0_MASS = 5279.58
KSTARZ_MASS = 895.81
KSTARZ_WIDTH = 47.4  # natural width of the K*(892)0
PION_MASS = 139.57018
KAON_MASS = 493.677

def kstar_mass(min_mass, max_mass, n_events):
    min_mass = tf.cast(min_mass, tf.float64)
    max_mass = tf.cast(max_mass, tf.float64)
    kstar_width_cast = tf.cast(KSTARZ_WIDTH, tf.float64)
    kstar_mass_cast = tf.cast(KSTARZ_MASS, dtype=tf.float64)

    kstar_mass = tf.broadcast_to(kstar_mass_cast, shape=(n_events,))
    if KSTARZ_WIDTH > 0:
        kstar_mass = tfp.distributions.TruncatedNormal(loc=kstar_mass,
                                                       scale=kstar_width_cast,
                                                       low=min_mass,
                                                       high=max_mass).sample()
    return kstar_mass

bz = GenParticle('B0', B0_MASS).set_children(GenParticle('K*0', mass=kstar_mass)
                                            .set_children(GenParticle('K+', mass=KAON_MASS),
                                                          GenParticle('pi-', mass=PION_MASS)),
                                            GenParticle('gamma', mass=0.0))

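For readers without TensorFlow Probability at hand, the same truncated-Gaussian idea can be sketched in plain NumPy via rejection sampling (the function name and bounds are illustrative, and a K*(892)0 natural width of about 47.4 MeV is assumed):

```python
import numpy as np

rng = np.random.default_rng(42)

KSTARZ_MASS = 895.81
KSTARZ_WIDTH = 47.4  # assumed K*(892)0 natural width in MeV

# Draw one mass per event from a Gaussian, redrawing any value that falls
# outside the per-event kinematic bounds (simple rejection sampling).
def kstar_mass_np(min_mass, max_mass, n_events):
    masses = rng.normal(KSTARZ_MASS, KSTARZ_WIDTH, size=n_events)
    bad = (masses < min_mass) | (masses > max_mass)
    while bad.any():
        masses[bad] = rng.normal(KSTARZ_MASS, KSTARZ_WIDTH, size=bad.sum())
        bad = (masses < min_mass) | (masses > max_mass)
    return masses

# Lower bound ~ m(K) + m(pi); upper bound ~ m(B0).
m = kstar_mass_np(np.full(1000, 633.2), np.full(1000, 5279.58), 1000)
print(m.min() >= 633.2 and m.max() <= 5279.58)  # True
```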

Shortcut for simple decays

The generation of simple n-body decay chains can be done using the nbody_decay function of phasespace, which takes

  • The mass of the top particle.

  • The mass of children particles as a list.

  • The name of the top particle (optional).

  • The names of the children particles (optional).

If the names are not given, top and p_{i} are assigned to the top particle and the i-th child, respectively. For example, to generate \(B^0\to K\pi\), one would do:

import phasespace

N_EVENTS = 1000

B0_MASS = 5279.58
PION_MASS = 139.57018
KAON_MASS = 493.677

decay = phasespace.nbody_decay(B0_MASS, [PION_MASS, KAON_MASS],
                               top_name="B0", names=["pi", "K"])
weights, particles = decay.generate(n_events=N_EVENTS)

In this example, decay is simply a GenParticle with the corresponding children.

Eager execution

By default, phasespace uses TensorFlow to build a graph of the computations. This is usually more performant, especially if the graph is used multiple times. However, it has the disadvantage that _inside_ phasespace the actual values are not computed at Python runtime; e.g., if a breakpoint is set, the values of a tf.Tensor won’t be available.

TensorFlow (since version 2.0), however, can easily switch to so-called “eager execution”: in this mode it behaves like NumPy; values are computed instantly, and the Python code is executed on every call instead of only once during tracing.

To switch this on or off, the global TensorFlow flag tf.config.run_functions_eagerly(True) or the environment variable PHASESPACE_EAGER (which sets this flag) can be used.
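A minimal sketch of the global flag in action (plain TensorFlow, independent of phasespace):

```python
import tensorflow as tf

tf.config.run_functions_eagerly(True)  # run tf.function code eagerly

@tf.function
def traced(x):
    # With eager execution on, this body runs as ordinary Python on every
    # call, so concrete tensor values are visible (e.g. at a breakpoint).
    return x * 2

result = traced(tf.constant(3.0))
print(float(result))  # 6.0

tf.config.run_functions_eagerly(False)  # restore the default (graph mode)
```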

Random numbers

The random number generation inside phasespace is transparent, in order to allow for deterministic behavior if desired. Every function that uses random number generation takes a seed (or rng) argument. The behavior is as follows:

  • if no seed is given, the global random number generator of TensorFlow is used. Setting this instance explicitly, or setting the seed via tf.random.set_seed, allows for deterministic execution of a whole _script_.

  • if the seed is a number, it will be used to create a random number generator. Using the same seed again will result in the same output.

  • if the seed is an instance of tf.random.Generator, this instance will be used directly, and its state will be advanced by an undefined number of steps.
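The reproducibility guarantee of the second case can be sketched with plain TensorFlow:

```python
import tensorflow as tf

# Two generators built from the same seed produce identical sequences,
# which is what passing the same numeric seed twice relies on.
rng_a = tf.random.Generator.from_seed(42)
rng_b = tf.random.Generator.from_seed(42)

a = rng_a.normal(shape=(3,))
b = rng_b.normal(shape=(3,))
print(bool(tf.reduce_all(a == b)))  # True
```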