Multi-leg one-loop massive amplitudes from integrand reduction via Laurent expansion

Journal of High Energy Physics, Mar 2014

We present the application of a novel reduction technique for one-loop scattering amplitudes based on the combination of the integrand reduction and Laurent expansion. We describe the general features of its implementation in the computer code Ninja, and its interface to GoSam. We apply the new reduction to a series of selected processes involving massive particles, from six to eight legs.



Hans van Deurzen (3), Gionata Luisoni (3), Pierpaolo Mastrolia (1,3), Edoardo Mirabella (3), Giovanni Ossola (0,2), Tiziano Peraro (3)

(0) The Graduate School and University Center, The City University of New York, 365 Fifth Avenue, New York, NY 10016, U.S.A.
(1) Dipartimento di Fisica e Astronomia, Università di Padova, and INFN Sezione di Padova, via Marzolo 8, 35131 Padova, Italy
(2) Physics Department, New York City College of Technology, The City University of New York, 300 Jay Street, Brooklyn, NY 11201, U.S.A.
(3) Max-Planck-Institut für Physik, Föhringer Ring 6, D-80805 München, Germany

Contents

1 Introduction
2 Reduction algorithm: integrand reduction via Laurent expansion
  2.1 Integrand and integral decomposition
  2.2 Scattering amplitudes via Laurent expansion
  2.3 The C++ library Ninja
3 Interfacing Ninja with GoSam
4 Precision tests
5 Applications to massive amplitudes
6 Conclusions

1 Introduction

Scattering amplitudes in quantum field theories are analytic functions of the kinematic variables of the interacting particles, hence they can be determined by studying the structure of their singularities. The multi-particle factorization properties of the amplitudes become transparent when internal particles go on their mass shell [1, 2]. These configurations correspond to poles of the amplitude, and the investigation of the general structure of the residues at the multi-particle factorization channels turns out to be of particular interest. Indeed, the knowledge of such structure has been fundamental for discovering new relations fulfilled by scattering amplitudes, such as the BCFW recurrence relation [3] and its link to the leading singularity of one-loop amplitudes [4].

The systematic classification of the residues, for all the poles corresponding to the quadruple, triple, double, and single cuts, has been achieved, in four dimensions, by employing integrand-reduction methods [5, 6]. The latter led to the OPP integrand-decomposition formula for one-loop integrals [6], which allows one to write each residue as a linear combination of process-independent polynomials multiplied by process-dependent coefficients. These results provided a deeper understanding of the structure of scattering amplitudes and have shown the underlying simplicity beneath the rich mathematical structure of quantum field theory. Moreover, they provided the theoretical framework for the development of efficient computational algorithms for one-loop calculations in perturbation theory, which have been implemented in various automated codes [7-16], improving the state of the art of predictions at next-to-leading-order accuracy [17-50].

Recently, the classification of the structure of the residues has been obtained in a more general and elegant form within the framework of multivariate polynomial division and algebraic geometry [51, 52]. The use of these techniques proved that the integrand decomposition, originally formulated for one-loop amplitudes, is applicable at any order in perturbation theory, irrespective of the complexity of the topology of the diagrams involved.
An iterative integrand-recursion formula, based on successive divisions of the numerators modulo the Gröbner basis of the ideals generated by the cut denominators, can provide the form of the residues at all multi-particle poles for arbitrary amplitudes, independently of the number of loops. Extensions of the integrand reduction method beyond one loop, initiated in [53, 54] and then systematized within the language of algebraic geometry [51, 52], have recently become the topic of several studies [55-60], thus providing a new direction in the study of multi-loop amplitudes.

In the context of the integrand reduction, the process-dependent coefficients can be numerically determined by solving a system of algebraic equations obtained by evaluating the numerator of the integrand at explicit values of the loop momentum [6]. The system becomes triangular if one evaluates the numerator at the multiple cuts, i.e. at the set of complex values of the integration momentum for which a given set of propagators vanish. The extraction of all coefficients via polynomial fitting has been implemented in publicly available codes performing integrand decomposition, such as CutTools [61] and Samurai [62]. These algorithms do not require any specific recipe for the generation of the numerator function, which can be performed by using traditional Feynman diagrams, by means of recursive relations, or by gluing tree-level sub-amplitudes, as in unitarity-based methods.

The code CutTools implements the four-dimensional integrand-reduction algorithm [63-65], in which the cut-constructible term and the rational term are necessarily computed separately. The latter escapes four-dimensional integrand reduction and has to be computed using other methods, e.g. by using ad hoc tree-level Feynman rules [65-72]. Significant improvements were achieved with the d-dimensional extension of integrand-reduction methods [73-75], which expose a richer polynomial structure of the integrand and allow for the combined determination of both cut-constructible and rational terms at once. This idea of performing unitarity cuts in d dimensions was the basis for the development of Samurai, which extends the OPP polynomial structures to include an explicit dependence on the (d - 4)-dimensional parameter needed for the automated computation of the full rational term. Moreover, it includes the parametrization of the residue of the quintuple cut [76] and implements the numerical sampling via Discrete Fourier Transform [64].

The integrand decomposition was originally developed for renormalizable gauge theories, where, at one loop, the rank of the numerator cannot be larger than the number of external legs. The reduction of diagrams where the rank can be higher, as required for example when computing $pp \to H + 2, 3$ jets in gluon fusion in the large top-mass limit [42, 44], demands an extension of the algorithm to accommodate the richer monomial structures of the residues. This extension has been implemented in Samurai, together with the corresponding sampling required to fit all the coefficients [77-79].

More recently, elaborating on the techniques proposed in [80, 81], a different approach to the d-dimensional integrand-reduction method was proposed [78]. The key point of this method is to extract the coefficients more efficiently by performing a Laurent expansion of the integrand. The method is general and relies only on the knowledge of the explicit dependence of the numerator on the loop momentum.
In general, when the multiple-cut conditions do not fully constrain the loop momentum, the solutions are still functions of some free parameters, possibly the components of the momentum which are not frozen by the cut conditions. The integrand-reduction algorithms implemented in the codes [61, 62] require solving a system of equations obtained by sampling on those free parameters. The system is triangular, thus the coefficients of a given residue can be computed only after subtracting all the non-vanishing contributions coming from higher-point residues.

The key observation suggested in ref. [78] is that the reduction algorithm can be simplified by exploiting the universal structure of the residues, hence of their asymptotic expansion. Indeed, by performing a Laurent expansion with respect to one of the free parameters which appear in the solutions of the cut, both the integrand and the subtraction terms exhibit the same polynomial behavior as the residue. Moreover, the contributions coming from the subtracted terms can be implemented as corrections at the coefficient level, hence replacing the subtractions at the integrand level of the original algorithm. The parametric form of these corrections can be computed once and for all, in terms of a subset of the higher-point coefficients. This method significantly reduces the number of coefficients entering each subtracted term; in particular, boxes and pentagons decouple from the computation of lower-point coefficients. If either the analytic expression of the integrand or the tensor structure of the numerator is known, this procedure can be implemented in a semi-numerical algorithm. Indeed, the coefficients of the Laurent expansion of a rational function can be computed, either analytically or numerically, by performing a polynomial division between the numerator and the (uncut) denominators.

The scope of the present paper is to review the main features of the novel reduction algorithm and demonstrate its performance on a selection of challenging calculations of scattering amplitudes with massive bosons and quarks, involving six, seven, and eight particles. The integrand reduction via Laurent expansion has been implemented in the C++ library Ninja [82], and interfaced to the GoSam framework [12] for the evaluation of virtual one-loop scattering amplitudes. The cleaner and lighter system-solving strategy, which deals with a diagonal system instead of a triangular one, and which substitutes the polynomial subtractions with coefficient-level corrections, translates into net gains in terms of both numerical accuracy and computing speed. We recall that the new library has been recently used in the evaluation of NLO QCD corrections to $pp \to t\bar{t}Hj$ [45].

The paper is organized as follows. In section 2, we discuss the theoretical foundations of the integrand decomposition via Laurent expansion and its implementation in an algorithm for the reduction of one-loop amplitudes. The interface between Ninja and GoSam for automated one-loop calculations is described in section 3. Section 4 is devoted to a detailed study of the precision and of the computational performance of the novel framework, which shows a significant improvement with respect to the standard algorithms. Applications of the GoSam+Ninja framework to the evaluation of NLO QCD virtual corrections to several multi-leg massive processes are shown in section 5.
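Before entering the details of section 2, a toy example (not taken from the paper) may help to illustrate the polynomial-division idea mentioned above: the large-t behavior of a rational function is obtained by dividing the numerator by the denominator, and the coefficients of the expansion can be read off directly,

    \frac{a\,t^2 + b\,t + c}{t + d}
      = a\,t + (b - a d) + \frac{c - d\,(b - a d)}{t + d}
      = a\,t + (b - a d) + \mathcal{O}(1/t) .

In the same spirit, the coefficients of the residues are identified with the coefficients of the asymptotic expansion of the integrand on the cut solutions, corrected by the known contributions of higher-point residues.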
2 Reduction algorithm: integrand reduction via Laurent expansion

In this section we describe the Laurent-expansion method for the integrand reduction of one-loop amplitudes, as implemented in the C++ library Ninja.

2.1 Integrand and integral decomposition

An n-point one-loop amplitude can be written as a linear combination of contributions $\mathcal{M}_{1\cdots n}$ of the form

    \mathcal{M}_{1\cdots n} = \int d^d \bar{q}\; I_{1\cdots n} , \qquad
    I_{1\cdots n} = \frac{N(\bar{q})}{D_1 D_2 \cdots D_n} ,

where $N(\bar{q})$ is a process-dependent polynomial numerator in the components of the $d = (4 - 2\epsilon)$-dimensional loop momentum $\bar{q}$. The latter can be decomposed as follows,

    \bar{q} = q + \mu , \qquad \bar{q}^2 = q^2 - \mu^2 ,

in terms of its 4-dimensional component q and of μ², which encodes its (-2ε)-dimensional components. The denominators $D_i$ are quadratic polynomials in the loop momentum and correspond to Feynman loop propagators,

    D_i = (\bar{q} + p_i)^2 - m_i^2 .                                                      (2.3)

Every one-loop integrand in d dimensions can be decomposed as [6, 73]

    I_{1\cdots n} = \sum_{k=1}^{\min(n,5)} \sum_{\{i_1,\ldots,i_k\}}
                    \frac{\Delta_{i_1\cdots i_k}(q,\mu^2)}{D_{i_1}\cdots D_{i_k}} ,        (2.4)

where the $\Delta_{i_1\cdots i_k}$ are irreducible polynomial residues, i.e. polynomials which do not contain any term proportional to the corresponding loop denominators $D_{i_1},\ldots,D_{i_k}$. The second sum in eq. (2.4) runs over all unordered selections without repetition of the k indices $\{i_1,\ldots,i_k\}$.

For any set of denominators $D_{i_1},\ldots,D_{i_k}$ with $k \le 5$, one can choose a 4-dimensional basis of momenta $\mathcal{E} = \{e_1, e_2, e_3, e_4\}$ which satisfies the normalization condition $e_1 \cdot e_2 = 1$, while all the other scalar products vanish. In addition, for k = 4 we choose the basis such that $e_4$ is orthogonal to the external legs of the sub-diagram identified by the set of denominators under consideration. Similarly, for k = 2, 3 we choose both $e_3$ and $e_4$ to be orthogonal to the external legs of the corresponding sub-diagram.

With this choice of momentum basis, the numerator and the denominators can be written as polynomials in the coordinates z ≡ (z_1, z_2, z_3, z_4, z_5) = (x_1, x_2, x_3, x_4, μ²). The variables $x_i$ are the components of q with respect to the basis $\mathcal{E}$,

    q = -p_{i_1} + x_1 e_1 + x_2 e_2 + x_3 e_3 + x_4 e_4 .

More explicitly, the numerator is a polynomial in the components of q and in μ²,

    N(\bar{q}) = N(q, \mu^2) = N(x_1, x_2, x_3, x_4, \mu^2) = N(z) .

The coordinates $x_i$ can also be written as scalar products, e.g.

    x_1 = (q + p_{i_1}) \cdot e_2 , \qquad x_2 = (q + p_{i_1}) \cdot e_1 ,

and similarly for $x_3$ and $x_4$ in terms of projections onto $e_3$ and $e_4$. With these definitions, one can show [6, 51, 73] that the residues take a universal, most general parametric form in a renormalizable theory; for instance, the tadpole residue reads

    \Delta_{i_1} = c_0^{(i_1)} + c_1^{(i_1)} x_1 + c_2^{(i_1)} x_2 + c_3^{(i_1)} x_3 + c_4^{(i_1)} x_4 ,

and analogous polynomial parametrizations hold for the higher-point residues (eq. (2.9)). This parametric form can also be extended to non-renormalizable and effective theories, where the rank of the numerator can be larger than the number of loop propagators [78].

Most of the terms appearing in eq. (2.9) vanish after integration, i.e. they are spurious. The non-spurious coefficients, instead, appear in the final result, which expresses the amplitude $\mathcal{M}_{1\cdots n}$ as a linear combination of known Master Integrals $I_{i_1\cdots i_k}[\alpha]$, i.e. integrals of the form $\int d^d\bar{q}\,\alpha/(D_{i_1}\cdots D_{i_k})$, such as $I_{i_1\cdots i_k}[1]$, $I_{i_1 i_2}[((q + p_{i_1}) \cdot e_2)^2]$ and $I_{i_1 i_2}[\mu^2]$ (eq. (2.10)). The problem of the computation of any one-loop amplitude can therefore be reduced to the determination of the coefficients of the Master Integrals appearing in eq. (2.10), which in turn can be identified with a subset of the coefficients of the parametric residues in eq. (2.9).
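To make eq. (2.3) concrete, the following is a minimal, self-contained sketch (type names and the mostly-minus metric convention are assumptions of this illustration, not code from Ninja) of how a loop denominator can be evaluated at given values of the 4-dimensional loop momentum q and of μ²:

    #include <array>
    #include <complex>

    using Cmplx = std::complex<double>;
    using Momentum = std::array<Cmplx, 4>;   // (E, px, py, pz)

    // Minkowski square with the (+,-,-,-) metric (an assumption of this sketch).
    Cmplx minkowskiSquare(const Momentum& k)
    {
        return k[0]*k[0] - k[1]*k[1] - k[2]*k[2] - k[3]*k[3];
    }

    // D_i = (qbar + p_i)^2 - m_i^2 = (q + p_i)^2 - mu^2 - m_i^2,
    // with q the 4-dimensional part of the loop momentum and mu2 = mu^2.
    Cmplx denominator(const Momentum& q, const Momentum& p_i,
                      const Cmplx& mu2, double m_i)
    {
        Momentum shifted;
        for (std::size_t nu = 0; nu < 4; ++nu) shifted[nu] = q[nu] + p_i[nu];
        return minkowskiSquare(shifted) - mu2 - Cmplx(m_i * m_i);
    }

Evaluating the denominators in this complexified form is what allows the integrand to be sampled on the (generally complex) cut solutions used below.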
2.2 Scattering amplitudes via Laurent expansion

In ref. [78], elaborating on the techniques proposed in [80, 81], an improved version of the integrand-reduction method for one-loop amplitudes was presented. This method allows, whenever the analytic dependence of the integrand on the loop momentum is known, the extraction of the unknown coefficients of the residues $\Delta_{i_1\cdots i_k}$ by performing a Laurent expansion of the integrand with respect to one of the free loop components which are not constrained by the corresponding on-shell conditions $D_{i_1} = \cdots = D_{i_k} = 0$.

Within the original integrand-reduction algorithm [61, 62, 64], the determination of these unknown coefficients requires: i) sampling the numerator on a finite subset of the on-shell solutions; ii) subtracting from the integrand the non-vanishing contributions coming from higher-point residues; and iii) solving the resulting linear system of equations. With the Laurent-expansion approach, since in the asymptotic limit both the integrand and the higher-point subtractions exhibit the same polynomial behavior as the residue, one can instead identify the unknown coefficients with the ones of the expansion of the integrand, corrected by the contributions coming from higher-point residues. In other words, the system of equations for the coefficients becomes diagonal, and the subtractions of higher-point contributions can be implemented as corrections at the coefficient level which replace the subtractions at the integrand level of the original algorithm. Because of the universal structure of the residues, the parametric form of these corrections can be computed once and for all, in terms of a subset of the higher-point coefficients. This also significantly reduces the number of coefficients entering each subtraction; for instance, boxes and pentagons do not affect the computation of lower-point residues at all. In the following, we describe in more detail how to determine the needed coefficients of each residue.

Pentagons. Pentagon contributions are spurious, i.e. they do not appear in the integrated result. In the original integrand-reduction algorithm their computation is nevertheless needed, in order to implement the corresponding subtractions at the integrand level. Within the Laurent-expansion approach, since the subtraction terms of five-point residues always vanish in the asymptotic limit, their computation is not needed and can be skipped.

Boxes. The coefficient $c_0$ of the box contributions can be determined via 4-dimensional quadruple cuts [4]. In four dimensions (i.e. $\bar{q} = q$, μ² = 0) a quadruple cut $D_{i_1} = \cdots = D_{i_4} = 0$ has two solutions, $q^+$ and $q^-$. The coefficient $c_0$ can be expressed in terms of these solutions as

    c_0 = \frac{1}{2} \left[ \frac{N(q,\mu^2)}{\prod_{j \neq i_1,i_2,i_3,i_4} D_j} \bigg|_{q = q^+,\, \mu^2 = 0}
                           + \frac{N(q,\mu^2)}{\prod_{j \neq i_1,i_2,i_3,i_4} D_j} \bigg|_{q = q^-,\, \mu^2 = 0} \right] .

The coefficient $c_4$ can be found by evaluating the integrand on d-dimensional quadruple cuts in the asymptotic limit μ² → ∞ [81]. A d-dimensional quadruple cut has an infinite number of solutions, which can be parametrized by the extra-dimensional variable μ². These solutions become particularly simple in the limit of large μ², namely

    q^\pm = a \pm \beta \sqrt{\mu^2}\; e_4 + \mathcal{O}(1) ,

where the vector a and the constant β are fixed by the cut conditions. The coefficient $c_4$, when non-vanishing, can be found in this limit as the leading term of the μ² expansion of $N(q,\mu^2)/\prod_{j \neq i_1,i_2,i_3,i_4} D_j$ evaluated on these solutions. The other coefficients of the boxes are spurious and their computation can be avoided.
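A minimal sketch of the quadruple-cut average for $c_0$ (the callable types and names are assumptions of this illustration, not Ninja's interface):

    #include <array>
    #include <complex>
    #include <functional>
    #include <vector>

    using Cmplx = std::complex<double>;
    using Momentum = std::array<Cmplx, 4>;

    // c0 = 1/2 [ N / prod_j D_j |_{q=q+} + N / prod_j D_j |_{q=q-} ],
    // evaluated on the two 4-dimensional quadruple-cut solutions (mu^2 = 0).
    Cmplx boxCoefficientC0(
        const std::function<Cmplx(const Momentum&, const Cmplx&)>& N,
        const std::vector<std::function<Cmplx(const Momentum&)>>& uncutDenominators,
        const Momentum& qPlus, const Momentum& qMinus)
    {
        auto cutIntegrand = [&](const Momentum& q) {
            Cmplx value = N(q, Cmplx(0.0));            // 4-dimensional cut: mu^2 = 0
            for (const auto& D : uncutDenominators) value /= D(q);
            return value;
        };
        return 0.5 * (cutIntegrand(qPlus) + cutIntegrand(qMinus));
    }

The averaging over the two solutions is what removes the spurious, parity-odd pieces of the residue from the extracted coefficient.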
Triangles. The residues of the triangle contributions can be determined by evaluating the integrand on the solutions of d-dimensional triple cuts [80], which can be parametrized by the extra-dimensional variable μ² and by a free parameter t (eq. (2.14)), where the vector a and the constant appearing in the parametrization are fixed by the cut conditions $D_{i_1} = D_{i_2} = D_{i_3} = 0$. On these solutions, the integrand generally receives contributions from the residue of the triple cut $\Delta_{i_1 i_2 i_3}$, as well as from the boxes and pentagons which share the three cut denominators,

    \frac{N(q,\mu^2)}{\prod_{j \neq i_1,i_2,i_3} D_j}
      = \Delta_{i_1 i_2 i_3}
      + \sum_{j} \frac{\Delta_{i_1 i_2 i_3 j}}{D_j}
      + \sum_{j<k} \frac{\Delta_{i_1 i_2 i_3 j k}}{D_j D_k} .

However, after taking the asymptotic expansion at large t and dropping the terms which vanish in this limit, the pentagon contributions vanish, while the box contributions are constant in t but vanish when taking the average between the parametrizations $q^+$ and $q^-$ of eq. (2.14). More explicitly,

    \frac{N(q^\pm,\mu^2)}{\prod_{j \neq i_1,i_2,i_3} D_j}\bigg|_{t \to \infty}
      = \Delta_{i_1 i_2 i_3} + d_1^\pm + d_2^\pm\, \mu^2 + \mathcal{O}(1/t) ,
      \qquad d_i^+ + d_i^- = 0 .

By comparison of eqs. (2.15), (2.16) and (2.17) one gets all the triangle coefficients. It is worth observing that, within the Laurent-expansion approach, the determination of the 3-point residues does not require any subtraction of higher-point terms. Moreover, even though the integrand is a rational function, in this asymptotic limit it exhibits the same polynomial behavior as the expansion of the residue $\Delta_{i_1 i_2 i_3}$: besides the constant and μ²-dependent terms, the expansions of $N(q^\pm,\mu^2)/\prod_{j \neq i_1,i_2,i_3} D_j$ contain terms proportional to $t^2$ and $t^3$, whose coefficients are identified with $c_5^{(i_1 i_2 i_3)}$, $c_6^{(i_1 i_2 i_3)}$ on $q^+$ and with $c_2^{(i_1 i_2 i_3)}$, $c_3^{(i_1 i_2 i_3)}$ on $q^-$, up to $\mathcal{O}(1/t)$ corrections.

Bubbles. The on-shell solutions of a d-dimensional double cut $D_{i_1} = D_{i_2} = 0$ can be parametrized in terms of the three free parameters x, t and μ², while the vectors $a_i$ and the constants appearing in the parametrization are fixed by the on-shell conditions. After evaluating the integrand on these solutions and taking the asymptotic limit t → ∞, the only non-vanishing subtraction terms come from the triangle contributions which share the two cut denominators. The integrand and the subtraction terms are rational functions, but in the asymptotic limit they both have the same polynomial behavior as the residue: they expand into the same monomials in x, μ² and t as the residue $\Delta_{i_1 i_2}$, whose expansion on the solutions $q^\pm$ involves the ten coefficients $c_0^{(i_1 i_2)}, \ldots, c_9^{(i_1 i_2)}$, while the corresponding expansions of the subtraction terms $\Delta_{i_1 i_2 j}(q^\pm,\mu^2)/D_j$ are written in terms of coefficients $c_{s_j,0}, \ldots, c_{s_j,9}$ (eqs. (2.21)-(2.23)).

The coefficients $c_{s_j,i}$ of the Laurent expansion of the subtraction terms can be computed once and for all as parametric functions of the triangle coefficients. Therefore, the subtraction of the triangles can be implemented as corrections to the coefficients of the expansion of the integrand. Indeed, by inserting eqs. (2.21), (2.22) and (2.23) in eq. (2.20), one gets the bubble coefficients $c_i^{(i_1 i_2)}$, for i = 0, ..., 9, as the coefficients of the expansion of the integrand corrected by the subtraction terms (eq. (2.24)).
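The coefficient-level correction of eq. (2.24) amounts to a purely diagonal system. A minimal sketch (array layout and names are assumptions of this illustration; the triangle corrections are taken as already summed over the triangles sharing the two cut denominators):

    #include <array>
    #include <complex>
    #include <cstddef>

    using Cmplx = std::complex<double>;
    constexpr std::size_t kBubbleCoeffs = 10;   // c_0, ..., c_9 of a 2-point residue

    // Each bubble coefficient is the corresponding Laurent coefficient of the
    // integrand on the double cut, corrected by the triangle contributions.
    std::array<Cmplx, kBubbleCoeffs>
    bubbleCoefficients(const std::array<Cmplx, kBubbleCoeffs>& integrandExpansion,
                       const std::array<Cmplx, kBubbleCoeffs>& triangleCorrections)
    {
        std::array<Cmplx, kBubbleCoeffs> c{};
        for (std::size_t i = 0; i < kBubbleCoeffs; ++i)
            c[i] = integrandExpansion[i] - triangleCorrections[i];
        return c;
    }

No re-sampling of the integrand or triangular back-substitution is needed at this stage, which is the practical advantage of the Laurent-expansion approach.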
Tadpoles. Once the coefficients of the triangles and the bubbles are known, one can determine the non-spurious coefficient $c_0$ of a tadpole residue $\Delta_{i_1}$ by evaluating the integrand on the single cut $D_{i_1} = 0$. One can choose 4-dimensional solutions of the form

    q^+ = -p_{i_1} + t\, e_3 + \frac{\beta}{2t}\, e_4 ,

parametrized by the free variable t, while the constant β is fixed by the cut condition. In the asymptotic limit t → ∞, only the bubble and triangle subtraction terms are non-vanishing. Similarly to the case of the bubbles, in this limit the integrand and the subtraction terms exhibit the same polynomial behavior as the residue, i.e.

    \frac{N(q^+)}{\prod_{j \neq i_1} D_j} = n_0 + n_4\, t + \mathcal{O}(1/t) .

Putting everything together, we can write the coefficient of the tadpole integral as the corresponding one in the expansion of the integrand, corrected by coefficient-level subtractions (eq. (2.31)). Once again, we observe that the subtraction terms $c_{s^2_j,0}$ and $c_{s^3_{jk},0}$, coming from bubble and triangle contributions respectively, are known parametric functions of the coefficients of the corresponding higher-point residues.

2.3 The C++ library Ninja

The C++ library Ninja [82] provides a semi-numerical implementation of the Laurent-expansion method described above. Since the integrand is a rational function of the loop variables, its Laurent expansion is performed via a simplified polynomial-division algorithm between the expansion of the numerator N and the uncut denominators.

The inputs of the reduction algorithm implemented in Ninja are the external momenta $p_i$ and the masses $m_i$ of the loop denominators defined in eq. (2.3), and the numerator N(q, μ²) of the integrand, cast in four different forms. The first form provides a simple evaluation of the numerator as a function of q and μ², which is used to compute the coefficient $c_0$ of the boxes. It can also be used to compute the spurious coefficients of the pentagons via penta-cuts, and all the coefficients of the boxes when the expansion in μ² is not provided.

The other three forms of the numerator yield instead the leading terms of a parametric expansion of the integrand. The first expansion is the one used to obtain the coefficient $c_4$ of the boxes. When the rank r is equal to the number n of external legs of the diagram, this form returns the coefficient of $t^n$ obtained by substituting in the numerator N(q, μ²) the asymptotic μ² → ∞ quadruple-cut solutions, written as a function of a generic vector v, which is computed by Ninja and is determined by the quadruple-cut constraints.

The second expansion is used to compute the triangle and tadpole coefficients. In this case it returns the coefficients of the terms $t^j \mu^{2k}$ for j = r, r-1, ..., n-3, obtained from N(q, μ²) with the substitutions given in eq. (2.33), subject to $v_3^2 = v_4^2 = 0$, as a function of the generic momenta $v_i$ and of the constant $\beta_0$. Since in a renormalizable theory $r \le n$, and since by definition of rank we have $j + 2k \le r$, at most 6 terms can be non-vanishing in the specified range of j. For effective theories with $r \le n + 1$, one can instead have up to 9 non-vanishing polynomial terms. In each call of the numerator, Ninja specifies the lowest power of t which is needed in the evaluation, thus avoiding the computation of unnecessary coefficients of the expansion.

The third and last expansion is needed for the computation of the 2-point residues and returns the coefficients of the terms $t^j \mu^{2k} x^l$ for j = r, r-1, ..., n-2, obtained from N(q, μ²) with the substitutions

    q \to v_1 + x\, v_2 + t\, v_3 + \frac{\beta_0 + \beta_1\, x + \beta_2\, x^2 + \mu^2}{t}\, v_4 ,
    \qquad v_2 \cdot v_3 = v_2 \cdot v_4 = v_3^2 = v_4^2 = 0 ,                              (2.34)

as a function of the cut-dependent momenta $v_i$ and constants $\beta_i$. In a renormalizable theory one can have at most 7 non-vanishing terms in this range of j, while for $r \le n + 1$ one can have 13 non-vanishing terms. As in the previous case, in each call of the numerator Ninja specifies the lowest power of t which is needed. It is worth noticing that the expansion in eq. (2.34) can be obtained from the previous one in eq. (2.33) with the substitutions $v_2 \cdot v_3 = v_2 \cdot v_4 = 0$. All these expansions can be easily and quickly obtained from the knowledge of the analytic dependence of the numerator on the loop momentum q and on μ².
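For orientation only, a hypothetical interface illustrating the four numerator forms described above; the class name, method names, signatures and coefficient packing are assumptions of this sketch and do NOT reproduce Ninja's actual API:

    #include <complex>
    #include <vector>

    using Cmplx = std::complex<double>;
    struct CMomentum { Cmplx comp[4]; };

    class NumeratorForms {
    public:
        virtual ~NumeratorForms() = default;

        // Form 1: direct evaluation N(q, mu^2), used e.g. for the box coefficient c0.
        virtual Cmplx evaluate(const CMomentum& q, const Cmplx& mu2) const = 0;

        // Form 2: leading coefficient of the mu^2 -> infinity quadruple-cut
        // expansion, used for the box coefficient c4.
        virtual void muExpansion(const CMomentum v[],
                                 std::vector<Cmplx>& coeffs) const = 0;

        // Form 3: coefficients of t^j mu^(2k) on triple/single-cut
        // parametrizations (triangles and tadpoles), down to the lowest
        // power of t requested by the reduction.
        virtual void tExpansion(const CMomentum v[], const Cmplx beta[],
                                int lowestPowerOfT,
                                std::vector<Cmplx>& coeffs) const = 0;

        // Form 4: coefficients of t^j mu^(2k) x^l on double-cut
        // parametrizations (bubbles), cf. eq. (2.34).
        virtual void txExpansion(const CMomentum v[], const Cmplx beta[],
                                 int lowestPowerOfT,
                                 std::vector<Cmplx>& coeffs) const = 0;
    };

In practice these expansions can be generated automatically from the analytic form of the numerator, which is how the GoSam interface described in section 3 provides them.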
For every phase-space point, Ninja computes the parametric solutions of all the multiple cuts, performs the Laurent expansion of the integrand via a simplified polynomial division between the expansion of the numerator and the set of uncut denominators, and implements the coefficient-level subtractions appearing in eqs. (2.24) and (2.31). Finally, the non-spurious coefficients obtained in this way are multiplied by the corresponding Master Integrals in order to get the integrated result, as in eq. (2.10).

The routines which compute the Master Integrals are called by Ninja via a generic interface which allows the use of any integral library implementing it, with the possibility of switching between different libraries at run time. By default, a C++ wrapper of the OneLoop integral library [83, 84] is used. This wrapper caches every computed integral, allowing constant-time lookups of their values from their arguments. An interface with the LoopTools library [85, 86] is available as well.
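The caching of computed integrals mentioned above can be sketched as a simple memoization layer keyed by the integral's arguments; the names and the key layout below are illustrative assumptions, not the actual OneLoop wrapper:

    #include <array>
    #include <complex>
    #include <map>

    using Cmplx = std::complex<double>;

    // Laurent coefficients of a master integral: finite part, 1/eps and 1/eps^2.
    struct MIResult { Cmplx eps0, epsM1, epsM2; };

    class BubbleCache {
    public:
        // s: invariant of the cut; m1sq, m2sq: internal masses squared.
        MIResult get(double s, double m1sq, double m2sq)
        {
            const std::array<double, 3> key{s, m1sq, m2sq};
            auto it = cache_.find(key);
            if (it != cache_.end()) return it->second;    // reuse cached value
            const MIResult r = compute(s, m1sq, m2sq);     // expensive evaluation
            cache_.emplace(key, r);
            return r;
        }

    private:
        MIResult compute(double /*s*/, double /*m1sq*/, double /*m2sq*/)
        {
            // Placeholder: delegate to an external integral library here.
            return MIResult{};
        }
        std::map<std::array<double, 3>, MIResult> cache_;
    };

A hash map would make the lookups genuinely constant-time, as described above; since the same master integrals recur across many cuts of the same diagram, the cache avoids a large fraction of the library calls.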
The Ninja library can also be used to compute integrals from effective and non-renormalizable theories, where the rank r of the numerator can exceed the number of legs by one unit. An example of this application, given in section 5, is Higgs boson production in association with three jets in gluon fusion, in the effective theory defined by the infinite top-mass limit.

3 Interfacing Ninja with GoSam

The library Ninja has been interfaced with the automatic generator of one-loop amplitudes GoSam, which provides Ninja with analytic expressions for the integrands of one-loop Feynman diagrams for generic processes within the Standard Model, as well as for Beyond-the-Standard-Model theories. GoSam combines automated diagram generation and algebraic manipulation [87-90] with integrand-reduction techniques [6, 63-65, 73, 78]. Amplitudes are generated via Feynman diagrams, using QGRAF [87], FORM [88], Spinney [90] and Haggies [89]. After the generation of all contributing diagrams, the virtual corrections are evaluated using the d-dimensional integrand-level reduction method, as implemented in the library Samurai [62], which allows for the combined determination of both cut-constructible and rational terms at once. As an alternative, the tensorial decomposition provided by Golem95C [91-93] is also available. Such reduction, which is numerically stable but more time consuming, is employed as a rescue system. After the reduction, all relevant master integrals can be computed by means of Golem95C [93], QCDLoop [94, 95], or OneLoop [83].

The possibility to deal with higher-rank one-loop integrals, where the powers of the loop momentum in the numerator exceed the number of denominators, is implemented in all three reduction programs, Samurai [78, 79], Ninja and Golem95C [96]. Higher-rank integrals can appear when computing one-loop integrals in effective field theories, e.g. for calculations involving the effective gluon-gluon-Higgs vertex [42, 44], or when dealing with spin-2 particles [41].

In order to embed Ninja into the GoSam framework, the algebraic manipulation of the integrands was adapted to generate the expansions needed by Ninja and described in section 2.3. The numerator, in all its forms, is then optimized for fast numerical evaluation by exploiting the new features of Form 4 [97, 98], and written in a Fortran90 source file which is compiled. At run time, these expressions are used as input for Ninja.

The Fortran90 module of the interface between Ninja and GoSam defines subroutines which allow one to control some of the settings of Ninja directly from the code that generates the virtual part of the amplitudes. Upon importing the module, the user can change the integral library used by Ninja, choosing between OneLoop [83] and LoopTools [85, 86]. For debugging purposes, one can also ask Ninja to perform internal tests or to print information about the ongoing computation. This option allows one to choose among different internal tests, namely the global N = N test, the local N = N tests on the different cuts, or a combination of both. These tests have been described in [62]. The verbosity of the debug output can be adjusted to control the amount of detail printed in the output file, for example the final results for the finite part and the poles of the diagram, the values of the coefficients computed in the reduction, the values of the corresponding Master Integrals, and the results of the various internal tests.

4 Precision tests

Within the context of numerical and semi-numerical techniques, the problem of correctly estimating the precision of the results is of primary importance. In particular, when performing the phase-space integration of the virtual contribution, it is important to assess in real time, for each phase-space point, the level of precision of the corresponding one-loop matrix element. Whenever a phase-space point is found in which the quality of the result falls below a certain threshold, either the point is discarded or the evaluation of the amplitude is repeated by means of a safer, albeit less efficient, procedure. Examples of such procedures involve the use of higher-precision routines or, in the case of GoSam, the use of the traditional tensorial reconstruction of the amplitude provided by Golem95C.

Various techniques for detecting points with low precision have been implemented within the different automated tools for the evaluation of one-loop virtual corrections. A standard method, which is widely employed, is based on the comparison of the numerical values of the poles with their known analytic results, dictated by the universal behavior of the infrared singularities. While this method is quite reliable, not all integrals which appear in the reconstruction of the amplitude give a contribution to the double and single poles. This often results in an overestimate of the precision, which might lead to keeping phase-space points whose finite part is less precise than what is predicted by the poles.

A different technique, which we refer to as the scaling test [9], is based on the scaling properties of scattering amplitudes when all physical scales (momenta, renormalization scale, masses) are rescaled by a common multiplicative factor x. As shown in [9], this method provides a very good correlation between the estimated precision and the actual precision of the finite parts.

Additional methods have been proposed, within the context of integrand-reduction approaches, which target the relations between the coefficients before integration, namely the reconstructed algebraic expression for the numerator function. This method, labeled the N = N test [61, 62], can be applied to the full amplitude (global N = N test) or individually within each residue of the individual cuts (local N = N test).
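A minimal sketch of the idea behind the global N = N check (the callable types and names are assumptions of this illustration): the numerator rebuilt from the fitted residues, multiplied back by the uncut denominators, is compared with the original numerator at an arbitrary test value of the loop momentum.

    #include <cmath>
    #include <complex>
    #include <functional>
    #include <vector>

    using Cmplx = std::complex<double>;
    struct LoopPoint { Cmplx q[4]; Cmplx mu2; };

    // One term of the reconstructed numerator: a fitted residue Delta_{i1...ik}
    // together with the denominators NOT belonging to that cut.
    struct CutTerm {
        std::function<Cmplx(const LoopPoint&)> residue;
        std::vector<std::function<Cmplx(const LoopPoint&)>> uncutDenominators;
    };

    // Global check: N(q, mu^2) should equal
    //   sum_cuts Delta_cut(q, mu^2) * prod_{j not in cut} D_j(q, mu^2)
    // at an arbitrary test point.
    bool globalNequalsNtest(const std::function<Cmplx(const LoopPoint&)>& numerator,
                            const std::vector<CutTerm>& terms,
                            const LoopPoint& testPoint, double tolerance)
    {
        Cmplx reconstructed(0.0);
        for (const auto& term : terms) {
            Cmplx contribution = term.residue(testPoint);
            for (const auto& D : term.uncutDenominators) contribution *= D(testPoint);
            reconstructed += contribution;
        }
        const Cmplx original = numerator(testPoint);
        return std::abs(original - reconstructed) <= tolerance * std::abs(original);
    }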
The drawback of this technique comes from the fact that the test is applied at the level of individual diagrams, rather than on the final result summed over all diagrams, which makes the construction of a rescue system quite cumbersome.

For the precision analysis contained in this paper, we present a new, simple and efficient method for estimating the number of digits of precision of the results, which we call the rotation test. This method exploits the invariance of the scattering amplitudes under an azimuthal rotation about the beam axis, namely the direction of the initial colliding particles. Such a rotation, which does not affect the initial states, changes the momenta of all final particles without changing their relative positions, thus reconstructing a theoretically identical process. However, the change in the values of all final-state external momenta leads to different bases for the parametrization of the residues within the integrand reconstruction, different coefficients in front of the master integrals, as well as different numerical values when the master integrals are computed. We verified that the choice of the angle of rotation does not affect the estimate, as long as this angle is not too small.

In order to study the correlation between the error estimated by the rotation test and the exact error, we follow the strategy of ref. [9]. In particular, we generated $10^4$ points for the process $u\bar{d} \to W^+ b\bar{b} g$ with massive bottom quarks, both in quadruple and in standard double precision, which we label $A_{quad}$ and A respectively, as well as the same points in double precision after performing a rotation, called $A_{rot}$. The exact error $\varepsilon_{ex}$ is defined by comparing A with $A_{quad}$ (eq. (4.1)), while the error estimated by the rotation test is obtained by comparing A with $A_{rot}$ (eq. (4.2)). In figure 1 we plot, for each phase-space point, the distribution of the quantity C which compares these two error estimates. In the ideal case of a perfect correlation between the exact error, as estimated by the quadruple-precision result, and the error estimated by the less time-consuming rotation test, the value of C would be close to zero, while the spread of the distribution provides a picture of the degree of correlation. Moreover, we observe a similar behavior for the rotation and the scaling tests.

In the following, we will employ the rotation test as the standard method to estimate the precision of the finite part of each renormalized virtual matrix element. If we call $\varepsilon_0$ the error at any given phase-space point, calculated according to eq. (4.2), we can define the precision of the finite part as $P_0 = -\log_{10}(\varepsilon_0)$. Concerning the precision of the double and single poles, $P_2 = -\log_{10}(\varepsilon_2)$ and $P_1 = -\log_{10}(\varepsilon_1)$, we exploit the fact that the values of the pole coefficients, after renormalization, are solely due to infrared (IR) divergences, whose expressions are well known [99]. The errors $\varepsilon_2$ and $\varepsilon_1$ are defined using the formula in eq. (4.1), in which the exact values are provided by the reconstructed IR poles, which are automatically evaluated by GoSam.
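A minimal sketch of the rotation test; the explicit error definition used here, ε = |A - A_rot| / |A|, is an assumption standing in for eq. (4.2) of the paper:

    #include <cmath>

    struct P4 { double E, px, py, pz; };

    // Azimuthal rotation about the beam (z) axis: initial-state momenta along z
    // are unchanged, final-state momenta keep their relative configuration.
    P4 rotateAboutBeam(const P4& p, double phi)
    {
        return { p.E,
                 std::cos(phi) * p.px - std::sin(phi) * p.py,
                 std::sin(phi) * p.px + std::cos(phi) * p.py,
                 p.pz };
    }

    // Estimated digits of precision, P0 = -log10( |A - A_rot| / |A| ), obtained
    // from the finite parts evaluated before and after the rotation.
    double estimatedPrecision(double finitePart, double finitePartRotated)
    {
        const double err = std::abs(finitePart - finitePartRotated)
                         / std::abs(finitePart);
        return -std::log10(err);
    }

Because both evaluations run in double precision, the test is far cheaper than a quadruple-precision re-evaluation, while probing a genuinely different numerical path through the reduction.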
In order to assess the level of precision of the results obtained with Ninja within GoSam, in figures 2 and 3 we plot the distributions of $P_2$ (precision of the double pole), $P_1$ (single pole) and $P_0$ (finite part) for two challenging virtual amplitudes with massive internal and external particles, namely $gg \to t\bar{t}Hg$ ($t\bar{t}Hj$) and $uu \to Huugg$ ($Hjjjj$ in VBF). By imposing a threshold on the value of $P_0$, we can set a rejection criterion for phase-space points in which the quality of the calculated scattering amplitude falls below the requested precision. This also allows one to estimate the percentage of points which would be discarded (or redirected to the rescue system). This value, as expected from the shape of the various distributions, is strongly process dependent and should be selected according to the particular phenomenological analysis at hand. As a benchmark value, in ref. [9] the threshold for rejection was set to $P_0 = 3$. In a similar fashion, in table 1 we provide the percentages of bad points, i.e. points whose precision falls below the threshold, for increasing values of the rejection threshold. The two plots are built using sets of $5 \times 10^4$ and $10^5$ phase-space points, for $gg \to t\bar{t}Hg$ and $uu \to Huugg$ (VBF) respectively. No cuts have been applied in the selection of the points, which are randomly distributed over the whole available phase space for the outgoing particles and are generated using RAMBO.

Figure 2. Precision plot for $gg \to t\bar{t}Hg$: the distributions are obtained using $5 \times 10^4$ randomly distributed phase-space points.

Figure 3. Precision plot for $uu \to Huugg$ in VBF: the distributions are obtained using $10^5$ randomly distributed phase-space points.

The use of the novel algorithm implemented in Ninja yields significant improvements both in the accuracy of the results and in the reduction of the computational time, due to a more efficient reduction and to less frequent calls to the rescue system. These features make GoSam+Ninja an extremely competitive framework for massive, as well as massless, calculations. The new library has been recently used in the evaluation of NLO QCD corrections to $pp \to t\bar{t}Hj$ [45].

5 Applications to massive amplitudes

In order to demonstrate the performance of the new reduction algorithm, we apply GoSam+Ninja to a collection of processes involving six, seven and eight external particles. We choose processes where massive particles appear among the products of the reactions or run in the loop. We list them in table 2 and give the details of their calculation in the following subsections: for each process we provide results for a phase-space point and a detailed list of the input parameters employed. While some of the considered processes have already been studied in the literature, the virtual NLO QCD contributions to $pp \to Wb\bar{b} + n$ jets (n = 1, 2), $pp \to Zb\bar{b}j$, $pp \to Zt\bar{t}j$, $pp \to VVVj$ (with V = W, Z), $pp \to ZZZZ$, and $pp \to H + n$ jets (n = 4, 5) in VBF are presented in this paper for the first time. When occurring in the final state, the bottom quark is treated as massive.

For calculations which were already performed with previous versions of the GoSam framework, we observe that the new reduction technique yields a significant net gain both in computing time and in accuracy. A paradigmatic example is represented by $gg \to Hggg$, whose evaluation per phase-space point required approximately 20 seconds, as reported in [44], while it can now be computed in less than 10 seconds.

In the following, for each of the considered scattering amplitudes, we provide a benchmark phase-space point for the most involved subprocesses and, when possible, a comparison with results available in the literature. The coefficients $a_i$ which appear in the various tables are defined from the Laurent expansion, in the dimensional regulator, of the interference term $2\,\mathrm{Re}\{\mathcal{M}^{\text{tree-level}\,*}\,\mathcal{M}^{\text{one-loop}}\}$: $a_2$ and $a_1$ multiply the double and single pole, respectively, while the finite part $a_0$ is computed in the dimensional reduction scheme if not stated otherwise.
The reconstruction of the renormalized pole can be checked against the values of $a_1$ and $a_2$ obtained from the universal singular behavior of the dimensionally regularized one-loop amplitudes [99], while the precision of the finite parts is estimated by re-evaluating the amplitudes for a set of momenta rotated by an arbitrary angle about the axis of collision, as described in section 4. The accuracy of the results obtained with GoSam+Ninja is indicated by the underlined digits.

p p → W + 3 jets

Partonic process: $d\bar{u} \to e^- \bar{\nu}_e\, g g g$. The finite part for this process is given in the conventional dimensional regularization (CDR) scheme and was compared to the new version of NJet [16]. We also find agreement in the phase-space point given by the BlackHat collaboration [22].

Phase-space point (E, px, py, pz):
p1   500.0000000000000000     0.0000000000000000     0.0000000000000000   500.0000000000000000
p2   500.0000000000000000     0.0000000000000000     0.0000000000000000  -500.0000000000000000
p3   414.1300683745429865   232.1455649459389861   332.7544367808189918   -82.9857518524426041
p4    91.8751521026383955   -43.3570226791010995   -24.0058236140056991    77.3623460793434958
p5    86.3540681437814044   -15.2133893202618005    37.6335512949163018   -76.2187226821854011
p6   280.1181818093759830   -83.1261116505822031  -263.2038567586509998    47.7490851160265990
p7   127.5225295696610033   -90.4490412959934957   -83.1783077030789002    34.0930433392580028

Results, compared with ref. [16]:
a0   -91.1772093904611438   -91.17720939055536
a1   -57.6891244440692361   -57.68912444409629
a2   -11.6666666666668277   -11.66666666666660

p p → Z + 3 jets

Partonic process: $d\bar{d} \to e^+ e^-\, g g g$. The finite part for this process is given in CDR and was compared to the new version of NJet [16]. We also find agreement in the phase-space point given by the BlackHat collaboration [25].

Results, compared with ref. [16]:
a0   -91.0463291277814761   -91.04632919538757
a1   -57.6876717480941892   -57.68767174883881
a2   -11.6666666666669485   -11.66666666666667

p p → Z Z Z + 1 jet

Partonic process: $u\bar{u} \to ZZZg$.
a0   -18.2274687669522883
a1   -22.3832348831861125
a2    -5.6666666666670755

p p → W W Z + 1 jet

Partonic process: $u\bar{u} \to W^+W^-Zg$.
a0   -18.0050305906438837
a1   -22.1025452011781987
a2    -5.6666666666666661

p p → W Z Z + 1 jet

Partonic process: $u\bar{d} \to W^+ZZg$.
a0   -16.6719638614981740
a1   -22.1957678011010735
a2    -5.6666666666670213

p p → W W W + 1 jet

Partonic process: $u\bar{d} \to W^+W^-W^+g$.
a0   -15.8859769338002099
a1   -21.9889128719035618
a2    -5.6666666666864467

p p → Z Z Z Z

Partonic process: $u\bar{u} \to ZZZZ$.
a0    10.0010268560339206
a1    -3.9999999999613696
a2    -2.6666666666665884

p p → W W W W

Partonic process: $u\bar{u} \to W^+W^-W^+W^-$.
a0     7.8556327396245011
a1    -3.9999999999981126
a2    -2.6666666666669747

$pp \to t\bar{t}b\bar{b}$

Partonic process: $d\bar{d} \to t\bar{t}b\bar{b}$.
Parameters: 4.2 GeV, 4, 500.0 GeV.
a0   154.6475673006605973
a1     2.7409050914577211
a2    -2.6666666666666683

Partonic process: $gg \to t\bar{t}b\bar{b}$.

Phase-space point (E, px, py, pz):
p1   250.0000000000000000     0.0000000000000000     0.0000000000000000   250.0000000000000000
p2   250.0000000000000000     0.0000000000000000     0.0000000000000000  -250.0000000000000000
p3   194.0670578462199387   -60.6594324624948058   -47.3641590248774236   -49.2915067154802884
p4   172.4499944124030151   -15.6689752760792977    -7.1446619651393677   -11.5324581958636152
p5    61.9093840678718479    12.0715208460656545    23.6785835371366993    55.7560301833003820
p6    71.5735636735052054    64.2568868925084331    30.8302374528801408     5.0679347280435243

Parameters: 4.2 GeV, 4, 500.0 GeV.
a0   165.1250038552732349
a1    -3.4472725930136989
a2    -6.0000000000001563

$pp \to t\bar{t}$ + 2 jets

Partonic process: $gg \to t\bar{t}gg$. The finite part for this process is given in CDR and was compared with ref. [29].
Results, compared with ref. [29]:
a0   -93.9825428068626394   -93.98254280655584
a1    47.0991735298819236    47.0991735300671
a2   -11.9999999999947171   -11.999999999999874

$pp \to Zb\bar{b}$ + 1 jet

Partonic process: $ug \to u\, e^+e^-\, b\bar{b}$.
Parameters: 4.2 GeV, 4, 500.0 GeV.
a0   145.3708954680396630
a1    -8.3679512693708471
a2    -5.6666666666675392

$pp \to Wb\bar{b}$ + 1 jet

Partonic process: $u\bar{d} \to e^+\nu_e\, b\bar{b}\, g$.
Parameters: 4.2 GeV, 4, 500.0 GeV.
a0   129.9538554864771243
a1    -5.3385683701189715
a2    -5.6666666666668695

$pp \to Wb\bar{b}$ + 2 jets

Partonic process: $u\bar{d} \to e^+\nu_e\, b\bar{b}\, d\bar{d}$.
Parameters: 4.2 GeV, 4, 500.0 GeV.
a0   148.2499564138260837
a1    -7.7272995122163879
a2    -5.3333333333331892

Partonic process: $u\bar{d} \to e^+\nu_e\, b\bar{b}\, s\bar{s}$.
Parameters: 4.2 GeV, 4, 500.0 GeV.
a0   161.1656361677729592
a1    -1.3276262260753209
a2    -5.3333333333332735

Partonic process: $u\bar{d} \to e^+\nu_e\, b\bar{b}\, gg$.

Phase-space point (E, px, py, pz):
p1   250.0000000000000000     0.0000000000000000     0.0000000000000000   250.0000000000000000
p2   250.0000000000000000     0.0000000000000000     0.0000000000000000  -250.0000000000000000
p3   118.4116290336479551    49.7302976872290259  -105.1432905838027665   -22.2058512007951201
p4     9.9876328289379046     7.4336546931814604     4.5250271451401858     4.9007873616352553
p5   113.2724870353399353    -8.8682847027615033   110.4231681698792471    23.2614225044082019
p6    50.3648230617062964   -38.8899369624844553   -31.0689344437605435    -6.4241355543198972
p7   105.2876197360011901   -28.2113391151385819    10.7916624062129962   100.8620009592996638
p8   102.6758083043666687    18.8056083999740657    10.4723673063308507  -100.3942240702281055

Parameters: 4.2 GeV, 4, 500.0 GeV.
a0    64.8935770569783301
a1   -35.9610562256753425
a2    -8.6666666666670285

$pp \to WWb\bar{b}$

Parameters: 4.2 GeV, 4, 500.0 GeV.
a0   118.3990066409585751
a1     8.5574384926230991
a2    -2.6666666666666492

Partonic process: $gg \to e^+\nu_e\, e^-\bar{\nu}_e\, b\bar{b}$.
Parameters: 4.2 GeV, 4, 500.0 GeV.
a0    27.4387494492212056
a1    -7.9555523940773458
a2    -5.9999999999999885

$pp \to WWb\bar{b}$ + 1 jet

Partonic process: $u\bar{u} \to e^+\nu_e\, e^-\bar{\nu}_e\, b\bar{b}\, g$.
a0   -38.5591246673060795
a1   -32.4828496060584442
a2    -8.3333333333334405

p p → H + 3 jets (GF)

Partonic process: $gg \to Hggg$.
a0    23.4307927578718953
a1   -56.3734964424517315
a2   -15.0000000000008757

$pp \to Zt\bar{t}$ + 1 jet

Partonic process: $u\bar{u} \to t\bar{t}\, e^+e^-\, g$.
a0   -20.4367763710913373
a1   -25.9078542815554513
a2    -5.6666666665792098

Partonic process: $gg \to t\bar{t}\, e^+e^-\, g$.

Phase-space point (E, px, py, pz):
p1   250.0000000000000000     0.0000000000000000     0.0000000000000000   250.0000000000000000
p2   250.0000000000000000     0.0000000000000000     0.0000000000000000  -250.0000000000000000
p3   174.2203895522303014   -25.0977827305029138   -19.5610151031829993     5.5472629175473589
p4   186.7123996976260685   -14.0800163072181022    56.3619207264196902   -46.6601246640355427
p5    60.3016377245591073    38.1795332240129639    22.1553968884492853    41.0822241824339116
p6    18.6184873501163182     5.2347824612577458    -1.6661313271933778   -17.7895792583830961
p7    60.1470856754682259    -4.2365166475497116   -57.2901711844925998    17.8202168224373914

a0     9.2826425323344441
a1   -26.2816094048822784
a2    -9.0000000000005702

$pp \to Ht\bar{t}$ + 1 jet

Partonic process: $gg \to Ht\bar{t}g$.
a0   -37.1842465451705166
a1   -35.9217497445515619
a2    -8.9999999999990887

p p → H + 3 jets (VBF)

Partonic process: $uu \to gHuu$ (VBF).
a0   -94.6818287259862359
a1   -40.8904107779583796
a2    -8.3333333333336821

p p → H + 4 jets (VBF)

Partonic process: $uu \to ggHuu$ (VBF).
a0   -85.2117220498222565
a1   -54.2263214854450339
a2   -11.3333333333333464

Partonic process: $uu \to uuHuu$.
a0   -36.9909931379802686
a1   -35.2029797282532968
a2    -8.0000000000000533

p p → H + 5 jets (VBF)

Partonic process: $uu \to gggHuu$ (VBF).
a0  -164.8823178520154897
a1   -81.4472715794169630
a2   -14.3333333333333570

6 Conclusions

The integrand reduction techniques have changed the way the decomposition of scattering amplitudes in terms of independent integrals is performed.
In these approaches, the coefficients which multiply each integral can be completely determined algebraically, by relying on the knowledge of the universal structure of the residues of the amplitudes at each multiple cut. The residues are irreducible polynomials in the components of the loop momenta which are not constrained by the on-shell conditions defining the cuts. The coefficients of the master integrals are a subset of the coefficients of the residues.

The generalized-unitarity strategy implemented within the integrand decomposition requires solving a triangular system, where the coefficients of the residues, hence of the master integrals, can be projected out of the cuts only after removing the contributions of higher-point residues. By adding one more ingredient to this strategy, namely the Laurent series expansion of the integrand with respect to the unconstrained components of the loop momentum, we improved the system-solving strategy, which becomes diagonal. We demonstrated that this novel reduction algorithm, now implemented in the computer code Ninja and currently interfaced to the GoSam framework, yields a very efficient and accurate evaluation of multi-particle one-loop scattering amplitudes, no matter whether massive particles run in the loop or participate in the scattering as external legs. We used GoSam+Ninja to compute NLO corrections to a set of non-trivial processes involving up to eight particles.

The level of automation reached in less than a decade by the evaluation of scattering amplitudes at next-to-leading order has been heavily stimulated by the awareness that the methods for computing the virtual contributions were simply not sufficient, while the techniques for controlling the infrared divergences and, finally, for performing the phase-space integration were already available. Nowadays the scenario has changed, and one-loop contributions to many multi-particle scattering reactions are waiting to be integrated. We hope that these advancements can stimulate the development of novel methods for computing cross sections and distributions at next-to-leading-order accuracy for high-multiplicity final states.

Acknowledgments

We thank all the other members of the GoSam project for collaboration on the common development of the code. We also would like to thank Valery Yundin for comparisons of Z + 3 jets and W + 3 jets with NJet. The work of H.v.D., G.L., P.M., and T.P. was supported by the Alexander von Humboldt Foundation, in the framework of the Sofja Kovalevskaja Award Project "Advanced Mathematical Methods for Particle Physics", endowed by the German Federal Ministry of Education and Research. The work of G.O. was supported in part by the National Science Foundation under Grant PHY-1068550. G.O. also wishes to acknowledge the kind hospitality of the Max-Planck-Institut für Physik in Munich at several stages during the completion of this project.

Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.



Hans van Deurzen, Gionata Luisoni, Pierpaolo Mastrolia, Edoardo Mirabella, Giovanni Ossola, Tiziano Peraro, "Multi-leg one-loop massive amplitudes from integrand reduction via Laurent expansion", Journal of High Energy Physics 03 (2014) 115, DOI: 10.1007/JHEP03(2014)115.