Tensor contraction: definition and first examples

In multilinear algebra, a tensor contraction is an operation on a tensor that arises from the canonical pairing of a vector space $V$ and its dual $V^*$. In general, a tensor of type $(m, n)$ (with $m \geq 1$ and $n \geq 1$) is an element of the tensor product of $m$ copies of $V$ and $n$ copies of $V^*$; contracting one contravariant index against one covariant index yields a tensor of type $(m-1, n-1)$. A contraction can equivalently be thought of as a tensor product followed by a trace. This is what the trace of a matrix is, if you write things down in coordinates: contracting the two indices of a type-$(1,1)$ tensor, one belonging to a vector slot and one to a covector slot, produces a scalar.

In index notation, a contraction is indicated by a repeated index, one upper and one lower, which is summed over. The simplest two-tensor example is

$\Lambda^\alpha{}_\beta \, M^\beta{}_\gamma = N^\alpha{}_\gamma,$

which contracts the lower index of $\Lambda$ against the upper index of $M$ and is nothing but matrix multiplication; indeed, if we know what matrix multiplication means and generalise it, we know what tensor contraction means in the more general setting. Differential geometry supplies richer examples: the Ricci tensor is a non-metric contraction of the Riemann curvature tensor, and the scalar curvature is the unique metric contraction of the Ricci tensor. Two reminders about types are useful here. First, a bilinear form is the same thing as a $(0, 2)$-tensor; an inner product is an example of a $(0, 2)$-tensor, but not all $(0, 2)$-tensors are inner products. Second, a symmetric tensor is invariant under an interchange of its indices.

Basic rules of tensor contraction in special relativity: let $U^\mu$ and $V^\nu$ be Lorentz vectors and let $g_{\mu\nu}$ be the metric tensor. Pay attention to the locations of the indices when doing contractions (an upper index may only be summed against a lower one, with the metric used to raise or lower indices as needed), although it is not essential to know what the different locations mean in order to carry out the calculations. The full set of rules for manipulating and calculating with tensors is developed in standard references such as Taha Sochi's Introduction to Tensor Calculus (University College London, 2016).
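The two basic examples above, the trace and matrix multiplication, are easy to check numerically. Below is a minimal NumPy sketch (the array names are illustrative):

import numpy as np

Lam = np.random.rand(4, 4)  # components of Lambda^a_b
M = np.random.rand(4, 4)    # components of M^b_g

# Contracting the two indices of a (1,1)-tensor gives its trace.
trace = np.einsum('aa->', Lam)
assert np.isclose(trace, np.trace(Lam))

# Contracting Lambda^a_b against M^b_g over the repeated index b
# is exactly matrix multiplication.
N = np.einsum('ab,bg->ag', Lam, M)
assert np.allclose(N, Lam @ M)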
Contractions in diagram notation

In the tensor network literature, the most common tensor operation is likewise contraction, understood there as the joining of legs from different tensors; the contraction between two pairs of indices of two rank-three tensors, for example, is drawn by connecting the two corresponding pairs of legs. One of the main benefits of diagram notation is that it frees one from having to name every index in a complicated tensor network. The simplest cases reduce to familiar operations: if A is an N x M matrix and B an M-vector, the contraction C of A and B is just np.dot(A, B). Tutorials for contracting tensor networks present worked example codes in three programming languages common for scientific computation: MATLAB (R2016b onwards), Julia (v1.1 onwards) and Python (v3 onwards).

For a worked diagrammatic example, consider a tensor A with dimensions 2 x 3 x 2 appearing in two small networks. In the first, joining a single pair of legs between A and a second tensor B is equivalent to a matrix multiplication between (suitably reshaped) matrices A and B; in the second, a rank-3 tensor D is produced via the contraction of a network with three tensors. Even in this relatively simple example, the diagrammatic notation is already easier to interpret than the corresponding index expressions.
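To make the three-tensor network concrete, here is a NumPy sketch (shapes and index names are illustrative, chosen to match the 2 x 3 x 2 tensor above); the shared legs j and k are summed away, and the free legs survive in the rank-3 result D:

import numpy as np

A = np.random.rand(2, 3, 2)  # legs (i, j, a); j is shared with B
B = np.random.rand(3, 4)     # legs (j, k); k is shared with C
C = np.random.rand(4, 5)     # legs (k, b)

# Contract the whole three-tensor network in a single call.
D = np.einsum('ija,jk,kb->iab', A, B, C)
print(D.shape)  # (2, 2, 5): a rank-3 tensor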
Contraction in multilinear algebra

Tensor network algorithms are usefully understood from the perspective of multi-linear algebra (MLA). Tensors are classified according to their type $(n, m)$, where $n$ is the number of contravariant indices, $m$ is the number of covariant indices, and $n + m$ gives the total order of the tensor. Such objects arise naturally in machine learning: tensors offer a natural representation for many kinds of data, and images, for example, are naturally represented as third-order tensors whose modes correspond to height, width, and channels. Tensor methods are noted for their ability to discover multi-dimensional dependencies, and tensor decompositions in particular have been used to learn latent variable models; in topic modeling, with $h$ the proportion of topics in a document and $A$ the topic-word matrix, the third-order moment from which the model is learned is evaluated by tensor contraction.

Given a tensor of order $n$ and a chosen pair of its indices, the contraction is the tensor of order $n - 2$ obtained by summing over that pair; the definition is best illustrated with the help of some simple examples.

- Outer product: when two tensors are multiplied together without contraction, the resulting tensor is called an outer product; for instance, a second-order tensor can be the outer product of two first-order tensors.
- Trace: the contraction of a rank-2 tensor (matrix) produces its (scalar) trace, $\mathrm{Tr}[A] = A^i{}_i$. Given a second-rank tensor, this is the only possible contraction, and you obtain it by associating to each basis the trace of the matrix representation.
- Inner product: an inner product of two tensors is their outer product followed by one or several contractions (not all contractions start with an outer product, however).

Diagrammatically, a tensor network might consist of four tensor nodes and five edges, with edges $a$, $c$, $d$ closed (contracted) and $b$, $e$ open (free); a sequential contraction order repeatedly merges two tensors into one. The foundation of all tensor network routines is exactly this: the contraction of a network containing multiple tensors into a single tensor. Contraction reduces the rank of a tensor by pairing indices, effectively summing over them, and it generalizes the matrix dot product by summing over several axes of the tensors at once.

Computationally, contraction is a major kernel. A multiplication of matrices with dimensions $m \times k$ and $k \times n$ can be expressed as a tensor contraction, and the general family includes (i) tensor-tensor contraction (TTC), (ii) tensor-matrix contraction (TMC), and (iii) tensor-vector contraction (TVC), all of them core components of widely used tensor algorithms such as the Khatri-Rao product [2] and the higher-order orthogonal iteration (HOOI). Each contraction may be executed via matrix multiplication on a properly ordered and structured tensor, so tensor contraction is a multi-dimensional generalization of GEMM; while Strassen's algorithm for GEMM is well studied in theory and practice, extending it to accelerate tensor contraction had not previously been pursued. Storage also grows quickly: a rank-$k$ tensor with index ranges $\{0, 1\}$ requires $O(2^k)$ storage, and the cost of a contraction, i.e. the total number of scalar operations required to perform it, depends heavily on how the computation is organised, a point we return to below.

A worked index example combines the outer product with a contraction: forming $A^{jl}{}_{i}\, B^{op}{}_{mn} = C^{jlop}{}_{imn}$ and then setting $p = n$ (summing over the repeated index) gives

$C^{jlon}{}_{imn} = D^{jlo}{}_{im}.$
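This worked example can be reproduced with einsum. The sketch below (shapes are illustrative) forms $D^{jlo}{}_{im}$ directly, without materializing the large intermediate outer product $C$:

import numpy as np

A = np.random.rand(2, 3, 4)     # A^{jl}_i stored with axes (i, j, l)
B = np.random.rand(5, 6, 7, 6)  # B^{op}_{mn} stored with axes (m, n, o, p)

# Setting p = n repeats the label n inside B, so einsum takes the diagonal
# over those two axes and, since n is absent from the output, sums it out.
D = np.einsum('ijl,mnon->imjlo', A, B)
print(D.shape)  # (2, 5, 3, 4, 7), i.e. D^{jlo}_{im} with axes (i, m, j, l, o)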
Tensor contraction is a nuanced operation employed across physics, engineering, and computer science, and a few conceptual points are worth making precise before turning to software. First, contraction is properly understood as an operation within a tensor, not between a tensor and another tensor: to contract two tensors, one first forms their tensor product and then contracts a pair of slots, one vector slot against one covector slot. What some authors call "trace" and "product" are precisely contraction in this sense, and talk of "contracting a vector with a matrix" is just an artifact of first changing types so that we get a $(1,1)$-tensor and then evaluating. Second, viewing a tensor as a multilinear map, we may plug a 1-form $w$ and a vector $v$ into the appropriate slots; aside from dimensionality and slot variance (vector or covector), there are no restrictions on what may be inserted. Third, the Kronecker delta here is really the identity map $V \to V$, equivalently an element of $V^* \otimes V$, whose contraction is the trace. Finally, it is natural to consider tensor products of copies of the tangent and cotangent spaces, leading to higher-order tensors, and for a general tensor one may come across the notion of a metric contraction: for a finite-dimensional inner product space with orthonormal basis $e_1, \dots, e_k$, the metric contraction of a $(0,2)$-tensor $T$ is $\sum_i T(e_i, e_i)$.

In the general case, the cost of any single tensor contraction is exponential in the number of dimensions of its inputs, as is the size of its output [29]. In fact, most common tensor network algorithms are simply composed of some combination of contraction steps together with some combination of tensor decompositions; analogous to how the serious business of matrix decompositions builds upon matrix multiplication, tensor contractions have a foundational role to play in tensor computations, and once contraction is mastered we have already learned the foundational tools of tensor network methods. [Figure 1 of one GPU-oriented paper illustrates the computational setting: left, an example of a 4th-order tensor resulting from the tile matrix layout used in dense linear algebra, a tensor contraction, and a possible tensor contractions design using Einstein summation notation and a domain-specific embedded language (DSEL); right, an illustration of the tensor contractions needed in machine learning and viable approaches to solving them.]

Libraries expose the same semantics under different names. Tensordot (also known as tensor contraction) sums the product of elements from a and b over the indices specified by a_axes and b_axes; the lists a_axes and b_axes specify those pairs of axes along which to contract the tensors, and the action of tensordot is best understood by exploring its input parameters and examples in a bit more depth. Eigen's IndexPair stands for exactly what it says: the first index refers to a dimension of the first tensor, which is mapped to the indicated dimension of the second tensor, so that IndexPair(a, b) in A(1,2,3,4,x) * B(x,5,6,7,8,9), where A is an order-5 tensor and B an order-6 tensor, contracts the last dimension x of A (index a) against the first dimension x of B (index b). In Julia, the @tensor macro of TensorOperations.jl accepts keyword arguments that facilitate a variety of options for debugging, contraction cost, and backend selection, and experimental support for automatic differentiation has been added via reverse-mode chain rules.
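In NumPy the same pairing is written with tensordot; the sketch below (illustrative shapes) contracts the last axis of A against the first axis of B, exactly like an IndexPair(2, 0):

import numpy as np

A = np.random.rand(2, 3, 4)  # axis 2 plays the role of the shared index x
B = np.random.rand(4, 5, 6)  # axis 0 plays the role of the shared index x

# Contract axis 2 of A against axis 0 of B.
C = np.tensordot(A, B, axes=([2], [0]))
print(C.shape)  # (2, 3, 5, 6): the free axes of A followed by those of B

# Shorthands: axes=1 contracts the last axis of A with the first of B,
# and axes=2 is equivalent to axes=[[-2, -1], [0, 1]].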
Invariants and physical examples

Contraction interacts nicely with invariance. The contraction of the tensor with components $a^{i}_{j}$ is an invariant $a^{i}_{i}$, called the trace of the tensor; it is denoted by $\text{Sp}(a^{i}_{j})$ or $\text{tr}(a^{i}_{j})$. More generally, the $p$-fold contraction of a tensor that is $p$-times covariant and $p$-times contravariant is an invariant. A contraction of two tensors is the result of setting two of the indices (typically a covariant/contravariant pair) equal and performing the implied summation; the trace, for example, is a zeroth-order contraction of a second-order tensor. Contraction is commonly used to simplify complex tensor expressions and to perform operations like matrix multiplication or trace calculations.

Introductory texts such as Kundu and Cohen's Fluid Mechanics set these conventions up concretely. A rank-0 tensor is one real number, invariant under rotation; a vector $A_i$ (rank 1, with $i = 1, 2, 3$) has components that transform under rotations (simply put, an arrow does not need to start at the origin or point in any specific orientation, but it is still the same arrow with the same magnitude); and $A_{ij}$ is a rank-2 tensor. A tensor is a linear mapping of a vector onto another vector, so when you multiply a vector by a tensor, the result is another vector; the surface stress tensor, which is symmetric, $\sigma_{ij} = \sigma_{ji}$, is the standard example. (A rank-two tensor is sometimes written in vector notation as a symbol with two arrows above it, a usage best limited to symmetric tensors.) Two vectors $U$ and $V$ can also be combined via an inner product to form a new scalar, $U \cdot V = \eta$; for example, the inner product of force and velocity gives the scalar power being delivered into (or taken out of) a system, $f\,(\mathrm{N}) \cdot v\,(\mathrm{m/s}) = p\,(\mathrm{W})$.

Typical higher-order contractions include the tensor-times-matrix (TTM) operation, e.g. $t_{ijkl} = \sum_q u_{ijql} v_{qk}$; a common contraction between two high-order tensors, $t_{abij} = \sum_{p,q} u_{apiq} v_{pbqj}$; and the merging of two order-3 tensors into an order-4 tensor, $C(i_1, i_2, i_3, i_4) = \sum_k A(i_1, i_2, k) \cdot B(k, i_3, i_4)$.

Tensor network (TN) methods were developed in the context of many-body quantum systems, and TN contraction is an algorithmic method that can efficiently simulate some quantum circuits, often greatly reducing the computational cost over methods that simulate the full state. The simplest instance is a one-dimensional TN stripe, which can be "contracted" by solving an eigenvalue problem, and the same idea answers a recurring question in matrix product state (MPS) algorithms: the environment tensors $\sigma$ and $\mu$ should be chosen as the dominant fixed points of the left and right MPS transfer operators. In practice, a simple method to find the environment tensors is to first initialize random tensors and then apply the transfer operators repeatedly until convergence.
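A minimal sketch of that fixed-point iteration in NumPy follows; the MPS tensor A, the bond dimension D, and the physical dimension d are all illustrative assumptions:

import numpy as np

D, d = 4, 2
A = np.random.rand(D, d, D)   # MPS tensor with legs (left, physical, right)
sigma = np.random.rand(D, D)  # random initial guess for the left environment

for _ in range(200):
    # One application of the left transfer operator: contract sigma
    # with A and its conjugate over the shared legs.
    sigma = np.einsum('ab,aic,bid->cd', sigma, A, A.conj())
    sigma /= np.linalg.norm(sigma)  # normalize to keep the iteration stable

# sigma now approximates the dominant fixed point (the left environment).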
Contraction orderings and contraction trees

Classical simulation of quantum computation is necessary for studying the numerical behavior of quantum algorithms, as there does not yet exist a large viable quantum computer on which to perform numerical tests, and tensor network contraction is a central tool for it. One study frames contraction orderings on tensor networks for physical simulations in terms of an abstract data type, the contraction tree, and explains its connection to temporal and spatial cost measures (J. Jakes-Schauer, D. Anekstein and P. Wocjan, "Carving-width and contraction trees for tensor networks", 2019). Inspired by the classical simulation of quantum circuits, a related framework performs exact contraction of large tensor networks with arbitrary structure; its first key idea is to explicitly construct the contraction tree for a given tensor network, combining agglomerative and divisive strategies. On the software side, cotengra provides hyper-optimized contraction trees for large tensor networks and einsums, and can perform the resulting contractions with tensors from many libraries, even ones that don't provide einsum or tensordot but do have (batch) matrix multiplication.

Formally, a tensor network is a graph $G = (V, E)$ in which each node is associated with a tensor and each edge with an index of the adjacent tensor(s); edges between nodes represent shared indices that must be summed over. A typical problem instance is to contract a network of tensors $\{A, B, C\}$ to form an order-3 tensor $F$. Papers in this area first introduce the background of tensors, tensor contraction, and the problem of finding the optimal contraction sequence of a tensor network, and then build on baseline search algorithms such as breadth-first search. Useful figures of merit are the contraction cost and the contraction width. In ITensors, for example, contraction_cost(A; sequence) returns the cost of contracting the collection of ITensors according to the specified sequence, measured in the number of floating-point operations needed to contract dense tensors of the dimensions specified by the indices (sparsity is, for now, ignored in computing the costs); the contraction width is the log2 of the maximum size of any intermediate tensor. Path-finding tools report this information through objects such as opt_einsum's PathInfo and cotengra's ContractionTree. Contraction-path searchers also expose practical knobs: in JDtensorPath, target_num_slices divides the tensor network into pieces so that the contraction can run in parallel, math_repeats sets how many times to rerun the search for the best contraction path, and search_parallel controls whether the search itself runs in parallel (True uses all CPUs; an integer uses that number).
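As a small illustration of contraction-path finding, here is a sketch using opt_einsum (shapes are illustrative):

import numpy as np
import opt_einsum as oe

A = np.random.rand(8, 32)
B = np.random.rand(32, 64)
C = np.random.rand(64, 8)

# Search for a good pairwise contraction order for the chain A-B-C.
path, info = oe.contract_path('ij,jk,kl->il', A, B, C)
print(path)  # e.g. [(0, 1), (0, 1)]: which pair of operands to merge at each step
print(info)  # PathInfo: FLOP count, largest intermediate, scaling, ...

# Execute the contraction along the chosen path.
result = oe.contract('ij,jk,kl->il', A, B, C, optimize=path)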
A frequent practical question is how to find the contraction given the expressions of the tensors and the pair of indices over which it is carried out. The recipe is mechanical: in addition to extending the rank of tensor objects by forming dyadic, triadic, or $n$-adic products of tensors, we can reduce the rank of tensors by means of a process called contraction, and combining the two means multiplying components and summing over the paired index. For matrices this can be described as

$c_{ik} = \sum_j a_{ij} b_{jk}, \qquad (1)$

matrix multiplication once more, the representative example of tensor contraction. Equation (1) also has a useful interpretation: if a tensor $a_{ij}$ describes the relationship between the objects corresponding to the $i, j$ indices, and a tensor $b_{jk}$ describes those corresponding to the $j, k$ indices, then the resulting tensor describes the relationship between the objects corresponding to the $i, k$ indices. Note also that when two indices of a single tensor are equated and a summation is performed over this repeated index, we are likewise performing a contraction; when a whole expression must be contracted, doing the contraction in a single step, rather than forming each intermediate separately, often avoids bookkeeping trouble. One caution: contracting a tensor is the equivalent of obtaining the trace of a matrix. It says something important about the tensor, but it reduces it to a scalar, and hence there is a loss of information.

The pairing underlying all of this is duality. For example, the dual space to the space of velocity vectors is the cotangent space, and at each point there is a natural contraction between a tangent vector and a cotangent vector. This extends pointwise to fields: a tensor field $T$ of type $(k, l)$ on a manifold $M$ is a smooth choice of a tensor $T_x$ of type $(k, l)$ at each point $x \in M$ (in particular, a tensor field of type $(0,1)$ is just a vector field, and a tensor field $\omega$ of type $(1,0)$ is given by a covector $\omega_x$ at each point), where smoothness is interpreted in the sense that for every smooth vector field $X$ on $M$ the resulting pairing is smooth. One can also view contraction of a tensor field in the context of modules over an appropriate ring of functions on the manifold [5], or in the context of sheaves of modules. (Textbook treatments typically follow the same arc: vectors as first examples of tensors, then tensor fields and curvilinear coordinates, then tensor theory on manifolds, which provides the foundations for tensor applications in physics.)

Several symbolic and compiled tools implement contraction directly. In Mathematica, TensorContract[tensor, {}] simply returns tensor, and for a symbolic tensor the contractions in the second argument are sorted, after sorting the slots in each contraction. Fastor (a C++ library) by default deduces the return type of the output tensor from the indices of a tensor contraction automatically. SymPy is a Python library for symbolic mathematics that offers higher-level tensor operations, including tensor contraction through its tensorcontraction() function, which is capable of symbolic tensor algebra in addition to standard numerical operations.
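Here is a sketch completing SymPy's "create two symbolic matrices" example (the symbol names are illustrative):

from sympy import Array, symbols, tensorcontraction, tensorproduct

# Create two symbolic matrices.
a11, a12, a21, a22 = symbols('a11 a12 a21 a22')
b11, b12, b21, b22 = symbols('b11 b12 b21 b22')
A = Array([[a11, a12], [a21, a22]])
B = Array([[b11, b12], [b21, b22]])

# Contracting a matrix's two indices gives its trace.
trA = tensorcontraction(A, (0, 1))  # a11 + a22

# Outer product followed by a contraction of the middle indices:
# this is exactly the matrix product A*B.
AB = tensorcontraction(tensorproduct(A, B), (1, 2))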
Performance and libraries

At the performance end of the spectrum sit the dedicated libraries. cuTENSOR's documentation, for instance, includes a Getting Started guide that steps through a simple tensor contraction example; an API Reference that provides a comprehensive overview of all library routines, constants, and data types; and a subsection giving a basic overview of the API calls and features related to JIT compilation, where one computes a contraction using the same steps as in Getting Started but with a different contraction example, chosen to emphasize the benefit of JIT compilation as the number of contracted modes increases. The workflow is: initialize the cuTENSOR library handle, then create a descriptor for each tensor by providing its order, data type, and the alignment (in bytes) of the data pointer for which the descriptor is created. Precision handling adds a wrinkle: in a contraction for which all tensors are of type CUTENSOR_R_16F while the compute descriptor is CUTENSOR_COMPUTE_DESC_32F, the parallel reduction (if required for performance) would have to be performed in CUTENSOR_R_32F and thus requires auxiliary workspace. For users seeking full control over the tensor network operations and parallelization, fine-grained APIs are offered, illustrated by examples in the documentation for Network, including a complete example of a parallel implementation of tensor network contraction. In a parallel execution, dynamic load balancing of tile-level local tensor index sorting and local tensor contraction (matrix multiplication) is invoked, with each process assigned its local index sorting and contraction work dynamically under the parallel I/O scheme.

The same machinery drives quantum-circuit simulators, which perform tensor contraction of different (batches of) amplitudes; merging gates is natural in this representation, so one can contract a CX tensor directly with an H tensor, whereas a gate-based approach would have to insert an identity gate (using a tensor product). There are also instructive standalone implementations: one project, going back to a talk at the FortranCon conference 2020, demonstrates how to develop generic APIs in Fortran by making use of polymorphism and generic programming techniques; that specific example implements a tensor contraction / Einstein summation API.

A practical note on NumPy: it sometimes seems that the manual way of doing tensor contractions, via explicit matrix multiplication, is much faster than using numpy's einsum function. This works because tensor contractions can be reduced to products of matrices by transposing and reshaping; memory considerations aside, a multi-tensor contraction can still be done in a single call to einsum with some preprocessing. Keep the index conventions in mind: for einsum with one or two tensors, the resulting output tensor is ordered using the free (non-contracting) indices in the order that they appear in the contraction, unless an explicit output is given, as in C = np.einsum("abcd,amcn->mbnd", A, B).
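The reduction to matrix multiplication can be seen concretely; the sketch below (illustrative shapes) computes the contraction above both ways and checks that they agree:

import numpy as np

A = np.random.rand(3, 4, 5, 6)  # indices a, b, c, d
B = np.random.rand(3, 7, 5, 8)  # indices a, m, c, n

# einsum version: contract a and c, order the output as (m, b, n, d).
C1 = np.einsum('abcd,amcn->mbnd', A, B)

# Manual version: group the contracted indices (a, c) together, flatten,
# and let a single matrix multiplication perform the summation.
A2 = A.transpose(1, 3, 0, 2).reshape(4 * 6, 3 * 5)        # (b d) x (a c)
B2 = B.transpose(0, 2, 1, 3).reshape(3 * 5, 7 * 8)        # (a c) x (m n)
C2 = (A2 @ B2).reshape(4, 6, 7, 8).transpose(2, 0, 3, 1)  # -> (m, b, n, d)

assert np.allclose(C1, C2)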
Conventions, a chemistry example, and references

Two further sets of conventions are worth recording. For np.tensordot, axes = 0 gives the tensor product $a \otimes b$; axes = 1 gives the tensor dot product $a \cdot b$; and axes = 2 (the default) gives the tensor double contraction $a : b$, which is equal to axes = [[-2, -1], [0, 1]]. When axes is integer-like $N$, the sequence of axes for evaluation runs from the $-N$th axis to the $-1$th axis in a, and from the 0th axis to the $(N-1)$th axis in b. PyTorch spells the same operation with Einstein notation, e.g. torch.einsum('ijmn,kmln->ijkl', [a, b]). Note the division of labour in such interfaces: tensor contraction is declarative, with no explicit scheduling for computation and no explicit memory layout for data, whereas a "low-level einsum" primitive is tensor contraction plus explicit memory layout plus explicit scheduling. On terminology: within these tutorials, the order of a tensor is defined as the number of indices it has, while the rank (or decomposition rank) of a tensor with respect to some partition of the indices (for tensors with order > 2) specifies the minimum dimension of an exact factorization on this partition.

Tensor contraction is also the computational core of quantum chemistry. Precise calculation of molecular electronic wavefunctions by methods such as coupled-cluster requires the computation of tensor contractions, the cost of which has polynomial computational scaling with respect to the system and basis set sizes. A representative NWChem TCE input for a CCSD calculation (e.g. the file h2o_dimer_ccsd_aug_cc_pvdz.nw):

scf
  thresh 1.0e-10
  tol2e 1.0e-10
  singlet
  rhf
end
tce
  freeze atomic
  ccsd
  maxiter 100    # maximum number of iterations
  tilesize 15
  diis 5         # length of the DIIS cycle (default value 40)
  thresh 1.0d-5  # convergence threshold
  2eorb
  2emet 13
  attilesize 40
end
task tce energy

References:
[3] Yang Shi et al., "Tensor Contractions with Extended BLAS Kernels on CPU and GPU" (2016).
[4] Antti-Pekka Hynninen et al., "cuTT: A High-Performance Tensor Transpose Library for CUDA Compatible GPUs" (2017).