pytaco.tensordot

pytaco.tensordot(t1, t2, axes=2, out_format=mode_format(compressed), dtype=None)

Compute the tensor dot product along the specified axes for tensors of order >= 1.

Given two tensors and an iterable containing two iterables (t1_axes, t2_axes), sum the products of the elements of t1 and t2 over the axes specified by t1_axes and t2_axes. Alternatively, the third argument (axes) can be a non-negative integer N, in which case the last N dimensions of t1 and the first N dimensions of t2 are summed over.
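For instance, with the integer form of axes (a minimal sketch, not taken from the reference example below; it assumes array_like inputs as described under Parameters), axes=1 reduces to an ordinary matrix product:

>>> import pytaco as pt
>>> import numpy as np
>>> a = np.arange(6.).reshape(2, 3)
>>> b = np.arange(12.).reshape(3, 4)
>>> mm = pt.tensordot(a, b, axes=1)  # sum over the last axis of a and the first axis of b
>>> mm.shape
[2, 4]
>>> mm.to_array()
array([[20., 23., 26., 29.],
       [56., 68., 80., 92.]])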

Parameters
t1, t2: tensor, array_like, dimensions >= 1

The two tensors we would like to dot.

axes: int or array_like
  • If an integer N, sum over the last N axes of t1 and the first N axes of t2. The sizes of the corresponding dimensions must match.

  • If array_like, it must contain two iterables of axes to sum over: the first iterable lists the axes of t1 and the second lists the axes of t2.

out_format: format, mode_format, optional
  • If a format is specified, the result tensor is stored in the format out_format.

  • If a mode_format is specified, all dimensions of the result tensor are stored using the mode_format passed in.

dtype: Datatype, optional

The datatype of the output tensor.

Returns
result: tensor

The contraction of the two tensors passed in based on the axes given.

Notes

When there is more than one axis to sum over, and they are not simply the last axes of t1 and the first axes of t2, the axes argument should consist of two sequences of the same length, with the first axis to sum over given first in both sequences, the second axis second, and so forth.

As with all expressions, some computations on sparse tensors cannot be performed without explicitly transposing, because taco is unable to simultaneously iterate over both tensors. In this case, an error is thrown asking the user to transpose one of the tensors.

Examples

This example shows passing axes as a pair of lists. We ‘dot’ the 1st axis of t1 with the 0th axis of t2 and the 0th axis of t1 with the 1st axis of t2.

>>> import pytaco as pt
>>> from pytaco import compressed, dense
>>> import numpy as np
>>> from scipy.sparse import csr_matrix, csc_matrix
>>> t1 = np.triu(np.arange(60.).reshape(3,4,5))
>>> t2 = np.triu(np.arange(24.).reshape(4,3,2))
>>> res = pt.tensordot(t1,t2, axes=([1,0],[0,1]))
>>> res.shape
[5, 2]
>>> res.to_array()
array([[   0.,   60.],
       [  36.,  340.],
       [ 186.,  996.],
       [ 528., 2184.],
       [ 564., 2272.]])
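
The out_format argument can also be given as a full format rather than a single mode_format (a sketch reusing t1 and t2 from above; the CSR-like format here is only illustrative):

>>> csr_like = pt.format([dense, compressed])
>>> res_csr = pt.tensordot(t1, t2, axes=([1, 0], [0, 1]), out_format=csr_like)
>>> res_csr.shape
[5, 2]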

We can use sparse tensors to do the same thing:
>>> fmt1 = pt.format([compressed, dense, compressed])
>>> fmt2 = pt.format([compressed, dense, compressed], [1, 0, 2])
>>> t3 = pt.remove_explicit_zeros(t1, fmt1)
>>> t4 = pt.remove_explicit_zeros(t2, fmt2)
>>> res2 = pt.tensordot(t3,t4, axes=([1, 0], [0, 1]), out_format=dense)
>>> res2.to_array()
array([[   0.,   60.],
       [  36.,  340.],
       [ 186.,  996.],
       [ 528., 2184.],
       [ 564., 2272.]])

Note that in the sparse example above, we had to switch the mode ordering in order to allow taco to perform the computation. In general, when dealing with sparse structures, assuming our axes are in the form (a_axes, b_axes), we want to ensure that a_axes[i] == b_axes[mode_ordering[i]]. For the curious, we are currently exploring ways to lift this restriction.

Also note that the above example is equivalent to writing Res[k, l] = t3[j, i, k] * t4[i, j, l] in index notation.
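
As a sketch of that index notation written directly with pytaco's index expressions (this assumes pt.get_index_vars, explicit construction of the result tensor, and implicit reduction over index variables that do not appear on the left-hand side; it is illustrative rather than part of the tensordot API):

>>> i, j, k, l = pt.get_index_vars(4)
>>> manual = pt.tensor([t3.shape[2], t4.shape[2]], dense, dtype=pt.float64)
>>> manual[k, l] = t3[j, i, k] * t4[i, j, l]  # i and j are summed over implicitly
>>> np.array_equal(manual.to_array(), res2.to_array())
True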