
What's new in New Expression Templates

Ashar edited this page Jun 25, 2019 · 5 revisions

What's new in Tensor Expression Templates?

The new expression templates are based on the Boost.YAP library. Switching from the traditional uBLAS expression templates to YAP brings many benefits, from ease of maintenance to the flexibility and robustness of YAP expressions. Before we discuss the new capabilities that have been introduced, let's quickly look at how a YAP expression is represented and how it works.

YAP Expression

A YAP expression is not type-strict but concept-strict: any class or struct that models the YAP expression concept can be treated as a YAP expression. The simplest possible YAP expression is:

template <boost::yap::expr_kind K, typename Tuple>
struct simple_yap_expr{
	Tuple elements;
	static const boost::yap::expr_kind kind = K;
};

The template parameter expr_kind defines the kind of operation, and Tuple is a Boost.Hana tuple that stores the operands, which may themselves be YAP expressions. In other words, expr_kind is the operator and the tuple elements are the operands. For more information, please check the Boost.YAP documentation.
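To illustrate this concept-based design without depending on Boost, here is a hypothetical standalone sketch, using a std::tuple in place of the Hana tuple. All names (simple_expr, term, plus, evaluate) are illustrative and are not part of Boost.YAP or uBLAS.

```cpp
#include <tuple>

// Illustrative stand-in for boost::yap::expr_kind.
enum class expr_kind { terminal, plus };

// An "expression" is any struct exposing a kind and an elements tuple,
// mirroring the concept-based design described above.
template <expr_kind K, typename Tuple>
struct simple_expr {
    Tuple elements;
    static constexpr expr_kind kind = K;
};

// Wrap a value in a terminal node.
template <typename T>
auto term(T v) {
    return simple_expr<expr_kind::terminal, std::tuple<T>>{{v}};
}

// Build a plus node whose operands are two sub-expressions.
template <typename L, typename R>
auto plus(L l, R r) {
    return simple_expr<expr_kind::plus, std::tuple<L, R>>{{l, r}};
}

// Recursively evaluate: terminals yield their value, plus nodes add children.
template <expr_kind K, typename Tuple>
auto evaluate(simple_expr<K, Tuple> const& e) {
    if constexpr (K == expr_kind::terminal)
        return std::get<0>(e.elements);
    else
        return evaluate(std::get<0>(e.elements)) + evaluate(std::get<1>(e.elements));
}
```

Nothing happens when the tree is built; the work is done only when evaluate walks it, which is the essence of lazy expression templates.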

The New tensor_expression

I have replaced the old boost::numeric::ublas::tensor_expression with a new expression template in boost::numeric::ublas::detail::tensor_expression. A minimalist overview of the expression class is as follows:

template <boost::yap::expr_kind K, typename Tuple>
struct tensor_expression{
	Tuple elements;
	static const boost::yap::expr_kind kind = K;
	
	template <class T, class F, class A>
	auto eval();
	
	template <class T, class F, class A>
	void eval_to(tensor<T,F,A> &out);
	
	operator bool();
};

Using the Boost.YAP user macros, I have overloaded all the relational, assignment and arithmetic operators. The full list of operators is:

+ - * / == != > < >= <=

All these operators can be used with any operand types, including custom scalar types, provided that at least one operand is a tensor_expression or a tensor. All of them are lazy and element-wise. We also have eager assignment operators such as +=, -=, *= and /=, as well as eager casting operators such as tensor_static_cast<..>(..), tensor_dynamic_cast<..>(..) and tensor_reinterpret_cast<..>(..), which cast a tensor element-wise to another value type while keeping the layout the same.

Note: Casting a tensor produces a tensor with the same layout but a different array_type, which is always std::vector<>.
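To make the effect of an element-wise cast concrete, here is a hypothetical sketch using a plain std::vector as a stand-in for the tensor's storage. The function below models only what the cast does to the elements; it is not the uBLAS implementation.

```cpp
#include <vector>
#include <algorithm>

// Illustrative model of an element-wise cast: every element is converted to
// the target type; the element order (layout) is left unchanged.
template <typename To, typename From>
std::vector<To> elementwise_cast(std::vector<From> const& in) {
    std::vector<To> out(in.size());
    std::transform(in.begin(), in.end(), out.begin(),
                   [](From v) { return static_cast<To>(v); });
    return out;
}
```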

Behaviour of tensor_expression

The new expression template closely mimics the standard C++ behaviour of integral conversions and type promotions. A simple way to think about this is that any expression expands to its true form. Let me give an example:

Say we build an expression to add two tensors of different data-types. For simplicity, we are skipping other tensor template parameters.

term denotes a terminal node; in this case, one that holds a reference to a tensor.

expr<+>
	term<tensor<int> &>
	term<tensor<float> &>

The code that makes this expression could be

tensor<int> t1{shape{5,5}, 4};
tensor<float> t2{shape{5,5}, 4.2};

auto expr = t1 + t2;

When we lazily evaluate the expression element-wise, the following happens:

expr[0] = t1[0] + t2[0];
expr[6] = t1[6] + t2[6];

Standard C++ can of course add a float to an int, and the result of each element-wise addition is a float. The resulting value is truncated if the destination value type is int, and kept as-is if it is float.

See,

tensor<int> a = expr;
tensor<float> b = expr;

std::cout<<a.at(0)<<" and "<<b.at(0)<<"\n";

Prints 8 and 8.2

Notice how the same expression yields two different types. This is exactly the behaviour we expect when writing the equivalent scalar expression:

int a = 4 + 4.2; // a is 8
float b = 4 + 4.2; // b is 8.2

We follow this rule everywhere. It is in fact the default behaviour of the Boost.YAP operator overloads, and imposing restrictions on it would be a lot of work.

Initially, this behaviour may look like a source of bugs. In reality, a numeric computation library must follow the standard rules of numeric computing; it is up to the end-user how they use this freedom.

NOTE: If a relational expression contains more than one relational operator, a runtime exception is thrown.

Handling Relational Expressions

Relational expressions are those that contain at least one lazy relational operator. Say:

auto expr = ((t1 + t2) == (t1 * 2));

Such an expr can only be implicitly converted to a boolean, or written directly inside an if statement.

bool res = expr; // implicitly converts to a bool. This causes element-wise evaluation.

if(t1 + t2 == 2){ // element-wise evaluation. Checks if every element is 2
    // ... do something magical here.
}

if(t1 + t2){ // throws std::runtime_error. Cannot convert to boolean.
    // ... BOOM!!
}

Note: The evaluation of a relational expression stops as soon as any index fails the predicate. Evaluating, or trying to convert to bool, an expression that does not contain a relational operator throws a run-time error.
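The short-circuit behaviour can be sketched as follows, again with plain std::vector stand-ins rather than real tensors; all_elementwise is an illustrative name, not a uBLAS function.

```cpp
#include <vector>
#include <cstddef>

// Walk two containers element-wise and stop at the first index where the
// predicate fails, mirroring how a lazy relational expression collapses
// to a single bool.
template <typename T, typename Pred>
bool all_elementwise(std::vector<T> const& a, std::vector<T> const& b, Pred pred) {
    if (a.size() != b.size()) return false;
    for (std::size_t i = 0; i < a.size(); ++i)
        if (!pred(a[i], b[i]))      // first failing index ends the evaluation
            return false;
    return true;
}
```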

How are expression operands treated?

See the Boost.YAP documentation for more information about how expression operands are treated. In short, operands are moved if they are r-values and captured by (possibly const) reference otherwise. Capturing l-values without const-ness is useful for transforms that may modify the terminal nodes or operands, although I capture them by const reference to be on the safe side.

Integration with Matrix and Vectors

We have all the operator overloads for vector & tensor_expression and matrix & tensor_expression. If an expression contains at least one tensor, the whole expression is a tensor_expression; the matrix or vector is wrapped into a terminal node of the expression.

Say,

tensor<int> t{shape{4,4}, 0};
matrix<int> m{4,4,1};

auto expr = (t + 2)*m;

The expression tree becomes:

expr<*>
	term<matrix<int> &>
	expr<+>
		term<int>[2]
		term<tensor<int> &>

Notice how we have captured the matrix in a terminal node of the YAP expression. When evaluating, we handle the matrix terminal accordingly, and the whole expression is evaluated fully lazily.

Note: In the case of a matrix and tensor operation, only rank-2 tensors can be used. Also, a vector of size, say, 10 is expanded into a rank-2 tensor of shape {10,1}. Therefore, adding a tensor of shape {10,1} and a vector of size 10 is fine, since the vector is treated as a tensor of shape {10,1} in the expression tree. However, the same operation with a tensor of shape {1,10} is not possible and throws a run-time error.
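The shape rule above can be sketched as a small compatibility check. The helpers below are illustrative only (shapes are modelled as vectors of extents) and are not part of uBLAS.

```cpp
#include <vector>
#include <cstddef>

// A vector of size n is treated as a rank-2 tensor of shape {n, 1}.
std::vector<std::size_t> vector_as_tensor_shape(std::size_t n) {
    return {n, 1};
}

// Element-wise operations require identical extents.
bool shapes_compatible(std::vector<std::size_t> const& a,
                       std::vector<std::size_t> const& b) {
    return a == b;
}
```

So a vector of size 10 is compatible with a {10,1} tensor but not with a {1,10} one, matching the rule stated in the note.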

Integration with Matrix and Vector Expression

The tensor_expression is fully compatible with uBLAS matrix and vector expressions as well. We have a transform that acts as an intermediary between the two kinds of expression. Consider the following code:

matrix<int> m1{4,4,4}, m2{4,4,-4};
tensor<int> t1{shape{4,4}, 8}, t2{shape{4,4}, 8};

auto mat_expr = m1 + m2;
auto mixed_expr = mat_expr + (t1 + t2);

mixed_expr is a tensor_expression.

Any expression that has at least one tensor operand becomes a tensor_expression.

Let's see the expression tree.

expr<+>
	term< matrix_binary_expr<....> &>
	expr<+>
		term<tensor<int> &>
		term<tensor<int> &>

The matrix or vector expression is wrapped into a terminal. It is not evaluated at this point; the complete expression is simply captured. We have internal transforms that recognize that a terminal holds a vector or matrix expression and accordingly evaluate it lazily, element-wise.

Note: We evaluate the matrix_expression inside the tensor_expression lazily, element-wise. If only the value at index 0 is required, we first evaluate the matrix expression at index 0 only, then continue with the YAP evaluation. All of this is handled by internal transforms.
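The per-index strategy can be sketched with lambdas over plain std::vector stand-ins (not the real uBLAS types or transforms): the matrix sub-expression is captured as a callable, so asking for index i computes only that one element of it.

```cpp
#include <vector>
#include <cstddef>

// Stand-in data mirrors the example above: m1 holds 4s, m2 holds -4s,
// and the two tensors hold 8s, all flattened into length-16 vectors.
int demo_mixed_at(std::size_t i) {
    std::vector<int> m1(16, 4), m2(16, -4);   // stand-ins for the 4x4 matrices
    std::vector<int> t1(16, 8), t2(16, 8);    // stand-ins for the 4x4 tensors

    // Lazy m1 + m2: nothing is computed until an index is requested.
    auto mat_expr = [&](std::size_t j) { return m1[j] + m2[j]; };
    // Mixed expression: matrix sub-expression plus the tensor elements.
    auto mixed = [&](std::size_t j) { return mat_expr(j) + t1[j] + t2[j]; };

    return mixed(i);   // only index i of the matrix expression is evaluated
}
```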

In the above code, mat_expr is a matrix expression, since it does not involve any tensor operands. This makes our expression templates backwards compatible, as we have changed absolutely nothing in the old expression templates.
