# [OpenFOAM] Tensor Operation Basics and Code Supplements


# I. Theoretical Foundations

## 1. Scalars and Vectors¹

### 1.1 Notation conventions

\mathbf{a}=\left(\begin{array}{l}
a_{x} \\
a_{y} \\
a_{z}
\end{array}\right)=\left(\begin{array}{l}
a_{1} \\
a_{2} \\
a_{3}
\end{array}\right)

\mathbf{b}=\left(\begin{array}{l}
b_{x} \\
b_{y} \\
b_{z}
\end{array}\right)=\left(\begin{array}{l}
b_{1} \\
b_{2} \\
b_{3}
\end{array}\right)

\mathbf{T}=\left[\begin{array}{lll}
T_{x x}&T_{x y}&T_{x z} \\
T_{y x}&T_{y y}&T_{y z} \\
T_{z x}&T_{z y}&T_{z z}
\end{array}\right]=\left[\begin{array}{lll}
T_{11}&T_{12}&T_{13} \\
T_{21}&T_{22}&T_{23} \\
T_{31}&T_{32}&T_{33}
\end{array}\right] .

\mathbf{e}_1 = \mathbf{e}_x = \left( \begin{array}{l}
1 \\
0 \\
0
\end{array}\right)

\mathbf{e}_2=\mathbf{e}_y=\left(\begin{array}{l}
0 \\
1 \\
0
\end{array}\right)

\mathbf{e}_3=\mathbf{e}_z=\left(\begin{array}{l}
0 \\
0 \\
1
\end{array}\right)

\mathbf{I}= \delta_{ij}=\left[\begin{array}{lll}
1&0&0 \\
0&1&0 \\
0&0&1
\end{array}\right]

\phi \mathbf{b}=\left(\begin{array}{c}
\phi b_{x} \\
\phi b_{y} \\
\phi b_{z}
\end{array}\right)

\phi \mathbf{T}=\left[\begin{array}{lll}
\phi T_{x x}&\phi T_{x y}&\phi T_{x z} \\
\phi T_{y x}&\phi T_{y y}&\phi T_{y z} \\
\phi T_{z x}&\phi T_{z y}&\phi T_{z z}
\end{array}\right]

### 1.2 The dot product of two vectors

\phi=\mathbf{a} \cdot \mathbf{b}=\mathbf{a}^T \mathbf{b}=\sum_{i=1}^3 a_i b_i

\mathbf{a} \cdot \mathbf{b}=a_{1} b_{1}+a_{2} b_{2}+a_{3} b_{3}=a_{i} b_{i}

\begin{array}{c}
\mathbf{a} \cdot \mathbf{a}=a_{1} a_{1}+a_{2} a_{2}+a_{3} a_{3}=a^{2} \\
\mathbf{a} \cdot \mathbf{b}=\mathbf{b} \cdot \mathbf{a} \\
\mathbf{a} \cdot(\mathbf{b}+\mathbf{c})=\mathbf{a} \cdot \mathbf{b}+\mathbf{a} \cdot \mathbf{c}
\end{array}

The geometrical interpretation of the scalar product is \mathbf{a} \cdot \mathbf{b}=a b \cos \theta, where \theta is the angle between the two vectors, as depicted in Figure 1:

Figure 1: The scalar product
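As a quick sanity check of the index formula, here is a minimal plain-C++ sketch (standard C++ only, not OpenFOAM; `Vec3` and `dot` are illustrative names):

```cpp
#include <array>
#include <cassert>

using Vec3 = std::array<double, 3>;

// a . b = a_1 b_1 + a_2 b_2 + a_3 b_3
double dot(const Vec3& a, const Vec3& b)
{
    double s = 0.0;
    for (int i = 0; i < 3; ++i)
    {
        s += a[i]*b[i];
    }
    return s;
}
```

The commutativity listed above, \mathbf{a} \cdot \mathbf{b}=\mathbf{b} \cdot \mathbf{a}, follows directly from the componentwise sum.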

### 1.3 The cross product of two vectors

\mathbf{a} \times \mathbf{b}=\left(a_{2} b_{3}-a_{3} b_{2}, a_{3} b_{1}-a_{1} b_{3}, a_{1} b_{2}-a_{2} b_{1}\right)=e_{i j k} a_{j} b_{k}

e_{i j k}=\left\{\begin{array}{ll}
0&\text { when any two indices are equal } \\
+1&\text { when } i, j, k \text { are an even permutation of } 1,2,3 \\
-1&\text { when } i, j, k \text { are an odd permutation of } 1,2,3
\end{array}\right.

\begin{array}{c}
\mathbf{a} \times \mathbf{a}=\mathbf{0} \\
\mathbf{a} \times \mathbf{b}=-\mathbf{b} \times \mathbf{a} \\
\mathbf{a} \times(\mathbf{b}+\mathbf{c})=\mathbf{a} \times \mathbf{b}+\mathbf{a} \times \mathbf{c}
\end{array}

Figure 2: The vector product
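The component formula and the identities above can be verified with a short plain-C++ sketch (standard C++ only, not OpenFOAM; names are illustrative):

```cpp
#include <array>
#include <cassert>

using Vec3 = std::array<double, 3>;

// a x b = (a2 b3 - a3 b2, a3 b1 - a1 b3, a1 b2 - a2 b1)
Vec3 cross(const Vec3& a, const Vec3& b)
{
    return {a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]};
}
```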

### 1.4 The outer product of two vectors

\begin{aligned}
\mathbf{T}=\mathbf{a} \otimes \mathbf{b}=\mathbf{a b}^{T}&= \left[ \begin{array}{c} a_{x} \\ a_{y} \\ a_{z} \end{array} \right] \left[ \begin{array}{ccc} b_{x}&b_{y}& b_{z} \end{array} \right] \\ &=\left[\begin{array}{lll}
a_{x} b_{x}&a_{x} b_{y}&a_{x} b_{z} \\
a_{y} b_{x}&a_{y} b_{y}&a_{y} b_{z} \\
a_{z} b_{x}&a_{z} b_{y}&a_{z} b_{z}
\end{array}\right] .
\end{aligned}

\mathbf{T}=\mathbf{a} \otimes \mathbf{b}=\mathbf{a b}

## 2. Second-Rank Tensors

A second-rank tensor \mathbf{T} is a linear operator that maps a vector \mathbf{v} to a vector \mathbf{u}:

u_{i}=T_{i j} v_{j}

\mathbf{T}=T_{i j}=\left(\begin{array}{cccc}
T_{11}&T_{12}&\ldots&T_{1 n} \\
T_{21}&T_{22}&\ldots&T_{2 n} \\
\vdots&\vdots&\ddots&\vdots \\
T_{n 1}&T_{n 2}&\ldots&T_{n n}
\end{array}\right)

In three-dimensional space this reduces to:

\mathbf{T}=T_{i j}=\left(\begin{array}{ccc}
T_{11}&T_{12}&T_{13} \\
T_{21}&T_{22}&T_{23} \\
T_{31}&T_{32}&T_{33}
\end{array}\right)
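The defining relation u_{i}=T_{i j} v_{j} is just a matrix-vector product. A minimal plain-C++ sketch (illustrative names, not OpenFOAM types):

```cpp
#include <array>
#include <cassert>

using Vec3 = std::array<double, 3>;
using Ten3 = std::array<std::array<double, 3>, 3>;  // T[i][j] holds T_ij

// u_i = T_ij v_j
Vec3 apply(const Ten3& T, const Vec3& v)
{
    Vec3 u{0.0, 0.0, 0.0};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            u[i] += T[i][j]*v[j];
    return u;
}
```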

### 2.3 The scalar product of two tensors

\begin{aligned}
\mathbf{T}: \mathbf{S}=\sum_{i=1}^{3} \sum_{j=1}^{3} T_{i j} S_{i j}=T_{i j} S_{i j}=\,& T_{11} S_{11}+T_{12} S_{12}+T_{13} S_{13}+\\
& T_{21} S_{21}+T_{22} S_{22}+T_{23} S_{23}+\\
& T_{31} S_{31}+T_{32} S_{32}+T_{33} S_{33}
\end{aligned}

A second scalar product contracts the transposed index pairing; it is written here with a double dot to distinguish it from the single-dot tensor product of Section 2.5, which yields a tensor rather than a scalar:

\begin{aligned}
\mathbf{T} \cdot\cdot\, \mathbf{S}=T_{i j} S_{j i}=\,& T_{11} S_{11}+T_{12} S_{21}+T_{13} S_{31}+ \\
& T_{21} S_{12}+T_{22} S_{22}+T_{23} S_{32}+ \\
& T_{31} S_{13}+T_{32} S_{23}+T_{33} S_{33}
\end{aligned}
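Both contractions reduce a pair of second-rank tensors to a scalar; they differ only in the index pairing. A plain-C++ sketch (illustrative names, not OpenFOAM):

```cpp
#include <array>
#include <cassert>

using Ten3 = std::array<std::array<double, 3>, 3>;

// T : S = T_ij S_ij
double doubleDot(const Ten3& T, const Ten3& S)
{
    double s = 0.0;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            s += T[i][j]*S[i][j];
    return s;
}

// T_ij S_ji (the transposed pairing)
double doubleDotT(const Ten3& T, const Ten3& S)
{
    double s = 0.0;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            s += T[i][j]*S[j][i];
    return s;
}
```

For symmetric tensors the two pairings coincide.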

### 2.4 The tensor product of two vectors

The tensor product of two vectors, denoted by \mathbf{a b} (sometimes denoted \mathbf{a} \otimes \mathbf{b} ), is defined by the requirement that (\mathbf{a b}) \cdot \mathbf{v}=\mathbf{a}(\mathbf{b} \cdot \mathbf{v}) for all \mathbf{v} and produces a tensor whose components are evaluated as:

\mathbf{a b}=a_{i} b_{j}=\left(\begin{array}{ccc}
a_{1} b_{1}&a_{1} b_{2}&a_{1} b_{3} \\
a_{2} b_{1}&a_{2} b_{2}&a_{2} b_{3} \\
a_{3} b_{1}&a_{3} b_{2}&a_{3} b_{3}
\end{array}\right)
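The defining requirement (\mathbf{a b}) \cdot \mathbf{v}=\mathbf{a}(\mathbf{b} \cdot \mathbf{v}) can be checked directly. A plain-C++ sketch (illustrative names, not OpenFOAM):

```cpp
#include <array>
#include <cassert>

using Vec3 = std::array<double, 3>;
using Ten3 = std::array<std::array<double, 3>, 3>;

// (ab)_ij = a_i b_j
Ten3 outer(const Vec3& a, const Vec3& b)
{
    Ten3 T{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            T[i][j] = a[i]*b[j];
    return T;
}
```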

### 2.5 The tensor product of two second-rank tensors

The tensor product of two tensors combines the two operations \mathbf{T} and \mathbf{S} so that \mathbf{S} is performed first, i.e. (\mathbf{T} \cdot \mathbf{S}) \cdot \mathbf{v}=\mathbf{T} \cdot(\mathbf{S} \cdot \mathbf{v}) for all \mathbf{v}. It is denoted by \mathbf{T} \cdot \mathbf{S} and produces a tensor whose components are evaluated as:

P_{i j}=T_{i k} S_{k j}

The product is commutative only if both tensors are symmetric, since

\mathbf{T} \cdot \mathbf{S}=\left[\mathbf{S}^{\mathrm{T}} \cdot \mathbf{T}^{\mathrm{T}}\right]^{\mathrm{T}}

### 2.6 The trace of a tensor

\operatorname{tr} \mathbf{T} = T_{ij} \delta_{ij} = T_{kk} = T_{11} + T_{22} + T_{33}

### 2.7 The determinant of a tensor

The determinant of a tensor is also a scalar invariant function of the tensor denoted by

\begin{array}{c}
\operatorname{det} \mathbf{T}=\left|\begin{array}{ccc}
T_{11}&T_{12}&T_{13} \\
T_{21}&T_{22}&T_{23} \\
T_{31}&T_{32}&T_{33}
\end{array}\right|=\frac{1}{6} e_{i j k} e_{p q r} T_{i p} T_{j q} T_{k r}= \\
T_{11}\left(T_{22} T_{33}-T_{23} T_{32}\right)-T_{12}\left(T_{21} T_{33}-T_{23} T_{31}\right)+T_{13}\left(T_{21} T_{32}-T_{22} T_{31}\right)
\end{array}
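The Levi-Civita expression for the determinant agrees with the familiar cofactor expansion; the following plain-C++ sketch (illustrative names, not OpenFOAM) checks this numerically:

```cpp
#include <array>
#include <cassert>

using Ten3 = std::array<std::array<double, 3>, 3>;

// tr T = T_kk
double trace(const Ten3& T) { return T[0][0] + T[1][1] + T[2][2]; }

// det T by cofactor expansion along the first row
double det(const Ten3& T)
{
    return T[0][0]*(T[1][1]*T[2][2] - T[1][2]*T[2][1])
         - T[0][1]*(T[1][0]*T[2][2] - T[1][2]*T[2][0])
         + T[0][2]*(T[1][0]*T[2][1] - T[1][1]*T[2][0]);
}

// Levi-Civita symbol e_ijk for 0-based indices: 0, +1 or -1
int eps(int i, int j, int k)
{
    return (i - j)*(j - k)*(k - i)/2;
}

// det T = (1/6) e_ijk e_pqr T_ip T_jq T_kr
double detLeviCivita(const Ten3& T)
{
    double s = 0.0;
    for (int i = 0; i < 3; ++i) for (int j = 0; j < 3; ++j) for (int k = 0; k < 3; ++k)
    for (int p = 0; p < 3; ++p) for (int q = 0; q < 3; ++q) for (int r = 0; r < 3; ++r)
        s += eps(i, j, k)*eps(p, q, r)*T[i][p]*T[j][q]*T[k][r];
    return s/6.0;
}
```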

## 3. Higher-Rank Tensors

In Section 2.4 an operation was defined for the product of two vectors which produces a second-rank tensor. Tensors of higher rank than two can be formed by the product of more than two vectors, e.g. a third-rank tensor \mathbf{a b c}, a fourth-rank tensor \mathbf{a b c d}. If one of the tensor products is replaced by a scalar (\cdot) product of two vectors, the resulting tensor is two ranks lower than the original. For example, (\mathbf{a} \cdot \mathbf{b}) \mathbf{c d} is a second-rank tensor since the product in brackets is a scalar quantity. Similarly, if a scalar (:) product of two tensors is substituted, as in \mathbf{a b}:\mathbf{c d}, the resulting tensor is four ranks lower than the original. The process of reducing the rank of a tensor by a scalar product is known as contraction. The dot notation indicates the level of contraction and can be extended to tensors of any rank. In continuum mechanics, tensors of rank greater than two are rare. The most common tensor operations found in continuum mechanics, other than those in Sections 1 and 2, are:

a tensor product of a vector \mathbf{a} and a second-rank tensor \mathbf{T} to produce a third-rank tensor \mathbf{P}=\mathbf{a} \mathbf{T} whose components are:

P_{i j k}=a_{i} T_{j k}

a scalar product of a vector \mathbf{a} and third rank tensor \mathbf{P} to produce a second rank tensor \mathbf{T}=\mathbf{a} \cdot \mathbf{P} whose components are

T_{j k}=a_{i} P_{i j k}

a scalar (:) product of a fourth-rank tensor \mathbf{C} and a second-rank tensor \mathbf{S} to produce a second-rank tensor \mathbf{T}=\mathbf{C}: \mathbf{S} whose components are

T_{i j}=C_{i j k l} S_{k l}

## 4. Coordinate Systems and Coordinate Transformations

### 4.1 The Cartesian coordinate system

Figure 3: Coordinate system and direction cosines

\mathbf{l} = \frac{\mathbf{p}}{|\mathbf{p}|}

### 4.2 Coordinate rotation

Figure 4: Coordinate rotation

\begin{array}{c|ccc}
O&x_{1}&x_{2}&x_{3} \\
\hline x_{1}^{\prime}&L_{11}&L_{12}&L_{13} \\
x_{2}^{\prime}&L_{21}&L_{22}&L_{23} \\
x_{3}^{\prime}&L_{31}&L_{32}&L_{33}
\end{array}

The matrix transformation can be expressed in a more compact form by defining the group of direction cosines as a tensor \mathbf{L}=L_{ij}. A coordinate \mathbf{x} in the O x_{1} x_{2} x_{3} axes can then be represented in the O x_{1}^{\prime} x_{2}^{\prime} x_{3}^{\prime} axes as:

\mathbf{x}^{\prime}=\mathbf{L} \cdot \mathbf{x}

Components of the transformation tensor \mathbf{L} must satisfy certain conditions since they are defined by two right-handed sets of axes. Since the axes are mutually perpendicular:

\begin{array}{l}
L_{11} L_{21}+L_{12} L_{22}+L_{13} L_{23}=0 \\
L_{21} L_{31}+L_{22} L_{32}+L_{23} L_{33}=0 \\
L_{31} L_{11}+L_{32} L_{12}+L_{33} L_{13}=0
\end{array}

and since the sums of squares of directional cosines are unity:

\begin{array}{l}
L_{11}^{2}+L_{12}^{2}+L_{13}^{2}=1 \\
L_{21}^{2}+L_{22}^{2}+L_{23}^{2}=1 \\
L_{31}^{2}+L_{32}^{2}+L_{33}^{2}=1
\end{array}

The two equations above describe the orthonormality conditions which can be expressed in a more compact form:

\mathbf{L} \cdot \mathbf{L}^{\mathbf{T}}=\mathbf{I}

The transformation matrix must satisfy one further requirement which ensures that both the sets of axes are right-handed. It is:

\operatorname{det} \mathbf{L}=1
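These orthonormality conditions are easy to verify for a concrete rotation. A plain-C++ sketch for a rotation about the x_3 axis (illustrative names, not OpenFOAM):

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cmath>

using Ten3 = std::array<std::array<double, 3>, 3>;

// Direction cosines for a rotation by angle theta about the x3 axis
Ten3 rotZ(double theta)
{
    const double c = std::cos(theta);
    const double s = std::sin(theta);
    return {{{   c,   s, 0.0},
             {  -s,   c, 0.0},
             { 0.0, 0.0, 1.0}}};
}

// Largest absolute entry of L . L^T - I (zero for an orthonormal L)
double orthoError(const Ten3& L)
{
    double e = 0.0;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
        {
            double s = (i == j ? -1.0 : 0.0);  // subtract I
            for (int k = 0; k < 3; ++k)
                s += L[i][k]*L[j][k];
            e = std::max(e, std::abs(s));
        }
    return e;
}
```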

## 5. Differential Operators

The nabla operator \nabla is defined in Cartesian coordinates as:

\nabla=\left(\begin{array}{l}
\frac{\partial}{\partial x} \\
\frac{\partial}{\partial y} \\
\frac{\partial}{\partial z}
\end{array}\right)=\left(\begin{array}{l}
\frac{\partial}{\partial x_{1}} \\
\frac{\partial}{\partial x_{2}} \\
\frac{\partial}{\partial x_{3}}
\end{array}\right)

### 5.1 The gradient

- The gradient of a scalar is a vector. The gradient operation can also be applied to higher-rank tensors and always raises the rank of its argument by one:

\operatorname{grad} \phi=\nabla \phi=\left(\begin{array}{l}
\frac{\partial \phi}{\partial x} \\
\frac{\partial \phi}{\partial y} \\
\frac{\partial \phi}{\partial z}
\end{array}\right) .

- Accordingly, applying the gradient to a vector yields a second-rank tensor:

\operatorname{grad} \mathbf{b}=\nabla \otimes \mathbf{b}=\left[\begin{array}{ccc}
\frac{\partial}{\partial x} b_{x}&\frac{\partial}{\partial x} b_{y}&\frac{\partial}{\partial x} b_{z} \\
\frac{\partial}{\partial y} b_{x}&\frac{\partial}{\partial y} b_{y}&\frac{\partial}{\partial y} b_{z} \\
\frac{\partial}{\partial z} b_{x}&\frac{\partial}{\partial z} b_{y}&\frac{\partial}{\partial z} b_{z}
\end{array}\right]

In compact notation this is simply written \nabla \mathbf{b}.

### 5.2 The divergence

- The divergence of a vector \mathbf{b} is a scalar \phi, denoted by combining the nabla operator with the dot operator, i.e. \nabla \cdot:

\operatorname{div} \mathbf{b}=\nabla \cdot \mathbf{b}=\sum_{i=1}^{3} \frac{\partial b_{i}}{\partial x_{i}}=\frac{\partial b_{1}}{\partial x_{1}}+\frac{\partial b_{2}}{\partial x_{2}}+\frac{\partial b_{3}}{\partial x_{3}}

- The divergence of a second-rank tensor \mathbf{T} is a vector:

\operatorname{div} \mathbf{T}=\nabla \cdot \mathbf{T}=\frac{\partial T_{i j}}{\partial x_{i}}

### 5.3 Product rules for the divergence operator

- The divergence of the product of a vector \mathbf{a} and a scalar \phi can be split as follows, and yields a scalar:

\nabla \cdot(\phi \mathbf{a})=\phi\, \nabla \cdot \mathbf{a}+\mathbf{a} \cdot \nabla \phi

- The divergence of the outer product of two vectors \mathbf{a} and \mathbf{b} can be split as follows, and yields a vector:

\nabla \cdot(\mathbf{a} \mathbf{b})=\frac{\partial}{\partial x_{i}}\left(a_{i} b_{j}\right)=\mathbf{b}\,(\nabla \cdot \mathbf{a})+(\mathbf{a} \cdot \nabla)\, \mathbf{b}

- The divergence of the inner product of a tensor \mathbf{T} and a vector \mathbf{b} can be split as follows, and yields a scalar:

\nabla \cdot(\mathbf{T} \cdot \mathbf{b})=\frac{\partial}{\partial x_{i}}\left(T_{i j} b_{j}\right)=b_{j} \frac{\partial T_{i j}}{\partial x_{i}}+T_{i j} \frac{\partial b_{j}}{\partial x_{i}}

If you feel that a product rule for the inner product of two vectors is missing here, consider what tensor rank that inner product produces, and then how the divergence operator would change that rank.
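One of these product rules, \nabla \cdot(\phi \mathbf{a})=\phi \nabla \cdot \mathbf{a}+\mathbf{a} \cdot \nabla \phi, can be checked numerically with central differences. A plain-C++ sketch (the fields \phi and \mathbf{a} below are arbitrary illustrative choices, not from the text):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Illustrative smooth fields
double phi(const Vec3& p) { return p[0]*p[1]*p[2]; }
Vec3 a(const Vec3& p) { return {p[0]*p[0], p[1]*p[2], p[0] + p[2]}; }

const double h = 1e-5;  // central-difference step

// d f / d x_i at p, by central differences
template<class F>
double ddx(F f, Vec3 p, int i)
{
    Vec3 pp = p, pm = p;
    pp[i] += h;
    pm[i] -= h;
    return (f(pp) - f(pm))/(2*h);
}

// | div(phi a) - (phi div a + a . grad phi) | at p
double productRuleError(const Vec3& p)
{
    double lhs = 0.0, divA = 0.0, aGradPhi = 0.0;
    for (int i = 0; i < 3; ++i)
    {
        lhs      += ddx([i](const Vec3& q) { return phi(q)*a(q)[i]; }, p, i);
        divA     += ddx([i](const Vec3& q) { return a(q)[i]; }, p, i);
        aGradPhi += a(p)[i]*ddx(phi, p, i);
    }
    return std::fabs(lhs - (phi(p)*divA + aGradPhi));
}
```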

### 5.4 The total derivative

In fluid dynamics, the total (material) derivative of an arbitrary quantity \phi is defined as:

\frac{D \phi}{D t}=\frac{\partial \phi}{\partial t}+(\mathbf{U} \cdot \nabla) \phi

where \mathbf{U} represents the velocity vector. The last term denotes an inner product. Depending on the rank of \phi (scalar, vector, tensor, and so on), the correct mathematical expression for the second term on the right-hand side (RHS) has to be applied. For example:
- if \phi is a scalar, the inner product of two vectors, \mathbf{a} \cdot \mathbf{b} (Section 1.2), applies;
- if \phi is a vector, the inner product of a tensor and a vector, \mathbf{b}=\mathbf{T} \cdot \mathbf{a}, applies.

# II. Tensor Computation with OpenFOAM²

OpenFOAM is an open-source computational fluid dynamics (CFD) package written in C++. It is built on top of a set of basic mathematical libraries and is, in essence, a collection of C++ class libraries for solving various differential equations numerically. Naturally, we can also call these libraries to perform basic mathematical operations, such as the tensor operations discussed in this article.

## Mathematical Operations and OpenFOAM Operators

In OpenFOAM the tensor rank is r = 0, 1, 2 or 3. A tensor of rank 0 is called a scalar, a tensor of rank 1 a vector, and a tensor of rank 2 simply a tensor (a matrix, in mathematical terms). A second-rank tensor has 9 components, written T_{ij}; a third-rank tensor has 27 components, written P_{ijk}.

**Operations exclusive to tensors of rank 2**

| Operation | Mathematical description | Description in OpenFOAM |
| --- | --- | --- |
| Transpose | T^T | `T.T()` |
| Diagonal | diag T | `diag(T)` |
| Trace | tr T | `tr(T)` |
| Deviatoric component | dev T | `dev(T)` |
| Symmetric component | symm T | `symm(T)` |
| Skew-symmetric component | skew T | `skew(T)` |
| Determinant | det T | `det(T)` |
| Cofactors | cof T | `cof(T)` |
| Inverse | inv T | `inv(T)` |
| Hodge dual | *T | `*T` |

**Operations exclusive to scalars**

| Operation | Mathematical description | Description in OpenFOAM |
| --- | --- | --- |
| Sign (boolean) | sgn(s) | `sign(s)` |
| Positive (boolean) | s >= 0 | `pos(s)` |
| Negative (boolean) | s < 0 | `neg(s)` |
| Limit by scalar n | limit(s, n) | `limit(s, n)` |
| Square root | sqrt(s) | `sqrt(s)` |
| Exponential | exp s | `exp(s)` |
| Natural logarithm | ln s | `log(s)` |
| Base 10 logarithm | log10 s | `log10(s)` |
| Sine | sin s | `sin(s)` |
| Cosine | cos s | `cos(s)` |
| Tangent | tan s | `tan(s)` |
| Arc sine | asin s | `asin(s)` |
| Arc cosine | acos s | `acos(s)` |
| Arc tangent | atan s | `atan(s)` |
| Hyperbolic sine | sinh s | `sinh(s)` |
| Hyperbolic cosine | cosh s | `cosh(s)` |
| Hyperbolic tangent | tanh s | `tanh(s)` |
| Hyperbolic arc sine | asinh s | `asinh(s)` |
| Hyperbolic arc cosine | acosh s | `acosh(s)` |
| Hyperbolic arc tangent | atanh s | `atanh(s)` |
| Error function | erf s | `erf(s)` |
| Complement error function | erfc s | `erfc(s)` |

## Programming Examples

### 5. Constructing symmetric tensors

    symmTensor st1(1, 2, 3, 4, 5, 6);
    symmTensor st2(7, 8, 9, 10, 11, 12);

    Info<< "Check dot product of symmetric tensors "
        << (st1 & st2) << endl;

    Info<< "Check inner sqr of a symmetric tensor "  // the two terms are equal
        << innerSqr(st1) << " " << innerSqr(st1) - (st1 & st1) << endl;


\begin{aligned}
{\rm twoSymm}(\boldsymbol{T}) &\equiv \boldsymbol{T} + \boldsymbol{T}^T \\
&= \left(
\begin{matrix}
2T_{11}&T_{12} + T_{21}&T_{13} + T_{31} \\
T_{21} + T_{12}&2T_{22}&T_{23} + T_{32} \\
T_{31} + T_{13}&T_{32} + T_{23}&2T_{33}
\end{matrix} \right)
\end{aligned}

Definition: src/OpenFOAM/primitives/Tensor/TensorI.H

    //- Return twice the symmetric part of a tensor
    template<class Cmpt>
    inline SymmTensor<Cmpt> twoSymm(const Tensor<Cmpt>& t)
    {
        return SymmTensor<Cmpt>
        (
            2*t.xx(), (t.xy() + t.yx()), (t.xz() + t.zx()),
                      2*t.yy(),          (t.yz() + t.zy()),
                                         2*t.zz()
        );
    }


    symmTensor st1(1, 2, 3, 4, 5, 6);
    symmTensor st2(7, 8, 9, 10, 11, 12);

    Info<< "Twice the Symmetric Part of a Second Rank Tensor"
        << twoSymm(st1 & st2) << endl;

    Info<< "Check symmetric part of dot product of symmetric tensors "
        << twoSymm(st1 & st2) - ((st1 & st2) + (st2 & st1)) << endl;

Output:

    Twice the Symmetric Part of a Second Rank Tensor(100 152 182 222 262 308)
    Check symmetric part of dot product of symmetric tensors (0 0 0 0 0 0 0 0 0)


\begin{aligned}
{\rm symm}(\boldsymbol{T}) &\equiv \frac{1}{2} (\boldsymbol{T} + \boldsymbol{T}^T) \\
&= \frac{1}{2} \left(
\begin{matrix}
2T_{11}&T_{12} + T_{21}&T_{13} + T_{31} \\
T_{21} + T_{12}&2T_{22}&T_{23} + T_{32} \\
T_{31} + T_{13}&T_{32} + T_{23}&2T_{33}
\end{matrix} \right)
\end{aligned}

Definition: src/OpenFOAM/primitives/Tensor/TensorI.H

    //- Return the symmetric part of a tensor
    template<class Cmpt>
    inline SymmTensor<Cmpt> symm(const Tensor<Cmpt>& t)
    {
        return SymmTensor<Cmpt>
        (
            t.xx(), 0.5*(t.xy() + t.yx()), 0.5*(t.xz() + t.zx()),
                    t.yy(),                0.5*(t.yz() + t.zy()),
                                           t.zz()
        );
    }


\begin{aligned}
{\rm skew}(\boldsymbol{T}) &\equiv \frac{1}{2} (\boldsymbol{T} - \boldsymbol{T}^T) \\
&= \frac{1}{2} \left(
\begin{matrix}
0&T_{12} - T_{21}&T_{13} - T_{31} \\
T_{21} - T_{12}&0&T_{23} - T_{32} \\
T_{31} - T_{13}&T_{32} - T_{23}&0
\end{matrix} \right)
\end{aligned}

Definition: src/OpenFOAM/primitives/Tensor/TensorI.H

    //- Return the skew-symmetric part of a tensor
    template<class Cmpt>
    inline Tensor<Cmpt> skew(const Tensor<Cmpt>& t)
    {
        return Tensor<Cmpt>
        (
            0.0,                   0.5*(t.xy() - t.yx()), 0.5*(t.xz() - t.zx()),
            0.5*(t.yx() - t.xy()), 0.0,                   0.5*(t.yz() - t.zy()),
            0.5*(t.zx() - t.xz()), 0.5*(t.zy() - t.yz()), 0.0
        );
    }


Usage Example: src/TurbulenceModels/turbulenceModels/RAS/SSG/SSG.C

    tmp<volTensorField> tgradU(fvc::grad(U));
    const volTensorField& gradU = tgradU();

    volSymmTensorField b(dev(R)/(2*k_));
    volSymmTensorField S(symm(gradU));
    volTensorField Omega(skew(gradU));


In the above code, the symmetric and antisymmetric parts of the velocity gradient tensor \frac{\partial u_j}{\partial x_i} are defined as follows:

\begin{aligned}
S_{ij} &= \frac{1}{2} \left( \frac{\partial u_j}{\partial x_i} + \frac{\partial u_i}{\partial x_j} \right), \\
\Omega_{ij} &= \frac{1}{2} \left( \frac{\partial u_j}{\partial x_i} - \frac{\partial u_i}{\partial x_j} \right),
\end{aligned}

where S_{ij} is the strain rate tensor and \Omega_{ij} is the vorticity (spin) tensor.
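The definitions above imply the decomposition \boldsymbol{T} = {\rm symm}(\boldsymbol{T}) + {\rm skew}(\boldsymbol{T}). A plain-C++ check of this identity (illustrative names, not the OpenFOAM implementation):

```cpp
#include <array>
#include <cassert>

using Ten3 = std::array<std::array<double, 3>, 3>;

// symm(T)_ij = (T_ij + T_ji)/2
Ten3 symmPart(const Ten3& t)
{
    Ten3 s{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            s[i][j] = 0.5*(t[i][j] + t[j][i]);
    return s;
}

// skew(T)_ij = (T_ij - T_ji)/2
Ten3 skewPart(const Ten3& t)
{
    Ten3 s{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            s[i][j] = 0.5*(t[i][j] - t[j][i]);
    return s;
}
```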

### 6. Miscellaneous (to be completed)

    symmTensor st1(1, 2, 3, 4, 5, 6);
    symmTensor st2(7, 8, 9, 10, 11, 12);

    tensor t1(1, 2, 3, 4, 5, 6, 7, 8, 9);
    tensor t6(1, 0, -4, 0, 5, 4, -4, 4, 3);
    tensor t7(1, 2, 3, 2, 4, 5, 3, 5, 6);
    vector v1(1, 2, 3);

    Info<< sqr(v1) << endl;
    Info<< symm(t7) << endl;
    Info<< twoSymm(t7) << endl;
    Info<< magSqr(st1) << endl;
    Info<< mag(st1) << endl;


# References

1. https://cfd.direct/openfoam/tensor-mathematics ↩︎
2. The code in this article was tested with OpenFOAM v8. ↩︎