\section{$F_n$-algebras and their representations} \subsection{About $*$-representations of ${F}_n$-algebras}\label{sec:1.2.1} \markright{1.2. $F_n$-algebras and their representations} \noindent\textbf{1.} Let ${{F}}_n$ denote the standard polynomial of degree $n$ in $n$ non-commuting variables: $$ {{F}}_n(x_1,x_2,\dots,x_n)= \sum_{\sigma \in S_n} (-1)^{p(\sigma)} x_{\sigma(1)} \dots x_{\sigma(n)}, $$ where $p(\sigma)$ is the parity of the permutation $\sigma$, and $S_n$ is the symmetric group. In the sequel, we say that $A$ is an ${{F}}_n$-algebra if ${{F}}_n(x_1,\dots,x_n)=0$ for all $x_1,\dots,x_n\in A$. The following Amitsur--Levitzki theorem holds~\cite{126}: {\em the algebra $M_n({\mathbb C})$ is an ${{F}}_{2n}$-algebra, but not an ${{F}}_{2n-1}$-algebra}. ${{F}}_n$-algebras form one of the simplest classes of algebras when considered from the standpoint of the structure of their irreducible representations. \begin{theorem}\label{th:r.f} Consider the following statements: \begin{itemize} \item[\textup{(i)}] there exists a residual family ${\mathcal L}$ of irreducible representations $\irrep A \supset {\mathcal L} \ni \pi$ such that $\dim H_\pi \le n$ for all $\pi \in {\mathcal L}$\textup; \item[\textup{(ii)}] there exists a residual family ${\mathcal L}$ of representations $ \rep A \supset {\mathcal L} \ni \pi$ such that $\dim H_\pi \le n$ for all $\pi \in {\mathcal L}$\textup; \item[\textup{(iii)}] $A$ is an ${{F}}_{2n}$-algebra\textup; \item[\textup{(iv)}] for any $\pi \in \irrep A$, $\dim H_\pi \le n$. \end{itemize} We have the following implications: \textup{(i)} $\Rightarrow$ \textup{(ii)} $\Rightarrow$ \textup{(iii)} $\Rightarrow$ \textup{(iv)}. None of the converse implications holds. \end{theorem} \begin{proof} Here we only prove that (ii) $\Rightarrow$ (iii), since it is this implication that will be used later in examples to prove that the corresponding algebra is an ${{F}}_{2n}$-algebra.
Assume that (ii) holds, but $A$ is not an ${{F}}_{2n}$-algebra. Then there exist $x_1$, \dots, $x_{2n}\in A$ such that ${F}_{2n}(x_1, \dots, x_{2n})=x \ne0$. Let us choose $\pi \in{\mathcal L}$ such that $\pi(x) \ne0$. Then we get \[ \pi({F}_{2n}(x_1, \dots, x_{2n})) = {F}_{2n}(\pi(x_1), \dots, \pi(x_{2n})) = \pi(x) \ne0. \] But since $\dim H_\pi \le n$, this contradicts the Amitsur--Levitzki theorem. None of the converse implications of Theorem~\ref{th:r.f} holds. Indeed, to see that (ii) does not imply (i), and that (iv) does not imply (iii), consider the nilpotent algebra of complex $n\times n$ matrices of the form $$ X =\begin{pmatrix}0&&*\\&\ddots&\\0&&0 \end{pmatrix}, $$ which only has the trivial irreducible representation for any $n\in {\mathbb N}$. Condition (iii) does not imply (ii). For example, the algebra of matrices $X \in M_n({\mathbb C})$ of the form $$ X = \begin{pmatrix} a_{11} &* &\dots&*\\ 0 &*&\dots&* \\ \vdots&\vdots&&\vdots\\ 0&*&\dots& *\\ 0&0&\dots&a_{11} \end{pmatrix} $$ is an ${F}_{2n-2}$-algebra. But for any representation $\pi$ with $\dim H_\pi \le n-1$ and for the nilpotent element \[ S = \begin{pmatrix}0&1&&0 \\&\ddots&\ddots& \\&&0&1 \\ 0&&&0 \end{pmatrix}, \] we have $\pi(S^{n-1}) = (\pi (S))^{n-1} =0$, since $\pi(S)$ is a nilpotent element in $M_{n-1}({\mathbb C})$. Hence such representations do not separate $S^{n-1}$ and the zero element of the algebra. \end{proof} \noindent\textbf{2.} If ${\mathfrak A}$ is a $*$-algebra, and one only considers its $*$-rep\-re\-sen\-tations, then, evidently, (i) $\Leftrightarrow$ (ii). Condition (iii) does not imply (ii), since, for example, the algebra $M_n({\mathbb C})$ with a non-standard involution does not have non-zero $*$-representa\-tions. Condition (iv) does not imply (iii).
For example, the Weyl $*$\nobreakdash-alge\-bra ${{\mathbb C}}\langle P =P^*, Q = Q^* \mid [P, Q] = iI \rangle$ of differential operators with polynomial coefficients in one variable has no non-zero $*$-representations by bounded operators, so that condition (iv) holds trivially, but it is not an ${{F}}_n$-algebra for any $n \in {\mathbb N}$. \medskip\noindent\textbf{3.} If ${\mathfrak A}$ is a $C^*$-algebra, then all conditions of Theorem~\ref{th:r.f} are equivalent, since for a semi-simple Banach algebra ${\mathfrak A}$, the set $\irrep {\mathfrak A}$ of its irreducible representations is a residual family (see \cite{142}) and, therefore, (iv) $\Rightarrow$ (i). \subsection{Examples of ${F}_n$-algebras generated by idempotents and their representations}\label{sec:1.2.2} Here, we give a number of examples of algebras and $*$-algebras generated by idempotents, and construct a residual family of representations or $*$-representations $\pi$ for each of them such that $\dim H_\pi \le n$, thereby showing that these algebras are ${{F}}_{2n}$-algebras. \medskip\noindent\textbf{1.} Representations of the ${{F}}_4$-algebra generated by two idempotents $q_1$, $q_2$, and the unit element, \begin{align*} Q_2&={\mathbb C}\langle q_1,q_2 \mid {q_1}^2=q_1,\,{q_2}^2=q_2 \rangle \\ &= {\mathbb C}\langle u=2q_1-e,\,v=2q_2-e \mid u^2=e,\,v^2=e \rangle \end{align*} are well known; nevertheless, we present a description of the irreducible representations of the algebra to show the scheme of investigation we will follow in more complicated examples.
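As a quick check of the second presentation, note that for an idempotent $q_1$ the element $u=2q_1-e$ is indeed an involution:
\[
u^2=(2q_1-e)^2=4q_1^2-4q_1+e=4q_1-4q_1+e=e,
\]
and, conversely, $q_1=(u+e)/2$ is an idempotent whenever $u^2=e$; hence the two systems of generators and relations define the same algebra.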
All finite-dimensional irreducible representations of $Q_2$, up to equivalence, are: a) four one-dimensional representations: $\pi_{0,0}(q_1)=0$, $\pi_{0,0}(q_2)=0$; $\pi_{1,0}(q_1)=1$, $\pi_{1,0}(q_2)=0$; $ \pi_{0,1}(q_1)=0$, $\pi_{0,1}(q_2)=1$; $\pi_{1,1}(q_1)=1$, $\pi_{1,1}(q_2)=1$; b) the family, parameterized by $z\in {\mathbb C}\setminus \{ 0,1 \}$, of two-dimensional representations: $$ \pi_z(q_1)= \begin{pmatrix} 1&0\\ 0&0 \end{pmatrix}, \quad \pi_z(q_2)= \begin{pmatrix} z&1\\ z-z^2&1-z \end{pmatrix}. $$ Every $\pi\in \irrep Q_2$ is one- or two-dimensional. Indeed, the space $H={\mathbb C}\langle e_{\lambda},\pi(v)e_{\lambda} \rangle$ ($e_{\lambda}$ is an eigenvector of $\pi(u)\cdot \pi(v)$, $\| e_{\lambda}\| =1$, $\lambda\ne 0$) is invariant under the representation $\pi$. \medskip\noindent\textbf{2.} In the algebra $Q_2$, one can introduce two natural structures of an algebra with involution: 1) the $*_1$-algebra \begin{align*} {{\mathcal P}}_2&={\mathbb C}\langle p_1^{*_1}=p_1,p_2^{*_1}=p_2\mid p_1^2=p_1,p_2^2=p_2\rangle \\ &={\mathbb C}\langle u^{*_1}=u,v^{*_1}=v\mid u^2=v^2=e\rangle \\ &= {\mathbb C}\left[ {\mathbb Z}_2*{\mathbb Z}_2\right] \end{align*} is the group $*_1$-algebra generated by two unitary self-adjoint generators. Irreducible two-dimensional $*$-representations of ${{\mathcal P}}_2$ (up to unitary equivalence) are: $$ \pi_{\phi}(p_1)= \begin{pmatrix} 1&0\\ 0&0 \end{pmatrix}, \quad \pi_{\phi}(p_2)= \begin{pmatrix} \cos^2\phi&\cos\phi\sin\phi\\ \cos\phi\sin\phi&\sin^2\phi \end{pmatrix}, $$ $\phi \in(0,\pi/2)$. They are equivalent to the representations $\pi_z$, $z\in(0,1)\subset {\mathbb R}$. 2) the $*_2$-algebra $$ {{\mathcal Q}}_1={\mathbb C}\langle q_1, q_1^{*_2}\mid q_1^2=q_1\rangle={\mathbb C}\langle u, u^{*_2}\mid u^2=e\rangle $$ is a ${*_2}$-algebra generated by an idempotent and its adjoint.
Irreducible two-dimensional $*$-representations of ${{\mathcal Q}}_1$ are: $$ \pi_{\alpha}(q_1)=\begin{pmatrix} 1&\alpha\\ 0&0 \end{pmatrix} ,\qquad \alpha>0. $$ They are equivalent to the representations $\pi_z$, $z\in(1,\infty) \subset {\mathbb R}$. \medskip\noindent\textbf{3.} The algebra $Q_2$, as well as both $*$-algebras constructed from it, has a residual family of finite-dimensional representations. \begin{proposition} The two-dimensional representations $\pi_z$, $z\in {\mathbb C}\setminus \{ 0,1 \}$, \textup(as well as $\pi_{\phi}$, $\phi\in(0,{\pi}/{2})$, and $\pi_{\alpha}$, $\alpha>0$\textup) form a residual family. $Q_2$ is an ${{F}}_4$-algebra. \end{proposition} \begin{proof} Let us consider any $x\in Q_2$, \begin{align*} x = \alpha_0 & +\sum_{i=1}^{N_1} a_i (q_1q_2)^i +\sum_{j=1}^{N_2}b_j (q_2q_1)^j \\ \displaybreak[0] &+ \sum_{k=0}^{N_3}c_k (q_1q_2)^k q_1 +\sum_{l=0}^{N_4}d_l (q_2q_1)^l q_2, \end{align*} where $\alpha_0 $, $a_i $, $b_j $, $c_k$, $d_l\in\mathbb{C}$. Then one has \begin{align*} \pi_{z}(x)&= \begin{pmatrix} \alpha_0 & 0 \\[3pt] 0 & \alpha_0 \end{pmatrix} + \begin{pmatrix} \sum_{i=1}^{N_1}a_i z^i & \sum_{i=1}^{N_1}a_i z^{i-1}\\[3pt] 0 & 0 \end{pmatrix} \\ &+ \begin{pmatrix} \sum_{j=1}^{N_2}b_j z^j & 0 \\[3pt] \sum_{j=1}^{N_2}b_j z^j (1-z) & 0 \end{pmatrix} + \begin{pmatrix} \sum_{k=0}^{N_3}c_k z^k & 0\\[3pt] 0 & 0 \end{pmatrix} \\ &+ \begin{pmatrix} \sum_{l=0}^{N_4}d_l z^{l+1} & \sum_{l=0}^{N_4}d_l z^l\\[3pt] \sum_{l=0}^{N_4}d_l z^{l+1}(1-z) & \sum_{l=0}^{N_4}d_l z^l (1-z) \end{pmatrix}. \end{align*} It easily follows from the structure of the matrix $\pi_{z}(x)$ that $\pi_{z}(x)=0$ for all $z\in \mathbb{C} \setminus \{0,1\}$ if and only if $x=0$. \end{proof} \noindent\textbf{4.} The structure of indecomposable representations of the algebra $Q_2$ is more complicated than that of the irreducible ones. Let us present the list of all indecomposable representations of $Q_2$~\cite{naz,127}.
For $\dim H = 2 k$, \[ \pi_{\lambda}(u)= \begin{pmatrix} I_k & 0\\ 0& -I_k \end{pmatrix},\quad \pi_{\lambda}(v)=\pm \begin{pmatrix} A & B \\ C & D \end{pmatrix}, \] where $I_k$ is the identity matrix of order $k$, and \begin{gather*} B= \begin{pmatrix} -2(1-\lambda)^{-1}&-2(1-\lambda)^{-2}&\cdots & -2 (1-\lambda)^{-k}\\ &-2(1-\lambda)^{-1} &\ddots &\vdots\\ \hfill\smash{\text{\Large 0}}& &\ddots & -2(1-\lambda)^{-2}\\ & & & -2(1-\lambda)^{-1}\\ \end{pmatrix}, \\ C= \begin{pmatrix} 2\lambda (1-\lambda)^{-1}& 2(1-\lambda)^{-2}&\cdots & 2(1-\lambda)^{-k}\\ & 2\lambda(1-\lambda)^{-1}& \ddots & \vdots\\ \hfill\smash{\text{\Large 0}}& & \ddots & 2(1-\lambda)^{-2} \\ & & & 2\lambda (1-\lambda)^{-1} \end{pmatrix}, \\* A = -B - I_k, \quad D = -C - I_k, \qquad \lambda\in \mathbb{C} \setminus \{1\}. \end{gather*} For $\dim H = 2k+1$, \[ \pi_{\lambda}(u)= \begin{pmatrix} I_{k+1} &0 \\ 0 &- I_k \end{pmatrix}, \quad\pi_{\lambda}(v)=\pm \begin{pmatrix} A & B \\ C & D \end{pmatrix}, \] where \begin{gather*} A= \begin{pmatrix} 1 & 2 & \cdots & 2\\ & 1 &\ddots &\vdots\\ && \ddots & 2\\ \smash{\text{\Large 0}} & & & 1 \end{pmatrix}, \quad B= \begin{pmatrix} -2&\cdots & -2 \\ 0 &\ddots &\vdots\\ &\ddots & -2\\ \smash{\text{\Large 0}} & & 0 \end{pmatrix}, \\[3pt] C= \begin{pmatrix} 0 & 2 &\cdots & 2 & 2\\ & 0 & \ddots & \vdots & \vdots\\ & & \ddots & 2 & 2\\ \smash{\text{\Large 0}}& & & 0 & 2 \end{pmatrix}, \quad D= \begin{pmatrix} -1 & -2 & \cdots & -2\\ &-1& \ddots &\vdots \\ & &\ddots & -2\\ \smash{\text{\Large 0}} & & & -1 \end{pmatrix}. \end{gather*} The algebra $Q_2$ is the group algebra of the Coxeter group $\twopointinf$ generated by two flips without any relations. 
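As a check of the formulas above for $\dim H=2k$, consider the smallest case $k=1$, where all the blocks reduce to scalars:
\[
B=\frac{-2}{1-\lambda},\quad C=\frac{2\lambda}{1-\lambda},\quad
A=-B-1=\frac{1+\lambda}{1-\lambda},\quad D=-C-1=-\frac{1+\lambda}{1-\lambda}.
\]
Then $\operatorname{tr}\pi_{\lambda}(v)=A+D=0$ and
\[
\det\pi_{\lambda}(v)=AD-BC=\frac{-(1+\lambda)^2+4\lambda}{(1-\lambda)^2}=-1,
\]
so the Cayley--Hamilton theorem gives $\pi_{\lambda}(v)^2=I$, i.e., the relation $v^2=e$ indeed holds.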
\medskip\noindent\textbf{5.} Consider the Coxeter group $G_M$ with a matrix $M = (m_{ij})_{i,j=1}^m$, $m_{ij} \in {\mathbb N} \cup \{\infty\}$; $m_{ii} =1$; $m_{ij} = m_{ji} >1$, $i \ne j$; $i$, $j=1$, \dots, $m$, which is defined in terms of generators $(w_i)_{i=1}^m$ and the relations $(w_i w_j)^{m_{ij}} =e$, $i$, $j=1$, \dots, $m$; if $m_{ij} =\infty$, then there is no relation between the generators $w_i$ and $w_j$. If the Cartan matrix $K = \bigl( - \cos \pi /m_{ij}\bigr)_{i,j=1}^m$, which corresponds to $M$, is positive definite (all its principal minors are positive), then the group $G_M$ is finite; if $\det K =0$, but the other principal minors are positive, then the group $G_M$ is infinite, but $G_M$ is a semi-direct product of the lattice ${{\mathbb Z}}^{m-1}$ and a finite group $G_f(M)$, $G_M = {{\mathbb Z}}^{m-1} \rtimes G_f(M)$; see~\cite{121} and others. Since the Coxeter group $G_M$ is generated by flips $w_i$, $w_i^2 =e$, $i=1$, \dots, $m$, the algebra ${{\mathbb C}}[G_M]$ also gives an example of an algebra generated by $m$ idempotents. There is a natural involution in ${{\mathbb C}}[G_M]$ such that all of the group elements are unitary, $g^* = g^{-1}$. (Generally speaking, this is not the only involution that can be defined on ${{\mathbb C}}[G_M]$.) The dimensions of the irreducible $*$-representations $\pi_{\alpha}$ of the group \hbox{$*$-al}gebra of the Coxeter group $G_M={\mathbb Z}^{m-1}\rtimes G_f$ are majorized by the number $|G_f|$. These representations form a residual family, since the irreducible $*$-representations of ${\mathbb C}[G_M]$, with the involution $g^*=g^{-1}$, form a residual family. Hence ${\mathbb C}[G_M]$ is an ${{F}}_{2| G_f|}$-algebra generated by flips. \begin{remark} It is a very difficult problem to describe indecomposable representations of ${\mathbb C}[G_M]$ (except for the case where the Coxeter group $G_M$ is a finite group or is ${\mathbb Z}\rtimes{\mathbb Z}_2$) \cite{bon_dr}.
\end{remark} \noindent\textbf{6.} Now let ${{\mathfrak A}}_k= {\mathbb C}\langle u_1^{(k)},\dots,u_{n_k}^{(k)}\mid ( )_k\rangle$ be ${{F}}_{2m_k}$-algebras generated by the flips $u_1^{(k)}$, \dots, $u_{n_k}^{(k)}$ and relations $( )_k$, such that the representations $\pi^{(k)}$ form a residual family with $\dim H_{\pi^{(k)}}\le m_k$, $k=1$, \dots, $n$. Of course, the algebra \[ {\mathbb C}\langle u_1^{(1)},\dots,u_{n_n}^{(n)}\mid ( )_1,\dots,( )_n,\,[u_i^{(k)},u_j^{(l)}]=0, \,k\ne l\rangle \] is an ${{F}}_{2m_1\cdot\dots\cdot m_n}$-algebra having the residual family $\pi^{(1)}\otimes\dots\otimes\pi^{(n)}$. \medskip\noindent\textbf{7.} Examples of algebras that we will consider in the sequel are also defined by generators $u_1^{(1)}$, \dots, $u_{n_n}^{(n)}$, but if the upper indices are not equal, the generators pairwise commute or anti-commute. In items 7 and 8, these relations are as follows: $u_i^{(k)}u_j^{(l)}=\epsilon_{kl}\,u_j^{(l)}u_i^{(k)}$, $k\ne l$ ($\epsilon_{kl}=+1$ or $-1$, $\epsilon_{kl}=\epsilon_{lk}$), $k$, $l=1$, \dots, $n$, and they do not depend on $i=1$, \dots, $n_k$ and $j=1$, \dots, $n_l$. Let ${{\mathfrak A}}_{n,\epsilon}$ be the algebra generated by $s_1$, $\dots$, $s_n$, \begin{gather*} {{\mathfrak A}}_{n,\epsilon}={\mathbb C}\bigl< s_1, \dots,s_n \mid s_i^2=e,\, s_is_j=\epsilon_{ij}s_js_i;\, i,j=1,\dots,n\bigr>, \\ \epsilon=(\epsilon_{ij}), \qquad \epsilon_{ii}=1. \end{gather*} The algebra ${{\mathfrak A}}_{n,\epsilon}$ is finite-dimensional and semi-simple; it has a finite residual family of irreducible $*$-representations $\pi_p$ with $s_i^*=s_i$, $i=1$, \dots, $n$, and is an ${{F}}_m$-algebra, where $m\ge 2^{{n}/{2}+1}$. \medskip\noindent\textbf{8.} Let ${{\mathfrak B}}_{({{\mathfrak A}}_k),\epsilon}={\mathbb C}\langle u_1^{(1)}, \dots, u_{n_n}^{(n)}\mid ( )_1,\dots,( )_n;\, u_i^{(k)}u_j^{(l)}=\epsilon_{kl}u_j^{(l)}u_i^{(k)},\allowbreak\, k\ne l,\, k,l=1,\dots,n; \, i=1,\dots,n_k,\, j=1,\dots,n_l\rangle$.
This is an ${{F}}_{2^{n+1}m_1\cdot\dots\cdot m_n}$-algebra which has a residual family of \hbox{$*$-rep}\-re\-sen\-tations $\pi^{(1)}\otimes\dots\otimes\pi^{(n)}\otimes\pi_p$ with $\dim{H_{\pi^{(1)}\otimes\dots\otimes\pi^{(n)}\otimes\pi_p}}\le2^n \cdot m_1\cdot\dots\cdot m_n$ (here, $\pi^{(1)}\otimes\dots\otimes\pi^{(n)}\otimes\pi_p(u_i^{(k)})= 1\otimes\dots\otimes\pi^{(k)}(u_i^{(k)})\otimes\dots\otimes 1\otimes\pi_p(s_k)$). \medskip\noindent\textbf{9.} In items 7 and 8 above, the generators $u_i^{(k)}$ and $u_j^{(l)}$ commute or anti\-commute independently of $i$ and $j$. In this item, whether the generators $u_i^{(k)}$ and $u_j^{(l)}$ commute or not depends on $i$, $j$. Let $A_k={\mathbb C}\langle u,v,s_1,\dots,s_k\mid u^2=v^2=s_i^2=e,\, i=1,\dots,k, \allowbreak\, us_i=\alpha_is_iu,\, vs_i=\beta_is_iv,\, s_is_j=\epsilon_{ij}s_js_i;\, i,j=1,\dots,k\rangle$, where $\alpha_i=\pm 1$, $\beta_i=\pm 1$, $ \epsilon_{ij}=\epsilon_{ji}=\pm 1$; $i\ne j$, $\epsilon_{ii}=1$. Of course, this algebra should have been denoted by $A_{k, \alpha,\beta,\epsilon}$, but we leave out the symbols $\alpha$, $\beta$, and $\epsilon$ for brevity. Let us notice that the algebras ${\mathbb C}\langle u,v,s_1\mid us_1=-s_1u,\, vs_1=-s_1v\rangle$ and ${\mathbb C}\langle u,v,s_1\mid us_1=-s_1u,\, vs_1=s_1v\rangle$ were considered in \cite{124,125}. \begin{lemma} The algebra $A_k$ is an ${{F}}_{2^{k+2}}$-algebra and has a residual family ${{\mathcal L}}_k$ satisfying the condition\textup: $\forall \pi\in{{\mathcal L}}_k$, $\dim H_{\pi}\le 2^{k+1}$. \end{lemma} \begin{proof} For $k=0$, the algebra $A_0=Q_2$, so $A_0$ is an ${{F}}_4$-algebra and has a residual family ${{\mathcal L}}_0$. Assume that, for $k=n$, all the algebras $A_n$ are ${{F}}_{2^{n+2}}$-algebras and there exist residual families ${{\mathcal L}}_n$ with $\dim H_{\pi}\le 2^{n+1}$. For the induction step, consider the algebra $A_{n+1}$. It contains the subalgebra $B={\mathbb C}\langle u,v,s_1,\dots,s_n\mid \alpha_i,\beta_i,\epsilon_{ij}\rangle$.
Clearly, $B$ is isomorphic to some $A_n$. Therefore, the claim is true for $B$. Let ${{\mathcal L}}_n$ be a residual family for $B$ with $\dim H_{\pi}\le 2^{n+1}$. We construct a residual family for $A_{n+1}$ by applying the following procedure: for each $\pi \in {{\mathcal L}}_{n}$, $\pi:H_{\pi}\rightarrow H_{\pi}$, introduce $\hat{\pi}\in{{\mathcal L}}_{n+1}$, $\hat{\pi}:H_{\pi}\oplus H_{\pi}\rightarrow H_{\pi}\oplus H_{\pi}$, by \begin{gather*} \hat{\pi}(u)= \begin{pmatrix} \pi(u)&0\\0&\alpha_{n+1}\pi(u) \end{pmatrix},\quad \hat{\pi}(v)= \begin{pmatrix} \pi(v)&0\\0&\beta_{n+1}\pi(v) \end{pmatrix}, \\ \hat{\pi}(s_i)=\begin{pmatrix} \pi(s_i)&0\\0&\epsilon_{n+1,i}\pi(s_i) \end{pmatrix} ,\qquad i=1,\dots,n, \\ \hat{\pi}(s_{n+1})= \begin{pmatrix} 0&I\\I&0 \end{pmatrix}, \end{gather*} where $I:H_{\pi}\rightarrow H_{\pi}$ is the identity operator. Let us show that ${{\mathcal L}}_{n+1}$ is indeed a residual family. For any $x\in A_{n+1}$ there exists a decomposition $x=b_1+s_{n+1}b_2$ with $b_1$, $b_2\in B$. If $b_2\ne 0$, then there exists $\pi\in {{\mathcal L}}_{n}$ such that $\pi(b_2)\ne 0$, and hence $$ \hat{\pi}(x)= \begin{pmatrix} \pi(b_1)&*\\\pi(b_2)&* \end{pmatrix} \ne 0, \qquad \hat{\pi}\in{{\mathcal L}}_{n+1}. $$ If $b_2=0$ and $b_1\ne 0$, then there exists $\pi\in {{\mathcal L}}_n$ with $\pi(b_1)\ne 0$, and hence $\hat{\pi}(b_1)\ne 0$. Note that $\forall \hat{\pi}\in {{\mathcal L}}_{n+1}$, $\dim H_{\hat{\pi}}\le 2^{n+2}$. \end{proof} \begin{remark} There is a natural involution in $A_k$ given by $u^*=u$, $v^*=v$, $s_i^*=s_i$. Since there exists a residual family ${{\mathcal L}}_0$ for $Q_2$ such that for every $\pi \in {{\mathcal L}}_0$ the operators $\pi(u)$, $\pi(v)$ are self-adjoint (see Section~\ref{sec:1.2.2}, item~2), there exists a residual family for $A_k$ satisfying the condition: $\forall \pi \in {{\mathcal L}}_k$ the operators $\pi(u)$, $\pi(v)$, $\pi(s_i)$ are self-adjoint. Therefore, there is a residual family for $A_k$ consisting only of irreducible \hbox{$*$-rep}\-re\-sen\-tations.
For a description of irreducible \hbox{$*$-rep}\-re\-sen\-tations of $A_k$, see also \cite{119}. \end{remark} \begin{remark} For all $k$, $\alpha_i$, $\beta_i$, $\epsilon_{ij}$, the algebra $A_k$ is semi-simple. \end{remark} Moreover, the following theorem holds. \begin{theorem}\label{th:q2} Let $Q_{2,m}=A_m$ with $\alpha_i=1$, $\beta_i=1$, $\epsilon_{ij}=1$ for all $i$, $j$. Then every algebra $A_k$ is isomorphic to $M_{2^n}(Q_{2,m})$ or to $M_{2^n}(Z(A_k))$ for some $n$ and $m$, where $Z(A_k)$ is the center of $A_k$. \end{theorem} \begin{proof} Let us split the proof into four steps. 1) Consider the algebra $A_k={\mathbb C}\langle u,v,s_1,\dots,s_k\mid \alpha_i,\beta_i,\epsilon_{ij} \rangle$. Suppose that there exist $i$, $j$ such that $\epsilon_{ij}=-1$, for example, $s_1s_2=-s_2s_1$. Then, using the following substitution of generators $s_1'=s_1$, $s_2'=s_2$, \begin{align*} s_j'&= \begin{cases} s_j, &\epsilon_{1j}=\epsilon_{2j}=1,\\ s_1s_j, &\epsilon_{1j}=-\epsilon_{2j}=1,\\ s_2s_j, &\epsilon_{1j}=-\epsilon_{2j}=-1,\\ \sqrt{-1}\,s_1s_2s_j,&\epsilon_{1j}=\epsilon_{2j}=-1, \end{cases} \\ u'&= \begin{cases} u,&\alpha_1=\alpha_2=1,\\ s_1u,&\alpha_1=-\alpha_2=1,\\ s_2u,&\alpha_1=-\alpha_2=-1,\\ \sqrt{-1}\,s_1s_2u,&\alpha_1=\alpha_2=-1, \end{cases} \\ v' &= \begin{cases} v,&\beta_1=\beta_2=1,\\ s_1v,&\beta_1=-\beta_2=1,\\ s_2v,&\beta_1=-\beta_2=-1,\\ \sqrt{-1}\,s_1s_2v,&\beta_1=\beta_2=-1, \end{cases} \end{align*} we obtain that $A_k={\mathbb C}\langle u',v',s_1',\dots,s_k'\mid \alpha_1'=\alpha_2'=\beta_1'=\beta_2'=1,\allowbreak\,\epsilon_{12}'=-1, \, \epsilon_{1j}'=\epsilon_{2j}'=1,\,j>2\rangle$. Let $A_{k-2}$ denote the subalgebra $ {\mathbb C}\langle s_3',\dots,s_k',u',v'\rangle$. \begin{lemma} The algebra $A_k$ is isomorphic to $M_2(A_{k-2})$. \end{lemma} \begin{proof} Let $A_{k}$, $A_{k-2}$ be the algebras described above. Then for every $x\in A_{k}$ there exists a unique decomposition $x=s_1'a_1+s_2'a_2+s_1's_2'a_3+a_4$, where $a_i\in A_{k-2}$, $i=1,\dots,4$.
This implies the identity $ x=\bigl((1+s_1')\,s_2'a_1'+(1-s_1')\,s_2'a_2'+(1+s_1')\,a_3'+(1-s_1')\,a_4'\bigr)/2$. It can be easily verified that $\psi \colon A_{k}\rightarrow M_2(A_{k-2})$, \begin{gather*} \psi(s_1')= \begin{pmatrix} e&0\\0&-e \end{pmatrix} ,\quad \psi(s_2')= \begin{pmatrix} 0&e\\e&0 \end{pmatrix} ,\quad \psi(s_j')= \begin{pmatrix} s_j'&0\\0&s_j' \end{pmatrix} , \\ \psi(u')= \begin{pmatrix} u'&0\\0&u' \end{pmatrix} ,\quad \psi(v')= \begin{pmatrix} v'&0\\0&v' \end{pmatrix} , \end{gather*} is an isomorphism. The inverse mapping $\psi^{-1} : M_2(A_{k-2})\rightarrow A_{k}$ is defined by the formula $$ \psi^{-1} \begin{pmatrix} a_3'&a_1'\\a_2'&a_4' \end{pmatrix} =\frac12\bigl((1+s_1')s_2'a_1'+(1-s_1')s_2'a_2'+(1+s_1')a_3'+(1-s_1')a_4'\bigr), $$ which completes the proof of the lemma. \end{proof} Using this lemma, one obtains the following proposition. \begin{proposition} The algebra $A_k={\mathbb C}\langle u,v,s_{1},\dots,s_{k}\mid \alpha_i,\beta_i,\epsilon_{ij}\rangle$ is isomorphic to $M_{2^m}(A_{k-2m})$ for some $m$, where \[ A_{k-2m}= {\mathbb C}\langle u',v',s_{2m+1}',\dots,s_{k}'\mid \alpha_i',\beta_i',\epsilon_{ij}'=1\rangle . \] \end{proposition} Thus, it remains to study the structure of the algebra $A_k$ under the condition $\epsilon_{ij}=1$. 2) We further assume, without loss of generality, that for some $m\in{\mathbb N}$, the relations $\alpha_i=\beta_i$, $1\le i<m$, $\alpha_i\ne \beta_i$, $m\le i\le k $, hold. Let us introduce the new generators $ u'=u$, $v'=v$, $$ s_j'= \begin{cases} s_j,&1\le j<m,\\ s_js_k, & m\le j<k. \end{cases} $$ Then $A_k={\mathbb C}\langle u',v',s_1',\dots,s_k'\mid \alpha_i=\beta_i,\,i<k,\,\alpha_k=\pm 1,\,\beta_k=\pm 1\rangle $, and there are two possibilities: $\alpha_k=\beta_k$ or $\alpha_k\ne\beta_k$. The first case is considered in 3), and the second in 4).
3) Let us arrange the family $\{s_j\}$ so that, for some $m$, the conditions $\alpha_i=\beta_i=1$, $1\le i<m$, and $\alpha_i= \beta_i=-1$, $m\le i\le k $, hold. Using the new generators $s_i'=s_i$, $1\le i<m$, $s_i'=s_is_{k}$, $m\le i<k$, $u'=u$, $v'=v$, we obtain the algebra $A_k$ with the coefficients $\alpha_i'=\beta_i'=1$, $i<k$. If $\alpha_k'=\beta_k'=1$, then the algebra $A_k$ is isomorphic to $Q_{2,k}$. In the case where $\alpha_k'=\beta_k'=-1$, we have the following proposition. \begin{proposition} $A_k={\mathbb C}\langle u,v,s_1,\dots,s_k\mid \alpha_i=\beta_i=1,\, i<k,\allowbreak\,\alpha_k=\beta_k=-1\rangle \cong M_2(Z(A_k))$. \end{proposition} \begin{proof} Let us denote $B={\mathbb C}\langle s_1,\dots,s_{k-1},f,f^{-1} \rangle $, $f=(1+s_k)uv+(1-s_k)vu$, and write any $x\in A_k$ in the form \[ x=\frac12\bigl((1+s_k)\,a_1+(1-s_k)\,a_2+(1+s_k)\,ua_3+(1-s_k)\,ua_4\bigr), \] $a_i\in B$. Note that this decomposition is unique. Indeed, if we have another one, $$ x=\frac12\bigl((1+s_k)\,b_1+(1-s_k)\,b_2+(1+s_k)\,ub_3 +(1-s_k)\,ub_4\bigr), $$ $b_i\in B$, then we obtain the identity $0=1/2\bigl((1+s_k)(b_1-a_1)+(1-s_k)(b_2-a_2) +(1+s_k)u(b_3-a_3)+(1-s_k)u(b_4-a_4)\bigr)$. Multiplying this identity by $1+s_k$ on both sides, we conclude that \[ 0=2(1+s_k)(b_1-a_1). \] Combining it with the identity \[ u(1+s_k)(b_1-a_1)u=(1-s_k)(b_1-a_1)=0, \] we obtain that $b_1-a_1=0$. In the same way we show that $b_2=a_2$, $b_3=a_3$, $b_4=a_4$. Now we can write the formula for the isomorphism $\psi \colon A_k\rightarrow M_2(B)$, $$ \psi(x)= \begin{pmatrix} a_1&a_3\\a_4&a_2 \end{pmatrix} . $$ A direct verification shows that $\psi$ is an isomorphism and that the algebra $B$ is isomorphic to the center $Z(A_k)$. \end{proof} 4) Consider the second possibility. One can assume that $\alpha_k=-\beta_k=-1$ (otherwise we replace $u$ with $v$ or vice versa). By the previous results, we have $\alpha_{j}=\beta_{j}$, $j<k$.
Then, by the method described in 3), we obtain the identities $ \alpha_{j}=\beta_{j}=1$, $j<k-1$, $\alpha_{k-1}=\beta_{k-1}$. Thus, $\alpha_{j}=\beta_{j}=1$, $j<k-1$, $\alpha_k = -\beta_{k}$, and $\alpha_{k-1}$ equals $+1$ or $-1$. These cases are considered in the following propositions. \begin{proposition} $A_k={\mathbb C}\langle u,v,s_1,\dots,s_k\mid \alpha_i=\beta_i=1,\,i<k,\,\alpha_k=-\beta_k=-1\rangle \cong M_2(Q_{2,k-1})$. \end{proposition} \begin{proof} If we denote $v_1=1/2((1+s_k)v+(1-s_k)uvu)$, $v_2=1/2((1-s_k)v+(1+s_k)uvu)$, then \[A_k\cong {\mathbb C}\langle u,v_1,v_2, s_1,\dots,s_{k}\rangle\cong M_2(A_{k-1})\cong M_2(Q_{2,k-1}), \] where $A_{k-1}={\mathbb C}\langle v_1,v_2, s_1,\dots,s_{k-1}\rangle\cong Q_{2,k-1} $. Indeed, any $x\in A_k$ can be represented in the form $x=1/2\bigl((1+s_k)\,a_1+(1-s_k)\,a_2+(1+s_k)\,ua_3+(1-s_k)\,ua_4\bigr)$, $a_i\in A_{k-1}$, and the formula \[ \psi(x)=\begin{pmatrix} a_1&a_3\\a_4&a_2 \end{pmatrix} \] gives the needed isomorphism $\psi \colon A_k\rightarrow M_2(A_{k-1})$. \end{proof} \begin{proposition} $ A_k={\mathbb C}\langle u,v,s_1,\dots,s_k\mid \alpha_i=\beta_i=1,\,i<k-1,\allowbreak \, \alpha_{k-1}=\beta_{k-1}=-1, \,\alpha_k=-\beta_k=-1 \rangle \cong M_4(Z(A_k)). $ \end{proposition} \begin{proof} It is easy to verify that the element $1/2\bigl((1+s_k)(uv)^2+(1-s_k)(vu)^2\bigr)$ lies in the center of the algebra $A_k$ and, moreover, this element together with the family $\{s_i,\,i<k-1\}$ generates the center.
Then the formulas \begin{gather*} \psi(x)= \begin{pmatrix}x&0&0&0\\ 0&x&0&0\\ 0&0&x&0\\ 0&0&0&x \end{pmatrix},\qquad x\in Z(A_{k}), \\[3pt] \psi(u)= \begin{pmatrix}0&0&1&0\\ 0&0&0&1\\ 1&0&0&0\\ 0&1&0&0 \end{pmatrix},\quad \psi(1/2(1+s_{k-1})v)= \begin{pmatrix}0&1&0&0\\ 1&0&0&0\\ 0&0&0&0\\ 0&0&0&0 \end{pmatrix} , \\[3pt] \psi(s_{k-1})= \begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&-1&0\\ 0&0&0&-1 \end{pmatrix} ,\quad \psi(s_k)= \begin{pmatrix}1&0&0&0\\ 0&-1&0&0\\ 0&0&1&0\\ 0&0&0&-1 \end{pmatrix} , \end{gather*} determine the needed isomorphism $\psi\colon A_k\rightarrow M_4(Z(A_{k}))$. \end{proof} \nobreak The proof of the theorem is complete. \end{proof} By using Theorem~\ref{th:q2}, it is possible to obtain a description of all irreducible representations of the algebras $A_k$. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Non-commutative ``circle'', ``pair of intersecting lines'' and ``hyperbola''. More examples of $F_4$-alge\-bras}\label{sec:1.2.3} \noindent\textbf{1.} Now we describe bounded irreducible solutions of the relations ($II_1$) $A^2+B^2=I$ (``non-commutative circle''), ($III_0$) $A^2-B^2 =0$ (``non-commutative pair of intersecting lines''), which is the same as the relation $\{\widetilde{A},\widetilde{B}\}=0$, and ($III_1$) $A^2 -B^2 =I$ (``non-commutative hyperbola''), which is the same as the relation $\{\widetilde{A},\widetilde{B}\}=I$. We show that these relations are $F_4$-relations, i.e., the corresponding algebras are ${F}_4$-algebras.
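The equivalence with the anticommutation relations is given by a linear substitution; for instance, one may take $\widetilde{A}=(A+B)/\sqrt{2}$ and $\widetilde{B}=(A-B)/\sqrt{2}$ (the normalization is chosen so that the constants match): then
\[
\{\widetilde{A},\widetilde{B}\}
=\tfrac12\bigl((A+B)(A-B)+(A-B)(A+B)\bigr)
=A^2-B^2,
\]
so $(III_0)$ becomes $\{\widetilde{A},\widetilde{B}\}=0$, and $(III_1)$ becomes $\{\widetilde{A},\widetilde{B}\}=I$; moreover, $\widetilde{A}$, $\widetilde{B}$ are self-adjoint whenever $A$, $B$ are.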
\begin{proposition} Irreducible self-adjoint solutions $A$, $B$ of the relations $(II_1),\ (III_0),\ (III_1)$ are the following\textup: \begin{itemize} \item[$1)$] one-dimensional \textup($\dim H =1$\textup), $A=\lambda_1\mathbf{1}$, $B=\lambda_2\mathbf{1}$, where the pair $(\lambda_1,\lambda_2)$ belongs to the circle $K_{(II_1)}^{(1)}=\{ (\lambda_1,\lambda_2)\in\mathbb{R}^2\mid\ \lambda_1^2+\lambda_2^2 =1\}$, the pair of intersecting lines $K_{(III_0)}^{(1)}=\{(\lambda_1,\lambda_2)\in\mathbb{R}^2\mid\ \lambda_1^2=\lambda_2^2\}$, or the hyperbola $K_{(III_1)}^{(1)}=\{(\lambda_1,\lambda_2)\in\mathbb{R}^2\mid\ \lambda_1^2-\lambda_2^2 =1\}$, respectively\textup; \item[$2)$] two-dimensional \textup($\dim H =2$\textup), \[ A= \lambda_1 \begin{pmatrix} 1& 0\\ 0 & -1 \end{pmatrix}, \quad B= \lambda_2 \begin{pmatrix} \cos\phi & \sin\phi \\ \sin\phi & -\cos\phi \end{pmatrix}, \] $0<\phi<\pi$, where the pair $(\lambda_1,\lambda_2)$ belongs respectively to the sets\textup: \begin{align*} K_{(II_1)} &=\{(\lambda_1,\lambda_2)\mid \lambda_1>0 ,\,\lambda_2>0,\, \lambda_1^2+\lambda_2^2 =1\},\\ K_{(III_0)} &=\{(\lambda_1,\lambda_2)\mid \lambda_1>0 ,\,\lambda_2>0,\, \lambda_1^2-\lambda_2^2 =0\}, \\ K_{(III_1)}&=\{(\lambda_1,\lambda_2)\mid \lambda_1>0, \,\lambda_2>0,\, \lambda_1^2-\lambda_2^2 =1\}. \end{align*} \end{itemize} \end{proposition} \begin{proof} Since the operators $A^2$ and $B^2$ belong to the center of the corresponding algebra (for instance, for $(II_1)$, $A^2=I-B^2$ commutes with both $A$ and $B$), they are scalar in an irreducible representation, $A^2=\lambda_1^2I$, $B^2 =\lambda_2^2 I$. If the representation is not one-dimensional, then $\lambda_1>\nobreak0$, $\lambda_2>0$, and $(A/\lambda_1)^2 =(B/\lambda_2)^2=I$. Then the proposition follows from the description of irreducible pairs of unitary self-adjoint operators $U=A/\lambda_1$, $V=B/\lambda_2$. \end{proof} \begin{remark} For the ``circle'' $(II_1)$, $A^2+B^2=I$, only bounded solutions exist, since $\Vert A\Vert\le 1$, $\Vert B \Vert\le 1$.
In contrast with this, for the relations $(III_0)$ and $(III_1)$, a class of ``integrable'' representations by unbounded operators was defined and investigated (see~\cite{fa} and others). However, there is no need to use unbounded operators for studying the irreducible representations of the relations $(III_0)$ and $(III_1)$: the operators $A^2$ and $B^2$ commute with both $A$ and $B$, and hence are scalar in any irreducible representation. Thus the operators $A$, $B$ are bounded in any irreducible representation; these representations are listed in the proposition above. If we consider reducible representations of ($III_0$) and ($III_1$) by unbounded operators, new representations appear (see~\cite{fa} and others). \end{remark} \pagebreak[2] \noindent\textbf{2.} We have the following proposition. \nopagebreak \begin{proposition} The following algebras \textup(without involution\textup) \[ \mathbb{C}\bigl< x,y\mid x^2 + y^2 = e \bigr> =\mathbb{C}\bigl< a,b\mid a^2 - b^2 = e \bigr>, \] and \[ \mathbb{C}\bigl< x,y\mid x^2 = y^2 \bigr>= \mathbb{C}\bigl< a,b\mid \{a, b\} = 0 \bigr>, \] are $F_4$-algebras. \end{proposition} \begin{proof} We give the proof for the algebra $\mathbb{C}\bigl< a,b\mid\{a, b\} = 0 \bigr>$ defined by two generators $a$, $b$ and the relation $ab + ba =0$. As a vector space, it is the same as the space of complex polynomials in two variables, but with the following multiplication of terms: \[ a^{k_1}b^{k_2}\cdot a^{j_1}b^{j_2}= (-1)^{k_2\cdot j_1}a^{k_1 +j_1}b^{k_2 +j_2}. \] Let us endow this algebra with the involution defined as follows: \[ a=a^*,\quad b=b^*,\quad (a^{k_1}b^{k_2})^* = (-1)^{k_1\cdot k_2} a^{k_1} b^{k_2}.
\] Irreducible $*$-representations of the $*$-algebra \[ \mathbb{C}\bigl< a,b\mid a=a^*,\, b=b^*,\, \{a, b\} = 0 \bigr> \] are the following: \begin{enumerate} \item[1)] one-dimensional ($\dim H = 1$), $A=\lambda_1I$, $B=\lambda_2I$, where the pair $(\lambda_1,\lambda_2)$ belongs to the set \[ K^{(1)}=\{(\lambda_1,\lambda_2)\in\mathbb{R}^2\mid\lambda_1\lambda_2 = 0\}; \] \item[2)] two-dimensional ($\dim H = 2$), \[ A=\lambda_1 \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} , \quad B=\lambda_2 \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} , \] where the pair $(\lambda_1,\lambda_2)$ belongs to \[ K^{(2)}=\{(\lambda_1,\lambda_2)\in\mathbb{R}^2\mid \lambda_1>0,\, \lambda_2>0\}. \] \end{enumerate} Let us show that these representations separate elements of the algebra. Let \[ x=\alpha e + \beta a +\gamma b +\sum_{i,j} c_{ij}a^i b^j. \] If $\pi (x)=0$ for all one-dimensional representations, then $\alpha=\beta=\gamma =0$. If, further, $\pi (x)= 0$ for all two-dimensional representations, then we have: \begin{align*} \pi (x)&= \pi \Bigl(\sum_{i,j}c_{ij}a^i b^j\Bigr) \\ &= \Bigl(\sum_{i=2k,j=2l}c_{ij}\lambda_1^i\lambda_2^j\Bigr) \begin{pmatrix} 1& 0\\ 0 & 1 \end{pmatrix} \\ &\quad{}+\Bigl(\sum_{i=2k+1,j=2l}c_{ij}\lambda_1^i\lambda_2^j\Bigr) \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix} \\ &\quad{}+ \Bigl(\sum_{i=2k,j=2l+1}c_{ij}\lambda_1^i\lambda_2^j\Bigr) \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} \\ &\quad{}+ \Bigl(\sum_{i=2k+1,j=2l+1}c_{ij}\lambda_1^i\lambda_2^j\Bigr) \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} = 0, \end{align*} which implies $c_{ij}= 0$ for all $i$, $j$, i.e., $ x=0$. Then, by Theorem~\ref{th:r.f}, the algebra $\mathbb{C}\bigl<x,y\mid x^2=y^2\bigr> =\mathbb{C}\langle a,b \mid \{a, b\}=0 \rangle$ is an ${F}_4$-algebra. \end{proof} \noindent\textbf{3.} Let us notice that the algebra $Q_{4,2}=\mathbb{C}\bigl< q_1, q_2, q_3, q_4\mid q_i^2=q_i,\, i=1,2,3,4;\, \sum_{i=1}^{4}q_i=2e \bigr>$ is also an $F_4$-algebra.
Its representations will be described below in Section~\ref{sec:2.2.1}. \medskip\noindent\textbf{4.} The algebra $\mathbb{C}\bigl< q_1, q_2, q_3, q_4\mid q_k^2=q_k ,\, \bigl[q_k,\sum_{j=1}^{4}q_j\bigr]=0, \, k=1,2,3,4 \bigr>$ is not an $F_n$-algebra for any finite $n$, since it has irreducible representations in all finite dimensions (see Section~\ref{sec:2.2.1}). \medskip\noindent\textbf{5.} The algebra \[ Q_{5,\lambda}=\mathbb{C}\Bigl< q_1,\dots, q_5\mid q_k^2=q_k ,\, k=1,\dots,5; \, \sum_{j=1}^{5}q_j=\lambda e \Bigr> \] is not an $F_n$-algebra for any $\lambda\in \mathbb{C}$ and any finite $n$, since it has an irreducible infinite-dimensional representation (see~\cite{rab_sam_fa}). The construction of this representation, which can be found in~\cite{rab_sam_fa}, generalizes the one for five idempotents with zero sum given in~\cite{bart}. %%% Local Variables: %%% mode: latex %%% TeX-master: "the" %%% End: