%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[a4paper,12pt]{article} \usepackage{amssymb} \usepackage{amsfonts,amsthm} \usepackage{amsmath,texdraw} \usepackage[dvips]{graphicx} \usepackage[english]{babel} \hyphenation{} \usepackage{latexsym} \usepackage{xypic} \input xy \xyoption{all}
\begin{document}
\begin{center} \smallskip {\LARGE Problems proposed for the $11^{th}$ International Mathematics Competition for University Students} \bigskip {\large Skopje, July 23 -- July 29, 2004} \smallskip \end{center}
{\bf Problem 1. M. V. Balashov,\; P. A. Kozhevnikov, Moscow Institute of Physics and Technology} This nice problem appeared after a seminar on nonlinear analysis. Let the function $f:\mathbb{R}\to\mathbb{R}$ be continuous at a point $x_{0}$. This means that $$ \forall\varepsilon>0\ \exists\ \delta>0\quad \forall x\in (x_{0}-\delta,x_{0}+\delta)\qquad |f(x)-f(x_{0})|<\varepsilon.\eqno(*) $$ Prove that for every $\varepsilon>0$ one can choose a value $\delta=\delta (\varepsilon)$ satisfying $(*)$ in such a way that $\delta (\varepsilon)$ is a continuous function of $\varepsilon$ on the interval $(0,+\infty)$. \bigskip {\it Solution}. For any $\varepsilon>0$ consider $$ \Delta (\varepsilon) = \sup\{ \delta>0\ |\ \forall x\in (x_{0}-\delta,x_{0}+\delta)\qquad |f(x)-f(x_{0})|<\varepsilon\}. $$ By the definition and by $(*)$, $\Delta (\varepsilon)\in (0,+\infty]$ for all $\varepsilon>0$. Clearly, if $0<\varepsilon_{1}\le\varepsilon_{2}$ then $\Delta (\varepsilon_{1})\le \Delta (\varepsilon_{2})$, i.e.\ $\Delta$ is a nondecreasing function, and there exists $\varepsilon_{0}>0$ such that $\Delta (\varepsilon)<+\infty$ for all $\varepsilon\in (0,\varepsilon_{0})$ (otherwise $f$ is a constant function and the statement is obvious). Condition $(*)$ is valid for $\delta = \Delta (\varepsilon)$.
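The definition of $\Delta(\varepsilon)$ can also be explored numerically (a sketch only, not part of the proof; the sample function, base point, and scan resolution below are our own choices):

```python
import math

def delta_sup(f, x0, eps, delta_max=10.0, samples=20000):
    """Approximate Delta(eps) = sup{ delta > 0 : |f(x)-f(x0)| < eps
    for all x in (x0-delta, x0+delta) } by scanning outward from x0."""
    fx0 = f(x0)
    step = delta_max / samples
    d = 0.0
    while d < delta_max:
        # probe the shell x0 +/- (d, d+step] at a few sample points
        probes = (d + 0.25 * step, d + 0.5 * step, d + step)
        if any(abs(f(x0 + s * t) - fx0) >= eps
               for s in (-1.0, 1.0) for t in probes):
            return d
        d += step
    return delta_max

f = lambda x: x * x  # sample function; at x0 = 1, Delta(eps) = sqrt(1+eps) - 1 for eps < 1
x0 = 1.0
vals = [delta_sup(f, x0, e) for e in (0.1, 0.2, 0.4, 0.8)]
print(vals)
# Delta is nondecreasing in eps -- the monotonicity used in the proof
assert all(a <= b + 1e-9 for a, b in zip(vals, vals[1:]))
```

The scan is only a finite-resolution approximation of the supremum, but it exhibits the monotone behaviour of $\Delta$ that the integral construction below relies on.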
Now let $\delta (\varepsilon) = \int\limits_{0}^{\varepsilon}\Delta (t)\, dt$ for $\varepsilon\le \varepsilon_{0}$ and $\delta (\varepsilon)=\delta (\varepsilon_{0})$ for $\varepsilon>\varepsilon_{0}$, where we replace $\varepsilon_{0}$ by $\min(\varepsilon_{0},1)$ if necessary, so that $\delta(\varepsilon)\le\varepsilon\,\Delta(\varepsilon)\le\Delta(\varepsilon)$. Then $\delta(\varepsilon)$ is positive, satisfies $(*)$, and is continuous as the integral of a locally bounded monotone function; it is the desired function. \bigskip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{\bf Problem 2. Gergely R\"ost, University of Szeged} Let $n \ge t$ be nonnegative integers of the same parity. Consider a rectangle with $n$ points on the top side and $t$ points on the bottom, and $\frac{n+t}2$ nonintersecting arcs connecting these $n+t$ points pairwise. Let $D_1, \dots, D_N$ be all such different diagrams in which every bottom point is connected to some top point. For every pair $1 \le i,j \le N$ flip the diagram $D_i$ horizontally and put it over the diagram $D_j$ so that their bottom points coincide. We obtain a rectangular diagram with $t$ points on the top and $t$ points on the bottom, which are pairwise connected by nonintersecting arcs, and some loops, as in the picture. \begin{figure} \centerline{ \begin{tiny} \xymatrix{ \ar@{-}[rrrrrr]\ar@{-}[dddd]&&\bullet \ar@{-} [ddl] &\bullet \ar@{-}[ddr] &\bullet \ar@{-}[ddr]&&\ar@{-}[dddd]\\ &&&&&&\\ \ar@{.}[rrrrrr]&\bullet&\bullet \ar@{-} `d [r] `[r] \ar@{-} `u[r] `[r]&\bullet&\bullet&\bullet&\\ &&&&&&\\ \ar@{-}[rrrrrr]&&\bullet \ar@{-} [uul]&\bullet \ar@{-} [uur]&\bullet \ar@{-} [uur]&&\\ } \end{tiny}} \end{figure} Denote this diagram by $D_i^* D_j$, and let $k_{ij}$ be the number of loops in $D_i^* D_j$. Consider the matrix $M = (M_{ij})_{i,j=1}^{N}$ where $$ M_{ij} = \begin{cases} x^{k_{ij}}, & \text{if every point of the bottom is connected}\\ & \text{to the corresponding point of the top in } D_i^* D_j,\\ 0, & \text{otherwise}.\\ \end{cases} $$ Prove that: a) $N = \binom{n}{\frac{n-t}2}$; b) $\det(M)$ is a nonzero polynomial in $x$. {\it Remark}.
The matrix $M$ is known in combinatorial representation theory as the Gram matrix of a certain cell module of the Temperley--Lieb algebra. Calculations of $\det M$ are given, for example, in the paper of B.~W.~Westbury, The representation theory of the Temperley--Lieb algebras (Math. Zeitschrift, v.~219 (1995), no.~4, pp.~539--565). The following proof becomes very short when explained in terms of exact sequences and associative forms (see the paper of Westbury), but we carry it out with objects of elementary linear algebra. \bigskip {\it Solution}. Let $F$ be the field of rational functions in $x$. Let $V^{(t,n)}$ be the vector space over $F$ whose basis is the set $D_1, \dots, D_N$ of the diagrams from the problem statement. Consider the bilinear form $R^{(t,n)}: V^{(t,n)} \times V^{(t,n)} \rightarrow F$ defined on basis elements by $R^{(t,n)}(D_i,D_j) = M_{ij}$. The problem statement is equivalent to the assertion that this form is nondegenerate. Let $A_n$ be the set of all diagrams with $n$ points on the top and $n$ points on the bottom, pairwise connected by nonintersecting arcs. Let $a \in A_n$, and let $D_i \in V^{(t,n)}$ be a diagram. Then we can put $a$ over $D_i$ (so that the bottom points of $a$ coincide with the top points of $D_i$) and get some diagram (denote it by $a D_i$) with $n$ top and $t$ bottom points. Let $k$ be the number of loops in $a D_i$, and let $D$ be the diagram obtained by removing all the loops from $a D_i$. Define $$ a(D_i) = \begin{cases} x^k D, & \text{if every point of the bottom is connected}\\ & \text{to some point of the top in } D,\\ 0, & \text{otherwise}.\\ \end{cases} $$ Then $a(D_i) \in V^{(t,n)}$, so $a$ defines a linear transformation of $V^{(t,n)}$. By $a^*$ we always mean the horizontal flip of the diagram $a$. Then for all $x,y \in V^{(t,n)}$ $$ R^{(t,n)}(a(x),y) = R^{(t,n)}(x,a^*(y)), $$ which can be checked directly on the basis diagrams. We are going to prove the statement by induction on $n$.
The base case (any small $n$) can be checked directly, and the cases $t = n$ and $t=n-2$ can also be verified explicitly. Let $t < n-2$, and let the statement be proved for all pairs $(s,n-1)$ of nonnegative integers of the same parity with $s \le n-1$. So all forms $R^{(s,n-1)}$ are nondegenerate. Consider the map $\pi_n: A_{n-1} \rightarrow A_n$, where $\pi_n(a)$ is obtained by adding to the diagram $a$ one rightmost point on the top and one rightmost point on the bottom, and connecting these points by an arc. Consider the map $f : V^{(t,n)} \rightarrow V^{(t-1,n-1)} \oplus V^{(t+1,n-1)}$ defined on diagrams $D_i \in V^{(t,n)}$ as follows. If the rightmost points of the top and the bottom are connected in $D_i$, we remove both of these points together with the connecting arc to get $f(D_i)$, so $f(D_i) \in V^{(t-1,n-1)}$. Otherwise the rightmost point of the top in $D_i$ is connected to some other point of the top, and we move this rightmost point to the bottom, getting $f(D_i) \in V^{(t+1,n-1)}$. One can see that every diagram from $V^{(t-1,n-1)}$ and $V^{(t+1,n-1)}$ is obtained in this way, and that $f$ is invertible. So one gets $N = \dim(V^{(t,n)}) = \binom{n}{\frac{n-t}2}$ by induction. Moreover, $f \circ \pi_n(a) = a \circ f$ for any $a \in A_{n-1}$. Consider the bilinear form $L^{(t,n)}$ on $V^{(t-1,n-1)} \oplus V^{(t+1,n-1)}$ defined by $L^{(t,n)}(x,y) = R^{(t,n)}(f^{-1} x, f^{-1}y)$. Then $L^{(t,n)}(a(x),y) = L^{(t,n)}(x,a^*(y))$ for every $a \in A_{n-1}$. Indeed, $$ L^{(t,n)}(a(x),y) = R^{(t,n)}((f^{-1} \circ a)x,f^{-1} y) = R^{(t,n)}((\pi_n(a) \circ f^{-1})x,f^{-1} y) = $$ $$ R^{(t,n)}(f^{-1} x, (\pi_n(a)^* \circ f^{-1}) y) = R^{(t,n)}(f^{-1} x,(f^{-1}\circ a^*)y) = L^{(t,n)}(x,a^*(y)), $$ as $\pi_n(a)^* = \pi_n(a^*)$. \emph{Lemma}. If $R^{(t,n)}$ is nondegenerate, then there is no nonzero proper subspace $V \subset V^{(t,n)}$ such that $a(V) \subset V$ for every $a \in A_n$. \emph{Proof.} Let $B_n$ be the vector space over $F$ whose basis is the set of diagrams $A_n$.
The action of $A_n$ on $V^{(t,n)}$ extends to an action of $B_n$ by linearity. We use the notation from the problem statement. The diagram $D_iD_j^*$ belongs to $A_n$, and $D_iD_j^*(D_k) = M_{jk}D_i$. So the matrix of $D_iD_j^*$ in the basis of the diagrams is $E_{ij}M$, where $E_{ij}$ is the matrix unit $(\delta_{k,i}\delta_{l,j})_{k,l=1}^N$ and $M$ is the matrix from the statement of the problem. As $M$ is invertible under the assumption of the lemma, any linear transformation of $V^{(t,n)}$ can be obtained by the action of some element of $B_n$. But no nonzero proper subspace of a vector space is invariant under all of its linear transformations. \emph{Lemma is proved.} Denote by $(x,y)$ the canonical bilinear form on $V^{(t-1,n-1)} \oplus V^{(t+1,n-1)}$ with respect to which the basis diagrams are orthonormal. Then $L^{(t,n)}(x,y) = (L x,y)$ for some linear transformation $L$; let us decompose its matrix as $$ L = \begin{pmatrix} L_1 & L_2\\ L_3 & L_4\\ \end{pmatrix} $$ according to the decomposition of the space into $V^{(t-1,n-1)} \oplus V^{(t+1,n-1)}$. As noted above, the components have dimensions $n_1 = \binom{n-1}{\frac{n-t}2}$ and $n_2 = \binom{n-1}{\frac{n-t}2-1}$ respectively, so $n_1 > n_2$. Hence $\mathrm{Ker}(L_3)$ is a nonzero subspace of $V^{(t-1,n-1)}$, and it is invariant under the action of the diagrams from $A_{n-1}$. Indeed, let $x \in \mathrm{Ker}(L_3)$ and $a \in A_{n-1}$. Then for every $y \in V^{(t+1,n-1)}$ $$ (L_3(a(x)),y) = (L a(x),y) = (L x, a^*(y)) = (L_3 x, a^*(y)) = 0, $$ so $a(x) \in \mathrm{Ker}(L_3)$. By the inductive assumption $R^{(t-1,n-1)}$ is nondegenerate, so the lemma above implies $L_3 = 0$. One can prove analogously that $L_2 = 0$. So the matrix $L$ is block diagonal and defines forms on $V^{(t-1,n-1)}$ and $V^{(t+1,n-1)}$ which are compatible with the action of $A_{n-1}$, as was the case for $L^{(t,n)}$. By the inductive assumption and the lemma above, each of these forms is either zero or nondegenerate.
One can easily exhibit, for each of these forms, diagrams on which it is nonzero; hence both forms are nondegenerate. So $L^{(t,n)}$ is nondegenerate, and therefore $R^{(t,n)}$ is nondegenerate as well. \bigskip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{\bf Problem 3. Gergely R\"ost, University of Szeged} Let $S$ be a set of real numbers such that $|s_1+s_2+...+s_k| < 1$ for every finite subset $\{s_1,s_2,...,s_k\} \subset S$. Show that $S$ is countable. \bigskip {\it Solution}. Let $S_n =S \cap (\frac{1}{n}, \infty)$ for every integer $n > 0$. It follows from the hypothesis that $|S_n|<n$: otherwise the sum of some $n$ elements of $S_n$ would exceed $1$. Similarly define $S_{-n}=S \cap (-\infty, -\frac{1}{n})$; then $|S_{-n}| < n$. Every nonzero $s' \in S$ is an element of some $S_n$ or $S_{-n}$, because there exists an $n$ such that $s' > \frac{1}{n}$ or $s' < -\frac{1}{n}$. Then $S\subseteq\{0\}\cup\bigcup_{n \in \mathbb{N}} (S_n \cup S_{-n})$, so $S$ is contained in a countable union of finite sets, hence countable. \bigskip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{\bf Problem 4. Alexander A. Fomin} The lengths of all sides of a convex polygon $A_{1}A_{2}\ldots A_{n}$, $n\ge 3$, are equal to each other. The angles of the polygon are denoted by $\alpha_{1}, \alpha_{2},\ldots, \alpha_{n}$ respectively. Prove that if $\pi>\alpha_{1}\ge \alpha_{2}\ge\ldots\ge\alpha_{n}>0$ then the polygon is regular. \bigskip {\it Solution}. We assume that $A_{k+n}=A_{k}$ for each integer $k$. Let $S_{k}$ be the circle passing through the vertices $A_{k-1}, A_{k}, A_{k+1}$ and let $D_{k}$ be the disk bounded by $S_{k}$. The side $A_{k-1}A_{k}$ cuts the disk $D_{k}$ into two parts. Let $C_{k}$ be the part facing the polygon, and let the segment $X_{k}$ be the intersection of $D_{k}$ and the straight line $A_{n}A_{1}$. In particular, $X_{k}$ may be empty. Let $1<k<n$. Since $\alpha_{k-1}\ge\alpha_{k}$, it follows that $C_{k}$ is contained in $D_{k-1}$ and therefore the segment $X_{k}$ is contained in the segment $X_{k-1}$.
Moreover, if $\alpha_{k-1}=\alpha_{k}$ then $X_{k} = X_{k-1}$. If $\alpha_{k-1}>\alpha_{k}$ then the endpoints of $X_{k}$ are necessarily interior points of the segment $X_{k-1}$. We obtain a sequence of nested segments $[A_{n}, A_{1}] = X_{1}\supseteq X_{2}\supseteq\ldots \supseteq X_{n-1}$. Since the vertex $A_{n}$ belongs to the segment $X_{n-1}$ and cannot be an interior point of any of these segments, it follows that $\alpha_{1}= \alpha_{2}= \ldots =\alpha_{n}$, and the polygon is regular. \bigskip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{\bf Problem 5. Marianna Csornyei, University College London} % mari@math.ucl.ac.uk
For a given triangle $T$ on the plane, let $T_1$ and $T_2$ denote the largest and the smallest regular triangles with $T_1\subset T\subset T_2$, respectively. Prove that $${\rm Area}(T_1)\cdot{\rm Area}(T_2)={\rm Area}(T)^2.$$ \bigskip {\it Solution}. \bigskip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{\bf Problem 6. Ebad S.\ Mahmoodian, Sharif University of Technology, Tehran} % Department of Mathematical Sciences
% Sharif University of Technology
% P.O. Box 11365--9415
% Tehran, I.R. IRAN
Given 1003 nonzero natural numbers (repetition is allowed) with total sum 2004, prove that one can obtain {\it every} number from 1 to 2003 as a partial sum of the given numbers. \bigskip {\it Solution}. \bigskip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{\bf Problem 7. Giedrius Alkauskas, Vilnius University, Lithuania} Let $\alpha$ be a positive real number. Define a sequence of real numbers by $a_{0}=\alpha$, $a_{1}=1$ and then inductively $a_{n+2}=|a_{n+1}-a_{n}|$. \\ (i) If additionally $\alpha\notin(1.61,1.63)$, prove that this sequence contains at least two equal terms.\\ (ii) Prove that there exists a positive real $\alpha$ for which this sequence does not contain two equal terms. \bigskip {\it Solution}.
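A quick numerical experiment (an illustration only, not part of the argument; the sample values of $\alpha$ below are our own choices) shows the dichotomy in the problem: for typical $\alpha$ the recurrence $a_{n+2}=|a_{n+1}-a_{n}|$ quickly produces a repeated value, while for $\alpha=\frac{\sqrt5+1}{2}$ the terms keep strictly decreasing.

```python
import math

def iterate(alpha, steps=60):
    """a_0 = alpha, a_1 = 1, a_{n+2} = |a_{n+1} - a_n| (floating point)."""
    a, b = alpha, 1.0
    seq = [a, b]
    for _ in range(steps):
        a, b = b, abs(b - a)
        seq.append(b)
    return seq

def has_repeat(seq, tol=1e-12):
    """True if some two terms agree up to tol."""
    s = sorted(seq)
    return any(y - x < tol for x, y in zip(s, s[1:]))

print(has_repeat(iterate(0.7)))   # True: a_3 = a_0 when alpha < 1
print(has_repeat(iterate(2.4)))   # True: a value recurs after the first descent
golden = (math.sqrt(5) + 1) / 2
print(has_repeat(iterate(golden, steps=20)))  # False: strictly decreasing
```

The run for the golden-ratio start is cut off at 20 steps because floating-point error grows along this recurrence; exactly this fragility is why part (ii) needs the closed form $a_{n}=\beta^{n-1}$ rather than a computation.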
(i) First, if $\alpha<1$, then $a_{2}=1-\alpha$, and since $a_{2}<a_{1}$, we get $a_{3}=a_{1}-a_{2}=1-(1-\alpha)=\alpha=a_{0}$. Hence we may assume $\alpha\geq1$. Suppose that the sequence is non-increasing, that is, $a_{n}\geq a_{n+1}$ for all $n\geq0$. Then $a_{n+2}=a_{n}-a_{n+1}$ for all $n\geq0$. Hence $a_{2}=\alpha-1$, $a_{3}=2-\alpha$, $a_{4}=2\alpha-3$, $a_{5}=5-3\alpha$, $a_{6}=5\alpha-8$, $a_{7}=13-8\alpha$, $a_{8}=13\alpha-21$. In particular, $a_{7}\geq0$ and $a_{8}\geq 0$, which gives $\alpha\leq\frac{13}{8}=1.625$ and $\alpha\geq \frac{21}{13}=1.615\ldots$ --- a contradiction with $\alpha\notin(1.61,1.63)$. Therefore there exists $n\geq0$ such that $a_{n}<a_{n+1}$. Let $l$ be the least such $n$ (for the given $\alpha$). Then $l\geq1$. Let $a_{l-1}=A$, $a_{l}=B$. Since $l$ is the least number with the property described above, we have $A\geq B$, so $a_{l+1}=A-B$. By the choice of $l$, $a_{l+1}>a_{l}$, hence $a_{l+2}=a_{l+1}-a_{l}=A-2B$, and since $a_{l+1}>a_{l+2}$, this gives $a_{l+3}=(A-B)-(A-2B)=B$, that is, $a_{l}=a_{l+3}$.\\ (ii) Define $\alpha=a_{0}=\frac{\sqrt{5}+1}{2}(=1.618\ldots)$ and further $a_{n}=(\frac{\sqrt{5}-1}{2})^{n-1}$ for $n\geq 1$ (note that this formula is also correct for $n=0$). If we set $\beta=\frac{\sqrt{5}-1}{2}$, then $\beta^{2}=1-\beta$ and therefore $a_{n+2}=a_{n}-a_{n+1}=|a_{n}-a_{n+1}|$ for all $n\geq0$, so this is indeed the sequence defined in the problem. It is strictly decreasing and therefore does not contain two equal terms.\\ \\ {\it Note}. The number $\alpha=\frac{\sqrt{5}+1}{2}$ is the \emph{only} positive real number for which the sequence defined above does not contain two equal terms. \bigskip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{\bf Problem 8. Giedrius Alkauskas, Vilnius University, Lithuania} Let $d_{1}<d_{2}<...<d_{m}$ be positive integers with $m$ even. Define the polynomial $G(x)=1-x^{d_{1}}+x^{d_{2}}-...+x^{d_{m}}= 1+\sum\limits_{i=1}^{m}(-1)^{i}x^{d_{i}}$.
Prove that all real roots of $G(x)$ are contained in the interval $(\frac{-1-\sqrt{5}}{2},\frac{1-\sqrt{5}}{2})$. \bigskip {\it Solution 1}. Define a new polynomial $F(x)=G(x)(1+x+x^{2}+...+x^{d_{m}})$. Every root of $G(x)$ is also a root of $F(x)$, hence it suffices to prove the required statement for the polynomial $F(x)$. It is easy to see that all coefficients of $F(x)$ are in the set $\{0,1\}$. In fact, let for simplicity $d_{0}=0$. Then $F(x)=\sum\limits_{i=0}^{m}\sum\limits_{j=0}^{d_{m}}(-1)^{i}x^{d_{i}+j}$. Thus, for every $s$ in the ranges $0\leq s\leq d_{m}$ and $d_{m}<s\leq 2d_{m}$, the coefficient of $x^{s}$ is $\sum\limits_{k=0}(-1)^{k}$ (the sum being over all $k$ satisfying $d_{k}\leq s$) and $\sum\limits_{k=0}(-1)^{m-k}$ (the sum being over all $k$ satisfying $s\leq d_{m-k}+d_{m}$), respectively. In both cases we have an alternating sum of $1$'s and $(-1)$'s starting with $1$ (in the second case, note that $m$ is even), hence the coefficient is $1$ or $0$. Thus $F(x)$ is a polynomial with coefficients from the set $\{0,1\}$ whose constant and leading terms are equal to $1$. All coefficients are non-negative, hence $F(x)$ does not have positive roots. Now let $\gamma=\frac{1+\sqrt{5}}{2}$. Let $x\leq-\gamma$ and $D=2d_{m}$, which is an even number. We will show that in this case $F(x)>0$. In fact, ignoring the even powers of $x$ (except $x^{D}$) we get: $F(x)\geq 1+|x|^{D}-|x|^{D-1}-|x|^{D-3}-...-|x|= 1+|x|^{D}-|x|\frac{|x|^{D}-1}{|x|^{2}-1}$. Since $\gamma$ is a positive root of the equation $Z^{2}=Z+1$, and $x\leq-\gamma<-1$, we get $|x|\leq|x|^{2}-1$, and therefore $F(x)\geq 1+|x|^{D}-(|x|^{D}-1)=2>0$. The same arguments applied to the polynomial $\overline{F}(y)=y^{D}F(1/y)$ show that $\overline{F}(y)$ has no roots $y$ with $y\leq-\gamma$, hence $F(x)$ does not vanish for negative $x\geq-1/\gamma=\frac{1-\sqrt{5}}{2}$. The statement is proved. \bigskip {\it Solution 2}. First, the polynomial $G(x)$ does not have positive roots.
In fact, the grouping $G(x)= (1-x^{d_{1}})+(x^{d_{2}}-x^{d_{3}})+...+(x^{d_{m-2}}-x^{d_{m-1}})+x^{d_{m}}$ shows that $G(x)>0$ for $0\leq x\leq 1$ (all summands are nonnegative, and at least one is positive), and the grouping $G(x)= 1+(-x^{d_{1}}+x^{d_{2}})+(-x^{d_{3}}+x^{d_{4}})+...+(-x^{d_{m-1}}+x^{d_{m}})$ shows that $G(x)>0$ for $x>1$. Suppose $d_{m}$ is even (the case of odd $d_{m}$ is analogous). Define $H(x)=G(-x)=1+\sum\limits_{i=1}^{m}(-1)^{i+d_{i}}x^{d_{i}}$. Let $\gamma$ be the positive root of the equation $Z^{2}=Z+1$. We will show that $H(x)>0$ for $x\geq\gamma$. If $d_{m-1}=d_{m}-1$, then the coefficient of $H(x)$ at $x^{d_{m}-1}$ is $+1$, since $m$ and $d_{m}$ are both even. Otherwise $d_{m-1}\leq d_{m}-2$. Hence the term $x^{d_{m}-1}$ in $H(x)$ appears with a positive sign or is absent. Therefore, for $D=d_{m}$ and $x\geq\gamma>1$ we have: $H(x)\geq x^{D}-x^{D-2}-x^{D-3}-...-x^{2}-x+1=x^{D}-x\frac{x^{D-2}-1}{x-1}+1> x^{D}-\frac{x^{D-1}}{x-1}=x^{D-1}(x-\frac{1}{x-1})\geq0$ (in the last inequality we use that $x\geq \gamma$). On the other hand, the same arguments show that the polynomial $\overline{H}(x)=x^{d_{m}}H(1/x)$ does not have roots $x\geq\gamma$, hence $H(x)$ does not have positive roots $x\leq 1/\gamma=\frac{\sqrt{5}-1}{2}$. \\ \\ {\it Note}. The bounds given above are sharp, that is, the interval $(\frac{-1-\sqrt{5}}{2},\frac{1-\sqrt{5}}{2})$ cannot be refined to $(\frac{-1-\sqrt{5}}{2}+\varepsilon,\frac{1-\sqrt{5}}{2}-\varepsilon)$ for any $\varepsilon>0$. \bigskip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{\bf Problem 9.} It is well-known that for every integer $n\geq 0$ there are polynomials $T_n$, $U_n$ of degree $n$ such that \[ T_n(\cos \varphi)=\cos n\varphi \quad \mathrm{and} \quad U_n(\cos \varphi)\sin\varphi =\sin (n+1)\varphi \] for all complex numbers $\varphi$.
Define \[ A=\{ (x,y,n)\in\mathbb{Z}^3 \mid x,y>1,\, n\geq 1,\quad \frac{x^2-1}{y^2-1}=n^2 \} \] and \[ B=\{ (T_m(y),y,U_{m-1}(y)) \mid m,y\in\mathbb{Z},\; y>1,\; m\geq 1 \} \] Prove that $A=B$. \bigskip {\it Solution }. Define a map $S$ by \begin{eqnarray*} S:\mathbb{Z}^3&\longrightarrow&\mathbb{Z}^3 \\ (x,y,n)&\mapsto&(yx+(y^2-1)n,y,x+yn) \end{eqnarray*} This map is a bijection; indeed, keeping $y$ fixed we see that $S$ restricted to the first and the last factor is the linear map $\left(\begin{smallmatrix} y&y^2-1\\ 1&y \end{smallmatrix}\right)$, so we get \begin{eqnarray*} S^{-1}:\mathbb{Z}^3&\rightarrow &\mathbb{Z}^3 \\ (x,y,n)&\mapsto&(yx-(y^2-1)n,y,-x+yn) \end{eqnarray*} Note that $S$ and $S^{-1}$ extend to bijections on $\mathbb{R}^3$. Now consider the following\\ \textbf{Lemma 1:} There is at most one set $M$ of triples of reals $(x,y,n)$ with the following properties: \begin{itemize} \item[(1)] If $(x,y,n)\in M$, then $n\geq 1$ and $S(x,y,n)\in M$. \item[(2)] If $(x,y,n)\in M$ and $n>1$, then $(x',y',n')=S^{-1}(x,y,n)\in M$ and $n'\leq n-1$. \item[(3)] $(x,y,1)\in M$ if and only if $x=y>1$ are integers. \end{itemize} \textit{Proof:}\\ Suppose $M$ and $N$ both are sets with these properties, and assume $M-N$ is not empty. Choose some $(x,y,n)\in M-N$ with minimal $\left\lfloor n\right\rfloor$ (note that $(x,y,n)\in M-N$ implies $n\geq 1$ by condition (1)). If $n=1$ then $(x,y,n)\in M$ would imply $x=y>1$ by condition (3) for $M$, but then $(x,y,n)\in N$ by condition (3) for $N$. Hence $n>1$. By condition (2) we get $(x',y',n')=S^{-1}(x,y,n)\in M$ and $n'\leq n-1$, hence $\left\lfloor n'\right\rfloor<\left\lfloor n\right\rfloor$. By minimality of $\left\lfloor n\right\rfloor$, $(x',y',n')\in N$; by condition (1) we get $(x,y,n)=S(x',y',n')\in N$, which is a contradiction; hence $M-N$ is empty. 
By the same argument, $N-M$ is empty, and therefore $M=N$.\\ \textbf{Lemma 2:} The set $A$ satisfies the conditions of Lemma 1.\\ \textit{Proof:}\\ Let $(x,y,n)\in\mathbb{Z}^3$ and define $(X,Y,N)=S(x,y,n)$. Let us first of all prove the equivalence \begin{align} x^2-1=n^2(y^2-1)\quad\Leftrightarrow\quad X^2-1=N^2(Y^2-1) \tag{*} \end{align} This follows from the equivalence of the following equations: \begin{eqnarray*} X^2-1&=&N^2(Y^2-1) \\ (yx+(y^2-1)n)^2-1&=&(x+yn)^2(y^2-1) \\ y^2x^2+2yx(y^2-1)n+(y^2-1)^2n^2-1&=&(x^2+2xyn+y^2n^2)(y^2-1) \\ x^2-1+(y^2-1)(x^2+2yxn+(y^2-1)n^2)&=&(x^2+2xyn+y^2n^2)(y^2-1) \\ x^2-1&=&n^2(y^2-1) \end{eqnarray*} ad (1): If $(x,y,n)\in A$, then $n\geq 1$ by definition. Let $(X,Y,N)=S(x,y,n)$; then certainly $Y=y>1$, $X=yx+(y^2-1)n>yx>1$ and $N=x+yn>x>1$. Finally, $\frac{X^2-1}{Y^2-1}=N^2$ by (*); so $(X,Y,N)\in A$.\\ ad (2): Let $(X,Y,N)\in A$ and $N>1$, and define $(x,y,n)=S^{-1}(X,Y,N)$. Then $y=Y>1$. From \[ X^2-1=N^2(Y^2-1)=N^2Y^2-N^2<N^2Y^2-1 \] and $X,Y,N>0$ we get $X<NY$, hence $n=-X+YN>0$. Furthermore, $x^2-1=n^2(y^2-1)$ by (*), hence $x^2-1=n^2(y^2-1)>1$, so $x>1$. So $(x,y,n)\in A$. It remains to check that $n<N$. But from \[ X^2>X^2-1=N^2(Y^2-1)>N^2(Y-1)^2 \] we get $X>N(Y-1)$, which is the same as $N>NY-X=n$.\\ ad (3): If $(x,y,n)\in A$ and $n=1$, then $x^2-1=y^2-1$ implies $x=y>1$ since $x,y>1$. Conversely, if $x=y>1$ are integers, we get $x^2-1=y^2-1$, so $(x,y,1)\in A$.\\ \textbf{Lemma 3:} The set $B$ satisfies the conditions of Lemma 1.\\ \textit{Proof:}\\ Let $(x,y,n)\in B$; then $(x,y,n)=(T_m(y),y,U_{m-1}(y))$ for some $m\geq 1$ and some $y>1$. This implies that we can find a real number $\lambda\neq 0$ such that $y=\frac12(e^\lambda+e^{-\lambda})=\cos(i\lambda)$. Then \begin{align} n=U_{m-1}(y)&=\frac{e^{m\lambda}-e^{-m\lambda}}{2i}\frac{2i}{e^\lambda-e^{-\lambda}}\nonumber\\ &=e^{(m-1)\lambda}+e^{(m-3)\lambda}+\dots+e^{-(m-1)\lambda}\geq m \tag{**} \end{align} ad (1): If $(x,y,n)\in B$, then $n\geq m\geq 1$.
Let $a=i\lambda$; then $y=\cos a$. From the equations \begin{eqnarray*} T_{m+1}(y)=\cos (m+1)a&=&\cos a\cos ma-\sin a\sin ma \\ &=&yT_m(y)-\sin^2 a\,U_{m-1}(y)=yx-(1-y^2)n \\ U_m(y)\sin a=\sin (m+1)a&=&\cos a\sin ma+\sin a\cos ma \\ &=&yU_{m-1}(y)\sin a+T_m(y)\sin a=(yn+x)\sin a \end{eqnarray*} we get that $S(x,y,n)=(T_{m+1}(y),y,U_m(y))\in B$.\\ ad (2): Let $(x,y,n)\in B$ and $n>1$. Write $(x,y,n)=(T_m(y),y,U_{m-1}(y))$. Since $U_0(y)=1$, we have $m>1$. In the same way as above we see that $(X,Y,N)=S^{-1}(x,y,n)=(T_{m-1}(y),y,U_{m-2}(y))\in B$. Finally, we get from (**) (assuming, as we may, $\lambda>0$): \[ n-N=\frac{e^{(m-\frac12)\lambda}+e^{-(m-\frac12)\lambda}}{e^{\lambda/2}+e^{-\lambda/2}} =1+\frac{(e^{m\lambda}-1)(e^{(m-1)\lambda}-1)}{e^{(m-\frac12)\lambda}(e^{\lambda/2}+e^{-\lambda/2})}\geq 1. \] ad (3): Let $(x,y,1)\in B$. From (**) we get $m=1$. Hence $x=T_m(y)=T_1(y)=y>1$. On the other hand, $(y,y,1)=(T_1(y),y,U_0(y))\in B$ for all integers $y>1$. This proves Lemma 3.\\[6pt] Taking all three lemmas together we get the desired result. \bigskip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{\bf Problem 10.} Find all triples of integers $(p,q,z)$ with the following properties: \begin{enumerate} \item[(1)] $p$ and $q$ are different (positive) primes. \item[(2)] $p+z$, $q+z$ and $pq+z$ are perfect squares. \end{enumerate} \bigskip {\it Solution}. Let $(p,q,z)$ be a triple of integers satisfying (1) and (2). Without loss of generality we assume $p<q$. Then there are integers $a,b,c\geq 0$ such that \begin{eqnarray*} p+z&=&a^2 \\ q+z&=&b^2 \\ pq+z&=&c^2 \end{eqnarray*} From $p<q<2q\leq pq$ we get $a<b<c$. Now, \begin{align*} q-p=b^2-a^2\geq b^2-(b-1)^2=2b-1 \tag{3} \end{align*} On the other hand, \begin{align*} q(p-1)=c^2-b^2=(c-b)(c+b) \tag{4} \end{align*} Since $q$ is a prime, it must divide one of the positive numbers $c-b$, $c+b$. In both cases we get $q\leq c+b$. From this and (4) we get $p-1\geq c-b$. Hence \[ q-p\leq (c+b)-(c-b+1)=2b-1 \] Combining this with (3) we see that $q-p=2b-1$.
Since $q-p=2b-1$ is odd, one of $p$ and $q$ is even; being a prime, it equals $2$, and since $p<q$ this yields $p=2$. Hence \[ q=2b+1,\qquad z=b^2-q=\left(\frac{q-1}{2}\right)^2-q=\frac{q^2-6q+1}{4}\in\mathbb{Z} \] We have shown that every solution with $p<q$ is of the form \[ (p,q,z)=\left(2,q,\frac{q^2-6q+1}{4}\right), \] where $q$ is an odd prime.\\ Conversely, every such triple satisfies the conditions of the problem: \begin{eqnarray*} p+z&=&2+\frac{q^2-6q+1}{4}=\frac{q^2-6q+9}{4}=\left(\frac{q-3}{2}\right)^2 \\ q+z&=&q+\frac{q^2-6q+1}{4}=\frac{q^2-2q+1}{4}=\left(\frac{q-1}{2}\right)^2 \\ pq+z&=&2q+\frac{q^2-6q+1}{4}=\frac{q^2+2q+1}{4}=\left(\frac{q+1}{2}\right)^2 \end{eqnarray*} So the set of solutions is \[ \left\{\left(2,q,\frac{q^2-6q+1}{4}\right),\left(q,2,\frac{q^2-6q+1}{4}\right) \right\}, \] where $q$ runs through all odd primes. \bigskip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{\bf Problem 11.} For every pair $(i,j)$ of non-negative integers we are given a positive real number $a_{i,j}$ such that \begin{enumerate} \item[(1)] $a_{i,j}=a_{j,i}$ \item[(2)] $a_{i,j}=a_{i+1,j}+a_{i,j+1}$ \end{enumerate} for all integers $i,j\geq 0$. Prove that $a_{n,0}\geq 2^{-n}a_{0,0}$ for all integers $n\geq 0$. \bigskip {\it Solution}. Let us first of all generalize the second formula: \begin{align*} a_{i,j}=\sum_{k=0}^{n} \binom{n}{k} a_{i+n-k,j+k} \tag{3} \end{align*} The proof of this equation is easily done by induction on $n$: for $n=0$ there is nothing to prove.
Assuming the result for some $n-1\geq 0$, we get \begin{eqnarray*} a_{i,j}&=&\sum_{k=0}^{n-1} \binom{n-1}{k} a_{i+n-1-k,j+k} \\ &=&\sum_{k=0}^{n-1} \binom{n-1}{k} (a_{i+n-k,j+k}+a_{i+n-1-k,j+k+1}) \\ &=&\sum_{k=0}^{n} \binom{n-1}{k} a_{i+n-k,j+k} + \sum_{k=0}^{n} \binom{n-1}{k-1}a_{i+n-k,j+k} \\ &=&\sum_{k=0}^{n} \left(\binom{n-1}{k-1} + \binom{n-1}{k} \right) a_{i+n-k,j+k} \\ &=&\sum_{k=0}^{n} \binom{n}{k} a_{i+n-k,j+k} \\ \end{eqnarray*} The second thing we need is the following inequality: \begin{align*} \binom{n+2}{k+1} \geq 4\left(1-\frac{1}{n+2}\right) \binom{n}{k} \tag{4} \end{align*} for all integers $n\geq 0$ and $k$ with $0\leq k\leq n$. To prove this, note that \[ \binom{n+2}{k+1} \binom{n}{k}^{-1} =\frac{(n+2)!k!(n-k)!}{(k+1)!(n-k+1)!n!}=\frac{(n+2)(n+1)}{(k+1)(n-k+1)} \] Applying the inequality $ab\leq \frac14(a+b)^2$ to the denominator, we find \[ \frac{(n+2)(n+1)}{(k+1)(n-k+1)}\geq \frac{(n+2)(n+1)}{\frac14(n+2)^2}=4\frac{n+1}{n+2} \] which proves (4).\\ Now we combine the results (3) and (4) to get the following estimate, using that $a_{i,j}>0$ for all $i,j$: \begin{eqnarray*} a_{i,j} &=& \sum_{k=0}^{n+2} \binom{n+2}{k} a_{i+n+2-k,j+k} \\ &>&\sum_{k=1}^{n+1} \binom{n+2}{k} a_{i+n+2-k,j+k} \\ &=&\sum_{k=0}^{n} \binom{n+2}{k+1} a_{i+n+2-k-1,j+k+1} \\ &\geq&\sum_{k=0}^{n} 4\left(1-\frac{1}{n+2}\right)\binom{n}{k} a_{i+1+n-k,j+1+k} \\ &=&4\left(1-\frac{1}{n+2}\right) \sum_{k=0}^n \binom{n}{k} a_{i+1+n-k,j+1+k} \\ &=&4\left(1-\frac{1}{n+2}\right) a_{i+1,j+1} \end{eqnarray*} This holds for all $n\geq 0$, so by $n\rightarrow \infty$ we get \begin{align*} a_{i,j}\geq 4a_{i+1,j+1} \tag{5} \end{align*} Now we are going to prove $a_{n+1,0}\geq \frac12a_{n,0}$. We do this by induction on $n$. For $n=0$ note that $a_{0,0}=a_{0,1}+a_{1,0}=2a_{1,0}$. 
Assuming the inequality is true for $n-1$, we get by (5): \[ a_{n+1,0}=a_{n,0}-a_{n,1}\geq a_{n,0}-\frac14 a_{n-1,0}\geq a_{n,0}-\frac12 a_{n,0}=\frac12 a_{n,0} \] This immediately implies the inequality stated in the problem. \bigskip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{\bf Problem 12. Roman Karasev, Moscow} %\email{r_n_karasev@mail.ru}
Let $f(x)$ be a polynomial with real coefficients of degree $n$ having $n$ distinct real roots $x_1$, $x_2$, \dots, $x_n$. Prove that for any non-negative integer $k \le n-2$ $$ \sum_{i=1}^n \frac{x_i^k}{f'(x_i)} = 0. $$ \bigskip {\it Solution}. Consider the rational function $r(z) = \dfrac{z^k}{f(z)}$ of the complex variable $z$. Note that for degree reasons (since $k\le n-2$) we have $|r(z)| \le \dfrac{C}{|z|^2}$ for some constant $C$ and all large enough $|z|$. So for the integral over the circle $S_R$ of radius $R$ centred at the origin we have $$ \left|\oint_{S_R} r(z)\, dz\right| \le \frac{2\pi C}{R} $$ for large enough $R$. On the other hand, for large enough $R$ we have the equality $$ \oint_{S_R} r(z)\, dz = 2\pi i \sum_{i = 1}^n \mathop{\rm Res}\nolimits_{x_i} r(z) = 2\pi i \sum_{i = 1}^n \frac{x_i^k}{f'(x_i)}. $$ Thus for large enough $R$ we have $$ \left|\sum_{i = 1}^n \frac{x_i^k}{f'(x_i)}\right| \le \frac{C}{R}, $$ and letting $R\rightarrow+\infty$ we obtain that the sum in question equals $0$. \bigskip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{\bf Problem 13. Dierk Schleicher, International University Bremen} For every complex $w\notin\{0,1\}$, define \[ f(w):=\sum(\log w)^{-4}, \] where the sum is over all branches of the complex logarithm. a) Show that there are two polynomials $P$ and $Q$ such that $f(w)=P(w)/Q(w)$ for all $w\in{\mathbb C}\setminus\{0,1\}$. b) Show that for all $w\in{\mathbb C}\setminus\{0,1\}$, \[ f(w) = \frac{w(w^2+4w+1)}{6(w-1)^4} \,\,. \] \bigskip {\it Solution}.
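Before the proof, the closed form claimed in part b) can be checked numerically by truncating the sum over branches (an illustration only; the test points and the truncation bound $K$ below are our own choices):

```python
import cmath

def f_truncated(w, K=2000):
    """Partial sum of (log w)^{-4} over the branches log w + 2*pi*i*k, |k| <= K.
    The tail of the absolutely convergent sum is O(K^{-3})."""
    base = cmath.log(w)  # principal branch
    return sum((base + 2j * cmath.pi * k) ** (-4) for k in range(-K, K + 1))

def closed_form(w):
    return w * (w * w + 4 * w + 1) / (6 * (w - 1) ** 4)

for w in (2.0, -3.0, 0.5 + 1.0j):
    print(w, abs(f_truncated(w) - closed_form(w)))  # differences are tiny
```

The agreement at several unrelated points is of course no proof, but it confirms that the rational function below is the right target.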
The idea of the proof is as follows: first, the left hand side is well defined and independent of the order of summation, because we have a sum of the type $\sum n^{-4}$, and the choice of branches of the logarithm does not matter because all branches are taken. Therefore, the left hand side is a holomorphic function of $w$ on the complex plane, except possibly for isolated singularities at $0$ and $1$. Among all branches, the logarithm vanishes only at $w=1$ (and only for the principal branch), so the infinite sum is holomorphic in ${\mathbb C}\setminus\{0\}$ except for a quadruple pole at $w=1$. Now we investigate the behavior near infinity. We have $\mathrm{Re}(\log w)=\log|w|$, hence (with $c:=\log|w|$) \begin{eqnarray*} \Bigl|\sum(\log w)^{-4}\Bigr| &\leq& \sum|\log w|^{-4} = \sum_{n\in\mathbb{Z}} \bigl|\log|w|+2\pi i n\bigr|^{-4} +O(c^{-4}) \\ &=& \int_{-\infty}^\infty |c+2\pi ix|^{-4}\,dx +O(c^{-4}) \\ &=& c^{-4}\int_{-\infty}^\infty |1+2\pi ix/c|^{-4}\,dx +O(c^{-4}) \\ &=& c^{-3}\int_{-\infty}^\infty |1+2\pi it|^{-4}\,dt +O(c^{-4}) \\ &\leq& \alpha(\log|w|)^{-3} \end{eqnarray*} for a universal constant $\alpha$ and all large enough $|w|$ (the $O(c^{-4})$ terms absorb the bounded imaginary part of the principal branch and the difference between the sum and the integral). Therefore, the infinite sum tends to $0$ as $|w|\to\infty$. In particular, the isolated singularity at $\infty$ is not essential; rather, the function has a zero (of order at least one) at $\infty$. The remaining singularity is at $w=0$. The infinite sum is clearly bounded on the circle $|w|=\varepsilon$ for any $\varepsilon>0$, and it is easily seen that for $|w|<\varepsilon$ an even better bound holds. Therefore, the infinite sum is bounded in a neighborhood of $w=0$, so the singularity is removable, and it is easily seen to be a zero. We conclude that the infinite sum is holomorphic on ${\mathbb C}$ with at most one pole and without an essential singularity at $\infty$, so it is a rational function. This solves the first part. For the second part, we write $f(w)=P(w)/Q(w)$ as a quotient of two polynomials $P$ and $Q$, which we may as well assume coprime.
Since $f$ has a quadruple pole at $w=1$ and no other poles, we have $Q(w)=(w-1)^4$ up to a constant factor which we can as well set equal to $1$, and this determines $P$ uniquely. Since $f(w)\to 0$ as $w\to\infty$, the degree of $P$ is at most $3$, and since $P(0)=0$, it follows that $P(w)=w(aw^2+bw+c)$ for yet undetermined complex constants $a,b,c$. From here, it is not hard to determine the constants explicitly, for example by evaluating residues or by solving a similar (simpler) problem for $f'$. In fact, one can pose a similar problem for $\sum(\log w)^{-n}$ with other exponents $n$. \bigskip %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {\bf Problem 14. Roman Karasev, Moscow} %\email{r_n_karasev@mail.ru} Let $X$ be a set of $\binom{2k-4}{k-2} + 1$ real numbers, $k\ge 2$. Prove that there exists a monotone sequence $\{x_i\}_{i=1}^k\subseteq X$ such that $$ |x_{i+1}-x_1|\ge2|x_i-x_1| $$ for all $i=2$, \dots, $k-1$. {\it Solution}. We prove a more general statement: {\bf Lemma.} Let $k, l \ge 2$ and let $X$ be a set of $\binom{k + l - 4}{k-2} + 1$ real numbers. Then either $X$ contains an increasing sequence $\{x_i\}_{i=1}^k\subseteq X$ of length $k$ with $$ |x_{i+1}-x_1|\ge2|x_i-x_1|\quad\forall i=2,\ldots,k-1, $$ or $X$ contains a decreasing sequence $\{x_i\}_{i=1}^l\subseteq X$ of length $l$ with $$ |x_{i+1}-x_1|\ge2|x_i-x_1|\quad\forall i=2,\ldots,l-1. $$ {\bf Proof of the lemma.} We use induction on $k+l$. In the case $k=2$ or $l=2$ the lemma is obviously true. Now let us make the induction step. Let $m$ be the minimal element of $X$ and $M$ its maximal element. Let $$ X_m = \{x\in X : x \le \frac{m + M}{2}\},\quad X_M = \{x\in X : x > \frac{m + M}{2}\}. $$ Since $\binom{k+l-4}{k-2} = \binom{k+(l-1) - 4}{k-2} + \binom{(k-1) + l - 4}{(k-1)-2}$, we can see that either $$ |X_m| \ge \binom{(k-1) + l - 4}{(k-1)-2} + 1,\quad\text{or}\quad |X_M| \ge \binom{k+(l-1) - 4}{k-2} + 1.
$$ In the first case we apply the inductive assumption to $X_m$ and either obtain a decreasing sequence of length $l$ with the required properties (in which case the induction step is complete), or obtain an increasing sequence $\{x_i\}_{i=1}^{k-1}\subseteq X_m$ of length $k-1$. Then we note that the sequence $\{x_1, x_2,\ldots, x_{k-1}, M\}\subseteq X$ has length $k$ and all the required properties. In the case $|X_M| \ge \binom{k+(l-1) - 4}{k-2} + 1$ the induction step is carried out in a similar way. Thus the lemma is proved. \qed The reader may check that the number $\binom{k + l - 4}{k-2} + 1$ in the lemma cannot be decreased. \bigskip %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {\bf Problem 15. Dusan Djukic, University of Belgrade} Find all irreducible non-constant polynomials $P(x)\in\mathbb{Z} [x]$ with the following property: there exist complex numbers $a,b$ such that $P(a)=P(b)=P(2a+3b)=0$. \bigskip {\it Solution}. Let us denote $c=2a+3b$. Then $\displaystyle b=\frac{c-2a}3$. Let $P(x)=C(x-x_1)(x-x_2)\cdots(x-x_n)$, where $C\neq0$, $x_1,\dots,x_n\in\mathbb{C}$. Consider the polynomial $\displaystyle R(x)=\prod_{i,j=1}^n\left(x-\frac{x_i-2x_j}3\right)$. Since it is symmetric with respect to $x_1,\dots,x_n$, its coefficients are rational. By the assumption, the number $b$ is a zero of $R(x)$, and hence its minimal polynomial $P(x)$ divides $R(x)$. Therefore, for each zero $x_k$ of $P(x)$ there exist zeros $x_i,x_j$ of $P(x)$ such that $\displaystyle x_k=\frac{x_i-2x_j}3$ and consequently $3|x_k|=|x_i-2x_j|\leq|x_i|+2|x_j|$. However, if $x_k$ is a zero of the largest modulus, we must have $|x_i|=|x_j|=|x_k|$, and equality in the triangle inequality forces $x_i$ and $-x_j$ to have the same argument; hence $x_i=-x_j$ and $\displaystyle x_k=\frac{x_i-2x_j}3=x_i$. Let us consider the polynomial $\overline{P}(x)=P(-x)$. Since $-x_k=x_j$ is a zero of $P(x)$, the number $x_k$ is a zero of $\overline{P}(x)$, and it follows that $P(x)$ divides $\overline{P}(x)$. Consequently $P(-x)=\overline{P}(x)=\pm P(x)$.
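As a concrete numerical illustration of the symmetry just derived (a sketch; the sample polynomial $x^2-2$ is our choice, not part of the solution):

```python
import math

# Sketch: for the even irreducible polynomial P(x) = x^2 - 2 (our choice),
# the roots a = sqrt(2) and b = -sqrt(2) give 2a + 3b = -sqrt(2),
# which is again a root, exactly as the symmetry P(-x) = P(x) predicts.
def P(x):
    return x * x - 2

a = math.sqrt(2)
b = -math.sqrt(2)
for value in (a, b, 2 * a + 3 * b):
    assert abs(P(value)) < 1e-12
```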
If $P(-x)=-P(x)$ then $P(0)=0$, and since $P(x)$ is irreducible we conclude that $P(x)=cx$ for some $c\in\mathbb{Z}\setminus\{0\}$. If $P(-x)=P(x)$ then one easily obtains that $P(x)=Q(x^2)$ for some polynomial $Q(x)$. Conversely, every such polynomial has the required property: if $a$ is a zero of $P(x)$, then so is $b=-a$, and $2a+3b=-a$ is again a zero. \bigskip %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {\bf Problem 16. Dusan Djukic, University of Belgrade} Let $f(x)=1+x+a_2x^2 +a_3x^3+\cdots+a_nx^n$ be a polynomial with real coefficients, where $0\leq a_i\leq 1$ for all $i$. Prove that every zero of $f(x)$ has modulus greater than $\displaystyle\frac {\sqrt5-1}2$. \bigskip {\it Solution}. Let $z$ be an arbitrary complex number of modulus not exceeding $\displaystyle a=\frac{\sqrt5-1}2$. To show that $f(z)\neq0$ we distinguish two cases.\begin{enumerate} \item[(i)] $|1+z|\geq1$. Then $\displaystyle|f(z)-1-z|= |a_2z^2+\dots+a_nz^n|\leq a_2|z|^2+\dots+a_n|z|^n\leq |z|^2+\dots+|z|^n<\frac{|z|^2}{1-|z|}\leq\frac{a^2}{1-a}=1 \leq|1+z|$ and hence $f(z)\neq0$. \item[(ii)] $|1+z|<1$. Defining $a_1=1$ and $a_k=0$ for $k>n$, we claim that $$|a_kz^k+a_{k+1}z^{k+1}|\leq|z|^k\hspace{5mm}\mbox{for all }k\in\mathbb{N}.\eqno(\ast)$$ Indeed, if $a_k\leq a_{k+1}$ then $|a_kz^k+a_{k+1}z^{k+1}|=|a_kz^k(1+z)+(a_{k+1}-a_k)z^{k+1}| \leq a_k|z|^k|1+z|+(a_{k+1}-a_k)|z|^{k+1}\leq a_k|z|^k+ (a_{k+1}-a_k) |z|^k=a_{k+1}|z|^k\leq|z|^k$. Similarly, if $a_k>a_{k+1}$ then $|a_kz^k+a_{k+1}z^{k+1}|= |a_{k+1}z^k(1+z)+(a_k-a_{k+1})z^k|\leq a_{k+1}|z|^k|1+z|+ (a_k-a_{k+1})|z|^k\leq a_k|z|^k\leq|z|^k$. This proves $(\ast)$. The inequality in $(\ast)$ is strict for $k>n$. It follows that $\displaystyle|f(z)-1|=|z+a_2z^2+\dots+a_nz^n|\leq|z+ a_2z^2|+|a_3z^3+a_4z^4|+\cdots<|z|+|z|^3+|z|^5+\dots=\frac{|z|} {1-|z|^2}\leq\frac{a}{1-a^2}=1$ and hence $f(z)\neq0$. \end{enumerate} \bigskip %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {\bf Problem 17.
Dusan Djukic, University of Belgrade} Evaluate $\displaystyle\lim_{x\rightarrow1-}\left(\ln(1-x)+ \sum_{n=1}^{\infty}\frac{x^n}{1+x+\dots+x^{n-1}}\right)$. \bigskip {\it Solution}. Since $\displaystyle\ln(1-x)=-\sum_{n=1}^ {\infty}\frac{x^n}n$, we can rewrite the required limit as $$ \lim_{x\rightarrow1-}\sum_{n=1}^{\infty}x^n\left(\frac1{1+x+\dots+x^{n-1}}-\frac1n\right)= $$ $$ =\lim_{x\rightarrow1-}\sum_{n=1}^{\infty}(x^n-x^{n+1})\left(\frac1{1-x^n}-\frac1{n(1-x)}\right)= $$ \begin{multline} =\lim_{x\rightarrow1-}\sum_{n=1}^{\infty}(x^n-x^{n+1})\left(\frac1{1-x^n}+\frac1{\ln x^n}\right)\nonumber\\ -\lim_{x\rightarrow1-}\sum_{n=1}^{\infty}(x^n-x^{n+1})\left(\frac1{n\ln x}+\frac1{n(1-x)}\right)=\nonumber \end{multline} $$= \lim_{x\rightarrow1-}\sum_{n=1}^{\infty}(x^n-x^{n+1})f(x^n)-\lim_{x\rightarrow1-}f(x)\sum_{n=1}^{\infty}\frac{x^n-x^{n+1}}n\:,\eqno(1) $$ where $\displaystyle f(x)=\frac1{1-x}+\frac1{\ln x}$. The function $f(x)$ is integrable on $[0,1]$ and bounded in a neighborhood of $x=1$: $\displaystyle\lim_{x\rightarrow 1-}f(x)=\lim_{t\rightarrow 0+}\frac{ \ln(1-t)+t}{t\ln(1-t)}=\lim_{t\rightarrow 0+}\frac{ -t-t^2/2-o(t^2)+t}{-t^2-o(t^2)}=\frac12$. Since $\displaystyle\sum_{n=1}^{\infty}\frac{x^n-x^{n+1}}n=-(1-x)\ln(1-x)\rightarrow0$ as $x\rightarrow1-$, the second limit in (1) equals $0$, whereas the first limit in (1) is a limit of Riemann sums of $f(x)$ on $[0,1]$ (over the partition by the points $x^n$) and consequently converges to $\displaystyle\int_0^1f(x)dx=\gamma$ (Euler's constant) as $x\rightarrow1-$. \bigskip %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {\bf Problem 18. Dusan Djukic, University of Belgrade} Let $S$ be the set of all symmetric singular $2\times 2$ matrices with real entries and trace 1. Given an arbitrary nonzero vector $x=(a,b)^T$, show that there exists a sequence $(A_n)_{n\in \mathbb{N}}$ of matrices in $S$ such that the sequence $x_n$ defined by $x_0=x$ and $x_{n+1}=A_nx_n$ diverges. \bigskip {\it Solution}.
Every matrix in $S$ can be written in the form $$ S_{\theta}=\left(\begin{array}{cc}\cos^2\theta&\cos \theta\sin\theta\\\cos\theta\sin\theta&\sin^2\theta\end{array} \right) $$ for some $\theta\in\mathbb{R}$. Let us write $x$ as $x=\|x\|v_{\varphi}$, where $v_{\varphi}$ denotes the vector $(\cos\varphi,\sin\varphi)^T$. We observe that $S_{\theta}v_{\varphi}=\cos(\varphi-\theta)v_{\theta}$. Therefore, for any sequence $A_n=S_{\theta_n}$ of matrices in $S$ the sequence $x_0=x$, $x_{n+1}=A_nx_n$ satisfies $$x_n=P_n\|x\| v_{\theta_n},\hspace{5mm}\mbox{where }P_n=\cos(\theta_1-\varphi) \cos(\theta_2-\theta_1)\cdots\cos(\theta_n-\theta_{n-1}).$$ Now it suffices to give an example of a sequence $\theta_n$ such that the sequence $P_n$ converges to a strictly positive value and the sequence $\theta_n$ diverges. One such sequence is given by $\displaystyle\theta_n=\varphi+1+\frac12+\cdots+\frac1n$. Indeed, the product $\displaystyle P_n=\prod_{k=1}^n\cos\frac1k=\prod_{k=1}^n\left (1-\frac1{2k^2}+ o(k^{-2})\right)$ converges to a strictly positive limit as $n\rightarrow\infty$, while $\theta_n\rightarrow\infty$. \bigskip %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {\bf Problem 19. Milos Milosavljevic, University of Belgrade} Let $P(x)=x^2-1$. How many distinct real solutions does the following equation have: $$\underbrace{P(P(...(P}_{2004}(x))...))=0\ ?$$ \bigskip {\it Solution}. Put $P_{n}(x)=\underbrace{P(P(...(P}_{n}(x))...))$. As $P_1(x)\geq -1$ for each $x\in \mathbb{R}$, it must be that $P_{n+1}(x)=P_1(P_n(x))\geq -1$ for each $n\in \mathbb{N}$ and each $x\in \mathbb{R}$. Therefore the equation $P_n(x)=a$, where $a<-1$, has no real solutions. Let us prove that the equation $P_n(x)=a$, where $a>0$, has exactly two distinct real solutions. To this end we use mathematical induction on $n$. If $n=1$ the assertion follows directly. Assuming that the assertion holds for some $n\in \mathbb{N}$, we prove that it must also hold for $n+1$.
Since $P_{n+1}(x)=a$ is equivalent to $P_1(P_n(x))=a$, we conclude that $P_n(x)=\sqrt{a+1}$ or $P_n(x)=-\sqrt{a+1}.$ The equation $P_n(x)=\sqrt{a+1}$, as $\sqrt{a+1}>1$, has exactly two distinct real solutions by the inductive hypothesis, while the equation $P_n(x)=-\sqrt{a+1}$ has no real solutions (because $-\sqrt{a+1}<-1$). Hence the equation $P_{n+1}(x)=a$ has exactly two distinct real solutions. Let us prove now that the equation $P_n(x)=0$ has exactly $n+1$ distinct real solutions. Again we use mathematical induction. If $n=1$ the solutions are $x=\pm 1$, and if $n=2$ the solutions are $x=0$ and $x=\pm \sqrt{2}$, so in both cases the number of solutions is equal to $n+1$. Suppose that the assertion holds for some $n\in \mathbb{N}$. Note that $P_{n+2}(x)=P_2(P_n(x))=P_{n}(x)^2\big(P_{n}(x)^2-2\big)$, so the set of all real solutions of the equation $P_{n+2}(x)=0$ is exactly the union of the sets of all real solutions of the equations $P_n(x)=0$, $P_n(x)=\sqrt{2}$ and $P_n(x)=-\sqrt{2}$. By the inductive hypothesis the equation $P_n(x)=0$ has exactly $n+1$ distinct real solutions, while the equations $P_n(x)=\sqrt{2}$ and $P_n(x)=-\sqrt{2}$ have two and no distinct real solutions, respectively. Hence, the sets above being pairwise disjoint, the equation $P_{n+2}(x)=0$ has exactly $n+3$ distinct real solutions. Thus we have proved that, for each $n\in \mathbb{N}$, the equation $P_n(x)=0$ has exactly $n+1$ distinct real solutions, so the answer to the question posed in this problem is $2005$. \bigskip %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {\bf Problem 20. Milos Milosavljevic, University of Belgrade} Let $n\geq 2$ and $0\leq x_{1},x_{2},...,x_{n}\leq \frac{\pi}{2}$ be such that $\sum_{k=1}^{n} \sin x_{k}=1.$ Consider the set $S_{n}$ of all sums $x_{1}+x_{2}+...+x_{n}$. $(a)$ Show that $S_{n}$ is an interval. $(b)$ Let $l_{n}$ be the length of $S_{n}$. Find $\lim_{n\rightarrow \infty} l_{n}.$ \bigskip {\it Solution}.
$(a)$ Equivalently, we consider the set $$Y=\{y=(y_{1},y_{2},...,y_{n}) | \ 0 \leq y_{1},y_{2},...,y_{n}\leq 1,\ y_{1}+y_{2}+...+y_{n}=1\}\subset \mathbb{R}^{n}$$ and the image $f(Y)$ of $Y$ under $$f(y)=\arcsin y_{1}+\arcsin y_{2}+...+\arcsin y_{n}.$$ Note that $f(Y)=S_{n}$. Since $Y$ is a connected subspace of $\mathbb{R}^{n}$ and $f$ is a continuous function, the image $f(Y)$ is also connected, and we know that the only connected subspaces of $\mathbb{R}$ are intervals. Thus $S_{n}$ is an interval. $(b)$ We prove that $$n\ \arcsin \frac{1}{n} \leq x_{1}+x_{2}+...+x_{n}\leq \frac{\pi}{2}.$$ Since the graph of $\sin x$ is concave down for $x\in [0,\frac{\pi}{2}]$, the chord joining the points $(0,0)$ and $(\frac{\pi}{2},1)$ lies below the graph. Hence $$ \frac{2x}{\pi}\leq \sin x \textrm{ for all } x\in [0,\frac{\pi}{2}] $$ and we can deduce the right-hand side of the claim: $$\frac{2}{\pi}(x_{1}+x_{2}+...+x_{n})\leq \sin x_{1}+\sin x_{2}+...+\sin x_{n}=1.$$ The left-hand side follows from Jensen's inequality, since $\sin x$ is concave down for $x\in [0,\frac{\pi}{2}]$ and $0\leq \frac{x_{1}+x_{2}+...+x_{n}}{n}< \frac{\pi}{2}$: $$ \frac{1}{n}=\frac{\sin x_{1}+\sin x_{2}+...+\sin x_{n}}{n}\leq \sin\left(\frac{x_{1}+x_{2}+...+x_{n}}{n}\right), $$ whence $\arcsin \frac{1}{n}\leq \frac{x_{1}+x_{2}+...+x_{n}}{n}$, i.e.\ $n\ \arcsin \frac{1}{n}\leq x_{1}+x_{2}+...+x_{n}$. Since $S_{n}$ is an interval contained in $[n\ \arcsin \frac{1}{n},\frac{\pi}{2}]$, as we have just shown, and both endpoints belong to $S_{n}$ (taking $x_{1}=x_{2}=...=x_{n}=\arcsin \frac{1}{n}$ gives $\sum_{k=1}^n x_{k}=n\ \arcsin \frac{1}{n}$, while $x_{1}=\frac{\pi}{2}$, $x_{2}=...=x_{n}=0$ gives $\sum_{k=1}^n x_{k}=\frac{\pi}{2}$), we conclude that $S_{n}= [n\ \arcsin \frac{1}{n},\frac{\pi}{2}].$ Thus $l_{n}=\frac{\pi}{2}-n\ \arcsin \frac{1}{n}$ and $$\lim_{n\rightarrow \infty} l_{n}=\frac{\pi}{2}-\lim_{n\rightarrow \infty}\frac{\arcsin(1/n)}{1/n}=\frac{\pi}{2}-1.$$ \bigskip %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {\bf Problem 21.
Milos Milosavljevic, University of Belgrade} Prove that if $z_{1},z_{2},...,z_{n}$ are complex numbers then there is a subset $S$ of $\{1,2,...,n\}$ for which $$\Big|\sum_{k\in S} z_{k}\Big|\geq \frac{1}{\pi} \sum_{k=1}^{n} |z_{k}|.$$ \bigskip {\it Solution}. Write $z_{k}=|z_{k}|e^{i\alpha _{k}}.$ For $-\pi \leq \theta \leq \pi,$ let $S(\theta)$ be the set of all $k$ for which $\cos(\alpha_{k}-\theta)>0.$ Then $$\Big|\sum_{k\in S(\theta)} z_{k}\Big|=\Big|\sum_{k\in S(\theta)} e^{-i\theta}z_{k}\Big|\geq {\rm Re} \sum_{k\in S(\theta)} e^{-i\theta}z_{k}=\sum_{k=1}^{n} |z_{k}| \cos^{+}(\alpha_{k}-\theta).$$ Choose $\theta_{0}$ so as to maximize the last sum, and put $S=S(\theta_{0}).$ This maximum is at least as large as the average of the sum over $[-\pi, \pi]$, and this average is $\frac{1}{\pi}\sum_{k=1}^{n} |z_{k}|,$ because $$\frac{1}{2\pi}\int_{-\pi}^{\pi} \cos^{+}(\alpha - \theta)\, d\theta=\frac{1}{\pi}$$ for every $\alpha.$ \bigskip %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {\bf Problem 22. Valeriu Anisiu, Babes-Bolyai University, Cluj-Napoca} % email anisiu@math.ubbcluj.ro In the Euclidean space $\mathbb{R}^{d}$ denote by $S$ the set of all sequences $(x_{n})_{n\in\mathbb{N}}$ satisfying $x_{i}\neq0$ and the inequality $\left\Vert x_{i}-x_{j}\right\Vert \geq1$ for $i,j\in \mathbb{N}$, $i\neq j$. Find all real numbers $r$ such that the series ${\sum\limits_{n\in\mathbb{N}}}\dfrac{1}{\left\Vert x_{n}\right\Vert ^{r}}$ is convergent for each sequence $(x_{n})_{n\in\mathbb{N}}$ in $S$. \bigskip {\it Solution}. Observe first that, denoting by $\left\vert \cdot\right\vert $ the $l^{\infty}$ norm in $\mathbb{R}^{d},$ our series converges if and only if ${\sum\limits_{n\in\mathbb{N}}}\dfrac{1}{\left\vert x_{n}\right\vert ^{r}}$ does (because $\left\vert x\right\vert \leq\left\Vert x\right\Vert \leq\sqrt {d}\left\vert x\right\vert $). \textbf{a)} We show first that $r>d$ (if such an $r$ exists).
Let $(x_{n})_{n\in\mathbb{N}}$ be an enumeration of the countable set $\mathbb{Z}^{d}\backslash\{0\}.$ Then of course $(x_{n})_{n\in\mathbb{N}}$ is in $S.$ We have $\text{card}\{x\in\mathbb{Z}^{d}:\left\vert x\right\vert =k\}=(2k+1)^{d}-(2k-1)^{d},$ so ${\sum\limits_{n\in\mathbb{N}}}\dfrac {1}{\left\vert x_{n}\right\vert ^{r}}={\sum\limits_{k\geq1}}\dfrac {(2k+1)^{d}-(2k-1)^{d}}{k^{r}}.$ This series is convergent if and only if the series ${\sum\limits_{k\geq1}}\dfrac{1}{k^{r-d+1}}$ is convergent, hence $r>d.$ \textbf{b)} We show now that the series converges for each $r>d$ and each $(x_{n})$ in $S.$ For $\varepsilon=\frac{1}{2\sqrt{d}}$ the cubes $B_{n}=\{x\in\mathbb{R}^{d}:\left\vert x-x_{n}\right\vert <\varepsilon\}$ are pairwise disjoint. Denote by $n_{k}$ the number of the terms of the sequence $(x_{n})$ which lie in $A_{k}=\{x\in\mathbb{R}^{d}:k-1<\left\vert x\right\vert \leq k\}$ for $k\in\mathbb{N}.$ Observe that if $x_{n}\in A_{k}$ then $B_{n}\cap A_{k}$ contains at least an \textquotedblleft octant\textquotedblright\ of the cube $B_{n},$ so that $meas(B_{n}\cap A_{k})\geq2^{-d}meas(B_{n})=2^{-d}(2\varepsilon)^{d}=\varepsilon^{d}.$ We obtain that $n_{k}\leq meas(A_{k})/\varepsilon^{d}=\left( (2k)^{d}-(2k-2)^{d}\right) /\varepsilon^{d}$, hence: ${\sum\limits_{n\in\mathbb{N}}}\dfrac{1}{\left\vert x_{n}\right\vert ^{r}}={\sum\limits_{k\in\mathbb{N}}}\;{\sum\limits_{x_{n}\in A_{k}}}\dfrac {1}{\left\vert x_{n}\right\vert ^{r}}\leq{\sum\limits_{x_{n}\in A_{1}}}\dfrac{1}{\left\vert x_{n}\right\vert ^{r}}+{\sum\limits_{k>1}}n_{k}\dfrac {1}{(k-1)^{r}}\leq{\sum\limits_{x_{n}\in A_{1}}}\dfrac{1}{\left\vert x_{n}\right\vert ^{r}}+{\sum\limits_{k>1}}\dfrac{(2k)^{d}-(2k-2)^{d}}{\varepsilon^{d}(k-1)^{r}}.$ The last series converges if and only if ${\sum\limits_{k>1}}\dfrac {k^{d-1}}{k^{r}}={\sum\limits_{k>1}}\dfrac{1}{k^{r-d+1}}$ does, i.e.
is convergent for $r>d.$ In conclusion, the answer is $r\in(d,+\infty).$ \bigskip %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {\bf Problem 23. Valeriu Anisiu, Babes-Bolyai University, Cluj-Napoca} % email anisiu@math.ubbcluj.ro Show that for each positive integer $n,$ the equation $x^{2}+y^{2}+z^{2}=0$ has a solution in $\mathbb{Z}_{n}$ with $x,y,z$ not all zero (mod $n$). \bigskip {\it Solution}. a) If $n$ is even: $x=n/2,$ $y=n/2,$ $z=0.$ b) If $n$ is not square free, $n=p^{k}q$ with $p$ prime, $k>1$: $x=p^{k-1}q$, $y=0$, $z=0.$ c) It remains to consider the case when $n$ is odd and square free, $n=p_{1}\cdots p_{k}$ (with the $p_{i}$ prime, $>2$). The ring $\mathbb{Z}_{n}$ being isomorphic to $\mathbb{Z}_{p_{1}}\times\cdots\times\mathbb{Z}_{p_{k}}$, it is sufficient to prove the statement when $n$ is an odd prime. We show that in this case one may take $z=1.$ Define the subsets of $\mathbb{Z}_{n}$: $A=\left\{ x^{2}:0\leq x\leq(n-1)/2\right\} $, $B=\left\{ -y^{2}-1:0\leq y\leq(n-1)/2\right\} $. The set $A$ has $(n+1)/2$ elements (because $0\leq x\leq(n-1)/2,$ $0\leq y\leq(n-1)/2,$ $x^{2}=y^{2}$ (mod $n$) implies $x=y$). Similarly, the set $B$ also has $(n+1)/2$ elements. Since $|A|+|B|=n+1>n$, one obtains that $A\cap B\neq\emptyset$, so that there exist $x,y\in\mathbb{Z}_{n}$ such that $x^{2}=-1-y^{2}$, i.e.\ $x^{2}+y^{2}+1^{2}=0.$ \bigskip %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {\bf Problem 24. Armen Edigarian, Jagiellonian University} % Institute of Mathematics, % Jagiellonian University, % Reymonta 4, % 30-059 Krak\'ow, % Poland % email umedigar@cyf-kr.edu.pl Let $a,b\in\mathbb{D}$, $a\neq b$, and let $f:\mathbb{D}\to\mathbb{D}$ be a holomorphic function, where $\mathbb{D}$ denotes the unit disc in the complex plane. Assume that $f(a)=a$ and $f(b)=b$. Show that $f(z)=z$ for any $z\in\mathbb{D}$. {\it Remark}. This is a well-known folklore result. \bigskip {\it Solution}. Put $m(z,w)=\big|\frac{z-w}{1-\bar wz}\big|$, the pseudo-hyperbolic (M\"obius) distance.
By the Schwarz--Pick lemma $m(f(z),f(w))\le m(z,w)$, and if for some $z\ne w$ equality holds, then $f$ is an automorphism of $\mathbb{D}$. Since $f(a)=a$ and $f(b)=b$, equality holds for the pair $a\ne b$, so $f$ is an automorphism of $\mathbb{D}$ with two distinct fixed points; the only such automorphism is the identity, hence $f(z)=z$ for any $z\in\mathbb{D}$. \bigskip %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {\bf Problem 25. Armen Edigarian, Jagiellonian University} % Institute of Mathematics, % Jagiellonian University, % Reymonta 4, % 30-059 Krak\'ow, % Poland % email umedigar@cyf-kr.edu.pl Let $a_n$ be an increasing sequence of positive numbers such that $\lim_{n\to\infty}a_n=+\infty$. Show that $\sum_{n}\frac{a_n-a_{n-1}}{a_n^\alpha}$ is convergent iff $\alpha>1$. \bigskip {\it Solution}. Assume first that $\alpha\le1$. Then \[ \sum_{n=m}^M\frac{a_n-a_{n-1}}{a_n^\alpha}\ge\sum_{n=m}^M\frac{a_n-a_{n-1}}{a_n}\ge \frac{\sum_{n=m}^M(a_n-a_{n-1})}{a_M}=\frac{a_M-a_{m-1}}{a_M}\to1 \] as $M\to\infty$, so the Cauchy criterion fails and the series is divergent. For $\alpha>1$ we have \[ \frac{a_n-a_{n-1}}{a_n^\alpha}\le\int_{a_{n-1}}^{a_n}\frac{1}{x^\alpha}dx. \] Hence \[ \sum_{n}\frac{a_n-a_{n-1}}{a_n^\alpha}\le\int_{a_0}^\infty\frac{1}{x^\alpha}dx<\infty. \] \bigskip %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {\bf Problem 26. Armen Edigarian, Jagiellonian University} % Institute of Mathematics, % Jagiellonian University, % Reymonta 4, % 30-059 Krak\'ow, % Poland % email umedigar@cyf-kr.edu.pl Let $f$ be a $C^{2004}$ function on $(a,+\infty)$. Assume that $f^{(2004)}\ne0$ on $(a,+\infty)$ and that $\lim_{x\rightarrow+\infty}\frac{f(x)}{x}=1$. Show that $\lim_{x\rightarrow+\infty}f'(x)=1$. \bigskip {\it Solution}. Note that if the limit $\lim_{x\rightarrow+\infty}f'(x)$ exists then it must be equal to $1$ (use l'Hospital's rule, which applies since $f(x)\rightarrow+\infty$). If it does not exist, then $f'$ cannot be monotone on $(c,+\infty)$ for any $c>a$. So there exists a sequence $\xi_n\to+\infty$ such that $f''(\xi_n)=0$ for all $n\ge1$. From this we get, by Rolle's theorem, that for any $k\in\mathbb{N}$, $2\le k\le2004$, there exists a sequence $\eta_n\to+\infty$ such that $f^{(k)}(\eta_n)=0$.
In particular, $f^{(2004)}$ must have zeros, a contradiction. \bigskip %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {\bf Problem 27. M. E. Kuczma, Warsaw} % OR Michal Krych ?? Let $f,g\colon[a,b]\to[0,\infty)$ be continuous and non-decreasing functions such that for each $x\in[a,b]$ the following inequality holds: $$\int_a^x\sqrt{f(t)}\,dt\le\int_a^x\sqrt{g(t)}\,dt$$ and $\int_a^b\sqrt{f(t)}\,dt=\int_a^b\sqrt{g(t)}\,dt$. Prove that $\int_a^b\sqrt{1+f(t)}\,dt\geq\int_a^b\sqrt{1+g(t)}\,dt$. \bigskip {\it Solution}. Let $F(x)=\int_a^x\sqrt{f(t)}\,dt$ and $G(x)=\int_a^x\sqrt{g(t)}\,dt$. The functions $F,G$ are convex, $F(a)=0=G(a)$ and $F(b)=G(b)$ by the hypothesis. We are supposed to show that $$\int_a^b\sqrt{1+\big(F'(t)\big)^2}\,dt\geq\int_a^b\sqrt{1+\big(G'(t)\big)^2}\,dt,$$ i.e.\ that the length of the graph of $F$ is at least the length of the graph of $G$. This is clear: both functions are convex, their graphs have common endpoints, and the graph of $F$ lies {\sl below} the graph of $G$. Indeed, the length of the graph of $F$ is the least upper bound of the lengths of the graphs of piecewise linear functions whose values at the points of non-differentiability coincide with the values of $F$; and if a convex polygon $P_1$ is contained in a polygon $P_2$, then the perimeter of $P_1$ is at most the perimeter of $P_2$. {\it Remark}. \quad The assumption {\it the function $g$ is non-decreasing} may be dropped. \bigskip %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {\bf Problem 28. M. Misiurewicz, Warsaw and IUPUI, Indianapolis} Let $M=(m_{k,l})$ be a real square matrix of size $n\times n$ such that $m_{k,l}>0$ if $k\ne l$ and $\sum_km_{k,l}<0$ for $l=1,2,\dots,n$. Prove that all eigenvalues of $M$ have negative real parts. \bigskip {\it Solution}. Let $\lambda=a+bi$ be an eigenvalue of $M$, $a,b\in\mathbb{R}$, and $(z_1,z_2,\dots,z_n)$ a nonzero vector such that for every $l$ the equality $\sum_km_{k,l}z_k=\lambda z_l$ holds.
Let $x_k={\rm Re}(z_k)$, $y_k={\rm Im}(z_k)$. Thus $\sum_km_{k,l}x_k=ax_l-by_l$ and $\sum_km_{k,l}y_k=ay_l+bx_l$. This implies that $\sum_km_{k,l}(x_kx_l+y_ky_l)=a(x_l^2+y_l^2)$ for all $l$. Let $j$ be an index such that $|z_l|\le|z_j|$ for all $l\in\{1,2,\dots,n\}$; note that $|z_j|>0$. Since $m_{k,l}>0$ for $k\ne l$ and $|x_kx_l+y_ky_l|\le|z_k|\cdot|z_l|$, the following inequality holds\hfill\break \begin{multline} a(x_j^2+y_j^2)=\sum_km_{k,j}(x_kx_j+y_ky_j)\le\nonumber\\ \le m_{j,j}|z_j|^2+\sum_{k\ne j}m_{k,j}|z_k|\cdot|z_j|\le |z_j|^2\sum_km_{k,j}<0\,.\nonumber \end{multline} Therefore $a<0$. \bigskip %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {\bf Problem 29. Michal Krych, Warsaw} Let $M$ be a square matrix of size $n\times n$ with eigenvalues $\lambda_1,\lambda_2,\dots,\lambda_n$, not necessarily distinct. Let $L_M(X)=MX+XM^T$ for any square matrix $X$ of size $n\times n$. Find the eigenvalues of $L_M$.\hfil\break $M^T$ is the transposed matrix: if $M=(m_{k,l})$ then $M^T=(m_{l,k})$. \bigskip {\it Solution}. Let $\lambda_r$ and $\lambda_s$ be eigenvalues of $M$ and $\vec v_r$, $\vec v_s$ eigenvectors associated to $\lambda_r$ and $\lambda_s$, i.e.\ $M\vec v_j=\lambda_j\vec v_j$ for $j=r,s$. We have $M\vec v_r(\vec v_s)^T+\vec v_r(\vec v_s)^TM^T=(M\vec v_r)(\vec v_s)^T+\vec v_r\big(M\vec v_s\big)^T=\lambda_r\vec v_r(\vec v_s)^T+\lambda_s\vec v_r(\vec v_s)^T$, so the number $\lambda_r+\lambda_s$ is an eigenvalue of $L_M$. The map $L_M$ maps the $n^2$--dimensional linear space of $n\times n$ matrices into itself, so it has at most $n^2$ eigenvalues. Notice that if vectors $\vec u, \vec w$ are linearly independent, then the matrices $\vec u(\vec w)^T$ and $\vec w(\vec u)^T$ are linearly independent, too. This implies that if the eigenvalues of $M$ are distinct then the eigenvalues of $L_M$ are exactly the numbers $\lambda_r+\lambda_s$. If $M$ has multiple eigenvalues, we can use generalized eigenvectors when there are not enough eigenvectors.
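Before passing to multiple eigenvalues, the distinct-eigenvalue case can be checked numerically; in the sketch below (the triangular sample matrix is our choice) $L_M$ is realized, via column-major vectorization, as the Kronecker-sum matrix:

```python
import numpy as np

# Sketch: for a sample M with distinct eigenvalues 1, 3, 5 (our choice),
# verify that the eigenvalues of L_M(X) = M X + X M^T are the pairwise
# sums lambda_r + lambda_s.  Since vec(M X + X M^T) = (kron(I, M) +
# kron(M, I)) vec(X) for column-major vec, L_M is that n^2 x n^2 matrix.
n = 3
M = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])
L = np.kron(np.eye(n), M) + np.kron(M, np.eye(n))

expected = [lr + ls for lr in np.linalg.eigvals(M) for ls in np.linalg.eigvals(M)]
found = list(np.linalg.eigvals(L))
for lam in expected:                      # match multisets up to tolerance
    j = min(range(len(found)), key=lambda i: abs(found[i] - lam))
    assert abs(found[j] - lam) < 1e-8
    found.pop(j)
assert not found                          # every eigenvalue accounted for
```

The multiset comparison avoids relying on any particular ordering of the computed spectra.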
Let $\lambda$ be an eigenvalue and $\vec v_1$, $\vec v_2$, {\dots},$\vec v_k$ be vectors such that $M\vec v_1=\lambda\vec v_1$, $M\vec v_j=\lambda\vec v_j+\vec v_{j-1}$ for $j=2,3,\dots,k$, $\mu$ another eigenvalue and $\vec w_1 $, $\vec w_2$,{\dots},$\vec w_l$ vectors such that $M\vec w_1=\mu\vec w_1$, $M\vec w_j=\mu\vec w_j+\vec w_{j-1}$ for $j=2,3,\dots,l$. Then $M\vec v_r(\vec w_s)^T+\vec v_r(\vec w_s)^TM^T=(\lambda+\mu)\vec v_r(\vec w_s)^T+\vec v_{r-1}(\vec w_s)^T+\vec v_r(\vec w_{s-1})^T$ (with the convention $\vec v_0=\vec w_0=0$). It follows from this equality that the map $L_M-(\lambda+\mu)Id$ maps the space ${\rm span}(\{\vec v_r(\vec w_s)^T|\quad 1\le r\le k, 1\le s\le l\})$ into the space spanned by the same matrices except the matrix $\vec v_k(\vec w_l)^T$. To end the proof it is enough to notice that if the vectors $\vec v_1,\dots,\vec v_k,\vec w_1,\dots,\vec w_l$ are linearly independent then so are the matrices $\vec v_r(\vec w_s)^T$. The proof is complete. {\it Remark}. A matrix with multiple eigenvalues may be treated as a limit of matrices with distinct eigenvalues. Using this, one can restrict attention to matrices with distinct eigenvalues as long as matrices with complex coefficients are considered. If the coefficients are in an arbitrary field this is not possible. \bigskip %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {\bf Problem 30. Mike Drazin} Let $S$ be any set with a given partial order $\le$, and, for any given subset $X\subseteq S$ and $y\in S$, say that $y$ is the (unique) maximum element of $X$, and write $y=\max X$, iff $y\in X$ and $x\le y$ for every $x\in X$; also, for any subset $Z\subseteq S$, define $\min Z$ analogously. (Of course, depending on $X$ and $Z$, $\max X$ and $\min Z$ may or may not exist.)
Suppose that to each $a\in S$ there are assigned corresponding subsets $X_{a}$ and $Z_{a}$ of $S$, and show that \begin{equation}\label{Drazin} \max X_{a}=\min Z_{a} \textrm{ for every } a\in S \textrm{ with } Z_{a} \textrm{ non-empty} \end{equation} if and only if (A)\qquad $X_{a}\cap Z_{a}$ is non-empty iff $Z_{a}$ is (for all $a\in S$), and (B)\qquad $X_{a}\le Z_{a}$ (i.e.\ $x\le z$ for all $x\in X_{a}$ and all $z\in Z_{a}$) for all $a\in S$ with $Z_{a}$ non-empty. \bigskip {\it Solution}. By the transitivity of $\le$, obviously (\ref{Drazin}) implies each of (A) and (B), and so we need only show that (A) and (B) together imply (\ref{Drazin}). Note that (B) trivially implies that for every $a\in S$, (C)\qquad $X_{a}\cap Z_{a}$ is either a singleton set or the empty set. [For, given any $u,v\in X_{a}\cap Z_{a}$, we have $u\in X_{a}$ and $v\in Z_{a}$, so that $Z_{a}$ is non-empty and hence $u\le v$ by (B). Similarly $v\le u$, and so, by anti-symmetry, $u=v$. Thus $X_{a}\cap Z_{a}$ can have at most one element.] Finally, (A), (B) and (C) together imply (\ref{Drazin}). For, given any $a\in S$ with $Z_{a}$ non-empty, (A) and (C) ensure that $X_{a}\cap Z_{a}$ is a singleton set, say $X_{a}\cap Z_{a} =\{w\}$. Then (B) gives $x\le w$ for all $x\in X_{a}$, i.e.\ $w$ is the unique maximum of $X_{a}$. Similarly $w$ is the unique minimum of $Z_{a}$, and so (\ref{Drazin}) holds. {\it Remark}. This is Proposition 1 of M. P. Drazin, Lin.\ Alg.\ Appl.\ 165 (1992), 185--196. \bigskip %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {\bf Problem 31. Taras Androshchuk, Kyiv Taras Shevchenko University, Ukraine} %%%%% E-mail address: nutaras@univ.kiev.ua %%%%% Prove that $$ \int_{0}^{1}\int_{0}^{1}\frac{dx\;dy}{x^{-1}+|\ln y|-1}\le 1. $$ \bigskip {\it Solution}.
We have the following inequality: $$ x^{-1}-1\ge|\ln x|,\;x\in(0,1], $$ which follows from $$ \left.(x^{-1}-1)\right|_{x=1}=\left.|\ln x|\right|_{x=1}=0, $$ $$ (x^{-1}-1)'=-\frac{1}{x^{2}}\le-\frac{1}{x}=|\ln x|',\;x\in(0,1]. $$ Therefore $$ \int_{0}^{1}\int_{0}^{1}\frac{dx\;dy}{x^{-1}+|\ln y|-1}\le\int_{0}^{1}\int_{0}^{1}\frac{dx\;dy}{|\ln x| +|\ln y|}=\int_{0}^{1}\int_{0}^{1}\frac{dx\;dy}{|\ln (x\cdot y)|}. $$ The solution now follows from the following Lemma, if we take $\psi(x)=1/|\ln x|$. \medskip{\bf Lemma.} Let $\psi:[0,1]\rightarrow{\mathbb R}$ be a measurable function such that $\psi(x)\ge 0$, $x\in[0,1]$. Then $$ \int_{0}^{1}\int_{0}^{1}\psi(u\cdot v)\;du\;dv\le\int_{0}^{1}\psi(u)|\ln (u)|\;du. $$ {\it Proof}. \begin{multline} \int_{0}^{1}\int_{0}^{1}\psi(u\cdot v)\;du\;dv=\\ =\lim_{n\rightarrow\infty}\sum_{k=0}^{n-1}\psi\left(\frac{k}{n}\right)\mu\left(\left\{(u,v):0\le u,v\le1,\frac{k}{n}\le u\cdot v <\frac{k+1}{n}\right\}\right),\nonumber \end{multline} where $\mu$ is the Lebesgue measure on ${\mathbb R^{2}}$. Let us estimate the measures of the sets above. \begin{multline} \mu\left(\left\{0\le u,v\le1,\frac{k}{n}\le u\cdot v <\frac{k+1}{n}\right\}\right)\le\\ \le\mu\left(\left\{\frac{k}{n}\le u\le 1,\frac{k}{n}\le u\cdot v <\frac{k+1}{n}\right\}\right)=\\ =\int_{\frac{k}{n}}^{1}\left(\frac{k+1}{n}-\frac{k}{n}\right)\;\frac{du}{u}=\frac{1}{n}\left(-\ln\left(\frac{k}{n}\right)\right).\nonumber \end{multline} We continue the estimation of the integral and obtain \begin{multline} \int_{0}^{1}\int_{0}^{1}\psi(u\cdot v)\;du\;dv\le\\ \le\lim_{n\rightarrow\infty}\sum_{k=1}^{n-1}\psi\left(\frac{k}{n}\right)\frac{1}{n}\left(-\ln\left(\frac{k}{n}\right)\right)=\int_{0}^{1}\psi(u)|\ln(u)|\;du.\nonumber \end{multline} \bigskip %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {\bf Problem 32.
Moubinool Omarjee, Paris} $ A= \left(\begin{array}{ccc} a & b & c \\ d & e & f \\ g & h & i \end{array} \right ), B=\left(\begin{array}{ccc} a & -c & b \\ -g & i & -h \\ d & -f & e \end{array} \right), $ are matrices in $M_{3}(K)$ with $K={\mathbb R}$ or ${\mathbb C}$. Show that $A$ and $B$ are similar. \bigskip {\it Solution}. $B=P^{-1}AP$ with $P=\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{array} \right)=R_{\frac{\pi}{2}}$, the matrix of a rotation by $\frac{\pi}{2}$ around the $x$-axis. \bigskip %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {\bf Problem 33. Moubinool Omarjee, Paris} $A$ and $B$ are matrices in $M_{n}({\mathbb R})$ such that $rank (A)=rank (B)=rank (A+B)=1$. Show that $Trace (AB)=Trace(A)\cdot Trace(B)$. \bigskip {\it Solution}. {\bf Lemma.} Let $N$ be a matrix such that $rank (N)=1$. Then $N^{2}=Tr(N)\cdot N$, where $Tr$ is the trace. {\it Proof}. Since $rank (N)=1$, all columns of $N$ are multiples of a single column $C$: $$ N=(k_{1}C|k_{2}C|\ldots|k_{n}C)=C\cdot K^{T}, $$ where $K^{T}$ is the transpose of $K=(k_{1},k_{2},\ldots,k_{n})^{T}$. Then $$ N^{2}=(C\cdot K^{T})(C\cdot K^{T})=C(K^{T}\cdot C)K^{T}=(K^{T}\cdot C)N=Tr(N)\cdot N. $$ We apply this lemma to $A$, $B$ and $A+B$, so that $A^{2}=Tr(A)A, B^{2}=Tr(B)B$ and $(A+B)^{2}=Tr(A+B)(A+B)$. Expanding, $$ A^{2}+B^{2}+AB+BA=Tr(A)A+Tr(B)B+Tr(B)A+Tr(A)B $$ $$ Tr(A)A+Tr(B)B+AB+BA=Tr(A)A+Tr(B)B+Tr(B)A+Tr(A)B $$ $$ AB+BA=Tr(B)A+Tr(A)B $$ Taking the trace gives $$ 2Tr(AB)=2Tr(A)\cdot Tr(B). $$ \bigskip %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {\bf Problem 34. Moubinool Omarjee, Paris} Let $E$ be a real vector space of dimension $n$ and $u$ a linear map from $E$ to $E$. Also, let $P$ be a real polynomial such that $0$ is a root of $P$ with multiplicity $k$, $2\le k< n$, and $P(u)=0_{L(E;E)}$. Show that $\ker u^{k}=\ker u^{k+1}$ and $E=\ker u^{k} \oplus \textrm{Im } u^{k}$. \bigskip {\it Solution}. $P(X)=X^{k}Q(X)$ with $Q(0)\neq 0$; the polynomials $X^{k}$ and $Q(X)$ are relatively prime.
Bezout's theorem implies the existence of polynomials $A$ and $B$ such that
$$
X^{k}\cdot A(X)+Q(X)\cdot B(X)=1,
$$
\begin{equation}\label{Omarjee1}
u^{k}A(u)+Q(u)B(u)=id.
\end{equation}
Since $X^{k}$ and $Q(X)$ are coprime, we also have the kernel decomposition $\ker(P(u))=E=\ker u^{k}\oplus\ker Q(u)$.

Clearly $\ker u^{k}\subset\ker u^{k+1}$. Now take $x$ in $\ker u^{k+1}$ and apply (\ref{Omarjee1}) to the vector $u^{k}(x)$: since $2k\ge k+1$,
$$
u^{k}(x)=A(u)u^{2k}(x)+B(u)Q(u)u^{k}(x)=0+B(u)P(u)(x)=0,
$$
so $\ker u^{k+1}\subset\ker u^{k}$. Therefore
$$
\ker u^{k}=\ker u^{k+1}.
$$
$P(u)=u^{k}Q(u)=0$ gives $\textrm{Im }Q(u)\subset\ker u^{k}$, and (\ref{Omarjee1}) gives
$$
x=u^{k}A(u)x+Q(u)B(u)x\in\textrm{Im }u^{k}+\ker u^{k}
$$
for every $x\in E$, so $E=\textrm{Im }u^{k}+\ker u^{k}$. Since the rank theorem gives $\dim\textrm{Im }u^{k}+\dim\ker u^{k}=\dim E$, this sum is direct: $E=\ker u^{k}\oplus\textrm{Im }u^{k}$.

\bigskip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{\bf Problem 35. Moubinool Omarjee, Paris}

Let $A$ be a matrix in $M_{n}({\mathbb C})\setminus\{0\}$ such that, for any $i,j,k,l$,
$$
a_{ik}\cdot a_{kl}\cdot a_{lj}=a_{kk}\cdot a_{ll}\cdot a_{ij}.
$$
Is $A$ diagonalisable?

\bigskip

{\it Solution}. Let $A=(a_{ij})$, $A^{2}=(c_{ij})$ and $A^{3}=(b_{ij})$, where
$$
c_{ij}=\sum_{k=1}^{n}a_{ik}\cdot a_{kj},
$$
$$
b_{ij}=\sum_{l=1}^{n}c_{il}\cdot a_{lj}=\sum_{k=1}^{n}\sum_{l=1}^{n}a_{ik}\cdot a_{kl}\cdot a_{lj}=\sum_{k=1}^{n}\sum_{l=1}^{n}a_{kk}\cdot a_{ll}\cdot a_{ij},
$$
using the hypothesis in the last step. Therefore
$$
b_{ij}=\left(\sum_{k=1}^{n}a_{kk}\right)\left(\sum_{l=1}^{n}a_{ll}\right)a_{ij}=(Tr A)^{2}a_{ij}.
$$
Hence
$$
A^{3}-(Tr A)^{2}\cdot A=0,\qquad\textrm{i.e.}\qquad A(A-(Tr A)I_{n})(A+(Tr A)I_{n})=0.
$$
If $Tr A\neq 0$, the polynomial $P(x)=x(x-Tr A)(x+Tr A)$ annihilates $A$ and has simple roots, so $A$ is diagonalisable. If $Tr A=0$, then $A^{3}=0$, so the nonzero matrix $A$ is nilpotent and not diagonalisable; this case does occur, e.g.\ for the matrix with $a_{12}=1$ and all other entries $0$, which satisfies the hypothesis. Hence $A$ is diagonalisable if and only if $Tr A\neq 0$.

\bigskip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{\bf Problem 36.
Moubinool Omarjee, Paris}

Let
$
A= \left(\begin{array}{ccc} 0 & 0 & 1 \\ i & 1+i & i \\ 1 & 0 & 0 \end{array} \right )
$
and
$
B=\left(\begin{array}{ccc} 1+i & i & i \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array} \right)
$
be matrices in $M_{3}({\mathbb C})$. Show that
\begin{enumerate}
\item{$A$ and $B$ are similar in $M_{3}({\mathbb C})$.}
\item{$\textrm{Re } A$ and $\textrm{Re } B$ are similar in $M_{3}({\mathbb R})$.}
\item{$\textrm{Im } A$ and $\textrm{Im } B$ are similar in $M_{3}({\mathbb R})$.}
\end{enumerate}
Here $\textrm{Re } A$ denotes the real part of $A$ and $\textrm{Im } A$ the imaginary part of $A$, taken entrywise.

\bigskip

{\it Solution}.
$
B=\left(\begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{array} \right)A\left(\begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{array} \right)=P^{-1}AP
$
with $P=\left(\begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{array} \right)$. Note that $P$ is a symmetry: $P^{2}=I_{3}$, so $P^{-1}=P$. Since $P$ is real, taking real and imaginary parts of $B=P^{-1}AP$ gives $\textrm{Re } B=P^{-1}\,\textrm{Re } A\,P$ and $\textrm{Im } B=P^{-1}\,\textrm{Im } A\,P$.

\bigskip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{\bf Problem 37. Moubinool Omarjee, Paris}

Let $p$ be a prime number and $B$ be the matrix in $M_{p}({\mathbb Z}/p{\mathbb Z})$ given by
$$
B=\left(\begin{array}{cccccc} 0 & 0 & 0 & \ldots & 0 & 0 \\ 1 & 0 & 0 & \ldots & 0 & 0 \\ 0 & 1 & 0 & \ldots & 0 & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots & \vdots \\ 0 & 0 & \ldots & 1 & 0 & 0 \\ 0 & 0 & \ldots & 0 & 1 & 0 \end{array} \right).
$$
Find all $A\in M_{p}({\mathbb Z}/p{\mathbb Z})$ such that $AB-BA=I_{p}$.

\bigskip

{\it Solution}. Write $A=(A_{ij})$. Since $B$ has ones on the subdiagonal and zeros elsewhere, $(AB)_{ij}=A_{i,j+1}$ and $(BA)_{ij}=A_{i-1,j}$, with the conventions $A_{0,j}=0$ and $A_{i,p+1}=0$. The equation $AB-BA=I_{p}$ therefore reads
$$
A_{i,j+1}-A_{i-1,j} = \left\{ \begin{array}{ll} 0 & \textrm{ if } i\neq j\\ 1 & \textrm{ if } i=j \end{array} \right. .
$$
If $i>j$, the entries of $A$ are constant along each diagonal on and below the main one: descending to the first column,
$$
A_{i,j+1}=A_{i-1,j}=\ldots=A_{i-j,1}.
$$
If $i=j$ we have
$$
A_{i,i+1}=A_{i-1,i}+1,
$$
so
$$
A_{p-1,p}=A_{1,2}+p-2;
$$
since $A_{1,2}=A_{0,1}+1=1$, this gives $A_{i,i+1}=i$ for $1\le i\le p-1$. If $i<j$ we have
$$
A_{i,j+1}=A_{i-1,j},
$$
and climbing to the first row, $A_{i,j+1}=A_{1,j-i+2}$; the first-row equations give $A_{1,m}=0$ for $m\ge 3$, so $A_{i,j}=0$ whenever $j>i+1$. (Such $A$ can exist only because $Tr(AB-BA)=0$ equals $Tr(I_{p})=p$ in ${\mathbb Z}/p{\mathbb Z}$.)

The matrix $A$ is therefore of the form $W+P(B)$, where $P$ is any polynomial in the ring ${\mathbb Z}/p{\mathbb Z}[X]$ and
$$
W=\left(\begin{array}{cccccc} 0 & 1 & 0 & \ldots & 0 & 0 \\ 0 & 0 & 2 & \ldots & 0 & 0 \\ \vdots & \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & \ldots & 0 & p-2 & 0 \\ 0 & 0 & \ldots & 0 & 0 & p-1 \\ 0 & 0 & \ldots & 0 & 0 & 0 \end{array} \right).
$$
Indeed, $P(B)$ runs over all matrices that are constant along the diagonals on and below the main one, and one checks directly that $WB-BW=I_{p}$ in $M_{p}({\mathbb Z}/p{\mathbb Z})$; the computation uses $p\equiv 0$, e.g.\ in position $(p,p)$ one gets $-(p-1)=1$.

\bigskip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{\bf Problem 38. Moubinool Omarjee, Paris}

Let $A,B$ be matrices in $M_{n}({\mathbb R})$ with $A^{T}=A$ and $B^{T}=B$, where $A^{T}$ is the transpose of $A$. Show that $(Tr(AB+BA))^{2}\le 4\,Tr(A^{2})\cdot Tr(B^{2})$.

\bigskip

{\it Solution}. $A$ has real entries and $A^{T}=A$, so there exists $P$ such that $P^{T}\cdot P=I_{n}$ and $D=P^{T}\cdot A\cdot P$ is diagonal. Let $C=P^{T}\cdot B\cdot P$. Then
$$
Tr(AB+BA)=Tr(DC+CD),\quad Tr(A^{2})=Tr(D^{2}),\quad Tr(B^{2})=Tr(C^{2}).
$$
Also, $D=diag(d_{1},\ldots ,d_{n})$ and $C=(c_{ij})$ with $c_{ij}=c_{ji}$, so
$$
Tr(DC+CD)=2\sum_{i} d_{i}c_{ii},\quad Tr (D^{2})=\sum_{i} d_{i}^{2},\quad Tr(C^{2})=\sum_{i}\sum_{j}c_{ij}^{2}.
$$
Using the Cauchy-Schwarz inequality,
$$
(Tr(DC+CD))^{2}\le 4\, Tr(D^{2})\cdot\sum_{i}c_{ii}^{2}\le 4\, Tr(D^{2})\sum_{i}\sum_{j}c_{ij}^{2}=4\cdot Tr(D^{2})\cdot Tr(C^{2}),
$$
hence
$$
(Tr(AB+BA))^{2}\le 4\, Tr(A^{2})\cdot Tr(B^{2}),
$$
with equality if $A$ and $B$ are proportional.

\bigskip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{\bf Problem 39. Moubinool Omarjee, Paris}

Let $E$ be an ${\mathbb R}$-vector space with $\textrm{dim }E=n$ and let $V_{1},\ldots ,V_{k}$ be subspaces of $E$ with
$$
\sum_{i=1}^{k}\textrm{dim }V_{i}>(k-1)n.
$$
Prove that $V_{1}\cap\ldots\cap V_{k}\neq\{0\}$.
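A small numerical illustration of the statement (a sketch with hypothetical data, for the case $k=2$, $n=3$): two distinct planes through the origin in ${\mathbb R}^{3}$ give $\dim V_{1}+\dim V_{2}=4>(k-1)n=3$, and indeed they always share a nonzero vector, e.g.\ the cross product of their normal vectors.

```python
# Sketch for k = 2, n = 3: V1, V2 are planes through the origin in R^3,
# described by hypothetical normal vectors n1, n2. The cross product
# n1 x n2 is orthogonal to both normals, hence lies in V1 and in V2;
# it is nonzero whenever the planes are distinct.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

n1 = (1, 2, 3)   # normal vector of V1 (sample data)
n2 = (4, 5, 6)   # normal vector of V2 (sample data)

v = cross(n1, n2)
# v = (-3, 6, -3): nonzero and orthogonal to n1 and n2, so v lies in V1 and V2.
```

The general proof below replaces this ad hoc construction with a pure dimension count.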
\bigskip

{\it Solution}. Let $W=V_{1}\times\ldots\times V_{k}$ and $F=E^{k-1}$. Define
$$
g:W\rightarrow F,\qquad g(x_{1},\ldots ,x_{k})=(x_{1}-x_{2},\ldots ,x_{k-1}-x_{k}).
$$
Then $g$ is a linear map from $W$ to $F$ and
$$
\textrm{dim }W=\sum_{i=1}^{k}\textrm{dim }V_{i}>(k-1)n=\textrm{dim }F,
$$
so $g$ is not injective. Take $(x_{1},\ldots ,x_{k})\in\ker g\setminus\{0\}$; the condition $g(x_{1},\ldots ,x_{k})=0$ gives $x_{1}=x_{2}=\ldots=x_{k}$, so $x_{1}\neq 0$ and $x_{1}\in V_{1}\cap \ldots \cap V_{k}$.

\bigskip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{document}