Matrix eigenvalue calculator
Author: h | 2025-04-24
This tool helps you estimate the eigenvalues of a matrix quickly and accurately.

How to Use the Eigenvalue Calculator

To use this eigenvalue calculator, follow these steps:

1. Enter the size of the matrix (between 2 and 5).
2. Fill in the matrix inputs that appear.
3. Click the "Calculate" button to get the eigenvalues of the matrix.

How the Calculator Works

This calculator uses standard numerical algorithms to estimate the eigenvalues of a matrix. The eigenvalues are found by solving the characteristic equation of the given matrix.

Limitations

The calculator has the following limitations:

- It can only handle matrices up to size 5×5.
- The accuracy of the eigenvalues may be affected by the precision limits of JavaScript's floating-point arithmetic.

Use Cases for This Calculator

- Population growth modeling: enter the matrix representing population growth and use its eigenvalues to analyze long-term growth trends.
- Climate change impact assessment: analyze the impact of climate change through mathematical modeling, supporting informed decisions for sustainable development.
- Economic forecasting: predict economic trends from historical data to make strategic decisions and allocate financial resources effectively.
- Risk assessment in financial markets: estimate eigenvalues to quantify risk, manage it proactively, and improve investment portfolio performance.
- Network traffic analysis: estimate eigenvalues of network traffic matrices to detect bottlenecks and improve overall network performance.
- Image compression: use eigenvalue estimates in compression algorithms to reduce image size without losing quality, optimizing storage and transmission.
- Structural engineering analysis: estimate eigenvalues to study the stability and behavior of structures, predict potential failures, and optimize designs for safety and durability.
- Quantum mechanics simulations: estimate eigenvalues in simulations to understand particle behavior, aiding research and development in physics.

Trace, determinant, and eigenvalues

First, for the matrix \(A\) from the example, the values down the diagonal were \(2\) and \(2\). Their sum is \(4\), which means the sum of the eigenvalues will be \(4\) as well. The sum of the entries along the diagonal is called the trace of the matrix, so we can say that the trace will always be equal to the sum of the eigenvalues:

\(\text{Trace}(A)=\text{sum of }A\text{'s eigenvalues}\)

Realize that this also means that, for an \(n\times n\) matrix \(A\), once we find \(n-1\) of the eigenvalues, we'll already have the value of the \(n\)th eigenvalue.

Second, the determinant of \(A\), \(|A|\), will always be equal to the product of the eigenvalues. In the last example, \(|A|=(2)(2)-(1)(1)=4-1=3\), and the product of the eigenvalues was \(\lambda_1\lambda_2=(1)(3)=3\):

\(\text{Det}(A)=|A|=\text{product of }A\text{'s eigenvalues}\)
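As a quick numerical check of these two identities, here is a minimal Python sketch using NumPy. The matrix \(A\) below is the one consistent with the example's quoted values (2s on the diagonal, off-diagonal entries of 1), chosen purely for illustration:

```python
import numpy as np

# Example matrix consistent with the values quoted above
# (2s on the diagonal, 1s off the diagonal).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues = np.linalg.eigvals(A)   # numerical eigenvalues, here 1 and 3

print("eigenvalues:", np.sort(eigenvalues))                      # [1. 3.]
print("trace      :", np.trace(A), "= sum  =", eigenvalues.sum())
print("determinant:", np.linalg.det(A), "= prod =", eigenvalues.prod())
```

The same check works for any square matrix the calculator accepts.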
Finding eigenvectors

Once we've found the eigenvalues for the transformation matrix, we need to find their associated eigenvectors. To do that, we'll start by defining an eigenspace for each eigenvalue of the matrix.

The eigenspace \(E_\lambda\) for a specific eigenvalue \(\lambda\) is the set of all the eigenvectors \(\vec{v}\) that satisfy \(A\vec{v}=\lambda\vec{v}\) for that particular eigenvalue \(\lambda\).

As we know, we were able to rewrite \(A\vec{v}=\lambda\vec{v}\) as \((\lambda I_n-A)\vec{v}=\vec{O}\), and we recognized that \(\lambda I_n-A\) is just a matrix. So the eigenspace is simply the null space of the matrix \(\lambda I_n-A\):

\(E_\lambda=N(\lambda I_n-A)\)

To find the matrix \(\lambda I_n-A\), we can simply plug the eigenvalue into the expression we found earlier for \(\lambda I_n-A\). Let's continue with the previous example and find the eigenvectors associated with \(\lambda=1\) and \(\lambda=3\).

Example

For the transformation matrix \(A\), we found eigenvalues \(\lambda=1\) and \(\lambda=3\). Find the eigenvectors associated with each eigenvalue.

With \(\lambda=1\) and \(\lambda=3\), we'll have two eigenspaces, given by \(E_\lambda=N(\lambda I_n-A)\). With \(\lambda=1\), the matrix is

\(\lambda I_n-A=\begin{bmatrix}1&0\\ 0&1\end{bmatrix}-\begin{bmatrix}2&1\\ 1&2\end{bmatrix}=\begin{bmatrix}-1&-1\\ -1&-1\end{bmatrix}\)

which row-reduces to \(\begin{bmatrix}1&1\\ 0&0\end{bmatrix}\). Therefore, the eigenvectors in the eigenspace \(E_1\) will satisfy

\(v_1+v_2=0\)

\(v_1=-v_2\)

So with \(v_1=-v_2\), we'll substitute \(v_2=t\), and say that

\(\begin{bmatrix}v_1\\ v_2\end{bmatrix}=t\begin{bmatrix}-1\\ 1\end{bmatrix}\)

which means that the eigenspace \(E_1\) is spanned by the single vector \(\begin{bmatrix}-1\\ 1\end{bmatrix}\).
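To make the eigenspace-as-null-space idea concrete, here is a small NumPy sketch (the matrix \(A\) is again the example's \(\begin{bmatrix}2&1\\ 1&2\end{bmatrix}\)); it computes a basis for \(N(\lambda I_n-A)\) for each eigenvalue via the singular value decomposition:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # example matrix: eigenvalues 1 and 3

def eigenspace_basis(A, lam, tol=1e-10):
    """Return a basis (as columns) for the null space of (lam*I - A)."""
    M = lam * np.eye(A.shape[0]) - A
    _, s, vt = np.linalg.svd(M)
    # Rows of vt whose singular value is (numerically) zero span the null space.
    return vt[s <= tol].T

for lam in (1.0, 3.0):
    print(f"eigenvalue {lam}: eigenspace basis =\n{eigenspace_basis(A, lam)}")
    # For lam = 1 the basis vector is a multiple of [-1, 1];
    # for lam = 3 it is a multiple of [1, 1].
```

The basis vector returned for \(\lambda=1\) is a scalar multiple of \(\begin{bmatrix}-1\\ 1\end{bmatrix}\), matching the hand calculation above.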
Eigenvalues

Under a transformation \(A\), an eigenvector \(\xi\) only changes in scale, to \(\lambda\) times the original; \(\xi\) is called an eigenvector of \(A\), and \(\lambda\) is the corresponding eigenvalue. In quantum mechanics, eigenvalues correspond to quantities that can be measured in experiments, whereas many other quantities in the theory cannot be measured directly; similar situations arise in other fields.

Formally, let \(A\) be an \(n\times n\) matrix. If there exists a constant \(\lambda\) and a nonzero \(n\)-dimensional vector \(x\) such that \(Ax=\lambda x\), then \(\lambda\) is called an eigenvalue of the matrix \(A\), and \(x\) is an eigenvector of \(A\) belonging to the eigenvalue \(\lambda\).

Eigenvectors

Mathematically, an eigenvector of a linear transformation is a nonzero vector whose direction is unchanged by the transformation; the factor by which it is scaled is its eigenvalue. A linear transformation can usually be fully described by its eigenvalues and eigenvectors, and an eigenspace is the set of eigenvectors sharing the same eigenvalue. The term comes from the German word eigen, which can be translated as "own", "specific to", "characteristic", or "individual"; Hilbert first used the word in this sense in 1904, and Helmholtz had used it earlier in a related sense. This reflects how central eigenvalues are to characterizing a particular linear transformation.
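The defining relation \(Ax=\lambda x\) is straightforward to verify numerically. In the sketch below (NumPy; the \(3\times3\) matrix is an arbitrary illustration, not one taken from the text), `np.linalg.eig` returns candidate eigenpairs and each one is checked against the definition:

```python
import numpy as np

# An arbitrary symmetric 3x3 matrix, chosen only for illustration.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)   # columns of `eigenvectors` are the x's

for k, lam in enumerate(eigenvalues):
    x = eigenvectors[:, k]
    ok = np.allclose(A @ x, lam * x)           # the defining relation A x = lambda x
    print("lambda =", np.round(lam, 6), "-> A x = lambda x holds:", ok)
```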
The adjacency matrix \(A\) of a graph with vertex set \(\mathcal{V}\) and edge set \(\mathcal{E}\) is defined by \(A_{ij} = 1\) if \((i,j) \in \mathcal{E}\), and \(A_{ij} = 0\) otherwise. In an undirected graph (the only kind we deal with in the paper), \(A\) is symmetric and therefore has only real eigenvalues. If the graph is \(d\)-regular, then \(d\) is an eigenvalue of \(A\) and is also its spectral radius. The multiplicity of \(d\) as an eigenvalue of \(A\) equals the number of connected components of the graph; thus the graph is connected if and only if \(d\) is a simple eigenvalue of \(A\). The graph is bipartite if and only if \(-d\) is an eigenvalue of \(A\). If the graph is bipartite, then its adjacency matrix looks like

$$A = \begin{bmatrix} 0 & B \\ B^\top & 0 \end{bmatrix},$$

where \(B \in \{0,1\}^{|\mathcal{V}_r| \times |\mathcal{V}_c|}\) is called the biadjacency matrix. The eigenvalues of \(A\) equal \(\pm \sigma_1, \ldots, \pm \sigma_l\) together with a suitable number of zeros, where \(l = \min \{ |\mathcal{V}_r|, |\mathcal{V}_c| \}\) and \(\sigma_1, \ldots, \sigma_l\) are the singular values of \(B\). Here, the singular values of \(B\) are the square roots of the nonnegative eigenvalues of \(B^\top B\). In particular, in a \((d_r,d_c)\)-biregular graph, \(\sqrt{d_r d_c}\) is the largest singular value of \(B\). These and other elementary facts about graphs can be found in [28].

Definition 1. A \(d\)-regular graph is said to be a Ramanujan graph if the second largest eigenvalue by magnitude of its adjacency matrix, call it \(\lambda_2\), satisfies

$$| \lambda_2 | \le 2 \sqrt{d-1}. \qquad (1)$$

A \(d\)-regular bipartite graph is said to be a bipartite Ramanujan graph if the second largest singular value of its biadjacency matrix, call it \(\sigma_2\), satisfies

$$\sigma_2 \le 2 \sqrt{d-1}. \qquad (2)$$

Note the distinction being made between the two cases. If a graph is \(d\)-regular and bipartite, then it cannot be a Ramanujan graph, because in that case \(\lambda_2 = -d\), which violates (1). On the other hand, if it satisfies (2), then it is called a bipartite Ramanujan graph.

The proof proceeds by explicitly computing \(B B^\top\) (or \(B^\top B\)) and determining its eigenvalues. Throughout we make use of the fact that \(P^\top = P^{-1}\). We begin with the case \(l \le q\). Use block-partition notation to divide \(B B^\top\) into \(l \times l\) blocks of size \(q \times q\). Then

$$(B B^\top)_{ij} = \sum_{s=1}^q P^{(i-1)(s-1)} (P^\top)^{(s-1)(j-1)} = \sum_{s=1}^q P^{(i-j)(s-1)} = \sum_{s=0}^{q-1} P^{(i-j)s}.$$

It readily follows that \((B B^\top)_{ii} = q I_q\) for \(i = 1, \ldots, l\). Now observe that, for any nonzero integer \(k\), the set of numbers \(ks \bmod q\), as \(s\) varies over \(\{ 0, \ldots, q-1 \}\), equals \(\{ 0, \ldots, q-1 \}\). (This is where we use the fact that \(q\) is a prime number.) Therefore, whenever \(i \ne j\), we have that \((B B^\top)_{ij} = \sum_{s=0}^{q-1} P^s = \mathbf{1}_{q \times q}\), where \(\mathbf{1}_{q \times q}\) denotes the \(q \times q\) matrix whose entries are all equal to one. We observe that \(ql\) is an eigenvalue of \(B B^\top\), with normalized eigenvector \((1/\sqrt{ql}) \mathbf{1}_{ql}\). Therefore, if we define \(M_l = B B^\top - \mathbf{1}_{ql \times ql}\) and partition it commensurately with \(B B^\top\), we see that the off-diagonal blocks of \(M_l\) are all equal to zero, while the diagonal blocks are all identical and equal to \(q I_q - \mathbf{1}_{q \times q}\). This is the Laplacian matrix of a fully connected graph with \(q\) vertices, and thus has \(q-1\) eigenvalues equal to \(q\) and one eigenvalue equal to \(0\).
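As a quick aside, this fact about the complete-graph Laplacian is easy to confirm numerically; a minimal NumPy sketch (with \(q=5\) as an illustrative choice) follows:

```python
import numpy as np

q = 5                                    # illustrative prime
L = q * np.eye(q) - np.ones((q, q))      # Laplacian of the complete graph K_q
eigs = np.sort(np.linalg.eigvalsh(L))    # symmetric, so use eigvalsh

print(eigs)                              # [0. 5. 5. 5. 5.]
assert np.isclose(eigs[0], 0) and np.allclose(eigs[1:], q)
```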
Therefore \(M_l = B B^\top - \mathbf{1}_{ql \times ql}\) has \(l(q-1)\) eigenvalues equal to \(q\) and \(l\) eigenvalues equal to \(0\). Moreover, \(\mathbf{1}_{ql}\) is an eigenvector of \(M_l\) corresponding to the eigenvalue zero. Therefore \(B B^\top = M_l + \mathbf{1}_{ql} \mathbf{1}_{ql}^\top\) has a single eigenvalue equal to \(ql\), \(l(q-1)\) eigenvalues equal to \(q\), and \(l-1\) eigenvalues equal to \(0\). This is equivalent to the claim about the singular values of \(B^\top\). Now we study the case where \(l \ge q\). Let \(M_q \in \{0,1\}^{q^2 \times q^2}\) denote the matrix from the previous case with \(l = q\).
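The claimed spectrum of \(B B^\top\) can be checked numerically. The sketch below is only an illustration under stated assumptions: it takes \(P\) to be the \(q\times q\) cyclic-shift permutation matrix (which satisfies \(P^\top=P^{-1}\) and, for prime \(q\), the mod-\(q\) covering property used above) and assembles \(B\) from the blocks \(P^{(i-1)(s-1)}\) implied by the displayed formula for \((B B^\top)_{ij}\); the paper's actual construction of \(B\) may differ.

```python
import numpy as np

q, l = 5, 3                      # q prime, l <= q (illustrative values)

# Cyclic-shift permutation matrix: P maps e_j to e_{j+1 mod q}, so P.T = P^{-1}.
P = np.roll(np.eye(q), 1, axis=0)

# Assumed construction: B has l x q blocks of size q x q, block (i, s) = P^{(i-1)(s-1)}.
B = np.block([[np.linalg.matrix_power(P, (i - 1) * (s - 1))
               for s in range(1, q + 1)]
              for i in range(1, l + 1)])

eigs = np.sort(np.linalg.eigvalsh(B @ B.T))
print(eigs)
# Expected: eigenvalue 0 with multiplicity l-1, q with multiplicity l(q-1),
# and a single eigenvalue q*l.
assert np.allclose(eigs[:l - 1], 0)
assert np.allclose(eigs[l - 1:-1], q)
assert np.isclose(eigs[-1], q * l)
```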
If, under a transformation \(T\), the span remains the same, such that \(T(\vec{v})\) has the same span as \(\vec{v}\), then you know \(\vec{v}\) is an eigenvector. The vectors \(\vec{v}\) and \(T(\vec{v})\) might be different lengths, but their spans are the same because they lie along the same line.

The reason we care about identifying eigenvectors is that they often make good basis vectors for the subspace, and we're always interested in finding a simple, easy-to-work-with basis.

Finding eigenvalues

Because we've said that \(T(\vec{v})=\lambda\vec{v}\) and \(T(\vec{v})=A\vec{v}\), it has to be true that \(A\vec{v}=\lambda\vec{v}\). Which means eigenvectors are any vectors \(\vec{v}\) that satisfy \(A\vec{v}=\lambda\vec{v}\).

We also know that there will be \(2\) eigenvalues when \(A\) is \(2\times2\), \(3\) eigenvalues when \(A\) is \(3\times3\), and \(n\) eigenvalues (counted with multiplicity) when \(A\) is \(n\times n\).

While \(\vec{v}=\vec{O}\) would satisfy \(A\vec{v}=\lambda\vec{v}\), we don't include it as an eigenvector. The reason is, first, that it doesn't give us any interesting information, and second, that \(\vec{v}=\vec{O}\) doesn't allow us to determine the associated eigenvalue \(\lambda\).

So we're really only interested in the vectors \(\vec{v}\) that are nonzero. If we rework \(A\vec{v}=\lambda\vec{v}\), we can write it as

\(\vec{O}=\lambda\vec{v}-A\vec{v}\)

\(\vec{O}=\lambda I_n\vec{v}-A\vec{v}\)

\((\lambda I_n-A)\vec{v}=\vec{O}\)

Realize that this is just a matrix-vector product set equal to the zero vector, because \(\lambda I_n-A\) is just a matrix. The eigenvalue \(\lambda\) acts as a scalar on the identity matrix \(I_n\), which means \(\lambda I_n\) will be a matrix. If, from \(\lambda I_n\), we subtract the matrix \(A\), we'll still just get another matrix, which is why \(\lambda I_n-A\) is a matrix. So let's make the substitution \(B=\lambda I_n-A\):

\(B\vec{v}=\vec{O}\)

Written this way, we can see that any vector \(\vec{v}\) that satisfies \(B\vec{v}=\vec{O}\) will be in the null space of \(B\), \(N(B)\). But we already said that \(\vec{v}\) was going to be nonzero, which tells us right away that there must be nonzero vectors in the null space of \(B\). A matrix with a nontrivial null space cannot be invertible, so \(B=\lambda I_n-A\) must have determinant zero:

\(\text{det}(\lambda I_n-A)=0\)

This is the characteristic equation, and solving it for \(\lambda\) gives the eigenvalues of \(A\).
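For small matrices, this equation can be formed and solved directly. A minimal Python sketch (NumPy; again using the example matrix with 2s on the diagonal and 1s off it) builds the coefficients of \(\det(\lambda I_n-A)\) and finds its roots:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Coefficients of det(lambda*I - A), highest degree first:
# for this A, the polynomial is lambda^2 - 4*lambda + 3.
coeffs = np.poly(A)
print("characteristic polynomial coefficients:", coeffs)   # [ 1. -4.  3.]

# The eigenvalues are the roots of the characteristic equation.
print("eigenvalues:", np.sort(np.roots(coeffs)))            # [1. 3.]
```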