Let's review the concept of Lagrangians. Suppose we wish to optimize a function \(f(x_1, x_2, \dots, x_n)\) under the constraint \(g(x_1, x_2, \dots, x_n) = 0\). From calculus, we know that without the constraint, the minima/maxima of \(f\) occur where the partial derivatives vanish. However, at an extremum restricted to the constraint set, the partial derivatives need not all vanish. For example, consider the paraboloid \(f(x, y) = x^2 + y^2\). The function is globally minimized when \(x = y = 0\). However, minimizing it subject to the constraint \(x^2/4 + y^2 = 1\) is trickier: the constraint set is an ellipse avoiding the origin, and the partial derivatives with respect to \(x\) and \(y\) do not simultaneously vanish anywhere on it.
Lagrange’s method provides a solution to this problem. Define a new function called the Lagrangian with an additional parameter \(\lambda\) as
\[\mathcal L (\lambda, x) \ := \ f(x) - \lambda g(x). \]
At a stationary point of the Lagrangian, the partial derivative with respect to \(\lambda\) vanishes, which recovers exactly the constraint \(g(x) = 0\). The stationary points of the Lagrangian therefore correspond to local/global extrema of \(f(x)\) on the constraint set; the term \(\lambda g(x)\) cancels out the nonvanishing component of the gradient at such an extremum.
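As a quick sanity check of the method on the example above, the following sketch (the ellipse parametrization and the grid resolution are my own choices, not from the text) scans the constraint set numerically and compares against the stationarity condition:

```python
import math

# Worked instance of the Lagrange condition for the example above:
# f(x, y) = x^2 + y^2 under the constraint g(x, y) = x^2/4 + y^2 - 1 = 0.
def f(x, y):
    return x**2 + y**2

# Parametrize the constraint ellipse as (2 cos t, sin t) and scan for the
# constrained minimum numerically.
ts = [k * 2 * math.pi / 10000 for k in range(10000)]
best_t = min(ts, key=lambda t: f(2 * math.cos(t), math.sin(t)))
x, y = 2 * math.cos(best_t), math.sin(best_t)
print(round(f(x, y), 6))   # 1.0 -- the constrained minimum, attained at (0, +-1)

# Stationarity check with multiplier lambda = 1 at (0, 1):
# grad f = (2x, 2y) = (0, 2)  and  lambda * grad g = (x/2, 2y) = (0, 2).
```

The minimum found numerically, \((0, \pm 1)\) with value \(1\), is exactly the point where \(\nabla f = \lambda \nabla g\) with \(\lambda = 1\).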
Note that the word Lagrangian is used loosely for quantities that deal with constraints. The example above is the prevalent demonstration, but the choice of Lagrangian varies with the type of problem. We study a variant called the Graph Lagrangian.
Suppose \(G\) is an \(n\)-vertex graph. We define a graph polynomial as follows.
\[p_G(x_1, \dots, x_n ) \ = \ \sum_{uv \in E(G)} x_u x_v\]
We are interested in the maximum of this polynomial subject to the constraints \(x_i \geq 0\) and \(\sum_i x_i = 1\). Call this maximum \(\lambda\). We bound this value from above and below.
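A minimal numeric sketch of this definition, using the triangle \(K_3\) as an illustrative graph (the graph choice and the random-search scheme are assumptions, not from the text):

```python
import random

# Graph Lagrangian sketch on the triangle K_3 (an illustrative choice).
edges = [(0, 1), (0, 2), (1, 2)]
n = 3

def p(x):
    """Graph polynomial p_G(x) = sum over edges uv of x_u * x_v."""
    return sum(x[u] * x[v] for u, v in edges)

uniform = [1 / n] * n
print(p(uniform))          # e(G)/n^2 = 3/9 = 0.333...

# Crude random search over the simplex to estimate lambda,
# seeded with the uniform point so the estimate never falls below it.
random.seed(0)
best = p(uniform)
for _ in range(20000):
    w = [random.random() for _ in range(n)]
    s = sum(w)
    best = max(best, p([wi / s for wi in w]))
# For K_3 the true maximum is 1/3, attained at the uniform point.
print(best)
```

For \(K_3\) the search never beats the uniform point, consistent with \(\lambda(K_3) = 1/3\).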
Proposition 1. (lower bound)
\[\lambda \ \geq \ \frac{e(G)}{n^2}\]
proof. We bound the graph polynomial from below without specifying the values of the \(x_i\)’s. Evaluating this lower bound at a feasible point then gives a value that is at most \(\lambda\) by construction.
Notice that
\[p_G(x_1, \dots, x_n) \ \geq \ e(G) \left(\prod_{uv \in E(G)} x_u x_v\right)^{1/e(G)} \]
by the AM-GM inequality. The product simplifies to
\[\left(\prod_{uv \in E(G)} x_u x_v\right)^{1/e(G)} \ = \ \prod_{u\in V(G)} x_u^{\text{deg} (u)/e(G)}\]
Since this lower bound holds for every feasible choice of \(x_1, \dots, x_n\), we are free to evaluate it at the uniform point \(x_u = 1/n\) for all \(u\). Recall that \(\sum_u \text{deg}(u) = 2 e(G)\), so that \(\sum_u \text{deg}(u)/e(G) = 2\). Hence
\[\prod_{u\in V(G)} \left(\frac 1 n\right)^{\text{deg} (u)/e(G)} \ = \ \left(\frac 1 n \right)^2 \ = \ \frac 1 {n^2}.\]
Therefore,
\[\lambda \ \geq \ \frac{e(G)} {n^2}\] □
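The mechanism of the lower bound can be checked with exact rational arithmetic; the sample graph below (a path on four vertices) is an illustrative choice, not from the text:

```python
from fractions import Fraction

# Check the lower-bound mechanism on a sample graph: the path P_4.
edges = [(0, 1), (1, 2), (2, 3)]
n, e = 4, len(edges)

def p(x):
    return sum(x[u] * x[v] for u, v in edges)

# Evaluating the graph polynomial at the uniform feasible point x_i = 1/n
# yields exactly e(G)/n^2, so lambda >= e(G)/n^2.
uniform = [Fraction(1, n)] * n
assert p(uniform) == Fraction(e, n * n)
print(p(uniform))  # 3/16
```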
Proposition 2. (upper bound)
Suppose the graph \(G\) is \(K_r\)-free. Then,
\[\lambda \ \leq \ \frac 1 2 \left(1 - \frac 1 {r - 1} \right) . \]
proof. We inspect the points where the graph polynomial attains its maximum. In particular, among all maximizers we pick one with the fewest nonzero entries, and claim that its nonzero entries are supported on a clique. Suppose for a contradiction that such a maximizer \(x_1, \dots, x_n\) has nonzero entries \(x_u, x_v\) on two distinct, non-adjacent vertices \(u, v\). Since \(uv \notin E(G)\), the polynomial is linear in \((x_u, x_v)\) along the line \(x_u + x_v = \text{const}\). Hence by concentrating the weight on one of the two vertices, e.g. \((x_u, x_v) \rightarrow (x_u + x_v, 0)\) or vice versa, the value of the polynomial can be increased, or at the least the number of nonzero entries can be decreased without changing the value. Either way the choice of maximizer is contradicted. Therefore at some global maximum, the vertices with nonzero entries form a clique.
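The weight-shifting step can be seen concretely on a small example (the three-vertex path is my own choice; its endpoints are non-adjacent):

```python
# Weight-shifting step: if u, v are non-adjacent and both carry weight,
# moving all of v's weight onto u (or vice versa) never decreases p_G.
# Sample graph: a path on vertices 0, 1, 2, where 0 and 2 are non-adjacent.
edges = [(0, 1), (1, 2)]

def p(x):
    return sum(x[u] * x[v] for u, v in edges)

x = [0.25, 0.5, 0.25]          # weights on the simplex; x_0 and x_2 nonzero
shift_to_0 = [0.5, 0.5, 0.0]   # all of vertex 2's weight moved onto vertex 0
shift_to_2 = [0.0, 0.5, 0.5]   # or vice versa
# p is linear in (x_0, x_2) along x_0 + x_2 = const, so one of the two
# shifts attains at least the original value, with fewer nonzero entries.
print(p(x), max(p(shift_to_0), p(shift_to_2)))  # 0.25 0.25
```

Here both shifts tie with the original value, so either one reduces the support without loss.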
Now suppose the graph \(G\) is \(K_r\)-free, so the clique supporting a maximizer has some size \(k \leq r - 1\). Within a clique \(K_k\), the polynomial equals \(\frac 1 2 \left(1 - \sum_i x_i^2\right)\), which is maximized by the uniform weight \(\frac 1 k\) on each clique vertex, giving the value
\[\binom k 2 \frac 1 {k^2} \ = \ \frac 1 2 \left(1 - \frac 1 k\right).\]
This value is increasing in \(k\), so taking \(k = r - 1\) yields
\[\lambda \ \leq \ \frac 1 2 \left(1 – \frac 1 {r – 1} \right) . \] □
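To see the bound in action, here is a sketch on a \(K_3\)-free example (the 4-cycle \(C_4\) is my own choice); with \(r = 3\) the bound is \(\frac 1 2 (1 - \frac 1 2) = \frac 1 4\):

```python
import random

# Upper bound check on a K_3-free sample graph: the 4-cycle C_4.
# Here r = 3, so the bound is (1/2)(1 - 1/(r-1)) = 1/4.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4

def p(x):
    return sum(x[u] * x[v] for u, v in edges)

# Weight 1/2 on each endpoint of a single edge (a K_2, the largest clique)
# attains the bound exactly.
assert p([0.5, 0.5, 0.0, 0.0]) == 0.25

# Random search over the simplex stays below the bound.
random.seed(0)
best = 0.25
for _ in range(20000):
    w = [random.random() for _ in range(n)]
    s = sum(w)
    best = max(best, p([wi / s for wi in w]))
print(best <= 0.25 + 1e-12)  # True
```

For \(C_4\) one can also see this directly: \(p = (x_0 + x_2)(x_1 + x_3) \leq 1/4\) whenever the weights sum to one.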
These two facts can be combined to prove a result slightly weaker than Turán’s theorem. Chaining the two inequalities, we get that
\[e(G) \ \leq \ \left(1 – \frac 1 {r – 1}\right) \frac{n^2} 2.\]
The result matches Turán’s theorem to leading order in \(n\). However, this argument alone does not recover the exact Turán bound, whose extremal edge count involves floor functions.
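For triangle-free graphs (\(r = 3\)) the inequality reads \(e(G) \leq n^2/4\); a small check on the complete bipartite graphs \(K_{m,m}\) (an illustrative family, my own choice) shows the bound is attained with equality there:

```python
# Weak Turan bound for triangle-free graphs (r = 3): e(G) <= n^2/4.
# The complete bipartite graph K_{m,m} is triangle-free and meets the
# bound with equality: e = m^2 edges on n = 2m vertices.
for m in range(1, 6):
    n = 2 * m
    e = m * m                               # edges of K_{m,m}
    bound = (1 - 1 / (3 - 1)) * n * n / 2   # = n^2 / 4
    assert e <= bound
    print(n, e, bound)
```

Equality for \(K_{m,m}\) is no accident: it is the Turán graph for \(r = 3\) on an even number of vertices.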