
Primal problem

$$
\begin{aligned}
\min \quad & f(x) \\
\text{s.t.} \quad & g(x) \leq b \\
& h(x) = d \\
& x \in X
\end{aligned}
$$

Lagrange problem

$$
\begin{aligned}
\min \quad & f(x)+\lambda\,(g(x)-b)+\mu\,(h(x)-d) \\
\text{s.t.} \quad & \lambda \geq 0 \\
& \mu \ \text{free} \\
& x \in X
\end{aligned}
$$

As we know, for any $\lambda \geq 0$ and any $\mu$,

$$
\begin{gathered}
\min_{x}\{f(x)+\lambda(g(x)-b)+\mu(h(x)-d) \mid g(x) \leq b,\ h(x) = d,\ x \in X\} \\
\leq \\
\min_{x}\{f(x) \mid g(x) \leq b,\ h(x) = d,\ x \in X\}
\end{gathered}
$$
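The inequality holds term by term: for any feasible $x$ and any $\lambda \geq 0$,

$$
\lambda\,(g(x)-b) \leq 0, \qquad \mu\,(h(x)-d) = 0,
$$

so the penalized objective never exceeds $f(x)$ on the feasible set.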

Moreover, dropping the constraints $g(x) \leq b$ and $h(x) = d$ enlarges the feasible set, so the minimum can only decrease or stay the same:

$$
\begin{gathered}
\min_{x}\{f(x)+\lambda(g(x)-b)+\mu(h(x)-d) \mid g(x) \leq b,\ h(x) = d,\ x \in X\} \\
\geq \\
\min_{x}\{f(x)+\lambda(g(x)-b)+\mu(h(x)-d) \mid x \in X\}
\end{gathered}
$$

Therefore

$$
\min_{x}\{f(x)+\lambda(g(x)-b)+\mu(h(x)-d) \mid x \in X\}
$$

is a lower bound on the optimal value of the primal problem, for every $\lambda \geq 0$ and every $\mu$. We therefore want to make this lower bound as large as possible.
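Maximizing the bound over the multipliers gives the Lagrangian dual problem:

$$
\max_{\lambda \geq 0,\ \mu}\ \min_{x \in X}\bigl\{f(x)+\lambda(g(x)-b)+\mu(h(x)-d)\bigr\}
$$

By the chain of inequalities above, its optimal value never exceeds the primal optimal value (weak duality).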

For specific $\lambda$ and $\mu$, we can solve the relaxed problem and obtain an optimal $x$. Evaluated at this $x \in X$, the lower bound is

$$
L(\lambda, \mu) = (g(x)-b)\,\lambda+(h(x)-d)\,\mu+f(x)
$$
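For a fixed $x$, this expression is affine in $(\lambda,\mu)$; taking the smallest value over all $x \in X$ gives the lower bound as a function of the multipliers (the dual function):

$$
L(\lambda,\mu) = \min_{x \in X}\bigl\{(g(x)-b)\,\lambda+(h(x)-d)\,\mu+f(x)\bigr\}
$$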

As the pointwise minimum of affine functions (one for each $x \in X$), $L(\lambda,\mu)$ is a non-differentiable concave function.

It is impossible for us to enumerate all $x \in X$, so we cannot see the overall shape of $L(\lambda,\mu)$. To find its "peak", i.e. to improve the lower bound, the subgradient method is a good choice.
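Concretely, if $x$ solves the relaxed problem at the current $(\lambda,\mu)$, then $(g(x)-b,\ h(x)-d)$ is a subgradient of $L$ there, and the usual projected subgradient step with step size $t_k > 0$ (these update formulas are standard and not spelled out in the note itself) is

$$
\lambda \leftarrow \max\bigl(0,\ \lambda + t_k\,(g(x)-b)\bigr), \qquad \mu \leftarrow \mu + t_k\,(h(x)-d)
$$

The $\max(0,\cdot)$ projection keeps $\lambda \geq 0$, while $\mu$ stays free.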

Then we update $\lambda$ and $\mu$, and re-solve the relaxed problem to update $x$.

Loop until the algorithm's stopping criterion is reached.

❗ It is crucial to select an appropriate step size (learning rate); a bad step size can lead to divergence.
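Below is a minimal Python sketch of the whole loop on a tiny made-up instance. The data `c`, `a`, `b`, `e`, `d`, the choice $X = \{0,1\}^n$, the 100-iteration limit, and the diminishing step size `t0 / k` are illustrative assumptions; only the structure (solve the relaxation, take a projected subgradient step, repeat) comes from the note above.

```python
import numpy as np

# A small made-up instance (all data here is illustrative, not from the note):
#   min  c @ x
#   s.t. a @ x <= b        (inequality constraint g(x) <= b)
#        e @ x == d        (equality constraint h(x) = d)
#        x in {0, 1}^n     (the set X)
c = np.array([-5.0, -4.0, -3.0, 1.0])
a = np.array([ 2.0,  3.0,  1.0, 2.0]); b = 4.0
e = np.array([ 1.0,  1.0,  1.0, 1.0]); d = 2.0

def solve_relaxed(lmbda, mu):
    """Minimize f(x) + lmbda*(g(x)-b) + mu*(h(x)-d) over x in {0,1}^n.

    With linear data the relaxed problem separates by coordinate:
    set x_j = 1 exactly when its reduced cost is negative.
    Returns the minimizer, the constraint residuals, and L(lmbda, mu).
    """
    reduced_cost = c + lmbda * a + mu * e
    x = (reduced_cost < 0).astype(float)
    bound = c @ x + lmbda * (a @ x - b) + mu * (e @ x - d)   # L(lambda, mu)
    return x, a @ x - b, e @ x - d, bound

lmbda, mu = 0.0, 0.0      # lambda >= 0, mu free
best_bound = -np.inf
t0 = 1.0                  # initial step size; tuning it matters (see the warning above)

for k in range(1, 101):
    x, g_res, h_res, bound = solve_relaxed(lmbda, mu)
    best_bound = max(best_bound, bound)     # every L(lambda, mu) is a valid lower bound

    t = t0 / k                              # diminishing step size
    lmbda = max(0.0, lmbda + t * g_res)     # projected subgradient step keeps lambda >= 0
    mu = mu + t * h_res                     # mu is free, so no projection

print(f"best lower bound after 100 iterations: {best_bound:.4f}")
```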
