Lagrangian Relaxation / Aliens Trick¶
This lane is about one very specific optimization pattern:
- the original problem asks for the best value with exactly `K` choices / segments / operations
- the direct DP over that count is too expensive
- but if each choice costs a penalty `lambda`, the relaxed problem becomes easy
Then we:
- solve the relaxed problem for a fixed `lambda`
- also track how many choices the relaxed optimum uses
- binary search `lambda`
- recover the answer for the exact count `K`
This is the exact contest meaning of:
- `Alien DP`
- `Aliens Trick`
- `wqs binary search`
- `Lagrangian relaxation`
This page is not about generic Lagrange multipliers in continuous optimization.
It is about the discrete contest lane that sits next to Divide and Conquer DP, Knuth Optimization, and Slope Trick.
At A Glance¶
- Use when: the statement wants the best answer with exactly `K` picks / segments / operations, and adding one global penalty per pick makes the relaxed DP much cheaper
- Core worldview: replace "use exactly `K`" by "each use costs `lambda`", solve that penalized DP, and search for the slope where the optimal count crosses `K`
- Main tools: penalized objective, pair states `(penalized_value, count)`, tie-breaking by count, integer binary search on `lambda`
- Typical complexity: `solve_relaxed(lambda)` multiplied by an `O(log V)` search over the penalty range
- Main trap: using the trick before proving or importing the needed monotonicity / concavity, or tie-breaking the count in the wrong direction
Strong contest signals:
- "choose exactly `K` segments / facilities / picks"
- the direct DP over `K` is too slow, but the unconstrained or penalized version is linear / near-linear
- official editorials mention `Alien DP`, `wqs binary search`, `concavity of f(K)`, or `Lagrangian relaxation`
Strong anti-cues:
- `K` is small enough that ordinary `O(NK)` DP is already fine
- the expensive part is still one split-point row -> Divide and Conquer DP
- the state is one convex function over position/value -> Slope Trick
- the transition is affine and previous states become lines -> Convex Hull Trick / Li Chao Tree
Prerequisites¶
- DP Foundations
- Reasoning for "relaxed solver first" intuition and exchange-style proof habits
Helpful neighboring topics:
- Divide and Conquer DP
- Knuth Optimization
- Slope Trick as another "optimize the DP family, not the raw table" lane
Problem Model And Notation¶
Let:

- `K` be the required exact number of choices
- `f(k)` be the best original objective achievable with exactly `k` choices
The exact first route in this repo is a maximization problem.
For a fixed integer penalty lambda, define the relaxed objective: every feasible solution keeps its original value but pays `lambda` once per choice, so

`penalized_value = original_value - lambda * count`
The relaxed solver returns:
- the maximum penalized value
- and, among all optima with that value, the largest count
So the relaxed answer is one pair: `(penalized_value, count)`.
That tie-break direction is part of the algorithm, not a cosmetic choice.
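As a concrete encoding (an illustrative sketch, not tied to any particular problem): a Python `(value, count)` tuple compared with the built-in lexicographic order already implements "maximize the penalized value, then maximize the count":

```python
# (penalized_value, count) pairs for a maximization route.
# Python tuple comparison is lexicographic, so max() picks the
# larger value first and, on equal value, the larger count.
def better(a, b):
    return max(a, b)

print(better((7, 2), (7, 3)))  # -> (7, 3): tie on value, larger count wins
print(better((8, 1), (7, 3)))  # -> (8, 1): value dominates count
```

Using the native tuple order means the DP transitions never need a hand-written comparator, which is one fewer place for the tie-break direction to silently flip.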
Why Binary Search On lambda Works¶
When we increase lambda, every extra choice becomes less attractive.
So the optimal count in the relaxed problem moves downward.
The exact math language is:
- `f(k)` behaves like a concave discrete function in the contest lanes where the Aliens trick applies
- `lambda` acts like the slope we use to probe that concave envelope
In practice, the usable contest rule is:
- if `count(lambda) >= K`, the penalty is still cheap enough to allow at least `K` choices
- if `count(lambda) < K`, the penalty is too expensive
So we binary search the largest lambda such that the relaxed optimum still uses at least K choices.
Then `answer = penalized_value + lambda * K` recovers the original objective for exactly `K` choices.
Core Invariants¶
1. The Relaxed Solver Must Return (value, count)¶
A scalar DP value is not enough.
You must also know how many choices that relaxed optimum used.
2. Tie-Breaking Controls Boundary Correctness¶
For a maximization route with penalty subtraction:
- maximize penalized value
- on equal penalized value, prefer the larger count
This is exactly what keeps the count monotone and makes flat slopes recoverable.
Without the right tie-break, binary search may land on the wrong side of a plateau.
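A toy plateau makes this concrete (numbers invented for illustration): with concave values `f(k) = [0, 4, 8, 10]`, the marginal slope 4 repeats, so at `lambda = 4` three different counts tie at the relaxed optimum:

```python
f = [0, 4, 8, 10]   # toy concave f(k); the marginal slope 4 repeats
lam = 4
optima = [(f[k] - lam * k, k) for k in range(len(f))]
# counts 0, 1 and 2 all reach the same penalized value 0
best = max(optima)  # tuple order = max-count tie-break
assert best == (0, 2)
# K = 1 sits inside the plateau and is still recoverable:
assert best[0] + lam * 1 == f[1]
```

With a min-count tie-break the reported count would jump from 2 straight to 0 as lambda crosses 4, and the binary search could never certify that `K = 1` is reachable.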
3. Search On Integers When The Slopes Are Integral¶
Do not default to floating-point binary search.
If the objective and marginal slopes are integral, integer search is safer and exact.
This is especially important when the answer scale is large.
4. Recover The Original Objective At The End¶
The relaxed solver returns `(penalized_value, count)`, with the `lambda * count` penalty already subtracted.

So after finding the final penalty:

`answer = penalized_value + lambda * K`
Forgetting this last step is the most common implementation bug.
Worked Example: Red and Blue Lamps¶
In Red and Blue Lamps, after the standard transformation:
- choose exactly `R` positions
- no two chosen positions are adjacent
- each chosen position `i` contributes weight `B_i`
Let `B_i` be the weight of position `i` after the transformation.

For fixed penalty lambda, the relaxed problem is: choose any set of pairwise non-adjacent positions, where each chosen position `i` contributes `B_i - lambda`, and maximize the total, with no constraint on how many positions are chosen.
That relaxed problem is just a linear DP on a path:
- skip this position
- or take it and force the previous one to be skipped
So the expensive exact-R constraint disappears into:
- one linear DP
- one count field
- one binary search
This is the cleanest in-repo first route for Aliens trick.
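Under the model above, the whole route fits in a few lines. This is a Python sketch with names of my own choosing; the problem-specific transformation and I/O are omitted, and it assumes `r` is feasible and that the exact-count optimum `f(k)` is concave in `k` (which holds for this path model):

```python
NEG = float("-inf")

def solve_relaxed(b, lam):
    """Max-weight non-adjacent selection on a path where each taken
    position i contributes b[i] - lam.  Returns (penalized_value,
    count), preferring the larger count on ties via tuple max."""
    skip = (0, 0)    # best (value, count) if the current position is skipped
    take = (NEG, 0)  # best (value, count) if the current position is taken
    for w in b:
        new_take = (skip[0] + w - lam, skip[1] + 1)
        skip, take = max(skip, take), new_take
    return max(skip, take)

def best_with_exactly_r(b, r):
    """Aliens-trick wrapper: binary search the largest integer lam
    whose relaxed optimum still takes at least r positions, then
    add the penalty back for exactly r picks."""
    bound = max(abs(x) for x in b) + 1   # brackets every marginal slope
    lo, hi = -bound, bound
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if solve_relaxed(b, mid)[1] >= r:
            lo = mid
        else:
            hi = mid - 1
    return solve_relaxed(b, lo)[0] + lo * r
```

For example, `best_with_exactly_r([5, 1, 4, 2, 3], 2)` returns 9 (the non-adjacent positions with weights 5 and 4), and with `r = 3` it returns 12.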
Variant Chooser¶
Use Plain O(NK) DP When¶
- `K` is small enough
- or the direct table already fits comfortably
Use Lagrangian / Aliens Trick When¶
- exact `K` is the only expensive dimension
- the relaxed problem becomes much cheaper
- and you can trust the needed monotonicity / concavity behavior
Use Divide and Conquer DP Instead When¶
- the real optimized object is still one partition-DP row
- and the exact `K` dimension is solved directly by row transitions
Use Slope Trick Instead When¶
- the state is a convex function over coordinate/value
- not an exact-count family `f(k)`
Implementation Notes¶
- represent the relaxed DP state as `(value, count)`
- for maximization with penalized subtraction, compare by:
    - larger `value`
    - on tie, larger `count`
- binary search the largest `lambda` with `count >= K`
- recover by adding `lambda * K` back
- if the penalty range is wide, use `__int128` internally for penalized values
Common Failure Modes¶
- tie-breaking by the smaller count in a maximization route
- binary-searching decimals when the real slope is integral
- forgetting to add back `lambda * K`
- using the Aliens trick with no evidence that the count moves monotonically
- opening this lane when plain `O(NK)` DP was already enough
Reopen Paths¶
- exact starter and quick refresher -> Aliens Trick hot sheet
- exact flagship anchor -> Red and Blue Lamps
- compare direct-row optimization -> Divide and Conquer DP
- broader router -> Dynamic Programming