Modular Arithmetic¶
Modular arithmetic is the standard contest language for "the exact integer is huge, but its residue is enough." It looks elementary at first, but most real bugs in this topic come from forgetting that some familiar operations survive modulo m and some do not.
This page aims to make three things feel automatic:
- reduce early and often for `+`, `-`, and `*`
- only divide when an inverse actually exists
- only reduce exponents when a theorem really permits it
If Number Theory Basics is about divisibility structure, modular arithmetic is about computing safely inside one residue system.
At A Glance¶
Reach for modular arithmetic when:
- the statement literally asks for the answer modulo `MOD`
- counts, powers, or products become too large to store exactly
- you need repeated exponentiation, inverses, or binomial coefficients
- a DP counts ways and only the residue matters
- polynomial or matrix operations are performed under a fixed modulus
Strong contest triggers:
- print the answer modulo `1e9+7`
- `a^b mod MOD`
- `nCk mod MOD`
- divide by `x` modulo `MOD`
- `a^(b^c) mod MOD`
- many queries under one fixed prime modulus
Strong anti-cues:
- the modulus changes per query and the real issue is factorization or CRT
- the task is about exact rational values, not residues
- the modulus is composite and you were about to apply Fermat blindly
- the main problem is number-theoretic structure rather than residue computation
- the real difficulty is orbit counting under rotation or other symmetry, and modular arithmetic is only the helper layer
What success looks like after studying this page:
- you know exactly which algebraic moves are always legal mod `m`
- you can explain why modular division requires `gcd(a, m) = 1`
- you can choose between Fermat inverse and extended Euclid
- you can tell when exponent reduction is valid and when it is a trap
- you can precompute factorial and inverse-factorial tables without guessing
Prerequisites¶
- C++ Language For Contests
- Number Theory Basics for gcd and coprimality language
Problem Model And Notation¶
We write:

$$a \equiv b \pmod m$$

to mean that a and b leave the same remainder modulo m, or equivalently:

$$m \mid (a - b)$$

The residue system modulo m contains the classes:

$$\{0, 1, \dots, m - 1\}$$
In contest code, normalize(x) usually means:
x %= MOD
if x < 0: x += MOD
The most important operations are:
- addition
- subtraction
- multiplication
- exponentiation
- inversion when it exists
The first three are always compatible with congruence. Division is not.
Quick legality table:
| Operation | Always safe mod m? | What you need to check |
|---|---|---|
| `a + b` | yes | nothing extra |
| `a - b` | yes | normalize the result in code |
| `a * b` | yes | avoid overflow before `%` |
| `a^e` | yes for the base | reduce the base first; reducing the exponent needs a theorem |
| `a / b` | no | inverse of `b` must exist |
From Brute Force To The Right Idea¶
Why We Reduce At All¶
Suppose a DP counts paths, subsets, or strings. The exact answer might have hundreds of thousands of digits, but the judge only asks for the value modulo MOD.
The naive instinct is:
- compute the true integer answer
- take `% MOD` at the end
This is usually impossible because intermediate values overflow long before the computation finishes.
The Structural Observation¶
Congruence is stable under:
- addition
- subtraction
- multiplication
So we can replace large intermediate values by their residues immediately without changing the final answer modulo m.
That turns modular arithmetic from "last-step formatting" into "the arithmetic system we work in from the beginning."
The most common contest manifestation is a counting DP transition such as:
dp[v] = (dp[v] + dp[u]) % MOD
That line is sound because addition respects congruence, so the entire DP can live inside the residue system.
The First Big Trap: Division Is Different¶
In ordinary arithmetic, if:

$$a c = b c$$

and c != 0, we cancel c and conclude a = b.

Modulo m, this is not always sound.

For example, modulo 6:

$$2 \cdot 3 \equiv 4 \cdot 3 \pmod 6$$

but:

$$2 \not\equiv 4 \pmod 6$$
So modular arithmetic is not just "do the same thing, then % MOD". You must know which operations remain legal.
Core Invariant And Why It Works¶
Congruence Respects +, -, *¶
If:

$$a \equiv b \pmod m$$

then for any c:

$$a + c \equiv b + c, \qquad a - c \equiv b - c, \qquad a c \equiv b c \pmod m$$

This follows directly from the definition m | (a - b).

That is why we may reduce after every addition or multiplication in DP, combinatorics, and matrix transitions.
Exponentiation Respects Congruence In The Base¶
If:

$$a \equiv b \pmod m$$

then for any integer k >= 1:

$$a^k \equiv b^k \pmod m$$

So base reduction is always safe:

$$a^e \equiv (a \bmod m)^e \pmod m$$

This is why powmod starts by reducing the base.
Why Inverses Exist Exactly When gcd(a, m) = 1¶
We say a has a multiplicative inverse modulo m if there exists x such that:

$$a x \equiv 1 \pmod m$$

This is equivalent to:

$$a x + m y = 1$$

for some integer y, which is exactly a Bézout identity. Such an identity exists if and only if:

$$\gcd(a, m) = 1$$
So the rule is:
- inverse exists iff `a` and `m` are coprime
This one theorem explains both:
- why modular division is sometimes legal
- why it sometimes fails completely
Fermat's Little Theorem And Why Prime Moduli Feel Easy¶
If p is prime and a is not divisible by p, then:

$$a^{p-1} \equiv 1 \pmod p$$

Multiply both sides by a^{-1} and we get:

$$a^{p-2} \equiv a^{-1} \pmod p$$

That is the standard contest formula for inverses under a prime modulus:

$$a^{-1} \equiv \mathrm{powmod}(a,\, p - 2,\, p) \pmod p$$

This is the main reason prime moduli like 1_000_000_007 and 998244353 are so friendly:

- every nonzero residue has an inverse
- fast exponentiation gives that inverse in `O(log p)`
Why Exponent Reduction Needs A Theorem¶
This is the second big trap.
You may always reduce the base modulo m, but you may not always reduce the exponent modulo m.
MIT's notes make this warning explicit. As a small illustration, modulo 3:

$$2^4 \equiv 1 \pmod 3$$

but naively reducing the exponent gives:

$$2^{4 \bmod 3} = 2^1 \equiv 2 \pmod 3$$

Exponent reduction only becomes valid when a theorem gives a cycle length.

For prime p and a \not\equiv 0 \pmod p:

$$a^e \equiv a^{e \bmod (p-1)} \pmod p$$

More generally, if gcd(a, m) = 1, Euler's theorem gives:

$$a^{\varphi(m)} \equiv 1 \pmod m$$

so exponents can be reduced modulo \varphi(m).
If a is not coprime with the modulus, you must handle edge cases separately rather than applying exponent reduction blindly.
Complexity And Tradeoffs¶
- `powmod(a, e, m)`: `O(log e)`
- one inverse by extended Euclid: `O(log m)`
- one inverse by Fermat under prime modulus: `O(log m)`
- factorial / inverse-factorial precompute up to `N`: `O(N)`
- one `nCk` query after precompute: `O(1)`
Practical tradeoffs:
| Situation | Best tool | Why |
|---|---|---|
| fixed prime modulus, one inverse | Fermat + `powmod` | easy and contest-standard |
| composite modulus or inverse existence unclear | extended Euclid | directly checks `gcd(a, m) = 1` |
| many `nCk mod p` queries with one prime modulus | factorial + inverse-factorial tables | one build, then `O(1)` queries |
| exponent tower under prime modulus | exponent reduction modulo `p - 1` plus edge cases | reuses the Fermat cycle |
| arbitrary modulus with factorial factors sharing primes | more advanced number theory / CRT | naive inverse-factorial formula may break |
Rule of thumb:
- prime modulus -> the workflow is much simpler
- composite modulus -> always ask whether the needed inverse exists
- when you see `% MOD`, ask whether the task is merely arithmetic, or whether gcd / factorization structure is secretly driving the legality
Variant Chooser¶
| Variant | What changes | When to choose it | Where it gets awkward |
|---|---|---|---|
| prime-mod helper set | `powmod`, `inv`, `nCk` by factorials | fixed prime modulus, standard DP/combinatorics | wrong if modulus is composite or `n >= mod` assumptions break |
| composite-mod inverse | use extended Euclid | modulus is not prime or inverse existence must be checked | no inverse when `gcd(a, m) != 1` |
| factorial / inverse-factorial tables | precompute once | many `nCk` queries under one prime modulus | invalid if denominator contains noninvertible factors mod composite `m` |
| exponent reduction | shrink exponent mod cycle length | exponent towers, repeated powering | dangerous when coprimality or zero-base edge cases are ignored |
| CRT / prime-power handling | combine several local mod systems | arbitrary modulus with richer combinatorics | outside the basic toolkit of this page |
Worked Examples¶
Example 1: Binary Exponentiation¶
Suppose we need:

$$3^{13} \bmod 17$$

Binary exponentiation maintains the invariant:

$$\text{answer} \cdot \text{base}^{\text{exp}} \equiv 3^{13} \pmod{17}$$

Start with:

- `answer = 1`
- `base = 3`
- `exp = 13`

Binary form of 13 is 1101, so we repeatedly:

- multiply by the current base when the low bit is `1`
- square the base
- shift the exponent right

One possible trace:

- `exp = 13` odd -> `answer = 1 * 3 = 3`, `base = 9`
- `exp = 6` even -> `answer = 3`, `base = 13`
- `exp = 3` odd -> `answer = 3 * 13 = 39 ≡ 5`, `base = 16`
- `exp = 1` odd -> `answer = 5 * 16 = 80 ≡ 12`

So:

$$3^{13} \equiv 12 \pmod{17}$$
The point is not the arithmetic itself, but the invariant: every loop preserves the value of the original power modulo m.
Example 2: Inverse Exists Or Does Not Exist¶
Consider 3^{-1} mod 11.

Since:

$$\gcd(3, 11) = 1$$

the inverse exists. In fact:

$$3 \cdot 4 = 12 \equiv 1 \pmod{11}$$

so the inverse is 4.

Now consider 2^{-1} mod 6.

Here:

$$\gcd(2, 6) = 2 \neq 1$$

so no inverse exists.
This is the cleanest example of why modular division is conditional, not a built-in operation.
Example 3: nCk Mod A Prime¶
If MOD is prime and we need many binomial queries, use:

$$\binom{n}{k} = \frac{n!}{k! \,(n-k)!}$$

and rewrite division as multiplication by inverses:

$$\binom{n}{k} \equiv n! \cdot (k!)^{-1} \cdot ((n-k)!)^{-1} \pmod{\mathrm{MOD}}$$

After precomputing:

- `fact[i] = i! mod MOD`
- `inv_fact[i] = (fact[i])^{-1} mod MOD`
one query becomes:
fact[n] * inv_fact[k] * inv_fact[n-k] mod MOD
This is the standard contest pattern when many combinatorics queries share one prime modulus.
Example 4: Exponentiation II And Exponent Reduction¶
Suppose we want:

$$a^{b^c} \bmod p$$

for a prime p.

If a \not\equiv 0 \pmod p, Fermat gives the cycle length p - 1, so:

$$a^{b^c} \equiv a^{\,b^c \bmod (p-1)} \pmod p$$

So the computation becomes:

- compute `e = b^c mod (p - 1)`
- compute `a^e mod p`

But if a \equiv 0 \pmod p, you must separate:

- exponent `0` -> answer often treated as `1` by contest convention
- exponent `> 0` -> answer `0`
This is why exponent reduction is a theorem-driven optimization, not an unconditional % (p-1) trick.
Algorithm And Pseudocode¶
This repo's canonical helpers are:
Normalize¶
normalize(x, MOD):
x = x % MOD
if x < 0:
x += MOD
return x
Binary Exponentiation¶
powmod(a, e, MOD):
a = normalize(a, MOD)
result = 1 % MOD
while e > 0:
if e is odd:
result = (result * a) % MOD
a = (a * a) % MOD
e >>= 1
return result
Inverse By Fermat (Prime Modulus Only)¶
inv_prime(a, MOD):
return powmod(a, MOD - 2, MOD)
Inverse By Extended Euclid¶
inv_general(a, MOD):
find x, y such that ax + MOD*y = gcd(a, MOD)
if gcd(a, MOD) != 1:
inverse does not exist
return normalize(x, MOD)
Many nCk Queries Under One Prime Modulus¶
build_comb(N, MOD):
fact[0] = 1
for i from 1 to N:
fact[i] = fact[i-1] * i mod MOD
inv_fact[N] = powmod(fact[N], MOD - 2, MOD)
for i from N down to 1:
inv_fact[i-1] = inv_fact[i] * i mod MOD
binom(n, k):
if k < 0 or k > n:
return 0
return fact[n] * inv_fact[k] * inv_fact[n-k] mod MOD
Implementation Notes¶
1. Normalize After Subtraction¶
In C++:
(-1) % MOD
is negative. So write:
(a - b + MOD) % MOD
or use a normalize helper.
2. Cast Before Multiplying¶
If a and b are int, then:
a * b
may overflow before % MOD is applied. Promote to long long first.
3. Modular Division Is Multiplication By An Inverse¶
Never write ordinary division under the modulus unless you have already proved the denominator is invertible.
The real question is always: is the denominator coprime with the modulus, that is, does `gcd(b, MOD) = 1` hold?
4. Prime-Mod Factorial Formulas Have Hidden Assumptions¶
The usual factorial formula for nCk mod MOD assumes you can invert the denominator factors.
That is immediate for a prime modulus when those factors are nonzero modulo MOD, but it becomes delicate for composite moduli or prime powers.
There is also a very practical contest assumption hiding here:
- if `MOD` is prime and `0 <= n < MOD`, then `1, 2, ..., n` are all nonzero modulo `MOD`
- if `n >= MOD`, then `n!` is already divisible by `MOD`, so the naive factorial / inverse-factorial story needs more care
Tiny anti-example:
- modulo `5`, the value `C(5, 1)` is perfectly meaningful and equals `0`
- but fact[5] \equiv 0 \pmod 5, so the naive table formula using `inv_fact[5]` is not even well-formed
That is the point where Lucas-style reasoning or prime-power handling starts to matter.
5. Exponent Reduction Is Not Base Reduction¶
You may always reduce:
a -> a % MOD
You may not always reduce:
e -> e % MOD
without a theorem like Fermat or Euler, plus the right coprimality conditions.
6. Prime Modulus And Composite Modulus Are Different Worlds¶
Under a prime modulus:
- every nonzero value is invertible
- Fermat inverse is available
- combinatorics formulas are cleaner
Under a composite modulus:
- some nonzero residues have no inverse
- cancellation may fail
- factorial-based formulas may need prime-factor bookkeeping or CRT
7. The Topic Often Splits Into Two Contest Workflows¶
When you see modular arithmetic, ask which workflow this really is:
- computation workflow: `powmod`, inverse, factorials, direct residue operations
- legality workflow: does the inverse exist, can I reduce exponents, is prime-vs-composite the real issue
Many wrong solutions fail not because the arithmetic loop is hard, but because the legality check was skipped.
8. Judge Conventions Still Matter¶
Contest problems sometimes define edge cases explicitly, for example:
0^0 = 1
So even when the modular algebra is clean, still read the statement for any convention that overrides your mathematical instinct on boundary cases.
Practice Archetypes¶
Use modular arithmetic when the problem feels like one of these:
- mod power: repeated exponentiation under one modulus
- prime-mod inverse: divide by a value modulo a prime
- prime-mod combinatorics: many `nCk` or counting queries under one fixed prime
- exponent tower: outer power is easy, inner exponent must be reduced
- composite-mod warning: the statement tempts you to divide, but invertibility is the real question
Good internal practice anchors:
- Exponentiation: first warm-up for `powmod`
- Exponentiation II: exponent reduction under a prime modulus
Suggested progression:
- master `powmod`
- understand inverse existence via gcd
- use Fermat inverse under prime mod
- precompute factorial / inverse-factorial tables
- only then trust exponent reduction tricks automatically
References And Repo Anchors¶
Research sweep refreshed on 2026-04-24.
Course:
- MIT 6.1200J / 18.062J Lecture 09: Modular Arithmetic
- MIT Mathematics for Computer Science lecture notes
Reference:
- CP-Algorithms: Modular Inverse
- CP-Algorithms: Binary Exponentiation
- CP-Algorithms: Binomial Coefficients
Practice:
Repo anchors:
- Modular arithmetic ladder
- Exponentiation
- Exponentiation II
- modular-arithmetic.cpp
- factorial-binomial-mod.cpp
- extended-gcd-diophantine.cpp
- Number theory cheatsheet