Venues
venues, conference map, journals, literature culture
1 Why This Page
This page is not a ranking guide.
It is a venue-fit guide.
The point is not to ask “which venue is best?” in the abstract. The useful question is:
For this kind of result, who is the audience and what kind of evidence do they expect?
That question helps with both paper reading and research orientation.
It tells you:
- where a direction tends to live
- how theorem-heavy and experiment-heavy work are blended
- which papers are likely to be good reading targets for your current background
2 Venue Snapshot
- Type: top-level research map
- Setting: readers trying to understand where different kinds of work are typically published
- Main claim: venue names are most useful when read as signals about audience and evidence culture
- Why it matters: different venue families reward different balances of theorem, experiment, systems, and application content
3 How To Use This Page
When you see a paper, ask four questions:
1. What is the main contribution type? Theorem, algorithm, benchmark, system, application, or synthesis.
2. What evidence carries the paper? Proofs, experiments, ablations, empirical evaluation, or some mix.
3. Who is the natural audience? Learning theorists, broad ML readers, optimization people, statisticians, or graph-learning researchers.
4. What is the surrounding literature culture? Conference-first, journal-first, or mixed.
Venue fit becomes much clearer after those questions.
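The four questions above can be sketched as a small triage record. This is an illustrative sketch, not part of the site's tooling; the `VenueFit` class, its field names, and the example paper are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical checklist for triaging a paper before deciding how to read it.
# The allowed values in the comments mirror the four questions in the text.
@dataclass
class VenueFit:
    contribution: str  # theorem, algorithm, benchmark, system, application, synthesis
    evidence: str      # proofs, experiments, ablations, empirical evaluation, mix
    audience: str      # learning theorists, broad ML readers, statisticians, ...
    culture: str       # conference-first, journal-first, mixed

# An invented example: a theorem-first paper carried by proofs.
paper = VenueFit(
    contribution="theorem",
    evidence="proofs",
    audience="learning theorists",
    culture="conference-first",
)

# A theorem-first, proof-carried paper points toward learning-theory venues.
if paper.contribution == "theorem" and paper.evidence == "proofs":
    print("likely fit: learning-theory venues (e.g. COLT, JMLR)")
```

Filling in such a record forces the fit question ("who is the audience, what evidence do they expect?") before any venue name enters the picture.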
4 Cluster 1: Broad ML Conference Venues
4.1 Typical venues
- NeurIPS
- ICML
- ICLR
- AISTATS
4.2 Best for
- broad machine learning interest
- papers mixing theory and experiment
- model, method, or representation contributions with strong ML relevance
- active frontier work where fast iteration matters
4.3 Evidence culture
These venues often reward a good blend of:
- conceptual novelty
- strong experimental story
- enough theory to clarify or support the main mechanism when theory is relevant
This does not mean all papers look the same, but it does mean the audience is broader than a theorem-only community.
4.4 Where it appears in this site
5 Cluster 2: Learning-Theory Venues
5.1 Typical venues
- COLT
- JMLR
- theory-heavy tracks or theory-oriented papers in broad ML venues
5.2 Best for
- theorem-first contributions
- generalization, sample complexity, optimization theory, and online learning
- work where mathematical clarity is the main contribution
5.3 Evidence culture
Here theorems usually carry more weight than large experimental campaigns.
But readers still need to ask:
- what assumptions make the result meaningful?
- is the theorem illuminating a practical regime or only a stylized one?
- if experiments exist, are they illustrating the mathematics or replacing missing theory?
5.4 Where it appears in this site
6 Cluster 3: Optimization And Numerical Venues
6.1 Typical venues
- Mathematical Programming
- SIAM Journal on Optimization
- Mathematical Programming Computation
- optimization-related work in NeurIPS, ICML, or engineering conferences when ML relevance is central
6.2 Best for
- optimization theory
- solver design
- duality and certificate structure
- numerical methods with strong mathematical analysis
- differentiable optimization in research areas that still care about solver behavior
6.3 Evidence culture
These venues often care more about:
- formulation clarity
- assumptions and proof quality
- computational behavior tied to the theory
than about large benchmark collections alone.
6.4 Where it appears in this site
7 Cluster 4: Statistics And Probability Venues
7.1 Typical venues
- Annals of Statistics
- Annals of Probability
- JASA
- JRSS B
- Bernoulli
- AISTATS, when the work has strong ML overlap
7.2 Best for
- estimation theory
- uncertainty and calibration
- high-dimensional inference
- asymptotic and non-asymptotic statistical analysis
- probability tools that support modern data science
7.3 Evidence culture
These venues often read very differently from broad ML conferences.
The audience usually expects:
- sharper attention to assumptions
- stronger inferential interpretation
- more careful statistical framing
7.4 Where it appears in this site
8 Cluster 5: Algorithms And Theory-CS Venues
8.1 Typical venues
- STOC
- FOCS
- SODA
- related theory workshops and journals
8.2 Best for
- algorithmic guarantees
- complexity results
- lower bounds and impossibility results
- sketching, streaming, and discrete-math flavored theory
8.3 Evidence culture
Compared with broad ML venues, these communities often place more emphasis on:
- theorem novelty
- proof architecture
- asymptotic guarantees
- formal model choice
8.4 Where it appears in this site
- Theorem Families
- Paper Lab
- future Discrete Math and algorithms-adjacent parts of the site
9 Cluster 6: Graph And Data-Mining Venues
9.1 Typical venues
- KDD
- WWW
- graph-learning work also appears in NeurIPS, ICML, and ICLR
9.2 Best for
- graph mining and representation learning
- web-scale relational problems
- recommendation and network analysis
- graph methods where application framing matters strongly
9.3 Evidence culture
These venues often care about:
- the graph problem setting itself
- scale and datasets
- application relevance
- whether the method makes sense for real networked data
9.4 Where it appears in this site
10 How The Site’s Directions Map To Venue Clusters
10.1 High-dimensional probability and random matrices
Often lives across:
- probability and statistics journals
- COLT
- JMLR
- some theory-heavy papers in broad ML venues
10.2 Modern learning theory
Often lives across:
- COLT
- JMLR
- theory-facing papers in NeurIPS, ICML, and sometimes ICLR
10.3 Graph learning beyond simple message passing
Often lives across:
- NeurIPS
- ICML
- ICLR
- KDD
- WWW
10.4 Generative modeling through score, flow, and transport
Often lives across:
- NeurIPS
- ICML
- ICLR
- JMLR
10.5 Optimization inside learning and inference pipelines
Often lives across:
- optimization journals
- NeurIPS
- ICML
- engineering and control venues when the problem framing demands it
10.6 Representation geometry and in-context structure
Often lives across:
- ICLR
- NeurIPS
- ICML
- JMLR
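The direction-to-venue map in this section is just a lookup, and it can be sketched as one. The dictionary below only restates the venue names listed above; the table name and helper function are illustrative, and inclusion implies fit, not ranking.

```python
# Illustrative lookup from the site's research directions (Section 10)
# to the venue names listed there. Names only; no ranking implied.
DIRECTION_TO_VENUES = {
    "high-dimensional probability and random matrices":
        ["probability and statistics journals", "COLT", "JMLR"],
    "modern learning theory":
        ["COLT", "JMLR", "NeurIPS", "ICML", "ICLR"],
    "graph learning beyond simple message passing":
        ["NeurIPS", "ICML", "ICLR", "KDD", "WWW"],
    "generative modeling through score, flow, and transport":
        ["NeurIPS", "ICML", "ICLR", "JMLR"],
    "optimization inside learning and inference pipelines":
        ["optimization journals", "NeurIPS", "ICML",
         "engineering and control venues"],
    "representation geometry and in-context structure":
        ["ICLR", "NeurIPS", "ICML", "JMLR"],
}

def venues_for(direction: str) -> list[str]:
    """Return the venue names associated with a direction, or an empty list."""
    return DIRECTION_TO_VENUES.get(direction.lower(), [])

print(venues_for("Modern learning theory"))
```

Keeping the mapping explicit like this makes the page's main point concrete: a direction rarely lives in one venue, so the useful unit is the cluster, not the single name.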
11 Common Venue Mistakes
11.1 Mistake 1: reading venue name as a quality oracle
Venue names are signals about audience and expectations, not replacements for technical judgment.
11.2 Mistake 2: assuming theorem-heavy and benchmark-heavy papers are judged the same way
Different venue families tolerate different balances of proof, experiment, and systems detail.
11.3 Mistake 3: ignoring journals
If you only read conference papers, you miss a large amount of strong work in optimization, probability, and statistics.
11.4 Mistake 4: using venue as the only way to choose papers
Venue fit helps, but reading trails, theorem families, and surveys are still better first tools than prestige-chasing.
12 What To Learn Next
- Surveys, if you want literature entry points before venue-specific reading
- How Top-Venue Papers Are Shaped, if you want to understand the paper-construction side
- Claim-Evidence Matrix, if you want to analyze how venue expectations affect paper structure
13 Sources And Further Reading
- NeurIPS - Paper bridge - official home for one of the main broad ML conference venues. Checked 2026-04-25.
- ICML - Paper bridge - official home for a major broad ML venue with a wide algorithm-and-theory mix. Checked 2026-04-25.
- ICLR - Paper bridge - official home for the major representation-learning conference with strong deep-learning presence. Checked 2026-04-25.
- Association for Computational Learning / COLT - Paper bridge - official home for the learning-theory conference family. Checked 2026-04-25.
- Journal of Machine Learning Research - Second pass - important journal venue for theory, methodology, and broader ML work. Checked 2026-04-25.
- Proceedings of Machine Learning Research - Second pass - official proceedings hub for many ML and statistics conferences, including AISTATS and COLT. Checked 2026-04-25.