Call for blog posts
GRaM is about taking geometry seriously!
For the second edition of the GRaM workshop, we invite blog posts that explore how geometric thinking can ground machine learning models: how respecting structure can lead to better representations, more interpretable models, and sharper theoretical insight.
Key dates
- Submission deadline: Friday, 26 February, 9:00 AoE
- Modifications to your blog post can be made via a pull request on GitHub.
- Notification of acceptance: Friday, 11 March, 9:00 AoE
- Camera-ready merge: Friday, 11 March, 9:00 AoE
Why blog posts?
Blog posts are a space to:
- explain ideas that do not yet fit neatly into full theorem–experiment pipelines,
- reflect on why a method works (or fails), not just that it works,
- connect distant strands of theory, practice, and intuition,
- articulate open problems, negative results, or conceptual frameworks,
- teach, synthesize, speculate, and provoke!
Scientific progress does not happen only through finalized results. It also happens through clear explanations, honest reflections, and shared intuition. Blog posts foster a more communal kind of science—and we want to make room for that.
Scope and spirit
We welcome contributions that are broad, creative, and intellectually generous. This includes (but is very much not limited to):
- Explanations of recent papers (including your own)
- Conceptual overviews of geometric ideas in ML
- Connections between geometry and modern large models
- Thoughtful critiques, limitations, or negative results
- Open problems and research agendas
- Tutorials, perspectives, or unifying viewpoints
If it meaningfully relates to geometry and learning, it belongs here.
Themes of interest
The workshop focuses on geometrically grounded approaches, meaning methods that respect the structure of the problem domain and support geometric reasoning. Topics of interest include:
Preserving data geometry
- Preservation of symmetries (e.g. equivariant operators)
- Geometric representation systems (e.g. Clifford algebras, steerable representations, Clebsch–Gordan products)
- Isometric or metric-aware latent representations
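To make "preservation of symmetries" concrete: an operator f is equivariant when transforming the input and then applying f gives the same result as applying f and then transforming the output. The sketch below (a purely illustrative NumPy example, not part of the submission requirements) checks this property for a toy pointwise operator under a 3D rotation.

```python
import numpy as np

# Illustrative sketch: an operator f is rotation-equivariant
# when f(x @ R.T) == f(x) @ R.T for every rotation matrix R.
def f(points):
    # A toy pointwise operator: scale each point toward the origin.
    # Scalar multiplication commutes with any linear map, so it is equivariant.
    return 0.5 * points

rng = np.random.default_rng(0)
x = rng.normal(size=(10, 3))             # a toy 3D point cloud

theta = 0.7                              # rotation about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

lhs = f(x @ R.T)                         # rotate, then apply f
rhs = f(x) @ R.T                         # apply f, then rotate
print(np.allclose(lhs, rhs))             # True: f commutes with the rotation
```

A blog post in this theme might start from exactly such a check and build up to genuinely nontrivial equivariant layers.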
Inducing geometric structure
- Geometric priors via curvature, symmetry, or topology
- Non-Euclidean generative models (e.g. diffusion or flow matching on manifolds)
- Metric-preserving or geodesic-aware embeddings
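As a small illustration of "geodesic-aware" operations, the sketch below (again illustrative, using NumPy; the function name is our own) implements the exponential map on the unit sphere, which moves a point along a geodesic in a given tangent direction. This is a basic building block for manifold-valued embeddings and for diffusion or flow matching on manifolds.

```python
import numpy as np

# Illustrative sketch: the exponential map on the unit sphere S^2.
# Given a point p on the sphere and a tangent vector v at p (v ⟂ p),
# it returns the point reached by following the geodesic for length |v|.
def sphere_exp(p, v):
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:
        return p                         # zero step: stay at p
    return np.cos(norm_v) * p + np.sin(norm_v) * (v / norm_v)

p = np.array([0.0, 0.0, 1.0])            # the north pole
v = np.array([np.pi / 2, 0.0, 0.0])      # tangent vector at p

q = sphere_exp(p, v)                     # a quarter turn lands on the equator
print(np.round(q, 6))
```

Replacing the Euclidean update `p + v` with such a map is the essence of many non-Euclidean generative and embedding methods.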
Geometry in theoretical analysis
- Data and latent geometry (data manifolds, statistical manifolds)
- Loss landscape geometry and optimization on manifolds
- Theoretical frameworks from differential geometry, algebraic geometry, or group theory
- Open problems at the intersection of geometry and learning
Scale and simplicity
- Geometry at scale: does equivariance still matter for large models?
- Redundancy and minimality: when is geometry essential, and when is it unnecessary?
- Challenging assumptions: negative results and failure modes of geometric methods
Again, this list is not exhaustive. We explicitly encourage submissions that stretch, reinterpret, or question these categories.