Keynote Speakers

More information regarding our invited speakers will be announced on this page as it becomes available.

Ariel Felner (Ben Gurion University)

Title: The Mathematical Foundations of Bidirectional Heuristic Search

Many problems in AI are solved with heuristic search algorithms from the well-known A* family. In bidirectional search, a forward search from the start state and a backward search from the goal state are run simultaneously, with the aim that the two frontiers meet and yield a solution path from the start state to the goal state. Bidirectional search is a fundamental technique that dates back to the very beginning of research on heuristic search. However, for decades the mathematical foundations behind bidirectional search were missing. In the past few years there has been a significant breakthrough that introduced novel theoretical and mathematical understanding of what is required of a bidirectional search and how it generalizes unidirectional search. The talk will cover the line of research that led to these new developments while focusing on the following theme: what computational work must be done by a bidirectional search (in contrast to a unidirectional search), and whether and which algorithms can actually do it. The talk will introduce new terms such as must-expand pairs of nodes, GMX and its vertex cover, and more. Moreover, new algorithms based on this theory, such as MM, fractional MM, NBS, DVCBS, GBFHS, BAE*, and IDBiHS, will be described, along with a discussion of their benefits and drawbacks.
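As background for the frontier-meeting idea (and not a description of the specific algorithms the talk covers), here is a minimal illustrative sketch in Python: two searches, one forward from the start and one backward from the goal, expand alternately on an unweighted graph and stop when they meet. Deciding which nodes each side must expand, and guaranteeing that the meeting point yields an optimal solution, is precisely the kind of question the new theory addresses; this toy version makes no such guarantees. The dict-of-adjacency-lists graph representation is an assumption for the example.

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Illustrative meet-in-the-middle search on an unweighted graph.

    `graph` is assumed to be a dict mapping each node to an iterable of
    neighbors. Returns a start-to-goal path found where the two frontiers
    meet, or None if they never meet. Not guaranteed optimal.
    """
    if start == goal:
        return [start]

    # One frontier expands forward from the start, the other backward from
    # the goal; each keeps its own parent map for path reconstruction.
    parents_fwd, parents_bwd = {start: None}, {goal: None}
    frontier_fwd, frontier_bwd = deque([start]), deque([goal])

    def expand(frontier, parents, other_parents):
        # Expand one full layer of this frontier; report a meeting node if found.
        for _ in range(len(frontier)):
            node = frontier.popleft()
            for nbr in graph.get(node, ()):
                if nbr in parents:
                    continue
                parents[nbr] = node
                if nbr in other_parents:   # the two frontiers meet here
                    return nbr
                frontier.append(nbr)
        return None

    while frontier_fwd and frontier_bwd:
        # Expand the smaller frontier first (a common rule of thumb).
        if len(frontier_fwd) <= len(frontier_bwd):
            meet = expand(frontier_fwd, parents_fwd, parents_bwd)
        else:
            meet = expand(frontier_bwd, parents_bwd, parents_fwd)
        if meet is not None:
            # Stitch the two half-paths together at the meeting node.
            path, node = [], meet
            while node is not None:
                path.append(node)
                node = parents_fwd[node]
            path.reverse()
            node = parents_bwd[meet]
            while node is not None:
                path.append(node)
                node = parents_bwd[node]
            return path
    return None
```

For instance, `bidirectional_search({'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}, 'a', 'c')` returns `['a', 'b', 'c']`.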

Sergei Gukov (California Institute of Technology)

Title: AI and AC: The Andrews-Curtis Conjecture

Using a long-standing conjecture from combinatorial group theory, we explore, from multiple perspectives, the challenges of finding rare instances carrying disproportionately high rewards. Based on lessons learned in the context defined by the Andrews-Curtis conjecture, we propose algorithmic enhancements and a topological hardness measure with implications for a broad class of search problems. As part of our study, we demonstrate the length reducibility of all but two presentations in the Akbulut-Kirby series (1981) and resolve various potential counterexamples in the Miller-Schupp series (1991), including three infinite subfamilies. The talk is based on joint work with A. Shehper, A. Medina-Mardones, L. Fagan, B. Lewandowski, A. Gruen, Y. Qiu, P. Kucharski, and Z. Wang, as well as ongoing work that should be out by the time of the meeting.

Jeff Jaffe (Former CEO of the World Wide Web Consortium)

Title: Future Directions for Math and AI: In Conversation with Martin Golumbic

Xingquan Zhu (Florida Atlantic University)

Title: Scaling Heterogeneous Network Intelligence

Learning from large heterogeneous networks poses significant challenges due to their massive scale, diverse node and edge types, varying nodal features, and complex local neighborhood structures. The problem becomes even more intricate when object labels are not discrete—such as single- or multi-label—but continuous and represented as distributions. In this talk, we will present methods to address two key challenges: label uncertainty and scalability in large heterogeneous network learning. To handle label uncertainty, we formulate a graph label distribution learning task that leverages a graph transformer architecture to optimally aggregate information across meta-paths. This approach balances the influence of network topology and nodal attributes to effectively learn label distributions for heterogeneous nodes. To address scalability, we introduce an ensemble learning framework that trains multiple graph learners under distinct sampling conditions. This ensemble naturally captures different aspects of graph heterogeneity and enhances robustness. Experiments and analyses on large-scale heterogeneous networks demonstrate the effectiveness and efficiency of the proposed methods.
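The following is a minimal, self-contained sketch of the two ingredients mentioned above in their simplest possible form: fitting per-node label distributions with a KL-divergence objective, and averaging an ensemble of learners trained on different node samples. It is not the graph-transformer or meta-path architecture from the talk; the data, the linear learner, and all names are hypothetical placeholders, and graph structure is omitted entirely for brevity.

```python
import numpy as np
import torch
import torch.nn.functional as F

# Hypothetical data: an (N, D) node-feature matrix and an (N, C) matrix whose
# rows are target label distributions (each row sums to 1). Shapes and names
# are illustrative only.
rng = np.random.default_rng(0)
N, D, C = 200, 16, 4
features = torch.tensor(rng.normal(size=(N, D)), dtype=torch.float32)
raw = rng.random((N, C))
label_dist = torch.tensor(raw / raw.sum(axis=1, keepdims=True), dtype=torch.float32)

def train_learner(node_idx, epochs=200):
    """Fit one simple learner (a linear soft-max model) on a sampled node subset."""
    model = torch.nn.Linear(D, C)
    opt = torch.optim.Adam(model.parameters(), lr=0.05)
    x, y = features[node_idx], label_dist[node_idx]
    for _ in range(epochs):
        opt.zero_grad()
        log_pred = F.log_softmax(model(x), dim=1)
        # KL divergence between the predicted and target label distributions.
        loss = F.kl_div(log_pred, y, reduction="batchmean")
        loss.backward()
        opt.step()
    return model

# Ensemble: each member is trained on a different random half of the nodes,
# and the ensemble prediction is the average of the member distributions.
members = [
    train_learner(torch.tensor(rng.choice(N, size=N // 2, replace=False)))
    for _ in range(5)
]
with torch.no_grad():
    preds = torch.stack([F.softmax(m(features), dim=1) for m in members]).mean(dim=0)
print(preds[:3])  # three example predicted label distributions
```

Averaging member distributions is only the simplest way an ensemble can combine views obtained under different sampling conditions; the framework described in the talk presumably uses richer aggregation over the sampled heterogeneous subgraphs.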