Schedule
(Subject to change)
- Sunday
- Monday
- Tuesday
- Wednesday
- Thursday
- Friday
National University of Singapore
Lecture Theatre 31
Faculty of Science S16
6 Science Drive 2
Singapore 117546
Abstract: In this tutorial, we will explore how techniques from property testing and interactive proofs can be leveraged in quantum learning theory. Concretely, we will consider the following questions: How can we test the assumptions of quantum learning algorithms? How can we delegate quantum learning algorithms in a verifiable manner? And how can we certify the hypotheses produced by quantum learning algorithms?
Abstract: As quantum devices grow in size while remaining affected by noise, a central question is when and how they can surpass classical methods in practice. Pauli propagation has recently emerged as a powerful classical simulation framework, significantly raising the benchmark for demonstrating quantum advantage. By approximating the evolution of quantum operators in the Pauli basis, these methods achieve rigorous performance guarantees across a wide range of noisy and noiseless circuits. In this talk, I will present recent theoretical advances in Pauli propagation and highlight its potential well beyond classical simulation, including its use in quantum-inspired machine learning models and hybrid strategies that facilitate the practical deployment of near-term quantum devices. I will also discuss how the propagation framework can be generalized from discrete-variable to continuous-variable quantum systems.
Abstract: Methods in the design and analysis of quantum algorithms have changed dramatically over the past ten years. These changes are rooted in new techniques for efficiently manipulating linear operators encoded in quantum computations: quantum signal processing (QSP) and quantum singular value transformation (QSVT). These algorithms, built from simple alternating circuit ansätze, enable one to apply functions to the spectra of linear operators encoded as sub-blocks of unitary matrices. This ability has proven surprisingly flexible—unifying, simplifying, and improving most quantum algorithms.
In this tutorial we review the construction and key properties of these algorithms, discuss how they are applied to solve disparate problems, and motivate recent extensions and improvements. Additionally, we highlight limitations of quantum algorithms for spectral mapping, and compare QSP/QSVT with quantum algorithms for manipulating linear operators in other ways (e.g., linear combination of Hamiltonian simulation). We aim to keep the tutorial accessible to both quantum and classical audiences, toward clarifying open questions in the intersection of quantum algorithms, functional analysis, and numerical linear algebra.
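The Chebyshev behaviour at the heart of QSP can be checked in a few lines of numerics. The sketch below is not part of the tutorial materials; the `qsp_unitary` helper and the standard Wx signal-operator convention are assumptions. With all phases set to zero, the alternating product collapses to a power of the signal rotation, whose matrix elements are Chebyshev polynomials of the signal:

```python
import numpy as np

def qsp_unitary(x, phases):
    """Alternating QSP product: Z-phase rotations interleaved with the
    signal operator W(x) = exp(i*arccos(x)*X), whose top-left entry is x."""
    W = np.array([[x, 1j * np.sqrt(1 - x**2)],
                  [1j * np.sqrt(1 - x**2), x]])
    def Z(phi):
        return np.diag([np.exp(1j * phi), np.exp(-1j * phi)])
    U = Z(phases[0])
    for phi in phases[1:]:
        U = U @ W @ Z(phi)
    return U

# With trivial phases, three W factors give W(x)^3 = exp(i*3*arccos(x)*X),
# so the top-left entry is cos(3*arccos(x)) = T_3(x) = 4x^3 - 3x.
x = 0.37
U = qsp_unitary(x, [0.0] * 4)
assert abs(U[0, 0].real - (4 * x**3 - 3 * x)) < 1e-10
```

Choosing nontrivial phases deforms this Chebyshev polynomial into other bounded polynomials of the signal, which is the degree of freedom QSP and QSVT exploit.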
Marina Bay Sands Singapore
Sands Expo and Convention Centre,
Level 3, Heliconia Junior Ballroom
10 Bayfront Ave, Singapore 018956
Lirandë Pira – Welcome to QTML 2025
Patrick Rebentrost – Welcome to Q(T)ML
José Ignacio Latorre – Welcome to CQT and Quantum in Singapore
Abstract: The technique of combining multiple votes to enhance the quality of a decision is the core of boosting algorithms in machine learning. In particular, boosting provably increases decision quality by combining multiple “weak learners”—hypotheses that are only slightly better than random guessing—into a single “strong learner” that classifies data well. Inspired by work by Barak, Hardt and Kale, I will introduce QuantumBoost, a quantum algorithm that achieves the best known runtime complexity among boosting methods. I will also share some interesting insights into the way my collaborators (Yanlin Chen, Tuyen Nguyen and Ronald de Wolf) and I ultimately developed QuantumBoost and proved its correctness with the help of Gemini’s Deep Think model.
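For readers unfamiliar with boosting, the classical scheme that quantum boosting algorithms accelerate can be sketched in a few lines. This is a plain AdaBoost-style illustration on toy data, not QuantumBoost itself; the decision-stump weak learner and all constants are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the label is the sign of the sum of the features.
X = rng.normal(size=(200, 5))
y = np.sign(X.sum(axis=1))

def stump(X, j, s):
    """Weak learner: predict the sign of a single coordinate."""
    return s * np.sign(X[:, j])

def best_stump(X, y, w):
    """Pick the stump with the smallest weighted error."""
    cands = [(w[stump(X, j, s) != y].sum(), j, s)
             for j in range(X.shape[1]) for s in (1, -1)]
    return min(cands)

# AdaBoost: repeatedly upweight the points the current stump gets wrong.
w = np.full(len(y), 1 / len(y))
learners = []
for _ in range(20):
    err, j, s = best_stump(X, y, w)
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
    w *= np.exp(-alpha * y * stump(X, j, s))
    w /= w.sum()
    learners.append((alpha, j, s))

# The strong learner is a weighted majority vote of the weak ones.
F = sum(alpha * stump(X, j, s) for alpha, j, s in learners)
train_err = np.mean(np.sign(F) != y)
assert train_err < 0.3  # well below the error of any single stump (~0.35)
```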
Abstract: The convergence of artificial intelligence (AI) and quantum computing represents one of the most promising frontiers in modern computational science. While “quantum for AI” has been widely explored as a potential application of quantum computing, “AI for quantum” — leveraging AI technologies to enhance quantum algorithms and quantum hardware — is now rapidly emerging, particularly with the advent of modern generative model techniques. In this talk, I will focus on how contemporary AI methods can be utilized to accelerate the development of quantum algorithms and enable next-generation quantum computing applications, including our proposed Generative Quantum Eigensolver (GQE) as a concrete example of such AI-driven approaches.
Authors: Alexander Schmidhuber, Ryan O’Donnell, Robin Kothari and Ryan Babbush
Abstract: We describe a quantum algorithm for the Planted Noisy k-XOR problem (also known as sparse Learning Parity with Noise) that achieves a nearly quartic (4th-power) speedup over the best known classical algorithm while also using exponentially less space. Our work generalizes and simplifies prior work of Hastings by building on his quantum algorithm for the Tensor Principal Component Analysis (PCA) problem. We achieve our quantum speedup using a general framework based on the Kikuchi Method (recovering the quartic speedup for Tensor PCA), and we anticipate it will yield similar speedups for other planted inference problems. These speedups rely on the fact that planted inference problems naturally instantiate the Guided Sparse Hamiltonian problem. Since planted inference problems serve as a testbed for studying the hardness of statistical learning, our work paves a path towards significant polynomial quantum speedups in machine learning.
Authors: William J. Huggins, Tanuj Khattar and Nathan Wiebe
Abstract: For many practical applications of quantum computing, the most costly steps involve coherently accessing classical data. We help address this challenge by applying mass production techniques, which can reduce the cost of applying an operation multiple times in parallel. We combine these techniques with modern approaches for classical data loading based on “quantum read-only memory” (QROM). We find that we can polynomially reduce the total number of gates required for data loading, but we find no advantage in cost models that only count the number of non-Clifford gates. Furthermore, for realistic cost models and problem sizes, we find that it is possible to reduce the cost of parallel data loading by an order of magnitude or more. We present several applications of quantum mass production, including a scheme that uses parallel phase estimation to asymptotically reduce the gate complexity of state-of-the-art algorithms for estimating eigenvalues of the quantum chemical Hamiltonian. We also show that mass production can be used to reduce the cost of serial calls to the same data loading oracle by precomputing several copies of a novel QROM resource state.
Authors: Adrián Pérez-Salinas, Patrick Emonts, Jordi Tura Brugués and Vedran Dunjko
Abstract: Classical simulation of quantum physics is a central approach to investigating physical phenomena. Quantum computers enhance computational capabilities beyond those of classical resources, but it remains unclear to what extent existing, limited quantum computers can contribute to this enhancement. In this work, we explore a new hybrid, efficient quantum-classical representation of quantum states: the multiple-basis representation. This representation consists of a linear combination of states that are sparse in some given bases, specified by quantum circuits. Such a representation is particularly appealing when considering depth-limited quantum circuits within reach of current hardware. We analyze the expressivity of multiple-basis representation states depending on the classical simulability of their quantum circuits. In particular, we show that multiple-basis representation states include, but are not restricted to, both matrix-product states and stabilizer states. Furthermore, we investigate applications of this representation to the approximation of ground states, the simulation of deeper computations by specifying bases with shallow circuits, and a tomographical protocol for describing states as multiple-basis representations. We envision this work opening the path to the simultaneous use of several hardware-friendly bases, a natural description of hybrid computational methods accessible to near-term hardware.
Authors: Manuel Rudolph, Armando Angrisani, Tyson Jones, Yanting Teng, Alexander Schmidhuber, Antonio Anna Mele, Marco Cerezo, Hsin-Yuan Huang and Zoe Holmes
Abstract: Pauli propagation (PP) is a relative newcomer to the corpus of classical simulation algorithms, and yet it is already competitive with other state-of-the-art methods for certain tasks. At their core, PP methods approximate the evolution of a quantum operator (typically, an observable in the Heisenberg picture) via a truncated Pauli path integral. This approach tends to be limited by very different quantum circuit characteristics than, for example, the popular and potent tensor network methods. We provide a comprehensive account of this new simulation framework and present PauliPropagation.jl: the first general-purpose open-source Pauli propagation software. We discuss the Pauli propagation framework from high to low level, from the algorithmic formulation and known theoretical results to our lived experience and practical implementation details. Furthermore, we present two theoretical efficiency guarantees for PP simulation of noise-free scrambling circuits and circuits subject to arbitrary local noise. We hope this new tool not only proves useful for studying quantum systems and benchmarking quantum hardware, but also inspires new classical-quantum frameworks utilizing the best and most suitable classical simulation method in conjunction with upcoming quantum computers.
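The core idea, propagating an observable backwards through a circuit as a weighted sum of Pauli terms and truncating small contributions, can be illustrated on a single qubit. This is a toy sketch of the general approach, not PauliPropagation.jl; the update rules follow from conjugating Pauli operators by RX and RZ rotations:

```python
import numpy as np

def apply_rot(obs, axis, theta, tol=1e-8):
    """Heisenberg-picture update: conjugating a Pauli by RX or RZ mixes it
    with one other Pauli, with cos/sin coefficients."""
    c, s = np.cos(theta), np.sin(theta)
    rules = {'x': {'Z': [('Z', c), ('Y', s)], 'Y': [('Y', c), ('Z', -s)]},
             'z': {'X': [('X', c), ('Y', -s)], 'Y': [('Y', c), ('X', s)]}}
    out = {}
    for p, a in obs.items():
        for q, f in rules[axis].get(p, [(p, 1.0)]):
            out[q] = out.get(q, 0.0) + a * f
    # Truncation step: discard Pauli terms with tiny weight.
    return {p: a for p, a in out.items() if abs(a) > tol}

circuit = [('x', 0.3), ('z', 1.1), ('x', -0.7)]

# Propagate the observable Z backwards through the circuit.
obs = {'Z': 1.0}
for axis, theta in reversed(circuit):
    obs = apply_rot(obs, axis, theta)

# <0|P|0> is 1 for I and Z, and 0 for X and Y.
expval = obs.get('I', 0.0) + obs.get('Z', 0.0)

# Cross-check against exact statevector simulation.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
def rot(P, theta):
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * P
psi = np.array([1.0, 0.0], dtype=complex)
for axis, theta in circuit:
    psi = rot(X if axis == 'x' else Z, theta) @ psi
assert abs(expval - (psi.conj() @ Z @ psi).real) < 1e-8
```

On many qubits the dictionary of Pauli strings grows with every entangling gate, and the truncation threshold is what keeps the method tractable.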
Authors: Caesnan Leditto, Angus Southwell, Behnam Tonekaboni, Muhammad Usman and Kavan Modi
Abstract: HodgeRank generalizes ranking algorithms, e.g. Google PageRank, to rank alternatives based on real-world (often incomplete) data using graphs and discrete exterior calculus. It analyzes multipartite interactions on high-dimensional networks with a complexity that scales exponentially with dimension. We develop a quantum algorithm that approximates the HodgeRank solution with complexity independent of dimension. Our algorithm extracts relevant information from the state such as the ranking consistency, which achieves a superpolynomial speedup over similar classical methods.
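The gradient component at the heart of HodgeRank reduces to a least-squares problem on the comparison graph. A minimal classical sketch (illustrative data and variable names, not the quantum algorithm) recovers global scores from noisy pairwise margins:

```python
import numpy as np

# Pairwise comparison data on 4 alternatives: the edge (i, j, v) means
# "alternative j beats alternative i by margin v" (possibly inconsistent).
edges = [(0, 1, 1.0), (1, 2, 1.2), (0, 2, 2.0), (2, 3, 0.9), (0, 3, 3.1)]
n = 4

# Incidence matrix B of the comparison graph: (B s)_e = s_j - s_i.
B = np.zeros((len(edges), n))
y = np.zeros(len(edges))
for e, (i, j, v) in enumerate(edges):
    B[e, i], B[e, j], y[e] = -1.0, 1.0, v

# HodgeRank's gradient component: global scores s minimizing ||B s - y||,
# i.e. the potential whose pairwise differences best explain the data.
s, *_ = np.linalg.lstsq(B, y, rcond=None)
s -= s.mean()                    # fix the gauge (scores sum to zero)

ranking = list(np.argsort(-s))   # best alternative first
assert ranking == [3, 2, 1, 0]

# The residual y - B @ s is the cyclic (inconsistent) part of the data,
# whose size measures the ranking consistency mentioned in the abstract.
residual = y - B @ s
```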
Authors: Xiufan Li, Soumik Adhikary and Patrick Rebentrost
Abstract: Quantum algorithms for solving linear systems of equations have seen significant development since their inception. One method uses a classical combination of quantum states (CQS) together with the so-called Ansatz tree structure to construct approximate solutions to Ax=b, with provable guarantees brought about by the Krylov subspace. However, the algorithm may require constructing the entire Ansatz tree to achieve its convergence guarantee, which reduces efficiency when scaling to large quantum machine learning tasks. In this work, we propose StoCQS, an efficient strategy for Ansatz tree construction that solves linear systems with a convergence guarantee, aided by importance sampling techniques and stochastic gradient descent, potentially with a reduced number of states. Our algorithm thus promises improved feasibility of implementing a quantum linear systems solver on large-scale quantum tasks with provable theoretical guarantees.
Authors: Nicholas Rubin, Guanghao Low, Robbie King, Eugene DePrince, Alec White, Ryan Babbush, Dominic Berry and Rolando Somma
Abstract: We present sum-of-squares spectral amplification (SOSSA), a framework for improving quantum simulation relevant to low-energy problems. We show how SOSSA can be applied to energy and phase estimation and provide fast quantum algorithms for these problems that significantly improve over prior art. To illustrate the power of SOSSA in applications, we consider the Sachdev-Ye-Kitaev model, a representative strongly correlated system, and demonstrate asymptotic speedups over generic simulation methods by a factor of the square root of the system size. Our results reinforce those observed in [G.H. Low et al., arXiv:2502.15882 (2025)], where SOSSA was used to achieve state-of-the-art gate costs for phase estimation of real-world quantum chemistry systems.
See the list of posters to be presented in Session I.
Abstract: Characterizing quantum many-body systems is a fundamental problem across physics, chemistry, and materials science. While significant progress has been made, many existing Hamiltonian learning protocols demand digital quantum control over the entire system, creating a disconnect from many real-world settings. Can one learn the parameters of a many-body Hamiltonian with access only to a single local probe on a small subsystem of a many-body thermal state undergoing time evolution? I will describe a new combination of tools from algebraic geometry and smoothed analysis that yields a provably correct algorithm for learning generic Hamiltonians from various physically natural families even in this setting. This demonstrates that robust Hamiltonian learning remains achievable even under severely constrained experimental access.
Authors: Arjan Cornelissen, Joao F. Doriguello, Alessandro Luongo and Ewin Tang
Abstract: Clustering is one of the most important tools for analysis of large datasets, and perhaps the most popular clustering algorithm is Lloyd’s algorithm for k-means. This algorithm takes n vectors in a d-dimensional space and outputs k centroid vectors, which partition the vectors into clusters based on which centroid is closest to a particular vector. We present a classical epsilon-k-means algorithm that performs an approximate version of one iteration of Lloyd’s algorithm, with a time complexity that improves exponentially in the data size n compared to previous classical algorithms. It matches the runtime of the q-means quantum algorithm originally proposed by Kerenidis, Landman, Luongo, and Prakash (NeurIPS 2019). To our knowledge, this is the fastest classical algorithm for approximate k-means. We then turn our attention to the quantum setting and propose an improved version of the q-means algorithm. Our new quantum algorithm achieves a better runtime than previous quantum approaches and offers a polynomial improvement over our classical epsilon-k-means in several parameters. Unlike prior quantum algorithms, our method does not rely on quantum linear algebra primitives. Instead, it uses QRAM to prepare simple quantum states based on the current cluster assignments and applies multivariate quantum amplitude estimation. Finally, we provide the first quantum and classical lower bounds for performing a single iteration of the k-means problem. These results show that our algorithms are optimal in most of the relevant parameters.
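For reference, the exact Lloyd iteration that both the classical epsilon-k-means and the q-means algorithms approximate can be written as follows (a plain NumPy sketch with illustrative data, not the authors' algorithms):

```python
import numpy as np

rng = np.random.default_rng(1)

# n vectors in d = 2 dimensions drawn around k = 3 separated centers.
true_centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 6.0]])
X = np.vstack([c + rng.normal(scale=0.4, size=(50, 2)) for c in true_centers])

def lloyd_iteration(X, centroids):
    """One exact Lloyd step: assign each vector to its nearest centroid,
    then move every centroid to the mean of its cluster."""
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                    else centroids[j] for j in range(len(centroids))])
    return new, labels

centroids = X[rng.choice(len(X), size=3, replace=False)]
costs = []
for _ in range(10):
    centroids, labels = lloyd_iteration(X, centroids)
    costs.append(((X - centroids[labels]) ** 2).sum())

# Each Lloyd iteration can only decrease the k-means cost.
assert all(a >= b - 1e-9 for a, b in zip(costs, costs[1:]))
```

The per-iteration cost of this exact step scales with the full dataset, which is what the approximate classical and quantum algorithms in the abstract above improve on.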
Authors: Sayantan Pramanik, Chaitanya Murti, Chiranjib Bhattacharyya and M Girish Chandra
Abstract: In this paper, we study the cost of training parametrised quantum circuits (PQCs) that are employed in variational quantum algorithms (VQAs), which depends on the number of circuit evaluations (CE). Since each circuit execution is monetarily and temporally expensive, we treat cost and CE on an equal footing, and focus on the following questions: How many CE are required to train quantum models to an epsilon-stationary solution of the objective function? Since the cost of estimating the gradients scales linearly with the number of parameters, can gradient-free algorithms be employed to get similar performance at a lower expense?
Our contributions towards answering these questions are threefold:
(a) As the number of CE depends linearly on the Lipschitz-smoothness constant of the objective function, we provide tightened bounds on it. Our result also generalises to arbitrary loss functions besides expectation values.
(b) For a circuit with d optimisable parameters, each (stochastic) gradient-based iteration consumes 2d evaluations of the circuit, which hinders the scaling-up of circuits to accommodate more parameters. We adapt a gradient-free algorithm called the Stochastic Three Points method to the VQA setting, and demonstrate both theoretically and numerically that it reduces the number of requisite CE (over stochastic gradient descent) by a factor of d. In particular, in conjunction with (a), we obtain a sesquilinear reduction in the cost of training PQCs, from the order of d^3.5 to d^2.
(c) Finally, we provide theoretical and empirical evidence to suggest that quantum models are inherently adversarially robust. This is due to the adversarial loss being lower- and upper-bounded by scalar multiples of the non-adversarial loss. This obviates the usual (expensive) overheads of adversarial training that arise from the estimation of input gradients and the augmentation of the training dataset with adversarial examples.
Authors: Pablo Rodriguez-Grasa, Matthias C. Caro, Jens Eisert, Elies Gil-Fuster, Franz J. Schreiber and Carlos Bravo-Prieto
Abstract: Generalization is a central concept in machine learning, yet for quantum models it is predominantly analyzed through uniform bounds that depend on a model’s overall capacity rather than the specific function learned. These bounds are often too loose and insensitive to the training process. In this work, we address this limitation by deriving the first PAC-Bayesian generalization bound for a broad class of quantum machine learning (QML) models. Our framework analyzes layered circuits composed of general quantum channels, parameterized via their process matrices, which include unitary, dissipative, and feedforward operations. By performing a channel perturbation analysis, we establish a non-uniform generalization bound that explicitly depends on the norms of the post-training parameter matrices. This data-dependent nature allows our bound to reflect the properties of the learned solution, and we show that it can offer improvements over existing uniform, covering-number-based bounds. By connecting channel perturbation theory with the powerful PAC-Bayesian framework, our work provides a foundational tool for a more nuanced, training-aware analysis of generalization in QML.
Authors: Alice Barthe, Jens Eisert, Bryce Fuller, Elies Gil-Fuster, Michele Grossi, Zoë Holmes, Sofiène Jerbi, Johannes Jakob Meyer, Erik Recio-Armengol, Mehrad Sahebi, Seongwook Shin, Yudai Suzuki and Ryan Sweke
Abstract: A key challenge in quantum machine learning (QML) is to identify tasks where quantum models offer a genuine advantage over classical approaches. One promising strategy is dequantization—showing that classical algorithms can match the performance of quantum models for specific tasks. In this work, we study classical kernel-based methods as dequantization methods for QML models based on parameterized quantum circuits (PQCs), including quantum neural networks and quantum kernel methods. For a given PQC, we define a corresponding family of trigonometric kernels that can capture the function class expressible by the quantum model. We approximate these kernels using Random Fourier Features (RFF) and derive theoretical bounds on the difference between the true risk of RFF-approximated classical models and their quantum counterparts in both regression and classification tasks. Based on these bounds, we identify sufficient conditions for dequantization. Finally, we show that in certain cases, the exact kernel can be computed efficiently via tensor networks, removing the need for approximation. Our results can be used in the design of PQC-based QML models to avoid dequantization. Moreover, our proposed dequantization schemes can be used as independent heuristic classical algorithms before deploying costly QML models.
Author: Maximilian Fröhlich
Abstract: We introduce a new tensor network method for simulating quantum circuits based on a locally adaptive formulation of the Time-Dependent Variational Principle (TDVP), aimed at overcoming limitations of the widely used Time-Evolving Block Decimation (TEBD) algorithm. TEBD, though effective for short-range interactions and low entanglement growth, suffers from cumulative truncation errors and inefficiencies when simulating long-range gates, which require SWAP insertions that artificially increase entanglement and computational cost.
The proposed method reformulates circuit simulation in the Schrödinger picture by interpreting each gate as a small time-evolution step and projecting its generator onto the tangent space of the MPS manifold. This local dynamic TDVP approach enables more accurate simulations by preserving the MPS geometry and avoiding unnecessary truncations. It also supports dynamic bond-dimension growth and direct simulation of long-range gates without introducing SWAP gates, thus maintaining computational efficiency.
To implement this, single-qubit gates are directly contracted into MPS tensors, while multi-qubit gates are handled by slicing the MPS across the affected regions and applying localized TDVP projections. A projector-splitting formalism is used to apply only the relevant summands for the affected qubits, significantly reducing computational overhead. For a gate acting on q qubits, the number of projectors is reduced from 2L−1 (full TDVP) to 2q−1, without loss of accuracy.
Numerical benchmarks demonstrate that local dynamic TDVP matches TEBD in accuracy while requiring significantly fewer MPS parameters. A key example is a 36-qubit Trotterized 1D periodic Ising circuit with long-range interactions, where TEBD produces rapid entanglement growth and a central bond-dimension blow-up, while TDVP maintains far smoother, lower bond growth. These results suggest that local dynamic TDVP is a powerful and scalable alternative for simulating large quantum circuits, particularly those involving long-range gates or requiring dynamic entanglement handling.
Authors: Yuxuan Du, Min-Hsiu Hsieh and Dacheng Tao
Abstract: The vast and complicated many-qubit state space makes it impossible to comprehensively capture the dynamics of modern quantum computers via classical simulations or quantum tomography. Recent progress in quantum learning theory prompts a crucial question: can linear properties of a many-qubit circuit with d tunable RZ gates and G−d Clifford gates be efficiently learned from measurement data generated by varying classical inputs? In this work, we prove that a sample complexity scaling linearly in d is required to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d. To address this challenge, we propose a kernel-based method leveraging classical shadows and truncated trigonometric expansions, enabling a controllable trade-off between prediction accuracy and computational overhead. Our results advance two crucial realms of quantum computation: the exploration of quantum algorithms with practical utility and learning-based quantum system certification. We conduct numerical simulations to validate our proposals across diverse scenarios, encompassing quantum information processing protocols, Hamiltonian simulation, and variational quantum algorithms with up to 60 qubits.
Authors: Matthias C. Caro, Jens Eisert, Marcel Hinsche, Marios Ioannou, Alexander Nietner and Ryan Sweke
Abstract: We consider the problem of testing and learning from data in the presence of resource constraints, such as limited memory or weak data access, which place limitations on the efficiency and feasibility of testing or learning. In particular, we ask the following question: Could a resource-constrained learner/tester use interaction with a resource-unconstrained but untrusted party to solve a learning or testing problem more efficiently than they could without such an interaction? In this work, we answer this question both abstractly and for concrete problems, in two complementary ways: For a wide variety of scenarios, we prove that a resource-constrained learner cannot gain any advantage through classical interaction with an untrusted prover. As a special case, we show that for the vast majority of testing and learning problems in which quantum memory is a meaningful resource, a memory-constrained quantum algorithm cannot overcome its limitations via classical communication with a memory-unconstrained quantum prover. In contrast, when quantum communication is allowed, we construct a variety of interactive proof protocols, for specific learning and testing problems, which allow memory-constrained quantum verifiers to gain significant advantages through delegation to untrusted provers. These results highlight both the limitations and the potential of delegating learning and testing problems to resource-rich but untrusted third parties.
Authors: Jonathan Allcock, Miklos Santha, Pei Yuan and Shengyu Zhang
Abstract: Dynamical Lie algebras (DLAs) have emerged as a valuable tool in the study of parameterized quantum circuits, helping to characterize both their expressiveness and trainability. In particular, the absence or presence of barren plateaus (BPs)—flat regions in parameter space that prevent the efficient training of variational quantum algorithms—has recently been shown to be intimately related to quantities derived from the associated DLA.
In this work, we investigate DLAs for the quantum approximate optimization algorithm (QAOA), one of the most studied variational quantum algorithms for solving graph MaxCut and other combinatorial optimization problems. While DLAs for QAOA circuits have been studied before, existing results have either been based on numerical evidence or else correspond to DLA generators specifically chosen to be universal for quantum computation on a subspace of states. We initiate an analytical study of barren plateaus and other statistics of QAOA algorithms, and give bounds on the dimensions of the corresponding DLAs and their centers for general graphs. We then focus on the $n$-vertex cycle and complete graphs. For the cycle graph, we give an explicit basis and identify its decomposition into the direct sum of a 2-dimensional center and a semisimple component isomorphic to $n-1$ copies of su(2). We give an explicit basis for this isomorphism and a closed-form expression for the variance of the cost function, proving the absence of BPs. For the complete graph, we prove that the dimension of the DLA is $O(n^3)$ and give an explicit basis for the DLA.
Abstract: Quantum computing provides a natural framework for generative modeling through sampling tasks with established complexity-theoretic advantages, yet standard parametrized-circuit approaches face persistent challenges in trainability and scalability. This talk reports recent progress on two complementary algorithmic directions developed to address these issues. The first is a differentiable quantum generative model (DQGM) based on quantum Chebyshev transforms, which enables post-training resolution scaling and efficient sampling without additional optimization. The second centers on quantum Boltzmann machines (QBMs), which offer a fault-tolerant path to scalable generative learning. A semi-quantum RBM (sqRBM) architecture with a commuting-visible Hamiltonian structure allows closed-form expressions for probabilities and gradients, providing provable expressive advantages over classical RBMs. Building on this, a quantum variant of contrastive divergence achieves O(1) forward-pass scaling for training. Theoretical results from these works are supported by numerical simulations, outlining a scalable and resource-efficient route for quantum generative modeling beyond the NISQ era.
See the list of posters to be presented in Session II.
Abstract: I will show how Fourier analysis can be used to construct quantum machine learning models that are massively scalable, while simultaneously encoding an important bias present in nearly all datasets. More specifically, we will see how using the quantum Fourier transform to build models in Fourier space leads to a class of universal generative machine learning models that can be trained entirely on classical hardware, enabling scaling to circuits with thousands of qubits and millions of parameters. By empirically implementing this training algorithm on large, real-world datasets, we will see how such models can be successfully trained despite the provable existence of barren plateaus under random parameter initialization. I will also introduce a new generative machine learning algorithm, called generative bandlimiting, that leverages this approach and exploits known biases in the Fourier spectra of data. Finally, I will conclude with some reflections on the overall direction of the field.
Authors: Stephen Jordan, Noah Shutty, Mary Wootters, Adam Zalcman, Alexander Schmidhuber, Robbie King, Sergei Isakov, Tanuj Khattar and Ryan Babbush
Abstract: Whether quantum computers can achieve exponential speedups in optimization has been a major open question in quantum algorithms since the field began. Here we introduce a quantum algorithm called Decoded Quantum Interferometry (DQI), which uses the quantum Fourier transform to reduce optimization problems to decoding problems. For approximating optimal polynomial fits to data over finite fields, DQI efficiently achieves a better approximation ratio than any polynomial time classical algorithm known to us, thus suggesting exponential quantum speedup. Sparse unstructured optimization problems such as max-k-XORSAT are reduced to decoding of LDPC codes. We prove a theorem which allows the performance of DQI to be calculated instance-by-instance based on the empirical performance of classical decoders. We use this to construct an instance of max-XORSAT for which DQI finds an approximate optimum that cannot be found by simulated annealing or any of the other general-purpose classical heuristics that we tried, unless given five orders of magnitude more compute time than the decoding problem requires. Although we subsequently design a tailored classical solver that beats DQI within reasonable runtime, our results nevertheless demonstrate that the combination of quantum Fourier transforms with powerful decoding primitives provides a promising new approach to finding quantum speedups for hard optimization problems.
This submission presents the original DQI algorithm alongside recent improvements and generalizations.
Abstract: The training of parameterized quantum circuits (PQCs) as machine learning models is one of the most widely studied paradigms in quantum machine learning (QML). In this framework, a typical model consists of quantum feature maps, which encode a classical input into the Hilbert space, and variational ansatzes, which manipulate the input via trainable parameterized gates. Importantly, the output of these models can be represented as a partial Fourier series in the input, giving rise to the name quantum Fourier models (QFMs). Fully understanding how the choice of feature map and ansatz affects the Fourier spectrum of QFMs is crucial for maximizing their performance, including for popular frameworks such as quantum neural networks (QNNs) and quantum kernels. In this work, we theoretically motivate the appearance of correlations between the Fourier coefficients of QFMs, implying the inability to control each term in the Fourier series independently. For a range of popular ansatzes, we compute these correlations and find a unique pattern for each ansatz, which we call the Fourier fingerprint. Subsequently, we demonstrate the utility of the Fourier coefficient correlation (FCC), derived from the fingerprint, for learning random 1D and 2D Fourier series. In these experiments, we show that the FCC can effectively predict which ansatzes will yield the best performance. Finally, we demonstrate how the FCC applies to the more challenging problem of 2D jet reconstruction in high-energy physics and find that the FCC can again be used to predict the best performing ansatz before training occurs. Overall, our results illustrate that the Fourier fingerprint is a powerful new tool for the problem of optimal ansatz choice in QML.
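The partial-Fourier-series structure of QFMs is easy to verify numerically. The sketch below uses a generic single-qubit model chosen for illustration (not the ansatzes studied in the paper): an RZ data-encoding between two trainable rotations produces exactly a degree-1 Fourier series, so three samples determine the model everywhere:

```python
import numpy as np

rng = np.random.default_rng(2)

# Single-qubit model: trainable RY, data-encoding RZ(x), trainable RY,
# then measure Z. Its output is a degree-1 Fourier series in x.
def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2), np.cos(t / 2)]])
def rz(x):
    return np.diag([np.exp(-1j * x / 2), np.exp(1j * x / 2)])

Z = np.diag([1.0, -1.0])
t1, t2 = rng.uniform(0, 2 * np.pi, size=2)

def f(x):
    psi = ry(t2) @ rz(x) @ ry(t1) @ np.array([1.0, 0.0])
    return (psi.conj() @ Z @ psi).real

# Fit a + b*cos(x) + c*sin(x) through three samples ...
xs = np.array([0.0, 1.0, 2.5])
A = np.column_stack([np.ones_like(xs), np.cos(xs), np.sin(xs)])
coef = np.linalg.solve(A, np.array([f(x) for x in xs]))

# ... and the fit reproduces the model exactly at unseen inputs.
for x in [0.3, 3.7, 5.1]:
    assert abs(coef @ [1.0, np.cos(x), np.sin(x)] - f(x)) < 1e-10
```

Richer feature maps enlarge the frequency spectrum, but the Fourier coefficients remain constrained by the circuit, which is the correlation structure the fingerprint captures.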
Abstract: Classical neural networks with random initialization famously behave as Gaussian processes in the limit of many neurons, which allows one to completely characterize their training and generalization behavior. No such general understanding exists for quantum neural networks (QNNs), which—outside of certain special cases—are known to not behave as Gaussian processes when randomly initialized. We here prove that QNNs and their first two derivatives instead generally form what we call Wishart processes, where certain algebraic properties of the network determine the hyperparameters of the process. This Wishart process description allows us to, for the first time: give necessary and sufficient conditions for a QNN architecture to have a Gaussian process limit; calculate the full gradient distribution, generalizing previously known barren plateau results; and calculate the local minima distribution of algebraically constrained QNNs. Our unified framework suggests a certain simple operational definition for the “trainability” of a given QNN model using a newly introduced, experimentally accessible quantity we call the degrees of freedom of the network architecture.
Abstract: Variational quantum algorithms (VQAs) are promising candidates for near-term applications of quantum computers, but their training represents a major challenge in practice. We introduce exact-geodesic VQAs, a space-curvature aware framework that enables analytic Riemannian optimization of variational quantum circuits through a convenient choice of circuit ansatz. Our method exploits the exact metric to find a near-optimal parameter optimization path based on exact geodesic transport with conjugate gradients (EGT‑CG). This supersedes the celebrated quantum natural gradient method, in fact recovering it as its first-order approximation. Further, the exact-geodesic updates for our circuit ansatz have the same cost as standard gradient descent. This contrasts with previous metric-aware methods, which require resource-intensive estimations of the metric. For chemistry problems of up to 14 electrons, our toolkit allows us to achieve up to a 20x reduction in the number of iterations over Adam or quantum natural gradient methods. Moreover, for degenerate problems, which are notoriously difficult to optimize with conventional methods, we achieve rapid convergence to the global minima. Our work demonstrates that the cost of VQA optimization can be drastically reduced by harnessing the Riemannian geometry of the manifold expressed by the circuit ansatz, with both practical and fundamental implications at the interface between quantum machine learning, differential geometry, and optimal control theory.
Abstract: Barren plateaus are fundamentally a statement about quantum loss landscapes on average, but there can, and generally will, exist patches of barren plateau landscapes with substantial gradients. Previous work has studied certain classes of parameterized quantum circuits and found example regions where gradients vanish at worst polynomially in system size. Here we present a general bound that unifies all these previous cases and that can tackle physically motivated ansätze that could not be analyzed previously. Concretely, we analytically prove a lower bound on the variance of the loss, which can be used to show that in a non-exponentially narrow region around a point with curvature, the loss variance cannot decay exponentially fast. This result is complemented by numerics and an upper bound which suggest that any loss function with a barren plateau will have exponentially vanishing gradients in any constant-radius subregion. Our work thus suggests that while there is hope of warm-starting variational quantum algorithms, any initialization strategy that cannot get increasingly close to the region of attraction with increasing problem size is likely inadequate.
In addition to the conference buffet lunch, delegates today may attend a pop-up networking reception for women in quantum at the Heliconia Junior Ballroom. The event is open to all who identify as women; no registration is needed.
Abstract: Variational quantum algorithms (VQAs) have emerged as promising candidates for solving complex optimization and machine learning tasks on near-term quantum hardware. However, due to hardware limitations, small-scale users face challenges executing quantum operations, making delegation to more powerful quantum devices desirable. In this work, we introduce a framework for delegated variational quantum algorithms (DVQAs), where a client with limited quantum capabilities delegates the execution of a VQA to a more powerful quantum server. In particular, we introduce a protocol that enables a client to delegate a variational quantum algorithm to a server while ensuring that the input, the output, and the computation itself remain secret. Additionally, if the protocol does not abort, the client can be certain that the computation outcome is indeed correct. Our approach first proposes a verifiable protocol for delegating the quantum computation required at each optimization step of a VQA, and then combines the iterative steps into an error-resilient optimization process that offers end-to-end verifiable algorithm execution. Our results demonstrate that secure delegation of variational quantum algorithms is a realistic solution for near-term quantum networks, paving the way for practical quantum cloud computing applications.
Abstract: Identification of defects or anomalies in 3D objects is a crucial task to ensure correct functionality. In this work, we combine Bayesian learning with recent developments in quantum and quantum-inspired machine learning, specifically orthogonal neural networks, to tackle the anomaly detection problem for an industrially relevant use case. Bayesian learning enables uncertainty quantification of predictions, while orthogonality in weight matrices enables smooth training. We develop orthogonal (quantum) versions of 3D convolutional neural networks and show that these models can successfully detect anomalies in 3D objects. To test the feasibility of incorporating quantum computers into a quantum-enhanced anomaly detection pipeline, we perform hardware experiments with our models on IBM’s 127-qubit Brisbane device, testing the effect of noise and limited measurement shots.
Abstract: Thermal states play a fundamental role in various areas of physics, and they are becoming increasingly important in quantum information science, with applications related to semi-definite programming, quantum Boltzmann machine learning, Hamiltonian learning, and the related task of estimating the parameters of a Hamiltonian. Here we establish formulas underlying the basic geometry of parameterized thermal states, and we delineate quantum algorithms for estimating the values of these formulas. More specifically, we prove formulas for the Fisher–Bures and Kubo–Mori information matrices of parameterized thermal states, and our quantum algorithms for estimating their matrix elements involve a combination of classical sampling, Hamiltonian simulation, and the Hadamard test. These results have applications in developing a natural gradient descent algorithm for quantum Boltzmann machine learning, which takes into account the geometry of thermal states, and in establishing fundamental limitations on the ability to estimate the parameters of a Hamiltonian, when given access to thermal-state samples. For the latter task, and for the special case of estimating a single parameter, we sketch an algorithm that realizes a measurement that is asymptotically optimal for the estimation task. We finally stress that the natural gradient descent algorithm developed here can be used for any machine learning problem that employs the quantum Boltzmann machine ansatz.
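The Hadamard test that the quantum algorithms above rely on can be sketched by direct statevector simulation. In the sketch below (the unitary, state, and shot count are hypothetical placeholders), an ancilla prepared in |+⟩ controls U; measuring the ancilla in the X basis gives outcome 0 with probability (1 + Re⟨ψ|U|ψ⟩)/2:

```python
import numpy as np

# Minimal statevector sketch of the Hadamard test (illustrative parameters).
rng = np.random.default_rng(1)

theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a single-qubit rotation
psi = np.array([1.0, 0.0])

overlap = np.real(psi.conj() @ U @ psi)           # exact Re<psi|U|psi>
p0 = 0.5 * (1.0 + overlap)                        # ancilla outcome-0 probability

shots = 200_000
samples = rng.random(shots) < p0                  # simulate ancilla measurements
estimate = 2.0 * samples.mean() - 1.0             # invert p0 -> overlap
print(estimate, overlap)                          # estimate ≈ cos(theta)
```

Replacing the X-basis measurement with a Y-basis one estimates the imaginary part, which is how both matrix elements of the information matrices can be accessed on hardware.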
Opening remarks:
- National Quantum Office, Singapore – Ling Keok Tong
Speakers:
- Google – Robbie King
- Horizon Quantum – Benjamin Tan
- Multiverse Computing – Borja Aizpurua
- Quantinuum – Konstantinos Meichanetzidis
- QAI Ventures – Angelina Frank
- Xanadu – Maria Schuld
The conference dinner (which requires a ticket) will be at the Hibiscus Junior Ballroom at the conference venue.
Marina Bay Sands Singapore
Sands Expo and Convention Centre,
Level 3, Heliconia Junior Ballroom
10 Bayfront Ave, Singapore 018956
Abstract: Characterizing a quantum system by learning its evolution is a fundamental problem with a myriad of applications. In this talk, I will explore different models for learning quantum processes from a quantum learning theory perspective, which are relevant in realistic scenarios involving noisy data or adversarial behaviour. These more-realistic quantum process learning models allow us to bridge the gap between sophisticated but contrived learning theory techniques and algorithms with provable guarantees, and real applications in areas such as physics and cryptography. I will discuss their importance and highlight two main physically motivated learning models: statistical query learning of quantum processes and agnostic process learning. Statistical queries are natural yet powerful learning models that provide both learning guarantees and robustness to noise [WD24, WD25]. Agnostic process learning [WLKD24], on the other hand, enables efficient learning of quantum processes even when the data source is noisy or potentially adversarial. Specifically, the model is formalized as follows: given query access to an unknown quantum channel Φ and a known concept class C of channels, the goal is to output a quantum channel that approximates Φ as well as the best channel in C, up to some error. I will also discuss several natural applications of this model, including quantum machine learning, quantum metrology, classical simulation, and error mitigation. I will present relevant techniques and conclude with a discussion of open questions and limitations in both of these quantum process learning models.
Authors: Casper Gyurik, Alexander Schmidhuber, Robbie King, Vedran Dunjko and Ryu Hayakawa
Abstract: Topological data analysis (TDA) aims to extract noise-robust features of a data set by studying the number and persistence of holes in its topology. We show that a central task of TDA — deciding whether a given hole persists across different length scales — is $\mathsf{BQP}_1$-hard and contained in $\mathsf{BQP}$, implying an exponential quantum speedup for this task under standard complexity-theoretic assumptions. Our results are based on the observation that the persistence of a hole can be encoded in the guided sparse Hamiltonian problem, where the guiding state is constructed from a harmonic representative of the hole.
Authors: Zihao Li, Changhao Yi, You Zhou and Huangjun Zhu
Abstract: Classical shadow estimation (CSE) is a powerful tool for learning the properties of quantum states and quantum processes. Here we consider the CSE task for quantum unitary channels. Based on collective measurements on multiple systems, we propose a query-efficient protocol for this task, whose query complexity has a quadratic advantage over the previous best approach for this problem and almost saturates the information-theoretic lower bound. To further enhance practicality, we present a variant protocol using only single-copy measurements, which still offers much better query performance than previous protocols that do not use quantum memories. This protocol can also serve as a key subroutine for learning an arbitrary unknown Hamiltonian from dynamics, outperforming existing approaches to this problem.
Authors: Hugo Thomas, Pierre-Emmanuel Emeriau and Ulysse Chabaud
Abstract: In this work, we introduce a shadow tomography protocol tailored to linear optical systems. Our protocol enables the tomography of number states—and superpositions thereof—using only passive linear optical transformations and photon-number-resolving (PNR) detectors. Extra technicalities emerge because this setting is not tomographically complete, but it is practically relevant due to its experimental accessibility. We adapt the classical shadow framework to this context and provide both sample and time complexity bounds. In particular, we characterize the class of observables that can be estimated efficiently, leveraging the visible space formalism. Our analysis reveals that, even under the constraints of passive linear optics, useful and scalable shadow tomography is possible, opening the door to new practical applications in photonic quantum computing.
Authors: Martin Larocca, Maxwell West, Marco Cerezo, Frederic Sauvage, Roy Forestano, Nathan Killoran and David Wierichs
Abstract: Classical Shadows (CSs) have emerged as an efficient method to predict many properties of an unknown quantum state. A common denominator in CS protocols is to parametrize the random measurement bases with a random group action. While many different groups have been proposed, they have mostly been studied on a case-by-case basis, like random Paulis and Cliffords, or matchgates.
In a step toward a unified analysis, recent work showed that one can obtain a general expression for the shadow channel corresponding to the action of an arbitrary group G, provided each irreducible representation appears at most once. The importance of this realization is that it allows one to access a whole family of CS protocols parametrized by the choice of group representations. Moreover, the channel can be inverted analytically, i.e., at no cost. However, many—if not most—relevant settings involve multiplicities and thus remain beyond the scope of previous analyses.
In this work, we show one can go beyond this multiplicity limitation and obtain an analytically invertible channel for arbitrary group representations. We give an explicit choice of basis, which we call a “good basis”, such that the measurement channel becomes a weighted sum of projectors onto the irreducible representations. We complement this analysis by deriving general bounds on the variance of the estimators, which directly relate to sample-complexity bounds, and discuss how to implement good-basis measurements.
Our method both unifies existing CS procedures based on local, global, orthogonal, symplectic and fermionic Gaussian unitaries, and allows us to easily generate new protocols based on other groups, or on different representations of previous groups. For example, we characterize novel shadow protocols based on sampling from the orthogonal group, the spin and tensor representations of SU(2), and the exceptional Lie group G2.
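For concreteness, the textbook single-qubit random-Pauli protocol (the simplest special case of the group-based framework above, not the generalization this work introduces) can be sketched as follows; the state and sample count are illustrative:

```python
import numpy as np

# Single-qubit random-Pauli classical shadows (textbook protocol).
# Each snapshot inverts the measurement channel: rho_hat = 3 U† |b><b| U - I.
rng = np.random.default_rng(2)

I2 = np.eye(2)
Hb = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # rotate to X basis
Yb = np.array([[1, -1j], [1, 1j]]) / np.sqrt(2)        # rotate to Y basis
bases = [Hb, Yb, I2]                                   # measure X, Y, or Z

theta = 0.4
psi = np.array([np.cos(theta / 2), np.sin(theta / 2)]) # unknown state to learn

snapshots = []
for _ in range(50_000):
    U = bases[rng.integers(3)]                         # random basis choice
    probs = np.abs(U @ psi) ** 2                       # Born-rule outcome probabilities
    b = rng.choice(2, p=probs)                         # measurement outcome
    e = np.zeros(2); e[b] = 1.0
    snapshots.append(3 * U.conj().T @ np.outer(e, e) @ U - I2)

rho_hat = np.mean(snapshots, axis=0)
Z = np.diag([1.0, -1.0])
print(np.real(np.trace(rho_hat @ Z)), np.cos(theta))   # shadow estimate vs exact <Z>
```

Here the inverse channel X ↦ 3X − tr(X)I is known in closed form; the work above asks when such an analytic inverse exists for general group actions with multiplicities.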
Authors: Yixian Qiu, Lirandë Pira and Patrick Rebentrost
Abstract: Learning from quantum data presents new challenges to the paradigm of learning from data. It typically entails the use of quantum learning models to learn quantum processes, which come with enough subtleties to require modified theoretical learning frameworks. This intersection warrants new frameworks for complexity measures, including quantum sample complexity and generalization bounds. Empirical risk minimization (ERM) serves as the foundational framework for evaluating learning models in general, and the need for regularization leads to advanced strategies such as tilted empirical risk minimization (TERM). Theoretical aspects of quantum learning under a quantum ERM framework are presented in [PRX Quantum 5, 020367 (2024)]. In this work, we propose a definition of TERM suitable for learning quantum processes, which gives rise to quantum TERM (QTERM). By extension, QTERM can be viewed as a regularization strategy for quantum state learning. This work contributes threefold to the existing literature on quantum and classical physics. First, we prove QTERM learnability by deriving upper bounds on QTERM’s sample complexity. Second, we establish new PAC generalization bounds for classical TERM. Third, we present QTERM agnostic learning guarantees for quantum hypothesis selection. These results contribute to the broader literature on complexity bounds for learning quantum states, as well as on regularization techniques in quantum learning.
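The classical tilted objective that QTERM lifts to the quantum setting is easy to state concretely. A minimal sketch with synthetic losses (the tilt values are illustrative):

```python
import numpy as np

# Classical tilted empirical risk: L_t = (1/t) * log(mean(exp(t * losses))).
# t = 0 recovers ordinary ERM; t < 0 suppresses outliers; t > 0 emphasizes
# the worst-case losses.
def tilted_risk(losses, t):
    losses = np.asarray(losses, dtype=float)
    if t == 0:
        return losses.mean()                       # plain empirical risk
    m = (t * losses).max()                         # log-sum-exp stabilization
    return (m + np.log(np.mean(np.exp(t * losses - m)))) / t

losses = np.array([0.1, 0.2, 0.15, 2.0])           # synthetic losses, one outlier
print(tilted_risk(losses, 0.0))                    # ERM: plain average
print(tilted_risk(losses, -5.0))                   # negative tilt downweights the outlier
print(tilted_risk(losses, 5.0))                    # positive tilt stresses the outlier
```

The objective interpolates between the minimum loss (t → −∞), the mean (t = 0), and the maximum loss (t → +∞), which is the regularization knob the abstract refers to.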
Author: Vishnu Iyer
Abstract: Recent work has shown that one can efficiently learn fermionic Gaussian unitaries, also commonly known as nearest-neighbor matchcircuits or non-interacting fermionic unitaries. However, one could ask a similar question about unitaries that are near-Gaussian: for example, unitaries prepared with a small number of non-Gaussian circuit elements. These operators find significance in quantum chemistry and many-body physics, yet no algorithm exists to learn them.
We give the first such result by devising an algorithm which makes queries to an $n$-mode fermionic unitary $U$ prepared by at most $O(t)$ non-Gaussian gates and returns a circuit approximating $U$ to diamond distance $\varepsilon$ in time $\mathrm{poly}(n, 2^t, 1/\varepsilon)$. This resolves a central open question of Mele and Herasymenko under the strongest distance metric. In fact, our algorithm is much more general: we define a property of unitaries known as unitary Gaussian dimension and show that our algorithm can learn $n$-mode unitaries of Gaussian dimension at least $2n - O(t)$ in time $\mathrm{poly}(n, 2^t, 1/\varepsilon)$. Indeed, this class subsumes unitaries prepared by at most $O(t)$ non-Gaussian gates, but also includes several unitaries that require up to $2^{O(t)}$ non-Gaussian gates to construct.
In addition, we give a $\mathrm{poly}(n, 1/\varepsilon)$-time algorithm to distinguish whether an $n$-mode unitary is of Gaussian dimension at least $k$ or $\varepsilon$-far from all such unitaries in Frobenius distance, promised that one is the case. Along the way, we prove structural results about near-Gaussian fermionic unitaries that are likely to be of independent interest.
Authors: Josep Lumbreras, Mikhail Terekhov and Marco Tomamichel
Abstract: We initiate the study of quantum state tomography with minimal disturbance to the samples. Can we learn a precise description of a quantum state through sequential measurements of samples while at the same time making sure that the post-measurement state of the samples is only minimally perturbed? Defining regret as the cumulative disturbance of all samples, the challenge is to find a balance between the most informative sequence of measurements on the one hand and measurements incurring minimal regret on the other. Here we answer this question for pure qubit states by exhibiting a protocol that achieves maximal precision while incurring a regret that grows only polylogarithmically with the number of samples, a scaling that we show to be optimal.
Authors: John Kallaugher and Daniel Liang
Abstract: The (tolerant) Hamiltonian locality testing problem, introduced in [Bluhm, Caro, Oufkir '24], is to determine whether a Hamiltonian $H$ is $\varepsilon_1$-close to being $k$-local (i.e., can be written as the sum of weight-$k$ Pauli operators) or $\varepsilon_2$-far from any $k$-local Hamiltonian, given access to its time evolution operator and using as little total evolution time as possible, with distance typically defined by the normalized Frobenius norm. We give the tightest known bounds for this problem, proving an $O(\sqrt{\frac{\varepsilon_2}{(\varepsilon_2-\varepsilon_1)^5}})$ evolution-time upper bound and an $\Omega(\frac{1}{\varepsilon_2-\varepsilon_1})$ lower bound. Our algorithm does not require reverse time evolution or controlled application of the time evolution operator, although our lower bound applies to algorithms using either tool. Furthermore, we show that if we are allowed reverse time evolution, this lower bound is tight, giving a matching $O(\frac{1}{\varepsilon_2-\varepsilon_1})$ evolution-time algorithm.
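As a complementary diagnostic (working from a matrix description rather than the time-evolution access the paper assumes), $k$-locality of a small Hamiltonian can be checked directly from its Pauli decomposition; the 3-qubit Hamiltonian below is a hypothetical example:

```python
import numpy as np
from itertools import product

# Decompose a small Hamiltonian in the Pauli basis and read off the maximum
# Pauli weight, i.e. the smallest k for which H is exactly k-local.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])
paulis = {'I': I, 'X': X, 'Y': Y, 'Z': Z}

def pauli_coeffs(H, n):
    out = {}
    for label in product('IXYZ', repeat=n):
        P = np.eye(1)
        for c in label:
            P = np.kron(P, paulis[c])
        coef = np.real(np.trace(P @ H)) / 2**n    # tr(PH)/2^n, real for Hermitian H
        if abs(coef) > 1e-12:
            out[''.join(label)] = coef
    return out

# Hypothetical 3-qubit Hamiltonian: nearest-neighbour ZZ plus a transverse field.
n = 3
H = (np.kron(np.kron(Z, Z), I) + np.kron(I, np.kron(Z, Z))
     + 0.5 * np.kron(X, np.kron(I, I)))
coeffs = pauli_coeffs(H, n)
weights = {p: sum(c != 'I' for c in p) for p in coeffs}
print(coeffs, max(weights.values()))  # max weight 2, so H is 2-local
```

The tester in the paper has to certify the same property using only black-box evolution $e^{-iHt}$, which is what makes the evolution-time bounds above nontrivial.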
Abstract: This will be an overview talk in which I survey recent work from the last few years on learning stabilizer states (and their generalizations). I will discuss a couple of recent results in which we give tolerant testing protocols for these states, as well as applications to learning states with bounded stabilizer rank.
National University of Singapore
Lecture Theatre 16 & 17
School of Computing COM2
15 Computing Drive
Singapore 117418
Abstract: In this talk, I will present two recent works related to the question of quantum advantages in machine learning. In the first work, we address a major obstacle to the widespread use of quantum machine learning models in practice: quantum models, even once trained, still require access to a quantum computer in order to be evaluated on new data. To solve this issue, we introduce a class of quantum models where quantum resources are only required during training, while the deployment of the trained model is classical. We prove that: (i) this class of models is universal for classically-deployed quantum machine learning; (ii) it does have restricted learning capacities compared to ‘fully quantum’ models, but nonetheless (iii) it achieves a provable learning advantage over fully classical learners; where (ii) and (iii) are contingent on widely believed assumptions in complexity theory. In the second work, we refine our understanding of the regimes where quantum advantages arise in machine learning, by proving a PAC learning advantage in the realm of shallow-depth circuits. Notably, this learning advantage is unconditional: we do not need assumptions such as the existence of classically hard, quantumly easy cryptographic functions to show an advantage.
Abstract: We propose novel classical and quantum online algorithms for learning finite-horizon and infinite-horizon average-reward Markov Decision Processes (MDPs). Our algorithms are based on a hybrid exploration-generative reinforcement learning (RL) model wherein the agent can, from time to time, freely interact with the environment in a generative sampling fashion, i.e., by having access to a “simulator”. By employing known classical and new quantum algorithms for approximating optimal policies under a generative model within our learning algorithms, we show that it is possible to avoid several paradigms from RL like “optimism in the face of uncertainty” and “posterior sampling” and instead compute and use optimal policies directly, which yields better regret bounds compared to previous works. For finite-horizon MDPs, our quantum algorithm obtains regret bounds which only depend logarithmically on the number of time steps $T$, thus breaking the $O(\sqrt{T})$ classical barrier. This matches the time dependence of the prior quantum works of Ganguly et al.~(arXiv’23) and Zhong et al.~(ICML’24), but with improved dependence on other parameters like state space size $S$ and action space size $A$. For infinite-horizon MDPs, our classical and quantum bounds still maintain the $\widetilde{O}(\sqrt{T})$ dependence but with better $S$ and $A$ factors. Nonetheless, we propose a novel measure of regret for infinite-horizon MDPs with respect to which our quantum algorithm has $\mathrm{poly}\log T$ regret, exponentially better compared to classical algorithms. Finally, we generalise all of our results to compact continuous state spaces.
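As a classical reference point, the planning step that generative ("simulator") access enables is tabular finite-horizon value iteration, which computes the optimal policy directly once the model is known. The toy MDP below is a hypothetical illustration:

```python
import numpy as np

# Finite-horizon tabular value iteration: the classical planning subroutine
# one can run on a model estimated from generative samples. Toy MDP below.
rng = np.random.default_rng(3)
S, A, T = 4, 2, 10                           # states, actions, horizon

P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] = next-state distribution
R = rng.random((S, A))                       # rewards in [0, 1]

V = np.zeros((T + 1, S))                     # V[T] = 0: terminal values
pi = np.zeros((T, S), dtype=int)             # greedy policy per step
for t in range(T - 1, -1, -1):
    Q = R + P @ V[t + 1]                     # Q[s, a] = r(s,a) + E[V_{t+1}(s')]
    V[t] = Q.max(axis=1)                     # Bellman optimality backup
    pi[t] = Q.argmax(axis=1)

print(V[0])                                  # optimal values at the start state
```

The regret question the abstract studies is how much reward is lost while the agent is still estimating $P$ and $R$ online; the quantum speedups sharpen how that estimation cost scales with $T$, $S$, and $A$.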
Session A (PHY)
Authors: Aiden Rosebush, Alexander Greenwood and Li Qian
Abstract: We propose a machine-learning-based approach to generating entanglement witnesses where the number of measurement settings may be directly specified. While previous methods of constructing witnesses express results in terms of a standard basis for local measurements, like the Pauli basis, this algorithm directly trains and produces the specified number of Hermitian matrices, which can be represented by the same number of measurement settings on a given system. For \(N\) qudits of dimension \(d\), we use the fully separable eigenstates of the generators of \(SU(d)\) for each qudit as training data to determine the correct measurement settings, and we adjust the bias term of the witness with a new differential program to ensure maximal noise tolerance and perfect accuracy. Additionally, we implement adversarial training to produce witnesses of higher noise tolerance, requiring even fewer measurement settings. We provide an automated script implementing the above process that conveniently finds witnesses with better noise tolerance and/or fewer measurement settings than all existing methods in every nontrivial case we tested. We apply this method to Bell states, GHZ states, W states, hypergraph states, and a range of qudit states. We consider systems from 2 to 5 qubits, bipartite qudits up to d = 10, and tripartite qutrits. Finally, we numerically verify each of the witnesses we generated with small test sets.
Authors: Marco Ballarin, Juan José García-Ripoll, David Hayes and Michael Lubasch
Abstract: Quantum state preparation of high-dimensional functions is an important component in many quantum algorithms. For these algorithms to provide a quantum advantage, both the classical and quantum subroutines required for state preparation need to be as efficient as possible. Here we present classical algorithms based on tensor network methods that are efficient with regard to dimensionality. To avoid the barren plateau problem during the optimization, we devise a procedure that smoothly transforms the circuit from an easy-to-prepare initial function into the target multivariate function. We illustrate the approach by numerically optimising quantum circuits for multivariate Gaussians of sizes up to 17 dimensions and using a total of 102 qubits. Additionally, we fine-tune the circuits composed of hardware-native quantum gates, taking realistic experimental noise into account, and demonstrate the experimental feasibility on Quantinuum’s H2 quantum computer, where we prepare a 9-dimensional Gaussian with polynomially decaying correlations using 54 qubits.
Authors: Nana Liu, Michele Minervini, Dhrumil Patel and Mark Wilde
Abstract: In quantum thermodynamics, a system is described by a Hamiltonian and a list of non-commuting charges representing conserved quantities like particle number or electric charge, and an important goal is to determine the system’s minimum energy in the presence of these conserved charges. Relevant quantum states of these systems are parameterized thermal states, alternatively known as quantum Boltzmann machines. In optimization theory, a semi-definite program (SDP) involves a linear objective function optimized over the cone of positive semi-definite operators intersected with an affine space. These problems arise from differing motivations in the physics and optimization communities and are phrased using very different terminology, yet they are essentially identical mathematically. By adopting Jaynes’ mindset motivated by quantum thermodynamics, we observe that minimizing free energy in the aforementioned thermodynamics problem, instead of energy, leads to an elegant solution in terms of a dual chemical potential maximization problem that is concave in the chemical potential parameters. As such, one can employ standard (stochastic) gradient ascent methods to find the optimal values of these parameters, and these methods are guaranteed to converge quickly. At low temperature, the minimum free energy provides an excellent approximation for the minimum energy. We then show how this Jaynes-inspired gradient-ascent approach can be used in both first- and second-order classical and hybrid quantum–classical algorithms for minimizing energy and, equivalently, how it can be used for solving SDPs, with guarantees on the runtimes of the algorithms. The hybrid quantum–classical algorithms can be considered quantum Boltzmann machine learning algorithms for energy minimization. The approach discussed here is well grounded in quantum thermodynamics and, as such, provides physical motivation underpinning why algorithms published fifty years after Jaynes’ seminal work, including the matrix multiplicative weights update method, the matrix exponentiated gradient update method, and their quantum algorithmic generalizations, perform well at solving SDPs.
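The dual chemical-potential ascent can be sketched classically for a single qubit and a single charge. In this schematic (the Hamiltonian, charge, temperature, and target value are hypothetical), the dual objective g(μ) = −T ln Tr e^{−(H−μQ)/T} + μc is concave in μ with gradient c − ⟨Q⟩, so plain gradient ascent suffices:

```python
import numpy as np

# Schematic Jaynes-style dual ascent: tune the chemical potential mu so that
# the Gibbs state of H - mu*Q carries the target charge c. Illustrative only.
def gibbs(Hmat, T):
    w, U = np.linalg.eigh(Hmat)               # exact diagonalization
    p = np.exp(-(w - w.min()) / T)            # shifted for numerical stability
    p /= p.sum()
    return U @ np.diag(p) @ U.conj().T        # thermal (Gibbs) state

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H, Q, T, c = X + 0.3 * Z, Z, 0.5, 0.2         # hypothetical H, charge Q, target c

mu, lr = 0.0, 0.5
for _ in range(300):
    rho = gibbs(H - mu * Q, T)
    grad = c - np.real(np.trace(rho @ Q))     # gradient of the concave dual
    mu += lr * grad                           # plain gradient ascent

rho = gibbs(H - mu * Q, T)
print(mu, np.real(np.trace(rho @ Q)))         # <Q> ≈ c at the optimum
```

On hardware, the expectation ⟨Q⟩ under the thermal state would come from sampling and Hamiltonian-simulation subroutines rather than exact diagonalization; concavity of the dual is what underwrites the fast-convergence guarantees.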
Authors: Josep Lumbreras, Ruo Cheng Huang, Yanglin Hu, Mile Gu and Marco Tomamichel
Abstract: We investigate work extraction protocols designed to transfer the maximum possible energy to a battery using sequential access to $N$ copies of an unknown pure qubit state. The core challenge is designing interactions to optimally balance two competing goals: charging the battery optimally using the qubit in hand, and acquiring more information qubit by qubit to improve energy harvesting in subsequent rounds. Here, we leverage the exploration-exploitation trade-off in reinforcement learning to develop adaptive strategies that achieve energy dissipation that scales only polylogarithmically in $N$. This represents an exponential improvement over current protocols based on full state tomography.
Authors: Richard East, Quanlong Wang, Razin Shaikh, Lia Yeh, Boldizsár Poór and Bob Coecke
Abstract: Since its inception, the ZX calculus has provided a diagrammatic way to reason about qubit quantum systems, representing any quantum computation or linear map between qubits as ZX-diagrams, a type of tensor network. It is recognized as a universal, sound, and complete language for qubit linear algebra. The study of quantum mechanics is deeply integrated with the representation theory of groups and algebras, especially SU(2) representation theory, essential for understanding quantum properties like spin and angular momentum. Despite advancements, a gap remained between diagrammatic languages like the ZX calculus and the algebraic structures in SU(2) representation theory which are key in fields such as quantum chemistry and condensed matter physics. The Penrose spin calculus was introduced to bridge this gap, integrating the ZX calculus’s diagrammatic approach with SU(2)’s algebraic depth. Inspired by Penrose’s work on spin networks and quantum geometry, it extends these concepts into a comprehensive diagrammatic language for SU(2) calculations, incorporating symmetric projectors and diagrammatic expressions of 3jm Wigner symbols, crucial for studying spin coupling and angular momentum. This facilitates an intuitive understanding and manipulation of these concepts, making SU(2) concepts more accessible and broadening their application in quantum computing and information theory.
Authors: Changwon Lee, Tak Hur and Daniel Kyungdeock Park
Abstract: Implementing efficient and scalable decoders for quantum error correction is essential for practical quantum computing. Recurrent transformer-based architectures such as AlphaQubit achieve high decoding accuracy but suffer from prohibitive computational costs. To address this, we introduce a Mamba decoder that replaces each Multi-Head Attention block of AlphaQubit with a Mamba module. On Google’s Sycamore memory experiment, our Mamba decoder matches transformer-level performance, achieving logical error rates of $2.98\times10^{-2}$ at distance 3 and $3.03\times10^{-2}$ at distance 5. We further evaluate real-time performance over 400 cycles with a latency-dependent noise model tied to computational complexity. The transformer’s prohibitive $O(d^4)$ complexity leads to a severe accumulation of decoder-induced errors, whereas the Mamba decoder’s efficient $O(d^2)$ scaling avoids this problem, demonstrating more robust performance. Our results thus highlight Mamba’s superior speed-accuracy trade-off, establishing it as a viable architecture for large-scale, real-time decoders for quantum error correction.
Authors: Yuxuan Yan, Muzhou Ma, You Zhou and Xiongfeng Ma
Abstract: Long-range entanglement is an important quantum resource,
particularly for topological orders and quantum error
correction. In reality, preparing long-range entangled
states requires a deep unitary circuit, which poses
significant experimental challenges. A promising avenue is
offered by replacing some quantum resources with local
operations and classical communication (LOCC). With these
classical components, one can communicate outcomes of
mid-circuit measurements in distant subsystems,
substantially reducing circuit depth in many important
cases. However, for preparing general long-range entangled
states, finding short-depth LOCC-assisted circuits
remains an open problem. Here, to address this challenge,
we propose a quantum-classical hybrid algorithm to find
optimal LOCC protocols for preparing ground states of given
Hamiltonians. In our algorithm, we introduce an efficient
way to estimate parameter gradients and use such gradients
for variational optimization. Theoretically, we establish
the conditions for the absence of barren plateaus, ensuring
trainability at a large system size. Numerically, the
algorithm accurately solves the ground state of long-range
entangled models, such as the perturbed
Greenberger–Horne–Zeilinger state and surface code. Our
results demonstrate the advantage of our method over
conventional unitary variational circuits: the practical
advantage in the accuracy of estimated ground state energy
and the theoretical advantage in creating long-range
entanglement. This work has been published in Physical
Review Letters [Phys. Rev. Lett. 134, 170601 (2025)].
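The gradient estimation mentioned in the abstract can be illustrated with the standard parameter-shift rule for variational circuits; the paper's own estimator for LOCC protocols is more involved, and this sketch shows only the generic rule:

```python
import numpy as np

def parameter_shift_grad(f, theta):
    """Standard parameter-shift rule for expectation values generated
    by gates of the form exp(-i * theta_k * P / 2) with P^2 = I:
        df/dtheta_k = (f(theta + (pi/2) e_k) - f(theta - (pi/2) e_k)) / 2.
    """
    grad = np.zeros_like(theta)
    for k in range(len(theta)):
        shift = np.zeros_like(theta)
        shift[k] = np.pi / 2
        grad[k] = 0.5 * (f(theta + shift) - f(theta - shift))
    return grad

# Toy "circuit": a single RY rotation on |0>, measuring <Z> = cos(theta).
f = lambda th: np.cos(th[0])
g = parameter_shift_grad(f, np.array([0.3]))
print(g[0])   # equals the exact derivative -sin(0.3)
```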
Authors: Qing Liu, Zihao Li, Xiao Yuan, Huangjun Zhu and You Zhou
Abstract: Efficiently measuring nonlinear properties is a significant
yet challenging task from quantum information processing to
many-body physics. Current methodologies often suffer from
an exponential sampling cost or require auxiliary qubits
and deep quantum circuits. To address these limitations, we
propose an efficient auxiliary-free replica shadow (AFRS)
framework, which leverages the power of the joint
entangling operation on a few input replicas while
integrating the mindset of shadow estimation. We rigorously
prove that AFRS can offer exponential improvements in
estimation accuracy compared with the conventional shadow
method, and facilitate the simultaneous estimation of
various nonlinear properties, unlike the destructive swap
test. Additionally, we introduce an advanced local-AFRS
variant tailored to estimating local observables with
constant-depth quantum circuits, significantly simplifying
the experimental implementation. Our work paves the way for
efficient and practical quantum measurements on near-term
quantum hardware.
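For context, the destructive swap test that the abstract contrasts with estimates the purity $\mathrm{Tr}(\rho^2)$ from joint measurements on two copies. A classical Monte Carlo model of its outcome statistics (an illustration of the baseline, not the AFRS protocol itself) looks like:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_density_matrix(d):
    """Random mixed state via a Ginibre matrix."""
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

def swap_test_purity(rho, shots):
    """Monte Carlo model of the (destructive) swap test on two copies:
    each shot returns the +/-1 eigenvalue of SWAP on rho (x) rho, so
    E[outcome] = Tr(SWAP (rho (x) rho)) = Tr(rho^2)."""
    purity = np.trace(rho @ rho).real
    p_plus = (1 + purity) / 2
    outcomes = rng.choice([1.0, -1.0], size=shots, p=[p_plus, 1 - p_plus])
    return outcomes.mean()

rho = random_density_matrix(4)
exact = np.trace(rho @ rho).real
estimate = swap_test_purity(rho, shots=20000)
print(round(exact, 3), round(estimate, 3))
```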
Authors: Paolo Braccia, N. L. Diaz, Martin Larocca, M. Cerezo and Diego García-Martín
Abstract: Sampling unitary Fermionic Linear Optics (FLO), or
matchgate circuits, has become a fundamental tool in
quantum information. This capability enables a large number
of applications, ranging from randomized benchmarking of
continuous gate sets to fermionic classical shadows. In
this work, we introduce optimal algorithms to sample over
the non-particle-preserving (active) and
particle-preserving (passive) FLO Haar measures. In
particular, we provide appropriate distributions for the
gates of $n$-qubit parametrized circuits which produce
random active and passive FLO. In contrast to previous
approaches, which either incur classical
$\mathcal{O}(n^3)$ compilation costs or have suboptimal
depths, our methods directly output circuits which
\textit{simultaneously} achieve an optimal
down-to-the-constant-factor $\Theta(n)$ depth and
$\Theta(n^2)$ gate count, with only $\Theta(n^2)$
classical overhead. Finally, we also provide quantum
circuits to sample Clifford FLO with an optimal
$\Theta(n^2)$ gate count.
Session B (QNN + LT)
Authors: Sacha Lerch, Ricard Puig, Manuel Rudolph, Armando
Angrisani, Tyson Jones, Supanut Thanasilp, Marco Cerezo and
Zoe Holmes
Abstract: Understanding the capabilities of classical simulation
methods is key to identifying where quantum computers are
advantageous. Not only does this ensure that quantum
computers are used only where necessary, but also one can
potentially identify subroutines that can be offloaded onto
a classical device. In this work, we show that it is always
possible to generate a classical surrogate of a sub-region
(dubbed a “patch”) of an expectation landscape produced by
a parameterized quantum circuit. That is, we provide a
quantum-enhanced classical algorithm which, after simple
measurements on a quantum device, allows one to classically
simulate approximate expectation values of a subregion of a
landscape. We provide time and sample complexity guarantees
for a range of families of circuits of interest, and
further numerically demonstrate our simulation algorithms
on an exactly verifiable simulation of a Hamiltonian
variational ansatz and long-time dynamics simulation on a
127-qubit heavy-hex topology.
Authors: Léo Monbroussou, Beatrice Polacchi, Verena Yacoub, Eliott
Z. Mamon, Hugo Thomas, Eugenio Caruccio, Giovanni Rodari,
Francesco Hoch, Gonzalo Carvacho, Nicolo Spagnolo, Taira
Giordani, Mattia Bossi, Abhiram Rajan, Niki Di Giano,
Riccardo Albiero, Francesco Ceccarelli, Roberto Osellame,
Ulysse Chabaud, Fabio Sciarrino and Elham Kashefi
Abstract: Linear optical architectures have been extensively
investigated for quantum computing and quantum machine
learning applications. Recently, proposals for photonic
quantum machine learning have combined linear optics with
resource adaptivity, such as adaptive circuit
reconfiguration, which promises to enhance expressivity and
improve algorithm performance and scalability. Moreover,
linear optical platforms preserve certain subspaces, owing
to the fixed number of particles during the computation, a
property recently exploited to design a novel quantum
convolutional neural network. This architecture has shown
an advantage in running-time complexity and in the number
of parameters needed compared with other quantum neural
network proposals. We propose to present the
results from two papers of our team. First, we propose a
new scheme that relies on state injection, a
measurement-based technique that can produce states that
are more controllable, and solve learning tasks that are
believed to be intractable classically. Secondly, we design
and experimentally implement the first photonic quantum
convolutional neural network (PQCNN) architecture based on
particle-number preserving circuits equipped with state
injection. Subsequently, we experimentally validate the
PQCNN for an image classification on a photonic platform
utilizing a semiconductor quantum dot-based single-photon
source and programmable integrated photonic interferometers
comprising 8 and 12 modes. We highlight the potential
utility of a simple adaptive technique for a nonlinear
Boson Sampling task, compatible with near-term quantum
devices. Such an approach opens the path to new QML
algorithms with useful polynomial advantages, as these
photonic architectures have very low running times.
Authors: Frederic Sauvage, Pranav Kalidindi, Frederic Rapp and
Martin Larocca
Abstract: Geometric Machine Learning (GML) successes have been
achieved through thorough study and design of new
equivariant neural networks. In comparison, geometric
quantum machine learning (GQML) models critically lack
such a detailed understanding, and a unifying perspective
on their design remains elusive. Focusing on GQML models for
graph datasets, we show that a comprehensive
characterisation of their constituents is possible.
We further probe benefits of this toolbox including the
generalization of known models,
sometimes at virtually no cost, and straightforward
classical pre-training strategies.
Authors: Arthur Rattew, Po-Wei Huang, Naixu Guo, Lirandë Pira and
Patrick Rebentrost
Abstract: Fault-tolerant Quantum Processing Units (QPUs) promise to
deliver exponential speed-ups in select computational
tasks, but a clear recipe for integrating them into
existing classical deep-learning pipelines is missing. In
this work, we focus on the setting of accelerating
inference, where a pre-trained network outputs a
probability distribution over possible outputs (e.g.,
classes or tokens), and ask: for which architectures, and
under which Quantum Random Access Memory (QRAM)
assumptions, can a fault-tolerant QPU asymptotically
outperform classical hardware? All the networks we consider
are composed of fundamental residual blocks which consist
of regularized multi-filter 2D convolutions, sigmoid
activations, skip-connections, and layer normalisations,
echoing the structure of ResNet.
To give end-to-end complexity bounds taking into account
both input and memory assumptions, we consider three
regimes (where the first two assume that QRAM is feasible).
In regime 1, both the input tensor and network weights are
given by QRAM. We prove that a network with an
$N$-dimensional vectorized input, $d$ residual block
layers, and a final residual-linear-pooling layer can be
implemented with $O(\text{polylog}(N)^d)$ inference cost,
and suggest that known dequantization methods do not apply.
Examples of tasks covered by this setting include those
with repeated calls to a slowly-changing input, such as is
the case with multi-turn LLM chat. Regime 2 keeps only the
weights in QRAM; the input must be loaded at $O(N)$ cost.
Even so, shallow multi-layer bilinear-style networks can
achieve quartic (or better) speedups. Regime 3 has no QRAM,
and so quantum speedup is unlikely in terms of either the
dimension of the input, or the number of network
parameters, and must instead exploit high dimensional
latent feature transforms.
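A classical reference for the block family these bounds apply to (multi-filter 2D convolution, sigmoid, skip-connection, layer normalisation) can be sketched in NumPy; the exact regularization and ordering used in the paper may differ:

```python
import numpy as np

def conv2d(x, w):
    """'Same'-padded 2D convolution. x: (C_in, H, W), w: (C_out, C_in, 3, 3)."""
    C_out, C_in, kh, kw = w.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    H, W = x.shape[1:]
    y = np.zeros((C_out, H, W))
    for o in range(C_out):
        for c in range(C_in):
            for i in range(kh):
                for j in range(kw):
                    y[o] += w[o, c, i, j] * xp[c, i:i + H, j:j + W]
    return y

def layer_norm(x, eps=1e-5):
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def residual_block(x, w):
    """One block as described: multi-filter 2D conv -> sigmoid ->
    skip-connection -> layer normalisation."""
    h = 1.0 / (1.0 + np.exp(-conv2d(x, w)))   # sigmoid activation
    return layer_norm(x + h)                  # skip connection + norm

rng = np.random.default_rng(2)
x = rng.normal(size=(3, 8, 8))
w = 0.1 * rng.normal(size=(3, 3, 3, 3))      # C_out matches C_in for the skip
y = residual_block(x, w)
print(y.shape)   # (3, 8, 8)
```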
Authors: Daniel Liang and Vishnu Iyer
Abstract: We study the problem of tolerant testing of stabilizer
states. In particular, we give the first such algorithm
that accepts mixed state inputs. Formally, given a mixed
state ρ that either has fidelity at least ε1 with some
stabilizer pure state or fidelity at most ε2 with all such
states, where ε2 ≤ ε1^O(1), our algorithm distinguishes the
two cases with sample complexity poly(1/ε1) and time
complexity O(n * poly(1/ε1)).
Authors: Alice Barthe, Mahtab Yaghubi, Michele Grossi and Vedran Dunjko
Abstract: One of the key challenges in quantum machine learning is
finding relevant machine learning tasks with a provable
quantum advantage.
A natural candidate for this is learning unknown
Hamiltonian dynamics.
Here, we tackle the supervised learning version of this
problem, where we are given random examples of the inputs
to the dynamics as classical data, paired with the
expectation values of some observable after the time
evolution, as corresponding labels (outputs). The task is
to replicate the corresponding input-output function.
We prove that this task can yield provable exponential
classical-quantum learning advantages under common
complexity assumptions in natural settings.
To design our quantum learning algorithms, we introduce a
new method, which we call Fourier coefficient sampling for
parametrized circuit functions, and which may be of
independent interest.
Furthermore, we discuss the limitations of generalizing our
method to arbitrary quantum dynamics while maintaining
provable guarantees.
We explain that significant generalizations are impossible
under certain complexity-theoretic assumptions, but
nonetheless, we provide a heuristic kernel method, where we
trade off provable correctness for broader applicability.
Authors: Matthias C. Caro, Preksha Naik and Joseph Slote
Abstract: Testing properties of Boolean functions is often
dramatically faster than learning. However, this advantage
usually disappears when testers are limited to random
samples of the function—a natural setting for data
science—rather than adaptive queries. In this work we
investigate the \emph{quantum} version of this “data
science scenario”: quantum algorithms that test properties
of a function $f$ solely from quantum data in the form of
copies of the function state $\ket{f}\propto
\sum_x\ket{x,f(x)}$.
\textbf{New tests.} For three well-established
properties—monotonicity, symmetry, and
triangle-freeness—we show that the speedup lost when
restricting classical testers to sampled data can be
recovered by considering quantum data.
\textbf{Inadequacy of Fourier sampling.} Our new testers
use techniques beyond quantum Fourier sampling, and we show
that this is necessary. In particular, there is no
constant-complexity tester for symmetry relying solely on
Fourier sampling and random classical samples.
\textbf{Classical queries vs. quantum data.} We exhibit a
testing problem that can be solved from $\mathcal{O}(1)$
classical queries but that requires $\Omega(2^{n/2})$
function state copies. The \textsc{Forrelation} problem
provides a separation of the same magnitude in the opposite
direction, so we conclude that quantum data and classical
queries are “maximally incomparable” resources for testing.
\textbf{Towards lower bounds.} We also begin the study of
\emph{lower bounds} for testing from quantum data. For
quantum monotonicity testing, we prove that the ensembles
used to prove exponential lower bounds for classical
sample-based testing do not yield any nontrivial lower
bounds for testing from quantum data. New insights specific
to quantum data will be required for proving copy
complexity lower bounds for testing in this model.
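For readers unfamiliar with the baseline the abstract argues is insufficient: quantum Fourier sampling of a Boolean function $f:\{0,1\}^n\to\{\pm 1\}$ returns $s$ with probability $\hat f(s)^2$. Its output distribution can be simulated classically with a Walsh–Hadamard transform:

```python
import numpy as np

def fourier_sampling_distribution(f_vals):
    """Output distribution of quantum Fourier sampling for a Boolean
    function f: {0,1}^n -> {+1,-1}: s is measured with probability
    fhat(s)^2, where fhat(s) = 2^{-n} sum_x f(x) (-1)^{s.x}.
    By Parseval, the probabilities sum to 1 for +/-1-valued f."""
    fhat = f_vals.astype(float).copy()
    h = 1
    while h < len(fhat):                 # fast Walsh-Hadamard transform
        for i in range(0, len(fhat), 2 * h):
            a = fhat[i:i + h].copy()
            b = fhat[i + h:i + 2 * h].copy()
            fhat[i:i + h] = a + b
            fhat[i + h:i + 2 * h] = a - b
        h *= 2
    fhat /= len(f_vals)
    return fhat ** 2

# Example: the parity of 3 bits puts all Fourier mass on s = 111.
f = np.array([(-1) ** bin(x).count("1") for x in range(8)])
probs = fourier_sampling_distribution(f)
print(probs)   # all weight on index 7 (s = 111)
```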
Authors: Marcel Hinsche, Jens Eisert and Jose Carrasco
Abstract: Identifying the symmetry properties of quantum states is a
central theme in quantum information theory and quantum
many-body physics. In this work, we investigate quantum
learning problems in which the goal is to identify a hidden
symmetry of an unknown quantum state. Building on the
recent formulation of the state hidden subgroup problem
(StateHSP), we focus on abelian groups and develop a
quantum algorithmic approach to efficiently learn any
hidden symmetry subgroup in this case. We showcase the
versatility of the approach in three concrete applications:
These are learning (i) qubit and qudit stabilizer groups,
(ii) cuts along which a state is unentangled, and (iii)
hidden translation symmetries. Taken together, our results
underscore the potential of the StateHSP framework as a
foundation for broader symmetry-based quantum learning
tasks.
Authors: Mingrui Jing, Erdong Huang, Xiao Shi, Shengyu Zhang and Xin Wang
Abstract: Quantum neural networks (QNNs) hold promise for leveraging quantum systems to tackle complex learning tasks, but their scalability has been hindered by severe trainability issues such as barren plateaus. In this work, we introduce the Quantum Recurrent Embedding Neural Network (QRENN), a novel quantum architecture inspired by fast-track information
pathways in ResNets and grounded in general quantum circuit design principles. By employing tools from dynamical Lie algebra theory, we rigorously prove that QRENN circuits are trainable and can avoid barren plateaus. We validate the practical power of QRENN by applying it to two challenging quantum supervised learning tasks: classifying quantum Hamiltonians and detecting symmetry-protected topological (SPT) phases. Our model demonstrates high accuracy and robustness in both settings, showcasing its ability to
learn nontrivial quantum features from data, suggesting a
promising path toward scalable and reliable quantum machine
learning models.
Authors: Akitada Sakurai, Aoi Hayashi, William Munro and Kae Nemoto
Abstract: It has recently been shown that several quantum machine
learning models, including Quantum Circuit Learning (QCL),
Quantum Reservoir Computing (QRC), and Quantum Extreme
Learning Machines (QELM), can be analyzed via the Fourier
series. In such works, the data-encoding circuit determines
the accessible frequency components, and increasing circuit
depth expands the representational capacity. However, these
models face challenges in practical applications due to the
lack of control over which frequency components are needed
for a given task. In our talk, we propose a practical
QRC/QELM architecture inspired by Random Fourier Features
(RFF). Our model utilizes layered quantum circuits with
Z-rotation encoders and fixed permutation unitaries to
efficiently generate RFF-like frequency structures. We
demonstrate that our method requires only
$\mathcal{O}(\log N_f \cdot L)$ computational cost to
generate $N_f$ features
in preprocessing, compared to $\mathcal{O}(N_f)$ in
classical RFF. Furthermore, the model remains robust when
the permutation circuit is replaced with more general
quantum dynamics. These results provide practical design
principles for constructing expressive and scalable QML
models, applicable to tasks such as image classification.
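For reference, the classical Random Fourier Features construction (Rahimi & Recht) that inspires this architecture approximates an RBF kernel with $D$ random cosine features. A minimal sketch of the classical baseline (not the quantum circuit itself):

```python
import numpy as np

rng = np.random.default_rng(3)

def rff_features(X, n_features, gamma=1.0):
    """Classical Random Fourier Features:
    z(x) = sqrt(2/D) cos(W x + b), with rows of W drawn from
    N(0, 2*gamma*I) and b uniform on [0, 2*pi], so that
    z(x).z(y) ~= exp(-gamma * ||x - y||^2), the RBF kernel."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(n_features, d))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W.T + b)

X = rng.normal(size=(5, 2))
Z = rff_features(X, n_features=5000, gamma=0.5)
K_approx = Z @ Z.T
K_exact = np.exp(-0.5 * np.sum((X[:, None] - X[None, :]) ** 2, axis=-1))
print(np.max(np.abs(K_approx - K_exact)))   # small, shrinks like 1/sqrt(D)
```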
QTML 2025 is finished. See you for the next edition in 2026!
LT – Learning Theory
QNN – Variational QML
QA – Fault-tolerant / quantum algorithms
PHY – ML for quantum / physics
TOM – Shadows / tomography
