October 2022

News 📰

  1. Determinable and interpretable network representation for link prediction

  2. Azure Quantum Credits Program propels quantum innovation and exploration for researchers, educators, and students

  3. Penn State researchers to explore using quantum computers to design new drugs

  4. Multiverse Computing and Mila Join Forces to Advance Artificial Intelligence with Quantum Computing

  5. Quantum AI Elon Musk Review / Scam App Or Legit?

Videos 📽️

  1. Quantum Machine Learning Explained

  2. Qiskit Fall Fest CIC-IPN Mexico 2022 - Quantum Machine Learning

  3. Quantum Machine Learning Neuroimaging for Alzheimer’s Disease

  4. MLBBQ: Quantum Machine Learning by Pavel Popov

  5. Quantum Machine Learning: Opportunities and Challenges

  6. Hands-on quantum machine learning | Rodrigo Morales | ADC2022

  7. “Reinforcement Learning for Quantum Technologies,” presented by Florian Marquardt

Publications 📃

Representation Theory for Geometric Quantum Machine Learning

Abstract:

Recent advances in classical machine learning have shown that creating models with inductive biases encoding the symmetries of a problem can greatly improve performance. Importation of these ideas, combined with an existing rich body of work at the nexus of quantum theory and symmetry, has given rise to the field of Geometric Quantum Machine Learning (GQML). Following the success of its classical counterpart, it is reasonable to expect that GQML will play a crucial role in developing problem-specific and quantum-aware models capable of achieving a computational advantage. Despite the simplicity of the main idea of GQML – create architectures respecting the symmetries of the data – its practical implementation requires a significant amount of knowledge of group representation theory. We present an introduction to representation theory tools from the optics of quantum learning, driven by key examples involving discrete and continuous groups. These examples are sewn together by an exposition outlining the formal capture of GQML symmetries via “label invariance under the action of a group representation”, a brief (but rigorous) tour through finite and compact Lie group representation theory, a reexamination of ubiquitous tools like Haar integration and twirling, and an overview of some successful strategies for detecting symmetries.
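For readers new to the twirling operation mentioned in the abstract, here is a minimal NumPy sketch (illustrative only, not code from the paper): twirling averages an operator over a group representation, projecting it onto the set of operators that commute with every symmetry. The example below twirls over the two-element representation of S2 on two qubits.

```python
import numpy as np

# Representation of S2 on two qubits: identity and SWAP.
I4 = np.eye(4)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)
group = [I4, SWAP]

def twirl(X, reps):
    """Average U X U^dagger over the group: projects X onto the commutant."""
    return sum(U @ X @ U.conj().T for U in reps) / len(reps)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
X = A + A.T                      # an arbitrary Hermitian operator
TX = twirl(X, group)

# The twirled operator commutes with every representation matrix.
for U in group:
    assert np.allclose(U @ TX, TX @ U)
```

The same three-line `twirl` function works for any finite group once its representation matrices are listed; for compact Lie groups the sum becomes a Haar integral, as the paper discusses.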

Theory for Equivariant Quantum Neural Networks

Abstract:

Most currently used quantum neural network architectures have little-to-no inductive biases, leading to trainability and generalization issues. Inspired by a similar problem, recent breakthroughs in classical machine learning address this crux by creating models encoding the symmetries of the learning task. This is materialized through the usage of equivariant neural networks whose action commutes with that of the symmetry. In this work, we import these ideas to the quantum realm by presenting a general theoretical framework to understand, classify, design and implement equivariant quantum neural networks. As a special implementation, we show how standard quantum convolutional neural networks (QCNN) can be generalized to group-equivariant QCNNs where both the convolutional and pooling layers are equivariant under the relevant symmetry group. Our framework can be readily applied to virtually all areas of quantum machine learning, and provides hope to alleviate central challenges such as barren plateaus, poor local minima, and sample complexity.
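To make "equivariant" concrete with a toy example (a sketch of the general idea, not the paper's construction): the two-qubit SWAP operator commutes with U⊗U for any single-qubit unitary U, so the gate exp(-iθ·SWAP) is equivariant under the global SU(2) symmetry and can serve as a symmetry-respecting building block.

```python
import numpy as np

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

def random_unitary(n, rng):
    """Random unitary via QR decomposition of a complex Gaussian matrix."""
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Q, R = np.linalg.qr(A)
    return Q * (np.diag(R) / np.abs(np.diag(R)))  # fix column phases

rng = np.random.default_rng(0)
U = random_unitary(2, rng)
UU = np.kron(U, U)               # the symmetry acts identically on both qubits

# Since SWAP^2 = I, exp(-i*theta*SWAP) = cos(theta)*I - i*sin(theta)*SWAP.
theta = 0.7
gate = np.cos(theta) * np.eye(4) - 1j * np.sin(theta) * SWAP

# Equivariance: the gate commutes with the symmetry action.
assert np.allclose(gate @ UU, UU @ gate)
```

Layering such commuting gates yields a circuit whose action commutes with the full symmetry, which is the defining property of the equivariant QNNs the paper classifies.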

Theoretical Guarantees for Permutation-Equivariant Quantum Neural Networks

Abstract:

Despite the great promise of quantum machine learning models, there are several challenges one must overcome before unlocking their full potential. For instance, models based on quantum neural networks (QNNs) can suffer from excessive local minima and barren plateaus in their training landscapes. Recently, the nascent field of geometric quantum machine learning (GQML) has emerged as a potential solution to some of those issues. The key insight of GQML is that one should design architectures, such as equivariant QNNs, encoding the symmetries of the problem at hand. Here, we focus on problems with permutation symmetry (i.e., the group of symmetry Sn), and show how to build Sn-equivariant QNNs. We provide an analytical study of their performance, proving that they do not suffer from barren plateaus, quickly reach overparametrization, and can generalize well from small amounts of data. To verify our results, we perform numerical simulations for a graph state classification task. Our work provides the first theoretical guarantees for equivariant QNNs, thus indicating the extreme power and potential of GQML.
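A small numerical sanity check of the Sn-equivariance idea (an illustrative sketch, not the authors' code): on three qubits, the sum of all pairwise SWAPs is a Heisenberg-type generator that commutes with every qubit-permutation operator, so circuits built from it respect the S3 symmetry.

```python
import numpy as np
from itertools import permutations

n = 3
dim = 2 ** n

def perm_operator(sigma):
    """Unitary permuting qubit wires: output wire i carries input wire sigma[i]."""
    P = np.zeros((dim, dim))
    for b in range(dim):
        bits = [(b >> (n - 1 - i)) & 1 for i in range(n)]
        new_bits = [bits[sigma[i]] for i in range(n)]
        nb = sum(bit << (n - 1 - i) for i, bit in enumerate(new_bits))
        P[nb, b] = 1.0
    return P

def swap_op(i, j):
    sigma = list(range(n))
    sigma[i], sigma[j] = sigma[j], sigma[i]
    return perm_operator(sigma)

# S3-equivariant generator: the sum over all pairwise SWAPs.
H = swap_op(0, 1) + swap_op(0, 2) + swap_op(1, 2)

# H commutes with every one of the 3! = 6 permutation operators.
for sigma in permutations(range(n)):
    P = perm_operator(list(sigma))
    assert np.allclose(P @ H, H @ P)
```

The check works because conjugating a SWAP by a permutation just relabels which pair it swaps, and the sum runs over all pairs; the same construction scales to any n.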

Protocols for classically training quantum generative models on probability distributions

Abstract:

Quantum Generative Modelling (QGM) relies on preparing quantum states and generating samples from these states as hidden (or known) probability distributions. As distributions from some classes of quantum states (circuits) are inherently hard to sample classically, QGM represents an excellent testbed for quantum supremacy experiments. Furthermore, generative tasks are increasingly relevant for industrial machine learning applications, and thus QGM is a strong candidate for demonstrating a practical quantum advantage. However, this requires that quantum circuits are trained to represent industrially relevant distributions, and the corresponding training stage has an extensive training cost for current quantum hardware in practice. In this work, we propose protocols for classical training of QGMs based on circuits of the specific type that admit an efficient gradient computation, while remaining hard to sample. In particular, we consider Instantaneous Quantum Polynomial (IQP) circuits and their extensions. Showing their classical simulability in terms of the time complexity, sparsity and anti-concentration properties, we develop a classically tractable way of simulating their output probability distributions, allowing classical training to a target probability distribution. The corresponding quantum sampling from IQPs can be performed efficiently, unlike when using classical sampling. We numerically demonstrate the end-to-end training of IQP circuits using probability distributions for up to 30 qubits on a regular desktop computer. When applied to industrially relevant distributions this combination of classical training with quantum sampling represents an avenue for reaching advantage in the NISQ era.
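For intuition about the IQP circuit family, here is a brute-force statevector sketch for a handful of qubits (illustrative only; the paper's protocols avoid exactly this exponential cost): an IQP circuit is a layer of Hadamards, a diagonal layer of commuting Z-type phases, and a final layer of Hadamards.

```python
import numpy as np

n = 3
dim = 2 ** n
rng = np.random.default_rng(1)

# n-fold Hadamard.
H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = H1
for _ in range(n - 1):
    Hn = np.kron(Hn, H1)

# Diagonal phase layer: one angle theta_S per subset S of qubits (bitmask),
# acting as exp(i * theta_S * prod_{i in S} z_i) on computational basis states.
thetas = rng.uniform(0, 2 * np.pi, size=dim)
diag_phase = np.zeros(dim)
for b in range(dim):                 # basis state
    for S in range(dim):             # subset bitmask
        parity = bin(b & S).count("1") % 2
        diag_phase[b] += thetas[S] * (1 - 2 * parity)
D = np.exp(1j * diag_phase)

# |0...0> -> H^n -> diagonal phases -> H^n, then read off outcome probabilities.
psi = np.zeros(dim, dtype=complex)
psi[0] = 1.0
psi = Hn @ (D * (Hn @ psi))
probs = np.abs(psi) ** 2
assert np.isclose(probs.sum(), 1.0)
```

Because every phase gate is diagonal, the angles `thetas` are natural trainable parameters; the paper's contribution is computing (and differentiating) such output distributions at scales where the dense statevector above is infeasible.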

Universal algorithms for quantum data learning

Abstract:

Operating quantum sensors and quantum computers would make data in the form of quantum states available for purely quantum processing, opening new avenues for studying physical processes and certifying quantum technologies. In this Perspective, we review a line of works dealing with measurements that reveal structural properties of quantum datasets given in the form of product states. These algorithms are universal, meaning that their performances do not depend on the reference frame in which the dataset is provided. Requiring the universality property implies a characterization of optimal measurements via group representation theory.

Generalization despite overfitting in quantum machine learning models

Abstract:

The widespread success of deep neural networks has revealed a surprise in classical machine learning: very complex models often generalize well while simultaneously overfitting training data. This phenomenon of benign overfitting has been studied for a variety of classical models with the goal of better understanding the mechanisms behind deep learning. Characterizing the phenomenon in the context of quantum machine learning might similarly improve our understanding of the relationship between overfitting, overparameterization, and generalization. In this work, we provide a characterization of benign overfitting in quantum models. To do this, we derive the behavior of a classical interpolating Fourier features model for regression on noisy signals, and show how a class of quantum models exhibits analogous features, thereby linking the structure of quantum circuits (such as data-encoding and state preparation operations) to overparameterization and overfitting in quantum models. We intuitively explain these features according to the ability of the quantum model to interpolate noisy data with locally “spiky” behavior and provide a concrete demonstration example of benign overfitting.
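The classical side of this story is easy to reproduce at home (a minimal sketch, not the paper's experiment): with far more Fourier features than data points, the minimum-norm least-squares solution interpolates the noisy training data exactly, yet can still track the underlying signal away from the training points.

```python
import numpy as np

rng = np.random.default_rng(2)
m = 20                                          # training points
x = np.sort(rng.uniform(0, 2 * np.pi, m))
y = np.sin(x) + 0.3 * rng.standard_normal(m)    # clean signal plus noise

K = 60                                          # harmonics: 2K + 1 features >> m
def features(x):
    ks = np.arange(1, K + 1)
    return np.concatenate([np.ones((len(x), 1)),
                           np.cos(np.outer(x, ks)),
                           np.sin(np.outer(x, ks))], axis=1)

Phi = features(x)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # minimum-norm interpolant

# Overfitting: the model fits the noisy labels exactly ...
train_err = np.max(np.abs(Phi @ w - y))
assert train_err < 1e-6

# ... yet can still be evaluated against the clean signal on a dense grid.
grid = np.linspace(0, 2 * np.pi, 400)
test_mse = np.mean((features(grid) @ w - np.sin(grid)) ** 2)
```

The minimum-norm solution concentrates the noise into locally "spiky" wiggles near the training points, which is the same intuition the paper carries over to quantum models via their Fourier structure.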

Interpretable Quantum Advantage in Neural Sequence Learning

Abstract:

Quantum neural networks have been widely studied in recent years, given their potential practical utility and recent results regarding their ability to efficiently express certain classical data. However, analytic results to date rely on assumptions and arguments from complexity theory. Due to this, there is little intuition as to the source of the expressive power of quantum neural networks or for which classes of classical data any advantage can be reasonably expected to hold. Here, we study the relative expressive power between a broad class of neural network sequence models and a class of recurrent models based on Gaussian operations with non-Gaussian measurements. We explicitly show that quantum contextuality is the source of an unconditional memory separation in the expressivity of the two model classes. Additionally, as we are able to pinpoint quantum contextuality as the source of this separation, we use this intuition to study the relative performance of our introduced model on a standard translation data set exhibiting linguistic contextuality. In doing so, we demonstrate that our introduced quantum models are able to outperform state of the art classical models even in practice.

QML Vietnam
Prepare for the Future