A quantum machine learning algorithm based on generative models

Presentation transcript:

A quantum machine learning algorithm based on generative models by X. Gao, Z.-Y. Zhang, and L.-M. Duan Science Advances Volume 4(12):eaat9004 December 7, 2018 Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works. Distributed under a Creative Commons Attribution NonCommercial License 4.0 (CC BY-NC).

Fig. 1 Classical and quantum generative models. (A) Illustration of a factor graph, which includes widely used classical generative models as special cases. A factor graph is a bipartite graph in which one group of vertices represents variables (denoted by circles) and the other group represents positive functions (denoted by squares) acting on the connected variables. The corresponding probability distribution is given by the product of all these functions. For instance, the probability distribution in (A) is p(x1, x2, x3, x4, x5) = f1(x1, x2, x3, x4)f2(x1, x4, x5)f3(x3, x4)/Z, where Z is a normalization factor. Each variable connects to at most a constant number of functions, which introduce correlations in the probability distribution. (B) Illustration of a tensor network state. Each unshared (shared) edge represents a physical (hidden) variable, and each vertex represents a complex function of the variables on its connected edges. The wave function of the physical variables is defined as the product of the functions on all the vertices, after summation (contraction) over the hidden variables. Note that a tensor network state can be regarded as a quantum version of the factor graph after partial contraction (similar to the marginal probability in the classical case), with positive real functions replaced by complex functions. (C) Definition of the QGM introduced in this paper. The state |Q〉 represented here is a special kind of tensor network state, with the vertex functions fixed to the three types shown on the right side. Without the single-qubit invertible matrices Mi, which contain the model parameters, the wave function connected by Hadamard and identity matrices just represents a graph state. To get a probability distribution from this model, we measure a subset of n qubits (among the total m qubits corresponding to the physical variables) in the computational basis under this state.
The unmeasured m − n qubits are traced over to get the marginal probability distribution P({xi}) of the measured n qubits. We prove in this paper that P({xi}) is general enough to include the probability distributions of all classical factor graphs and special enough to allow a convenient quantum algorithm for parameter training and inference. X. Gao et al. Sci Adv 2018;4:eaat9004
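The factor-graph distribution in panel (A) can be made concrete in a few lines of code. The sketch below builds p(x1, …, x5) = f1·f2·f3/Z over binary variables and checks that it normalizes; marginalizing out x5 mirrors the tracing-out of unmeasured variables described above. The factor tables are arbitrary positive values chosen for illustration, not parameters from the paper.

```python
import itertools
import numpy as np

# Toy instance of the factor graph in Fig. 1A: five binary variables and
# three positive factors f1(x1,x2,x3,x4), f2(x1,x4,x5), f3(x3,x4).
rng = np.random.default_rng(0)
f1 = rng.random((2, 2, 2, 2)) + 0.1   # +0.1 keeps every entry positive
f2 = rng.random((2, 2, 2)) + 0.1
f3 = rng.random((2, 2)) + 0.1

def unnormalized_p(x1, x2, x3, x4, x5):
    # Product of all factor functions, before normalization.
    return f1[x1, x2, x3, x4] * f2[x1, x4, x5] * f3[x3, x4]

# Normalization factor Z: sum over all 2^5 assignments.
states = list(itertools.product([0, 1], repeat=5))
Z = sum(unnormalized_p(*s) for s in states)
p = {s: unnormalized_p(*s) / Z for s in states}

# Marginal distribution over (x1, x2, x3, x4), summing out x5 --
# the classical analog of tracing over an unmeasured variable.
marg = {}
for (x1, x2, x3, x4, x5), v in p.items():
    marg[(x1, x2, x3, x4)] = marg.get((x1, x2, x3, x4), 0.0) + v

assert abs(sum(p.values()) - 1.0) < 1e-12
assert abs(sum(marg.values()) - 1.0) < 1e-12
```

Because each variable touches only a constant number of factors, the product form keeps the model compact even as the number of variables grows; only the brute-force normalization here is exponential.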

Fig. 2 Efficient representation of factor graphs by the QGM. (A) General form of the correlation function of two binary variables in a factor graph, with the parameters a, b, c real. This correlation acts as the building block for general correlations in any factor graph, by use of the universal approximation theorem (25). (B) Notations for some common tensors and their identities: D is a diagonal matrix with diagonal elements d(x), x = 0, 1, and Z is the diagonal Pauli matrix diag(1, −1). (C) Representation of the building-block correlator f(x1, x2) of a factor graph [see (A)] by the QGM with one hidden variable (unmeasured) between two visible variables x1, x2 (measured in the computational basis). As illustrated in this figure, f(x1, x2) can be calculated as the contraction of two copies of a diagram from Fig. 1C, 〈Q|x1, x2〉〈x1, x2|Q〉, where x1, x2 are the measurement results of the two visible variables. We choose the single-bit matrices D1, D2 to be diagonal, with diagonal elements d1(x1) and d2(x2). For simplification of this graph, we have used the identities shown in (B). (D) Further simplification of the graph in (C), where we choose the form of the single-bit matrix M†M acting on the hidden variable to be M†M = λ1|+〉〈+| + λ2|−〉〈−| with positive eigenvalues λ1, λ2. We have used the identities in (B) and the relation HZH = X, where X (H) denotes the Pauli (Hadamard) matrix. By solving for the values of λ1, λ2, d1(x1), d2(x2) in terms of a, b, c (see the proof of theorem 1), this correlator of the QGM exactly reproduces the building-block correlator f(x1, x2) of the factor graph.
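The matrix identities that drive the simplification in panels (C) and (D) are easy to verify numerically. The sketch below checks the relation HZH = X and confirms that M†M = λ1|+〉〈+| + λ2|−〉〈−| is positive with eigenvalues λ1, λ2; the numerical values of λ1, λ2 are arbitrary examples, not values from the paper.

```python
import numpy as np

# Tensors from Fig. 2B/D.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard
Z = np.diag([1.0, -1.0])                               # diagonal Pauli matrix
X = np.array([[0.0, 1.0], [1.0, 0.0]])                 # Pauli X

# Identity used in the simplification of panel (D): HZH = X.
assert np.allclose(H @ Z @ H, X)

# M†M on the hidden variable, expanded in the |+>, |-> basis with
# positive eigenvalues λ1, λ2 (example values only).
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
lam1, lam2 = 2.0, 0.5
MdagM = lam1 * np.outer(plus, plus) + lam2 * np.outer(minus, minus)

# The eigenvalues of M†M are exactly λ1, λ2, and both are positive,
# so M is invertible as the caption requires.
evals = np.sort(np.linalg.eigvalsh(MdagM))
assert np.allclose(evals, [lam2, lam1])
assert np.all(evals > 0)
```

Since |+〉 and |−〉 are the eigenvectors of X, diagonalizing M†M in this basis is what lets the hidden-variable contraction in panel (D) collapse to a closed-form expression in λ1, λ2, d1(x1), d2(x2).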

Fig. 3 Illustration of our training algorithm for the QGM. (A) Training and inference of the QGM are reduced to measuring certain operators under the state |Q(z)〉. The key step of the quantum algorithm is therefore to prepare the state |Q(z)〉, which is achieved by recursive quantum phase estimation with the constructed parent Hamiltonian. The variables in the set z whose values are specified are called conditioned variables, whereas the other variables, which carry the binary physical index, are called unconditioned variables. We group the variables such that each group contains only one unconditioned variable and different groups are connected by a small constant number of edges (representing virtual indices, or hidden variables). Each group then defines a tensor with one physical index (denoted by p) and a small constant number of virtual indices (denoted by i, j, k in the figure). (B) Tensor network representation of |Q(z)〉, where a local tensor is defined for each group specified in (A). (C) Tensor network representation of |Qt〉, where the |Qt〉 are the series of states reduced from |Q(z)〉. In each step of the reduction, one local tensor is moved out. The moved-out local tensors are represented by the unfilled circles, each carrying a physical index set to 0. For the edges between the remaining tensor network and the moved-out tensors, we set the corresponding virtual indices to 0 (also represented by unfilled circles). (D) Construction of the parent Hamiltonian. The figure shows how to construct one term in the parent Hamiltonian, which corresponds to a group of neighboring local tensors such as those in the dashed box in (C). After contraction of all virtual indices within the group, we get a tensor Lpqr,ij, which defines a linear map L from the virtual indices i, j to the physical indices p, q, r.
As the indices i, j take all possible values, the range of this mapping L spans a subspace range(L) of the Hilbert space Hp,q,r of the physical indices p, q, r. This subspace has a complementary orthogonal subspace inside Hp,q,r, denoted comp(L). The projector onto the subspace comp(L) then defines one term in the parent Hamiltonian, and by this definition, |Qt〉 lies in the kernel of this projector. We construct each local term from a group of neighboring tensors. Each local tensor can be involved in several Hamiltonian terms [as illustrated in (C) by the dashed box and the dotted box]; thus, some neighboring groups have non-empty overlap, and they generate terms that, in general, do not commute. By this method, one can construct a parent Hamiltonian whose ground state generically uniquely defines the state |Qt〉 (30) [technically, this step requires the tensor network to be injective (29), a condition that is generically satisfied for a random choice of the tensor networks (30)]. (E) States involved in the evolution from |Qt−1〉 to |Qt〉 by quantum phase estimation applied to their parent Hamiltonians. |Q⊥t−1〉 and |Q⊥t〉 represent the states orthogonal to |Qt−1〉 and |Qt〉, respectively, inside the two-dimensional (2D) subspace spanned by |Qt−1〉 and |Qt〉. The angle between |Qt〉 and |Qt−1〉 is determined by the overlap ηt = |〈Qt|Qt−1〉|2. (F) State evolution under recursive application of the quantum phase estimation algorithm. Starting from the state |Qt−1〉, we always stop at the state |Qt〉 following any branch of this evolution, where ηt and 1 − ηt denote the probabilities of the corresponding outcomes.
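The parent-Hamiltonian term of panel (D) can be sketched numerically: reshape the local tensor Lpqr,ij into a matrix, form the projector onto the orthogonal complement of its range, and check that any state produced by the tensor is annihilated by the term. The tensor entries and index dimensions below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Illustrative local tensor L[p, q, r, i, j] with all indices binary:
# a linear map L from the virtual indices (i, j) to the physical
# indices (p, q, r), as in Fig. 3D.
rng = np.random.default_rng(1)
L = rng.standard_normal((2, 2, 2, 2, 2))
Lmat = L.reshape(8, 4)  # map from the 4-dim virtual space to the 8-dim H_{p,q,r}

# Orthonormal basis of range(L) via SVD, then the projector onto range(L).
U, s, _ = np.linalg.svd(Lmat, full_matrices=False)
rank = int(np.sum(s > 1e-12))
P_range = U[:, :rank] @ U[:, :rank].T

# One parent-Hamiltonian term: the projector onto comp(L), the
# orthogonal complement of range(L) inside H_{p,q,r}.
h_term = np.eye(8) - P_range

# Any state of the form L|v> (the local tensor with arbitrary virtual
# indices attached) lies in the kernel of the term, so states built from
# this tensor are zero-energy ground states of the term.
v = rng.standard_normal(4)
assert np.allclose(h_term @ (Lmat @ v), 0.0)

# Sanity check: h_term is an orthogonal projector (h² = h, symmetric).
assert np.allclose(h_term @ h_term, h_term)
assert np.allclose(h_term, h_term.T)
```

In the full construction, one such positive-semidefinite term is built per group of neighboring tensors; |Qt〉 annihilates every term simultaneously, which is why it is a ground state of their sum even though the overlapping terms need not commute.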