Overview

“Sometimes it happens that a man's circle of horizon becomes smaller and smaller, and as the radius approaches zero it concentrates on one point. And then that becomes his point of view.”

― David Hilbert

Quantum Computing: a computational paradigm that explicitly exploits the properties of quantum mechanics to perform certain computations orders of magnitude faster than currently available non-quantum (classical) computers.


Math Resources

Linear Algebra

Complex Numbers

Infinity

Books and Online Resources

Books - General Overview

Books - Algorithms

Books - Practical

Magazine/Online Resources and Blogs

Courses

Quantum Mechanics and Quantum Computation (Vazirani, CS-191x Edx)

(Old) Courses

Quantum Computing (Dave Bacon, Department of Computer Science & Engineering, University of Washington: CSE-599d)

Quantum Formalism Course 2020

Quantum Machine Learning (Amira Abbas, NITheCS, South Africa)

Slides and notebooks are available in the link above.

CERN Course (Nov/Dec 2020)

QC course from CERN focusing on the practical aspects of quantum computing and on the implementation of algorithms.

Algorithms

Classical

BB84

Bernstein-Vazirani

CHSH

Deutsch

  • Not very useful in practice

Deutsch-Jozsa

  • Generalization of Deutsch
  • Not very useful in practice
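As a concrete illustration, here is a minimal pure-Python state-vector simulation of the one-qubit Deutsch algorithm (the n=1 case of Deutsch-Jozsa), deciding with a single oracle query whether f is constant or balanced. The helper names are my own, not from any framework:

```python
import math

def apply_h(state, qubit):
    """Apply a Hadamard gate to one qubit of a state vector."""
    s = 1 / math.sqrt(2)
    new = [0.0] * len(state)
    mask = 1 << qubit
    for i, amp in enumerate(state):
        if amp == 0:
            continue
        i0, i1 = i & ~mask, i | mask
        if i & mask:                 # |1> -> (|0> - |1>)/sqrt(2)
            new[i0] += s * amp
            new[i1] -= s * amp
        else:                        # |0> -> (|0> + |1>)/sqrt(2)
            new[i0] += s * amp
            new[i1] += s * amp
    return new

def deutsch(f):
    """Decide whether f: {0,1} -> {0,1} is constant or balanced, one query."""
    state = [0.0] * 4
    state[0b01] = 1.0                # start in |x=0>|y=1>, index = 2*x + y
    state = apply_h(state, 1)        # H on the input qubit x
    state = apply_h(state, 0)        # H on the ancilla y
    # Oracle U_f: |x, y> -> |x, y XOR f(x)>
    oracled = [0.0] * 4
    for i, amp in enumerate(state):
        x, y = i >> 1, i & 1
        oracled[(x << 1) | (y ^ f(x))] += amp
    state = apply_h(oracled, 1)      # H on x again
    p1 = sum(abs(a) ** 2 for i, a in enumerate(state) if i >> 1 == 1)
    return "balanced" if p1 > 0.5 else "constant"
```

Measuring x as 0 means f is constant; 1 means balanced. Classically this decision needs two queries of f.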

Grover

  • Used to accelerate search

HHL

  • For solving systems of linear equations exponentially faster than current non-quantum algorithms

Shor

  • Factors integers
  • Has serious implications for current cryptography methods
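Only the order-finding subroutine of Shor is quantum; turning an order into a factor is classical number theory. A sketch of that classical post-processing (the function name is illustrative):

```python
import math

def factor_from_order(N, a, r):
    """Classical post-processing of Shor's algorithm: given the order r of
    a modulo N (found by the quantum subroutine), try to extract a factor."""
    if r % 2:
        return None                  # odd order: retry with another a
    x = pow(a, r // 2, N)
    if x == N - 1:
        return None                  # trivial square root of 1: retry
    for d in (math.gcd(x - 1, N), math.gcd(x + 1, N)):
        if 1 < d < N:
            return d
    return None

# Example: a = 7 has order r = 4 modulo N = 15 (7^4 = 2401 = 1 mod 15),
# and factor_from_order(15, 7, 4) recovers the factor 3.
```

The quantum speedup lives entirely in finding r; everything above runs in polynomial time classically.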

Superdense Coding

Teleportation

Optimization algorithms

Quantum Annealing

QAOA (Quantum Approximate Optimization Algorithm)

VQE Variational Quantum Eigensolver

  • Approximate the ground state of chemical and physical systems.

Quantum ML

QSVM

QGAN


Quantum Machine Learning

As its name suggests, this is a research/application topic that interconnects ideas and projects from QC and ML. The connection can work both ways: QC <-> ML

  1. QC -> ML: QC can help by speeding up the typical ML pipeline of training, evaluating and running inference on a model.
  2. ML -> QC: ML can help solve current problems in QC research, such as developing/exploring new quantum algorithms or tackling noise and error-correction (ECC) problems in current quantum HW.

Libraries

Tools

An amazing tool:

In Practice

Programming Frameworks

Programming QC

You program a QC from a classical computer, so the interface is, let's say, "classical".

Apply a unitary transformation to a quantum state \( S_{n} \) to move to a quantum state \( S_{n+1} \).

With current hardware we usually can't implement unitary transformations represented by big matrices (which act on many qubits), so we use unitary transformations on 1 or 2 (4 tops) qubits. Quantum gates move the vector representing a qubit on the Bloch sphere to another point.
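A tiny sketch of the idea, using plain Python lists for single-qubit states and 2x2 unitaries (the gates are the standard Hadamard and Pauli-X):

```python
import math

# Single-qubit states as length-2 vectors; gates as 2x2 unitary matrices.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]   # Hadamard
X = [[0, 1],
     [1, 0]]                                  # Pauli-X (bit flip)

def apply(gate, state):
    """Matrix-vector product: one gate application, S_n -> S_{n+1}."""
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

ket0 = [1, 0]             # |0>, the north pole of the Bloch sphere
plus = apply(H, ket0)     # H|0> = (|0> + |1>)/sqrt(2), on the equator
back = apply(H, plus)     # H is its own inverse, so we are back at |0>
one = apply(X, ket0)      # X|0> = |1>, the south pole
```

Real frameworks (Qiskit, Cirq) do exactly this, just with sparse/tensor tricks to handle many qubits.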

Quantum circuits are the equivalent of programs in the classical world.

Evolution of Programming

Past: 80s and 90s

No programming languages; algorithm research based on linear algebra and the basic axioms of quantum mechanics.

Outcome: accurate algorithms, represented with linear algebra.

  • Simon's, Grover's, Peter Shor's algorithms, etc.

Now

There are various programming languages; we are in a hybrid phase. Quantum SW is still in its infancy but developing fast.

Outcomes:

Focus on NISQ algorithms (see Vocabulary). Universal, HW-agnostic quantum circuits. Compilation & transpilation.

QML: Quantum SVM, optimization

End users: mainly quantum information PhDs

Gate Level Coding

Qiskit - Python

Tuning known algorithms

Cirq - Python

Embedding accurate building blocks

Q# - Microsoft's quantum language (.NET)

Current state

  • Now we're limited to gate-level/building-block programming
  • This limits quantum software development creativity and productivity
  • To move past this 1960s-equivalent (assembly-level) era, a breakthrough is required

Future

Higher level of abstraction: model-based automated synthesis of quantum circuits, various programming languages.

Outcomes:

  • More quantum algorithms? Hope so!
  • Focus changes over time from NISQ towards fault-tolerant algorithms

End Users:

  • Quantum information experts
  • Computer scientists
  • Expert programmers
  • Domain experts

Quantum algorithm design: Cadmium, a comprehensive solution

Moore's Law is dead?

Computational Models

Gate-based

Quantum Annealing-based (D-Wave)

Superconducting Qubits

Information is shared through physical wires.

IBM, Rigetti, D-Wave

Atomic qubits (trapped ions)

Information shared through lasers.

IonQ, Honeywell

Main QC Conferences

Vocabulary

Adiabatic Quantum Computer/Computation (AQC)

In physics, an adiabatic process does not involve the transfer of heat or matter into or out of a thermodynamic system; energy is transferred only as work.

For solving a problem with adiabatic computation we need to:

  1. Find a (usually complicated) Hamiltonian whose ground state describes the solution to the problem.
  2. Prepare a quantum system with a simple Hamiltonian, initialized in its ground state.
  3. Evolve (compute) the simple Hamiltonian adiabatically into the desired complicated Hamiltonian.
  4. By the adiabatic theorem the system remains in the ground state and at the end of the computation the state of the system describes the solution to the problem.

AQC has been shown to be equivalent, up to polynomial overhead, to quantum circuit-model computation. The paper proves:

"Theorem 1.3 Any quantum computation can be efficiently simulated by an adiabatic computation with two-local nearest neighbor Hamiltonians operating on six-state particles set on a two dimensional grid."

Time complexity in AQC is measured as the time taken to complete the adiabatic evolution, which in turn depends on the gap between the energy eigenvalues (the spectral gap) of the Hamiltonian. When the system starts in the ground state, the minimum gap between the ground state and the first excited state of H(t) over the evolution bounds the rate at which the Hamiltonian can be evolved: if the spectral gap is small, the Hamiltonian must be evolved slowly. In O notation:

\( T = O(1/g_{\min}^2) \), where \( g_{\min} \) is the minimum spectral gap of \( H(t) \)
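A toy numeric illustration of the spectral gap, assuming two hypothetical 2x2 Hamiltonians and the linear interpolation \( H(s) = (1-s) H_i + s H_f \):

```python
import math

# Hypothetical 2x2 real symmetric Hamiltonians (toy example).
H_i = [[0.0, 1.0],
       [1.0, 0.0]]    # simple initial Hamiltonian
H_f = [[1.0, 0.1],
       [0.1, -1.0]]   # complicated "problem" Hamiltonian

def gap(s):
    """Spectral gap of H(s) = (1-s)*H_i + s*H_f."""
    a = (1 - s) * H_i[0][0] + s * H_f[0][0]
    b = (1 - s) * H_i[0][1] + s * H_f[0][1]
    d = (1 - s) * H_i[1][1] + s * H_f[1][1]
    # Eigenvalues of [[a, b], [b, d]] are ((a+d) +/- sqrt((a-d)^2 + 4b^2)) / 2,
    # so the gap between them is sqrt((a-d)^2 + 4b^2).
    return math.sqrt((a - d) ** 2 + 4 * b ** 2)

# Scan the schedule s in [0, 1] for the minimum gap.
g_min = min(gap(k / 1000) for k in range(1001))
# The required evolution time then scales as T = O(1 / g_min**2).
```

For real problem Hamiltonians the gap can shrink exponentially with system size, which is exactly where AQC slows down.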

(Quantum) Annealing Computing

In this kind of systems, computation is done like this:

  1. Initialize the system in a quantum-mechanical superposition of all possible states with equal weights.
  2. Evolve the system following the time-dependent Schrödinger equation.
  3. The amplitudes of all candidate states keep changing, effectively realizing quantum parallelism.
  4. This causes quantum tunneling between states (driven by the time-dependent strength of the transverse magnetic field).

\( H = (1 - s) H_i + s H_f \) where:

  • \( H_i \) is the initial Hamiltonian, whose ground state is supposed to be easy to prepare
  • \( H_f \) is the final Hamiltonian, whose ground state encodes the solution of the problem; a Hamiltonian of the corresponding Ising model
  • If s is changed slowly, the final state will be the solution to the problem. s can be changed by modifying the strength of a magnetic field

DWave Quantum Annealing Information

Part of the quantum community does not consider quantum annealers to be real quantum computers.

Circuit-based Quantum Computing

Entanglement

A primary differentiator between classical and quantum mechanics. In the quantum world, it occurs when two or more particles interact and the state of each individual particle CANNOT be described independently of the state of the others.

Interference

See Deutsch algorithm

Minimum Energy State

Physical systems tend to seek a state of minimum energy: objects slide down hills, a hot object cools down over time... This is also true in the realm of quantum physics.

NISQ (Noisy Intermediate-Scale Quantum)

Around 2020, NISQ is the current generation of quantum HW (see Preskill's paper). The main drawback of this generation of HW is that it lacks efficient error correction, and today's quantum algorithms are very sensitive to noise in the execution environment. For example, factoring big integers (something classical computers can't do effectively) would require millions of qubits on a NISQ computer, which is not feasible. The high number of qubits is required not by the algorithm itself, but by the error correction needed to reach the required precision in the output (see the difference between [Logical vs Physical Qubits](#Logical vs Physical Qubit)).

Around 2021 we can say that NISQ is the 1st generation of Quantum Computers publicly available.

QPU

The equivalent in the quantum computing realm of CPUs/GPUs/TPUs.

Quantum De-coherence

The mathematical representation of a quantum system is given by a wave function. If there is a definite phase relation between different states of the system, the system is said to be coherent. In a perfectly isolated environment, coherence is maintained indefinitely; however, a quantum system cannot be kept perfectly isolated if we want to manipulate it. For example, during measurement the system's isolation is broken, so coherence is shared with the environment and is lost as time passes. This process is called decoherence.

Quantum Gates

Quantum gates are basically unitary transformations that rotate the state vectors (qubits) in the Hilbert space.

Qubit

The (pure) states of quantum systems are modelled as normalized vectors (aka state vectors) in separable complex Hilbert spaces. A vector \( \psi \) is normalized if \( \lVert \psi \rVert = 1 \).

A qubit is a unit vector in a two-dimensional complex Hilbert space.

Qubits are manipulated using [Quantum Gates](#Quantum Gates).

Logical vs Physical Qubit

1 logical qubit maps to N physical qubits (1:N). The redundancy is needed to cope with noise; the lower the N, the more efficient the quantum system.
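A classical analogue of this overhead is the 3-bit repetition code, sketched below (real quantum error-correcting codes, e.g. the surface code, need far more physical qubits per logical qubit, but the logical-vs-physical idea is the same):

```python
def encode(bit):
    """One logical bit -> 3 physical bits (classical repetition code)."""
    return [bit] * 3

def decode(physical):
    """Majority vote: recovers the logical bit if at most 1 bit flipped."""
    return 1 if sum(physical) >= 2 else 0

# A single bit-flip error on the middle physical bit is corrected:
logical = 1
corrupted = encode(logical)
corrupted[1] ^= 1          # noise flips one physical bit
recovered = decode(corrupted)
```

The cost is visible immediately: correcting one error class already triples the qubit count, which is why NISQ machines cannot afford full error correction.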

QUBO (Quadratic Unconstrained Binary Optimization)

These problems appear in computer science, especially in Machine Learning. Variables are true/false, corresponding to 1/0 values. A QUBO problem is defined by Q, an NxN upper-triangular matrix of real weights, and x, a vector of binary variables; the goal is to minimize the function:

\( f(x) = \sum_i Q_{i,i} x_i + \sum_{i<j} Q_{i,j} x_i x_j \)

The diagonal terms \( Q_{i,i} \) are the linear coefficients and the non-zero off-diagonal terms \( Q_{i,j} \) are the quadratic coefficients. In short:

\( \min_x x^T Q x \)
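For small N the minimum can be found by brute force, which is handy for checking annealer output; a sketch assuming the upper-triangular convention above (names are illustrative):

```python
from itertools import product

def qubo_bruteforce(Q):
    """Minimize f(x) = x^T Q x over binary x by exhaustive search.
    Q is an upper-triangular list-of-lists of real weights."""
    n = len(Q)
    best_x, best_val = None, float("inf")
    for x in product((0, 1), repeat=n):
        val = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(i, n))
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Hypothetical 2-variable example: f(x) = -x0 - x1 + 2*x0*x1
# (reward picking a variable, penalize picking both).
Q = [[-1, 2],
     [0, -1]]
best_x, best_val = qubo_bruteforce(Q)   # minimum -1, at (0,1) or (1,0)
```

Brute force is O(2^N), which is exactly why QUBO instances are handed to annealers/QAOA for larger N.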

Superposition

A superposition is just a special type of linear combination.

Reversible Computation

People:

Simon, David Deutsch, Lov Grover, Peter Shor, Seth Lloyd, Aram Harrow, Hassidim, Manjunath
