Simplified Quantum Machine Learning (QML) Classification

I have been working with the SWAP Test quite a bit (check out my Basis-Specific SWAP Test if you haven’t already), so I was excited to read the paper by Carsten Blank et al., “Quantum classifier with tailored quantum kernel,” npj Quantum Information (2020), DOI: 10.1038/s41534-020-0272-6. However, the paper’s language is very unclear (I wouldn’t have understood it at all if I hadn’t already known how neural networks and the SWAP Test work), and its circuit requires several seemingly unnecessary qubits (they come from a companion paper that seems to use them, but this paper doesn’t need them). I hope that this article and my version of the circuit, in contrast, are much easier to understand.

What is the SWAP Test?

The SWAP Test compares two quantum states. You measure 0 with a probability of 1 when the states are identical and with a probability of 0.5 when the states are maximally different (i.e., orthogonal). I have read that the SWAP Test works with entangled states as well, but I haven’t personally played around with that yet.
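To make this concrete, here is a minimal sketch of a single SWAP Test in Python with Qiskit. It assumes qiskit and qiskit-aer are installed, and the Ry-angle state preparation is just for illustration, not anything from the paper:

```python
# A minimal SWAP Test sketch: qubit 0 is the ancilla, qubits 1 and 2
# hold the two states being compared (Ry encoding is an assumption
# for illustration).
from math import pi

from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

def swap_test(theta_a, theta_b, shots=4096):
    """Compare |a> = Ry(theta_a)|0> with |b> = Ry(theta_b)|0> and
    return the estimated probability of measuring 0 on the ancilla."""
    qc = QuantumCircuit(3, 1)
    qc.ry(theta_a, 1)   # prepare the first state
    qc.ry(theta_b, 2)   # prepare the second state
    qc.h(0)             # Hadamard on the ancilla
    qc.cswap(0, 1, 2)   # Fredkin (controlled-SWAP) gate
    qc.h(0)             # second Hadamard
    qc.measure(0, 0)    # z-basis measurement
    sim = AerSimulator()
    counts = sim.run(transpile(qc, sim), shots=shots).result().get_counts()
    return counts.get("0", 0) / shots

print(swap_test(0.0, 0.0))  # identical states -> ~1.0
print(swap_test(0.0, pi))   # orthogonal states -> ~0.5
```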

Looking at the circuit diagram above, the “out” qubits are the control qubits for Fredkin gates, also known as controlled-SWAP gates. Each Fredkin gate is sandwiched between Hadamard gates, and each “out” qubit is finished with a z-basis measurement; each of these gate combinations performs one SWAP Test, comparing quantum states as previously noted.
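If you want to reproduce that structure programmatically, here is a sketch of the whole classifier circuit in Qiskit. The function name, register names, and Ry encoding here are my own choices for illustration, not the paper’s:

```python
# A sketch of the full circuit: one "out"/"in" pair per "train" qubit,
# because each SWAP Test needs its own copy of the test state.
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

def classifier_circuit(train_angles, in_angle):
    """Build one SWAP Test per training state, all in parallel."""
    n = len(train_angles)
    out = QuantumRegister(n, "out")
    in_reg = QuantumRegister(n, "in")
    train = QuantumRegister(n, "train")
    c = ClassicalRegister(n, "c")
    qc = QuantumCircuit(out, in_reg, train, c)
    for i, theta in enumerate(train_angles):
        qc.ry(in_angle, in_reg[i])             # identical copies of the test state
        qc.ry(theta, train[i])                 # one training state per classification
        qc.h(out[i])                           # first Hadamard
        qc.cswap(out[i], in_reg[i], train[i])  # Fredkin gate
        qc.h(out[i])                           # second Hadamard
        qc.measure(out[i], c[i])               # z-basis measurement
    return qc
```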

This circuit could, of course, be built with Python and Qiskit (or another library of choice). OpenQASM has its limitations, but I prefer the language and its visuals. Plus, my goal here is simplicity; you can see everything you need to see in one concise circuit diagram.

Training Data

The paper glosses over the process of mapping training data to qubits, but I hope to cover that in a future article. For now, just know that each “train” qubit represents a classification. The closer a test qubit, or “in” qubit, is to a “train” qubit, as determined by a SWAP Test, the more likely that classification is to be appropriate for that data.
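As a small preview, one simple possibility (my assumption, not necessarily the paper’s encoding) is to rescale each feature value to a rotation angle and load it onto a qubit with a single Ry gate:

```python
# Hypothetical angle encoding: rescale a scalar feature to [0, pi]
# so it can be loaded onto a qubit with one Ry rotation.
from math import pi

def feature_to_angle(x, x_min, x_max):
    """Map a feature value in [x_min, x_max] to an angle in [0, pi]."""
    return pi * (x - x_min) / (x_max - x_min)

print(feature_to_angle(5.0, 0.0, 10.0))  # midpoint of the range -> pi/2
```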

Classifications can be anything. If you are unfamiliar with machine learning, just think of them as categories. If you have stuff and you want to sort the items into piles, you select some criteria for what goes into which pile. Your algorithm “learns” those criteria so that when you acquire one more thing, it recommends which category, or pile, that thing probably belongs in.

For purposes of this article, I didn’t use real data; I used numbers from a truly random source: the mind of a teenager. In a future article, I will use meaningful data to compare a simple classical neural network to a simple quantum neural network.

Test Data

The “in” qubits represent the test data; they are all prepared identically. This is necessary in this particular circuit because the SWAP Test is destructive: measurement disturbs the states, so you can’t keep comparing the same copy of the test data to different training data.

Using the previous analogy for those unfamiliar with machine learning, the test data would be that one new thing that you just acquired. You’ve already trained your model to recognize the characteristics of the contents of each classification, and the test data is something that you want to place into the appropriate category.

You may be thinking from this example that machine learning seems really easy, but imagine having voluminous data, many criteria to assess, and many classifications to apply. Classifications at that point may not be obvious, and you will be looking more at which classifications are the best fit, rather than achieving perfect matches.

Results

If you tally the probability of each “out” qubit measuring 0, you will find that the “in” state is most similar to the state of train[0]. Therefore, the classification associated with that state is the classification most likely to be appropriate.

Again, the data for this article was chosen at random. In this case, only the third “train” qubit can be ruled out as unlikely; the other three are all fairly close, with the first merely being slightly closer than the second and fourth. Applying any margin of error would make these results worthless, but the point of this article is only to show how to obtain such results via SWAP Tests.

I hereby also concede that the analysis would be easier to do using Python, especially with larger and larger circuits, but manually adding the results of only four qubits is still faster than writing the code to do it.
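In case you do want the code, here is a sketch of that tally in Qiskit, reusing the classifier_circuit() sketch from earlier. The angles are hypothetical stand-ins, since my actual numbers came from the mind of a teenager:

```python
# Estimate P(0) for each "out" qubit from measurement counts.
# Note: Qiskit bitstrings list classical bit 0 on the right.
from qiskit import transpile
from qiskit_aer import AerSimulator

qc = classifier_circuit([0.3, 0.8, 2.5, 0.6], in_angle=0.4)  # hypothetical angles
sim = AerSimulator()
shots = 4096
counts = sim.run(transpile(qc, sim), shots=shots).result().get_counts()

n = qc.num_clbits
p_zero = [0.0] * n
for bits, count in counts.items():
    for i, bit in enumerate(bits[::-1]):  # reverse so index i matches out[i]
        if bit == "0":
            p_zero[i] += count / shots

print(p_zero)  # the largest value marks the closest "train" qubit
```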

Future Work

There really isn’t any advantage to using this quantum classifier over classical methods. Any “quantum advantage” would probably come from training the model. Maybe that could change for a really large circuit, but NISQ-era quantum processors can’t handle large circuits.

I just wanted to take the time to simplify this recently-published paper. If your machine learning model is already trained, and your classifications are already mapped to qubits, the SWAP Test is really all you need.

That stated, I’ll work on an article or two about training models and mapping data to qubits. It won’t be Quantum Keras or anything like that, but I’ll show a very simple classification task with meaningful data.
