A Similarity Between Quantum Computing and Classical Computing
Anyone exploring quantum computing should know that the same tradeoff between portability and performance exists there as in classical computing. This is increasingly important as the industry trends toward portability across vendors’ platforms. In fact, one of my earliest goals in quantum computing was to run one circuit on multiple platforms and compare their performance.
This reminds me of when I first learned the Java programming language. There was a lot of buzz about “write once, run anywhere.” And yet, not everyone transitioned to it.
Today, Python is the world’s most popular development language. Like Java, it’s everywhere. However, if you look at Python.org, you’ll find that its performance-critical features are written in C. Along those lines, I recently saw a tweet about how Tesla initially writes in Python, then translates that code into C++.
That’s the classical tradeoff, in a nutshell. Higher-level languages like Java and Python give you portability, but lower-level languages like C and C++ allow better performance.
The analogy with quantum computing stretches a bit here, but anyone who has been involved for a while knows that you can’t create one circuit and run it optimally everywhere. Yes, it might still work. But, you learn to look at how the qubits are physically arranged. It makes a difference.
To keep this article non-technical: if you want non-neighboring qubits to interact, there is an inefficient workaround that allows it. But you’re better off designing your circuit around which qubits neighbor which, eliminating that workaround entirely. As with any computing operation, you want to minimize the total number of steps involved.
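For the curious, that inefficient workaround is the insertion of SWAP operations that shuffle qubit states across the chip until the two you need are adjacent. Here is a rough, illustrative sketch in plain Python of the cost involved (the function name and the simple distance-minus-one cost model are mine, not how a real transpiler routes): the farther apart two qubits sit on the device’s coupling map, the more SWAPs the compiler must insert before they can interact.

```python
from collections import deque

def swap_overhead(coupling, a, b):
    """Estimate the SWAPs needed before qubits a and b can interact
    directly: shortest-path distance on the coupling map, minus one."""
    # Build an undirected adjacency list from the coupling map.
    adj = {}
    for u, v in coupling:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    # Breadth-first search from a toward b.
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == b:
            return max(dist - 1, 0)
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # no route between the qubits

# A hypothetical 5-qubit line: 0-1-2-3-4.
line = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(swap_overhead(line, 0, 1))  # neighbors: 0 SWAPs
print(swap_overhead(line, 0, 4))  # opposite ends: 3 SWAPs
```

Each SWAP is itself built from multiple two-qubit gates on real hardware, so designing around adjacency pays off quickly.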
And that’s within one vendor, let alone across multiple vendors. If I create a 5-qubit circuit, I can run it on every IBMQ device except one, but most of those devices arrange their qubits differently. I cannot write code in OpenQASM or Qiskit that is optimized for every device. For performance, I have to look at the hardware and target a specific qubit arrangement.
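To make that concrete, here is a sketch comparing the same set of two-qubit interactions on two hypothetical 5-qubit layouts, a line and a star (these layouts and the crude distance-minus-one SWAP estimate are my own illustrations, not actual IBMQ topologies or transpiler behavior). The same circuit incurs a different routing cost on each:

```python
from collections import deque

def bfs_distance(coupling, a, b):
    """Shortest-path length between qubits a and b on a coupling map."""
    adj = {}
    for u, v in coupling:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == b:
            return dist
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    raise ValueError("qubits are not connected")

def total_swaps(coupling, gates):
    """Rough SWAP estimate for a list of two-qubit gates."""
    return sum(bfs_distance(coupling, a, b) - 1 for a, b in gates)

line = [(0, 1), (1, 2), (2, 3), (3, 4)]  # 0-1-2-3-4
star = [(0, 2), (1, 2), (3, 2), (4, 2)]  # qubit 2 in the center

gates = [(0, 1), (0, 4), (1, 3)]  # interactions in some circuit
print(total_swaps(line, gates))  # 4
print(total_swaps(star, gates))  # 3
```

Swap the gate list and the ranking can flip, which is exactly why one circuit cannot be optimal for every device.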
Now imagine adding another layer of abstraction with a cross-vendor framework. You create one circuit that is not only unoptimized for any one device, but now unoptimized for any one vendor.
In theory, these frameworks could analyze your circuit and optimize it somewhat (maybe they do), but they’re not going to squeeze out every drop of performance. The far more mature classical computing field, despite having code generators, drag-and-drop interfaces, and so forth, still has command lines and low-level languages for maximum control. Cases in point, once again: Python.org and Tesla.
I’m now looking in the opposite direction. Instead of adding abstraction with frameworks, I’m interested in OpenPulse and gaining more control over IBMQ hardware. I don’t yet know what I can and cannot do with it, but I know that I need to minimize circuit depth more than other devices currently allow. Instead of writing highly portable code, I will be writing code that currently works on only one device. I liken it to learning assembly language to gain maximum control over a computer, despite the irony that the pulse controls live in Qiskit, a Python library.
In fact, a quick scan of the documentation shows that you can definitely do things with pulse control that you can’t otherwise do. If you don’t use pulse control, simply put, you’re not taking full advantage of the hardware.
Therefore, just like with classical computing, you can add abstraction and achieve maximum portability, or you can remove abstraction and gain maximum control.