Code repository for quantum circuit discovery for fault-tolerant logical state preparation with reinforcement learning.
This library can be used to train an RL agent for three different tasks:
- Logical State Preparation: Prepare a logical state of a given stabilizer QEC code.
- Verification Circuit Synthesis: Synthesize a verification circuit for a given logical state preparation circuit, based on flag-qubit protocols [1], to make the state preparation fault-tolerant.
- Integrated Fault-Tolerant Logical State Preparation: Integrate the two tasks above to prepare a logical state fault-tolerantly.
For all the tasks, the user can specify the Clifford gate set and qubit connectivity.
Reinforcement learning with a non-cumulative reward, based on [2], is also supported by setting `use_max_reward = True` in the environments.
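As a minimal sketch of how this flag might be used (hypothetical: it assumes the task classes forward the keyword to their underlying environments, which should be checked against the environment code):

```python
from rlftqc.logical_state_preparation import LogicalStatePreparation

target = ["+ZZZZZZZ", "+ZIZIZIZ", "+XIXIXIX", "+IZZIIZZ", "+IXXIIXX", "+IIIZZZZ", "+IIIXXXX"]

## Hypothetical: assumes use_max_reward is accepted here and forwarded to the environment.
lsp = LogicalStatePreparation(target, use_max_reward=True)
```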
- Clone the repository:
```
git clone https://github.com/remmyzen/rlftqc.git
cd rlftqc
```
- Install the requirements:
```
pip install -r requirements.txt
```
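To quickly check that the dependencies resolve, you can try importing the package from the repository root (the examples below assume you run Python from there):

```
python -c "import rlftqc"
```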
For the logical state preparation task, you only need to specify the target stabilizers of your logical state. For example, the code below will train an RL agent to prepare the $|0\rangle_L$ of the 7-qubit Steane code:
```python
from rlftqc.logical_state_preparation import LogicalStatePreparation

target = ["+ZZZZZZZ", "+ZIZIZIZ", "+XIXIXIX", "+IZZIIZZ", "+IXXIIXX", "+IIIZZZZ", "+IIIXXXX"]

lsp = LogicalStatePreparation(target)
lsp.train()  ## Train the agent
lsp.run()    ## Run the agent to get the circuit
```
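The target above lists the stabilizer generators of the Steane-code $|0\rangle_L$. As a quick sanity check that is independent of rlftqc, you can verify with stim that the targets mutually commute, as any set of stabilizer generators must:

```python
import stim

target = ["+ZZZZZZZ", "+ZIZIZIZ", "+XIXIXIX", "+IZZIIZZ", "+IXXIIXX", "+IIIZZZZ", "+IIIXXXX"]
paulis = [stim.PauliString(t) for t in target]

## Every pair of stabilizer generators must commute.
assert all(p.commutes(q) for p in paulis for q in paulis)
print("All target stabilizers commute.")
```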
Refer to the notebook `notebooks/01 - Logical State Preparation.ipynb` for more advanced examples (e.g., changing the gate set and qubit connectivity).
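Purely as an illustration of the idea (the `gates` and `graph` parameter names below are assumptions, not the confirmed API; see the notebook for the actual usage), restricting the agent to a smaller gate set and a linear connectivity might look like:

```python
## Hypothetical sketch only: parameter names are assumed, check the notebook for the real API.
lsp = LogicalStatePreparation(
    target,
    gates=["h", "cx"],                     ## assumed: restrict the Clifford gate set
    graph=[(i, i + 1) for i in range(6)],  ## assumed: linear connectivity on 7 qubits
)
```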
For the verification circuit synthesis task, you only need to specify the encoding circuit as a `stim.Circuit` or `qiskit.QuantumCircuit` instance.
For example, the code below will train an RL agent to synthesize a verification circuit that fault-tolerantly prepares the $|0\rangle_L$ of the 7-qubit Steane code:
```python
import stim
from rlftqc.verification_circuit_synthesis import VerificationCircuitSynthesis

## Encoding circuit for the $|0\rangle_L$ of the 7-qubit Steane code.
circ = stim.Circuit("""
    H 0 1 3
    CX 0 6 1 5 0 4 3 4 3 5 5 6 0 2 1 2
""")

## We can ignore Z errors since we are preparing the zero-logical of the Steane code.
vcs = VerificationCircuitSynthesis(circ, ignore_z_errors=True)
vcs.train()  ## Train the agent
vcs.run()    ## Run the agent to get the circuit
```
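You can also check with stim, independently of rlftqc, which state the encoding circuit prepares: simulate it and print the stabilizer generators of the resulting state (stim returns them in canonical form, so they generate the same group as the target stabilizers above even if the strings differ):

```python
import stim

circ = stim.Circuit("""
    H 0 1 3
    CX 0 6 1 5 0 4 3 4 3 5 5 6 0 2 1 2
""")

## Simulate the circuit and print the stabilizer generators of the prepared state.
sim = stim.TableauSimulator()
sim.do(circ)
for stabilizer in sim.canonical_stabilizers():
    print(stabilizer)
```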
Refer to the notebook `notebooks/02 - Verification Circuit Synthesis.ipynb` for more advanced examples.
For the integrated fault-tolerant logical state preparation task, you only need to specify the target stabilizers of your logical state. For example, the code below will train an RL agent to fault-tolerantly prepare the $|0\rangle_L$ of the 7-qubit Steane code:
```python
from rlftqc.ft_logical_state_preparation import FTLogicalStatePreparation

target = ["+ZZZZZZZ", "+ZIZIZIZ", "+XIXIXIX", "+IZZIIZZ", "+IXXIIXX", "+IIIZZZZ", "+IIIXXXX"]

## We can ignore Z errors since we are preparing the zero-logical of the Steane code.
ftlsp = FTLogicalStatePreparation(target, ignore_z_errors=True)
ftlsp.train()  ## Train the agent
ftlsp.run()    ## Run the agent to get the circuit
```
Refer to the notebook `notebooks/03 - Integrated Fault-Tolerant Logical State Preparation.ipynb` for more advanced examples.
Go to this link to see circuit examples that the RL agent has synthesized for various tasks, in PNG, stim, and LaTeX formats.
The code in this repository is released under the MIT License.
```bibtex
@article{zen_quantum_2024,
  title   = {Quantum Circuit Discovery for Fault-Tolerant Logical State Preparation with Reinforcement Learning},
  author  = {Zen, Remmy and Olle, Jan and Colmenarez, Luis and Puviani, Matteo and M{\"u}ller, Markus and Marquardt, Florian},
  journal = {arXiv preprint arXiv:2402.17761},
  year    = {2024},
  url     = {http://arxiv.org/abs/2402.17761},
}
```
[1] Chamberland, Christopher, and Michael E. Beverland. "Flag fault-tolerant error correction with arbitrary distance codes." Quantum 2 (2018): 53.
[2] Nägele, Maximilian, Jan Olle, Thomas Fösel, Remmy Zen, and Florian Marquardt. "Tackling Decision Processes with Non-Cumulative Objectives using Reinforcement Learning." arXiv:2405.13609 (2024).