Title: Highly-parallelized simulation of a pixelated LArTPC on a GPU
Authors: Abud, A. Abed; Abi, B.; Acciarri, R.; Acero, M. A.; Adames, M. R.; Adamov, G.; Adamowski, M.
Type: Article
Date issued: 2023 (record deposited 2024-03-13)
ISSN: 1748-0221
Volume/Issue: 18/04, article P04034
DOI: 10.1088/1748-0221/18/04/P04034 (https://doi.org/10.1088/1748-0221/18/04/P04034)
Handle: https://hdl.handle.net/20.500.12662/4469
Scopus ID: 2-s2.0-85160013107 (quartile Q2)
WOS ID: WOS:000986658100009 (quartile N/A)
Language: English
Access: info:eu-repo/semantics/openAccess
Keywords: Detector modelling and simulations II (electric fields, charge transport, multiplication and induction, pulse formation, electron emission, etc.); Simulation methods and programs; Noble liquid detectors (scintillation, ionization, double-phase); Time Projection Chambers (TPC)

Abstract: The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version: the simulation of the current induced on 10³ pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
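The stack the abstract describes (Python translated into CUDA kernels by Numba's just-in-time compiler) follows a simple pattern. Below is a minimal sketch of that pattern, not the paper's actual code: the kernel, array names (induced_current, segment_charge, pixel_response), and the one-weight-per-pixel response model are illustrative assumptions.

    import numpy as np
    from numba import cuda

    @cuda.jit
    def induced_current(segment_charge, pixel_response, pixel_current):
        # One GPU thread per (track segment, pixel) pair.
        iseg, ipix = cuda.grid(2)
        if iseg < segment_charge.size and ipix < pixel_response.size:
            # Atomic add: many segments contribute current to the same pixel.
            cuda.atomic.add(pixel_current, ipix,
                            segment_charge[iseg] * pixel_response[ipix])

    # Hypothetical inputs: 10^4 drifting charge segments read out by 10^3 pixels.
    segment_charge = np.random.rand(10_000).astype(np.float32)
    pixel_response = np.random.rand(1_000).astype(np.float32)
    pixel_current = np.zeros_like(pixel_response)

    # 2D launch grid covering every (segment, pixel) combination; Numba
    # transfers the NumPy arrays to the device and copies results back.
    threads = (16, 16)
    blocks = ((segment_charge.size + threads[0] - 1) // threads[0],
              (pixel_response.size + threads[1] - 1) // threads[1])
    induced_current[blocks, threads](segment_charge, pixel_response, pixel_current)

Launching one thread per (segment, pixel) pair is what exposes the parallelism the abstract quantifies: with every pixel channel processed concurrently rather than in a serial loop, the per-event cost can drop from the ~10 s quoted for the CPU to the ~1 ms quoted for the GPU.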