Description
Hyper-Kamiokande (Hyper-K) is a next-generation water-Cherenkov neutrino detector located in Kamioka, Japan. With approximately eight times the fiducial volume of its predecessor, Super-Kamiokande (Super-K), it promises enormous physics potential. The inner detector (ID) region is planned to house around 20,000 20-inch PMTs, supplemented by multi-PMT modules (mPMTs) [1], to provide the photo-coverage needed for signal detection, while the outer detector (OD) region is set to hold up to 10,000 3-inch PMTs.
This machine learning (ML) study utilises PointNet [2], a deep neural network that operates directly on point clouds. In contrast to conventional convolutional neural network (CNN) models such as ResNet [3], which require the detector data to be flattened into 2D images, PointNet consumes 3D point clouds directly [2]. Applied to Hyper-K simulated data, it therefore preserves the relative 3D positions of the hit PMTs, avoiding the traditional unrolling of the Hyper-K cylindrical tank needed for a 2D CNN. Furthermore, PointNet-based particle identification and reconstruction aims to replace the traditional likelihood-based method, fiTQun, which can take considerably longer to process the data than ML methods.
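To illustrate the idea, the sketch below (not the collaboration's actual code) shows how hit-PMT data could be passed to a PointNet-style classifier in PyTorch: each hit is a point with assumed features (x, y, z, charge, time), a shared per-point MLP is applied to every hit, and a symmetric max-pooling step makes the result independent of hit ordering. The feature set, layer sizes, and the four illustrative class labels (e/mu/pi0/gamma) are assumptions, not the model used in this study.

    # Minimal PointNet-style sketch for hit-PMT point clouds (assumed features and sizes).
    import torch
    import torch.nn as nn

    class PointNetClassifier(nn.Module):
        def __init__(self, n_features=5, n_classes=4):
            super().__init__()
            # Shared per-point MLP implemented as 1x1 convolutions, so the same
            # weights are applied to every hit PMT independently.
            self.point_mlp = nn.Sequential(
                nn.Conv1d(n_features, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
                nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
                nn.Conv1d(128, 1024, 1), nn.BatchNorm1d(1024), nn.ReLU(),
            )
            # Classification head applied to the pooled global feature.
            self.head = nn.Sequential(
                nn.Linear(1024, 256), nn.ReLU(),
                nn.Linear(256, n_classes),
            )

        def forward(self, points):
            # points: (batch, n_features, n_hits); hit order does not matter.
            x = self.point_mlp(points)
            # Max-pooling over hits gives permutation invariance, so the relative
            # 3D PMT positions are used directly, with no 2D unrolling of the tank.
            x = torch.max(x, dim=2).values
            return self.head(x)

    # Toy usage: a batch of 8 events, each with 2000 hit PMTs.
    events = torch.randn(8, 5, 2000)
    logits = PointNetClassifier()(events)  # shape (8, 4)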
Preliminary results from this study show that PointNet performs on par with fiTQun for electron/muon classification and slightly better for electron/pi0 separation. However, both PointNet and fiTQun perform poorly on electron/gamma separation.
Further studies are currently underway to improve electron/gamma separation and the kinematic reconstruction of all particle types with PointNet. Additional improvements to the PointNet model are also needed to better incorporate mPMT signals into the particle classification results.
[1] Kavatsyuk O, Dorosti-Hasankiadeh Q, Löhner H, KM3NeT Consortium. Multi-PMT optical module for the KM3NeT neutrino telescope. Nuclear Instruments and Methods in Physics Research A. 2012;695: 338–341. https://doi.org/10.1016/j.nima.2011.09.062.
[2] Qi CR, Su H, Mo K, Guibas LJ. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. arXiv:1612.00593 [cs]. 2017; http://arxiv.org/abs/1612.00593
[3] He K, Zhang X, Ren S, Sun J. Deep Residual Learning for Image Recognition. arXiv:1512.03385 [cs]. 2015; http://arxiv.org/abs/1512.03385