TY - JOUR
T1 - Classifying handedness in chiral nanomaterials using label error robust deep learning
AU - Groschner, C. K.
AU - Pattison, Alexander J.
AU - Ben-Moshe, Assaf
AU - Alivisatos, A. Paul
AU - Theis, Wolfgang
AU - Scott, M. C.
N1 - Publisher Copyright:
© 2022, The Author(s).
PY - 2022/12
Y1 - 2022/12
N2 - High-throughput scanning electron microscopy (SEM) coupled with classification using neural networks is an ideal method to determine the morphological handedness of large populations of chiral nanoparticles. Automated labeling removes the time-consuming manual labeling of training data, but introduces label error, and subsequently classification error in the trained neural network. Here, we evaluate methods to minimize classification error when training from automated labels of SEM datasets of chiral tellurium nanoparticles. Using the mirror relationship between images of oppositely handed particles, we artificially create populations of varying label error. We analyze the impact of label error rate and training method on the classification error of neural networks on an ideal dataset and on a practical dataset. Of the three training methods considered, we find that a pretraining approach yields the most accurate results across label error rates on ideal datasets, where size and other morphological variables are held constant, but that a co-teaching approach performs the best in practical application.
UR - http://www.scopus.com/inward/record.url?scp=85133904337&partnerID=8YFLogxK
U2 - 10.1038/s41524-022-00822-7
DO - 10.1038/s41524-022-00822-7
M3 - Article
AN - SCOPUS:85133904337
SN - 2057-3960
VL - 8
JO - npj Computational Materials
JF - npj Computational Materials
IS - 1
M1 - 149
ER -