2024 Innovations in Intelligent Systems and Applications Conference, ASYU 2024, Ankara, Türkiye, 16-18 October 2024
TextNetTopics is an innovative Latent Dirichlet Allocation-based topic selection method for training text classification models. One main limitation is its computationally intensive scoring mechanism, especially when applied to many topics. This scoring mechanism involves training a machine learning model (i.e., Random Forest) on each topic separately using the Monte Carlo Cross-Validation approach and assigning a score based on a specific performance metric (e.g., accuracy or F1-score). Moreover, the measured score does not account for the interactions between features residing in different topics. This paper presents a new topic-scoring mechanism called Topic Importance Scoring. This computationally efficient approach trains a single Random Forest model on all topics simultaneously and leverages the extracted feature importance values to assign each topic a score reflecting its classification potential. Experiments on three diverse datasets confirm that the proposed method's performance is superior to that of Topic Performance Scoring, the mechanism used in the original TextNetTopics method.
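The core of the described approach can be sketched as follows: fit one Random Forest on all features at once, then score each topic by aggregating the importance values of the features it contains. This is a minimal illustrative sketch, not the authors' implementation; the synthetic data, the topic-to-feature mapping, and the choice of summing importances per topic are assumptions made here for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical setup: 12 features partitioned into 3 LDA-derived "topics".
# In TextNetTopics, each topic would group word features produced by LDA.
topics = {
    "topic_0": [0, 1, 2, 3],
    "topic_1": [4, 5, 6, 7],
    "topic_2": [8, 9, 10, 11],
}

# Synthetic classification data standing in for a document-term matrix.
X, y = make_classification(n_samples=300, n_features=12,
                           n_informative=4, random_state=0)

# Topic Importance Scoring (as described): train ONE Random Forest on all
# topics simultaneously, instead of one model per topic.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
importances = model.feature_importances_

# Score each topic by the total importance of its features (assumed
# aggregation), then rank topics by their classification potential.
topic_scores = {name: importances[idx].sum() for name, idx in topics.items()}
ranked = sorted(topic_scores, key=topic_scores.get, reverse=True)
```

Because a single model is trained once rather than one model per topic under repeated cross-validation splits, the scoring cost no longer grows with the number of topics.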