Abstract: Tokenization is a critical preprocessing step for large language models, especially for morphologically rich, low-resource languages like Slovak, where standard corpus-based methods struggle ...
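To make the abstract's claim concrete, the sketch below is a minimal, self-contained byte-pair-encoding (BPE) trainer, not the paper's method: it illustrates how a purely corpus-driven subword tokenizer, trained on a small corpus that lacks rare inflected forms, can over-fragment a morphologically rich Slovak word. The toy corpus and the example word "najkrajšieho" are illustrative assumptions, not data from the paper.

```python
from collections import Counter

def train_bpe(words, num_merges):
    """Learn BPE merges from a frequency-weighted list of words."""
    # Start with each word as a sequence of characters.
    vocab = Counter(tuple(w) for w in words)
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs across the corpus.
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the chosen merge to every word in the vocabulary.
        new_vocab = Counter()
        for word, freq in vocab.items():
            merged, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    merged.append(word[i] + word[i + 1])
                    i += 2
                else:
                    merged.append(word[i])
                    i += 1
            new_vocab[tuple(merged)] += freq
        vocab = new_vocab
    return merges

def encode(word, merges):
    """Greedily apply learned merges, in training order, to a new word."""
    symbols = list(word)
    for a, b in merges:
        i = 0
        while i < len(symbols) - 1:
            if symbols[i] == a and symbols[i + 1] == b:
                symbols[i:i + 2] = [a + b]
            else:
                i += 1
    return symbols

if __name__ == "__main__":
    # Hypothetical toy corpus dominated by a few frequent base forms;
    # rare inflected variants are absent from the training data.
    corpus = ["krajina"] * 50 + ["krajší"] * 20 + ["pekný"] * 30
    merges = train_bpe(corpus, num_merges=30)
    # An unseen superlative form is split into many short, low-information pieces.
    print(encode("najkrajšieho", merges))
```

Running the sketch shows the unseen inflected form being broken into several small fragments, which is the kind of behavior the abstract attributes to standard corpus-based methods on low-resource, morphologically rich languages.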