Efficient Computation of Collatz Sequence Stopping Times: A Machine Learning Guided Algorithmic Approach

Date

2025-10

Publisher

Addis Ababa University

Abstract

The Collatz conjecture, first proposed in 1937, remains one of the most iconic unsolved problems in mathematics. It concerns sequences generated by repeatedly applying a simple iterative rule: halving even numbers and mapping odd numbers to 3n + 1. The conjecture asserts that every natural number, when subjected to this process, eventually reaches the number 1. Although deceptively straightforward, the problem has resisted proof for nearly nine decades despite extensive computational verification. This thesis introduces a novel machine-learning-guided, structure-aware algorithm for computing Collatz stopping times. Guided by statistical patterns and hierarchical regularities identified through regression and clustering experiments, the algorithm exploits the inverse Collatz tree to bypass redundant even paths and minimize repetitive computation. The resulting method achieves a consistent 28% reduction in iteration count relative to state-of-the-art algorithms, and an average execution-time improvement of about 57%. Building on this foundation, the algorithmic concept was further adapted into a novel Collatz-based regularization framework for deep learning. The approach introduces a bounded, deterministic penalty derived from the stopping time of the mean absolute model weights and applies it as a norm-like term in the loss function. When evaluated across image (CNN), tabular (FCNN), and time-series (RNN) benchmarks, the proposed regularizer delivered stable performance that was superior or comparable to the baselines across varying regularization strengths, maintaining convergence where conventional ℓ1, ℓ2, and Elastic Net penalties degraded significantly at higher strengths. Overall, the findings demonstrate that integrating machine learning insights into algorithmic design can yield substantial computational efficiency, and that deterministic number-theoretic dynamics can serve as a robust, mathematically interpretable regularization mechanism for modern neural networks.
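
For readers unfamiliar with the quantities involved, the sketch below illustrates in Python the stopping time as defined in the abstract (the number of halve/3n+1 steps needed to reach 1) and one possible reading of the weight-based penalty. The scaling constant, the rounding used to map the mean absolute weight to a positive integer, and the normalization that bounds the penalty are assumptions made for illustration only; this is not the ML-guided, structure-aware algorithm or the exact regularizer developed in the thesis.

```python
import numpy as np


def collatz_stopping_time(n: int, max_steps: int = 10_000) -> int:
    """Number of Collatz steps (n/2 if even, 3n+1 if odd) until n reaches 1."""
    steps = 0
    while n != 1 and steps < max_steps:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps


def collatz_penalty(weights: np.ndarray, scale: float = 1_000.0,
                    max_steps: int = 10_000) -> float:
    """Bounded penalty from the stopping time of the mean absolute weight.

    Mapping the real-valued mean |weight| to a positive integer via `scale`
    and rounding is a hypothetical choice made for this sketch.
    """
    mean_abs = float(np.mean(np.abs(weights)))
    n = max(1, int(round(mean_abs * scale)))
    # Dividing by max_steps keeps the penalty deterministic and bounded in [0, 1].
    return collatz_stopping_time(n, max_steps) / max_steps


# Usage sketch: combine with a task loss at regularization strength lam.
weights = np.random.randn(256) * 0.05
lam = 0.01
regularization_term = lam * collatz_penalty(weights)
```

The max_steps cap is included only to keep the sketch safe for arbitrary inputs; the conjecture itself asserts that the loop always terminates.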

Keywords

Collatz conjecture, stopping time, structure-aware algorithm, execution-time efficiency, scalability, ML-guided algorithm design, bounded regularization, deep learning.
