Paolo Baglioni — University of Parma

# Probing Kernel Renormalization in finite-width Bayesian Shallow Neural Networks

Despite the widespread popularity of deep neural networks across various fields, exact results regarding their generalization properties remain elusive. Recently, a significant advancement has been made in the proportional limit, where both the hidden layer sizes and the number of training examples are taken to infinity while their ratio is kept fixed. In this regime, the kernel that describes the architecture is identified as the (globally) renormalized infinite-width kernel. This result, derived within a Bayesian framework, relies on a heuristic Gaussian equivalence. For this reason, it is crucial to compare the predictions of the effective theory against the outcomes of training experiments. In this talk, I will present extensive numerical simulations of one-hidden-layer neural networks in the proportional regime, using both real and synthetic datasets. Our findings indicate that the derived effective theory is predictive for finite-width networks. The good agreement between experiments and theory suggests that kernel renormalization is a critical mechanism for feature learning in Bayesian deep networks.
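To make the idea of a globally renormalized infinite-width kernel concrete, here is a minimal sketch (not the speaker's code): Gaussian-process regression with the one-hidden-layer ReLU (arc-cosine) infinite-width kernel multiplied by a single scalar factor `Q`. In the effective theory this factor is determined self-consistently from the data; here it is left as a hypothetical free parameter, and the kernel choice, function names, and toy data are illustrative assumptions.

```python
# Minimal sketch: GP regression with a globally renormalized infinite-width kernel
# for a one-hidden-layer ReLU network. Q is a placeholder for the renormalization
# factor (Q = 1 recovers the standard infinite-width NNGP predictor).
import numpy as np

def relu_nngp_kernel(X1, X2, sigma_w=1.0, sigma_b=0.0):
    """Arc-cosine (ReLU) kernel of an infinitely wide one-hidden-layer network."""
    K11 = sigma_b**2 + sigma_w**2 * np.sum(X1 * X1, axis=1) / X1.shape[1]
    K22 = sigma_b**2 + sigma_w**2 * np.sum(X2 * X2, axis=1) / X2.shape[1]
    K12 = sigma_b**2 + sigma_w**2 * (X1 @ X2.T) / X1.shape[1]
    norm = np.sqrt(np.outer(K11, K22))
    cos_t = np.clip(K12 / norm, -1.0, 1.0)          # guard against rounding errors
    theta = np.arccos(cos_t)
    return norm * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)

def renormalized_gp_predict(X_train, y_train, X_test, Q=1.0, noise=0.1):
    """Posterior mean of GP regression with the rescaled kernel Q * K_infinity."""
    K_tt = Q * relu_nngp_kernel(X_train, X_train)
    K_st = Q * relu_nngp_kernel(X_test, X_train)
    alpha = np.linalg.solve(K_tt + noise * np.eye(len(y_train)), y_train)
    return K_st @ alpha

# Toy usage: P training examples in D input dimensions; in the proportional
# regime P and the hidden-layer width are large with their ratio held fixed.
rng = np.random.default_rng(0)
P, D = 200, 10
X_train = rng.standard_normal((P, D))
y_train = np.sign(X_train[:, 0])
X_test = rng.standard_normal((50, D))
y_pred = renormalized_gp_predict(X_train, y_train, X_test, Q=0.8)
```

The point of the sketch is only that the finite-width correction enters as a data-dependent rescaling of the infinite-width kernel, which is the mechanism the talk's experiments probe.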