Hanieh Alvankar Golpayegan - University of Naples Federico II
The Wilson-Cowan model as an associative memory
In our project, we explored how the Wilson-Cowan model operates as an associative memory by analyzing a network consisting of N distinct neuronal populations. Each population plays a role analogous to an Ising spin in the Hopfield model. Our investigation focused on the shift from a functional working memory to a chaotic regime.
|
Franco Bagnoli - University of Florence
Synchronization of chains of logistic maps
Our study investigates the synchronisation dynamics of a chain of coupled chaotic maps arranged in a parent-children configuration, with each parent node connected to two child nodes, one of which is also the parent of the next node. We analyse two distinct phenomena: parent-child synchronisation, characterised by the vanishing distance between consecutive nodes, and sibling synchronisation, in which the two children coalesce. Our investigation reveals strong differences in the synchronisation mechanisms between these two phenomena, which can be directly linked to the probability distribution of the parent. Theoretical analyses and simulations using the logistic map support our findings. Furthermore, we explore the propagation of perturbations in a synchronised chain by studying the rate of reabsorption of the perturbation.
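A minimal sketch of such a parent-children chain, assuming for illustration a unidirectional diffusive coupling of strength eps and the fully chaotic logistic map (not necessarily the authors' exact setup); it monitors the two distances that define parent-child and sibling synchronisation:

```python
# Illustrative chain of logistic maps in a parent-children configuration.
# Coupling form and parameter values are assumptions made for this example.
import numpy as np

def f(x):
    return 4.0 * x * (1.0 - x)   # fully chaotic logistic map

def step(x, y, eps):
    """One update: x[i] is driven by its parent x[i-1]; y[i] is the sibling
    of x[i], driven by the same parent."""
    x_new = np.empty_like(x)
    y_new = np.empty_like(y)
    x_new[0] = f(x[0])                                   # free root node
    y_new[0] = f(y[0])
    x_new[1:] = (1 - eps) * f(x[1:]) + eps * f(x[:-1])
    y_new[1:] = (1 - eps) * f(y[1:]) + eps * f(x[:-1])
    return x_new, y_new

rng = np.random.default_rng(0)
N, eps, T = 50, 0.35, 5000
x, y = rng.random(N), rng.random(N)
for _ in range(T):
    x, y = step(x, y, eps)

parent_child = np.abs(x[1:] - x[:-1]).mean()   # consecutive-node distance
siblings = np.abs(x[1:] - y[1:]).mean()        # distance between the two children
print(f"parent-child distance {parent_child:.3e}, sibling distance {siblings:.3e}")
```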
|
Gabriele Bandini - SISSA
XY model with vision cone: nonreciprocal vs reciprocal interactions on the lattice
In this work we investigate the possibilities and the shortcomings that arise when implementing nonreciprocal and reciprocal interactions in a lattice model, in particular a classical XY model with a vision cone. Reference [1] showed that adding a vision cone in a nonreciprocal fashion, i.e. featuring a selfish energy (and not a total energy functional) with non-symmetric couplings, leads to a non-equilibrium state with a long-range order (LRO) phase at low temperatures. Here we consider two reciprocal variations of such a model: (i) the first one is characterized by a total energy functional obtained as the sum of the selfish energies, and whose equilibrium state still has long-range order, thus showing that such a feature is to be ascribed to the vision-cone nature of the interactions and not to their nonreciprocity; (ii) the second one considers a particular energy functional with symmetric couplings, for which we introduce the concept of redundant bonds, and which leads to a stable quasi-long-range order (QLRO) phase and to an order-by-disorder transition.
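For illustration only, a minimal Monte Carlo sketch of a "selfish energy" vision-cone update on the square lattice; the acceptance rule, the cone half-width and all parameter values are assumptions made for this example and are not taken from Ref. [1]:

```python
# Sketch of a nonreciprocal vision-cone XY update: each spin only "sees" the
# neighbours lying inside a cone around its own orientation. Hypothetical parameters.
import numpy as np

L, J, T, half_cone = 32, 1.0, 0.3, np.pi / 2
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, size=(L, L))
# lattice directions towards the four neighbours (right, left, up, down)
bond_angles = np.array([0.0, np.pi, np.pi / 2, 3 * np.pi / 2])
shifts = [(0, 1), (0, -1), (-1, 0), (1, 0)]

def selfish_energy(th, i, j, angle):
    """Energy 'seen' by spin (i,j) with orientation `angle`: only neighbours
    inside its vision cone contribute (nonreciprocal by construction)."""
    e = 0.0
    for (di, dj), phi in zip(shifts, bond_angles):
        gap = np.angle(np.exp(1j * (phi - angle)))      # signed angle difference
        if abs(gap) <= half_cone:
            e -= J * np.cos(angle - th[(i + di) % L, (j + dj) % L])
    return e

for _ in range(10000):                                   # short run, Metropolis-like
    i, j = rng.integers(L, size=2)
    new = theta[i, j] + rng.normal(scale=0.5)
    dE = selfish_energy(theta, i, j, new) - selfish_energy(theta, i, j, theta[i, j])
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        theta[i, j] = new

m = np.abs(np.exp(1j * theta).mean())                    # magnetisation modulus
print(f"|m| = {m:.3f}")
```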
|
Alberto Bassanoni - University of Parma
Rare Events and Single Big Jump Effects in Ornstein-Uhlenbeck Processes
We study the full probability distribution of the time-integrated n-th power of the velocity of a particle, with n greater than two, where the velocity follows Ornstein-Uhlenbeck dynamics. Using the standard tools of renewal theory, we treat the problem as a decoupled continuous-time random walk. The probability distribution of our initial observable is related to the problem of the area under Ornstein-Uhlenbeck excursions, which we treat with the usual methods of first-passage problems. Using a large-deviation formalism, we derive the full probability distribution of the first-passage area under the excursions, and we find that it has two different phases, depending directly on the parameters of the initial process: a diffusive Brownian phase in the strong-noise limit and an anomalous sub-exponential phase in the weak-noise limit, with a dynamical phase transition at the deterministic limit. Once we obtain the waiting-time and area statistics, we are able to reconstruct the desired probability distribution through the continuous-time random walk mapping. After calculating its typical Gaussian fluctuations, the main focus of our work is on studying its rare-event distribution using the single big jump principle. In the study of the tail process we discover that the single big jump formalism returns the same results as other works obtained with the optimal functional method, a path-integral approach that identifies the source of the anomalous sub-exponential scaling with the presence of an instantonic solution in the weak-noise regime. This formal analogy will be discussed and tested with extensive numerical simulations, showing that the single big jump principle is more general; this opens the way to further extensions of our theory to the study of rare events of a broader class of Langevin processes.
|
Lindsay Bassman Oftelie - CNR
Dynamic Cooling of Qubits on Contemporary Quantum Computers
Quantum computers require qubits to be initialized in a pure (i.e., cold) state for successful computation. Dynamic cooling offers a route to effectively lower qubit temperatures beyond what is possible with direct, physical cooling techniques. It works by cooling a subset of qubits, at the expense of heating others, by applying certain logic gates to the entire system. While it was initially dismissed as impractical for the high-temperature NMR-based quantum computers available at the time of its inception, we show how dynamic cooling is substantially more effective and efficient on the low-temperature quantum computers available today. In this talk, we will examine how optimal dynamic cooling scales with total system size, in terms of the minimal achievable final temperature, the work cost, and the complexity of the associated quantum circuits. We will observe the effect of hardware noise on cooling and share results of a successful demonstration of dynamic cooling with a 3-qubit system on a real quantum processor. Finally, we will propose a sub-optimal dynamic cooling scheme with fixed (low) complexity to improve the feasibility of implementation on noisy quantum hardware.
|
Simone Benella - INAF - IAPS
Application of stochastic thermodynamics to space plasma physics: new insights on turbulence
Turbulence is ubiquitous in space plasmas and arises from nonlinear dynamics as emergent collective behavior, from the largest scales of the energy injection to the smallest scales where dissipation occurs. The turbulent dynamics of velocity and magnetic field fluctuations in nearly collisionless plasmas, such as the solar wind, can be envisioned as a scale-to-scale Langevin process. This allows us to embed the statistics of magnetic field fluctuations in the framework of stochastic process theory, and then to resort to fundamental concepts of the recent theory of stochastic thermodynamics. Magnetic field increments as a function of the scale define the cascade trajectories (viz., the stochastic process) over which we have calculated the stochastic entropy variation. The total stochastic entropy produced along a trajectory can be expressed as the logarithm of the ratio between the path probability of the forward trajectory and the path probability of its reversal. Thus, the production of entropy expresses, on average, the imbalance of forward with respect to backward processes, which, in the case of turbulence, are proxies for direct and inverse cascades. By using the stochastic entropy we are able to identify two different regimes where fluctuations exhibit contrasting statistical properties. In the inertial range a net production of entropy is linked to an increase of the flatness, thus indicating the occurrence of intermittency in the sample of fluctuations. On the other hand, cascade trajectories associated with a decrease of entropy correspond to global scale invariance. In the transition region between inertial and ion scales the scenario reverses: trajectories characterized by ΔS<0 exhibit a sudden increase of the flatness due to small-scale intermittency, whereas trajectories with ΔS>0 show a constant flatness. These results suggest how the broad framework of stochastic thermodynamics can provide new insight into the field of space plasma turbulence, allowing us to perform a precise classification of cascading trajectories with opposite behavior based on their stochastic entropy production/consumption only.
|
Indaco Biazzo - EPFL
Boltzmann Autoregressive Neural Networks
Generative Autoregressive Neural Networks (ARNNs) excel in generative tasks across various domains, including images, language, and science. In physics in particular, they have been successfully applied to generate samples from statistical physics models. Despite their success, ARNN architectures often operate as black boxes without a clear connection to underlying physics or statistical models. This seminar explores the direct link between neural network architectures and physics models. I'll show how the neural network parameters align with Hamiltonian couplings and external fields, highlighting the emergence of residual connections and recurrent architectures from the derivation. By leveraging statistical physics techniques, we formulate ARNNs for specific systems, and I'll discuss a new approach for sampling from sparse interacting systems, crucial for physics, optimization, and inference problems. Our findings validate a physically informed approach and suggest potential extensions to multivalued variables, paving the way for broader applications in scientific research.
|
Lorenzo Buffa - Università di Roma Tor Vergata, CREF
From dense to sparse - Optimal Transport in a random graph framework
Optimal transport (OT) is a mathematical framework that deals with the efficient transportation of mass or resources from one configuration to another while minimizing an associated cost. We propose a null model for bipartite networks that can measure the distance of a real system from its optimal configuration. This distance is measured by a $\beta$ parameter that controls the relative importance of the cost function. Notably, the $\beta\to 0$ limit of this model can be mapped onto the Bipartite Weighted Configuration Model (BiWCM), the weighted version of the Bipartite Configuration Model (BiCM). These are Maximum Entropy null models, widely used in Network Theory for the reconstruction and validation of networks. At the other extreme, the OT solution is recovered in the limit $\beta\to\infty$. We study this change as a phase transition, in which the order parameter is linked to the connectivity of the graph. The two known limits are the energy-driven state (OT solution) and the entropy-driven state (BiWCM) of the system. Finally, we propose a full study of the critical properties of this model, highlighting the link between the Optimal Transport problem and graph theory.
|
Francesco Camilli - International Centre for Theoretical Physics (ICTP)
Fundamental limits in structured principal component analysis and how to reach them
Principal Component Analysis is a powerful tool for dimension reduction and clustering of high-dimensional data. It has been widely studied in various theoretical settings to assess its performance in retrieving low-rank structures inside high-rank data matrices. One of the most relevant settings from an information-theoretical perspective is a teacher-student scenario, where a teacher plants a low-rank structure, called a spike, inside a noise matrix, and the student is tasked with reconstructing it to the best of their ability. It turns out that the student's best possible performance is in direct correspondence with information-theoretical quantities such as the mutual information (MI) between the data and the spike. Prior to our contribution [1], the MI was computed only under the hypothesis of i.i.d., and thus structureless, Gaussian noise. With a novel technique, inspired by the theory of spin glasses with rotationally invariant couplings, we extended the types of noise allowed to a class of random matrix ensembles of trace type, with low-degree polynomial matrix potential. The predicted student's performance is shown to be in perfect agreement with an algorithm, which we named Bayes-optimal Approximate Message Passing, whose iterates are rigorously characterized step by step. Despite the seemingly strong assumption of rotation-invariant noise, our theory empirically predicts algorithmic performance on real data, pointing at strong universality properties. More recently [2], we extended our analysis to any kind of trace ensemble with an arbitrary matrix potential, and identified an algorithm whose performance is predicted by the theory. Rigorously tracking the iterates of the latter remains an open problem. [1] J. Barbier, F. Camilli, M. Mondelli, M. Sáenz, "Fundamental limits in structured principal component analysis and how to reach them", Proceedings of the National Academy of Sciences 120 (30) e2302028120 (2023). [2] J. Barbier, F. Camilli, M. Mondelli, Y. Xu, in preparation.
|
Michele Campisi - Istituto Nanoscienze CNR
False Onsager Relations
Recent research suggests that when a system has a "false time-reversal violation" the Onsager reciprocity relations hold despite the presence of a magnetic field. The purpose of this work is to clarify that the Onsager relations may well be violated in the presence of a "false time-reversal violation": that rather guarantees the validity of distinct relations, which we dub "false Onsager relations". We also point out that for quantum systems "false time-reversal violation" is omnipresent and comment that, per se, this has in general no consequence in regard to the validity of Onsager relations, or the more general non-equilibrium fluctuation relations, in the presence of a magnetic field. Our arguments are illustrated with the Heisenberg model of a magnet in an external magnetic field. M. Campisi, EPL 142 30002 (2023) https://doi.org/10.1209/0295-5075/acd023
|
Giuseppe Consolini - Istituto Nazionale di Astrofisica
Crackling Noise and Magnetospheric Substorm Dynamics
G. Consolini (1) and P. De Michelis (2) 1) INAF-Istituto di Astrofisica e Planetologia Spaziali, Roma, Italy 2) Istituto Nazionale di Geofisica e Vulcanologia, Roma, Italy The dynamics of Earth's magnetosphere in response to changes in the solar wind and interplanetary conditions are very complex, often showing scale-invariant energy relaxation events during the occurrence of magnetospheric substorms. In this presentation, we examine auroral electrojet AL-index bursts during substorms and show how the enhancement of ionospheric electrojet currents shares similarities with crackling noise observed in front propagation models. Our findings make a significant contribution to understanding the small-scale dynamics of the magnetospheric plasma sheet and the mechanisms driving magnetospheric substorms.
|
Jacopo D'Alberto - Università degli Studi di Milano
Simulating the Dynamics of Open Quantum Systems with Quantum Monte Carlo Methods
The interest in quantum technologies has grown dramatically in recent years. In particular, the attempt to achieve the supremacy of quantum simulators over their classical counterparts is one of the most challenging tasks nowadays. In this context, properly understanding the features of open quantum systems could be the key to the development of increasingly large and powerful quantum computers. Several approximate approaches have been developed in recent years to simulate the dynamics of open quantum systems [1]; however, the results obtained so far are generally limited to small systems. Our novel approach is based on the time-dependent Variational Monte Carlo method, used in [2], but it exploits the so-called unravelling of the master equations to obtain a set of quantum trajectories evolving with Stochastic Schrödinger Equations (SSE). By finding the solution for several independent trajectories, we are then able to reconstruct time-dependent observables equivalent to those coming from the master equations. The application of this method to dissipative Ising models is then shown, employing various variational ansätze ranging from the basic Jastrow wavefunction to more sophisticated ones, such as the renowned Neural Network Quantum States. [1] H. Weimer, A. Kshetrimayum and R. Orús, Rev. Mod. Phys. 93 (2021) [2] M. J. Hartmann and G. Carleo, Phys. Rev. Lett. 122 (2019)
|
Luca Maria Del Bono - Sapienza Università di Roma
The de Almeida-Thouless line of sparse isotropic Heisenberg spin glasses
Results regarding spin glass models are, to this day, mainly confined to models of Ising spins. Models of continuous spins, which exhibit interesting new physics connected to the additional degrees of freedom, have primarily been studied on fully connected topologies. Only recently some advancements have been made in the study of continuous models on sparse graphs. We partially fill this void by introducing a method to solve numerically the Belief Propagation equations for systems of Heisenberg spins on sparse random graphs via a discretization of the sphere. We introduce techniques to study the finite-temperature, finite-connectivity case as well as novel algorithms to deal with the zero-temperature and large connectivity limits. As an application, we locate the de Almeida-Thouless line for the model and the critical field at zero temperature and show the full consistency of the methods presented. Aside from these results, the approaches presented can be applied in the much broader context of studying Heisenberg spin glasses. They can therefore be used as a stepping stone to study further the behavior of these systems, thus helping to gain a better understanding of continuous models of spin glasses.
|
Najmeh Eshaqi Sani - University of Parma
Time reversibility of adiabatic processes
We study the effect of time reversibility on the adiabatic preparation of populations in few-level quantum systems. The invariance under time-reversal symmetry, implying reversibility in adiabatic processes, is investigated by considering various sweep functions and system configurations. The Berry curvature and the transitions are tracked as functions of time. The results will be relevant for the optimization and acceleration of experimental protocols for efficient population transfer.
|
Federico Ettori - Politecnico di Milano
The effect of anisotropy and quenched randomness on dynamic phase transition for the two-dimensional Ising model
We investigate the dynamic properties of the two-dimensional Ising model with anisotropy or quenched defects. Perturbations in system homogeneity significantly impact the dynamical properties, generally favouring the dynamically disordered phase. We analyse the two models separately. For the anisotropic case, the dynamic critical temperature exhibits an anisotropy dependence similar to that of the static critical temperature (associated with the ferromagnetic-paramagnetic phase transition), and it goes to zero in the fully anisotropic case. For the model with quenched defects, the dynamic critical temperature decreases linearly with increasing defect fraction. We also find a good correlation between some suitably defined geometric metrics related to the quenched defects and the dynamic properties of the system, such as the dynamic susceptibility and the critical temperature. These geometric metrics prove instrumental in understanding and forecasting the dynamic behaviour of these complex systems.
|
Diego Febbe - Università di Firenze, Dipartimento di Fisica
Continuous Rate Neural Network: A biomimetic machine learning model for classification
Neural processes occurring in the brain are a source of significant inspiration for the foundation of machine learning technologies. In particular, Deep Neural Networks are composed of a collection of simplified neurons, the nodes of the computing device, organized in successive layers and mutually linked via artificial synapses. Despite these formal analogies, Deep Neural Networks work as static units, at variance with living brains, which operate in a highly dynamical framework. To take one step forward in the direction of reconciling artificial computing networks and actual brain modeling, we set out to train a biologically inspired continuous rate model for simple classification tasks. The dynamics is thus steered towards different attractors, depending on the category the supplied input belongs to and following a dedicated learning stage. Our algorithm demonstrates high-accuracy performance in the classification of synthetically generated images. This conclusion was reached by testing the performance of the algorithm at varying levels of noise superposed to the images to be classified, in both the training and test phases, and, furthermore, with noise added as a trainable parameter in the dynamical system. This opens up the perspective of leveraging the inherent endogenous stochasticity – yet another biomimetic concept to be included in the formulation – to enhance the performance of the trained model under the scrutinized dynamical angle. Summing up, the objective of this study is to take a further step towards constructing a stronger conceptual bridge between the exploitation of computational neural networks and their biological foundations.
|
Andrea Gabrielli - Università di Roma Tre e Centro Ricerche Enrico Fermi
Networks with many scales of length and the Laplacian Renormalization Group
Scale invariance profoundly influences the dynamics and structure of complex systems, spanning from critical phenomena to network architecture. Here, we propose a precise definition of scale invariant networks by leveraging the concept of a constant diffusion entropy production rate across scales in a renormalization-group coarse-graining setting. This framework enables us to differentiate between scale-free and scale-invariant networks, revealing distinct characteristics within each class. Furthermore, we offer a comprehensive inventory of genuinely scale-invariant networks, both natural and artificially constructed, demonstrating that the human connectome exhibits notable features of scale invariance. Our findings pave the way for novel avenues to investigate the scale-invariant structural properties crucial in biological and socio-technological systems.
|
Guido Giachetti - Université Paris Cergy CY
Replica methods in measurement-induced phase transitions
Monitored quantum dynamics, i.e. the dynamics of an open quantum system subjected to external measurements, has been extensively studied in recent years. In particular, the competition between unitary dynamics and monitoring can give rise to measurement-induced phase transitions (MIPTs), which can be understood, within the replica formalism, as a breaking of the permutational symmetry. The seminar will focus on exact results obtained for the monitored Brownian Heisenberg model (arXiv:2306.12166), showing how the presence of a transition can vanish as one takes the replica limit. Finally, some preliminary results on monitored random circuits will be presented.
|
Ivan Gilardoni - SISSA - Trieste
Maximum-entropy principle to improve predictability of Molecular Dynamics simulations
Molecular Dynamics simulations play a crucial role in resolving the underlying conformational dynamics of complex biomolecules. However, their capability to reproduce and predict dynamics in agreement with experiments is limited also by the accuracy of the force-field model. This issue can be tackled by suitable integration with experimental data. To this aim, several approaches, some of them based on the maximum-entropy principle, have been proposed in the literature. They can be divided into two main classes. The first class includes Ensemble Refinement methods, which act independently on each molecular system, lacking transferability to different systems. The second one encompasses Force-Field Refinement approaches, based on a reasonable guess of the corrections to the potential energy, making them transferable to different molecules at the expense of a limited flexibility of the ensembles. These two classes of methods have so far been used in a disjoint fashion. In our work, we show how a suitable combination of them can result in a lower prediction error for a realistic case study of RNA oligomers.
|
Michele Giusfredi - Università degli Studi di Firenze, Department of Physics and Astronomy
Localization in boundary-driven lattice models
Several systems may display an equilibrium condensation transition, where a finite fraction of a conserved quantity is spatially localized. The presence of two conservation laws may induce the emergence of such a transition in an out-of-equilibrium setup, where boundaries are attached to two different and subcritical heat baths. We study this phenomenon in a class of stochastic lattice models, where the local energy is a general convex function of the local mass, mass and energy being both globally conserved in the isolated system. We obtain exact results for the nonequilibrium steady state (spatial profiles, mass and energy currents, Onsager coefficients) and we highlight important differences between equilibrium and out-of-equilibrium condensation.
|
Giacomo Gori - Ruprecht-Karls-Universität Heidelberg
Emergent critical geometry
Critical correlations in a bounded system with ordered boundary are argued to be functions of a suitably chosen metric g. This locally isotropic metric governs the order parameter profile according to general scaling arguments. These statements are verified via extensive Monte Carlo simulations. A natural candidate for g is the solution of a differential geometry problem known as the Yamabe problem, i.e. finding a local rescaling of a metric that makes the curvature constant. The correct Yamabe problem to be considered entails a fractional (anomalous, in physics parlance) generalization of the Ricci scalar curvature.
|
Lorenzo Grimaldi - Università di Roma "Tor Vergata" e Centro Ricerche "Enrico Fermi"
Entropy and heat capacity of regular fractal graphs
The Laplacian Renormalisation Group (LRG) has been introduced as a means to generalise the usual coarse-graining procedure of homogeneous systems to heterogeneous networks. It draws on the Gaussian model of random graphs and the diffusion equation, and treats the diffusion time and the Laplacian operator as the analogues of the inverse temperature and the Hamiltonian, respectively. This defines a network propagator and an ensemble of accessible configurations, in analogy with the canonical ensemble of statistical physics. The construction yields natural generalisations of entropy and heat capacity to the case of heterogeneous networks, thus providing us with the tools to study possible phase transitions and to identify the optimal scale at which the coarse-graining can be performed. Recent developments have displayed the ability of the LRG to unveil information about the structure of the considered network. Whenever the heat capacity displays a plateau over a time interval, the system becomes scale-invariant within that interval and the value of the heat capacity yields the spectral dimension of the underlying geometry. Moreover, it has been shown that the Fiedler eigenvalue scales as a power of the number of nodes of the system, with the value of the exponent depending on the specific geometry. In some cases, like the Sierpinski gasket and carpet, this value is again related to the spectral dimension of the graph. We examined the behaviour of some regular fractal graphs, namely variations of the gasket and carpet. Within these frameworks, we observed that the LRG machinery produces the correct value of the spectral dimension of the graph, both from the perspective of the heat capacity and from that of the Fiedler eigenvalue. On the other hand, we also analysed different configurations of the Dirac comb, for which the scaling of the Fiedler eigenvalue is not associated with the spectral dimension of the graph. Indeed, the LRG investigation succeeded in this setting as well.
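A minimal sketch of the LRG "thermodynamic" quantities mentioned above, evaluated for a ring graph whose Laplacian spectrum is known analytically (for a generic network one would diagonalize L = D - A instead): the normalised propagator exp(-tau L)/Tr exp(-tau L) defines an occupation of the Laplacian eigenvalues, from which the entropy S(tau) and the heat capacity C = -dS/d log tau follow; the graph choice and parameter values are illustrative assumptions.

```python
# LRG-style entropy and heat capacity for a ring (spectral dimension d_s = 1);
# the heat-capacity plateau is expected at d_s/2 = 0.5 for this toy example.
import numpy as np

N = 20000
k = np.arange(N)
eigs = 2.0 - 2.0 * np.cos(2.0 * np.pi * k / N)    # Laplacian eigenvalues of a ring

def entropy(tau):
    w = np.exp(-tau * eigs)
    p = w / w.sum()                               # eigenvalue occupation of rho(tau)
    p = p[p > 0]
    return -(p * np.log(p)).sum()

taus = np.logspace(-1, 4, 200)
S = np.array([entropy(t) for t in taus])
C = -np.gradient(S, np.log(taus))                 # "heat capacity" of the network
plateau = C[(taus > 10) & (taus < 1000)].mean()
print(f"heat-capacity plateau ~ {plateau:.2f} (d_s/2 = 0.5 for the ring)")
```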
|
Mirko Hu - Department of Medicine and Surgery, TecMed Lab, University of Parma
ECG Signals Revisited with Network Science derived Features
Since Prof. Einthoven introduced the electrocardiogram (ECG) by string galvanometer in 1901, the interpretation of cardiac health has remained largely unchanged. Traditionally, experts analyze ECGs for diagnosis, but the increasing volume of data and advancements in computer-based methods necessitate new approaches for feature extraction. Network science is becoming a common language to describe complex systems. In network science, complex systems are represented as nodes connected by edges. Time series can be considered complex systems too, and they can be translated into networks by using visibility graphs. It is possible to characterize these complex systems by calculating the graph properties and by using a cartography-based method. Cardiovascular signals, exhibiting strong pseudoperiodic behavior, present challenges in the early detection of arrhythmias, a prevalent cardiovascular disorder. Early recognition, as well as prediction, of arrhythmias can potentially save many lives. We used the 2017 PhysioNet/CinC Challenge database, and we analysed 1,000 short ECG recordings (500 normal and 500 arrhythmic). One approach to ECG analysis via network science involves segmenting signals into chunks and transforming them into visibility graphs. The visibility condition states that two time points can be connected if they are visible to each other, i.e. if the straight line connecting their values does not intersect the values of the time points between them. Then, the multiple graphs from one ECG were overlapped, and the weights were normalised to obtain a weighted graph with weights between 0 and 1. We used an arbitrary threshold of 0.50 to cut the noisy edges. We obtained, in this way, a unique representative graph for the ECG of a subject. To extract the features used to classify an ECG, we performed community detection through the Louvain algorithm, and we identified the roles of each node in the graph. The roles depend on the position of a node inside its community. We also calculated some properties of the graph, ranging from the diameter to the density. The percentage of nodes in each role, the total number of nodes, the average degree, the density, the diameter, and the average clustering were used as features for a random forest classifier. After optimising the hyperparameters of the machine learning classifier, we obtained an accuracy of 74% and an AUC of 0.81 on the test set (300 ECG recordings, 150 normal and 150 arrhythmic). This work can pave the way for revisiting the traditional ways of reading ECGs based on the analysis of the typical ECG waves, such as the QRS complex and the P and T waves. This work also presents an innovative way of using a cartography-based analysis of the network and of extracting new numerical features from an ECG signal. Finally, this work can help machines to better recognise the arrhythmic patterns absent in normal signals.
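A minimal sketch of the natural-visibility construction used above: two samples are connected when the straight line joining them stays above every intermediate sample; the segmentation, graph-overlap, thresholding and classification steps of the full pipeline are omitted, and the input series is a toy signal rather than a real ECG.

```python
# Natural visibility graph of a toy time series (illustrative only).
import numpy as np

def visibility_edges(x):
    """Return the edge list of the natural visibility graph of series x."""
    n = len(x)
    edges = []
    for i in range(n - 1):
        for j in range(i + 1, n):
            # visibility: every point k between i and j must lie below the chord
            visible = all(
                x[k] < x[i] + (x[j] - x[i]) * (k - i) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                edges.append((i, j))
    return edges

rng = np.random.default_rng(0)
t = np.arange(200)
toy_trace = np.sin(2 * np.pi * t / 40) + 0.2 * rng.normal(size=t.size)
print(f"{len(visibility_edges(toy_trace))} visible pairs out of "
      f"{t.size * (t.size - 1) // 2}")
```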
|
Lata Kharkwal Joshi - SISSA Trieste
Measurements of many-body quantum chaos
I will present recent works on theoretical measurement protocols and experimental results on a probe of many-body quantum chaos, namely the spectral form factor (SFF). The SFF characterizes the statistics of energy eigenvalues, making it a fundamental diagnostic of many-body quantum chaos. In addition, partial spectral form factors (PSFFs) can be defined, which refer to subsystems of the many-body system. They provide unique insights into the energy-eigenstate statistics of many-body systems. PSFFs have recently been measured using the randomized measurement protocol, which I will discuss in my poster.
|
Ramaz Khomeriki - Ivane Javakhishvili Tbilisi State University
Detection of “spatial” temperature by temporal statistics
We investigate the temporal consequences of spatial thermalization in multimode nonlinear optical systems based on waveguide arrays. Recent studies (see e.g. L.G. Wright, F.O. Wu, D.N. Christodoulides, F.W. Wise, Nature Physics, v. 18, 1018, 2022) indicate that spatial optical modes with a given frequency undergo thermalization with respect to the propagation-constant distribution and are thus characterized by a “spatial” temperature and chemical potential. Our aim is to translate these thermal characteristics into the usual temporal domain by hypothetically placing a classical or quantum particle in the potential created by those spatial modes and studying the dependence of the temporal statistics of the particle on the “spatial” thermal characteristics of the optical modes.
|
Luca Leuzzi - CNR-NANOTEC, Istituto di Nanotecnologia, Sede di Roma, Piazzale A. Moro 5
Logarithmic critical slowing down in complex systems: replica field theory from statics to dynamics in
We consider second-order phase transitions in which the order parameter is a replicated overlap matrix. We focus on a tricritical point that occurs in a variety of mean-field models and that, more generically, describes higher-order liquid-liquid or liquid-glass transitions. We show that the static replicated theory implies slowing down with a logarithmic decay in time. The dynamical equations turn out to be those predicted by schematic Mode Coupling Theory for supercooled viscous liquids at an A3 singularity, where the parameter exponent is λ=1. We obtain a quantitative expression for the parameter μ of the logarithmic decay in terms of cumulants of the overlap, which are physically observable in experiments or numerical simulations.
|
Edoardo Marchi - Università degli Studi di Milano
Modelling chromosome dynamics: how loop extrusion impacts chromatin self-interaction
Contacts between genomic loci of the chromatin fibre are essential for the control of genetic expression, yet the mechanistic details of these interactions are not completely understood. Loop extrusion by motor proteins like cohesin mediates the attractive interactions in chromatin on the length scale of megabases, providing the polymer with a well-defined structure and at the same time determining its dynamics. The duration of these contacts can be observed in living cells by time-lapse fluorescence microscopy, although with limited spatial and temporal resolution. Combining simulations with simple analytical models, we show how to obtain important information about the mechanism of stabilisation of contacts by cohesin from the two-dimensional dynamics of the system on the focal plane. In particular, we obtain estimates of the relevant distance at which a cohesin-mediated contact occurs and of the cohesin velocity along the chain.
|
Matteo Marsili - ICTP
What is abstraction?
Can we quantify how abstract a representation is? We know that abstraction develops with depth along biological or artificial neural hierarchies. We know that abstraction has to do with data independence and invariances. These insights will be studied and tested quantitatively in deep belief networks. (either a talk or a poster will be fine)
|
Massimo Materassi - Consiglio Nazionale delle Ricerche - Istituto dei Sistemi complessi CNR-ISC
Huge room for Statistical Physics in Ionospheric Physics and Space Weather
Since its discovery, the Earth’s ionosphere has been treated in many different ways: initially, it was described rather roughly as a channel to communicate through electromagnetic waves with receivers beyond the horizon, due to its capacity to reflect signals. However, being a highly variable physical system, in the users’ mind the ionosphere has been viewed more often as a complication than as a tool: its variability was always seen as an annoying characteristic, and it is still regarded this way by a wide community of communication and positioning “users”. The highly variable ionosphere, however, is much more than an annoying detail disturbing GPS technology, or the scarcely reliable channel for ground-to-ground communication: for some decades now, the physical aspects of the system have gained importance in the scientific and user communities. As physical models of ionospheric dynamics grew more and more important, multi-disciplinary aspects of physics and mathematics became central: from plasma dynamics to electromagnetics in random media; from photochemistry kinetic equations to the kinetic theory of gases of many species; from turbulence theory to forced criticality. In this short review, I will try to sketch some aspects under which the state of the art of ionospheric physics and modelling, with its paramount role in Space Weather science, appears to be a scenario with serious opportunities of intervention for the Statistical Mechanics and Dynamical Systems community, with their unique expertise in non-equilibrium thermodynamics, stochastic theories, dissipative structures and the predictability assessment of chaotic systems.
|
Giovanni Messuti - University of Salerno - INFN Napoli, Gruppo coll. SA
Tuning the Brain to Criticality: Neuronal Avalanches and Cyclic Alternating Patterns during Sleep
(Silvia Scarpetta, Niccolò Morisi, Carlotta Mutti, Nicoletta Azzi, Irene Trippi, Rosario Ciliento, Ilenia Apicella, Vincenzo Palmieri, Giovanni Messuti, Marianna Angiolelli, Fabrizio Lombardi, Liborio Parrino, Anna Elisabetta Vaudano) The human brain consists of billions of neurons, interacting in highly nonlinear ways. Neuronal avalanches are bursts of neuronal activity that propagate through the brain network in a cascading manner. Historically, this collective emergent behavior has been observed both in vivo and in vitro in cortical networks at rest. Recently, it has been shown that neuronal avalanches during sleep exhibit power-law distributions in size and duration, a signature of scale invariance. There has been speculation about a connection between sleep and its role in tuning the brain to criticality, as deviations from power-law behavior have been observed in subjects under sleep deprivation. In this study, we recorded the EEG of 10 healthy subjects during sleep. We confirm the scale-invariant behavior of the avalanche sizes and durations during sleep, also demonstrating its robustness to the threshold defining the occurrence of an avalanche. We find the critical exponents and the scaling relations connecting them in agreement with those predicted by the Mean Field Directed Percolation universality class. The investigation of the sleep structure revealed important correlations of avalanche occurrences with the Cyclic Alternating Pattern (CAP), a specific pattern in EEGs typically observed during the deeper phases of NREM sleep. This finding suggests the presence of a functional link between avalanche occurrence and the occurrence of CAP phase A during NREM sleep. The large fluctuations involved in the CAP activation phase are a fingerprint of vicinity to criticality. As a result, CAP cycles are a genuine hallmark of tuning to criticality during sleep.
|
Salvatore Micciche' - UNIPA - Dipartimento di Fisica e Chimica Emilio Segrè
Role of correlations in the maximum distribution of multiscale stationary Markovian processes
We are interested in investigating the statistical properties of extreme values for strongly correlated variables. The starting motivation is to understand how the strong-correlation properties of power-law distributed processes affect the possibility of exploring the whole domain of a stochastic process (the real axis in most cases) when performing time-average numerical simulations, and how this relates to the numerical evaluation of the autocorrelation function. We show that correlations decrease the heterogeneity of the maximum values. Specifically, through numerical simulations we observe that for strongly correlated variables whose probability distribution function decays like a power law $1/x^\alpha$, the maximum distribution has a tail compatible with a $1/x^{\alpha+2}$ decay, while for i.i.d. variables we expect a $1/x^\alpha$ decay. As a consequence, we also show that the numerically estimated autocorrelation function converges to the theoretical prediction with a factor that depends on the length $n$ of the simulated time series as a power law, $1/n^{\alpha^\delta}$ with $\delta<1$. This accounts for a very slow convergence rate.
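A minimal numerical check of the i.i.d. baseline quoted above, estimating the tail exponent of the block maximum of independent power-law samples; the strongly correlated, multiscale Markovian case discussed in the abstract requires the specific process and is not reproduced here, and all parameters are illustrative.

```python
# Block maxima of i.i.d. samples with pdf ~ 1/x^alpha keep the same tail exponent.
import numpy as np

rng = np.random.default_rng(0)
alpha, block, n_blocks = 3.0, 500, 5000
# Pareto samples with pdf ~ x^(-alpha) for large x
samples = 1 + rng.pareto(alpha - 1, size=(n_blocks, block))
maxima = samples.max(axis=1)

# Hill-type estimate of the pdf tail exponent of the maxima
tail = np.sort(maxima)[-500:]
hill = 1 + tail.size / np.sum(np.log(tail / tail[0]))
print(f"estimated pdf tail exponent of the maximum: {hill:.2f} "
      f"(i.i.d. expectation ~ {alpha})")
```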
|
Federico Milanesio - PhD student - University of Turin
Understanding Geometric Compression in Neural Network Dynamics
Neural networks are the most used algorithm in modern machine learning and have achieved incredible performance on different tasks. However, their development relies not on a deep theoretical understanding but on trial and error. One of the main difficulties in understanding NNs lies in the problematic nature of their training dynamics, which seeks optima in a very high-dimensional rough landscape. At the same time, the models avoid becoming trapped in suboptimal minima and converge to points with good generalization, thus escaping the so-called curse of dimensionality. Intuitively, in regression tasks, the optimal representation of the data would be a low-dimensional manifold along which the points align, allowing for easy linear regression. We introduce an entropy-based measure to capture geometric compression and investigate this prediction. Our observations during training reveal a non-monotonic behavior, namely a first compression phase followed by a subsequent decompression phase where our measure increases again. Such a result is remarkable and has not been observed before in regression NNs. We prove that this behavior is a property of feature learning, and that it is quite general with respect to changes in hyperparameters and across different datasets. We hypothesize that the epoch of inversion is when the network has learned to predict the best possible linear regression, and the subsequent decompression phase may indicate a phase of generalization, in which the model becomes more flexible and needs decompressed representations to accommodate more complex functions. This behavior aligns with current literature suggesting that neural networks learn the moments of the data distribution sequentially. We theorize that inversion happens when the network has learned the first two moments and starts learning moments of higher order.
|
Graziano Mileto - Università Popolare Federiciana Dipartimento di Medicina Integrata e Biofisica ”Giuseppe Martines” (https://www.cimb.me/dipartimenti.html)
Dielectrons e+e− signals from fusion of QED Coherence and Nuclear states (GDR) in C-C and Ca-Ca relativistic heavy ion collisions and the $d^6\sigma = W_{fi}\,|J_{inc}|\,d^6\Phi_{DS}$ problem
The theory of nucleus-on-nucleus scattering of Mileto’s model suggests that the e+e− signals in relativistic heavy ion collisions are evidence of the fusion of electromagnetic and nuclear fields. At the beginning of the calculation, using relativistic quantum field theory, we may analyse the joining of the bosonization of coherent and real photons with the nuclear states coming from the Giant Dipole Resonance, which is a nonlinear sixth-order partial-differential problem in the calculation of the cross section. Other models fail to describe the whole spectrum of dielectrons measured by the DLS collaboration at the BEVALAC. These models calculate the dielectrons produced from different hadronic generators, neglecting the coherent and bosonic effects of electromagnetic fields. Therefore, their calculations underestimate the experimental results; instead, we present the solution of the many-body interaction with detailed calculations that are entirely in accordance with the measured differential invariant-mass spectra of the dielectron cross section for low and medium invariant masses, and with radiative corrections it can also explain high invariant masses. This model implements quantum entanglement. We conclude that, with this model, we may foresee the number of dielectrons produced by rescaling the amplitudes and the wave functions with this number, as foreseen by the Mileto-Baur-Bertulani-Preparata-Glauber quantum field theory; the calculated spectra are complementary to hadronic generators, establishing our model as a probable description of the heavy-ion pile-up scenario, as confirmed in the 2019 CERN experiments, which found that γ−γ scattering is a fundamental property of nucleus-nucleus interactions, a law confirmed with a confidence of 8 σ. This theory is able to explain not only the experimental data of nucleus-nucleus interactions at relativistic energy, but also the data observed by the Chandra and Gemini observatories from the emission of matter and antimatter by the pulsar J2030+4415 and other similar pulsars, whose states can be described by General Relativity. [1] Graziano Mileto, Teoria di campo quanto-relativistica - Una guida sia teorica che applicata e di ricerca scientifica, con esempi ed esercitazioni, 316 pages, published by Aracne Editrice, Collana: Il nucleare, n. 14, ISBN 979-12-218-0465-2, 2023. [2] Graziano Mileto, Produzione di coppie di-elettroniche negli urti fra ioni in regime relativistico, 524 pages, published by Aracne Editrice, Collana: Il nucleare, n. 16, ISBN 979-12-218-0688-5, 2023. [3] Graziano Mileto, Produzione di coppie dielettroniche nel regime relativistico, 212 pages, published by Aracne Editrice, ISBN 978-88-548-8221-8, 2015. [4] Graziano Mileto, G. Preparata et al., “Coherent QED, giant resonances and e+e− in high energy nucleus-nucleus collisions”, Il Nuovo Cimento A (1971-1996) 112, 767-781 (1999). https://doi.org/10.1007/BF03035885, republished 09 January 2016 on https://link.springer.com/ [5] Graziano Mileto, “Dielectrons e+e− signals from fusion of QED coherent and nuclear states in relativistic heavy ion collisions”, Hadronic Journal, Volume 28, pp. 409-440, issue 4, September 2005. https://www.hadronicpress.com/ [6] Graziano Mileto, “Bevalac’s energy ion collisions coherent isodileptons e+e− results fitted by using QED and Hadronic Mechanics”, Hadronic Journal, Volume 29, pp. 143-168, issue 2, April 2006. [7] Graziano Mileto, M.
Milea, “Dielectrons e+e− signals from fusion of QED Coherence and Nuclear states (GDR) in Ca-Ca relativistic heavy ion collision” (the ISBN of the CD containing the article is 9788896810040), for the Workshop “Mathematica Italia 7° User Group Meeting”, Università degli Studi di Napoli Federico II, 28-29 May 2015, held at the Accademia delle Scienze di Napoli, for which a single CD was produced with all the speakers’ contributions. [8] Graziano Mileto, M. Milea, “Dielectrons e+e− signals from fusion of QED Coherence and Nuclear states (GDR) in C-C relativistic heavy ion collision” (the ISBN of the CD containing the article is 9788896810040), for the Workshop “Mathematica Italia 7° User Group Meeting”, Università degli Studi di Napoli Federico II, 28-29 May 2015, held at the Accademia delle Scienze di Napoli, for which a single CD was produced with all the speakers’ contributions.
|
Daniela Moretti - Università degli studi di Bari
Non-Gaussian behavior in subordinated dynamics for a Brownian particle in a trapping potential.
In recent years, much attention has been paid to the observation of non-Gaussian probability density functions for diffusing systems in a variety of experiments and theoretical models, as this property could entail new physical insight. Here, we solve analytically the Langevin equation of motion in the overdamped regime for a particle moving both in the absence and in the presence of a harmonic potential, assuming the diffusion coefficient to be a telegraph process, i.e. the diffusion coefficient switches stochastically between two states. We compute analytically the first four moments of the distribution of the particle position, observing non-Gaussian behaviour at short times. We find that the duration of the non-Gaussian behaviour decreases with the strength of the confinement and increases with the microscopic time scale of the subordinating process. The relations we determine can become a very useful tool in experiments to estimate the value of the unknown microscopic time scale starting from the measured kurtosis.
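A minimal simulation sketch of the process described above, assuming an overdamped particle in a harmonic trap whose diffusion coefficient switches between two values with a symmetric rate; the parameter values and the Euler-Maruyama discretisation are illustrative, not those of the analytical treatment.

```python
# Overdamped Langevin dynamics with a telegraph-process diffusion coefficient.
import numpy as np

rng = np.random.default_rng(0)
k, D1, D2, rate = 1.0, 0.1, 1.0, 2.0      # trap stiffness, two D states, switch rate
dt, n_steps, n_traj = 2e-3, 2000, 10000

x = np.zeros(n_traj)
D = np.where(rng.random(n_traj) < 0.5, D1, D2)     # random initial telegraph state
kurtosis = []
for _ in range(n_steps):
    # telegraph switching with probability rate*dt per step
    flip = rng.random(n_traj) < rate * dt
    D = np.where(flip, D1 + D2 - D, D)
    # Euler-Maruyama step of the overdamped Langevin equation
    x += -k * x * dt + np.sqrt(2 * D * dt) * rng.normal(size=n_traj)
    kurtosis.append(np.mean(x**4) / np.mean(x**2) ** 2)

# excess kurtosis (0 for a Gaussian) at an early and a late time
print("excess kurtosis, early vs late:",
      round(kurtosis[50] - 3, 3), round(kurtosis[-1] - 3, 3))
```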
|
Stefano Mossa - CEA Grenoble - IRIG
Theory and simulation of instantaneous normal modes in liquids
In liquids, the vibrational density of states of glasses is replaced with the Instantaneous Normal Modes (INM) spectrum. While in glasses instantaneous system configurations correspond to minima of the potential energy landscape with the eigenvalues of the associated Hessian matrix all positive (stable modes), in liquids this is no longer the case, and negative eigenvalues (unstable modes) appear. The latter provide important information on liquid dynamics and transport properties, and have been characterized numerically in the past. A systematic deeper theoretical understanding of the matter is provided by the Heterogeneous Elasticity Theory (HET). Here, space-dependent fluctuating moduli are included in the elasticity equations, naturally reproducing many aspects of the low-frequency vibrational excitations in glasses. In this talk I will present our extension of the HET to the liquid state [1], where the instantaneous-normal-mode spectrum of the liquid is described as that of an elastic medium with local shear moduli exhibiting strong spatial fluctuations, including a large number of negative values. This view provides quantitative predictions which consistently reproduce the outcome of extensive Molecular Dynamics simulations of a model soft-spheres liquid. We have characterized in depth the spectrum of the Hessian matrix, which displays a sharp maximum close to zero energy and has a strongly temperature-dependent shape, symmetric at high temperatures and becoming rather asymmetric at low temperatures, close to the dynamical critical temperature. Most importantly, we have demonstrated that the theory naturally reproduces a surprising phenomenon, a zero-energy spectral singularity with a cusp-like character developing in the vibrational spectra upon cooling. This feature, although noticed in a few previous numerical studies, was generally overlooked due to a misleading representation of the data. I will provide a thorough analysis of these issues, based on both accurate predictions of the theory and simulation data for systems of large size. [1] S. Mossa, T. Bryk, G. Ruocco, W. Schirmacher, "Heterogeneous-elasticity theory of instantaneous normal modes in liquids", Sci. Rep. 13, 21442 (2023)
|
Matteo Negri - Università di Roma Sapienza
Random Feature Hopfield Networks generalize retrieval to previously unseen examples
It has been recently shown that, when a Hopfield Network stores examples generated as superpositions of random features, new attractors appear in the model corresponding to such features. In this work we show that the network also develops attractors corresponding to previously unseen examples generated with the same set of features. We claim that this surprising behaviour is due to the formation of attractors in correspondence with mixtures of features, and we support this claim by calculating the phase diagram analytically. Finally, we discuss how this framework could be used to predict the generalization capabilities of modern neural networks.
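A toy sketch of this setting, assuming for illustration that binary examples are built as sparse sign-superpositions of a fixed dictionary of random features, stored with a Hebbian rule, and that a previously unseen superposition is relaxed under zero-temperature dynamics; the sizes and the superposition rule are assumptions, not the authors' exact definitions.

```python
# Hebbian storage of random-feature examples and retrieval of an unseen one.
import numpy as np

rng = np.random.default_rng(0)
N, n_feat, n_examples, feat_per_example = 2000, 20, 300, 3

features = rng.choice([-1, 1], size=(n_feat, N))

def make_example():
    idx = rng.choice(n_feat, size=feat_per_example, replace=False)
    coef = rng.choice([-1, 1], size=feat_per_example)
    return np.sign(coef @ features[idx])          # sign of a sparse superposition

examples = np.array([make_example() for _ in range(n_examples)])
J = examples.T @ examples / N                     # Hebbian couplings
np.fill_diagonal(J, 0.0)

unseen = make_example()                           # built from the same features
s = unseen.copy()
for _ in range(50):                               # synchronous zero-temperature dynamics
    s = np.sign(J @ s)

print("overlap of the fixed point with the unseen example:", float(s @ unseen) / N)
```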
|
Giuseppe Negro - University of Bari
Controlling flow patterns and topology in active emulsions
Active emulsions and liquid crystalline shells are intriguing and experimentally realizable types of topological matter. We numerically study the morphology and spatiotemporal dynamics of a double emulsion, where one or two small passive droplets are embedded in a larger active droplet. We find that activity introduces a variety of rich and nontrivial nonequilibrium states in the system. First, a double emulsion with a single active droplet becomes self-motile, and there is a transition between translational and rotational motion: both of these regimes remain defect-free, hence topologically trivial. Second, a pair of particles nucleates one or more disclination loops, with conformational dynamics resembling a rotor or chaotic oscillator, accessed by tuning activity. In the first state, a single, topologically charged disclination loop powers the rotation. In the latter state, this disclination stretches and writhes in 3D, continuously undergoing recombination to yield an example of an active living polymer. These emulsions can be self-assembled in the lab and provide a pathway to form flow and topology patterns in active matter in a controllable way, as opposed to bulk systems that typically yield active turbulence.
|
Daniele Nello - Sissa
Thermodynamics of adiabatic pumping in quantum dots
I will present a comprehensive study of adiabatic quantum pumping through a resonant level model, consisting of a single-level quantum dot connected to two fermionic leads. A self-contained thermodynamic framework for this model is developed using adiabatic expansion techniques, incorporating variations in the energy level of the dot and the tunneling rates with the thermal baths. This approach enables a detailed analysis of various examples of pumping cycles, with key thermodynamic quantities such as entropy production and dissipated power being computed. Important insights are revealed into the relationship between these thermodynamic quantities and the system's transport properties, including the pumped charge and charge noise. Notably, the entropy production rate approaches zero in the charge quantization limit, while the dissipated power obeys a quantization rule. These findings enhance the understanding of quantum pumping mechanisms and contribute to the broader field of quantum thermodynamics by linking transport phenomena with fundamental thermodynamic principles.
|
Davide Pirovano - UniTo
Should we always train neural networks on subclasses?
In classification problems, the model has to predict class labels given the data features. However, in several datasets the classes are naturally organized in hierarchical structures. While the classification task is defined at a given level of the class structure, labels with a finer granularity can be associated with the data points and used during training. For example, if we are interested in separating images of vehicles from images of animals, we can train the model directly using these two labels or, alternatively, on the multi-class problem defined by the finer labels associated with the specific type of vehicle or animal, such as car, ship, dog, cat, etc. Empirical evidence suggests that the second strategy can be advantageous for performance. Our goal is to test the generality of this effect and understand its origin using real and synthetic datasets. We will show that training on fine-grained labels does not always boost the classification performance. The optimal training strategy significantly depends on the geometric structure of the data and its relation to the labels, rather than solely on the granularity of the labels. Factors such as the complexity of the task, the dataset size, and the model capacity also significantly influence the potential advantages of training with fine-grained labels.
|
Sebastiano Stramaglia - Università degli Studi di Bari Aldo Moro & INFN
Disentangling high order effects in the transfer entropy
Transfer Entropy (TE), the primary method for determining directed information flow within a network system, can exhibit bias—either in deficiency or excess—during both pairwise and conditioned calculations, owing to high-order dependencies among the dynamic processes under consideration and the remaining processes in the system used for conditioning. Here, we propose a novel approach. Instead of conditioning TE on all network processes except the driver and target, as in its fully conditioned version, or not conditioning at all, as in the pairwise approach, our method searches for both the multiplets of variables that maximize information flow and those that minimize it. This provides a decomposition of TE into unique, redundant, and synergistic atoms. Our approach enables the quantification of the relative importance of high-order effects compared to pure two-body effects in information transfer between two processes, while also highlighting the processes that contribute to building these high-order effects alongside the driver. We demonstrate the application of our approach in climatology by analyzing data from El Niño and the Southern Oscillation.
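A minimal sketch of the bias discussed above, using a linear-Gaussian transfer entropy estimator on a toy system in which a hidden common driver z feeds both x and y: the pairwise TE from x to y is inflated by redundancy, while conditioning on z removes it. The toy system and the estimator are illustrative, not the authors' method.

```python
# Gaussian TE via residual variances: TE = 0.5*log(var(restricted)/var(full)).
import numpy as np

def residual_var(target, predictors):
    """Variance of the residual of a least-squares fit of target on predictors."""
    A = np.column_stack(predictors + [np.ones_like(target)])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return np.var(target - A @ coef)

rng = np.random.default_rng(0)
T = 50000
z = rng.normal(size=T)                       # hidden common driver
x = 0.9 * z + 0.3 * rng.normal(size=T)       # x reads z with a little noise
y = np.zeros(T)
eta = rng.normal(size=T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.8 * z[t - 1] + 0.2 * eta[t]

yt, yp, xp, zp = y[1:], y[:-1], x[:-1], z[:-1]
te_pair = 0.5 * np.log(residual_var(yt, [yp]) / residual_var(yt, [yp, xp]))
te_cond = 0.5 * np.log(residual_var(yt, [yp, zp]) / residual_var(yt, [yp, xp, zp]))
print(f"pairwise TE x->y: {te_pair:.3f}   TE x->y conditioned on z: {te_cond:.3f}")
```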
|
Ivan Saychenko - University of Parma
2D structure formation in nonlinear quantum mechanics
We investigate the formation and stability of 2D structures in nonlinear quantum mechanics. In particular, we compare two mean-field models, motivated as models for either ultracold atoms with short-range interactions or fuzzy cold dark matter with long-range interactions. Future studies should extend our numerical simulations to include excitations beyond mean field using the truncated Wigner method of statistical physics.
|
Matteo Scandola - University of Trento
Analyzing Active Droplet Dynamics: Leveraging Image Processing for Non-Equilibrium Statistical Physics
Active matter can harness energy from its surroundings and propel itself away from equilibrium. Its constituents absorb energy from the environment and dissipate it, e.g. through motion or the exertion of mechanical forces. The investigation of these systems offers promising new perspectives on the field of non-equilibrium statistical physics, further paving the way for the design of innovative life-like materials and devices. In this work, we analyse the behaviour of a synthetic active matter system consisting of liquid ethyl silicate droplet surfers [1], whose self-propulsion decays over time, within a sodium dodecyl sulfate (SDS) solution [2]. By relying on a synergistic combination of techniques, such as computer vision algorithms for accurate droplet detection [3] and analyses grounded in non-equilibrium statistical mechanics and graph theory, we quantitatively characterise all the stages of the dynamical evolution of the system, from its initial diffusive regime up to the formation of large clusters of droplets that appear as the activity wanes. The presented work showcases a comprehensive analysis of an actively evolving system, offering not only a general pipeline for the investigation of analogous problems but also a deeper perspective at the intersection between physics and synthetic biology. [1] C. Watanabe, S. Tanaka, R. J. Löffler, M. M. Hanczyc and J. Gorecki, Dynamic ordering caused by a source-sink relation between two droplets, Soft Matter 18, 6465–6474 (2022). [2] S. Tanaka et al., Dynamic ordering in a swarm of floating droplets driven by solutal Marangoni effect, J. Phys. Soc. Jpn. 86, 101004 (2017). [3] M. Weigert, U. Schmidt, R. Haase, K. Sugawara and G. Myers, Star-convex polyhedra for 3D object detection and segmentation in microscopy, IEEE Winter Conference on Applications of Computer Vision (WACV), March 2020.
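As one concrete element of such a pipeline, a minimal mean-squared-displacement computation for the initial diffusive regime could look like the sketch below (the trajectory array is an assumed placeholder for droplet positions detected via [3] and linked across frames):

# Sketch: time- and ensemble-averaged mean-squared displacement (MSD) of droplets.
# `trajectories` is an assumed array of shape (n_droplets, n_frames, 2)
# holding x, y positions already linked across frames.
import numpy as np

def mean_squared_displacement(trajectories, max_lag):
    """MSD for time lags 1..max_lag (in frames), averaged over droplets and time."""
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = trajectories[:, lag:, :] - trajectories[:, :-lag, :]
        msd[lag - 1] = np.mean(np.sum(disp**2, axis=-1))
    return msd

# Example with synthetic Brownian trajectories standing in for detected droplets:
rng = np.random.default_rng(1)
traj = np.cumsum(rng.normal(size=(50, 500, 2)), axis=1)
msd = mean_squared_displacement(traj, max_lag=100)
# In the diffusive regime the MSD grows linearly with the lag (slope ~ 4D in 2D).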
|
Silvia Scarpetta - Dept. of Physics "E.R. Caianiello", University of Salerno, Italy
Ensemble of Convolutional Neural Networks mitigating overconfident predictions on seismic traces
(Giovanni Messuti, Silvia Scarpetta, Ortensia Amoroso, Ferdinando Napolitano, Mariarosaria Falanga, Paolo Capuano) Deep Neural Networks (DNNs) have demonstrated remarkable power across various domains, achieving state-of-the-art performance in tasks such as image recognition and natural language processing. However, DNNs often face significant challenges with out-of-sample data, where the model encounters data points that differ substantially from the training set, producing unjustified overconfident predictions that lead to poor generalization. Ensemble learning is a technique in which multiple models are trained to solve the same problem: by aggregating the predictions of several models, the ensemble can achieve better generalization and robustness than any individual model. Ensembles are also known to enhance uncertainty estimation, offering more reliable confidence intervals and mitigating the risks associated with out-of-sample data. In our study, we build an ensemble of Convolutional Neural Networks to classify seismic traces based on first-motion polarity, aggregating the predictions of the individual networks through Unweighted Model Averaging. Our ensemble model demonstrates a greater ability to handle out-of-sample data, mitigating the effect of overconfident predictions. As a result, noise-only waveforms and waveforms lacking polarity information are correctly distinguished from waveforms containing polarity information.
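A minimal sketch of the Unweighted Model Averaging step (array shapes and class labels are illustrative placeholders; the actual ensemble members are CNNs trained on seismic waveforms):

# Sketch: Unweighted Model Averaging of per-model class probabilities.
# `member_probs` has shape (n_models, n_traces, n_classes), e.g. classes
# (up, down, undetermined) for first-motion polarity; the values are placeholders.
import numpy as np

def unweighted_model_average(member_probs):
    """Average the predicted class probabilities over the ensemble members."""
    return member_probs.mean(axis=0)

rng = np.random.default_rng(2)
member_probs = rng.dirichlet(np.ones(3), size=(5, 10))   # 5 models, 10 traces, 3 classes
ensemble_probs = unweighted_model_average(member_probs)
predictions = ensemble_probs.argmax(axis=-1)
# The spread across members, e.g. member_probs.std(axis=0), offers a simple
# uncertainty proxy that helps flag out-of-sample traces.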
|
Tommaso Tonolo - GSSI (Gran Sasso Science Institute)
Generalized Lotka Volterra model on the Bethe lattice
Recent times have witnessed a burst of activity in the application of equilibrium and non-equilibrium statistical mechanics to the behaviour of large ecosystems, in particular their stability and the nature of their equilibria. Many results on the coexistence of many species have been obtained using the Generalized Lotka-Volterra model, which, under appropriate hypotheses on the shape of the interaction matrix between species and on the stochasticity entering the dynamics (demographic noise), allows the dynamical stability problem to be recast in terms of equilibrium statistical mechanics. We present here, for the first time, results on the equilibrium statistical mechanics of the Generalized Lotka-Volterra model with a sparse interaction network between species (Bethe lattice). Our analysis, at variance with the standard approach based on a dense interaction network, reveals novel and highly non-trivial heterogeneity effects in the population distributions, such as strong deviations from Gaussianity when the heterogeneity of intra-species interactions is increased. These results are in accordance with data from real ecosystems and with other models of ecological communities, as in [J. Grilli, Nature Communications 11(1), 4743 (2020)]. In this talk I will review how the effective Hamiltonian for species interactions derived in [A. Altieri, F. Roy, C. Cammarota, G. Biroli, Phys. Rev. Lett. 126, 258301 (2021)] can be used to generate local marginals for the population abundance distribution when interactions are sparse, and I will present the main results obtained by varying both the temperature (strength of the demographic noise) and the heterogeneity of species interactions.
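For reference, a commonly used form of the Generalized Lotka-Volterra dynamics with demographic noise (conventions assumed here for illustration; the sparse structure enters through the Bethe-lattice neighbourhood $\partial i$) is
\[
\dot N_i = N_i\Big(r_i - N_i - \sum_{j \in \partial i} \alpha_{ij} N_j\Big) + \sqrt{2 T N_i}\,\eta_i(t),
\]
where $N_i$ is the abundance of species $i$, $\alpha_{ij}$ the heterogeneous interaction matrix restricted to the links of the lattice, $\eta_i$ a zero-mean white noise, and the demographic-noise strength $T$ plays the role of a temperature in the associated equilibrium description.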
|
Davide Valenti - Università degli Studi di Palermo
Role of noise and stochastic modeling in biological systems
We present two examples of complex systems modeled by stochastic differential equations: i) an ecosystem consisting of two interacting bacterial populations in a food product [1]; ii) a marine ecosystem described by the 0-dimensional stochastic version of the well-known biogeochemical flux model (BFM) [2]. In the first system, two generalized Lotka-Volterra equations are used to describe the time behaviour of Listeria monocytogenes and lactic acid bacteria (LAB) during the fermentation period (168 h) of a typical Sicilian salami [1]. The two differential equations are set with the temperature (T), the pH, and the water activity (a_w) treated as stochastic variables. The dynamics of each of these variables is indeed governed by two "drivers": a linear decrease as a function of the time t, and an additive noise term which mimics the effects of random fluctuations. By suitably setting both the interaction coefficients between LAB and L. monocytogenes and the noise intensities, the model provides results in better agreement with the experimental data than those obtained by the corresponding deterministic model. In the second system, starting from the experimental data of the solar irradiance collected at the marine surface, which clearly show an intrinsic stochasticity, the ecosystem dynamics is studied by modeling the noisy fluctuations of the irradiance as a self-correlated Gaussian noise [2]. Nonmonotonic behaviours of the coefficient of variation (a proxy of the variance) of the biomasses are found as a function of both the intensity and the autocorrelation time of the noise source, indicating a noise-induced transition of the ecosystem to an out-of-equilibrium steady state. Moreover, evidence of noise-induced effects on the organic carbon cycling processes underlying the food web dynamics is highlighted. We conclude by noting that the stochastic modeling of biological systems can be fruitfully used to devise more realistic models for the dynamics of an "agricultural ecosystem", a natural (open) system governed by nonlinear dynamics and subject to the effects of both deterministic environmental forcings and randomly varying perturbations. This idea agrees with the approach, used during the last two decades in predictive microbiology, which has allowed better and more reliable predictions of bacterial growth in food products [1, 3]. [1] A. Giuffrida, D. Valenti, G. Ziino, B. Spagnolo, A. Panebianco, A stochastic interspecific competition model to predict the behaviour of Listeria monocytogenes in the fermentation process of a traditional Sicilian salami, Eur. Food Res. Technol. 228, 767 (2009). [2] R. Grimaudo, P. Lazzari, C. Solidoro, D. Valenti, Effects of solar irradiance noise on a complex marine trophic web, Sci. Rep. 12, 12163 (2022). [3] D. Valenti, G. Denaro, F. Giarratana, A. Giuffrida, S. Mazzola, G. Basilone, S. Aronica, A. Bonanno, B. Spagnolo, Modeling of sensory characteristics based on the growth of food spoilage bacteria, Math. Model. Nat. Phenom. 11, 119 (2016).
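As a schematic illustration of this kind of modeling (all coefficients, functional dependencies, and noise amplitudes below are placeholders chosen for readability, not the calibrated parameters of [1]), a minimal Euler-Maruyama integration of two competing populations with one stochastically drifting environmental driver might read:

# Sketch: two competing populations (Lotka-Volterra type) whose growth is
# modulated by a noisy, linearly decreasing environmental driver, integrated
# with an Euler-Maruyama scheme. All parameter values are placeholders.
import numpy as np

rng = np.random.default_rng(3)
dt = 0.01                                     # time step (hours)
n_steps = int(168 / dt)                       # 168 h fermentation period
x = np.array([1e3, 1e2])                      # LAB and L. monocytogenes (placeholder counts)
driver, drift, noise_amp = 25.0, -0.02, 0.05  # e.g. temperature: linear decrease + noise

r = np.array([0.8, 0.5])                      # placeholder growth rates
K = np.array([1e8, 1e6])                      # placeholder carrying capacities
alpha = np.array([[1.0, 0.2],                 # placeholder competition coefficients
                  [1.5, 1.0]])

traj = np.empty((n_steps, 2))
for t in range(n_steps):
    # Environmental driver: deterministic linear decrease plus additive white noise.
    driver += drift * dt + noise_amp * np.sqrt(dt) * rng.normal()
    # Growth rates crudely modulated by the driver (illustrative dependence only).
    growth = r * (1.0 + 0.01 * (driver - 25.0))
    x = x + dt * x * growth * (1.0 - alpha @ (x / K))
    x = np.clip(x, 0.0, None)
    traj[t] = x
# traj now holds the noisy population trajectories over the fermentation window.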
|