The recent surge of multiscale modeling in solid mechanics, spanning from the smallest scale (atoms) to the full system level (e.g., automobiles), has grown into an international multidisciplinary activity that was born from an unlikely source.
When the US Department of Energy (DOE) national laboratories began to reduce underground nuclear tests in the mid-1980s, with the last test conducted in 1992, the idea of simulation-based design and analysis was born.
Within the Accelerated Strategic Computing Initiative (ASCI), the basic premise was to provide more accurate and precise simulation-based design and analysis tools.
Because of the requirements for greater complexity in the simulations, parallel computing and multiscale modeling became the major challenges that needed to be addressed.
In addition, personnel from these national labs encouraged, funded, and managed academic research related to multiscale modeling.
Since parallel computing environments could resolve more degrees of freedom, more accurate and precise algorithmic formulations could be employed.
At Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Oak Ridge National Laboratory (ORNL), the multiscale modeling efforts were driven by the materials science and physics communities with a bottom-up approach.
Each had different programs that tried to unify computational efforts, materials science information, and applied mechanics algorithms with different levels of success.
At Sandia National Laboratories (SNL), the multiscale modeling effort was an engineering top-down approach starting from a continuum mechanics perspective, which was already rich with a computational paradigm.
From the perspective of the DOE national laboratories, the shift away from the large-scale systems experiment mentality occurred because of the 1996 Comprehensive Nuclear-Test-Ban Treaty.
Once industry realized that the notions of multiscale modeling and simulation-based design were invariant to the type of product, and that effective multiscale simulations could in fact lead to design optimization, a paradigm shift began to occur, in varying measures within different industries, as cost savings and accuracy in product warranty estimates were rationalized.
In atmospheric modeling, for example, running a model with a grid size small enough (~500 m) to resolve each possible cloud structure over the whole globe is computationally very expensive.
On the other hand, a computationally feasible global climate model (GCM), with a grid size of ~100 km, cannot resolve the smaller cloud systems.
A balance must therefore be struck so that the model remains computationally feasible while losing as little information as possible; the unresolved small-scale processes are then represented by physically motivated approximations, a process called parametrization (see the sketch below).
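As a rough illustration of this trade-off, the following Python sketch (not part of the original text; the function names and the simple relative-humidity threshold formula are illustrative assumptions rather than any standard scheme) compares the number of horizontal grid columns required at ~500 m versus ~100 km resolution, and uses a toy cloud-fraction formula to stand in for a sub-grid parametrization.

```python
# Illustrative sketch: grid-resolution cost vs. a toy sub-grid parametrization.
# All numbers and formulas here are simplified assumptions for illustration.

EARTH_SURFACE_AREA_KM2 = 510e6  # approximate surface area of the Earth in km^2


def horizontal_columns(grid_size_km: float) -> float:
    """Approximate number of horizontal grid columns covering the globe."""
    return EARTH_SURFACE_AREA_KM2 / (grid_size_km ** 2)


def cloud_fraction(relative_humidity: float, rh_critical: float = 0.8) -> float:
    """Toy cloud-fraction parametrization: no cloud below a critical relative
    humidity, increasing linearly to full cover at saturation (hypothetical)."""
    if relative_humidity <= rh_critical:
        return 0.0
    return min(1.0, (relative_humidity - rh_critical) / (1.0 - rh_critical))


if __name__ == "__main__":
    cloud_resolving = horizontal_columns(0.5)   # ~500 m grid
    gcm = horizontal_columns(100.0)             # ~100 km grid
    print(f"Cloud-resolving columns: {cloud_resolving:.2e}")
    print(f"GCM columns:             {gcm:.2e}")
    print(f"Ratio: {cloud_resolving / gcm:.0f}x more columns at 500 m")
    print(f"Parametrized cloud fraction at RH = 0.9: {cloud_fraction(0.9):.2f}")
```

The ratio works out to roughly 40,000 times more horizontal columns at 500 m than at 100 km, which is why sub-grid processes such as clouds are parametrized rather than resolved in a GCM.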
Besides the many specific applications, one area of research is the development of methods for the accurate and efficient solution of multiscale modeling problems.