During the early 20th century, pioneers such as Jan Tinbergen and Ragnar Frisch advanced the computerization of economics and the growth of econometrics.
As econometrics advanced, regression models, hypothesis tests, and other computational statistical methods became widely adopted in economic research. The scientific objective of these methods is to test theoretical findings against real-world data in ways that allow empirically supported theories to accumulate over time.
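A minimal sketch of this workflow, using Python's statsmodels on simulated data (the variables, coefficients, and sample size here are illustrative assumptions, not drawn from any study):

```python
# Illustrative econometric workflow: fit a regression, then run a
# hypothesis test on a coefficient. All data below are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
education = rng.normal(12, 2, n)                   # hypothetical regressor
wage = 5 + 1.5 * education + rng.normal(0, 3, n)   # hypothetical outcome

X = sm.add_constant(education)                     # add intercept term
model = sm.OLS(wage, X).fit()                      # ordinary least squares

print(model.summary())                             # coefficients, t-stats, p-values
print(model.t_test("x1 = 0"))                      # explicit test of H0: beta = 0
```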
Traditional econometric models partly constrain the data according to established economic principles, whereas machine learning takes a more purely empirical approach to model fitting. Although machine learning excels at classification, prediction, and evaluating goodness of fit, many of its models lack the capacity for statistical inference, which is of greater interest to economic researchers. For example, economic researchers may wish to identify confounders, estimate confidence intervals, and recover other quantities that are not well specified in machine learning algorithms.
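This contrast can be illustrated with a short sketch: a random-forest regressor returns point predictions but no standard errors, while an ordinary least squares fit also reports confidence intervals for each coefficient. The data and model choices below are synthetic and purely illustrative:

```python
# Machine learning vs. statistical inference on the same synthetic data.
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))                      # three synthetic regressors
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(size=400)

# Machine learning: accurate point predictions, but no standard errors
# or confidence intervals for the effect of each regressor.
forest = RandomForestRegressor(random_state=0).fit(X, y)
print(forest.predict(X[:5]))

# Econometric regression: explicit parameter estimates with inference.
ols = sm.OLS(y, sm.add_constant(X)).fit()
print(ols.conf_int())                              # 95% confidence intervals
```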
In addition, delegating routine data analysis to such methods would enable researchers to focus more on subjects such as causal inference, confounding variables, and the realism of the model.
Under proper guidance, machine learning models may accelerate the development of accurate, applicable economic models through large-scale empirical data analysis and computation.[13] Dynamic modeling methods are frequently adopted in macroeconomic research to simulate economic fluctuations and test the effects of policy changes.
Dynamic stochastic general equilibrium (DSGE) models build on micro-founded economic principles to capture characteristics of the real-world economy in an environment of intertemporal uncertainty.
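As a hedged illustration of this class of models, the textbook Brock–Mirman stochastic growth model (a special case with log utility and full capital depreciation) admits the closed-form savings rule k_{t+1} = αβ z_t k_t^α and can be simulated in a few lines; the parameter values below are illustrative rather than calibrated:

```python
# Minimal sketch of a micro-founded dynamic stochastic model: the
# Brock-Mirman growth model, whose optimal policy has the closed form
# k' = alpha * beta * z * k**alpha. Parameters are illustrative only.
import numpy as np

alpha, beta = 0.36, 0.96          # capital share, discount factor
rho, sigma = 0.95, 0.01           # persistence and s.d. of technology shock
T = 200

rng = np.random.default_rng(42)
log_z = np.zeros(T)
k = np.empty(T)
k[0] = (alpha * beta) ** (1 / (1 - alpha))   # deterministic steady state

for t in range(T - 1):
    log_z[t + 1] = rho * log_z[t] + sigma * rng.normal()  # AR(1) technology
    y = np.exp(log_z[t]) * k[t] ** alpha                  # production
    k[t + 1] = alpha * beta * y                           # optimal savings

consumption = (1 - alpha * beta) * np.exp(log_z[:-1]) * k[:-1] ** alpha
print(consumption[:5])            # simulated consumption fluctuations
```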
Economists have embraced Stata as one of the most popular statistical software packages because of its breadth, accuracy, flexibility, and reproducibility.