Artificial Intelligence in Economic Modeling and Forecasting
Abstract
Artificial Intelligence (AI) originated from multiple disciplines, with Alan Turing’s work on the Turing Machine laying its foundation. Early developments such as neural networks and the Turing Test marked the beginning of AI’s evolution through cycles of enthusiasm and setbacks. Recent breakthroughs in big data, GPU computing, and deep learning have made AI a part of daily life, from healthcare and translation to robotics and gaming. However, its rapid expansion raises serious concerns regarding autonomous weapons, surveillance, labor displacement, and the existential threat posed by unsupervised superintelligent systems. Reflecting this surge, AI and machine learning publications have skyrocketed since 2019, with most research emerging only in the past five years and spanning diverse fields beyond computer science. In parallel, machine learning techniques have increasingly influenced modern economics, building on econometric tools such as regression, principal components, and ARIMA models. Concepts such as supervised learning and methods like logistic regression, LASSO, and neural networks have bridged the gap between traditional econometrics and data science, enhancing predictive accuracy and flexibility. Lawrence Klein’s Current Quarter Model (CQM), which leverages high-frequency indicators and bridge equations to nowcast GDP, exemplifies this integration. His approach, now echoed in MIDAS regressions and global modeling efforts like Project LINK, remains vital. The COVID-19 shock underscored the need for adaptive, interdisciplinary forecasting frameworks that incorporate health, behavioral, and environmental variables in an interconnected world. The use of AI, specifically machine learning (ML), in official statistics is very recent compared with other disciplines. This delay may seem at odds with the objectives of official statistics.
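The bridge-equation idea behind Klein's CQM can be sketched in a few lines: monthly indicators are aggregated to quarterly frequency, quarterly GDP growth is regressed on those quarterly averages, and the fitted equation is used to nowcast the current quarter. All data and coefficients below are synthetic, invented purely for illustration.

```python
import numpy as np

# Hedged sketch of a bridge equation for GDP nowcasting.
# All figures below are synthetic.

rng = np.random.default_rng(0)

# 40 quarters x 3 monthly indicators, observed monthly (120 months)
monthly = rng.normal(size=(120, 3))

# Bridge step: aggregate monthly indicators to quarterly averages
quarterly_x = monthly.reshape(40, 3, 3).mean(axis=1)  # (quarters, indicators)

# Synthetic "true" relationship plus noise
beta_true = np.array([0.5, -0.2, 0.3])
gdp_growth = quarterly_x @ beta_true + 0.05 * rng.normal(size=40)

# Estimate the bridge equation by OLS on the first 39 quarters
X = np.column_stack([np.ones(39), quarterly_x[:39]])
beta_hat, *_ = np.linalg.lstsq(X, gdp_growth[:39], rcond=None)

# Nowcast the current quarter from its aggregated indicators
x_now = np.concatenate([[1.0], quarterly_x[39]])
nowcast = x_now @ beta_hat
print(round(float(nowcast), 3))
```

In practice the current quarter's indicators are only partially observed, which is where MIDAS-style mixed-frequency regressions refine this simple aggregate-then-regress scheme.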
At the same time, the digital revolution has led to an abundance of all types of data, and the demand for data has increased considerably. Several reasons may be offered for this delay. One is the structure of official statistical organizations and the role of statisticians within them. Breakthroughs in AI technology and the use of satellite imagery have disrupted the way official statisticians collect, process, and analyze data. Some official statisticians are skeptical about employing new technological developments and are reluctant to work with data that does not rely on probability samples and legacy methods. Concerns about quality, ethics, and privacy are the major factors behind this reluctance, compounded by a shortage of both financial and human resources. Over the last seven years, the efforts of the United Nations Economic Commission for Europe (UNECE) to modernize official statistics, together with its Machine Learning Group, have played a significant role in leading many national statistical offices (NSOs) to adopt and apply machine learning methods. In two years, the group grew from 120 statisticians from 23 countries to more than 400 statisticians from 35 countries. Currently, many NSOs and international organizations are developing applications of ML across various areas of data collection. The IMF has developed the PortWatch Platform, which uses satellite-based vessel data to provide real-time indicators of port and trade activity. Statistics Colombia is predicting poverty rates using daytime and nighttime satellite imagery, and Statistics Indonesia has a similar project. Statistics Netherlands uses web scraping to identify different types of companies. The U.S.
Census Bureau and the Bureau of Transportation Statistics (BTS) jointly produce the Commodity Flow Survey and have reduced the manual workload by using machine learning methods. The Federal Statistical Office of Switzerland developed StatBot, a chatbot for sharing statistical information that will soon provide services in three languages. The Swedish Land Registry (SLR), the government agency charged with securing the ownership of real estate and making geodata available to society, uses handwritten text recognition together with neural networks to extract information from documents going back to the 1850s. The Australian Bureau of Statistics is undertaking a comprehensive review of the Australian and New Zealand Standard Classification of Occupations using large language models. Statistics Canada has also explored the use of large language models to automate and enhance statistical report generation, aiming to improve efficiency and reduce manual workloads. The experience of statistical offices shows that machine learning can help produce data that are more relevant, of better quality, and delivered faster or more cost-efficiently, without a significant reduction in any of these dimensions. Machine learning is particularly advantageous in processes that are labor-intensive, repetitive, and stable, such as classification and coding. Another lesson from the activities of the Machine Learning Group is that sharing and collaboration within and between statistical organizations are essential to advancing the use of machine learning, building on lessons learned about where it adds value, where it shows promise, and where it offers less. Artificial intelligence (AI) is reshaping economic policymaking by enabling more dynamic, data-driven analysis and forecasting. Unlike traditional models, AI systems, especially those utilizing machine learning, can adapt to changing conditions and extract insights from massive datasets.
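As a hedged sketch of the labor-intensive, repetitive coding task where ML typically adds value, the toy classifier below assigns occupation codes to free-text job descriptions with a bag-of-words naive Bayes model. All codes and training phrases are invented for the example; production systems at statistical offices are trained on far larger labeled corpora.

```python
from collections import Counter, defaultdict
import math

# Toy training data: free-text descriptions mapped to occupation codes.
# The codes and phrases are invented for illustration.
train = [
    ("writes software and debugs programs", "2512"),
    ("develops web applications in python", "2512"),
    ("teaches mathematics to students", "2330"),
    ("prepares lessons and grades exams", "2330"),
    ("drives a delivery truck on long routes", "8332"),
    ("transports goods by heavy vehicle", "8332"),
]

# Fit a multinomial naive Bayes model: per-class word counts plus priors.
word_counts = defaultdict(Counter)
class_counts = Counter()
vocab = set()
for text, code in train:
    words = text.split()
    word_counts[code].update(words)
    class_counts[code] += 1
    vocab.update(words)

def classify(text):
    """Return the most probable occupation code for a description."""
    scores = {}
    for code in class_counts:
        # Log prior plus add-one-smoothed log likelihood of each word.
        score = math.log(class_counts[code] / len(train))
        total = sum(word_counts[code].values())
        for w in text.split():
            score += math.log((word_counts[code][w] + 1) / (total + len(vocab)))
        scores[code] = score
    return max(scores, key=scores.get)

print(classify("debugs python programs"))  # "2512" on this toy data
```

Because such coders are probabilistic, offices usually route low-confidence predictions to human reviewers, which is how the manual workload shrinks without sacrificing quality.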
Central banks and institutions such as the IMF and the World Bank now use AI for inflation tracking, labor analysis, and risk forecasting, while natural language processing aids in interpreting media and public sentiment. AI improves forecasting accuracy for key indicators such as GDP and inflation by continuously updating projections, and deep learning and reinforcement learning further support real-time decision-making in an increasingly volatile global economy. AI is also transforming fiscal policy, trade, and regulation. Governments use predictive analytics for tax reform, compliance, and investment planning, while AI models assess trade shocks and climate risks. However, the rapid adoption of AI raises concerns about bias, transparency, and inequality, particularly in developing countries that lack data infrastructure and expertise. Ensuring that AI systems are ethical, auditable, and inclusive is essential. Ultimately, AI's societal impact will hinge not just on innovation but on building governance frameworks that safeguard human rights and promote equitable outcomes. The global approach to AI regulation is fragmented: the EU leads with comprehensive laws, countries like the U.S. favor sector-specific guidelines, and China pursues centralized, state-aligned control. International bodies such as the OECD promote ethical principles, but challenges remain, including cross-border enforcement, rapid innovation, and definitional ambiguity. To govern AI ethically, regulations must embed transparency, explainability, and oversight from the start. Independent audits, impact assessments, and robust privacy protections are crucial, especially in sensitive sectors such as healthcare and justice. Public trust depends on democratic participation and the inclusion of marginalized voices in shaping AI governance.
Human-centered AI (HCAI) presents an alternative vision, one that supports rather than replaces human decision-making and promotes usability, accountability, and equity. In fields such as education and healthcare, HCAI can enhance services while upholding ethical standards. However, AI’s labor market effects are concerning, as automation threatens jobs and exacerbates inequality. Without deliberate policies, such as reskilling and fair labor protections, especially in the Global South, AI could deepen global divides. Yet with inclusive governance, AI has the potential to reduce poverty, empower workers, and create a more equitable digital economy. The labor market implications of AI are profound. Cognitive automation threatens both low-skill and middle-income jobs while concentrating wealth among those who own the technology. These risks are widening income inequality and weakening social cohesion. Scholars such as Daron Acemoglu warn of "excessive automation" that replaces workers rather than empowering them, while others like Erik Brynjolfsson advocate worker-augmenting AI and institutional reform to ensure inclusive innovation. Global disparities are stark: developed nations invest in reskilling and infrastructure, while developing economies face job displacement without adequate digital capacity. AI can be a force for upward mobility or social fragmentation, depending on how societies manage the transition. The impact of AI on poverty will depend heavily on policy choices. While AI has already enabled life-saving advances in agriculture, healthcare, education, and microfinance in countries like Kenya, Colombia, and India, it also risks excluding low-income workers through automation and exploitative digital labor models. The rise of precarious gig work, digital piecework, and content moderation in the Global South underscores the need for inclusive labor protections, fair compensation, and recognition of data as a form of labor.
Without intervention, the benefits of AI will continue to deepen global inequalities. With deliberate governance, however, AI can help build a fairer, more resilient, and more equitable digital economy.