Tackling AI, taxation, and the fair distribution of AI’s benefits


Artificial intelligence poses significant challenges to income growth, inequality, and meaningful work. At the same time, AI has geopolitical significance, with many national and supranational governments shying away from intervening in the rapidly growing industry to preserve its potential for global economic leadership and leverage it fully for national security.

Consequently, a handful of companies have become the sole players that are sufficiently resourced to compete in the heated global AI market that is already accelerating inequality both domestically and globally. As these highly capitalized tech corporations bank on AI’s disruptive power and future success, the question of how to level the playing field—whether by leveraging existing corporate income taxation regimes or introducing a novel AI tax—becomes extremely urgent. 

AI-powered services are slippery. They can be traded easily across borders, making it difficult for national governments to ensure that consumers, producers, government agencies, and workers benefit from them equitably. The emergence of multinational digital behemoths that amass significant economic and political clout thanks to their control of data and computing resources has been heightening social inequalities. Importantly, research finds this concentration of resources contributes to lower economic dynamism and less innovation—a longstanding concern both in the United States and around the world.

Unsurprisingly, then, governments, academics, and civil society are now looking for concrete approaches to address AI’s inequality issue, with some looking to the tax code as a potential avenue for change. Taxation can redistribute wealth and fund social services, reduce wealth gaps, and promote greater economic equity in society. Therefore, if taxation can address the high levels of concentration in the digital industry, then it will likely help stimulate a more equal, innovative, and prosperous society.  

Tax lawyers already have started investigating how so-called informational capitalism—the increasing role of data, networks, and digital platforms in driving growth, labor market transformations, and the distribution of gains from technological progress—affects states’ fiscal capacities and potentially contributes to tax avoidance by firms, including companies that dominate key parts of the AI supply chain. At the same time, scholars and policymakers alike have centered the fear of social costs flowing from large-scale automation through AI by contemplating a “robot tax” intended to disincentivize firms’ replacement of workers with machines.

Much of this discussion has informed significant advances in the global taxation of digital services. The Organisation for Economic Co-operation and Development’s Two-Pillar Solution, for example, is designed to address tax avoidance and harmonize international tax rules by implementing a 15 percent minimum tax rate for multinational enterprises operating in the digital economy, regardless of the location of their operations. Yet this ambitious framework does not explicitly address AI, and the success of its implementation and enforcement remains to be seen.
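The core mechanics of that 15 percent minimum can be sketched in a few lines. The sketch below is a deliberate simplification of the Pillar Two “top-up tax” logic: it ignores substance-based income exclusions, safe harbors, and the rules for allocating the top-up among jurisdictions, and all figures are hypothetical.

```python
# Simplified sketch of a Pillar Two-style minimum tax: if a multinational's
# effective tax rate in a jurisdiction falls below the 15 percent minimum,
# the shortfall is collected as a "top-up" tax. This omits substance-based
# carve-outs and safe harbors; the example figures are hypothetical.

MINIMUM_RATE = 0.15

def top_up_tax(profit: float, taxes_paid: float) -> float:
    """Additional tax owed in a jurisdiction under a simplified minimum-tax rule."""
    if profit <= 0:
        return 0.0
    # Top-up equals the gap between the 15% minimum and taxes actually paid.
    return max(0.0, MINIMUM_RATE * profit - taxes_paid)

# A firm books $1 billion of profit in a low-tax jurisdiction and pays $50
# million there (a 5 percent effective rate); the top-up is $100 million.
print(top_up_tax(1_000_000_000, 50_000_000))
```

The point of the exercise is that the minimum tax targets where profits are booked, not where AI models are built or deployed, which is why the framework leaves AI-specific questions open.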

Moreover, increasing corporate taxes can help rectify the inequity arising from taxing workers’ wages more than companies’ profits, yet increasing taxes on corporations’ incomes from AI risks reducing digital innovation—or, worse, encouraging tax evasion.  

AI-powered digital services also pose specific problems that are not properly addressed by broad-stroke approaches staked on a digital economy frame. First, utilizing AI systems requires significant and costly computing resources. This creates barriers to use by low-income countries or firms operating on thin margins, unevenly distributing the potential benefits of AI-driven automation and innovation.

Additionally, AI tools are trained on electronic data that are freely available on the internet, without the owners of those data being properly remunerated. Not only do the originators of these data fail to benefit financially from their use in AI training, as shown by the recent writers’ strike in the U.S. film industry, but governments may also find it difficult to tax revenue streams from data and AI services that are intentionally based in jurisdictions that minimize tax liabilities and financial scrutiny.

Taxing inputs instead of outputs—that is, taxing the provision of data to AI developers through mobile applications or the use of cloud services as they build and train their systems—does not allow for differentiation between the actual contribution of these data and services to profits and the added value of the products and services produced by AI systems. Indeed, not all data used to train AI tools is valued to the same extent. How companies go about this valuation process is inaccessible to outsiders since disclosure is not required. A regulatory solution—whether through stress-testing the market or by gaining access to training data valuation processes—is urgently needed to ensure fair and competitive markets.

The development, training, and use of AI also cause significant negative externalities, including environmental and social costs, which are not fully accounted for. Indeed, the high energy costs and related carbon emissions produced by ever-increasing computing requirements help explain the growing concentration of the industry, since it has become prohibitively expensive for newcomers to enter. Environmental taxation might provide a route to a fairer distribution of incomes in the AI field, but such a step would need to address the challenges of existing frameworks, such as carbon credit trading. Additionally, taxing data inputs based on energy consumption only loosely reflects the value these tools create.
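As a rough illustration of the environmental-taxation route, the sketch below prices the carbon footprint of a single training run. The carbon price, grid intensity, and energy figure are all assumptions chosen for illustration, not measured values for any real model or jurisdiction.

```python
# Illustrative carbon levy on an AI training run: energy consumed, times the
# carbon intensity of the electricity grid, times a carbon price. All three
# numbers below are hypothetical assumptions, not real-world measurements.

CARBON_PRICE = 90.0    # USD per tonne of CO2 (assumed)
GRID_INTENSITY = 0.4   # tonnes of CO2 per MWh (assumed grid average)

def training_carbon_levy(energy_mwh: float) -> float:
    """Carbon levy for a training run consuming energy_mwh megawatt-hours."""
    emissions_tonnes = energy_mwh * GRID_INTENSITY
    return emissions_tonnes * CARBON_PRICE

# A large training run assumed to consume 10,000 MWh:
print(training_carbon_levy(10_000))  # about 360,000 USD
```

Even this toy calculation surfaces the limitation noted above: the levy scales with energy use, not with the value the resulting model creates, so two runs with identical footprints would be taxed identically even if one yields a far more profitable system.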

Clearly, the nascent field of AI taxation needs both expansion and deepening. Concrete and incremental strategies for taxing AI will be key, including:

  • Examine valuation practices and AI-relevant legislation, including intellectual property rights. There are court rulings in Europe, the United States, and Australia that ascribe a monetary value to AI systems, such as in product liability cases, antitrust rulings, or data-breach cases that quantify damages. These instances can be instructive for thinking about the profit-making dynamics of AI, especially the expropriation of intellectual property rights.
  • Sketch out the potential international AI tax base and how it can be measured. It remains unclear whether and how AI can be measured as an asset, as income, or as economic activity, as well as what components would need to be included, such as model, data, server, chips, and so on. Answers to these questions are key for considering potential AI tax liabilities but would require the development of a new accounting system for the digital commons, alongside (digital) compliance mechanisms. Interestingly, AI could potentially optimize tax, accounting, and compliance systems, making them more efficient.
  • Investigate loopholes and avoidance strategies in existing tax systems. Such an analysis would need to be driven by the question of how the tax code incentivizes key AI actors and adjacent industries, and how key players are evading current taxation regimes. Large global tech companies—notably those with the resources to develop and deploy AI—routinely avoid taxes by shifting revenue and profits through tax havens or low-tax countries, or by delaying tax payments. In this regard, tax authorities should themselves consider using AI to detect fraud and tax evasion.
  • Evaluate the size and impact of AI’s negative externalities across its entire value chain and integrate them into existing regulatory mechanisms. There are already models for ascribing monetary value to AI’s most immediate effects—for example, existing carbon emission frameworks can and do apply to the environmental footprint of AI. Assessing these existing frameworks could be instructive for considering strategies for AI taxation.
  • Assess how to leverage AI taxation frameworks both to prevent excessive use of AI that may heighten worker surveillance rather than improve productivity and to incentivize meaningful AI use. Evidence is mounting that AI is not delivering vastly improved productivity across all industries and types of jobs. Shifting the tax burden away from labor toward (digital) capital can go a long way toward preventing inefficient automation and strengthening incentives for productivity-enhancing innovations.
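The logic of the last point can be made concrete with a stylized comparison. In the sketch below, every tax rate and cost is hypothetical: the point is only that when labor is taxed more heavily than machines, automating can be privately profitable even when the machine barely improves on the worker it replaces.

```python
# Stylized illustration of "inefficient automation": a firm compares the
# after-tax cost of a worker with that of a machine. All rates and costs
# are hypothetical assumptions chosen to illustrate the mechanism.

def after_tax_cost(pre_tax_cost: float, tax_rate: float) -> float:
    """Cost to the firm of an input, including the tax levied on it."""
    return pre_tax_cost * (1 + tax_rate)

wage, machine_cost = 100.0, 95.0  # the machine is only slightly cheaper pre-tax

# Status quo: heavy payroll taxes on labor, light taxation of capital.
labor_now = after_tax_cost(wage, 0.30)
capital_now = after_tax_cost(machine_cost, 0.05)
print(capital_now < labor_now)  # True: the firm automates despite a tiny gain

# After shifting part of the burden from labor toward (digital) capital:
labor_shifted = after_tax_cost(wage, 0.10)
capital_shifted = after_tax_cost(machine_cost, 0.20)
print(capital_shifted < labor_shifted)  # False: marginal automation no longer pays
```

The comparison abstracts from productivity differences and dynamic effects, but it captures why the relative tax treatment of labor and capital shapes firms’ automation decisions.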

In a world in which rapid AI deployment and aggressively heightened inequalities collide, it is time to find answers to the question of whether a bold AI taxation framework can be the key to curbing inequality, tackling environmental costs, and reshaping the unchecked dominance of tech giants.

Mona Sloane is an assistant professor of data science and media studies at the University of Virginia. She studies the intersection of technology and society, specifically in the context of AI design, use, and policy. She is a faculty lead in the Digital Technology and Democracy Lab at UVA’s Karsh Institute of Democracy, affiliated faculty with the Department of Women, Gender and Sexuality, and faculty affiliate with the Thriving Youth in a Digital Environment research initiative. Sloane also convenes the Co-Opting AI series and serves as the editor of the Co-Opting AI book series at the University of California Press, as well as the technology editor for Public Books. Her growing research group, Sloane Lab, conducts empirical research on the implications of technology for the organization of social life and spearheads social science leadership in applied work on responsible AI, public scholarship, and technology policy.

Ekkehard Ernst is chief macroeconomist at the International Labour Organization, where he is responsible for understanding the future of work and analyzing alternative paths for jobs and earnings to improve upon current trends. His work helps decision-makers understand developments in skills and labor costs around the globe, providing them with the necessary intelligence to make effective long-term decisions. Before joining the ILO in 2008, he worked at the Organisation for Economic Co-operation and Development and the European Central Bank. He has published extensively in the area of labor market trends and reforms and the impact of financial markets on jobs. Ernst studied in Mannheim, Saarbrücken, and Paris and holds a Ph.D. from the École des Hautes Études en Sciences Sociales.

