Will User-Contributed AI Training Data Eat Its Own Tail?
This paper examines this question and finds that the answer is likely to be no. The environment studied begins with users who are motivated to contribute to a public good. Their contributions determine the quality of that public good but also create a free-rider problem. When an AI is trained on that contributed data, it can generate contributions to the public good similar to the users' own. This is shown to increase human users' incentive to provide contributions that are more costly to supply, so the overall quality of contributions from AI and humans combined rises relative to human-only contributions. In situations where platform providers want to elicit more contributions using explicit incentives, the rate of return on such incentives is shown to be lower in this environment.
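To make the mechanism described above concrete, the following is a minimal numerical sketch, not the paper's actual model: the functional forms, parameter values, and the `best_choice` helper are illustrative assumptions. A contributor chooses between a low-cost and a high-cost (higher-quality) contribution; once an AI trained on past data can replicate the low-cost content, the marginal value of a human supplying it falls, tilting the contributor toward the costlier contribution.

```python
# Stylized illustration of the incentive shift; all parameters are assumed,
# not taken from the paper's model.

def best_choice(value_low: float, value_high: float,
                cost_low: float, cost_high: float) -> str:
    """Return the contribution type with the larger net payoff (value minus cost)."""
    payoff_low = value_low - cost_low
    payoff_high = value_high - cost_high
    if max(payoff_low, payoff_high) <= 0:
        return "none"
    return "low-cost" if payoff_low >= payoff_high else "high-cost"

# Illustrative parameters (assumed).
cost_low, cost_high = 1.0, 3.0      # effort costs of the two contribution types
value_low, value_high = 2.5, 4.0    # contributor's value from each type

# Without AI: the low-cost contribution yields the higher net payoff.
print("Without AI:", best_choice(value_low, value_high, cost_low, cost_high))

# With AI trained on past contributions: it can supply low-cost content itself,
# so the marginal value of a human low-cost contribution falls.
ai_discount = 0.75                   # assumed share of low-cost value the AI replicates
print("With AI:   ", best_choice(value_low * (1 - ai_discount), value_high,
                                 cost_low, cost_high))
```

Under these assumed numbers the contributor switches from the low-cost to the high-cost contribution once the AI is present, which is the direction of the effect the abstract describes; the paper's formal model, not this sketch, establishes the general result.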
Published Versions
Joshua S. Gans, 2024. "Will user-contributed AI training data eat its own tail?," Economics Letters.