Finding Viable Management Solutions for Artificial Intelligence

2024-01-19

Concerns about generative AI seem to be spreading as fast as the technology itself. They are driven by anxiety over the unprecedented scale at which false information can spread, as well as by fears of unemployment and loss of control over creative works. A more futuristic concern is that AI will become so powerful that it leads to the extinction of the human species.


These concerns have sparked calls for the regulation of AI technology. For example, the European Union has responded to citizen-driven calls for regulation with its AI Act, while countries like the UK and India have so far taken a more laissez-faire approach.


In the United States, the White House issued an executive order on "Safe, Secure, and Trustworthy Artificial Intelligence" on October 30, 2023. It sets guidelines for reducing both immediate and long-term risks from AI technology. For example, it requires developers of the most powerful AI systems to share their safety test results with the federal government, and it calls on Congress to enact consumer privacy legislation addressing the collection of personal data by AI systems.


Given the push for AI regulation, it is important to consider which regulatory approaches are feasible. This question has two aspects: what is technically feasible today and what is economically feasible.


1. Respect Copyright


One approach to regulating AI is to limit training data to public-domain materials and to copyrighted materials that AI companies have obtained permission to use. AI companies can accurately determine which data samples they are permitted to train on and restrict themselves to licensed materials. This is technically feasible.
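
To make this concrete, here is a minimal sketch of license-based filtering of a training corpus, assuming each sample carries license metadata; the field names and the allow-list of licenses are illustrative assumptions, not any vendor's actual pipeline.

```python
# Hypothetical license-based filtering of a training corpus. The license
# labels and allow-list below are illustrative assumptions.
from dataclasses import dataclass

ALLOWED_LICENSES = {"public-domain", "cc0", "vendor-licensed"}

@dataclass
class Sample:
    text: str
    license: str  # e.g. "public-domain", "all-rights-reserved"

def filter_training_data(corpus: list[Sample]) -> list[Sample]:
    """Keep only samples whose license permits use in training."""
    return [s for s in corpus if s.license in ALLOWED_LICENSES]

corpus = [
    Sample("A public-domain novel.", "public-domain"),
    Sample("A copyrighted news article.", "all-rights-reserved"),
]
print(len(filter_training_data(corpus)))  # prints 1
```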


Whether it is economically feasible is less obvious. The quality of AI-generated content depends on the quantity and richness of the training data, so AI vendors have an incentive not to limit themselves to licensed content. However, some generative AI companies now claim to use only licensed content; Adobe and its Firefly image generator are one example. This suggests the approach is economically feasible as well.


2. Attribute Output to Training Data Creators


Attributing the output of AI technology to specific creators - artists, singers, writers, and so on - so that they can be compensated is another potential regulatory approach to generative AI. However, the complexity of the underlying models makes it effectively impossible to determine which training samples a given output draws on. Even if that were possible, it would still be impossible to quantify how much each sample contributed to the output.


Attribution is an important issue because it is likely to determine whether creators and their licensors choose to embrace or resist AI technology. The 148-day Hollywood writers' strike, and the concessions the writers won, illustrate what is at stake.


This type of regulation, applied at the output end of AI systems, is not currently technically feasible.


3. Distinguish Between Human and AI-Generated Content


One immediate concern about AI technology is that it can generate false information. This is already happening to some extent, and it is a significant worry for a public that relies on trustworthy news sources to sustain democratic processes.


There is considerable startup activity aimed at developing technology that can distinguish AI-generated from human-generated content, but so far this technology lags behind generative AI itself. Current methods focus on recognizing the statistical patterns of generative models, which is almost certainly a losing battle.
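
One common heuristic in this space scores text by its perplexity under a reference language model, on the theory that machine-generated text tends to be unusually predictable. The sketch below illustrates the idea using the open GPT-2 model; the threshold is an illustrative assumption, and light paraphrasing defeats this kind of check, which is part of why it is a losing battle.

```python
# A naive perplexity-based detector sketch. The fixed threshold is an
# illustrative assumption; real detectors are more elaborate but face
# the same fundamental evasion problem.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # The model's loss is the mean per-token cross-entropy.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

THRESHOLD = 40.0  # assumed cutoff; lower perplexity = more "predictable"

def looks_machine_generated(text: str) -> bool:
    return perplexity(text) < THRESHOLD
```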


This type of regulation is not currently technically feasible, although the field may see rapid progress.


4. Attribute Output to AI Companies


Attributing the output of AI systems to specific AI vendors is possible using well-understood, mature cryptographic signature technology. AI vendors can cryptographically sign all of their systems' outputs, and anyone can then verify those signatures.
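
As a sketch of how this could work, the following uses Ed25519 signatures from the Python cryptography package; key distribution, timestamping, and binding signatures to specific media formats are real deployment questions omitted here.

```python
# Minimal signing/verification sketch with Ed25519 (Python "cryptography"
# package). Key management and public-key distribution are omitted.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Vendor side: generate a long-lived key pair and publish the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Vendor side: sign each piece of generated content before release.
output = b"Text produced by the vendor's model."
signature = private_key.sign(output)

# Verifier side: check the content against the vendor's public key.
try:
    public_key.verify(signature, output)
    print("Valid signature: output attributable to this vendor.")
except InvalidSignature:
    print("Invalid signature: not attributable to this vendor.")
```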


This technology is already embedded in basic computing infrastructure - it is, for example, how a web browser verifies the identity of the website you are connecting to - so AI companies could deploy it easily. A different question is whether one wants to rely only on AI-generated content from a few large, established vendors whose signatures can be verified.


This form of regulation, which operates at the output end of AI tools, is thus both technically and economically feasible.


It will be important for policymakers to understand the potential costs and benefits of each form of regulation. But first, they need to understand which regulatory approaches are technically and economically feasible.