Bill Summary
The Identifying Outputs of Generative Adversarial Networks Act, or IOGAN Act, is a bill, passed by the House of Representatives and referred to the Senate, that directs the Director of the National Science Foundation (NSF) to support research on generative adversarial networks (GANs) and other comparable technologies that can produce manipulated or synthesized content, commonly known as deepfakes. The bill also calls for the National Institute of Standards and Technology (NIST) to support research and standards for examining the function and outputs of GANs, and it requires a report to Congress on the feasibility of public-private partnerships to detect manipulated or synthesized content. The bill defines a "generative adversarial network" as a machine learning process in which two neural networks compete: a generator produces artificial content while a discriminator attempts to detect it, and the resulting feedback loop pushes the generator toward increasingly higher-quality outputs and the discriminator toward better detection. The legislation recognizes the potential national security and societal impact of GANs and aims to support research and development in this area.
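The competing-network process described in the bill's definition can be illustrated with a minimal toy sketch (this example is not part of the bill, and every function name and parameter in it is an illustrative assumption): a one-parameter "generator" tries to shift its samples toward real data drawn from a normal distribution, while a logistic "discriminator" is trained to tell the two apart, and the two are updated in alternation to form the feedback loop the bill describes.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_toy_gan(real_mean=4.0, steps=2000, lr=0.05, seed=0):
    """Toy adversarial loop: generator samples g_mu + noise, discriminator
    is D(x) = sigmoid(w*x + b). Both are updated in alternation."""
    rng = np.random.default_rng(seed)
    w, b = 0.1, 0.0   # discriminator parameters
    g_mu = 0.0        # generator parameter (mean of its samples)
    n = 32            # minibatch size
    for _ in range(steps):
        real = rng.normal(real_mean, 1.0, n)
        fake = g_mu + rng.normal(0.0, 1.0, n)

        # Discriminator step: ascend log D(real) + log(1 - D(fake)),
        # i.e. push D(real) toward 1 and D(fake) toward 0.
        d_real = sigmoid(w * real + b)
        d_fake = sigmoid(w * fake + b)
        grad_w = np.mean((1 - d_real) * real) - np.mean(d_fake * fake)
        grad_b = np.mean(1 - d_real) - np.mean(d_fake)
        w += lr * grad_w
        b += lr * grad_b

        # Generator step: ascend log D(fake), i.e. shift g_mu so the
        # discriminator mistakes generated samples for real ones.
        fake = g_mu + rng.normal(0.0, 1.0, n)
        d_fake = sigmoid(w * fake + b)
        g_mu += lr * np.mean((1 - d_fake) * w)
    return g_mu, w, b
```

Under these assumptions the generator's mean drifts toward the real data's mean as the two networks push against each other, which is the "increasingly higher-quality artificial outputs" dynamic the bill's definition refers to; real GANs apply the same alternating scheme to deep networks over images, audio, or video.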
Possible Impacts
1. This legislation could affect researchers and scientists studying artificial intelligence, as it directs the National Science Foundation to support research on the outputs of generative adversarial networks. This could lead to increased funding and resources for these individuals, but could also put pressure on them to produce results in a timely manner.
2. The public could also be affected by this legislation, as it calls for research on public understanding and awareness of manipulated and synthesized content. This could lead to educational initiatives aimed at helping people discern the authenticity of digital content, but could also raise concerns about privacy and the spread of misinformation.
3. Private companies, particularly digital media companies, could be impacted by this legislation as well. The report called for in section 5 will explore the feasibility of partnerships between the private sector and the government to detect manipulated or synthesized content. This could lead to potential collaborations and sharing of resources, but could also result in increased regulation and scrutiny for these companies.
[Congressional Bills 116th Congress]
[From the U.S. Government Publishing Office]
[H.R. 4355 Referred in Senate (RFS)]

116th CONGRESS
1st Session

H. R. 4355

IN THE SENATE OF THE UNITED STATES

December 10, 2019

Received; read twice and referred to the Committee on Commerce, Science, and Transportation

AN ACT

To direct the Director of the National Science Foundation to support research on the outputs that may be generated by generative adversarial networks, otherwise known as deepfakes, and other comparable techniques that may be developed in the future, and for other purposes.

Be it enacted by the Senate and House of Representatives of the United States of America in Congress assembled,

SECTION 1. SHORT TITLE.

This Act may be cited as the ``Identifying Outputs of Generative Adversarial Networks Act'' or the ``IOGAN Act''.

SEC. 2. FINDINGS.

Congress finds the following:

(1) Research gaps currently exist on the underlying technology needed to develop tools to identify authentic videos, voice reproduction, or photos from manipulated or synthesized content, including those generated by generative adversarial networks.

(2) The National Science Foundation's focus to support research in artificial intelligence through computer and information science and engineering, cognitive science and psychology, economics and game theory, control theory, linguistics, mathematics, and philosophy, is building a better understanding of how new technologies are shaping the society and economy of the United States.

(3) The National Science Foundation has identified the ``10 Big Ideas for NSF Future Investment'', including ``Harnessing the Data Revolution'' and the ``Future of Work at the Human-Technology Frontier'', in which artificial intelligence is a critical component.
(4) The outputs generated by generative adversarial networks should be included under the umbrella of research described in paragraph (3), given the grave national security and societal impact potential of such networks.

(5) Generative adversarial networks are not likely to be the sole technique of artificial intelligence or machine learning capable of creating credible deepfakes, and other comparable techniques may be developed in the future to produce similar outputs.

SEC. 3. NSF SUPPORT OF RESEARCH ON MANIPULATED OR SYNTHESIZED CONTENT AND INFORMATION SECURITY.

The Director of the National Science Foundation, in consultation with other relevant Federal agencies, shall support merit-reviewed and competitively awarded research on manipulated or synthesized content and information authenticity, which may include--

(1) fundamental research on digital forensic tools or other technologies for verifying the authenticity of information and detection of manipulated or synthesized content, including content generated by generative adversarial networks;

(2) fundamental research on technical tools for identifying manipulated or synthesized content, such as watermarking systems for generated media;

(3) social and behavioral research related to manipulated or synthesized content, including the ethics of the technology and human engagement with the content;

(4) research on public understanding and awareness of manipulated and synthesized content, including research on best practices for educating the public to discern authenticity of digital content; and

(5) research awards coordinated with other Federal agencies and programs, including the Networking and Information Technology Research and Development Program, the Defense Advanced Research Projects Agency, and the Intelligence Advanced Research Projects Agency.

SEC. 4. NIST SUPPORT FOR RESEARCH AND STANDARDS ON GENERATIVE ADVERSARIAL NETWORKS.
(a) In General.--The Director of the National Institute of Standards and Technology shall support research for the development of measurements and standards necessary to accelerate the development of the technological tools to examine the function and outputs of generative adversarial networks or other technologies that synthesize or manipulate content.

(b) Outreach.--The Director of the National Institute of Standards and Technology shall conduct outreach--

(1) to receive input from private, public, and academic stakeholders on fundamental measurements and standards research necessary to examine the function and outputs of generative adversarial networks; and

(2) to consider the feasibility of an ongoing public and private sector engagement to develop voluntary standards for the function and outputs of generative adversarial networks or other technologies that synthesize or manipulate content.

SEC. 5. REPORT ON FEASIBILITY OF PUBLIC-PRIVATE PARTNERSHIP TO DETECT MANIPULATED OR SYNTHESIZED CONTENT.
Not later than 1 year after the date of the enactment of this Act, the Director of the National Science Foundation and the Director of the National Institute of Standards and Technology shall jointly submit to the Committee on Science, Space, and Technology of the House of Representatives and the Committee on Commerce, Science, and Transportation of the Senate a report containing--

(1) the Directors' findings with respect to the feasibility for research opportunities with the private sector, including digital media companies, to detect the function and outputs of generative adversarial networks or other technologies that synthesize or manipulate content; and

(2) any policy recommendations of the Directors that could facilitate and improve communication and coordination between the private sector, the National Science Foundation, and relevant Federal agencies through the implementation of innovative approaches to detect digital content produced by generative adversarial networks or other technologies that synthesize or manipulate content.

SEC. 6. GENERATIVE ADVERSARIAL NETWORK DEFINED.

In this Act, the term ``generative adversarial network'' means, with respect to artificial intelligence, the machine learning process of attempting to cause a generator artificial neural network (referred to in this paragraph as the ``generator'') and a discriminator artificial neural network (referred to in this paragraph as a ``discriminator'') to compete against each other to become more accurate in their function and outputs, through which the generator and discriminator create a feedback loop, causing the generator to produce increasingly higher-quality artificial outputs and the discriminator to increasingly improve in detecting such artificial outputs.

Passed the House of Representatives December 9, 2019.

Attest:

CHERYL L. JOHNSON,
Clerk.