MINUTES

Study Committee on Artificial Intelligence and Regulation of Internet Access by Minors

Senator Steve Kolbeck, Co-Chair

Representative Mike Weisgram, Co-Chair

First Meeting, 2024 Interim
Tuesday, June 18, 2024
Room 414 – State Capitol
Pierre, South Dakota

The first meeting of the 2024 Study Committee on Artificial Intelligence and Regulation of Internet Access by Minors was called to order by Representative Mike Weisgram at 9:00 a.m. (CT) in Room 414 of the State Capitol in Pierre, South Dakota.

A quorum was determined with the following members answering roll call: Representatives Amber Arlint, Byron Callies (remote), Eric Emery (remote), Chris Karr, Bethany Soye (remote), Tony Venhuizen, and Mike Weisgram, and Senators Jim Mehlhaff and Steve Kolbeck. Representative Duffy and Senators Duhamel, Johnson, Larson, Walsh, and Wheeler were excused.

Staff members present included Amanda Marsh, Committee Services Administrator; Jacob Carlson, Research Analyst; Mitch Honan, Fiscal Analyst; Hilary Carruthers, Legislative Systems Analyst; and Joleh McCullough, Administrative Specialist.

NOTE: For the purpose of continuity, the following minutes are not necessarily in chronological order. Also, all referenced documents distributed at the meeting are hyperlinked to the document on the Legislative Research Council website. This meeting was live streamed. The archived live stream is available at the LRC website at sdlegislature.gov.

Opening Remarks

Representative Weisgram welcomed everyone to the committee. He said AI will bring advancements to society, yet there are many concerns regarding how AI can be used with ill intent. He said that regarding internet access by minors, many are convinced that heavy use of social media is causing mental health problems in children and teenagers. He added that social media addiction, anxiety, depression, eating disorders, low self-esteem, and suicidal tendencies have prompted lawsuits against social media companies. He emphasized that an increasing number of states have passed child safety legislation that limits social media exposure.

Representative Arlint said she is looking forward to discussing guardrails that could be put in place for social media and how those guardrails could improve mental health in the youth population.

Representative Emery emphasized that AI is a growing trend that the legislature should learn about and prepare for.

Representative Karr said it is important to be proactive with new technology and to learn about how it will affect state government and education. He also emphasized the importance of discussing what other information the committee will need to move forward with legislation.

Senator Mehlhaff said it is important to understand AI, which can bring many potential benefits as well as potential harm, so that the committee can craft regulations for it. With respect to regulating minors' access to harmful content on the internet, Senator Mehlhaff said it is important to put guardrails in place. He also commented that it would be helpful to keep a narrow scope to this study, as the internet is an expansive topic.

Representative Soye said she is an advocate for parents and children. She emphasized the importance of including pornography as a part of the committee's discussions about internet access and social media and suggested that the committee consider what other states have done regarding these topics. She also expressed interest in learning about AI and how it will be regulated with regard to education.

Representative Venhuizen commented that it is important for the committee to discuss ways to prevent or criminalize wrongdoing as it relates to AI. He expressed interest in the impact AI will have on South Dakota schools. He added that there is an aspect of social pressure regarding the use of social media by teenagers and children.

Department of Education (DOE)

Dr. Joseph Graves, Secretary, DOE, said the educational community in South Dakota has taken steps to address the reality of AI. He said school districts, through their governing boards, have adopted policies. He said the Associated School Boards of South Dakota (ASBSD) has drafted a model AI policy and provided it to each public school district in the state. The policy has been adopted by each district, with minor edits to tailor it to each district's needs. A copy of the model policy was provided to the committee (Document 1). Dr. Graves said the DOE does not oversee these policies or their implementation, nor does it endorse them.

Dr. Graves spoke on how American education is engaging with AI. He said the Education Commission of the States provided an overview of the topic last December (Document 2). Dr. Graves said this document has been circulated among educators across the country. Policies discussed in the document include reporting requirements when using facial recognition, AI coursework, and the use of AI teaching assistants (including holograms).

Mr. David DeJong, Dean of the College of Education and Human Performance, Dakota State University (DSU), spoke on a joint venture of DSU, the School Administrators of South Dakota, ASBSD, and the DOE to provide four one-day regional development opportunities regarding AI in August of last year. Mr. DeJong said the trainings were held across the state in Rapid City, Aberdeen, Harrisburg, and Chamberlain. He said the audience members consisted of teachers, principals, counselors, superintendents, and school board members. Mr. DeJong said the trainings were well received, and there are plans to hold more trainings this fall.

Senator Kolbeck asked about DOE's concerns regarding inequalities for students if different schools adopted different policies on AI. Dr. Graves responded that currently schools are using individual approaches, but the DOE will discuss the outcomes of those different approaches to determine the best practices.

Representative Karr asked if teachers are using AI to scan research papers to determine whether they were AI-generated. Dr. Graves responded that some districts have subscriptions to companies and software services that scan papers for plagiarism, and some have begun to use AI to determine whether papers were AI-generated. Dr. Graves added that schools also have access to testing software that uses AI to check for AI-generated responses.

Representative Arlint asked about the feedback from teachers on the role cell phones play in the classroom regarding distractions and access to AI and harmful materials. Dr. Graves responded that generally, teachers regard cell phones as a challenge, as they are very distracting and negatively affect students' quality of life.

Representative Weisgram asked if school districts have requested a template for how to utilize or regulate AI in schools. Dr. Graves replied that while superintendents have asked for ideas and input, they have not asked for directions from the DOE.

Representative Venhuizen asked Dr. Graves for his opinion on cell phone usage in schools. Dr. Graves responded that there are solutions, and that limiting cell phone usage has had a positive impact on the school day. He emphasized that any statewide policy should respect local control while schools experiment with different cell phone policies throughout the state.

National Conference of State Legislatures (NCSL)

Ms. Heather Morton, Director, NCSL, provided an overview of recent state legislation regarding AI (Document 3). She said no consensus has emerged on a definition of AI, as it is still a developing field and industry, which can make it challenging for policymakers seeking to create a regulatory framework. Examples of AI definitions can be found in the National Artificial Intelligence Initiative Act of 2020, Connecticut Senate Bill 1103, Texas House Bill 2060, and the European Union Artificial Intelligence Act.

Ms. Morton said bipartisan efforts in state legislatures are seeking to strike a balance between protecting citizens and enabling innovation in state government services and in the commercial use of AI. She said that, beginning with the 2023 legislative session, there has been a large increase in AI-related legislation. She covered AI legislation regarding healthcare, deepfakes and other synthetic media, and elections, as well as recent state legislation and related court challenges regarding social media and children. Ms. Morton gave examples of social media legislation, including Arkansas's Social Media Safety Act, which would require age verification and parental consent for the use of social media.

Ms. Morton highlighted cases that have been filed by NetChoice, a trade association for internet companies. NetChoice has filed lawsuits challenging enacted laws that seek to require parental consent or age verification to open or use general social media platforms and apps in Arkansas, Mississippi, Ohio, and Utah. To date, NetChoice has won preliminary injunctions in Arkansas and Ohio. Ms. Morton also covered another set of legal challenges brought by the Free Speech Coalition (FSC), the trade association of the adult entertainment industry in the United States. The FSC is challenging state laws that set age verification requirements for the distribution of sexual material harmful to minors through adult content websites, applications, and other digital and virtual platforms.

Senator Kolbeck asked if the first step for the South Dakota Legislature regarding AI would be to create a definition for AI, or if AI definitions are typically included within each piece of legislation. Ms. Morton said many of the bills in state legislatures regarding AI include definitions, enabling each bill to describe how it will regulate AI.

Representative Karr asked for a definition of an addictive feed. Ms. Morton replied that the algorithms behind social media feeds are designed to encourage addictive use, and states with social media legislation are attempting to regulate how social media companies use those feeds.

Dakota State University (DSU)

Dr. Jose-Marie Griffiths, President, DSU, presented on the fundamentals of AI (Document 4). She said AI has existed since the early 1950s. Early investigations in AI attempted to create an artificial brain. Since then, there has been continued research, including federally funded research through the National Science Foundation, the Department of Energy, and the National Institutes of Health.

Dr. Griffiths classified the types of computer algorithms that exist within AI: sequential algorithms, pattern recognition, classification, and statistical prediction. She said a simple example of statistical prediction is autocorrect, and a complex example is ChatGPT.

Dr. Griffiths highlighted potential concerns with the use of AI. She said transparency is a concern, as consumers will want to know whether a product was produced using generative AI. A second concern is authenticity, as AI provides the ability to produce fakes. A third area of concern is bias. Bias arises because algorithms are written and trained using data; if flawed data is used, biased results are produced.

Senator Kolbeck asked if Dr. Griffiths had concerns with the use of AI in state university admissions. Dr. Griffiths said there are two main areas for AI use in academic institutions: classroom use for teaching and learning, and administrative use. The challenge, Dr. Griffiths stated, is how data is used for the development of AI models.

Representative Venhuizen asked Dr. Griffiths to speak to cybersecurity threats that AI could pose or be used for. Dr. Griffiths replied that one threat is that AI gathers data on individuals' habits and behaviors. Cybersecurity attacks, ransomware, and data poisoning are also threats. Regarding data poisoning, Dr. Griffiths added that there are two types of data: training data and live data. If live data is corrupted, anything produced or completed by that AI is poisoned. Dr. Griffiths added that computer models themselves also need to be protected, because if an algorithm in a piece of software is changed, it can damage every analysis done using that software.

TechNet

Mr. David Edmundson, Senior Vice President for State Policy and Government Relations, TechNet, explained that AI refers to the computerized ability to perform tasks commonly associated with human intelligence. He said leading AI companies in the United States are proactively addressing concerns by rigorously testing AI systems before release to ensure their safety and reliability.

Mr. Edmundson said state legislation on AI has tended to fall into three categories. First, there is AI legislation focused on specific use cases of generative AI, such as misinformation in election-related advertising and deepfake imagery. Second, some bills have sought to create a catalog of state government uses of AI, with a particular focus on high-risk use cases. Third, many bills have created task forces or committees to study AI's benefits and potential drawbacks and make recommendations to the legislature.

Mr. Edmundson spoke on TechNet's recommendations for approaching AI and social media legislation. He said TechNet would welcome a federal framework to avoid a state-by-state patchwork of rules and regulations. Mr. Edmundson said that, in the absence of such a framework, TechNet strongly recommends ensuring uniformity and interoperability across state lines.

Additionally, Mr. Edmundson covered TechNet's policy principles on AI, which can also be found at technet.org. He said the first principle is that comprehensive and interoperable data privacy laws should precede AI regulations. Second, TechNet encourages legislators to avoid blanket prohibitions on AI. Third, states should leverage existing authorities under state law that already provide substantive legal protections.

Mr. Edmundson then covered social media and children. He explained that there are multiple efforts within the industry to incorporate protective design features into websites and platforms. He added that parents have options for filtering the content for their children's experiences online and that there are many commercial and free content filtering and blocking solutions that enable consumers to protect their families and themselves from illegal or inappropriate content.

Mr. Edmundson covered some states that have introduced age verification mandates related to the internet, addressing those mandates in two categories: pornography and social media platforms. Mr. Edmundson also highlighted TechNet's efforts to identify and eradicate child sexual abuse material (CSAM). He said TechNet member companies work to actively detect and remove CSAM from their training data and report found CSAM to the appropriate authorities. Mr. Edmundson added that federal law requires online providers to report instances of CSAM to the National Center for Missing and Exploited Children (NCMEC).

Representative Weisgram asked for examples of TechNet working with state legislators to draft legislation regarding social media and children. Mr. Edmundson replied that TechNet has worked proactively with legislators to help update criminal statutes to ensure that computer-generated images that are not tied to a specific child are considered CSAM.

Representative Venhuizen asked if AI-generated CSAM has been subject to court challenges. Mr. Edmundson replied that he is unaware of any challenges and added that any challenge would most likely be brought by a defendant accused of the crime.

Committee Discussion

Representative Karr said the committee has three different issues before it: AI, social media, and pornography. He encouraged the committee to look at ways to regulate and define these areas. He said he would like more information on how AI would impact state government and education and what role the legislature should play in regulating AI. Representative Karr also commented that the committee should seek involvement from the Attorney General regarding pornography.

Representative Venhuizen commented that working on age verification for pornography may be a simple task within the scope of this study for the committee to accomplish. He said that many AI regulations will likely come at the federal level, as state governments do not have the capacity to regulate AI. Representative Venhuizen also expressed that he would like to learn how agencies within state government are already using AI.

Senator Mehlhaff said there may be issues with regulating AI because of the Interstate Commerce Clause. He said that, based on what the committee has heard, it should consider working with existing laws when looking at regulating AI.

Representative Soye expressed interest going forward in hearing testimony on how social media affects children, and on the relationship between social media regulation and constitutional law.

Adjournment

The next meeting of the Study Committee on Artificial Intelligence and Regulation of Internet Access by Minors will be held on August 14, 2024, in Room 414 of the State Capitol, Pierre, starting at 9:00 a.m. (CT).

Representative Venhuizen moved, seconded by Representative Karr, that the Study Committee on Artificial Intelligence and Regulation of Internet Access by Minors meeting be adjourned. The motion prevailed on a voice vote.