The emergence of artificial intelligence as a buzzword, catapulted by newly available generative AI tools for public use such as ChatGPT, has brought learning algorithms to the forefront of public attention. Artificial intelligence (AI) is an umbrella term for computer systems trained to perform specific tasks; to be clear, we do not yet have truly sentient artificial intelligence systems.
Present systems like ChatGPT and Bard, called generative AI, operate similarly to Searle’s Chinese Room thought experiment. They are trained on a set of data using techniques like generative adversarial networks (GANs), variational autoencoders (VAEs), or large language models (LLMs), and are taught to apply that training to a broader set of information made available to the system, bounded by restrictions imposed to maintain security. For example, ChatGPT initially could not access any information published or released after 2021. These generative systems are composed of complex algorithms that filter data and implement pattern-recognition processes at the behest of the organization that trains them. This places liability for the system in the hands of its creators, but once a fully autonomous system has been developed, three questions will become increasingly pertinent: (1) who is legally responsible for the behavior of the system; (2) how can legislation be implemented to protect safety and privacy with respect to these “smart” technologies; and (3) what are the copyright rules regarding the creations and inventions of the artificial intelligence system itself?
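The core idea above, training on a body of data and then generating output by applying learned patterns, can be illustrated with a deliberately tiny sketch. Real systems like ChatGPT use large neural networks with billions of parameters; this toy bigram model, with a made-up corpus, is only an assumption-laden analogy for how statistical pattern recognition drives generation.

```python
from collections import defaultdict
import random

# Hypothetical stand-in for training data; real LLMs train on vast corpora.
corpus = "the model learns patterns the model repeats patterns".split()

# "Training": count which word tends to follow which.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Generate text by sampling the learned word-to-word transitions."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

The sketch also makes the liability point concrete: the output is entirely a product of the data and rules the trainer supplied, which is why responsibility currently rests with the system's creators.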
Last year, the White House released a blueprint for an “AI Bill of Rights” that looks to establish five baseline principles for future legislation: protection from unsafe or ineffective systems; protection from algorithmic discrimination; protection from abusive data practices; notice that an automated system is in use and explanation of its impact; and the ability to opt out of the service wherever appropriate. This blueprint is not a proposal for legislation but an indication of what the White House looks to protect in the future; however, it is simply not enough. Governmental bodies have been slow to react when it comes to imposing regulations on technology. Even with social media, most users are governed by the community guidelines established by the owners of the network, leading to discrepancies in what is deemed “acceptable” from app to app. As TikTok CEO Shou Chew’s public hearing before Congress regarding his company’s safety practices made clear, industry safety standards like “age gating” (asking for the user’s age when the account is created) are being scrutinized as easily surmountable and not protective enough against online predators. Children are being exposed to the internet and digital media at a much younger age and at a greater rate than in previous years, and it is paramount that, as technology becomes more intelligent and develops greater capabilities, stricter standards be set in place more quickly to anticipate threats and preemptively protect children’s safety.
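Why age gating is criticized as easily surmountable becomes obvious in a minimal sketch of the flow described above. The threshold, function name, and dates here are illustrative assumptions, not any platform's actual implementation: the gate simply trusts whatever birth date the user types in.

```python
from datetime import date

MINIMUM_AGE = 13  # assumed threshold, in line with COPPA-style rules

def passes_age_gate(birth_date: date, today: date) -> bool:
    """Return True if the SELF-REPORTED birth date meets the minimum age."""
    years = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return years >= MINIMUM_AGE

# The weakness critics point to: nothing verifies the input, so a child
# can enter an earlier birth year and pass the exact same check.
print(passes_age_gate(date(2015, 6, 1), date(2023, 3, 23)))  # truthful 7-year-old -> False
print(passes_age_gate(date(2005, 6, 1), date(2023, 3, 23)))  # same child, false date -> True
```

Because the check runs on unverifiable self-reported data, it filters only honest users, which is precisely the criticism raised at the congressional hearing.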
Hundreds of large technology companies and their CEOs have issued warnings that the law needs to become more integrated with technology to regulate and safeguard its users and prevent massive-scale violations. Take, for example, the MGM data breach that took place this September (not to be confused with the MGM data leak of 10.6 million individuals’ personal data in 2019). The breach allowed hackers to take remote control of systems across MGM’s hotels, locking guests out of their rooms, shutting down slot machines and ATMs, and accessing the loyalty-program database that contained members’ driver’s license numbers and Social Security numbers. The hack began with a simple phone call to MGM’s employee customer-support line and ended with the incapacitation of the company’s entire network and digital systems.
Cybersecurity and artificial intelligence possess the power to greatly impact society in either direction, depending on the individual at the helm. By establishing legislation that enforces greater safeguards, both protecting data and reining in the access artificial intelligence has to broadband networks, national and private digital data can be made more difficult to access, and incidents like what took place at MGM, or worse, can be prevented.
 Science & Tech Spotlight: Generative AI, U.S. GAO (Sept. 6, 2023), https://www.gao.gov/products/gao-23-106782.
 Bernard Marr, What Is Generative AI: A Super-Simple Explanation Anyone Can Understand, Forbes (Sept. 19, 2023), https://www.forbes.com/sites/bernardmarr/2023/09/19/what-is-generative-ai-a-super-simple-explanation-anyone-can-understand/?sh=ee10cd433e24.
 Ali Azhar, Generative AI Defined: How it Works, Benefits and Dangers, TechRepublic (Aug. 7, 2023), https://www.techrepublic.com/article/what-is-generative-ai/.
 What’s in the US ‘AI Bill of Rights’ – and what isn’t, World Economic Forum (Oct. 14, 2022), https://www.weforum.org/agenda/2022/10/understanding-the-ai-bill-of-rights-protection/. See also Blueprint for an AI Bill of Rights | OSTP, The White House, https://www.whitehouse.gov/ostp/ai-bill-of-rights/.
 Catherine Thorbecke, TikTok CEO in the hot seat: 5 takeaways from his first appearance before Congress, CNN Business (Mar. 23, 2023), https://www.cnn.com/2023/03/23/tech/tiktok-ceo-hearing/index.html.
 Robert Hart, Elon Musk And Tech Leaders Call For AI ‘Pause’ Over Risks To Humanity, Forbes (Mar. 30, 2023), https://www.forbes.com/sites/roberthart/2023/03/29/elon-musk-and-tech-leaders-call-for-ai-pause-over-risks-to-humanity/?sh=645aa3ce6dfc.
 MGM hack exposes personal data of 10.6 million guests, BBC News (Feb. 20, 2020), https://www.bbc.com/news/technology-51568885.
 MGM reeling from cyber ‘chaos’ 5 days after attack as Caesars Entertainment says it was hacked too, ABC News (Sept. 14, 2023), https://abcnews.go.com/Business/mgm-reeling-cyber-chaos-5-days-after-attack/story?id=103148809.