Dan Huttenlocher, chair of the MIT Jameel Clinic, is part of an MIT committee that has released a set of policy briefs on the governance of AI in the United States. Titled "A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector," the flagship white paper argues that existing regulatory frameworks and liability paradigms should be extended to oversee AI in a pragmatic manner.
Dan Huttenlocher, who helped steer the project, said “The nation already regulates numerous high-risk domains, offering governance in those areas. While this isn’t deemed sufficient, beginning with sectors where human activity is rigorously regulated and identified as high risk by society is a pragmatic starting point in approaching AI.” He also stressed the need for flexibility in governing interactions between humans and machines, recommending the establishment of a new, government-sanctioned self-regulatory organization for AI oversight.
Excerpts
A committee of MIT leaders and scholars has released a set of policy briefs outlining a framework for the governance of artificial intelligence, intended as a resource for U.S. policymakers. The approach calls for extending existing regulatory and liability mechanisms to oversee AI in a pragmatic manner. With the release of these white papers, MIT aims to help strengthen U.S. leadership in AI while limiting the harm the new technologies could cause and encouraging exploration of how AI deployment could benefit society.
Titled “A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector,” the flagship policy paper offers a comprehensive roadmap. Its central premise is that existing regulatory frameworks and liability paradigms can be extended to oversee AI tools pragmatically, with regulations aligned to the purpose of each AI application.
Dan Huttenlocher, Dean of the MIT Schwarzman College of Computing, emphasized, “The nation already regulates numerous high-risk domains, offering governance in those areas. While this isn’t deemed sufficient, beginning with sectors where human activity is rigorously regulated and identified as high risk by society is a pragmatic starting point in approaching AI.”
Asu Ozdaglar, deputy dean of academics in the MIT Schwarzman College of Computing and head of MIT’s Department of Electrical Engineering and Computer Science (EECS), commented, “The devised framework provides a tangible method to contemplate these matters. It offers a structured approach to addressing AI governance issues.”