Talk 1 Title: Safety, Ethics and Frontier AI Models
Time: 24 July, 17:00
Venue: Room 502, 滴水湖国际软件学院
Abstract:
The developers of Frontier AI models, e.g. OpenAI, have “signed up” to the Frontier AI Safety Commitments (FAISC) which, in some cases, include using safety cases. However, the approaches to safety cases adopted to support the FAISC do not build on the methods for developing safety cases that have been established in the more traditional safety-critical industries. The talk will discuss the challenges of assuring Frontier AI models and illustrate the difficulties with some practical examples, e.g. using ChatGPT for safety engineering. Further, it will discuss how to extend established approaches to safety cases to include ethics, and ways in which safety, ethics and Frontier AI assurance can be integrated. The talk will conclude with a discussion of the regulatory challenges related to a fast-moving technology such as Frontier AI.
Talk 2 Title: Safety of Autonomous Systems and AI
Time: 25 July, 15:00
Venue: Room 502, 滴水湖国际软件学院
Abstract:
The Centre for Assuring Autonomy (CfAA) has worked on the assurance and regulation of robotics and autonomous systems for more than seven years, building on thirty years of work on the safety of software-intensive systems, especially work on safety cases. The talk will outline the challenges of assuring autonomous systems (AS) and machine learning (ML), and give an overview of the CfAA’s approaches to assurance, which produce safety cases for the AS as a whole and for its ML components. The talk will illustrate the use of these approaches with practical examples, most likely drawn from transportation or healthcare. It will discuss the use of the approaches across a range of application domains and consider the remaining research challenges for enabling their widespread use.
Speaker Biography:
John McDermid became Professor of Software Engineering at the University of York in 1987 and the Lloyd’s Register Foundation Chair of Safety in January 2024. His research interests include systems, software, and safety engineering. He is Director of the Lloyd’s Register Foundation-funded Centre for Assuring Autonomy (CfAA), which focuses on the safety of robotics and autonomous systems. The CfAA is developing assurance frameworks and regulatory principles for autonomous systems and machine learning, whether used as part of an autonomous system or stand-alone. These frameworks are being used across a range of sectors, including healthcare and major transport modalities such as aerospace, automotive, and maritime. He has advised government and industry on the assurance and regulation of AI and autonomous systems, including contributing to the documents supporting the AI Summits in the UK, Korea, and France, and participating in the AI Action Summit in Paris in 2025. He became a Fellow of the Royal Academy of Engineering in 2002 and was awarded an OBE in 2010.