September 18: Marta Kwiatkowska
Posted: 2018-09-10
Title: Reasoning about Cognitive Trust in Human-Robot Interactions
Speaker: Prof. Marta Kwiatkowska, University of Oxford
Host: Associate Prof. Min Zhang
Time: Tuesday, September 18, 9:30-11:00
Venue: Lecture Hall 201, Mathematics Building, Zhongbei Campus
 
 
Speaker Bio:
Marta Kwiatkowska is Professor of Computing Systems and Fellow of Trinity College, University of Oxford. Kwiatkowska has made fundamental contributions to the theory and practice of model checking for probabilistic systems, focusing on automated techniques for verification and synthesis from quantitative specifications. She led the development of the PRISM model checker (www.prismmodelchecker.org), the leading software tool in the area and winner of the HVC Award 2016. Probabilistic model checking has been adopted in many diverse fields, including distributed computing, wireless networks, security, robotics, game theory, systems biology, DNA computing and nanotechnology, with genuine flaws found and corrected in real-world protocols. Kwiatkowska was awarded an honorary doctorate from KTH Royal Institute of Technology in Stockholm in 2014 and the Royal Society Milner Medal in 2018. Her recent work was supported by the ERC Advanced Grant VERIWARE “From software verification to ‘everyware’ verification” and the EPSRC Programme Grant on Mobile Autonomy. She is a Fellow of the ACM and a Member of Academia Europaea.
 
Abstract:
We are witnessing accelerating technological advances in autonomous systems, of which driverless cars and home-assistive robots are prominent examples. As mobile autonomy becomes embedded in our society, we increasingly often depend on decisions made by mobile autonomous robots and interact with them socially. Key questions that need to be asked are how to ensure safety and trust in such interactions. How do we know when to trust a robot? How much should we trust? And how much should the robots trust us? This talk will give an overview of a probabilistic logic for expressing trust between human or robotic agents, such as “agent A has 99% trust in agent B’s ability or willingness to perform a task”, and the role it can play in explaining trust-based decisions and agents’ dependence on one another. The logic is founded on a probabilistic notion of belief, supports cognitive reasoning about goals and intentions, and admits quantitative verification via model checking, which can be used to evaluate such trust in human-robot interactions. The talk concludes by summarising recent advances and future challenges for modelling and verification in this important field.
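To give a concrete, purely illustrative reading of a statement such as “agent A has 99% trust in agent B’s ability to perform a task”, the Python sketch below (not from the talk) models B’s task execution as a tiny discrete-time Markov chain and checks whether the probability of eventually completing the task is at least 0.99. All states and transition probabilities are invented for illustration.

# Minimal sketch (invented example): agent B attempts a task, modelled as a
# three-state Markov chain. A's "99% trust" in B is read as the check that
# the probability of eventually reaching the "done" state is >= 0.99.

# States: 0 = trying, 1 = done (absorbing), 2 = failed (absorbing).
# From "trying": succeed with 0.99, retry with 0.009, give up with 0.001.
P = {
    0: [(0, 0.009), (1, 0.99), (2, 0.001)],
    1: [(1, 1.0)],
    2: [(2, 1.0)],
}

def prob_reach_done(start=0, tol=1e-12):
    # Fixed-point iteration for reachability: p[s] = sum_t P(s,t) * p[t],
    # with p[done] = 1 and p[failed] = 0 held fixed.
    p = {0: 0.0, 1: 1.0, 2: 0.0}
    while True:
        new0 = sum(w * p[t] for t, w in P[0])
        if abs(new0 - p[0]) < tol:
            p[0] = new0
            return p[start]
        p[0] = new0

trust = prob_reach_done()
print(f"Probability that B completes the task: {trust:.6f}")
print("Meets A's 99% trust threshold:", trust >= 0.99)

In PRISM, the tool mentioned in the speaker bio, the same check corresponds to a probabilistic reachability property, written roughly as P>=0.99 [ F "done" ]; the sketch above simply computes that reachability probability directly.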

Software Engineering Institute, East China Normal University