People are talking about the risks of AI and the importance of AI alignment. But what do these mean in practice? And what can be done about them? This talk attempts to inject some formal rigour into both questions. If there's time, we'll also look at why answers in this area are so fraught and varied, and why expertise is of limited use.
Guest Speaker: Dr. Stuart Armstrong
Alexander Tamas Fellow in Artificial Intelligence and Machine Learning
Stuart Armstrong’s research at the Future of Humanity Institute centres on formal decision theory, general existential risk, the risks and possibilities of Artificial Intelligence (AI), assessing expertise and predictions, and anthropic (self-locating) probability.
He has been working on several methods for analysing the likelihood of certain outcomes and for making decisions under the resulting uncertainty, as well as on specific measures for reducing AI risk. His collaboration with DeepMind on "Interruptibility" has been mentioned in over 100 media articles.
His Oxford D.Phil. was in parabolic geometry, calculating the holonomy of projective and conformal Cartan geometries. He later transitioned into computational biochemistry, designing several new methods for rapidly comparing putative bioactive molecules in the virtual screening of medicinal compounds.
No tickets are required. We ask for a £3.00 donation per person towards speaker expenses and advertising costs.