People are talking about the risks of AI and the importance of AI alignment. But what does this mean in practice? And what can be done about it? This talk attempts to inject some formal rigour into both of those questions. If there's time, we'll also look at why answers in this area are so fraught and varied, and why expertise is of limited use.
Guest Speaker: Dr. Stuart Armstrong
Alexander Tamas Fellow in Artificial Intelligence and Machine Learning
Stuart Armstrong’s research at the Future of Humanity Institute centres on formal decision theory, general existential risk, the risks and possibilities of Artificial Intelligence (AI), assessing expertise and predictions, and anthropic (self-locating) probability.
He has been working on several methods of analysing the likelihood of certain outcomes and in making decisions under the resulting uncertainty, as well as specific measures for reducing AI risk. His collaboration with DeepMind on “Interruptibility” has been mentioned in over 100 media articles.
His Oxford D.Phil. was in parabolic geometry, calculating the holonomy of projective and conformal Cartan geometries. He later transitioned into computational biochemistry, designing several new ways to rapidly compare putative bioactive molecules for the virtual screening of medicinal compounds.
— ADMISSION IS FREE — One of our key founding principles was that anyone who wanted to attend should be able to. That's why we have never charged for entry to our events. However, any donation you can spare towards speaker expenses helps us continue to run these events. SiTP is a not-for-profit organisation, and an average donation of around £3.00 per person at each event (to cover speaker expenses) has kept us going for 7 years.