Category: AI

the Montréal declarAIon

In conjunction with Yoshua Bengio being one of the three recipients of the 2018 ACM Turing Award, Nature ran an interview with him about the Montréal Declaration for Responsible AI, which he launched at NeurIPS last December. “Self-regulation is not going to work. Do you think that voluntary taxation works? It doesn’t.” Reflecting on […]

Bayesian intelligence in Warwick

This is an announcement for an exciting CRiSM Day in Warwick on 20 March 2019, with speakers 10:00-11:00 Xiao-Li Meng (Harvard): “Artificial Bayesian Monte Carlo Integration: A Practical Resolution to the Bayesian (Normalizing Constant) Paradox” 11:00-12:00 Julien Stoehr (Dauphine): “Gibbs sampling and ABC” 14:00-15:00 Arthur Ulysse Jacot-Guillarmod (École Polytechnique Fédérale de Lausanne): “Neural Tangent Kernel: […]

Nature Outlook on AI

The 29 November 2018 issue of Nature had a series of papers on AIs (in its Outlook section), pitched more at the general-public (awareness) level than as in-depth machine-learning articles. Including one on the forecasted consequences of ever-growing automation on jobs, quoting from a 2013 paper by Carl Frey and Michael Osborne [of probabilistic numerics fame!] that […]

AIQ [book review]

AIQ was my Christmas Day read, mostly done while the rest of the household was still sleeping. The book, written by two Bayesians, Nick Polson and James Scott, was published before the ISBA meeting last year, but I only bought it on my last trip to Warwick [as a Xmas present]. This is […]

Melanie Mitchell says, “As someone who has worked in A.I. for decades, I’ve witnessed the failure of similar predictions of imminent human-level A.I., and I’m certain these latest forecasts will fall short as well.”

Melanie Mitchell‘s piece, Artificial Intelligence Hits the Barrier of Meaning (NY Times, behind a limited paywall), is spot-on regarding the hype surrounding the current A.I. boom. It’s soon to come out in book length from FSG, so I suspect I’ll hear about it again in the New Yorker. Like Professor Mitchell, I started my Ph.D. at […]

The post Melanie Mitchell says, “As someone who has worked in A.I. for decades, I’ve witnessed the failure of similar predictions of imminent human-level A.I., and I’m certain these latest forecasts will fall short as well.” appeared first on Statistical Modeling, Causal Inference, and Social Science.

A.I. parity with the West in 2020

Someone just sent me a link to an editorial by Ken Church in the journal Natural Language Engineering (who knew that journal was still going? I’d have thought open access would’ve killed it). The abstract of Church’s column says of China: “There is a bold government plan for AI with specific milestones for parity with […]

The post A.I. parity with the West in 2020 appeared first on Statistical Modeling, Causal Inference, and Social Science.