Happy new year, everyone - I hope you had a good party yesterday evening and that this morning is not too rough. For January, I'm really looking forward to exploring distributional forecasts in more depth.
In case you know someone who might care about receiving the notes semi-regularly, just drop me a line.
Understanding ML models:
For linear models, explaining the drivers of the model is easy. For everything else, there is SHAP (SHapley Additive exPlanations), a game-theoretic approach to explaining the output of ML models.
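To give a feel for the game-theoretic idea behind SHAP (the shap library computes these values efficiently for tree models), here is a minimal sketch that computes exact Shapley values for a toy 3-feature model by brute-force enumeration of coalitions; the model, inputs, and baseline are made up for illustration:

```python
import math
from itertools import combinations

def shapley_values(f, x, baseline):
    """Exact Shapley values explaining f(x) against a baseline point.
    Features outside a coalition are set to their baseline value."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [k for k in range(n) if k != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley weight for a coalition of this size
                weight = math.factorial(size) * math.factorial(n - size - 1) / math.factorial(n)
                with_i = [x[k] if (k in S or k == i) else baseline[k] for k in range(n)]
                without_i = [x[k] if k in S else baseline[k] for k in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

f = lambda v: 2 * v[0] + v[1]  # toy linear model with coefficients [2, 1, 0]
phi = shapley_values(f, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
# For a linear model, phi_i = coefficient_i * (x_i - baseline_i)
```

A sanity check on the result: the Shapley values of all features sum to f(x) - f(baseline), one of the axioms that makes the decomposition attractive for model explanation.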
From NeurIPS 2019:
Celeste Kidd's talk on "How to know" is definitely worth watching:
Improving human decision making using ML:
A very curious paper on the biases of human decision making (it turns out we are not perfect either) and on how machine learning could improve the situation in the context of judicial decisions.
Progress in linear algebra: Eigenvectors from Eigenvalues:
It is fascinating that fundamentally new theorems can still be discovered in long-established fields: see the short note by Denton et al. on a new eigenvector identity (the link goes to the blog post of Terry Tao, who was also involved in the result).
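The identity relates the squared entries of a Hermitian matrix's eigenvectors to the eigenvalues of the matrix and of its principal minors. A quick numerical check with NumPy (random symmetric matrix and indices chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
X = rng.standard_normal((n, n))
A = (X + X.T) / 2  # random symmetric (hence Hermitian) matrix

eigvals, eigvecs = np.linalg.eigh(A)

i, j = 1, 2  # eigenvalue index i, vector coordinate j (arbitrary choice)
# Principal minor M_j: A with row j and column j removed
M = np.delete(np.delete(A, j, axis=0), j, axis=1)
minor_vals = np.linalg.eigvalsh(M)

# Identity: |v_{i,j}|^2 * prod_{k != i} (lam_i - lam_k) = prod_k (lam_i - mu_k)
lhs = abs(eigvecs[j, i]) ** 2 * np.prod([eigvals[i] - eigvals[k] for k in range(n) if k != i])
rhs = np.prod(eigvals[i] - minor_vals)
```

For a generic random matrix the eigenvalues are distinct, so both sides are nonzero and should agree up to floating-point error.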
Now Reading: The Food Lab - J. Kenji Lopez-Alt
For Christmas, I received this very readable and down-to-earth book on the science and principles behind cooking. I'd say it's more approachable than Modernist Cuisine and its cousins, while providing good insights and easy-to-execute recipes.