Notes of the month, July Edition

Very soon, registration for the Champenoise race around Reims on 30 May 2020 will open. It's a highly entertaining dress-up run (18k) with music, a great atmosphere, and four glasses of Champagne along the way. If this sounds like your kind of fun, let me know - I'm currently trying to set up a team, and 4-5 people have already expressed interest.

Also, because emails were bouncing when I sent this out directly, I have migrated to Mailchimp. Let's see how it works...

Upcoming Events in London
29.8.: Reinforcement Learning & Creative Applications
4.9.: CFA Society: Robert Shiller on Narrative Economics
Every Thursday, 12:30: ML Paper Club at Google Campus

AI-generated art selling for >400k EUR
A piece of art generated by a GAN (generative adversarial network) actually sold for 432k EUR at a Christie's auction at the end of 2018. It depicts a slightly blurry man...
Here is the artwork; it's best if you form your own opinion:

Discovering physical laws with machine learning
At the end of 2018, Raban Iten and Tony Metger from ETH Zurich wrote an interesting research paper on discovering physical laws by training a neural network to learn a low-dimensional representation of state space from experimental observations. It worked in cases ranging from identifying conservation laws for colliding particles to building a heliocentric model from the positions of the Sun and Mars as observed from Earth.
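As a toy sketch of the core idea - not the authors' actual architecture - a linear autoencoder with a 1-dimensional bottleneck can recover a single hidden parameter from higher-dimensional measurements. All data here is synthetic and the setup is my own simplification:

```python
import numpy as np

# Synthetic "experiment": 5-d measurements that secretly depend on a
# single hidden physical parameter, plus a little noise.
rng = np.random.default_rng(0)
hidden = rng.uniform(-1, 1, size=(200, 1))           # the hidden parameter
mix = rng.normal(size=(1, 5))                        # fixed measurement map
X = hidden @ mix + 0.01 * rng.normal(size=(200, 5))  # observed data

# Linear autoencoder with a 1-d bottleneck, trained by gradient descent.
W_enc = rng.normal(scale=0.1, size=(5, 1))           # encoder weights
W_dec = rng.normal(scale=0.1, size=(1, 5))           # decoder weights
lr = 0.05
for _ in range(500):
    Z = X @ W_enc                   # 1-d latent representation
    X_hat = Z @ W_dec               # reconstruction from the latent
    err = X_hat - X
    # Gradients of the mean squared reconstruction error.
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# If training worked, a single latent dimension suffices to
# reconstruct the 5-d observations almost perfectly.
mse = np.mean((X @ W_enc @ W_dec - X) ** 2)
```

The paper's point is that the learned bottleneck variable then corresponds to a physically meaningful quantity (an angle, a conserved charge, etc.), which the researchers can inspect.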
Here is the arxiv preprint:

Poincaré Embeddings for Learning Hierarchical Representations
One of the challenges when preparing features for machine learning is representing words in a suitable format. Technically you could use, for example, a 50,000-dimensional vector, with each dimension taking the value 1 or 0 and representing one word. For obvious reasons, that is only a moderately smart idea. An alternative is the use of Euclidean embeddings, such as Word2Vec, where each word is represented by a 100-400-dimensional vector and a dot product gives you something like a similarity measure. (This also allows you to do "word math", like King - Male + Female = ?)
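To make the contrast concrete, here is a toy sketch with hand-made 3-d vectors (not real Word2Vec embeddings): one-hot vectors carry no similarity information at all, while dense vectors support dot-product similarity and the "word math" above:

```python
import numpy as np

# One-hot encoding: one dimension per word, so "king" and "queen"
# look completely unrelated.
vocab = ["king", "queen", "male", "female"]
one_hot = np.eye(len(vocab))
print(one_hot[0] @ one_hot[1])  # → 0.0, the dot product carries no similarity

# Dense embeddings (hand-made for illustration) place related words
# close together, so the dot product acts as a similarity measure.
emb = {
    "king":   np.array([0.9, 0.8, 0.1]),
    "queen":  np.array([0.9, 0.1, 0.8]),
    "male":   np.array([0.1, 0.9, 0.1]),
    "female": np.array([0.1, 0.1, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# "Word math": king - male + female should land near queen.
target = emb["king"] - emb["male"] + emb["female"]
best = max(emb, key=lambda w: cosine(emb[w], target))
```

With real trained embeddings the same arithmetic works over hundreds of dimensions and tens of thousands of words.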
This concept can be taken one step further, as shown by Facebook research scientists. By using a Poincaré embedding instead of a Euclidean embedding, both hierarchical dependencies and similarity information can be captured. They demonstrated improved reconstruction of the data from the embedding and very good link-prediction results, i.e. successful identification of related words.
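The distance function on the Poincaré ball is the one piece that differs from the Euclidean case; the formula below is the one from the paper, while the example points are made up for illustration (in trained embeddings, general concepts tend to sit near the origin and specific ones near the boundary):

```python
import numpy as np

# Distance between two points u, v inside the unit ball:
# d(u, v) = arcosh(1 + 2·||u - v||² / ((1 - ||u||²)(1 - ||v||²)))
def poincare_distance(u, v):
    sq = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + 2 * sq / denom)

# Illustrative 2-d points (not trained embeddings): a general concept
# near the center, two specific ones near the boundary.
animal = np.array([0.1, 0.0])
dog = np.array([0.7, 0.1])
cat = np.array([0.7, -0.1])
```

Distances blow up near the boundary, which gives the model the "room" to spread out the many leaves of a hierarchy while keeping their shared ancestor close to all of them.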
Here is the paper:

Iterative Prisoner's Dilemma strategies
Most of you have probably already heard of the Prisoner's Dilemma, a game in which two players have to decide whether or not to cooperate. For the iterated version, where the two players play multiple rounds, some strategies have become quite well known, such as Tit-for-Tat (cooperate in the first round, then do whatever the opponent did last round). Here is an interesting overview of some of the more elaborate strategies that have been developed, such as adaptive strategies or zero-determinant strategies:
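For illustration, Tit-for-Tat and a simple iterated-game loop fit in a few lines of Python. The payoff values are the standard textbook ones, and the second strategy is just an illustrative opponent:

```python
# Payoffs for (my_move, their_move); True = cooperate.
# Mutual cooperation 3, mutual defection 1, sucker 0, temptation 5.
PAYOFF = {(True, True): 3, (True, False): 0,
          (False, True): 5, (False, False): 1}

def tit_for_tat(my_history, their_history):
    # Cooperate in the first round, then copy the opponent's last move.
    return their_history[-1] if their_history else True

def always_defect(my_history, their_history):
    return False

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b
```

Against an unconditional defector, Tit-for-Tat loses only the first round and then defects back, which is exactly the "nice but retaliatory" behaviour that made it famous.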

Currently reading: A man for all markets - Edward Thorp
It's the autobiography of Edward Thorp, the inventor of the first roulette computer (built together with Claude Shannon, giving them a 40% edge in the 1960s). Besides this, he was a blackjack card counter, a maths professor, and a successful hedge fund manager - an entertaining read if you liked Feynman's "Surely You're Joking".
