Machine learning is nothing new. Many of the techniques which now come under the umbrella term of machine learning have been around for decades. However, machine learning has recently become much more popular, spurred by the availability of vast amounts of data and cheaper computing power. For one week earlier this month, over 8,000 data scientists, including myself, converged on Long Beach, California, for the annual NIPS (Neural Information Processing Systems) conference. Started 30 years ago, NIPS is now one of the world’s biggest events on machine learning. What were my key takeaways from NIPS 2017?
1. Intelligence: it isn’t just about recognizing patterns
Josh Tenenbaum (MIT) described how his goal is to understand how the mind works from an engineering perspective. Whilst there have been many successes in artificial intelligence, in practice most of the systems built do just one thing. Intelligence isn’t simply a matter of recognizing patterns, he continued: it’s about learning and modelling the world, and we are decades away from that. Kids have a common-sense understanding of the world which AI doesn’t yet have. So hopefully computers won’t take over just yet!
2. Computers can be biased: it’s in the training set
Is a computer more neutral than a human when making decisions? In practice, no. Kate Crawford (Microsoft) noted that the use of AI and machine learning has social implications. Bias present in the training data can affect the output of a machine learning algorithm, propagating racial and social bias. She gave several examples, such as the Google Translate system, which was shown to exhibit gender bias when translating from Turkish, a language with gender-neutral pronouns.
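The mechanism is easy to see in miniature. Below is a deliberately tiny, hypothetical sketch in Python — the skewed "corpus" is made up for illustration, not real data — showing how a purely statistical model reproduces whatever imbalance its training set contains:

```python
from collections import Counter, defaultdict

# Hypothetical, skewed "training corpus": occupation -> pronoun observed
# alongside it. The imbalance here is an assumption for illustration.
corpus = [
    ("doctor", "he"), ("doctor", "he"), ("doctor", "he"), ("doctor", "she"),
    ("nurse", "she"), ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
]

# "Train" by counting co-occurrences, as a simple statistical model would.
counts = defaultdict(Counter)
for occupation, pronoun in corpus:
    counts[occupation][pronoun] += 1

def translate_pronoun(occupation):
    """Pick the most frequent pronoun seen with this occupation."""
    return counts[occupation].most_common(1)[0][0]

# A gender-neutral source sentence (e.g. Turkish "o bir doktor") forces a
# choice of pronoun, and the model simply reproduces the skew in its data.
print(translate_pronoun("doctor"))  # -> he
print(translate_pronoun("nurse"))   # -> she
```

Nothing in the algorithm is prejudiced; the bias lives entirely in the training set, which is exactly Crawford’s point.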
3. Finance is behind the machine learning times
There was a cohort from the finance industry at the event. However, participation from tech companies, as well as from academia, was significantly larger. Indeed, tech companies such as Google and Facebook employ many experts in machine learning to handle the masses of data they collect. Machine learning could become more popular in finance, but it is still in its infancy in the sector. There are many reasons for this, including the difficulty of interpreting the output of machine learning models and the fact that the behavior of markets changes over time.
4. Deep learning is hot: what’s new?
Deep learning was a hot topic at NIPS. How does it differ from more traditional forms of machine learning? Say we want to estimate the temperature based on the environment. Typically, we think about features which we feel are important, such as the distance from the equator, the altitude and so on. We feed these features into a machine learning model alongside the recorded temperatures, and the model learns how to gauge the temperature from them. However, say the problem is instead deciding whether there is a cat in an image. Here we can use deep learning to let the “data talk”: the algorithm extracts the important features from the data itself, rather than having them hand-picked.
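The traditional, hand-engineered side of this contrast is easy to sketch. Below is a minimal Python example, with made-up numbers and an assumed linear relationship between the features and temperature, purely for illustration:

```python
import numpy as np

# Hypothetical toy data with hand-picked features, as in classical ML.
# Columns: distance from the equator (degrees), altitude (metres).
X = np.array([
    [0.0,     0.0],
    [10.0,  500.0],
    [30.0,  100.0],
    [50.0, 1500.0],
    [60.0,  200.0],
])
# Temperatures generated from an assumed linear rule for illustration:
# 30C at sea level on the equator, cooler away from it and higher up.
y = 30.0 - 0.25 * X[:, 0] - 0.005 * X[:, 1]

# Fit a linear model on the hand-engineered features (with an intercept column).
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict for a new location: 20 degrees from the equator, 300m altitude.
pred = coef @ [1.0, 20.0, 300.0]
print(round(pred, 2))  # -> 23.5
```

The modeller chose the features here; a deep learning approach to the cat-in-an-image problem would instead consume raw pixels and learn its own internal features, at the cost of needing far more data.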
Scott Reed (DeepMind) and Nando de Freitas (Oxford) went through the various techniques associated with deep learning. They noted how it is already being used in many consumer applications, including image classification. Whilst deep learning has been successful, they cautioned that it needs a lot of input data to work effectively.
5. Having your learning cake and eating it: supervised, unsupervised and reinforcement learning
Lastly, I heard a great cake analogy from Pieter Abbeel (UC Berkeley) to describe the various forms of machine learning. Abbeel said supervised learning is the cake: it involves labelling a training set in pairs of inputs and outputs. For example, say we want to build a model to translate from English to French. Our training set is made up of sentences in English and their direct translations in French. The icing is unsupervised learning. Here, we provide many texts in English and French as our training set, without needing to translate each one. The cherry on top is reinforcement learning, where we define simple rules to govern how a model can behave and seek to maximise a reward. DeepMind used this approach to allow a computer to discover the best strategy for winning at chess, by repeatedly playing against itself.
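The reinforcement learning "cherry" can be sketched with a toy Q-learning loop. This is a minimal, hypothetical example — a five-state corridor with a reward at one end, nothing like DeepMind's actual self-play systems — but it shows the same principle of simple rules plus a reward to maximise:

```python
import random

random.seed(0)

# Toy environment: states 0..4 on a line; reaching state 4 pays reward 1.
N_STATES, ACTIONS = 5, (-1, +1)   # actions: step left or step right
ALPHA, GAMMA, EPISODES = 0.5, 0.9, 500

# Q-table: estimated future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(EPISODES):
    state = 0
    while state != N_STATES - 1:
        action = random.choice(ACTIONS)            # explore randomly
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # Q-learning update: move the estimate towards
        # reward + discounted value of the best next action.
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# The learned policy: in every state, stepping right is the better action.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # -> [1, 1, 1, 1]
```

No labelled input/output pairs are supplied at any point: the agent discovers the strategy purely from the reward signal, which is what distinguishes this cherry from the supervised cake.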
Saeed Amen is a systematic FX trader who has run a proprietary trading book in liquid G10 FX since 2013. He developed systematic trading strategies at major investment banks including Lehman Brothers and Nomura, and runs Cuemacro, a consulting and research firm focused on systematic trading.
Have a confidential story, tip, or comment you’d like to share? Contact: firstname.lastname@example.org