
This week, former investment banker, finance lecturer and writer Dave Coker takes us on his personal journey through the history of machine learning, from early expert systems to trading bots that beat the market.

The early systems

After my undergraduate degree (Math and Computer Science), my career started at AT&T Bell Labs in New Jersey, building Expert Systems. Grandly named, these were, at their core, programmes that could reach conclusions. By today’s standards they were rather crude, but by the standards of the time they were impressive. The programmes would present a series of YES/NO questions and, after you’d answered enough of them, would reach a conclusion. More properly known as “forward chaining expert systems”, they were pretty breathtaking at the time but very, very constrained in terms of utility.
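To give a feel for how constrained these systems were, here is a minimal sketch of the forward-chaining idea in modern Python. The rules, facts and conclusions are all invented for illustration; the real Bell Labs systems were far larger, but the principle — built-in rules fired by accumulated YES/NO answers — is the same.

```python
# A toy forward-chaining expert system: all knowledge is built in as rules,
# and YES/NO answers accumulate facts until a rule's conditions are all met.
# The rules and facts below are purely illustrative.
RULES = [
    ({"has_fever", "has_cough"}, "likely flu"),
    ({"has_fever", "has_rash"}, "see a doctor"),
]

def diagnose(answers):
    """answers maps a fact name to True/False (the YES/NO responses)."""
    facts = {fact for fact, yes in answers.items() if yes}
    for conditions, conclusion in RULES:
        if conditions <= facts:  # every condition for this rule is satisfied
            return conclusion
    return "no conclusion reached"
```

With enough YES answers the chain of rules leads, inevitably, to one of the built-in conclusions — and to nothing the programmer didn’t anticipate.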


Of course, I built those Expert Systems using 1980s technology, both hardware and software. Most people are very familiar with the pace of innovation in hardware: personal computers today are several orders of magnitude more capable than they were in the 1980s. Not to mention pervasive – when I bought my first PC in the 1980s, owning one was considered strange and unusual. But most people haven’t paid attention to software, simply because it’s behind the scenes, out of sight and out of mind. Yet advances in software are equally breathtaking, especially since the emergence of a key new development paradigm in the 1990s: Object Oriented Programming. The late Steve Jobs used to say “objects are closer than you think”, and he was right on many levels.

Earlier approaches to programming were what we call “procedural”: programmers would write instructions that manipulated data. The time it took to write increasingly complex programmes grew in a non-linear manner, and we found our programmes taking much, much longer to debug, simply because of their size as measured in lines of code. A programme of twenty thousand or so lines was considered “large” and difficult to debug. By the early 1990s we’d reached the limit of what could be achieved with traditional programming techniques. In other words, the Expert Systems I’d built in the 1980s were the pinnacle of what those techniques could achieve.

The revolution begins

But then the Object-Oriented revolution began! Programmers no longer wrote ever more complex instructions; rather, they would create a relatively small and simple “object” – a package of both data and instructions – then create a more complicated object constructed, like building blocks, from the simpler objects. And so on. Think about how our bodies function so well: molecules form cells, which in turn form tissue, which comes together to comprise a functional organ, and there you go – a properly debugged human lives! Sadly, some of us, in the context of the human genome, do contain bugs, but the great majority of us function as designed. The same paradigm can be applied to cars, television sets, even the hardware that comprises a modern Personal Computer; these relatively complex objects are formed of sets of increasingly simpler objects.
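The building-block idea can be sketched in a few lines of Python. The classes and numbers here are invented for illustration; the point is simply that a complex object is assembled from simpler ones, each a package of data and instructions.

```python
# Building-block objects: each class packages data plus behaviour, and a more
# complex object is assembled from simpler, already-tested ones.
class Wheel:
    def __init__(self, diameter_cm):
        self.diameter_cm = diameter_cm

class Engine:
    def __init__(self, horsepower):
        self.horsepower = horsepower

    def start(self):
        return "vroom"

class Car:
    """A complex object composed, like building blocks, from simpler objects."""
    def __init__(self):
        self.engine = Engine(horsepower=120)
        self.wheels = [Wheel(diameter_cm=60) for _ in range(4)]

    def drive(self):
        # The Car delegates to its parts rather than re-implementing them.
        return self.engine.start()
```

Because `Wheel` and `Engine` can be tested on their own before `Car` is ever built, debugging the composite is far easier than debugging twenty thousand lines of procedural code.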


Now Object Oriented programming really was a revolution, because not only could far, far more complex systems be created, they were much easier to debug. As long as you tested the simplest objects first, then each more complex object – comprised of debugged simpler objects – you could gain much more confidence that the entire system would perform as expected. Not to say that modern systems are bug-free, anything but. However, we can build far, far more complex systems using modern software development paradigms than we could in the 1980s.

The future is NOW!

And that brings us to the present. We’re in the midst of something that even a decade ago would have been considered straight out of a science fiction movie – THE MACHINES CAN LEARN! How is this possible?

There are three key approaches to programming a computer to learn: Supervised learning, Unsupervised learning and Reinforcement learning. Each operates pretty much as you’d expect from the name, and more than likely you’ve already interacted with a system – a “bot” – programmed via one of these approaches. For example, many commerce websites these days feature “chatbots”, which offer to help you complete simple tasks. In China, Tencent’s WeChat bot allows you to undertake a very wide variety of routine tasks – pay bills, order food, reserve tickets, etc – simply by chatting with an entity via your mobile phone. You’re not chatting with a human, rather a software bot which is capable of not only understanding your request, but can act upon it, asking for additional details if necessary, and completing your activity with minimal interaction from you. And this isn’t niche technology – WeChat claims to have seven hundred million active users, all of whom are comfortable asking a bot to remove some of the routine drudgery from their lives.

Compare these programs to the relatively simple Expert Systems I built in the 1980s. Those programs were constructed with all their knowledge built in; they worked through relatively simple YES/NO questions which led, inevitably, to a conclusion. By contrast, machine learning today allows the computer program itself to acquire knowledge. The three key approaches differ in how this knowledge is acquired and evaluated, but the fundamental point remains: the machine can now educate itself about a problem or a topic, learning on its own with minimal human intervention. In other words, unlike a primitive Expert System, machine learning bots can be launched as near blank slates, adding to their knowledge base as they successfully solve problems. The problem can be as complex as booking a flight or as simple as reporting the current weather and train schedules. This is indeed science fiction-class technology, but available now.
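The contrast with the Expert System sketch is worth making concrete. Below is a deliberately tiny supervised-learning example – a one-nearest-neighbour classifier, with invented data – where no rules are programmed in at all: the program’s “knowledge” is simply the labelled examples it has been shown.

```python
# A near blank slate: instead of built-in rules, the program acquires its
# knowledge from labelled examples (supervised learning, 1-nearest neighbour).
# The example data below is invented for illustration.
def train(examples):
    """examples: list of (features, label) pairs; the 'model' is the data."""
    return list(examples)

def predict(model, features):
    """Label a new case by copying the label of the closest stored example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(model, key=lambda example: dist(example[0], features))
    return nearest[1]
```

Feed it more examples and its answers improve – no programmer ever writes the YES/NO decision tree.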

Objects are all around you

Again, the Object-Oriented paradigm allows us to construct systems capable of staggeringly complex behaviour. These days programmers rarely construct such systems from scratch; rather, they build their specific application using what are called frameworks – collections of related objects which already provide relatively complex functions. These frameworks are easily extended in disparate directions, to achieve a wide variety of goals. But there’s more!

As WeChat and various Western commerce websites have demonstrated, machine learning – and objects – are all around us now. And they’re the future. Already we’re seeing many routine tasks typically undertaken by people now being accomplished by bots driven by machine learning techniques. But ordering a pizza for you is just the start. Already more complex tasks are being handed over to bots, freeing humans to pursue additional, value-added activities. For example, my doctoral research focuses on high frequency trading (i.e., buying and selling shares in hundreds of milliseconds, compared to the seconds it would take a human to respond). Upon initial examination of what’s called “tick by tick” trading data captured from The New York Stock Exchange, we were surprised to find evidence of what appeared to be a number of bots, all exploiting pricing discrepancies lasting less than one second. These were opportunities humans would never be able to recognise, let alone act upon. But the bots can, and we’re now seeing them push their response times down, moving faster and faster as they pursue trading profits. So bots aren’t only on our phones and commerce websites, they’re in the markets and taking finance in directions unimaginable just a few years ago.
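To illustrate the kind of thing those bots are looking for, here is a simplified sketch that scans a stream of ticks for moments when two venues briefly disagree on price. The tick format, venue names and threshold are all invented; real tick data and real arbitrage logic are vastly more involved.

```python
# Hypothetical tick data: (timestamp_ms, venue, price) for one stock quoted on
# several venues. A bot hunts for moments when venues disagree by more than a
# threshold -- gaps that close far too quickly for any human to act on.
def find_discrepancies(ticks, threshold=0.05):
    latest = {}   # venue -> (timestamp_ms, last seen price)
    hits = []
    for ts, venue, price in ticks:
        latest[venue] = (ts, price)
        for other, (other_ts, other_price) in latest.items():
            if other != venue and abs(price - other_price) > threshold:
                # A fleeting cross-venue gap: the kind of sub-second
                # opportunity only a bot could recognise and act upon.
                hits.append((ts, venue, other, round(price - other_price, 4)))
    return hits
```

Run over millions of ticks, a scan like this is how we spotted the footprints of bots in the exchange data.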

New York Stock Exchange

Bots managing your money?

Yes. Over the past year or so we’ve seen the widespread deployment of bots actively managing investment portfolios. Just a few years ago it was possible to get a finance degree and take a job managing a portfolio for an investment bank or hedge fund, earning a relatively good wage by matching – or, better yet, beating – a market benchmark such as The S&P 500 or The FTSE 100. But these jobs are now being taken by, you guessed it, bots. Not only do the bots work for less money, but they also work tirelessly, twenty-four hours a day if necessary. And their attention to detail is superb.

Many of these bots are programmed along the Reinforcement learning paradigm. We start by programming in the basics of portfolio construction as we teach them to students today: assets have both risk and return, and what we call pairwise correlation. We give the bot a collection of stocks it can buy or sell, and a target benchmark to beat. At this point the bot can, by itself and without human interaction, learn how to construct a portfolio that not only matches, but beats, that target benchmark.

Often these bots are programmed using proprietary portfolio construction techniques, and routinely we know these bots not only beat target benchmarks, but also replace human managers. But it’s not all bad news.

They took our jobs!

Like any labour-saving device mankind has ever invented, all these bots will do is free people to pursue additional creative activities. And by creative I don’t mean writing poetry or music (although I have recently reformed my 1980s-era punk band, much to the chagrin of my supportive wife). I mean using bots and the work they accomplish as a foundation to achieve more. For example, we’re now seeing the emergence of active trading strategies, driven by people, intended to fool and flummox bots. This allows humans to take advantage of our inherent creativity and natural ability to “think outside the box”. We’re seeing people creating what are called smart beta trading strategies, allowing part of the assets under management to minimally match a benchmark, and often outperform it. Others are creating new financial products intended to capture components of the market typically not directly observed or traded – for example, volatility. Unconcerned with the usual buy-or-sell decision, these traders will generate profit no matter which way the market moves, as long as it moves.
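The classic example of a trade that profits “as long as it moves” is the straddle: buy a call and a put at the same strike, and the position pays off whenever the market moves far enough in either direction. A tiny payoff calculation, with invented prices, makes the point:

```python
# Payoff at expiry of a long straddle (one call plus one put, same strike).
# The position is indifferent to direction; it only needs a large move.
def straddle_payoff(spot_at_expiry, strike, premium_paid):
    call = max(spot_at_expiry - strike, 0.0)  # call pays if the market rose
    put = max(strike - spot_at_expiry, 0.0)   # put pays if the market fell
    return call + put - premium_paid
```

A big move up and a big move down of the same size produce the same profit; only a market that goes nowhere loses the premium.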

“Machines could always work harder than humans. But can they now outthink humans? Maybe. Their cognitive abilities will soon rival humans.”

So, bots won’t be elbowing humans out of finance in our lifetimes any more than self-driving cars will be replacing all taxi and delivery drivers. Instead, bots will allow the nature of work to change, just as certainly as the day routine of the trading desk has changed during my career.

The only constant is change

To maximise benefit from change you must be flexible. Your degree is only the start. All of us working in the field must continue to ensure our skills and knowledge are not only current, but that we’re also aware of what some call “the bleeding edge”. Where is your field headed? Why? Most importantly, how can you get there before the crowd, your competitors for the top jobs and the best clients? Adopting this mentality will give you a broad perspective on your field’s direction, and a motivation to succeed in an era of change.

Like the machines, our learning needs to continue. Maybe not for a bot-like twenty-four hours a day – you do need to set time aside to see my band perform – but we all need to be actively engaged in our fields.

The Greek philosopher Heraclitus said “change is the only constant in life”. If this was true in 500 BC, we know it’s even more true at a time when changes happen at dizzying speed. Remain flexible and actively pursue life-long learning opportunities, and – like the trading bots – you will succeed. Especially if you come to one of my gigs.

Dave Coker is an expatriate New Yorker who has lived in London since 1997. He started work in Investment Banking in the early 1980s, with Dow Jones, before moving to Deutsche Bank, where he was Vice President of Global Risk Management. He has also been responsible for Professional Services in Europe, The Middle East and Africa for Moody’s, and Global Programme Manager Risk Management Technology for ABN AMRO. Internationally educated, he’s completed a PhD in Finance (Zurich), an MSc in Quantitative Finance (London), an MBA (London), and studied Mathematics and Computer Science at undergraduate level (New York). Dave also writes and sells market commentary to several banks and hedge funds and consults on Credit Risk to a Global Tier 1 Investment Bank. Dave’s band – Tweets of Rage – will be performing in London October 2017, and their album will be out in 2018.

If you are interested in Westminster Business School’s upcoming Fintech courses, email

