The real world, despite having lost some mind-share to reality tv, Tim Geithner and the internet, remains an interesting place and a potentially fruitful source of inspiration for our own efforts at strategy development. In this pictorial post, I’ll take a look at some of the physical phenomena which inspire or reflect various trading strategies/domains.
One of the challenges of algorithmic trading is that although there’s plenty of interest in the space, practitioners aren’t generally forthcoming about their observations. Academics, for their part, tend to focus on things that are rarely of immediate practical use, or, when they might be, always seem to set up a little hedge fund on the side while publishing colorful chum about how markets are ‘behavioural’ or some such.
Even if it’s hard to find good stuff, one must still look, as there’s always more information that can help you than you can effectively process or retain. A few weeks ago I was trying to formalize the expected profit function of an algorithm I’m developing and wanted to see what people had written about the topic. I entered ‘define profit function for trading algo’ into google and was pleasantly surprised to see a paper entitled ‘Multi-strategy trading utilizing market regimes’ by Mlnarik, Ramamoorthy and Savani. It doesn’t directly cover the topic I was looking for, but it does address a number of related topics I’ve been interested in for some time:
- the treatment of a strategy as an instrument in its own right
- composing portfolios of strategies
- using regime switching techniques to manage portfolios of strategies
In this post, I’ll briefly review their paper, illustrate through an example how one can easily model strategies in relevant ways using the strategy ‘object model’ I’ve described previously, and conclude with some thoughts on how these kinds of strategies might be implemented and further explored.
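To make the first and third of those topics concrete, here’s a minimal sketch of treating strategies as instruments and switching between them by regime. Everything here is hypothetical illustration on my part (the `Strategy` class, the toy trailing-mean regime rule, and all the numbers are mine, not from the paper or from my object model):

```python
# Hypothetical sketch: a strategy viewed as an instrument is just a
# name plus its per-period return series; a crude regime detector
# then decides which strategy's returns the portfolio takes each period.

from dataclasses import dataclass
from statistics import mean


@dataclass
class Strategy:
    """A strategy treated as a tradable instrument in its own right."""
    name: str
    returns: list  # per-period fractional returns, e.g. 0.01 == +1%


def detect_regime(market_returns, window=3, threshold=0.0):
    """Toy regime rule: 'trending' if the trailing mean market return
    exceeds the threshold, else 'mean_reverting'."""
    return "trending" if mean(market_returns[-window:]) > threshold else "mean_reverting"


def regime_switching_portfolio(strategies_by_regime, market_returns, window=3):
    """Each period, allocate entirely to the strategy mapped to the
    regime detected from prior market data; collect the portfolio's returns."""
    portfolio = []
    for t in range(window, len(market_returns)):
        regime = detect_regime(market_returns[:t], window)
        portfolio.append(strategies_by_regime[regime].returns[t])
    return portfolio


# Made-up return series purely for illustration.
trend = Strategy("trend_follower", [0.02, 0.01, 0.03, -0.01, 0.02, 0.01])
revert = Strategy("mean_reverter", [-0.01, 0.02, -0.02, 0.03, -0.01, 0.02])
market = [0.01, 0.02, 0.01, -0.02, -0.01, -0.03]

pnl = regime_switching_portfolio(
    {"trending": trend, "mean_reverting": revert}, market, window=3)
```

The point isn’t the (deliberately naive) regime rule; it’s that once a strategy is modeled as an instrument exposing a return series, composing and switching among strategies looks just like managing a portfolio of ordinary assets.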
I’ve never been a hardware guy. Hardware has gotten so fast throughout my professional life that it has just never been a big issue. Also, on wall st we had a robust annual budget for h/w, so I’d routinely sign off on hundreds of thousands of dollars for all sorts of machines I’d never lay eyes on, and somehow they always did the trick.
Before 9/11, they’d be in server racks in the building or down the street, but since then they might also be in increasingly far-flung places like weehawken or long island, tampa, even texas or beyond. The machines always seemed unbelievably overpriced – I remember, over the years, pretty consistently paying something like $40K for a low-end db server. But that’s what it cost, and you could only purchase approved products from approved channels, so nobody gave it much thought. Now that I don’t have the same kinds of constraints – or budgets! – I increasingly have to think about hardware.
As a software engineer, I find the hardware itself is also insisting that I pay some uncharacteristic attention to it. The evolution of processors has reached a point where the programming paradigms many of us have fruitfully employed over the years are no longer suited to getting full performance out of today’s machines. The recent introduction of remarkably powerful and inexpensive parallel-computing platforms based on GPUs, like nvidia’s cuda, also outlines a future that even current university training doesn’t address in a fashion practically adapted for institutional application. Cores are multiplying like Tribbles.
The lines between persistent storage and main memory are also blurring, as consumer SSDs push up from the ‘low’ end while exotic ioDrives and the like offer a glimpse of a world where the performance gap between the two approaches nil and, after their long reign, myriad metallic platters will spin no more.