photo credit: Jochen Abitz @flickr
This past spring I was compelled to rejoin what one of my former partners had longingly referred to as “civilization.” The process of rejoining the civilized was itself of note in an environment so changed as to be unrecognizable, but I’ll skip that for now. Instead I have some observations on the interesting spot in which I’ve found myself: writing dark-pool-aware algos from the inside. That is, I’m working at a block-trading ‘dark pool’, on the team that develops its quantitative strategies.
While still within the world of algorithmic trading, this is a substantial change from what I’d been doing before and has proven a rich ground for learning, in particular about market structure. The biggest aspect of the change – besides being civilized – is the change of perspective from the prop trader to, effectively, an execution trader. As a prop trader you are looking to identify and execute trading opportunities. Seeking alpha. Instead, as an execution trader you receive orders and need to execute them with some highly customized sets of constraints. You want to get things done over some time frame with some appropriate balance of aggressiveness and stealth. Liquidity seeking. The ‘what’ has already been decided for you; it’s the ‘how’ you need to worry about. Thus, there’s some loss of ‘agency’ in going from the former role to the latter and this corresponds precisely and inversely with the notion of agency trading.
Going from alpha-seeking to seeking-liquidity is a change of perspective, but the blocking and tackling are constant. In the end, you’re trading – managing orders and positions and deluges of market data and analytics; familiar, fun stuff.
What I’ve found most interesting is the new perspective I’m afforded on market structures.
People have asked me how I go about implementing a strategy in Stratbox. While I’ve illustrated a good number of strategies running in Stratbox in these pages, I’ve never walked through a non-trivial example from conception through design, implementation and iteration. Today we’ll go through a reasonably complex example in total detail (I’ll provide source).
The example I’ve chosen is, I think, very nice because it’s a portfolio-oriented strategy, which is pretty much the only kind I care to explore; it’s also based around the concept of pairs trading, which is something most can easily relate to; and finally, it’s already in the public domain and yet almost certainly has some juice in it for those who care to understand it and extend it intelligently.
The example comes from the blog of a company, Palantir, which builds (something like?) analytical/decision-support software for both finance and intelligence-gathering services (quants and spooks – spooky quants?). The specific example is here and is described thusly:
Today we return to our series on regime switching and the topic of managing portfolios of strategies. In particular, we build on the examples illustrated in sensitivity testing and steppin’ out, in which we showed historical and then real-time ‘forward-walking’ of strategies. The next step we’d described was to evolve the techniques illustrated to support the real-time management of a portfolio of strategies.
In the example below, we look at another ‘meta’ strategy named StrategyPortfolio which maintains a dynamic portfolio – P – of strategies which it will select from a set of strategies – S – running concurrently in simulation. The constituents of P as well as their cash allocations and parameterizations will be rebalanced/adjusted regularly after an initial ‘out-of-sample’ period during which only the S strategies are run.
Apart from education, the intention of this strategy, as I’d originally suggested here, is to ‘back into’ a regime-switching strategy without attempting to quantify the regimes explicitly.
This has proved to be even more interesting than I’d expected, not so much because it performs particularly well (though it’s promising), but because of all of the things it has taught us. In particular, the transitions are a killer and there are properties of strategies which (dis-)qualify them from being effective in such a scheme…
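The StrategyPortfolio idea above can be made concrete with a small sketch: run a set S of candidate strategies in simulation, and after an out-of-sample warm-up, periodically rebalance a live portfolio P to the recent top performers. The class names, the scoring rule, and all parameter values here are my own illustrative assumptions, not the actual implementation.

```python
# Hypothetical sketch of a StrategyPortfolio-style meta-strategy.
# The scoring rule (mean recent P&L) and all parameters are assumptions.
from dataclasses import dataclass, field

@dataclass
class CandidateStrategy:
    name: str
    pnl_history: list = field(default_factory=list)  # simulated per-period P&L

    def recent_score(self, lookback=20):
        window = self.pnl_history[-lookback:]
        return sum(window) / len(window) if window else 0.0

class StrategyPortfolio:
    def __init__(self, candidates, warmup=50, top_n=3, rebalance_every=10):
        self.candidates = candidates      # the set S, running in simulation
        self.portfolio = []               # the live portfolio P (by name)
        self.warmup = warmup              # out-of-sample periods before P exists
        self.top_n = top_n
        self.rebalance_every = rebalance_every
        self.period = 0

    def on_period(self, simulated_pnls):
        """simulated_pnls: {strategy_name: pnl} for this period."""
        for c in self.candidates:
            c.pnl_history.append(simulated_pnls.get(c.name, 0.0))
        self.period += 1
        if self.period >= self.warmup and self.period % self.rebalance_every == 0:
            ranked = sorted(self.candidates,
                            key=lambda c: c.recent_score(), reverse=True)
            self.portfolio = [c.name for c in ranked[:self.top_n]]
```

The ‘transitions are a killer’ observation lives precisely in the rebalance step: each swap of P’s constituents carries entry/exit costs that a scheme like this must account for.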
We’ve been looking at what we’ve been calling “meta-strategies” – strategies that act upon other strategies – with the goal of implementing something like we’d described in the recent regime-switching post. (Please note that since then I’ve added a category to capture this thread.)
Last time we saw an example of historical forward-walking of a portfolio-oriented day-trading strategy which utilized daily data. This time we do something a bit more interesting and correspondingly complex. Today we’ll look at a real-time forward-walk of a moderate-frequency strategy (trades perhaps a few hundred times in a day) which looks at the top-of-the-book but doesn’t use market-depth. The strategy is a simple mean-reverter that we’ve described before though we’ve had to make some small changes to get it to behave in the context we’re looking at now…
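A minimal sketch of the kind of top-of-book mean-reverter described: track a rolling mean and standard deviation of the mid price and lean against moves that stretch beyond a z-score threshold. The window length and threshold here are illustrative assumptions, not the parameters from the post.

```python
# Toy top-of-book mean-reverter: signal only, no order management.
# Window and entry threshold are illustrative assumptions.
from collections import deque
import math

class MeanReverter:
    def __init__(self, window=100, entry_z=2.0):
        self.mids = deque(maxlen=window)
        self.entry_z = entry_z

    def on_quote(self, bid, ask):
        mid = (bid + ask) / 2.0
        self.mids.append(mid)
        if len(self.mids) < self.mids.maxlen:
            return 0  # still warming up the rolling window
        mean = sum(self.mids) / len(self.mids)
        var = sum((m - mean) ** 2 for m in self.mids) / len(self.mids)
        std = math.sqrt(var)
        if std == 0:
            return 0
        z = (mid - mean) / std
        if z > self.entry_z:
            return -1  # mid stretched high: lean short
        if z < -self.entry_z:
            return +1  # mid stretched low: lean long
        return 0
```

At a few hundred trades a day, a signal like this would still need the ‘small changes’ the post alludes to (entry/exit hygiene, position limits) before it behaves in a real-time forward-walk.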
'optimization' or 'search'?
We’ve been looking at how a strategy container might view and implement a variety of modes for strategies it will launch and contain. Last time I documented a uniform initialization process for many of them, including a posited walk-forward parameter optimization mode. I’ve implemented an initial version of this that I’ll illustrate through a screencast (first ever – be gentle) below, but before continuing want to raise a couple of cautionary notes about the slope we’re traversing here.
From the very first post on this blog I’ve tried to underline the danger that over-‘optimization’ poses, in view of the simple, unalterable fact that if you look at enough random junk, you are bound to see things that look impossibly good. That doesn’t mean they’re actually good. In the context of trading strategy development this is a particular danger, as strategy parameter optimizers are easy to come by and can be very misleading if employed naively. I think this is in part due to the term ‘optimization’, which is really a stretch for what these tools do. They’re better described as search tools: they search through a tuple-space of possible parameter combinations that you’ve specified and then rank the results by some criteria you specify.
They’re still useful, but less as ‘optimizers’ and more as tools for judging the sensitivity of the strategy to different parameterizations. If the strategy demonstrates good performance and stability over a variety of market conditions and parameterizations, you may just have found yourself a winner…
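The ‘search, not optimization’ framing can be sketched in a few lines: enumerate the tuple-space, score each combination with a user-supplied criterion, and return the whole ranked surface rather than a single ‘optimum’, since the shape of that surface is the interesting output. Function and parameter names are mine.

```python
# Sketch of parameter search over a tuple-space; names are illustrative.
from itertools import product

def parameter_search(param_grid, score_fn):
    """param_grid: {name: [values]}; score_fn: dict -> float.
    Returns every (params, score) pair, best score first, so the whole
    sensitivity surface can be inspected, not just the top entry."""
    names = sorted(param_grid)
    results = []
    for combo in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, combo))
        results.append((params, score_fn(params)))
    return sorted(results, key=lambda r: r[1], reverse=True)
```

The sensitivity reading then falls out naturally: a top result whose neighboring parameterizations score sharply worse is probably noise; a broad plateau of decent scores is the kind of stability worth trusting.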
Anyway, I felt that had to be said…
There seems to be a developing meme out there suggesting that algorithmic-, and in particular high-frequency, trading is some kind of gold-rush route to easy money which brings to mind…
…this revision of a paper I’d read previously: “Statistical Arbitrage in the US Equities Market” by Avellaneda and Lee. It’s a detailed and thoroughly worked (and now re-worked) paper illustrating the development and analysis of a US equity stat-arb strategy based on Principal Component Analysis (PCA) and then revised to use ETFs.
I came across this paper as I have still never used PCA in any of my own strategy development work and read Carol Alexander’s excellent Market Models over my summer vacation with an eye towards giving a PCA hedging model a spin in the near-term. Thus, I wanted another look at this paper as a reference point. Although it’s an excellent paper, I’m not going to urge you to go out and read it immediately unless you have a reasonably pressing practical interest. Instead, I find it interesting largely because of one of its authors – Professor Avellaneda – and its conclusions in the form of its strategies’ performance.
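For a back-of-envelope feel for the PCA step in this kind of stat-arb work: eigendecompose the correlation matrix of a returns panel and read the leading eigenvectors as ‘eigenportfolio’ weights. This is a generic PCA sketch under my own assumptions, not Avellaneda and Lee’s actual construction.

```python
# Generic eigenportfolio sketch, not the paper's construction.
import numpy as np

def eigenportfolios(returns, n_factors=2):
    """returns: (T, N) array of asset returns. Returns an
    (n_factors, N) array, one eigenportfolio weight row per factor,
    ordered by descending eigenvalue."""
    corr = np.corrcoef(returns, rowvar=False)   # (N, N) correlation matrix
    eigvals, eigvecs = np.linalg.eigh(corr)     # eigh: ascending eigenvalues
    order = np.argsort(eigvals)[::-1]           # re-sort descending
    return eigvecs[:, order[:n_factors]].T
```

The leading eigenportfolio of an equity panel is typically the ‘market mode’ (same-sign weights across all names); the paper’s trading signal comes from what’s left after such factors are removed.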
I’ve seen Prof Avellaneda speak a number of times at a variety of quant meetups organized by the relevant Columbia/NYU financial engineering depts. His paper reminds me that at least once during my noisome adolescent years, my father intoned darkly that:
the streets are littered with brilliant minds
mean-reverting strategies (Picture: wikimedia)
The real world, despite having lost some mind-share to reality tv, Tim Geithner and the internet, remains an interesting place and a potentially fruitful source of inspiration for our own efforts at strategy development. In this pictorial post, I’ll take a look at some of the physical phenomena which inspire or reflect various trading strategies/domains.
One of the challenges of algorithmic trading is that although there’s plenty of interest in the space, practitioners aren’t generally forthcoming about their observations. Academics, meanwhile, focus on things that are frequently not immediately practicable, or, when they might be, always seem to set up a little hedge fund on the side while publishing colorful chum about how markets are ‘behavioural’ or some such.
Even if it’s hard to find good stuff, one must still look, as there’s always more information that can help you than you can effectively process or retain. A few weeks ago I was trying to formalize the expected profit function of an algorithm I’m developing and wanted to see what people had written about the topic. I entered ‘define profit function for trading algo’ into Google and was pleasantly surprised to see a paper entitled ‘Multi-strategy trading utilizing market regimes’ by Mlnarik, Ramamoorthy and Savani. It doesn’t directly cover the topic I was looking for, but instead addresses a number of related topics I’ve been interested in for some time:
- the treatment of a strategy as an instrument in its own right
- composing portfolios comprised of strategies
- using regime switching techniques to manage portfolios of strategies
In this post, I’ll briefly review their paper, illustrate how one can easily model strategies in relevant ways using the strategy ‘object model’ I’ve described previously through an example, and conclude with some thoughts on how these kinds of strategies might be implemented and further explored.
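One way to make ‘a strategy as an instrument in its own right’ concrete: expose a strategy’s cumulative P&L as a synthetic price series, so anything that consumes instruments (allocators, regime detectors) can consume strategies too. The class and function names below are illustrative assumptions, not the object model from the earlier posts.

```python
# Illustrative 'strategy as instrument' wrapper; names are assumptions.
class StrategyAsInstrument:
    def __init__(self, name, start_equity=100.0):
        self.name = name
        self.prices = [start_equity]  # synthetic 'price' = equity curve

    def on_pnl(self, pnl):
        self.prices.append(self.prices[-1] + pnl)

    def returns(self):
        # per-period returns of the equity curve, like any instrument
        return [b / a - 1.0 for a, b in zip(self.prices, self.prices[1:])]

def equal_weight_portfolio_returns(instruments):
    """Average per-period returns across strategy-instruments --
    the simplest possible portfolio of strategies."""
    series = [i.returns() for i in instruments]
    return [sum(rs) / len(rs) for rs in zip(*series)]
```

Once strategies look like instruments, the regime-switching layer reduces to a familiar allocation problem over their return series.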
My son recently had his first birthday and amazes me daily with his new feats as he runs around increasingly stably exploring the world around him. It occurs to me that the system I use to trade every day, Stratbox, is approaching its fourth “birthday” in the next few months. I hadn’t originally intended to write a system – an algorithmic trading platform – but found that existing products were limited, expensive and didn’t fit my mental model of what they should do.
This isn’t surprising as I wanted the system to support all of the activities associated with our algorithmic trading. It turns out that that’s a lot to ask of a system. It also turns out that you learn as you go and so the system continues to evolve. A few years ago I’d posted about the basics of a strategy container and in this post I’m going to come back to this topic and describe some of the layers of code and thought developed since then.
First, let’s consider the role of a strategy container. Its job is to intermediate between trading strategies and the external environments with which they interact. It must also provide services that strategies can use (e.g., position management) and that it wouldn’t make sense for each strategy to re-implement. In the past I’ve focused on the former responsibility of adapting strategies to external environments. Why is this necessary and interesting? Because it allows us to take the same exact strategy and run it live, or in simulation or in backtest, etc. Interesting and necessary, but not what I want to focus on this time. Instead, I want to look at the services provided to strategies; the ‘ecosystem’ a strategy container provides in the hope that strategies might flourish within it.
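A thumbnail of that ‘ecosystem’ idea: the container owns shared services (here, just position management) and hands each strategy a context object, so the same strategy code runs live, simulated, or in backtest depending on what the container wires in. These interfaces are my illustrative assumptions about such a design, not Stratbox’s actual code.

```python
# Illustrative strategy-container sketch; interfaces are assumptions.
class PositionService:
    """A shared service no strategy should have to re-implement."""
    def __init__(self):
        self.positions = {}   # symbol -> signed quantity

    def apply_fill(self, symbol, qty):
        self.positions[symbol] = self.positions.get(symbol, 0) + qty

    def position(self, symbol):
        return self.positions.get(symbol, 0)

class StrategyContext:
    """What the container hands each strategy: shared services plus an
    order-submission hook supplied by the environment adapter."""
    def __init__(self, positions, submit_order):
        self.positions = positions
        self.submit_order = submit_order   # live, sim, or backtest impl

class Container:
    def __init__(self, submit_order):
        self.positions = PositionService()
        self.strategies = []
        self._submit = submit_order

    def launch(self, strategy_cls):
        ctx = StrategyContext(self.positions, self._submit)
        strategy = strategy_cls(ctx)
        self.strategies.append(strategy)
        return strategy
```

Because the strategy only ever talks to its context, swapping the `submit_order` hook is all it takes to move the exact same strategy between live trading, simulation and backtest.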
I’ve been saving the above image in a stubbed-out blog post I’ve wanted to write since a conversation I’d had in Jerusalem last fall. The recent attention to high frequency trading and all of its attendant evils has reminded me that the topic is relevant and so I relate various thoughts at the risk of jumping on a cacophonous bandwagon of rumbling misinformation.
First of all, the conversation. It was with a talented guy who acted as the CFO for a variety of companies, including a small startup hedge fund which traded US equities at high frequency. Although he was a part-time CFO, he seemed pretty plugged into their trading operations. He noted that they use an agency-only brokerage service for automated traders that I’m familiar with, and that they were “looking at full data” for many hundred stocks concurrently. He remarked that their trading was going well but that their hit rate was something like 4% and dropping. By hit rate, he meant that they were placing limits frequently and generally pulling the orders if they didn’t get hit immediately. He didn’t specify, but I imagine that “immediately” might range from milliseconds out to a second or twenty. If the market is composed of makers and takers, then these guys were definitely makers of liquidity in the strict sense that they were placing limits and making markets.
At the time I thought it was interesting because it seemed that so many people were focused on the very, very short-term trade that the frequency was becoming saturated. It looked like a reminder that trading frequencies populate a spectrum; in this case, this part of the spectrum was becoming so crowded that returns were increasingly difficult to obtain as more players piled into it. I’m not sure how this hedge fund has fared, but at the time I remember thinking that they were going to have a tough time competing if they were geared only for high-frequency trading, since the space becomes increasingly expensive to play in as the inevitable talent and technology arms race marches on.
Lo and Khandani provide the image below, illustrating this phenomenon happening to a class of contrarian strategies Lo & MacKinlay had described in 1990. The strategies stop working as people squeeze out the alpha.