If attempts to predict the switching from a bull to a bear market were even slightly successful, one could focus on just this one type of switching and call it a day. If only it were that easy! In fact, the difficulty of predicting this type of bull/bear switching has encouraged researchers to examine other types of regime switching in financial markets instead, in the hope of finding some that may be more amenable to existing statistical tools.
Some of the other common financial or economic regimes studied are inflationary vs. recessionary regimes, high vs. low volatility regimes, and mean-reverting vs. trending regimes. Among these, volatility regime switching seems to be the most popular. This is hardly surprising, given financial economists' long history of success in modelling volatilities, as opposed to the underlying stock prices themselves. Unfortunately, while such predictions of volatility regime switches can be of great value to options traders, they are of no help to stock traders.
Academic attempts to model regime switches in stock prices generally proceed along the following lines:
- Propose that the two (or more) regimes are characterised by different price probability distributions. In the simplest cases, the logarithms of the prices of both regimes may be represented by normal distributions, except that they have different means and/or standard deviations.
- Assume a constant probability of transition among the regimes at each time step.
- Determine the exact parameters that specify the regime probability distributions and the probability of transition by fitting the model to past prices, using standard statistical methods.
- Based on the fitted model above, find out the expected regime of the next time step and more importantly, the expected stock price.
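The steps above can be illustrated with a deliberately simplified sketch. Instead of full maximum-likelihood fitting (EM), hard regime labels based on rolling volatility are used here; the function names, the ten-day window and the 2 per cent volatility threshold are illustrative assumptions, not taken from the academic models being described.

```python
import numpy as np

def fit_two_regime_model(log_returns, vol_window=10, vol_threshold=0.02):
    """Toy two-regime model: label each day 'quiet' (0) or 'volatile' (1)
    by rolling volatility, then estimate each regime's return distribution
    and the empirical transition matrix between regimes."""
    r = np.asarray(log_returns, dtype=float)
    # Crude regime label: rolling standard deviation of log-returns
    vol = np.array([r[max(0, i - vol_window):i + 1].std() for i in range(len(r))])
    state = (vol > vol_threshold).astype(int)

    # Per-regime Gaussian parameters (mean and std of log-returns)
    params = {s: (r[state == s].mean(), r[state == s].std())
              for s in (0, 1) if np.any(state == s)}

    # Empirical transition matrix: P[i, j] = P(next state j | current state i)
    P = np.zeros((2, 2))
    for a, b in zip(state[:-1], state[1:]):
        P[a, b] += 1
    P = P / np.maximum(P.sum(axis=1, keepdims=True), 1)
    return state, params, P

def expected_next_return(state_today, params, P):
    # E[r_{t+1}] = sum over regimes j of P(j | state today) * mean return of j
    return sum(P[state_today, j] * params[j][0] for j in params)
```

Note that the transition matrix estimated this way is constant over the whole sample, which is exactly the weakness discussed next.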
Despite the elegant theoretical framework, such regime-switching models are generally quite useless for actual trading purposes. The reason is that they assume a constant probability of transition among regimes at all times. In practice, this means that, at any time, there is always a very small probability of the stock transitioning from a normal, quiescent regime to a volatile regime. But this is useless to traders who want to know when - and under what precise conditions - the probability of transition suddenly peaks. This question can be tackled by the use of turning points models.
Turning points models take a data-mining approach and enter all possible variables that might predict a turning point or regime switch. Variables such as current volatility, the last-period return, and changes in macroeconomic numbers such as consumer confidence, oil prices or bond prices can all be part of this input. In fact, in a highly topical article about turning points in the real estate market by the noted Yale economist Robert Shiller, it was suggested that the crescendo of media chatter about impending boom or bust might actually be a good predictor of a coming turning point. (See 'All the News that's Fit to Trade' - p. 40, Automated Trader Q1 2008)
Data-mining to detect turning points
The following example illustrates how it might be possible to detect turning points using a data-mining approach with just simple technical indicators built on stock price series as inputs, while using stock returns over multiple holding periods as outputs. A prominent brokerage stock - Goldman Sachs (GS) - is used as a proxy for the financial sector. The objective is to detect the turning points of this sector where it goes from bull to bear and back. The initial hypothesis is that major shifts in interest rates, a release of government macroeconomic data or earnings announcements are likely triggers of turning points. In this example, a large percentage change in GS's stock price is used as a proxy for such news releases. Furthermore, it is assumed that whenever GS reaches an N-day high or low just before this large drop or rise in price occurs, this is a good signal that the previous regime is close to an end, so this condition is used as an additional input.

The search problem has a number of different elements. How large a percentage change is sufficient to trigger a regime switch? What should N be in the N-day high/low condition? And how long does the new regime generally last? (In other words, what is the optimal holding period?) To answer these questions in an old-fashioned, manual way would be extremely time consuming, as it would be necessary to run multiple simulations with different thresholds for the independent variables and multiple return horizons for the dependent variables.
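The combined condition - an N-day high or low immediately followed by a large one-day move - can be expressed directly. A minimal sketch, in which the function name, signal encoding and exact indexing convention are assumptions made for illustration rather than details from the article:

```python
import numpy as np

def turning_point_signals(close, n=10, pct_threshold=0.03):
    """Flag candidate turning points: a one-day move of at least
    pct_threshold occurring right after an n-day high or low.
    -1 = n-day high followed by a large drop (possible bearish turn);
    +1 = n-day low followed by a large rise (possible bullish turn)."""
    close = np.asarray(close, dtype=float)
    sig = np.zeros(len(close), dtype=int)
    for t in range(n, len(close)):
        ret = close[t] / close[t - 1] - 1.0
        window = close[t - n:t]              # the n days up to yesterday
        was_high = close[t - 1] >= window.max()
        was_low = close[t - 1] <= window.min()
        if was_high and ret <= -pct_threshold:
            sig[t] = -1                       # regime may be turning bearish
        elif was_low and ret >= pct_threshold:
            sig[t] = +1                       # regime may be turning bullish
    return sig
```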
The independent variables of the model are just the one-day returns of GS. The dependent variables are the future returns of GS with various holding periods. The objective is to find an optimal rule, or an optimal combination of rules, which will lead to the best backtest performance. In this case, each percent-change threshold can be encapsulated as a rule. Two thresholds are used for buys and two for shorts: -1 per cent, -3 per cent, +1 per cent, +3 per cent. Similarly, each holding period can be encapsulated as a rule, and in this case six such periods are used: 1, 5, 10, 20, 40 and 60 days.
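Crossing the four entry thresholds with the six holding periods gives a grid of 24 candidate rules, each of which can be backtested individually. A hedged sketch follows; the evaluation logic (enter at the next close after a trigger, hold for a fixed number of days, no overlapping positions) is one plausible reading of the article, not its exact implementation:

```python
import itertools
import numpy as np

def rule_returns(close, thresholds=(-0.03, -0.01, 0.01, 0.03),
                 holds=(1, 5, 10, 20, 40, 60)):
    """Evaluate every (entry-threshold, holding-period) rule pair.
    A positive threshold buys after a one-day rise of at least that size;
    a negative threshold sells short after a fall of at least that size.
    Returns a dict mapping (threshold, hold) -> gross cumulative return."""
    close = np.asarray(close, dtype=float)
    ret1 = close[1:] / close[:-1] - 1.0          # one-day returns
    results = {}
    for thr, hold in itertools.product(thresholds, holds):
        pnl = 0.0
        t = 0
        while t < len(ret1) - hold:
            triggered = ret1[t] >= thr if thr > 0 else ret1[t] <= thr
            if triggered:
                side = 1.0 if thr > 0 else -1.0
                # Return over the holding period, long or short
                pnl += side * (close[t + 1 + hold] / close[t + 1] - 1.0)
                t += hold                         # skip ahead to position exit
            else:
                t += 1
        results[(thr, hold)] = pnl
    return results
```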
Separate time series are generated for the price, the one-day percent-change and the ten-day highs/lows for the search. (For the sake of simplicity, N is fixed to ten for the strategy, but this parameter could of course be optimised as well.) The price series for the test is shown in box S1 of Figure 1. Several pre-packaged rules are then created that compute the one-day percent-change and the ten-day moving highs and lows of the time series.
The entry rules are then created. Figure 2 shows rule box R3 for the buy and sell rule based on a change of ±1 per cent. A similar rule is created for ±3 per cent in box R4. Note that by default, subsequent entry signals will override positions established by previous signals. The outputs of boxes R3 and R4 are then fed to boxes I5 and I6 respectively (see Figure 1), which contain the six holding periods (1, 5, 10, 20, 40 and 60 days) mentioned above.
Finally, a perceptron (see sidebar) learning algorithm is run on the outputs of I7 and I9. This algorithm will find out the best weights for the different rules with different holding periods (among other parameters) based on a moving window of historical training data with the objective of maximising the total profit in this window. Based on these optimised weights, the perceptron will trigger a buy and sell decision at the end of each period. Examples of other algorithms that can be selected are a genetic algorithm and a K-nearest-neighbour clustering technique (see sidebar).
Interestingly, the perceptron will not force us to hold a position for exactly N days, even though the component rule was constructed to do this in the moving window. Every day the strategy will decide whether to buy, sell or do nothing, based on the latest parameter-optimisation using the latest data in the moving window and the resulting linearly weighted decisions from the different rules.
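The mechanics can be sketched as follows. The article's perceptron maximises total profit in the moving window; in this simplified stand-in, a classic sign-prediction perceptron is retrained on the trailing window each day instead, and the function name, learning rate and epoch count are illustrative assumptions:

```python
import numpy as np

def perceptron_decision(rule_signals, future_returns, window=50, lr=0.1):
    """Walk forward through time. Each day, retrain a simple perceptron on
    the last `window` days, learning weights over the rule signals that best
    predicted the sign of the next-day return, then emit today's decision:
    +1 buy, -1 sell, 0 do nothing."""
    X = np.asarray(rule_signals, dtype=float)   # shape (T, n_rules), entries -1/0/+1
    y = np.sign(np.asarray(future_returns))     # shape (T,): next-day return sign
    T, n = X.shape
    decisions = np.zeros(T)
    for t in range(window, T):
        w = np.zeros(n)
        b = 0.0
        for epoch in range(5):                   # a few perceptron passes
            for i in range(t - window, t):
                pred = np.sign(X[i] @ w + b)
                if pred != y[i]:                 # classic perceptron update
                    w += lr * y[i] * X[i]
                    b += lr * y[i]
        decisions[t] = np.sign(X[t] @ w + b)
    return decisions
```

Because the weights are re-fitted every day, the resulting position can change daily, which is why the strategy is not forced to hold for exactly N days.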
Figure 3 shows three of the best equity curves that come from the perceptron optimisation. The best curve belongs to a model using a 50-day moving window for the optimisation. (The length of the moving window can of course itself also be optimised, but in the interests of simplicity that step is omitted here.) The sidebar of the chart shows that this strategy has a 37.93 per cent gross cumulative return over a six-month backtest period, with 89 round-trip trades. (Compare this with the 15.77 per cent return of a buy-and-hold strategy on GS, with a 14 per cent drawdown.) To quantify the improvement from the optimisation, the best equity curve among the individual holding-period rules is also shown (a ten-day holding period in I5 on the 1 per cent rule at R3); it has a gross cumulative return of 18.55 per cent over the period.
Though the backtest period is short, this return nevertheless looks very impressive. So could anything be amiss? In particular, what about the data-snooping or data-mining bias that seems to creep into every strategy based on machine learning or artificial intelligence? One possible solution is to optimise all rules and all parameters in a backward-looking moving window, so that absolutely no unseen future data is used in the backtest. Of course, data-snooping bias can still creep in because we can abandon a whole category of models when the performance is poor and try one new category after another until it improves. But then, this is unavoidable whenever we are in the business of backtesting.
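The walk-forward idea can be made concrete with a toy sketch: at every step, only trailing data is used to choose the parameter, which is then applied strictly out of sample. The simple momentum rule and threshold set here are illustrative assumptions, not the article's model:

```python
import numpy as np

def walk_forward(close, thresholds=(0.0, 0.01, 0.03), window=50):
    """At each step, pick the threshold that performed best over the
    trailing `window` days ONLY, then apply it to the next day, so no
    future data ever enters the parameter selection."""
    close = np.asarray(close, dtype=float)
    ret = close[1:] / close[:-1] - 1.0               # daily returns
    pos = lambda thr: (ret >= thr).astype(float)     # long the day after a rise >= thr
    daily = np.zeros(len(ret))
    for t in range(window, len(ret) - 1):
        # Score each threshold on trailing data only
        scores = [np.sum(pos(thr)[t - window:t] * ret[t - window + 1:t + 1])
                  for thr in thresholds]
        best = thresholds[int(np.argmax(scores))]
        daily[t + 1] = pos(best)[t] * ret[t + 1]     # apply out of sample
    # Gross cumulative return as a simple sum of daily returns (no compounding)
    return daily.sum()
```

Note that this only removes look-ahead bias within a given model; as the article points out, the bias introduced by discarding whole categories of models after seeing their results remains.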
As can be seen, it is possible to create a credible regime-switching model with the simplest of technical indicators as long as it is possible to optimise efficiently over a large number of parameters using a backward-looking moving window. The performance of the model could possibly be further enhanced if the price moves were confirmed with macroeconomic or company-specific news. The same general technique is of course potentially applicable to instruments other than stocks, such as exchange-traded funds, futures or currency pairs.