# The Science of Defundamentalisation

#### Published in Automated Trader Magazine Issue 30 Q3 2013

## Short-term return reversal is a long-established phenomenon in financial markets. Yet despite this longevity, the challenge of isolating fundamental return drivers from their non-fundamental counterparts in a reversal model still persists. Automated Trader talks to Zhi Da, Associate Professor of Finance at the University of Notre Dame and author of a recent paper on the subject, about a novel approach to this conundrum.

#### Zhi Da, University of Notre Dame

##### "...the primary objective was simply to measure investors' changing expectations of future cash flow - and equity analysts are essentially providing these expectations directly."

**AT:** *What prompted this particular line of
research?*

**Zhi:** Researchers in finance love to solve
puzzles, and short-term return reversal is considered one of the
oldest of these. The fact that past returns can predict future
returns is inherently fascinating and has therefore been a
subject of academic debate since the 1960s, one which intensified
after a period of detailed research in the 1990s. One of the most
interesting facets of the phenomenon is that it has been shown to
be extremely robust and not just an artefact of data mining,
which makes teasing out its underlying causes especially
rewarding. This, plus the possibility of enhancing the
traditional reversal strategy, was the motivation for our
research and the publication of its associated paper, "A Closer
Look at the Short-term Return Reversal".

**AT:** *Your paper cites three fundamental
components^{1} of stock return, but in order to measure the
second of these (cash flow news due to changing expectations
about fundamental future cash flows) you use revisions in
analysts' earnings estimates, an approach that had not been used
before. What prompted this choice?*

**Zhi:** Academic finance literature already
contains very rigorous studies of return decomposition. A classic
example of this is the work of Campbell and Shiller in the 1980s
and early 1990s. In their papers they demonstrated algebraically
that it was possible to decompose return into a cash flow
component and a discount rate component. The cash flow component
basically summarises changes in investors' expectations about
future cash flow. However, this is not just about the cash flow
expectations for the next period, but for multiple periods all
the way out to infinity. So in essence it is concerned with
revising expectations about a sequence of cash flows.

While this is straightforward enough in theory, estimating it in practice is problematic, so it has been the subject of extensive research over the past 20 years. Initially, research focused on predictive regressions, setting up vector autoregressions in an attempt to arrive at a representative predictor of future dividends and returns. The same basic approach would then also be used to calculate expected earnings revisions.

However, over the last seven or eight years researchers have started to appreciate that there are numerous issues associated with using this statistical approach. Apart from complexity and inaccuracy, the approach is essentially very indirect. By contrast, we reasoned that the primary objective was simply to measure investors' changing expectations of future cash flow - and equity analysts are essentially providing these expectations directly. Therefore why not just use these, rather than jumping through numerous statistical hoops and probably obtaining a noisy result anyway?

A further advantage is that these analysts are not just providing one cash flow forecast, but also estimates for future periods out to the very long term. All that is then required is the calculation of some differences in order to measure changes in earnings expectation over time, without the need for estimation and statistical modelling.

One potential problem is that individual analysts may have a particular bias. However, since one is concerned with changes in forecasts rather than their absolute level, a persistent bias in either direction cancels out and doesn't actually matter. The only real caveat is whether or not an analyst's bias is in fact persistent.
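The cancellation of a persistent bias under differencing can be illustrated with a toy calculation (the figures below are purely illustrative, not from the paper):

```python
# A persistent analyst bias cancels out when taking forecast changes.
# All numbers here are purely illustrative.
true_forecasts = [2.00, 2.10, 1.95]          # analyst's unbiased EPS views
bias = 0.15                                  # constant optimism in every forecast
biased_forecasts = [f + bias for f in true_forecasts]

# Month-on-month revisions, with and without the bias
revisions = [b - a for a, b in zip(biased_forecasts, biased_forecasts[1:])]
true_revisions = [b - a for a, b in zip(true_forecasts, true_forecasts[1:])]

# The revisions agree (up to floating point): the constant bias drops out.
assert all(abs(r - t) < 1e-9 for r, t in zip(revisions, true_revisions))
```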

**AT:** *How do you handle things such as
outliers in changes of analysts' forecasts?*

**Zhi:** In actual fact this is not a major issue if
you are trying to implement the strategy at a portfolio level,
because extreme outliers on both sides effectively cancel each
other out. If people are investing in a sufficiently diversified
portfolio, then these outliers become less of a concern. However,
at an individual firm level I don't think there is any scientific
way of dealing with these outliers. What people tend to do in
these circumstances is apply some form of Winsorising^{2}, so if
the change number looks too extreme it will be adjusted to, say,
the 95th percentile.
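Winsorising itself is simple to sketch; the 5%/95% cut-offs below are illustrative rather than prescribed by the paper:

```python
def winsorise(values, lower=0.05, upper=0.95):
    """Clamp values outside the given quantile cut-offs (a generic
    sketch of Winsorising; the 5%/95% bounds are illustrative)."""
    s = sorted(values)
    lo = s[round(lower * (len(s) - 1))]  # lower cut-off value
    hi = s[round(upper * (len(s) - 1))]  # upper cut-off value
    return [min(max(v, lo), hi) for v in values]
```

Extreme forecast changes are pulled in to the cut-off values rather than dropped, so the cross-section keeps its full set of stocks.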

**AT:** *What is your process for calculating the
final number actually used for changes in expectations of future
stock cash earnings?*

**Zhi:** We obtain some measure of cash flow
expectations for one and two years, and so on out to infinity.
However, in practice there are only three forecasts commonly
available for the majority of stocks that receive coverage. These
are earnings forecasts for the current fiscal year (referred to
as A1t in the paper), the next fiscal year (A2t), and a long-term
growth forecast (LTGt). This last is the forecast growth rate in
earnings over the next three to five years. After A2t we use the
long-term growth forecast to extrapolate future earnings.
However, sometimes when you look at long-term growth forecasts
you see numbers such as 40%, and you logically wouldn't expect
the company to keep growing at that rate forever. There therefore
needs to be a steady-state stage, and a typical assumption for
this is that in the long run all firms will be growing at a rate
that is close to GDP growth (typically in the 4% to 7% range).
Alternatively one can use a sector-specific measure, such as the
average historical growth rate in earnings within each sector.

Our approach is to extrapolate LTGt to the long-run steady-state growth value from year five out to year 15. Beyond year 15 we assume that everything will be growing at the steady-state rate. In this fashion we calculate a set of earnings forecasts from the current fiscal year out to infinity. Then in the next period we do the same calculation to produce a new set of forecasts. We then calculate the difference between the two sets of forecasts, which gives us the change in overall cash flow expectation, along the complete spectrum, that has taken place over the intervening period.
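A minimal sketch of this extrapolation in Python; the function and parameter names are illustrative, and the 30-year truncation stands in for the paper's infinite horizon:

```python
def forecast_path(a1, a2, ltg, g_ss=0.05, fade_start=5, fade_end=15, horizon=30):
    """Sketch of the earnings-forecast path: the two explicit forecasts
    for years one and two, growth at the LTG rate through `fade_start`,
    a linear fade from LTG to the steady-state rate `g_ss` by `fade_end`,
    and steady-state growth thereafter. Names and the truncation at
    `horizon` years are illustrative assumptions."""
    path = [a1, a2]
    earnings = a2
    for year in range(3, horizon + 1):
        if year <= fade_start:
            growth = ltg
        elif year <= fade_end:
            # linearly blend the LTG rate into the steady-state rate
            w = (year - fade_start) / (fade_end - fade_start)
            growth = (1.0 - w) * ltg + w * g_ss
        else:
            growth = g_ss
        earnings *= 1.0 + growth
        path.append(earnings)
    return path
```

Two consecutive months' paths can then be differenced entry by entry to give the change in cash flow expectations along the whole horizon.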

**AT:** *How do you weight the different
forecasts along the timeline from next year out to infinity?*

**Zhi:** This is something that is implied from the
earlier decomposition algebra in the paper by Campbell and
Shiller. Their paper lays out the methodology, which is based
upon the idea that the weighting declines over time, decaying at
a rate of about 0.95 or 0.96 per year. So this year's cash flow
forecast receives a weighting of one, while next year's forecast
will receive a weighting of 0.95, the following year's 0.95
squared, the year after that 0.95 cubed, and so on out to the end
of the horizon, raising the exponent by another step each year.
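In code, these declining weights amount to a geometric series; the sketch below uses the 0.95 decay rate quoted above, with illustrative names:

```python
RHO = 0.95  # annual decay rate from the Campbell-Shiller log-linearisation

def weighted_revision(prev_path, curr_path, rho=RHO):
    """Weight each year's forecast revision by rho**k, with this year's
    revision (k = 0) receiving a weight of one (illustrative sketch)."""
    return sum(rho ** k * (curr - prev)
               for k, (prev, curr) in enumerate(zip(prev_path, curr_path)))
```

The result is a single number per stock per month: the discounted sum of all forecast revisions along the horizon.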

**AT:** *In the paper you state that most of the
alpha is captured within the first month after entry, so why do
you need forecasts going out to 20 years and beyond?*

**AT:** *The process of "defundamentalising" data
in this manner surely has a wider application? Such as pairs
trading or other forms of statistical arbitrage?*

**Zhi:** Yes indeed. Over the past 30 years both
practitioners and academic researchers have identified a large
number of statistical anomalies associated with financial
markets, with some 90 of these anomalies already being well
documented. Past returns are frequently factors in these
anomalies, so you have short-term reversal, medium-term momentum,
long-term reversal and so on. In fact you could argue that any
anomaly that depends upon past return is an area where our
defundamentalisation technique could be applicable. If you can
use this as a mask to tease out the non-fundamental element of
past return, this can then be used as an enhanced predictor of
future return.

**AT:** *Does the process of defundamentalisation
introduce jumps into the resulting time series that might make
traditional statistical tools invalid?*

**Zhi:** We haven't as yet done a direct comparison
between the time series of the raw price data and the
defundamentalised data, so I can't give you a precise answer.
However, I would suggest that the results of our tests would
indicate that this isn't an empirical issue. We used the
defundamentalisation techniques as part of an otherwise
conventional return reversal strategy: within each sector, buying
the decile of the previous month's worst-performing stocks and
shorting the decile of the best performing. When applied to some
2,350 stocks from January 1982 to March 2009, the strategy
achieved an average monthly return of 1.57%, in comparison with
an average monthly gain of 1.2% when using raw return data.
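A minimal sketch of the within-sector decile sort just described, assuming a pandas DataFrame with illustrative `sector` and `ret_1m` (previous-month return) columns; this is not the paper's exact implementation:

```python
import pandas as pd

def reversal_weights(df):
    """Within each sector, assign +1 to the decile of last month's worst
    performers (long) and -1 to the best performers (short). A sketch of
    the sort only; column names and unit weights are illustrative."""
    def legs(group):
        deciles = pd.qcut(group["ret_1m"], 10, labels=False, duplicates="drop")
        g = group.copy()
        g["weight"] = 0.0
        g.loc[deciles == deciles.min(), "weight"] = 1.0   # buy past losers
        g.loc[deciles == deciles.max(), "weight"] = -1.0  # short past winners
        return g
    return df.groupby("sector", group_keys=False).apply(legs)
```

In the defundamentalised variant, the non-fundamental residual return would replace `ret_1m` as the sort variable.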

**AT:** *While that's a significant difference,
it seems that the basic strategy when applied within sectors is
still pretty good?*

**Zhi:** Yes, this is documented in existing
academic literature. If you condition the reversal strategy
within industries you can improve profitability. The underlying
objective is to extract the non-fundamental component of total
return and, while we did this by teasing out the cash flow
component, you can also get closer to the same objective by
comparing the returns of stocks within the same industry. This is
because you are essentially comparing two stocks that are likely
to have similar cash flows, so the relative movement of the two
stocks expresses some element of non-fundamental return. However,
our approach takes things a step further by defundamentalising at
an instrument rather than at a sector
level. This is in stark contrast with the simplest application of
the strategy where the top and bottom performance deciles are
calculated across the entire market, and not per sector. When we
conducted comparative tests on this basis using the same data
set, it only generated an average monthly return of 0.67%.

**AT:** *Presumably, because of the way in which
you calculate the residual return series, forward-looking bias
isn't an issue?*

**Zhi:** Yes that's correct, which isn't something
that can be said of the alternative statistical approach, which
does tend to run up against that issue. There are also frequency
issues here for any statistical model because any such model
requires a measure of cash flow as input. If you use earnings,
you then have the problem that these are recorded at most four
times a year. This immediately limits your frequency of analysis
to four times a year, so you can't really implement a trading
model based on this at any higher frequency than quarterly. That
in turn raises issues regarding the validity of backtesting this
approach on stocks that do not have an extensive history. For
example, if you are dealing with a stock that has only existed
for five years, you will be basing your analysis on a grand total
of 20 data points.

As part of our research we discovered that our defundamentalisation technique is also pretty robust. For example, you can still achieve an appreciable performance lift over the vanilla reversal strategy even if you ignore all the cash flow calculations and just base decisions on simple directional changes. For example, if you see a stock that was a winner in the top decile over the last month, but over the same month analysts revised down their earnings estimates for the stock, then the divergence between the two is a good sign that the return will revert over the next period.
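The directional check described here can be sketched as a simple rule; the decile numbering and names are illustrative:

```python
def divergence_signal(ret_decile, revision_direction):
    """Flag return/revision divergences (illustrative sketch): a
    top-decile winner whose estimates were revised down is a short
    candidate, and a bottom-decile loser revised up is a long
    candidate. Deciles run 0 (worst) to 9 (best)."""
    if ret_decile == 9 and revision_direction < 0:
        return -1  # past winner, estimates cut: expect reversion down
    if ret_decile == 0 and revision_direction > 0:
        return 1   # past loser, estimates raised: expect reversion up
    return 0       # return and fundamentals agree: no divergence
```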

**AT:** *What has been the feedback from market
practitioners?*

**Zhi:** Colleagues in the hedge fund industry have
been positive. I presented the paper this year at the annual
American Finance Association meeting, and our discussant, who
formerly worked as a consultant in a hedge fund, mentioned that
they had implemented a very similar strategy with positive
results. A colleague in China also advised us that hardly any
strategies that work in the US market also work in China, but
that this was one that did. Various practitioners have also told
us that the strategy still works, even after transaction and
shorting costs.

**AT:** *So how does your approach sit as regards
the theory of efficient markets?*

**Zhi:** Some proponents of market efficiency might
argue that what we describe as the profit from a trading strategy
is actually nothing more than compensation for providing
liquidity. In the case of long entries, you are buying a losing
stock that you believe will revert. In doing so you are acting as
a willing buyer for a distressed holder of the stock, who is now
able to dispose of it. The fact that you are prepared to step in
and take the other side of the trade by buying the stock is
therefore explainable under efficient market theory as nothing
more than rational compensation for liquidity provision. By the
same token, if you are prepared to sell a previously
outperforming stock, you are also providing liquidity in a stock
that is in short supply. In both cases, you could be said to be
effectively acting in the role of market-maker.

**AT:** *How sensitive is your approach to the
timing of the release of analyst estimates?*

**Zhi:** The database of analysts' estimates we are
using - I/B/E/S - provides monthly consensus earnings forecasts
broken down by sector. This is usually computed on the third
Thursday of each calendar month. So if you want to match
everything precisely, you would probably want also to look at the
return on the stock up to the same date each month. We found that
adjusting the date cut-off point didn't actually change the
results very much.

**AT:** *Did you see any major variations in
returns between long and short positions, and over what period
was the optimal return achieved?*

**Zhi:** Although we didn't include the figures in
the paper, there was very little difference between long and
short position returns. As regards the optimal alpha capture
period, the bulk of any profit is typically made within two weeks
of entry. This is in line with the results of other research
based upon traditional approaches to short-term return reversal.

**AT:** *Could short sale restrictions be a
significant return factor and did you exclude stocks where put
options were available from your testing?*

**Zhi:** We used a dummy variable indicating whether
or not options were available. Our average
return figure of 1.57% included stocks where options were
available, so better results might indeed be possible if they
were excluded. However, the shorting costs for the remaining
stocks could in some cases be quite high, which would erode any
premium effect of short sale restrictions. It's also worth
bearing in mind that this factor can be investor-specific. For
example, an institutional investor already holding the relevant
stocks wouldn't incur any shorting costs if they wanted to apply
the strategy to an existing portfolio as an overlay. By contrast,
other participants not already holding the relevant stocks would
be hit by shorting costs.

**AT:** *Finally, of the three fundamental
components^{3} of stock return you mention in your paper, you
disregarded discount rate news on the grounds of insignificance,
but retained expected return reflecting rational compensation of
risk. To measure this component, you used the Fama-French
three-factor model - did you conduct any tests using this
alone?*

**Zhi:** No, because the expected return measure
derived from this has minimal effect on the overall results. You
are putting on a cross-sectional strategy that compares one stock
with another over the same period, so what really matters are the
cross-sectional variations. The cross-sectional variation of the
realised return is an order of magnitude higher than the
cross-sectional variation of any expected return model. This is
because the expected return will typically be in the range of 5%
to perhaps 20%, which is quite small. By contrast, the range of
the realised returns can be enormous - anything from -75% up to
+200% or more. So including or excluding the expected return
model has very little effect on the overall results because the
variation in realised return will massively outweigh it. We
included the expected return in the paper for the sake of
completeness rather than significance.

# Hat Tip

Our thanks to Radovan at Quantpedia (www.quantpedia.com) for alerting us to Zhi's work in this area.

# Footnotes

1. Expected return that reflects rational compensation of risk; cash flow news that is due to changing expectations about fundamental future cash flows; discount rate news due to changing expectations of rational future discount rates.

2. The limiting of extreme values in data to minimise the impact of potentially spurious outliers.

3. See footnote 1.