This post is motivated by the notion that the market will outperform most individual asset managers after fees in the long run. I propose a more efficient alternative to holding an S&P 500 index fund, one that carries less risk and only mildly increases complexity.

This post is a work in progress. All code for this exercise will be available soon on my GitHub and linked here.

Warren Buffett states:

My advice … couldn’t be more simple: Put 10% of the cash in short-term government bonds and 90% in a very low-cost S&P 500 index fund. (I suggest Vanguard’s.) I believe the trust’s long-term results from this policy will be superior to those attained by most investors — whether pension funds, institutions or individuals — who employ high-fee managers.

In *The Intelligent Investor*, Ben Graham suggests that an investor not willing to dedicate significant time and energy to security analysis is better off investing in an index fund. However, holding the market alone can leave much to be desired. One alternative is to split the index into its sectors using the Select Sector SPDR ETFs:

- Consumer Discretionary (XLY)
- Utilities (XLU)
- Consumer Staples (XLP)
- Energy (XLE)
- Financials (XLF)
- Healthcare (XLV)
- Industrials (XLI)
- Materials (XLB)
- Real Estate (XLRE) – Not included in this analysis
- Technology (XLK)

For more information on the assets, please see the Select Sector SPDR ETF site that has lots of information and a number of widgets.

These sectors provide exposure to subsets of the greater index. At any given moment, one sector could be outperforming the others. I'd like to capitalize on that while remaining poised to capture the general trend of the market, without taking on unnecessary risk.

I will begin the exercise by sharing an analysis of the returns of all sectors to be included in the portfolio. I will continue with naive portfolio construction, portfolio construction with a DCC GARCH model, and finally portfolio construction with a recurrent neural network.

# Sector Analysis

For this section, I utilize the packages below. Notable packages are `alphavantager`, a package that accesses the Alpha Vantage free stock API (get your free API key at the link), `tidyquant`, a package that brings financial analysis into the tidyverse (see here), and `grid`, a package that facilitates writing tables to the R graphics device.

```r
require(alphavantager)
require(tidyquant)
require(PerformanceAnalytics)
require(tseries)
require(stats)
require(moments)
require(car)
require(lattice)
require(latticeExtra)
require(graphics)
require(corrplot)
require(gridExtra)
require(grid)
require(reshape2)
require(ggplot2)
```

From the Alpha Vantage API, I grab monthly data for the 9 sector ETFs used in this analysis, as well as "the Market", ticker `SPY`. I feed a list of tickers into my function `get_alphavan_stocks` and get a named list of dataframes back. From this list, I'll be grabbing the close price for each month and the monthly returns. I put them in objects `list_sectors_close` and `list_sectors_rets` respectively. I take a look at the close price for all sectors over time using the code below.

```r
# Plot all close prices
sector_close_prices <- do.call('cbind', list_sectors_close)
sector_close_prices <- cbind(list_sectors[[1]]$timestamp, sector_close_prices)
colnames(sector_close_prices) <- c("Date", "XLY", "XLF", "XLE", "XLU",
                                   "XLK", "XLP", "XLB", "XLV", "XLI")
melted_sector_close <- melt(sector_close_prices, id = "Date")
ggplot(data = melted_sector_close,
       aes(x = Date, y = value, group = variable, color = variable)) +
  geom_line() +
  scale_x_date(date_breaks = "5 year") +
  ylab("Close Price") +
  ggtitle("Close Prices of Sectors\n Monthly Feb. 2000 to March 2018")
```
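The fetching itself happens inside `get_alphavan_stocks`, which isn't shown above. Until the full code is up on GitHub, here is a minimal sketch of the idea using `alphavantager` — the internals here are a simplified approximation, not the final function:

```r
require(alphavantager)

# Sketch of the fetch step: monthly bars for each ticker, returned as a
# named list of data frames. Set your free Alpha Vantage key first:
# av_api_key("YOUR_KEY")
get_alphavan_stocks_sketch <- function(tickers) {
  out <- lapply(tickers, av_get, av_fun = "TIME_SERIES_MONTHLY")
  names(out) <- tickers
  out
}

# tickers <- c("SPY", "XLY", "XLF", "XLE", "XLU",
#              "XLK", "XLP", "XLB", "XLV", "XLI")
# list_sectors <- get_alphavan_stocks_sketch(tickers)
```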

I then take the returns of all sector ETFs (and the market) and feed them into a function to create a PDF and examine the monthly returns of each ticker. I’ll examine the outputs of this function using XLY as an example.

First, I create a simple time series plot of the monthly returns. We can see high volatility periods around 2001 and 2008, as expected.

Second, we can use `hist` to create a density histogram of the monthly returns bucketed into 5% intervals and plot the kernel density estimation of the monthly returns on top. We see that these returns seem to be skewed left with a mean greater than zero. This makes sense, as the value of the market increases in the long run but is prone to dramatic corrections.
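In sketch form, that panel comes from base graphics along these lines (the returns vector here is simulated as a stand-in for the real XLY series):

```r
# Stand-in for the real monthly return series
set.seed(1)
xly_rets <- rnorm(218, mean = 0.005, sd = 0.05)

# Density histogram in 5% buckets with a kernel density estimate overlaid
hist(xly_rets, breaks = seq(-0.35, 0.35, by = 0.05), freq = FALSE,
     main = "XLY Monthly Returns", xlab = "Return")
lines(density(xly_rets), col = "blue", lwd = 2)
```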

I use the `grid` and `gridExtra` packages to output some information about the moments and certain quantiles. Shown are mean, variance, standard deviation, skewness, and kurtosis. The negative skewness and excess kurtosis indicate that this data might not fit a normal distribution. The Jarque-Bera test, a goodness-of-fit test that tells us whether the data appear to match a normal distribution, confirms this.
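The test itself is one call via `tseries` (simulated data here, to show the behavior on a heavy-tailed sample versus a Gaussian one):

```r
require(tseries)

set.seed(1)
fat_tailed <- rt(500, df = 3) * 0.05  # heavy tails, like equity returns
gaussian   <- rnorm(500, 0, 0.05)

# A small p-value rejects the null hypothesis of normality
jarque.bera.test(fat_tailed)
jarque.bera.test(gaussian)
```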

Lastly, we can use the `car` package to create quantile-quantile plots, or QQ plots. This is another view of the distribution of the sector's returns. These diagrams show the distribution of one dataset plotted against another. We see the points seem to form a steeper line than the red 45° line, indicating that the stock returns have fatter tails than the normal distribution. In addition, we have convenient envelopes that reflect the 95% confidence interval of the normal distribution. It's close in the middle, but the tails fall out of bounds. The PDF function additionally outputs a QQ plot that compares the data with a Student-t distribution with 3 degrees of freedom, and another with 9. This sector's returns seem to fall between the two.
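Those plots can be reproduced with `car::qqPlot` roughly as follows (simulated returns again; the `df` values match the 3 and 9 degrees of freedom mentioned above):

```r
require(car)

set.seed(1)
xly_rets <- rt(218, df = 5) * 0.04  # stand-in for real monthly returns

# Against the normal distribution, with a 95% confidence envelope
qqPlot(xly_rets, distribution = "norm", envelope = 0.95)

# Against Student-t with 3 and 9 degrees of freedom
qqPlot(xly_rets, distribution = "t", df = 3)
qqPlot(xly_rets, distribution = "t", df = 9)
```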

Having the above information about the distribution of returns for a given sector helps us understand sector returns individually. However, it’s important to also examine sector returns in context with other sectors. An easy way to quickly examine the risk-return tradeoff in sectors is a plot of means vs standard deviations. I’ve included SPY as well. Note that these means and standard deviations are calculated based on the entire series of monthly returns from Feb 2000 to April 5th 2018.

Unsurprisingly, consumer staples (XLP) has the lowest standard deviation of the bunch, followed closely by healthcare (XLV), the market (SPY), and utilities (XLU). The leader in returns is consumer discretionary (XLY). Financials (XLF) and technology (XLK) are interesting: these points don't fit the up-and-to-the-right imaginary line that would indicate a proper risk-return trade-off. These ETFs seem to be riskier with a lower return, likely because the dataset includes both the 2001 dotcom crash and the 2008 financial crisis. I'm sure the results would look different at a different snapshot in time.

In addition, we can look at correlations between sector ETFs.

Technology and Utilities seem to be the least correlated pair. In fact, a quick glance shows that Utilities appears to be the least correlated with the rest of the group.
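A correlation plot like this takes two lines with `corrplot`. In sketch form (the returns matrix here is simulated; in the post it holds one column of monthly returns per ETF):

```r
require(corrplot)

# Simulated stand-in: one column of monthly returns per sector ETF
set.seed(1)
sector_rets <- matrix(rnorm(218 * 9, 0.005, 0.05), ncol = 9)
colnames(sector_rets) <- c("XLY", "XLF", "XLE", "XLU", "XLK",
                           "XLP", "XLB", "XLV", "XLI")

corrplot(cor(sector_rets), method = "color", type = "upper")
```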

# Naive Portfolio Construction

After analyzing the sectors, let's construct optimal portfolios with them. We will be constructing global minimum variance (GMV) and targeted return (TRET) portfolios. These portfolios are considered naive because they depend on a covariance matrix of past returns, even though we have no assurance that historical covariance will reflect the future.

The GMV portfolio will provide asset weights that, based on a covariance matrix derived from a period of returns, yield the minimum possible variance. Given a portfolio of $n$ assets with covariance matrix $\Sigma$ and a vector of weights $w$, we solve the constrained minimization problem below.

$$\min_{w}\; w^{\top} \Sigma\, w \quad \text{subject to} \quad w^{\top}\mathbf{1} = 1$$

Using this to construct a Lagrangian and solving for $w$, we arrive at the equation below.

$$w_{GMV} = \frac{\Sigma^{-1}\mathbf{1}}{\mathbf{1}^{\top}\Sigma^{-1}\mathbf{1}}$$

For this experiment, consider all $\Sigma$ based on 5 years of monthly historical data for the 9 sector ETFs. Again, the result of the equation above, $w_{GMV}$, is a vector of weights corresponding to each asset.
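The closed-form GMV solution is only a few lines of R. A sketch with a toy covariance matrix (in the exercise, $\Sigma$ comes from the trailing 5-year window):

```r
# w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)
gmv_weights <- function(Sigma) {
  ones <- rep(1, ncol(Sigma))
  w <- solve(Sigma, ones)  # Sigma^{-1} 1 without forming the inverse
  w / sum(w)               # normalize so the weights sum to 1
}

# Toy 3-asset covariance matrix for illustration
Sigma <- matrix(c(0.04, 0.01, 0.00,
                  0.01, 0.09, 0.02,
                  0.00, 0.02, 0.16), nrow = 3)
w <- gmv_weights(Sigma)
sum(w)  # 1: fully invested
```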

The TRET portfolio attempts to minimize variance while achieving a targeted level of returns. We are solving the same minimization problem as above with an additional constraint.

$$w^{\top}\mu = \mu_0$$

In the equation above, $\mu$ is a vector containing the means of all assets in the portfolio over the same period as $\Sigma$, and $\mu_0$ is the return value being targeted. From this we can use matrix algebra to derive an equation that outputs optimal weights. This equation is:

$$w_{TRET} = \frac{(C - \mu_0 B)\,\Sigma^{-1}\mathbf{1} + (\mu_0 A - B)\,\Sigma^{-1}\mu}{AC - B^{2}}$$

where $A = \mathbf{1}^{\top}\Sigma^{-1}\mathbf{1}$, $B = \mathbf{1}^{\top}\Sigma^{-1}\mu$, and $C = \mu^{\top}\Sigma^{-1}\mu$.
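The TRET solution also translates directly to R. A sketch with a toy diagonal $\Sigma$ and illustrative means:

```r
# Minimum-variance weights for a target mean return mu0:
# w = ((C - mu0*B) * Sigma^{-1} 1 + (mu0*A - B) * Sigma^{-1} mu) / (A*C - B^2)
# with A = 1' Sigma^{-1} 1, B = 1' Sigma^{-1} mu, C = mu' Sigma^{-1} mu
tret_weights <- function(Sigma, mu, mu0) {
  ones <- rep(1, length(mu))
  Si1  <- solve(Sigma, ones)
  Simu <- solve(Sigma, mu)
  A <- sum(ones * Si1)
  B <- sum(ones * Simu)
  C <- sum(mu * Simu)
  ((C - mu0 * B) * Si1 + (mu0 * A - B) * Simu) / (A * C - B^2)
}

# Toy example: the weights sum to 1 and hit the target mean exactly
Sigma <- diag(c(0.04, 0.09, 0.16))
mu    <- c(0.005, 0.008, 0.012)
w     <- tret_weights(Sigma, mu, mu0 = 0.01)
c(sum(w), sum(w * mu))  # 1 and 0.01
```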

With these equations, we can construct the efficient frontier. The efficient frontier represents the "limits" of the portfolios you can construct from a given set of assets: it tells you what return you may achieve for a given amount of risk, or vice versa. Below is the efficient frontier for portfolios containing the nine sector ETFs.

This line was constructed by first calculating $\Sigma$ for all assets over the entire time horizon of data. With $\Sigma$, we find the leftmost point of the frontier by calculating the GMV portfolio for the full length of the dataset and determining what level of returns comes with the smallest possible standard deviation. From there, we calculate TRET portfolios starting from the mean return of the GMV portfolio and slowly stepping upwards. Again, this line tells us the best mean monthly return we can achieve, at a given level of risk, from a portfolio bought in 2000 and held. Importantly, this line was calculated with one set of weights; it's possible that we could surpass it by adjusting portfolio weights intermittently over time.

On to portfolio construction. Again, we calculate $\Sigma$ based on 5 years of monthly historical data and construct a new optimal portfolio at the end of each month. We construct two TRET portfolios: one that targets the return of the best-performing sector in the previous time period and one that targets a constant 1% return.

As we can see above, considering returns alone, all portfolios beat the market. However, none of them achieves positive Jensen's alpha. We do see that the GMV and TRET1 portfolios beat the market with a lower variance. The "success" of these portfolios depends on your goals as a portfolio manager. Below, I plot the cumulative returns of these portfolios and SPY over time.

The three portfolios and the market track closely, but notable is the severity of the drop in 2008. All three portfolios managed to lose less than SPY. On the other hand, they didn’t gain anything from avoiding the plunge, as SPY’s sharp rebound immediately caught up. Putting the portfolios on the plot with the efficient frontier from earlier, we can see that the GMV portfolio gets quite close. The other portfolios seem to only add risk without adding any return.

In conclusion, a naive GMV portfolio with monthly re-weighting performs better than SPY. The naive TRET and TRET1 portfolios do as well, but stray farther from the efficient frontier.

# DCC GARCH Portfolio Construction

Instead of assuming that next period's $\Sigma$ will remain the same, in this section we will attempt to forecast next period's $\Sigma$ in order to obtain better weights for the coming period. To do this we will use a modification of a GARCH model, called the DCC GARCH (Dynamic Conditional Correlation). GARCH models are a family of models that help us model variance in a series of data. The general functional form can be represented as

$$\sigma_t^{2} = \omega + \sum_{i=1}^{p} \alpha_i\, \epsilon_{t-i}^{2} + \sum_{j=1}^{q} \beta_j\, \sigma_{t-j}^{2}$$

where $\epsilon_t$ follows a Gaussian distribution with mean 0 and variance $\sigma_t^{2}$.
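To make the variance recursion concrete, here is a simulated GARCH(1,1) series; the parameters are arbitrary illustrative choices satisfying the stationarity condition that their sum is below 1:

```r
set.seed(42)
n     <- 1000
omega <- 1e-5; alpha <- 0.1; beta <- 0.85  # alpha + beta < 1 for stationarity

eps  <- numeric(n)  # simulated returns
sig2 <- numeric(n)  # conditional variances
sig2[1] <- omega / (1 - alpha - beta)      # start at the unconditional variance
eps[1]  <- sqrt(sig2[1]) * rnorm(1)
for (t in 2:n) {
  sig2[t] <- omega + alpha * eps[t - 1]^2 + beta * sig2[t - 1]
  eps[t]  <- sqrt(sig2[t]) * rnorm(1)      # eps_t ~ N(0, sigma_t^2)
}

plot(eps, type = "l", main = "Simulated GARCH(1,1) returns")  # note volatility clustering
```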

The DCC GARCH model can help us understand how different series of data move relative to each other. After modeling asset returns, one can use it to forecast a covariance matrix for the next period. Our hope is that this forecasted covariance matrix is a more accurate prediction of the future than naively assuming that the covariance matrix will remain the same from one period to the next.

I plan to add LaTeX equations to clarify how to get to a forecasted $\Sigma$ from return data. For now, see the R code I use at each timestep `i` below.

```r
# dcc.estimation() comes from the ccgarch package
selection <- sector_returns[i:(estimation_period + i - 1), 2:10]
selection_means <- colMeans(selection)
selection_mean[i, ] <- selection_means
# De-mean the returns over the estimation window
selection_dm <- selection - matrix(colMeans(selection), estimation_period,
                                   n_assets, byrow = T)

# Initial parameter guesses for the DCC GARCH estimation
a_step <- diag(cov(selection))
A_step <- diag(rep(0.3, n_assets))
B_step <- diag(rep(0.75, n_assets))
DCC_parameters_step <- c(0.01, 0.98)

DCC_regression_step <- dcc.estimation(inia = a_step, iniA = A_step,
                                      iniB = B_step,
                                      ini.dcc = DCC_parameters_step,
                                      dvar = selection_dm, model = "diagonal")

# Extract fitted conditional variances and coefficients
var_step <- DCC_regression_step$h
Coeff.step <- DCC_regression_step$out[1, ]
a_step <- matrix(Coeff.step[1:n_assets], n_assets, 1)
A_step <- diag(Coeff.step[(n_assets + 1):(2 * n_assets)])
B_step <- diag(Coeff.step[(2 * n_assets + 1):(3 * n_assets)])
phi_step <- Coeff.step[(3 * n_assets + 1):(3 * n_assets + 2)]

# One-step-ahead variance forecast
forecastH[i, ] <- a_step +
  A_step %*% (as.numeric(selection[estimation_period, ]^2)) +
  B_step %*% t(t(var_step[estimation_period, ]))

# One-step-ahead correlation forecast
DCC_DCC_step <- matrix(DCC_regression_step$DCC[estimation_period, ],
                       n_assets, n_assets)
tildeE <- matrix(as.numeric(selection[estimation_period, ] /
                            sqrt(var_step[estimation_period, ])), n_assets, 1)
barR <- cor(selection)
Q_step <- (1 - phi_step[1] - phi_step[2]) * barR +
  phi_step[1] * (tildeE %*% t(tildeE)) +
  phi_step[2] * DCC_DCC_step

# R at t+1, then assemble the forecasted covariance matrix
RR <- solve(diag(sqrt(diag(Q_step)))) %*% Q_step %*%
  solve(diag(sqrt(diag(Q_step))))
sigma[[i]] <- diag(sqrt(forecastH[i, ])) %*% RR %*% diag(sqrt(forecastH[i, ]))
```

Unfortunately, these portfolios seem to perform only slightly better than their naive counterparts. Below, you can see the GARCH portfolios alongside their naive counterparts. (Note that colors have changed from the previous plot.)

We can also examine the model's next-period estimates for asset variance. By looking at the diagonals of the forecasted $\Sigma$'s we can see forecasted asset variance, and compare the forecasts to actual asset variance over time in the data.

I notice two things here: first, the forecasts (red) can be improved. Second, this model tends to do well when shifts are gradual and steady. Note how many of the forecasts tend to converge towards the end (a low-volatility period). After looking at these forecasts for sector variance, I wanted to improve the overall quality of the forecasts. It's a fair assumption that more accurate forecasts of $\Sigma$ can drive better returns and less risk. For this task, I've chosen a recurrent neural network model.

# Recurrent Neural Network Portfolio Construction

A common difficulty faced when dealing with neural networks is obtaining a suitable amount of data. Neural networks are data-hungry and prone to overfitting. Because of this, I had to take a step back from the problem at hand. I can't obtain enough data to train a neural network that outputs optimal next-period portfolio weights for these sector ETFs alone. Instead, I must attempt to create a neural network that can forecast the next-period covariance matrix for any portfolio of assets. By abstracting away the specifics of the data, I might be able to create a model that handles the task without overfitting the sector data. I approach this problem by obtaining price data for a large number of stocks, sampling from them to create a large number of portfolios, and then generating a large number of time series of $\Sigma$'s.

With a time series of $\Sigma$'s as input, I attempt to predict the next $\Sigma$ and generate optimal weights from it. As I've found out, this is no easy task. More than anything, the network learns to replicate the last value of the covariance matrix. As a result, a network that tries to forecast the next value in a sequence of covariance matrices at its core performs no better than a naive portfolio, with a huge amount of added complexity.

Now, since I am already attempting to internalize a GARCH model in the neural network, I consider also internalizing the function above that derives weights. So, instead of predicting the next $\Sigma$, I attempt to forecast the next optimal set of weights for the GMV portfolio. At the moment, this forecast is no better than an equal-weight portfolio, which tells me that this network isn't learning either. I believe I need to add more information to the training data. Perhaps adding more input data (SPY returns, past sector performance, the ranking of stock performance within each portfolio, the ranking of stock correlations, anything that would help the model understand what relative movements are happening and why) could help these predictions, but that's an area for further research. I'll continue to update this section if any developments are made.