Many investors are familiar with the Morningstar rating system for mutual funds. Funds receive between 1 and 5 stars, with 1 star being the lowest rating and 5 stars being the highest. What most investors don’t know, however, is how Morningstar decides how many stars a fund receives.
Many are a little surprised to find out that stars are awarded based on past performance, not on how well Morningstar thinks the fund will do going forward. Additionally, funds are rated relative to similar funds, not against all funds in general. The top 10% of funds in a particular Morningstar Category (Large Blend, Small Value, Intermediate-Term Bond, Real Estate, etc.) with the best performance over the previous 3-, 5- and 10-year periods receive 5 stars. The next 22.5% receive 4 stars, the middle 35% receive 3 stars, the next 22.5% receive 2 stars, and the bottom 10% of funds within a category receive just 1 star.
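To make the breakdown concrete, here's a small sketch of that percentile-to-star mapping in Python. The function name and the convention that a higher percentile means better performance are my own assumptions for illustration; this is not Morningstar's actual code.

```python
# Illustrative sketch (not Morningstar's implementation) of the
# 10% / 22.5% / 35% / 22.5% / 10% star breakdown within a category.
def stars_from_percentile(pct: float) -> int:
    """pct = performance percentile within category: 0 = worst, 100 = best."""
    if pct >= 90:       # top 10% of the category
        return 5
    elif pct >= 67.5:   # next 22.5%
        return 4
    elif pct >= 32.5:   # middle 35%
        return 3
    elif pct >= 10:     # next 22.5%
        return 2
    return 1            # bottom 10%
```

So a fund at the 70th percentile of its category would get 4 stars, while one at the 50th percentile would get 3.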
There’s a perception that 5-star funds are “better” than 4-star funds: we often hear prospective clients tell us that they relied pretty heavily on Morningstar’s ratings in putting together a portfolio of 4- and 5-star funds. But are investors rewarded for picking funds with more stars?
Dimensional Fund Advisors (DFA) recently ran a study that tested whether there was any predictive power in the Morningstar ratings. They looked at all of the US stock funds with inception dates prior to January 1, 2000 (a sample of about 1,750 funds), and separated the funds based on their star ratings as of December 31, 2004. DFA then examined the subsequent 5-year returns of each fund.
If the stars were predictive of performance, 5-star funds should have performed better on average than 4-star funds, 4-star funds should have performed better than 3-star funds, and so on. What DFA found was that the average 5-year annualized return from year-end 2004 through 2009 was pretty much the same for each star grouping, as was the spread around the average. Basically, the stars didn’t have any predictive power. Investors would have had just as high a chance of above-average (or below-average) future performance if they picked a 1-star fund instead of a 5-star fund.
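The shape of that test is easy to sketch. The snippet below uses entirely synthetic data (not DFA's dataset): it draws every fund's subsequent return from the same distribution regardless of rating, which mimics the "no predictive power" finding, then groups by star rating and compares the averages.

```python
# Illustrative sketch of the study design with SYNTHETIC data.
# Returns are drawn from one common distribution for all ratings,
# so the per-star averages should come out roughly equal.
import random
from statistics import mean, stdev

random.seed(42)

# Hypothetical sample of 1,750 funds: (star rating at the cutoff date,
# subsequent 5-year annualized return). Parameters are arbitrary.
funds = [(random.randint(1, 5), random.gauss(0.04, 0.06)) for _ in range(1750)]

# Group subsequent returns by star rating, as the study did.
by_stars = {s: [r for stars, r in funds if stars == s] for s in range(1, 6)}
for s in sorted(by_stars):
    rets = by_stars[s]
    print(f"{s} stars: n={len(rets):4d}  mean={mean(rets):+.3f}  sd={stdev(rets):.3f}")
```

Running this prints five nearly identical means, which is essentially the picture DFA reported: knowing the star count tells you little about what comes next.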
The DFA study reminded me of a story I’d heard on the Freakonomics podcast. A guest on the podcast described an experiment involving a wine tasting, where he served wine experts glasses of wine that he claimed were different but were in fact from the same bottle. As part of the experiment, he told the experts that “each” bottle had scored a different number of points, based on a somewhat subjective and non-standardized method of ranking wines.
After sampling each wine, the experts thought that the wine with the “higher” number of points was far superior to the wine with the “lower” number of points…even though the wines were exactly the same! The experts were swayed by the higher number of points, just as many investors, when selecting a mutual fund, are swayed by a higher number of stars.
Mutual fund companies aren’t dumb. They recognize the tremendous marketing potential that comes with having a 5-star fund; getting such a high rating can be very lucrative. A recent article on Smartmoney.com noted that: “A 2001 study by the Atlanta Federal Reserve found that a fund whose debut rating is 5 stars gets 54% more inflows over the next six months, compared to what would be expected, while an upgrade from four stars to five generates a 35% boost.” (http://www.smartmoney.com/invest/mutual-funds/new-grades-for-mutual-funds-1308089563595/).
Essentially, the high ratings lead to inflows from performance-chasing investors.
So what’s the bottom line? An inexpensive, 80-point bottle of wine might be just as good as a very expensive 95-point bottle, and is probably a better value. Similarly, when picking mutual funds, the value typically lies in selecting funds with the lowest expenses, which are often funds that simply try to passively track a particular index. They might not have the highest rating, but you need to look beyond the stars.