Efficient Market Hypothesis (EMH): Are Markets Really Efficient?
Updated: Oct 6, 2022
The Efficient Market Hypothesis (EMH) posits that the market is efficient and therefore cannot be consistently beaten, because share prices already reflect all available information. Proponents of EMH recommend investing in a passive, low-cost portfolio because stocks always trade at their fair value. Opponents of EMH believe it is possible to outperform the market because stocks can and do deviate from their fair value.
The 3 Forms of EMH
There are 3 forms of EMH: weak, semi-strong, and strong.
The weak form states that stock prices reflect all information contained in past prices, so no form of technical analysis can beat the market. However, investors may still be able to beat the market through fundamental analysis. (A simple empirical check of the weak form is sketched after these definitions.)
The semi-strong form states that stock prices reflect all public information. Therefore, neither technical nor fundamental analysis can beat the market. Under this form, only non-public information (i.e., insider information) can beat the market. Note that trading on non-public information is illegal, though that does not mean it never happens.
The strong form states that all information, public and non-public, is reflected in stock prices. Therefore, it is impossible for investors to beat the market using any technique. This is the strictest form of EMH.
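As a concrete illustration of the weak form, a common empirical check is to measure the serial correlation of returns: if past prices carry no predictive information, the autocorrelation of daily returns should be close to zero. The sketch below is a minimal, illustrative version of that test in Python; the autocorrelation_test helper and the synthetic random-walk data are hypothetical assumptions, not drawn from any real study.

```python
import numpy as np

def autocorrelation_test(prices, lag=1):
    """Lag-k autocorrelation of daily log returns.

    Under weak-form EMH, past prices carry no predictive information,
    so this statistic should be statistically indistinguishable from zero.
    """
    prices = np.asarray(prices, dtype=float)
    returns = np.diff(np.log(prices))        # daily log returns
    demeaned = returns - returns.mean()
    return (demeaned[lag:] * demeaned[:-lag]).sum() / (demeaned ** 2).sum()

# Illustrative usage on a synthetic random walk (no real market data):
rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0, 0.01, 2520)))  # ~10 trading years
print(f"lag-1 autocorrelation: {autocorrelation_test(prices):.4f}")
```

On a true random walk the statistic hovers near zero; a persistent, tradable deviation in real price data would count as evidence against the weak form.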
Limitations of EMH
Although EMH is supported by a large body of academic research, a substantial body of evidence to the contrary also exists. Most notably, EMH assumes that all agents in the market are rational. However, behavioral research has repeatedly shown that this is not the case.
Opponents also point to investors and funds, such as Warren Buffett and Renaissance Technologies, that have consistently beaten the market over long periods of time. If EMH were completely true, such track records should not exist.
Additionally, asset bubbles should not exist if we subscribe to EMH. By definition, a bubble occurs when asset prices deviate significantly from their fair value. If investors acted rationally 100% of the time, bubbles would never form. Yet bubbles have clearly occurred in the past (the dot-com bubble of the late 1990s being a classic example) and will likely continue to occur in the future.
So What?
So what does all of this mean? EMH is probably true in some form and to some extent, but it is likely not as strong as academia makes it out to be. It is true that the majority of investors will not be able to consistently beat the market over the long term. Thus, most investors would be better off employing passive, low-cost strategies.
However, this doesn't mean the market can't be beaten. A Morningstar study found that 23% of active managers outperformed their passive peers over the 10-year period starting in June 2009. The key is identifying which managers have a consistent, repeatable edge. In modern market environments, there are 3 main types of advantages that enable managers to consistently beat the market: speed, information, and information processing.
A speed advantage is simply when a manager is quicker than their peers. Stock prices may reflect all available information, but a manager who is among the first to trade on new information can still capture the move before prices fully adjust. This is the main focus of many high-frequency trading and arbitrage firms.
An information advantage is when a manager trades on information that is not yet widely known. Perhaps the manager has discovered information that is public but not yet widely noticed. Or perhaps the information is non-public (in which case trading on it is illegal), but the manager trades on it anyway. In either case, an information advantage can lead to outperformance.
An information processing advantage is when a manager can process available information in a unique way, generating signals that others cannot easily find. Here, the manager attempts to identify and trade on hidden trends before the broader market does. This is the focus of many quantitative firms, especially those using AI to process extremely large datasets.
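To make the information-processing idea concrete, here is a minimal, hypothetical sketch of the kind of feature such a firm might compute: a rolling z-score that flags when a stock is trading unusually far from its own recent trend. The function name and window length are illustrative assumptions, not any firm's actual method.

```python
import numpy as np

def rolling_zscore_signal(prices, window=20):
    """Hypothetical signal: how many standard deviations the latest price
    sits from its trailing `window`-day mean. Large magnitudes flag prices
    that have deviated unusually far from their recent trend."""
    prices = np.asarray(prices, dtype=float)
    signal = np.full(prices.shape, np.nan)   # NaN until enough history exists
    for t in range(window, len(prices)):
        recent = prices[t - window:t]
        mu, sigma = recent.mean(), recent.std()
        if sigma > 0:
            signal[t] = (prices[t] - mu) / sigma
    return signal
```

A real quantitative pipeline would combine many such features across thousands of instruments and validate them out of sample; the point here is only that the edge comes from transforming public data in a novel way, not from possessing data others lack.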