Since the late 1980s, electrical engineers and computer scientists have been engaged in a battle against the laws of physics and information theory.

More than five years in the making, the new 5G standard brings several new tools to that fight. In this article, we explore how 5G New Radio pushes the limits of Shannon's Law to achieve faster data rates.

**Data can only be transmitted so quickly.** Whether it’s over copper wire, through fiber-optic cables, or wirelessly, there are theoretical limits for each medium. Those limits have been known since the late 1940s as a result of the groundbreaking work of Claude Shannon. Shannon was a contemporary of Alan Turing and is considered by many to be the father of Information Theory.

After laying the foundation for much of modern cryptography during his work at Bell Labs during World War II, Shannon developed his noisy channel coding theorem. The theorem sets an upper bound for the efficiency of the error-correction algorithms necessary when data is transmitted through a medium with noise (i.e. any medium!). Combined with earlier work by Ralph Hartley, Shannon’s theorem sets an upper limit on the rate at which data can be transmitted over any communications channel, whether wired or wireless.

As cellular communication has progressed in the last two decades, we’ve rapidly approached the theoretical limits for wireless data transmission set by Shannon’s Law. Every successive cellular generation has brought dramatic increases in data rates. 2G networks offered a maximum theoretical data rate of 40 kbps, but today’s 4G LTE-Advanced networks have peak theoretical data rates of 1 Gbps. 5G takes that one step further; next-generation networks will have peak theoretical data rates of 20 Gbps for downlink and 10 Gbps for uplink.

Theoretical peak rates are just that: *theoretical*. You probably don’t see 1 Gbps download speeds on your LTE Android or iPhone handset. The more useful metric defined by the International Telecommunication Union (ITU) for the IMT-2020 standard (basically the 5G standard) is *user experience data rate*, which is the data rate experienced by users in at least 95% of the locations where the network is deployed for at least 95% of the time. By this measure, at a minimum of 100 Mbps, 5G should be at least five times faster than average 4G speeds.

To understand how 5G achieves these higher data rates, we need to dig into Shannon’s Law to see how engineers have tackled each of the limiting factors from previous generations.

Please note, we’re completely ignoring latency here. Latency, or the time it takes to reach a server, is not limited by Shannon’s Law and has a huge impact on everyday Internet usage. We’ll cover how 5G networks improve latency in a future post.

This is a simplified version of Shannon’s Law:

C = n × W × log₂(1 + S/N)

where *C* is the channel capacity (the maximum achievable data rate), *n* is the number of spatial streams, *W* is the bandwidth of the channel, and *S/N* is the signal-to-noise ratio.

5G improves data rates by attacking the first two components of Shannon’s Law directly:

- **More Spectrum (W):** 5G uses a wider range of frequencies to communicate between devices and towers.
- **More Antennas (n):** 5G utilizes arrays of antennas in both devices and towers to create spatial diversity.

Additionally, 5G uses **higher-order modulation schemes** to improve data rates when the signal-to-noise ratio (SNR) is high, allowing the real-world data rates to reach closer to the theoretical Shannon Capacity.
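To make these levers concrete, here’s a minimal Python sketch of the simplified capacity formula C = n × W × log₂(1 + S/N). The stream counts, bandwidths, and SNR values below are illustrative assumptions, not measured figures from any real network:

```python
import math

def shannon_capacity_bps(n_streams, bandwidth_hz, snr_linear):
    """Simplified Shannon capacity: C = n * W * log2(1 + S/N)."""
    return n_streams * bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative numbers: a single-stream 20 MHz LTE carrier at an SNR
# of 100 (20 dB), vs. a hypothetical 5G mmWave link with 4 spatial
# streams and 800 MHz of bandwidth at the same SNR.
lte = shannon_capacity_bps(1, 20e6, 100)   # ~133 Mbps
nr = shannon_capacity_bps(4, 800e6, 100)   # ~21.3 Gbps
print(f"LTE-like link: {lte / 1e6:.0f} Mbps")
print(f"5G-like link:  {nr / 1e9:.1f} Gbps")
```

Note how bandwidth and stream count scale capacity linearly, while SNR only helps logarithmically; that asymmetry is why 5G leans so heavily on spectrum and antennas.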

Let’s dive into each of these!

Spectrum is a scarce resource: there’s a limited range of frequencies at which devices can transmit wirelessly. To prevent interference, each country regulates how the airwaves can be used within its borders. In the US, the Federal Communications Commission (the FCC) auctions frequency bands to cellular carriers.

For the release of 5G, the FCC is expanding spectrum availability compared to today’s 4G spectrum in two main ways:

- It’s licensing a whole new category of spectrum for cellular applications: **high-band, “millimeter wave” frequencies.**
- It’s opening up a greater range of **mid-band frequencies.**

Today’s 4G LTE devices and towers use two frequency ranges to transmit between cell towers and devices:

- 4G low-band: everything under 1 GHz
- 4G mid-band: from 1 GHz to 2.6 GHz

There’s a total of around 700 MHz of spectrum available today for 3G and 4G LTE networks operated by national and local cellular carriers in the US. But this existing low-band and mid-band spectrum is already congested: 50% of 4G cell sites in the US will run out of capacity by 2020.

5G expands the range of mid-band spectrum accessible to cellular networks, but also adds new high-band spectrum:

- 5G low-band: everything under 1 GHz
- 5G mid-band: from 1 GHz to 6 GHz
- 5G high-band: from 24 GHz upwards, also known as millimeter wave (mmWave)

In the US, the FCC is making an additional 6 GHz of spectrum available (1 GHz of mid-band, 5 GHz of mmWave high-band) for 5G networks. That’s almost 10 times the spectrum available today for 4G LTE service.

Greater spectrum allocations are helpful, but it’s not quite so simple. Cell towers and devices actually need to be able to *use* more spectrum. The 5G NR standard makes that possible.

The biggest limitation to spectrum utilization is how much bandwidth towers and devices can transmit and receive on at any one time. The first 4G LTE devices released in 2010 could use a maximum of 20 MHz of spectrum to send data from the tower to a user. That number has increased over time with updates to the LTE specification. The introduction of LTE Advanced and “carrier aggregation” allows today’s 4G networks to use up to 100 MHz of spectrum between towers and devices.

The 5G standard goes significantly further. Instead of 20 or even 100 MHz, the 5G NR specification allows devices and towers to use up to 800 MHz of spectrum at any one time. Demodulating 800 MHz of RF into bits and bytes is a huge feat, requiring significantly more complicated (and costly) modem chipsets.
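As a back-of-the-envelope sketch, carrier aggregation can be thought of as summing the bandwidths of several component carriers into one wider effective channel. The carrier splits below are hypothetical examples for illustration, not any specific deployment:

```python
# Carrier aggregation: the device treats several component carriers
# as one wide channel. These splits are illustrative assumptions.
lte_component_carriers_mhz = [20, 20, 20, 20, 20]  # e.g. 5 x 20 MHz (LTE-A)
nr_component_carriers_mhz = [100] * 8              # e.g. 8 x 100 MHz (5G NR)

lte_total = sum(lte_component_carriers_mhz)  # 100 MHz, LTE-A's practical cap
nr_total = sum(nr_component_carriers_mhz)    # 800 MHz, the NR figure above
print(f"LTE-A aggregate: {lte_total} MHz")
print(f"5G NR aggregate: {nr_total} MHz")
```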

While 1 GHz of new mid-band spectrum will improve data rates significantly, the real promise of 5G is the 5 GHz of high-band mmWave spectrum that the FCC is opening up.

Unfortunately, though, not all spectrum is equal. There’s a reason why 2G, 3G and 4G LTE networks started with low and mid-band spectrum, and not 5G’s new mmWave bands. The higher the frequency of a radio signal, the shorter the distance it travels in free space, and the more easily it’s absorbed by obstacles.

mmWave signals at 24 GHz and above are at such high frequencies that a single 5G tower’s coverage area is much, much smaller. A typical 4G LTE cell tower can serve a user 10 km away, but a 5G mmWave tower might cover a radius of just 100 meters.
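Part of this gap falls straight out of the standard free-space path loss formula (which, under the usual isotropic-antenna convention, grows with frequency); real-world absorption by rain, foliage, glass, and walls makes mmWave worse still. A quick sketch comparing low-band and mmWave at the same distance:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB between isotropic antennas:
    FSPL = 20*log10(d) + 20*log10(f) + 20*log10(4*pi / c)."""
    c = 299_792_458.0  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# Same 1 km distance, low-band vs. mmWave:
print(f"700 MHz at 1 km: {fspl_db(1000, 700e6):.1f} dB")  # ~89.4 dB
print(f" 39 GHz at 1 km: {fspl_db(1000, 39e9):.1f} dB")   # ~124.3 dB
```

At 39 GHz the free-space loss is roughly 35 dB higher than at 700 MHz over the same distance, i.e. several thousand times more loss, before obstacles are even considered.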

High-band, mmWave 5G requires a huge density of towers. That means we’re likely to see high-band 5G only in urban and suburban areas. mmWave 5G, with its huge bandwidths and super-high data rates, won’t be in rural areas anytime soon. And 5G towers won’t be “towers” – instead, they’ll be “small cells,” mini-cell sites mounted to light poles that cover just a small area.

Another constraint of mmWave is that it doesn’t penetrate buildings. The 24+ GHz mmWave frequencies are so high that they’re blocked not just by drywall, but even by glass. That’s a huge downside: outdoor 5G networks won’t work indoors unless the building has a signal booster.

To address this problem, carriers have developed 5G “small cells” and 5G distributed antenna systems that would allow mmWave 5G service inside buildings, with the first trial deployments happening in stadiums today.

The second factor in our Shannon’s Law equation, the number of antennas, is perhaps a little misleading: more antennas alone does not mean faster data rates. The antennas must be configured to enable “spatial multiplexing” – which increases the number of physical streams of signal that can be sent between a tower and its users.

Single-user MIMO (SU-MIMO) was simply called “multiple-input and multiple-output,” or “MIMO,” when implemented in today’s 4G LTE networks. All modern LTE phones support this type of MIMO, and your smartphone likely supports 2x2 or 4x4 SU-MIMO.

SU-MIMO exploits a combination of signal polarization and reflected signal paths (known as “multipath effects”) to achieve spatial multiplexing. The result is multiple streams of data being sent to a user and an increase in data rates–all without needing more spectrum.

Multi-user MIMO (MU-MIMO) also utilizes the same multipath effects, but instead of increasing the capacity for any one user, it uses the different spatial streams to connect to different users. As a result, MU-MIMO increases the total capacity of the system. For MU-MIMO, the system must have as many antennas as there are users connected to the tower.

Massive MIMO is a 5G-only technology. The tiny sub-centimeter wavelengths of mmWave frequencies will allow devices to pack in many, many more antennas to create “phased-arrays.” These phased arrays of antennas allow 5G networks to achieve much higher levels of spatial multiplexing.

For example, a standard cellphone can fit an array of 72 antennas operating on the 39 GHz mmWave band. A similar 72 antenna array in the 700 MHz low-band frequency would be larger than a typical home door.
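These sizes follow directly from the wavelength: phased-array elements are typically spaced about half a wavelength apart. A rough sketch (the square 9-element side used below is an assumption for illustration, roughly matching an 8 × 9 layout of 72 elements):

```python
import math

def half_wave_spacing_m(freq_hz):
    """Typical phased-array element spacing: half a wavelength."""
    c = 299_792_458.0  # speed of light, m/s
    return (c / freq_hz) / 2

def array_side_m(n_per_side, freq_hz):
    """Rough side length of an n x n array at half-wave spacing."""
    return (n_per_side - 1) * half_wave_spacing_m(freq_hz)

print(f"39 GHz spacing:  {half_wave_spacing_m(39e9) * 1000:.1f} mm")   # ~3.8 mm
print(f"700 MHz spacing: {half_wave_spacing_m(700e6) * 1000:.0f} mm")  # ~214 mm
print(f"9-element side at 39 GHz:  {array_side_m(9, 39e9) * 100:.1f} cm")
print(f"9-element side at 700 MHz: {array_side_m(9, 700e6) * 100:.0f} cm")
```

At 39 GHz a 9-element row spans about 3 cm, so a 72-element array fits in a phone; at 700 MHz the same row spans about 1.7 meters, which is indeed door-sized.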

The sheer density of antennas allows for “beamforming.” By adjusting the phase of the signal going to each of its many antennas, a 5G mmWave small cell can create a wireless “beam” pointed in whichever direction it needs.

Beamforming has the potential to unlock huge improvements in capacity and data rates. By directing multiple beams of signal, 5G networks can substantially increase the signal-to-noise ratio experienced by each user’s device.
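A minimal sketch of the idea behind beam steering: applying a progressive phase shift across a uniform linear array tilts the combined wavefront toward a chosen angle. The 4-element array and 30-degree steering angle below are illustrative assumptions:

```python
import math

def steering_phases_deg(n_elements, spacing_m, freq_hz, steer_angle_deg):
    """Per-element phases of a uniform linear array that point the main
    beam at steer_angle_deg from broadside:
    phi_k = -2*pi * k * d * sin(theta) / lambda."""
    c = 299_792_458.0  # speed of light, m/s
    wavelength = c / freq_hz
    theta = math.radians(steer_angle_deg)
    shift = -2 * math.pi * spacing_m * math.sin(theta) / wavelength
    return [math.degrees(k * shift) % 360 for k in range(n_elements)]

# 4-element array at 39 GHz, half-wave spacing, beam steered 30 degrees:
spacing = (299_792_458.0 / 39e9) / 2
phases = steering_phases_deg(4, spacing, 39e9, 30)
print(phases)  # approx. [0, 270, 180, 90] degrees
```

With half-wave spacing and a 30-degree target, each element lags its neighbor by 90 degrees; changing the target angle just changes that per-element phase step, which is what lets a small cell redirect its beam electronically, with no moving parts.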

Higher signal-to-noise ratios are one half of the equation. A high SNR increases the total Shannon Capacity of the system, but in order to benefit from it, we need higher-order modulation schemes.

Digital modulation is the act of converting digital data – ones and zeros – into radio waves. In the last two decades, Quadrature Amplitude Modulation (QAM) has become the de facto standard for digital modulation, utilized by everything from cellular to Wi-Fi to cable modems.

We won’t get into the nitty-gritty of QAM here. But critically, at higher levels of signal quality (Signal to Noise Ratios), it’s possible to increase a QAM signal’s “constellation size” to increase data rates and spectral efficiency. When 4G LTE was first released, it supported a QAM “constellation size” of up to 64. Updates to 4G LTE have added support for constellations of up to 256, and 5G NR promises to support 1024 QAM and beyond in future releases.
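The payoff of a larger constellation is simple to quantify: an M-point QAM constellation carries log₂(M) bits per symbol. A quick sketch:

```python
import math

def bits_per_symbol(constellation_size):
    """An M-point QAM constellation encodes log2(M) bits per symbol."""
    return int(math.log2(constellation_size))

for m in (64, 256, 1024):
    print(f"{m}-QAM: {bits_per_symbol(m)} bits/symbol")
# 64-QAM carries 6 bits/symbol, 256-QAM carries 8, 1024-QAM carries 10:
# each step up adds 2 bits per symbol at the same symbol rate.
```

Going from 64-QAM to 256-QAM is a 33% raw throughput gain, and 256 to 1024 adds another 25%, but only when the SNR is high enough for the receiver to tell the denser constellation points apart.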

These higher-order modulation schemes only become useful when signal quality is very high. Since 5G mmWave networks require the use of “small cells” covering smaller areas, interference between neighboring cells is dramatically decreased. Along with beamforming, this should make higher quality signal levels and high-order modulation schemes much more common, increasing the data transmission rates between towers and users.

Higher modulation schemes don’t just help individual users: they also increase the capacity of the network as a whole, bringing it closer to the Shannon Capacity. While 4G has a downlink spectral efficiency between 0.074 and 6.1 bits/s/Hz (bits per second per hertz), future 5G networks promise efficiencies of between 0.12 and 30 bits/s/Hz.

As IoT devices become more prevalent, support for higher capacity is critical. While 4G networks theoretically support up to 10,000 active users per km², 5G should eventually support more than 1,000,000 active devices per km².

By taking advantage of the physical properties of higher frequencies, 5G is able to utilize more spectrum, more antennas, and higher-order modulation schemes. These, in turn, push the upper limit of Shannon’s Law, giving us faster data rates and higher network capacity. Just like 4G, which has evolved over the last decade with hundreds of enhancements, 5G, too, will evolve and push boundaries over the coming years, to keep up with the ever-growing demand for data and the insatiable thirst for speed. 5G is still getting started, and another decade of pushing the limits of Shannon’s Law has just begun.