
Is there a reason we keep trying to use higher frequencies in every new wireless standard (Wi-Fi, 5G, now 6G) instead of trying to increase the maximum possible bitrate at lower frequencies? Have we already reached the physical limits of the amount of data that can be encoded at a particular frequency?

Lower frequencies have the advantage of longer range and better penetration through obstructions. I suppose limited bandwidth and the number of devices that have to coexist are the limiting factors.



> Have we already reached the physical limits of the amount of data that can be encoded at a particular frequency?

Basically, yes (if you take into account other considerations like radiated power, transmitter power consumption, multipath tolerance, Doppler shift tolerance and so on). Everything is a tradeoff. We could e.g. use higher-order modulation, but that would result in a higher peak-to-average power ratio, meaning a less efficient transmitter. We could reduce cyclic prefix length, but that would reduce multipath tolerance. And so on.
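
(For a sense of why higher-order modulation has diminishing returns, a quick sketch; the "roughly 3 dB per extra bit" figure is a textbook rule of thumb, not from any particular standard.)

    import math

    # Each QAM constellation carries log2(M) bits per symbol, but each extra bit
    # needs roughly 3 dB more SNR and pushes up the peak-to-average power ratio.
    for name, m in [("QPSK", 4), ("16-QAM", 16), ("64-QAM", 64), ("256-QAM", 256), ("1024-QAM", 1024)]:
        print(f"{name}: {int(math.log2(m))} bits/symbol")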

Another important reason why higher frequencies are preferred is frequency reuse. Longer distance and penetration is not always an advantage for a mobile network. A lot of radio space is wasted in areas where the signal is too weak to be usable but strong enough to interfere with useful signals at the same frequency. In denser areas you want to cram in more base stations, and if the radiation is attenuated quickly with distance, you would need less spectrum space overall.
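
(To put a rough number on "attenuated quickly": a minimal free-space path loss sketch. The bands and distance are just example values, and real-world loss through walls and foliage penalizes mmWave even more.)

    import math

    def fspl_db(distance_km, freq_ghz):
        # Free-space path loss: 20*log10(d_km) + 20*log10(f_GHz) + 92.45 dB
        return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

    print(round(fspl_db(1.0, 0.7), 1))   # 700 MHz at 1 km: ~89 dB
    print(round(fspl_db(1.0, 28.0), 1))  # 28 GHz at 1 km:  ~121 dB (about 32 dB more loss)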


>Longer distance and penetration is not always an advantage

Exactly. When I was running WiFi for PyCon, I kept the radios lower (on tables) and the power levels at the lower end (especially for 2.4GHz, which a lot of devices still were limited to at the time). Human bodies do a good job of limiting the cell size and interference between adjacent APs in that model. I could count on at least a couple people every conference to track me down and tell me I needed to increase the power on the APs. ;-)


That works if you control all the radios. If there is some other device screaming into the void, you are screwed either way. (Been there.)


One event I particularly remember: the venue had ONE AP (and they had assured us they could provide WiFi coverage for our 500 users). It was set to high power, and I found it during the event on the floor under a bench outside the master ballroom. This was also the venue that, I eventually tracked down, was handing out DHCP leases with a gateway address in a different subnet than the client IP. That was, admittedly, 2005, but the confidence they had in being able to serve our attendees, despite us telling them it wasn't going to be as easy as they thought, was stunning.


We don’t move to higher frequencies just because we’ve run out of ways to pack more data into lower bands. The main reason is that higher frequencies offer much wider chunks of spectrum, which directly translates into higher potential data rates. Advanced modulation and coding techniques can squeeze more capacity out of lower bands, but fundamental physical and regulatory limits (Shannon’s limit, and the crowded, heavily licensed spectrum below 6 GHz) make it harder to keep increasing speeds at those lower frequencies.
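
(A minimal sketch of the "wider channels" point: same SNR, just more bandwidth. The 20 MHz vs 400 MHz widths and the 20 dB SNR are illustrative assumptions, not any particular deployment.)

    import math

    def shannon_capacity_mbps(bandwidth_hz, snr_db):
        # Shannon limit: C = B * log2(1 + SNR), returned in Mb/s
        snr = 10 ** (snr_db / 10)
        return bandwidth_hz * math.log2(1 + snr) / 1e6

    print(round(shannon_capacity_mbps(20e6, 20)))   # 20 MHz channel:  ~133 Mb/s
    print(round(shannon_capacity_mbps(400e6, 20)))  # 400 MHz channel: ~2663 Mb/s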


In addition to what others have said, from a network perspective you often want a smaller range.

At the end of the day, there is a hard limit on spectral efficiency, i.e. how many bits per second you can push through each Hz of bandwidth.

For example, in cities, with a high population density, you could theoretically have a single cell tower providing data for everyone.

However, the speed would be slow, as for a given slice of bandwidth the data rate is shared between everyone in the city.

Alternatively, one could have 100 towers, and then the data would only have to be shared by those within range. But for this to work, one of the design constraints is that a smaller range is beneficial, so that multiple towers do not interfere with each other.
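
(A toy version of that arithmetic, with made-up numbers, assuming capacity divides evenly and the cells don't interfere with each other:)

    cell_capacity_mbps = 1000   # hypothetical total air-interface capacity per cell
    users = 100_000             # hypothetical users in the city

    # One big cell: everyone shares the same capacity
    print(cell_capacity_mbps / users)           # 0.01 Mb/s per user

    # 100 small cells reusing the same spectrum: each serves 1/100 of the users
    print(cell_capacity_mbps / (users / 100))   # 1.0 Mb/s per user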


5G can operate at the same low frequencies as 2G/3G/4G. It's not inherently a higher frequency standard.

It just supports other, higher bands as well.


5G's trick is MIMO. Basically just using more channel space for more data. In some places that means 3G/4G spectrum + 24GHz + 60GHz. And responding when you close a door and the 60GHz goes away. In some parts of the world where licensing worked out differently, it might just be a couple chunks of old 4G spectrum. It's not a monolith.
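
(A rough sketch of the MIMO part: with rich scattering and enough SNR, capacity scales roughly with the number of spatial streams, i.e. min(TX, RX antennas). The bandwidth, SNR and antenna counts below are illustrative assumptions, not any real deployment.)

    import math

    def mimo_capacity_mbps(bandwidth_hz, snr_db, n_tx, n_rx):
        # Rough approximation: min(Nt, Nr) parallel streams, each near B * log2(1 + SNR)
        streams = min(n_tx, n_rx)
        snr = 10 ** (snr_db / 10)
        return streams * bandwidth_hz * math.log2(1 + snr) / 1e6

    print(round(mimo_capacity_mbps(100e6, 20, 1, 1)))  # SISO:     ~666 Mb/s
    print(round(mimo_capacity_mbps(100e6, 20, 4, 4)))  # 4x4 MIMO: ~2663 Mb/s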


In most places it's 2G/3G/4G bands, either repurposed or through dynamic spectrum sharing, plus sub-6 bands.

mmWave is a flop.


It's a flop in this circumstance for sure.

I used to have some early engineering material outlining what had been approved for use in each country, and 24GHz was pretty damn common. Could be that's changed; I haven't kept up.

I do know in Australia we have sweet FA and 5G isn't very interesting at all.


mmWave is amazing in any kind of packed arena/stadium. I was never able to even use my phone at a basic level before; now I can get low-latency gigabit+ speeds, which is insane.


In practice sub-6 bands are just good enough in most scenarios. We're 5+ years into mmWave deployments in the US and there's still very little interest or regulatory push worldwide.

For instance, it's completely stalled in South Korea, which has some of the highest 5G coverage and market penetration. In Japan I found articles from 2023 claiming mmWave coverage was "0.01%" at the time; I don't know if it has expanded since. In Europe there are virtually zero production deployments or devices sold with the compatible modem/antennas. While there are small deployments in Australian cities, Apple doesn't bother selling compatible models. Etc.



