I haven't used Mathematica but look forward to learning it one day as an indulgence. I code mostly in R, which is renowned for the hurdles you have to jump through to get a package published on its primary repository (CRAN). But that high standard means you can download almost any library and expect to find extremely well documented functions, with examples that can be run with minimal (usually zero) additional data or dependencies. It's a real treat, and I miss it massively when using languages with less rigorous and less uniform approaches to documentation.
I don’t know about this cable specifically, but it can be done by transferring more power to the optical signal.
Erbium-doped fiber amplifiers work by utilizing a nonlinear optical effect where energy is transferred from a pump laser to the signal. This is in principle possible in any optical (glass) fiber, but by doping with exotic elements, the amplification characteristics can be optimized. Erbium is suitable for the conventional communication wavelengths.
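As a rough illustration of that energy-transfer idea, here is a back-of-the-envelope sketch. The 980 nm pump and ~1550 nm signal wavelengths are typical textbook EDFA values, not something stated in this thread, and the numbers are purely illustrative:

```python
# Back-of-the-envelope: quantum-limited power conversion in an EDFA.
# Each pump photon (980 nm) can excite one erbium ion, which can then
# release one signal photon (~1550 nm) by stimulated emission. The energy
# difference is lost, capping how much pump power can reach the signal.

PLANCK = 6.62607015e-34      # J*s
C = 299_792_458              # m/s

def photon_energy(wavelength_m: float) -> float:
    """Energy of a single photon at the given vacuum wavelength."""
    return PLANCK * C / wavelength_m

pump_nm, signal_nm = 980e-9, 1550e-9   # typical EDFA pump / C-band signal

# Quantum limit: at most one signal photon out per pump photon in.
eta_max = photon_energy(signal_nm) / photon_energy(pump_nm)
print(f"Quantum-limited pump-to-signal efficiency: {eta_max:.0%}")  # ~63%

# So e.g. 500 mW of 980 nm pump can add at most ~316 mW to the signal,
# before real-world losses (incomplete inversion, ASE, coupling).
print(f"Max signal power added from 500 mW pump: {0.5 * eta_max * 1e3:.0f} mW")
```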
For reference I have a PhD in information theory and signal processing for fiber channels.
This is still a good practical reference I like to point out when people ask: https://www.youtube.com/watch?v=mWqe8_5SUvk Richard A. Steenbergen also has other good talks, e.g. on traceroute. There are multiple versions of these talks that cover more or less the same material, occasionally with a bit more information here and there.
The crush to the cable could be a number of things. Without knowing the terrain, and given that these cables just lie on the sea floor, it could be that the cable was sitting on some jagged rock and was pulled tight elsewhere (perhaps by fishermen dredging the seabed), forcing it onto the jagged rock and crushing it.
Alternatively, though less likely, some heavy object from above has somehow landed on the cable, perhaps even a submarine of sorts resting on the seabed.
Again, knowledge of the terrain of the sea floor where the crush took place is key to gaining some idea of what might have happened, but I think it's the first scenario: a fisherman dredging the sea floor elsewhere caught and pulled the cable tight, and the crush is the damage from the cable being snagged against rocks and crushed by the tautness.
Rock climbers and abseilers will see the same kind of damage with their ropes.
> I used to work at a hedge fund known for its diagnosis process, a logical process of clearly defining the suboptimal outcome that occurred, agreeing on how it should have gone and what had happened, and if it's indeed wrong to have gone this way, what to do about it.
Do you know of some publicly available material on this? I’d love to hear about their approach.
Well, the technology is just as impressive either way, but I think "we experimentally demonstrate transmission of 1.84 Pbit s–1" is misleading. The capacity was demonstrated piecewise, but that aggregate data rate was never demonstrated in one go.
The two other comments gave very good general answers, but I happen to have worked on this specific project, so I can give some very specific details (as far as my memory goes.)
Lab testing at this scale of transmission involves a bit of “educated simplification”. We had some hundreds of wavelength channels, 37 fiber cores and two polarizations to fill with data. Actually doing that is not realistic within our budget, so instead we split the system into components where there is no interference. For example, if there is different data on all neighboring cores compared to the core-under-test, then we dare to assume that the interference is random, without considering the neighbors' neighbors, etc.
This reduces our perspective to a single channel under test with known data, plus at least one other channel which is just there as “noise”. The goal is to make the channel-under-test see realistic “background noise” from neighboring interference. This secondary signal is sometimes a time-delayed version of the same signal, sometimes a completely independent (but real) data signal.
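A toy sketch of that decorrelation trick, not the group's actual setup; the QPSK modulation, the 137-symbol delay and the -20 dB crosstalk level are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def qpsk(n: int) -> np.ndarray:
    """Random unit-power QPSK symbols."""
    bits = rng.integers(0, 2, size=(n, 2))
    return ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

n = 100_000
signal = qpsk(n)                  # channel under test, known data

# Emulate the neighboring channel as a *time-delayed* copy of the same
# data: after a long enough delay the symbol sequences are uncorrelated,
# so it acts like independent interference without a second data source.
delay = 137                       # arbitrary illustrative delay, in symbols
neighbor = np.roll(signal, delay)

# The delayed copy is effectively uncorrelated with the test signal ...
corr = np.abs(np.vdot(signal, neighbor)) / n
print(f"|correlation| between signal and delayed copy: {corr:.4f}")  # ~0

# ... so adding it at a plausible coupling level gives well-defined "noise".
coupling_db = -20                 # assumed inter-channel crosstalk level
received = signal + 10 ** (coupling_db / 20) * neighbor
```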
All of this left us with a single signal at 32 GBd (gigasymbols per second), which is doable on high-performance signal generators and samplers.
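To make the arithmetic concrete, here is a rough sanity check. The 37 cores, two polarizations and 32 GBd come from the comments above; the wavelength count and net bits per symbol are placeholders I have assumed, not the paper's actual values:

```python
# Order-of-magnitude check on the aggregate rate described above.
symbol_rate = 32e9        # 32 GBd per channel (from the comment)
cores = 37                # fiber cores (from the comment)
polarizations = 2         # (from the comment)
wavelengths = 220         # "some hundreds of wavelength channels" -- assumed
bits_per_symbol = 3.5     # net, after FEC/shaping overhead -- assumed

total_bps = symbol_rate * cores * polarizations * wavelengths * bits_per_symbol
# Lands near the headline figure for these assumed numbers (~1.82 Pbit/s).
print(f"Aggregate: {total_bps / 1e15:.2f} Pbit/s")
```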
Ah OK, so you just extrapolate the capacity of the pipe based on that; you don't actually generate petabytes of data. That makes a lot of sense, thanks!
I should clarify that we did measure every channel (polarization, wavelength and fiber core) individually. It would not be fair if we just measured one and multiplied ;)
(And yes, that took forever. A shout out to A. A. Jørgensen and D. Kong for their endurance in that.)
The chip which produced the laser is indeed “just” CW, with data modulated on separately. The novelty indeed lies in the width of the comb source and the SNRs of the obtained channels.
Late to the thread, but I took part in this research (7th author on the list). I worked on the signal processing, information coding, etc., and am happy to answer any questions :-)
Does this work imply that the same tech could create ultra-high-speed switches that could match this bandwidth, thereby routing and propagating, and not just flow between two points?
Optical saves a heck of a lot of power, and is obviously much faster than copper, so that's the way it's all going.
The longer answer: it requires reliable, appropriately sized and priced transceivers to get the data back to electrical at speeds matching the optics, and those are going to be a while coming; this tech is still in the lab.
At the top end subsea cables have very high cost and traditionally bulky transceivers, and it's all about data volume, not switching.
At the other end of the scale, inside the data centre, where most switching needs to occur, there is a move towards optical interconnects and co-packaged switches. (1 and 2)
What are the optical link budgets on this 8 km dark fiber path?
What's the TX launch power?
What's the frequency range, bottom end to top end, in nanometers or THz? Does this all run in the normal ITU DWDM range from approx. 1528 nm up to 1568 nm, or wider than that?
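(Not an answer to the link-budget questions, but for anyone converting that ITU window between wavelength and frequency, a quick sketch using f = c/λ:)

```python
# Convert the conventional ITU C-band edges from wavelength to frequency.
C_M_PER_S = 299_792_458

def nm_to_thz(wavelength_nm: float) -> float:
    """Optical frequency in THz for a vacuum wavelength in nm."""
    return C_M_PER_S / (wavelength_nm * 1e-9) / 1e12

for nm in (1528, 1568):
    print(f"{nm} nm -> {nm_to_thz(nm):.1f} THz")
# 1528 nm -> 196.2 THz, 1568 nm -> 191.2 THz: the window spans ~5 THz.
```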
I’d say there is at least a 10-year delay between the lab and commercial deployment. Even then, we are talking about deployment in large fiber systems, not to the home.
However, not all ideas in the lab ever make it into deployment.
We used constellation shaping and a rate-adaptive code to tailor the bitrate of each channel. It varied between roughly 64-QAM and 256-QAM depending on the SNR of the channel.
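A minimal sketch of how such rate adaptation can work, using the standard gap-to-capacity rule of thumb. The 8 dB gap and the example SNRs are assumptions for illustration, not values from the paper, and a real shaped system achieves fractional bits per symbol rather than snapping to integer QAM orders:

```python
import math

def pick_qam(snr_db: float, gap_db: float = 8.0) -> int:
    """Densest QAM supportable at a given SNR, via the classic
    gap-to-capacity rule of thumb: bits ~ log2(1 + SNR/gap)."""
    snr = 10 ** (snr_db / 10)
    gap = 10 ** (gap_db / 10)    # SNR gap covering FEC margin (assumed 8 dB)
    bits = int(math.log2(1 + snr / gap))
    bits = max(6, min(8, bits))  # clamp to the 64-QAM .. 256-QAM range
    return 2 ** bits

for snr_db in (24, 30, 34):
    print(f"SNR {snr_db} dB -> {pick_qam(snr_db)}-QAM")
# SNR 24 dB -> 64-QAM, 30 dB -> 128-QAM, 34 dB -> 256-QAM
```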
Post-processing times were not too bad. It ran on a standard desktop computer and gave an estimate of the data rate in about a minute (I can't remember exactly). Of course, compared to actual transmission that is terribly slow, but that was only due to the implementation and the needs of this experiment.
I can’t answer for the chip aspect (which is the truly novel part of this research), but many of the signal processing and coding techniques are being deployed in new optical transmission systems. Constellation shaping and rate-adaptive coding were two techniques we used in this paper to ensure that each individual channel was utilized as close to its ideal capacity as possible.
Devil's advocate here. How do you feel about the social significance of this type of work? Do you think "enough bandwidth" is a thing? If only the cost drops further, will it affect society? If we can already stream anything in the collective consciousness within seconds, what is the purpose of more? Is it likely to enable unnecessary levels of video surveillance by state actors?
I must confess that I have never been concerned along those lines.
I have thought a lot more about the environmental impact of transmission technology. It is a massively energy-consuming industry, and the expectation is to keep providing more capacity, while the expected efficiency gains do not add up to an actual reduction in energy use.
I appreciate your honesty. You are not alone in working without considering social impact; it's rife in tech, and I have been guilty of it too.
Alzheimer's seems a challenge! Here in China they apparently approximate it for research purposes by dosing primates with MDMA... should be easy to find volunteers!
Cell phone cells are (ideally) shaped to account for the expected pattern of handovers. Along roads and train lines, cells are (at least in GSM) supposed to be tailored to allow for easy routing and handover as devices travel along the route.
While I have never read anything concretely analyzing the handover pattern of devices on airplanes, I would expect that, since a very large number of cells are visible at almost equal signal strength, the network would have to hand the device over from one cell to another very frequently.
The handover process is, for voice traffic, very resource-intensive. In GSM it involves duplicating traffic to the neighboring cell and a lot of coordination.
I think that could be the reason for why mobile operators find airplane-borne devices annoying.
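Here is a deliberately crude toy model of that intuition; the cell counts, signal spreads and fading figures are all made up, and real networks add hysteresis precisely to damp this ping-ponging:

```python
import numpy as np

rng = np.random.default_rng(1)

def count_handovers(n_cells: int, spread_db: float, steps: int = 1000) -> int:
    """Toy model: at each measurement step the device attaches to the
    strongest cell. When many cells are received at nearly equal strength
    (small spread), fading noise flips the ranking constantly."""
    mean_rsrp = rng.uniform(0, spread_db, n_cells)   # static offsets per cell
    serving, handovers = None, 0
    for _ in range(steps):
        fading = rng.normal(0, 3.0, n_cells)         # ~3 dB fading, assumed
        best = int(np.argmax(mean_rsrp + fading))
        if serving is not None and best != serving:
            handovers += 1
        serving = best
    return handovers

# On the ground: a few cells, one clearly dominant (large spread).
print("ground:", count_handovers(n_cells=4, spread_db=20))
# From a plane: dozens of cells, all at nearly equal strength (small spread).
print("plane: ", count_handovers(n_cells=40, spread_db=2))
```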
The article gives a few examples where a pseudo-class fits better with how CSS normally works. For example, a selector normally matches the last (rightmost) item in the selector, and :has() preserves that; with '<' it would instead match 'parent' in your case.
You can also match descendants of something with the :has() selector; it really is more than a “parent selector”. For example, 'div:has(b) a' matches a link inside a div that also contains bold text. The important part is that the 'b' and the 'a' can be in very different places in the hierarchy; they don't need to be adjacent siblings like in a 'b + a' selector.
https://reference.wolfram.com/language/ref/Flatten.html
Just look at the number of examples and details for this function. It is like that for every function.