‘Impossible’ particle discovery adds key piece to the strong force puzzle (quantamagazine.org)
192 points by theafh on Sept 27, 2021 | 70 comments


Can anyone informed and eager enough tell me how it's possible to measure something that only exists for "12 sextillionths of a second"? In my computer minded mode of thinking in sensors and clock cycles I can't imagine a way this is done.


It isn't. They measure the decay products and infer what happened.


That makes sense. Thank you.


The answer is you don't measure the particle directly, but instead measure its byproducts. I spent my graduate career studying the Upsilon meson, which is a particle dominated by two valence bottom quarks. The Upsilon exists for a similar amount of time and there is no way we can measure it directly. However, about 3% of the time, it will decay to a pair of highly energetic electrons and another 3% of the time, it will decay to a pair of muons. These extremely energetic leptons (think 99%+ the speed of light) are something we can detect as they come screaming out of the collision. (side note: an electron weighs ~511 keV in particle physics units. The Upsilon meson weighs at least 9.46 GeV depending on the state. That means each electron has at least 4.73 GeV of kinetic energy with a mass of 511 keV, or ~9000x more kinetic energy than its energy in mass). We have ways of measuring their energy, so we can reconstruct the mass of the original particle via $E=mc^2$ plus some kinematics.
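If it helps to see that last step concretely, here is a rough illustrative sketch (made-up momenta, not real analysis code) of reconstructing the parent mass from the two measured leptons:

    import math

    def invariant_mass(p1, p2, m_lepton=0.000511):
        """Invariant mass (GeV, c=1) of a lepton pair, given their three-momenta."""
        def energy(p):
            return math.sqrt(sum(c * c for c in p) + m_lepton ** 2)
        E = energy(p1) + energy(p2)
        px, py, pz = (a + b for a, b in zip(p1, p2))
        return math.sqrt(E * E - (px * px + py * py + pz * pz))

    # Two back-to-back 4.73 GeV electrons reconstruct to roughly the Upsilon mass:
    print(invariant_mass((0.0, 0.0, 4.73), (0.0, 0.0, -4.73)))  # ~9.46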


This is a great answer. Thank you.

So in these situations how do you tell apart electrons from one source compared to another? In the article they mention how the LHC collides particles at a rate of "40 million times each second". I can imagine there are a lot of electrons and other particles flying around from other collisions. What makes an electron discernible between one type of particle and another?


Truly, you never really know which pair of particles came from a specific decay, and which come from some other processes and just happen to line up with the mass/energy you're looking at. Fortunately, for most particles, the combinatorial background signal follows a smooth curve around the energies you're looking at, so you can fit a curve to that background signal and then attribute the rest of the signal to the particle production. For an example, see the main plot on the Upsilon page in Wikipedia (https://en.wikipedia.org/wiki/Upsilon_meson). You can see there is a linear decline (in log space) of the background signal but then there's another peak around 9.5 GeV which is the additional signal from the Upsilon decay.

The point is, we cannot tell which pair of electrons/muons come from the decay of a specific particle, but we can tell how many extra occurred beyond what we would expect from all other known processes.
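For anyone who wants to play with the idea, here is a toy version of that background-plus-peak fit using entirely synthetic numbers (the shapes and yields are invented for illustration):

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)

    def model(m, a, b, n_sig, mu, sigma):
        background = a * np.exp(-b * m)                        # smooth combinatorial background
        peak = n_sig * np.exp(-0.5 * ((m - mu) / sigma) ** 2)  # resonance sitting on top of it
        return background + peak

    # Synthetic dilepton-mass histogram: falling background plus a bump near 9.46 GeV.
    mass = np.linspace(8.5, 10.5, 100)
    counts = rng.poisson(model(mass, 1e6, 0.8, 300, 9.46, 0.08))

    popt, _ = curve_fit(model, mass, counts, p0=[1e6, 0.8, 100, 9.5, 0.1])
    print("fitted peak position:", popt[3])   # ~9.46 GeV
    print("excess events at peak:", popt[2])  # the yield attributed to the resonance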


Thanks so much for the explanation and the example. I at least know a little more about these complicated endeavours now.


Happy to do it! Thanks for asking insightful questions. :)


Statistics, like the OP said. If you are expecting the decay products from an interaction to be 20% X and 80% Y, but after 100 billion attempts (which should have averaged out to the expected outcome) you instead get 21% X and 79% Y, then something in your calculation is wrong.
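Putting toy numbers on that comparison (treating it as a plain binomial, which glosses over real systematic uncertainties):

    import math

    n = 100_000_000_000    # attempts
    p_expected = 0.20      # expected fraction of outcome X
    observed = 0.21 * n    # observed count of X

    sigma = math.sqrt(n * p_expected * (1 - p_expected))  # binomial standard deviation
    print((observed - p_expected * n) / sigma)  # ~7900 sigma: nowhere near a fluctuation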


I did an undergrad in physics, and at one point I wanted to be a particle physicist. But I didn't want to go straight to grad school, so I spent a couple years teaching middle school math and science. I loved that, and spent 25 years teaching instead of going back to physics.

If you don't mind my asking, what are you doing now after spending years studying such a specific area of particle physics?


I've been a software engineer for the better part of a decade now. My areas of focus during that time have been NLP, messaging, teaching, politics, and now data analysis and processing around criminal justice.


Are the decay modes (3% one way, 3% another, 94%?) experimentally determined, or does theory predict this distribution?


As in most things with particle physics, it's a combination of both. Theory predicts a wide band and then experimentalists come in with the best estimate they can make. Some theories are excluded and others are refined. Then there is another round of predictions, and when the experiments get powerful enough, they can challenge or support some of them.

Some of the best data for branching ratios comes from e+e- (electron-positron) colliders such as LEP (literally, the Large Electron-Positron collider). In these colliders, we can fine-tune the energy to produce massive amounts of the particles we care about. From that, we can see how they decay. Mostly, Upsilons decay into massive sprays of hadrons and leptons (called jets in particle physics). These can come from decaying Tau particles (the much, much heavier cousins of muons and electrons) or from quarks/hadrons decaying over and over and over again into things like Kaons, pions, muons, electrons, photons, and other lightish particles. In the relatively clean environment of an e+e- collider, we can reconstruct these jets and determine which may have come from Upsilons. Combining this with a whole bunch of other measurements (and some theory) lets us determine the branching ratio (how often a particle decays into certain things).


Something I never quite understood: When you shoot particle A at particle B in some accelerator, how does the formation-and-almost-immediate-decay of particle C affect the end result? Like how can we know that those leptons didn't just come from the initial collision?


The scattering matrix predicts the distribution of output particle momenta given the input particle momenta and the output particles.

The scattering matrix is calculated by including all the possible interactions you expect. So a matrix including some intermediate C will be different from one that does not.

Then you can line up what you actually observe and select the matrix that most accurately describes it.


So if I understand it correctly, it's a statistical phenomenon. Looking at one particular (heh) collision, you cannot be sure. But if you run it a lot of times, the frequency of the occurrence will be the tell-tale signal. And what the existence of the particle does is provide an additional path. It allows the pair to be formed in two different ways, which raises the overall probability. Is that right?


Yes it’s statistical. In fact all physical measurements are statistical. When you measure the length of a table, there is some error term due to imperfections in your measuring stick as well as possible errors when you aligned/looked at the stick and table.

The extra path does not necessarily raise the probability. The interactions are more complex than that. The simplest thing to say is that it affects the distribution.


s/mass/rest mass/g

It's 511 keV at rest, and you are sensing a 4.73 GeV electron coming out of the decaying Upsilon. E=m and c=1 in the units you are using.
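For scale, quick arithmetic on those numbers (natural units, c = 1):

    import math

    E, m = 4.73, 0.000511   # total energy and rest mass in GeV
    gamma = E / m           # Lorentz factor, ~9256
    beta = math.sqrt(1 - 1 / gamma ** 2)
    print(gamma, beta)      # beta ~ 0.999999994, comfortably "99%+ of c"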


frob's answer is of course right, but theoretically, given enough resources, almost all phenomena should be observable. Imagine having an oscilloscope which measures once every 100 ms (really slow, I know). Now all you need is 100 oscilloscopes, put in a phalanx separated by 1 ms each, and a way to measure their clocks drifting. Given good enough "stitching" algos, any event above the theoretical limit of the Nyquist theorem should be observable.

https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampli...

This answer simplifies a lot and, of course, one only gains a stop-motion picture of the observed phenomenon, but nonetheless..
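A rough sketch of the stitching idea with invented numbers (clock drift and jitter ignored):

    import numpy as np

    n_scopes, period, offset = 100, 0.100, 0.001     # seconds
    duration = 1.0
    signal = lambda t: np.sin(2 * np.pi * 40 * t)    # 40 Hz test signal, far above one scope's 10 Hz rate

    t_all, v_all = [], []
    for k in range(n_scopes):
        t_k = np.arange(k * offset, duration, period)  # k-th scope's sample times
        t_all.append(t_k)
        v_all.append(signal(t_k))

    # "Stitch": merge all samples and sort by timestamp (drift correction omitted).
    t_all = np.concatenate(t_all)
    v_all = np.concatenate(v_all)
    order = np.argsort(t_all)
    t_all, v_all = t_all[order], v_all[order]
    print(len(t_all), "samples -> effective 1 kHz, enough for the 40 Hz signal")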


I’m curious: could anyone knowledgeable share a bit about these QCD lattice models — are they incredibly complicated? Are there open source implementations of these models and if so, would the code make any sense to a non-physicist programmer?


> are they incredibly complicated?

That depends on what you mean by "complicated". Conceptually they are not too bad, it's basically just quantum field theory (which some might consider "incredibly complicated" so YMMV). But the devil is in the details. From the article:

"While physicists know the exact equation that defines the strong force — the fundamental force that binds quarks together to make the protons and neutrons in the hearts of atoms, as well as other composite particles like tetraquarks — they can rarely solve this strange, endlessly iterative equation, so they struggle to predict the strong force’s effects."

> Are there open source implementations of these models

That I don't know.

> and if so, would the code make any sense to a non-physicist programmer?

Having seen code written by scientists, I can confidently answer this with: almost certainly not.


>Having seen code written by scientists, I can confidently answer this with: almost certainly not.

Got a kick out of that, thanks. I think there is a kind of compartmentalization of environments going on when one engages in scientific programming. It seems to often be structured more akin to math than code. I once worked with a brilliant multitalented individual who held a PhD in a specific field of physics that I cannot recall. In one instance they ported the orchestration software for a supercomputer from python into java, in a week. It was beautiful to read. This same person used single letter variables in their ML models and had virtually zero comments which made it somewhat difficult for me to follow their updates after a week of having not seen the code.


Here's an analogy that I can speak from more direct experience: if you study the theory of elliptic curves, and then look at elliptic curve cryptography code, there will appear to be no connection between the two. This is because in between the theory and the code are a zillion implementation details and tricks of the trade that aren't generally mentioned in the theory. Add to that the fact that the people who write scientific code aren't coders by trade, so they learn to write code that is just good enough but no better, and the result is something that looks like a horrible mess. (Also, the code is often written by grad students, so that makes it even worse. I look back on some of the code that I wrote back when I was a grad student that was the basis for publications and I shudder.)


This reminded me of the chemistry PhD who created Clasp, a full Common Lisp implementation using LLVM as a JIT compiler, able to call even templated C++ code from Common Lisp (a feat which no other language I know of even tries, except perhaps .NET with managed C++).

[0] https://github.com/clasp-developers/clasp


To me, it's a difference in professional gestalts(?) in highly trained individuals.

Physical engineers, especially, have a very formalized, standardized way of looking at the world. They all share it, and so their code follows from that.

Physicists, biologists, software developers all have different ways of looking at the world (some more standardized, some less). Our code follows from that.

So while we may gripe they don't leverage (insert language arcana / convention), my intuition is our code would still look very different even if they did. Biggest example, as you point out: math-structured code, from math-heavy disciplines.


From what I’ve heard, the core of the issue is that it’s similar to non-online game development. Most scientists write some programs for a single paper, and then just start from scratch for a new project. When you don’t have to care about long-term maintainability, readability becomes much less of a concern to people.


The Fortran code at particle research groups I saw looked pretty solid to me.


"Fortran" and "solid" are not two words that go together in my mind ;-)

But the OP didn't ask if the code was good (or "solid"), they asked if it would make any sense to a non-physicist programmer. And the answer to that question, I'm guessing, is most likely not. But I would be very happy if I turned out to be wrong about that.


You can write bad code in any language. The verbosity and restrictions of (modern) Fortran standards leave significantly less room for developer mistakes, much better than C/C++.


It probably boils down to if there are any non physicist Fortran programmers...


I’m an LQCD practitioner.

There are a variety of open-source implementations; I’ll just point to a few. The one funded by the Department of Energy through the SciDAC program is the USQCD software stack [0]. There’s also the GPU library quda [1], which is maintained by Nvidia employees (and others in the community). There’s Grid [2], with development led by Edinburgh in close collaboration with Intel (to make sure it compiles down to sensible high-performance primitives). There’s openQCD [3], coordinated by CERN researchers.

As to whether it’ll be readable to you—maybe? How transparent each library is differs. The most important parts are typically (1) the generation of gauge configurations (typically by HMC, which was discovered by the LQCD community [4]), which are MCMC samples, and (2) the calculation of observables on each sample. Both rely on highly optimized (and preconditioned, and maybe multigrid-ed) linear solves—the most important kernel.

Some libraries are written to be as transparent as possible; some to be as portable as possible. All are written to handle massive data parallelism across hundreds of high-performance nodes with some mixture of OpenMP, MPI, #pragma acceleration, etc.

Finally, the code will only “make (big picture) sense” to you if you understand lattice quantum field theory.

[0] http://usqcd-software.github.io/
[1] http://lattice.github.io/quda/
[2] https://github.com/paboyle/Grid
[3] https://luscher.web.cern.ch/luscher/openQCD/
[4] https://www.sciencedirect.com/science/article/abs/pii/037026...


I'm not an expert on particle physics, but the verbiage sounds to me like one of the standard problems in physics: We can write the differential equations. But even simple differential equations can be unsolvable. For example, consider the simple Three Body Problem. The differential equations are simple enough that you can use them as an introduction to the concept of differential equations themselves, but the Three Body Problem is not in general solvable.

The three body problem can generally be acceptably approximated for a reasonable period of time. But that problem only involves inverse squaring of distances. Strong forces decay much faster than that, which makes them more sensitive to errors. Plus you end up with one of the fundamental problems we have trying to understand our universe, which is just how monstrously enormous and monstrously slow we humans are. We operate at "meter" scales and "second" time frames, and particle physics operates at somewhere around 30 and 40 orders of magnitude smaller, respectively. (Not quite all the way down to the Planck sizes, but closer to those than to the macroscopic world.) So when you try to numerically approximate the differential equations, you don't get very far in time or space before your approximations have critically diverged from reality.

It's like we're trying to work out the fundamentals of chemistry and our primary tool is smashing planets together.


When people say that the three-body problem is "unsolvable" they really mean that there's not a way to write out an analytic solution to the problem. In the same way, you can call quintic polynomials "unsolvable", because there's no way to take arbitrary quintic polynomial equations and express the solutions as ordinary algebraic formulas.

However, it's very misleading to call quintic equations unsolvable, because we know where the solutions are, and we can use various numeric methods to calculate the solution with arbitrary precision. Any time we can calculate the answer with as much precision as we want, I'd like to say that the problem is "solved" in a very real and meaningful way.
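As a concrete toy example: x^5 - x - 1 = 0 has no solution in radicals, but plain bisection nails its real root to whatever precision you ask for:

    def bisect(f, lo, hi, tol=1e-14):
        """Plain bisection: halve the bracketing interval until it is smaller than tol."""
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if f(lo) * f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2

    print(bisect(lambda x: x ** 5 - x - 1, 1.0, 2.0))  # ~1.16730, to machine precision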

The problem is worse with quantum mechanics. With quantum mechanics, not only do we lack analytic solutions to many of the equations used in QM, but we also lack good numeric solutions (using real hardware, at least).


Following the analogy, as a layperson, what's the nature of the difference with quantum mechanics that doesn't allow it to be solved to arbitrary precision (even if through brute force iteration)?

Is it that we haven't discovered the solution-generating algorithms? That the states/probabilities of quantum mechanics are fundamentally not amenable to similar calculation? Or something else entirely?


Many problems in physics are perturbative. Then, approximate methods suffice to get reliable answers. In Asimov’s “relativity of wrong” it’s the perturbative nature of gravity—gravity is very weak—that makes “the Earth is a sphere” less wrong than “the Earth is flat”. So, if you need an answer to only such-and-such precision, the approximation scheme lets you know how hard you must work to achieve reliability at that scale. Once I get the precision I need I can stop.

Electrodynamics is like that. There is a number, the fine structure constant, about 1/137, that gives the natural scale for how big the next step in the approximation is, compared to the size of the current step. So, if I need to know the answer to 1 part in 10^9, I’m going to need to do 4 or 5 steps of fixing up the approximation (each fix, of course, being a great deal more arduous).

QCD, and other “strongly coupled” or “non-perturbative” problems, are not like that. If you make the dumbest approximation (flat earth) and then fix it up a bit (spherical earth), answers don’t change just a little. They change completely. In QCD the number that characterizes the “obvious” approximation (the Feynman diagram approach)—the number that’s 1/137 for electrodynamics—is about 1.5. That’s a disaster! The approximation scheme is obviously no good—you learn that you can never stop improving your approximation, because if you only worked “a little harder” your answer could change completely.

Other approaches are required.
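A toy numerical version of that comparison (ignoring the order-by-order coefficients and just tracking powers of the coupling):

    for name, coupling in [("QED-like, alpha ~ 1/137", 1 / 137),
                           ("QCD-like, alpha_s ~ 1.5", 1.5)]:
        terms = [coupling ** n for n in range(1, 6)]
        print(name, ["%.1e" % t for t in terms])
    # The first list shrinks by ~2 orders of magnitude per term, so truncating is safe;
    # the second never shrinks, so no finite number of terms gives a reliable answer.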


It's nothing fundamental, as far as I am aware. Depends on what you mean by "fundamental".

There is no known way to simulate a quantum computer, using a classical computer, in polynomial time. A quantum computer is just a kind of quantum system, so we know that some quantum systems cannot be efficiently modeled (barring revolutionary advances in simulation algorithms).

When your simulations take superpolynomial time, it tends to be easy to find problems which you simply do not have the computational resources to solve, and you may not be able to solve interesting versions of the problem. There are lots of examples of problems like this. However, I don't consider this to be a fundamental difference.

For example, satellite navigation systems are just fine calculating directions for driving all the way across the continental US, even though that's a very "large" instance of the problem that they are solving. But if you try to find the fastest route for a delivery driver to make a hundred deliveries within one city, good luck. This is just an analogy, and I'd like to emphasize that "no KNOWN algorithm" efficiently solves these problems, and that we haven't proven whether such an algorithm exists.


Thanks! By "fundamental," I was curious about the nature of the problem's difficulty, moreso than the progress on solutions. And computational intractability due to system properties makes sense!


Also a layperson, but as I understand it, we do have the algorithms to simulate quantum systems with n states to arbitrary precision through brute force. But they require that you can put ~n bits into a superposition state.

If your computer doesn't have that capability, you can simulate it to arbitrary precision with ~2^n bits.
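Rough arithmetic for the brute-force route, assuming a dense state vector of double-precision complex amplitudes (real simulators are smarter for special cases):

    for n in (10, 30, 50):
        amplitudes = 2 ** n                 # an n-qubit state vector has 2^n complex amplitudes
        gigabytes = 16 * amplitudes / 1e9   # 16 bytes per double-precision complex number
        print(f"{n} qubits: ~{gigabytes:.3g} GB")
    # 10 qubits is trivial, 30 qubits is ~17 GB, 50 qubits is ~18 million GB.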


Well put, though one nitpick: the three body problem is solved. It illustrates another case: even if a solution is known, it might not be practical.

https://www.math.uvic.ca/faculty/diacu/diacuNbody.pdf


> Are there open source implementations of these models

There are some open-source packages for lattice QCD, e.g. https://jeffersonlab.github.io/chroma/


Goodness me, that's a blast from my very distant past! The venerable MILC code is still available: https://web.physics.utah.edu/~detar/milc/milc_qcd.html


Also found this GitHub topic page: https://github.com/topics/lattice-qcd


As far as I recall, QCD gives you a nice simple formula - a path integral over quantum operators - for any quantity you want. Lattice QCD works out the answer with a great big Monte Carlo integration. The tricky bit is defining a field theory on a finite discrete space-time lattice that is provably equivalent to the continuum theory after controlled extrapolations to the continuum. And you have to show that you can define the correct operators in the lattice theory that really do give the quantities you want. Then you need to implement this in code that runs blazingly fast over huge data sets. These considerations, and the fact that there are all sorts of solutions to these problems which might be supported by a code base, tend to make code implementations complicated.


A quick search found this: https://github.com/akio-tomiya/LatticeQCD.jl

Not my field of expertise though, I am in experimental gamma-ray astronomy


I've wondered this too, and if it's possible for a toy implementation to help teach the concepts, but I suspect not. Probably the translation to code obscures more than it illustrates.


You might enjoy Lepage’s “Lattice QCD for novices” [0], which develops a lot of the ideas without getting bogged down in the specifics of QCD (it focuses on the harmonic oscillator).

[0] https://arxiv.org/abs/hep-lat/0506036
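For a flavor of what that looks like in practice, here is a compressed sketch in the spirit of the paper's harmonic-oscillator example (illustrative parameters, Metropolis core only, no error analysis):

    import math, random

    N, a, eps = 20, 0.5, 1.4      # lattice sites, spacing, Metropolis step size
    n_cor, n_cf = 20, 1000        # sweeps between measurements, number of measurements

    def local_S(x, j, v):
        """Terms of the (m = omega = 1) lattice action that involve site j taking value v."""
        jp, jm = (j + 1) % N, (j - 1) % N
        return v * (v - x[jp] - x[jm]) / a + 0.5 * a * v * v

    def sweep(x):
        for j in range(N):
            new = x[j] + random.uniform(-eps, eps)
            dS = local_S(x, j, new) - local_S(x, j, x[j])
            if dS < 0 or random.random() < math.exp(-dS):
                x[j] = new    # Metropolis accept

    x = [0.0] * N
    for _ in range(10 * n_cor):   # thermalize
        sweep(x)

    corr = [0.0] * N              # ensemble average of <x(t) x(0)>
    for _ in range(n_cf):
        for _ in range(n_cor):    # decorrelate between measurements
            sweep(x)
        for t in range(N):
            corr[t] += sum(x[j] * x[(j + t) % N] for j in range(N)) / N
    corr = [c / n_cf for c in corr]

    # log(G(t)/G(t+a))/a estimates the energy gap; for omega = 1 it should come
    # out near 1, up to discretization effects and statistical noise.
    print(math.log(corr[0] / corr[1]) / a, math.log(corr[1] / corr[2]) / a)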


> Polyakov’s analysis suggested that the four quarks banded together for a glorious 12 sextillionths of a second before an energy fluctuation conjured up two extra quarks and the group disintegrated into three mesons.

Poetic and informative. What a sentence.


These articles don't make a clear distinction between discoveries within the standard model framework and completely new physics.

This seems to be a discovery within the standard model: a new composite particle.


If it was a discovery outside the standard model it would be in the New York Times.


Finally, warp drives.


That's my general impression of Quanta Magazine: an interesting phenomenon + a clickbaity title and headline (bordering on crackpotty).


Interesting read. But I'll wait for what Sabine has to say about it. :D


Superdeterminism predicts that it was always pre-determined I wouldn't really understand this article, and the universe left me no choice, so I have nothing to feel bad about.

Sabine's blog exists in an interesting superposition of comforting and existential dread =]


I like Sabine, but I found her "takedown" of the multiverse/MWI ignored the most compelling argument for it, the computational argument proposed by Deutsch, and that was enough to make me skeptical of her other positions. Anybody know if she's ever addressed that anywhere?


Well, you should be skeptical in general, no? What’s the alternative, just adopting some other person’s opinions?

If this is about Sabine Hossenfelder, she has opinions and she has arguments. Sometimes she’s right, sometimes she’s wrong. Like everyone. Sometimes her opinions are well supported, sometimes I think not. Some of her explanations are sound, others contain mistakes.


I've never read Sabine Hossenfelder, so I can't comment specifically on her.

However, life is about trusting other people and adopting their opinions. I do not have the time or energy to be an expert on every subject, so naturally I will have to read the opinions of other people who have spent time in that particular subject.

Someone's previous record of factual accuracy as well as considering reasonable opposition to their points obviously will affect how much I trust them to influence my worldview, and how likely I am to believe what they say is true.


What is the computational argument?



I figured this might be the "quantum computation is computation done on parallel worlds" argument. It just isn't that compelling [1], and is in fact arguably false [2]. That's probably why Sabine didn't touch on it.

[1] https://www.sciencedirect.com/science/article/abs/pii/S13552...

[2] https://arxiv.org/abs/1110.2514


Both of those papers, and others like them, are by single authors in philosophy journals, not physics journals. I don't think they are 'serious' in the same way. That arXiv one is particularly bad.


"Particularly bad" in what sense? The notion that quantum computation happens in parallel universes is a philosophical position, not a scientific one, so of course philosophers of science are evaluating it.


It introduces the fundamental question but never answers it. He quotes Deutsch partially, "explain how Shor’s algorithm works" - but doesn't answer this question. He also leaves out the important part of the quote:

"To those who still cling to a single-universe world-view, I issue this challenge: explain how Shor’s algorithm works. I do not merely mean predict that it will work, which is merely a matter of solving a few uncontroversial equations. I mean provide an explanation. When Shor’s algorithm has factorized a number, using 10^500 or so times the computational resources than can be seen to be present, where was the number factorized? There are only about 10^80 atoms in the entire visible universe, an utterly minuscule number compared with 10^500. So if the visible universe were the extent of physical reality, physical reality would not even remotely contain the resources required to factorize such a large number. Who did factorize it, then? How, and where, was the computation performed?"

Computation is real - it requires matter and energy. If Shor's algorithm can factor a number so large that it would require more matter than there is available in the universe: _where is the computation occurring_? I have never seen this plainly addressed. I'm a layman, of course, which is why I would hope she'd break this argument down in her MWI video.


> but doesn't answer this question

The author doesn't have to answer that question to dispute Deutsch's alleged answer.

Secondly, Deutsch's challenge of explaining Shor's algorithm presupposes that quantum computation requires an explanation in terms of classical computation. While I'm sympathetic to that view, this assumption is easily rejected by people who don't view reality as fundamentally classical or local. So for these people, there is no challenge to meet.

Thirdly, while you can speculate that Shor's algorithm will scale to factoring numbers so large they require more atoms than are in the universe, no one has demonstrated that this is the case. Just because our current models describe this happening, that doesn't mean the model corresponds to what will actually happen in reality. It could easily be the case that the model is not accounting for noise or other factors that will prevent entanglement from scaling to the levels you describe. This is the position of some determinists, in which case Deutsch's challenge is also neutered.

Finally, other interpretations of QM can also provide explanations for speedups. For instance, any interpretation of QM that accepts its non-locality has an escape hatch via relativity: non-locality is effectively time travel in GR, but in a form that cannot be exploited for superluminal signalling. There are many other possible answers given by other interpretations of QM.

I personally think it's an interesting question, but it's not a compelling argument for many worlds, not least because the "many worlds as parallel computations" doesn't actually work beyond trivial examples.

> Computation is real - it requires matter and energy.

Yes, classical bits require a certain amount of matter and energy, but qubits do not have the same matter and energy requirements, which seems to be what you're expecting. If you expect there to be an answer of this sort, then I think you must give up believing that quantum computation will scale. Basically, you are expecting reality to actually be classical, and so have some deterministic classical computation happening behind the scenes (hidden variables), and these hidden variables will more than likely disrupt scaling quantum computations.


There are only about 1080 atoms in the entire visible universe...

Should this be read as 10^80? The way it's written is confusing.


Indeed.

Those sentences should be like this: "When Shor’s algorithm has factorized a number, using 10^500 or so times the computational resources than can be seen to be present, where was the number factorized? There are only about 10^80 atoms in the entire visible universe, an utterly minuscule number compared with 10^500."


You dropped these: ^^^


weird that people love to fanboy/fangirl anyone who has a reputation for criticism.


Why is it weird? Criticism draws attention.


She’s not even an expert in this subject but she has name recognition so OP wants her opinion. Just weird.


OP is not a physicist, and Sabine is able to break down the problems and issues with many of the scientific and unscientific theories that are thrown around so that even a layperson can follow the rationale.

So, I get the drift of the article. Do I have enough knowledge to make any reasonable assumptions about the validity of the findings it presents? Nope, not at all.

That's why I cherish Sabine. She's down to earth and has an understanding of science that suits me. She definitely has more than enough physics under her belt [0] to be qualified to talk about these topics. Go away with your 'She's not an expert'. Have you seen her qualifications?

"Naw," I hear you say "she's a theoretical physicist, not a particle physicist."

Yeah, well. I'm a Software Engineer. I can still tell when a Network Engineer is bullshitting.

I do understand why some people hate on her; she's abrasive and irreverent. Some people don't like their ivory towers to be besmirched.

[0] https://portal.dnb.de/opac/simpleSearch?reset=true&cqlMode=t...



