Hacker News: sam-2727's comments

The beginning of the conclusion of the original study [1] is worth repeating:

No one should try to reform or rehabilitate the ranking. It is irredeemable. In Colin Diver’s memorable formulation, “Trying to rank institutions of higher education is a little like trying to rank religions or philosophies. The entire enterprise is flawed, not only in detail but also in conception.”

Students are poorly served by rankings. To be sure, they need information when applying to colleges, but rankings provide the wrong information. As many critics have observed, every student has distinctive needs, and what universities offer is far too complex to be projected to a single parameter. These observations may partly reflect the view that the goal of education should be self-discovery and self-fashioning as much as vocational training. Even those who dismiss this view as airy and impractical, however, must acknowledge that any ranking is a composite of factors, not all of which pertain to everyone. A prospective engineering student who chooses the 46th-ranked school over the 47th, for example, would be making a mistake if the advantage of the 46th school is its smaller average class sizes. For small average class sizes are typically the result of offering more upper-level courses in the arts and humanities, which our engineering student likely will not take at all.

[1]: http://www.math.columbia.edu/~thaddeus/ranking/investigation... (section 8)


> Trying to rank institutions of higher education is a little like trying to rank religions

Did anyone try to make a ranking of religions similar to those for universities? Sounds like a fun project that would make a great point.

Measures like "Fees", "Diversity", "Alumni Salary", "Financial Aid Provided" would all be interesting to see for each religion.


I’d love to see that too. Mormonism for example would rank very high in “Alumni Salary” and “Quality of Alumni Network”, but low in “Diversity”, “Fees”, and “Amount of Bullshit You Need to Swallow”.


Here's the response:

Universities are mostly unimportant in terms of what they offer: class sizes, education, curriculum, etc... All mostly bs that doesn't matter.

The real role of universities is to gather together smart people as they develop. This requires mostly a sort of self-selection of applicants, who need to agree independently to go to the same university. Hence rankings, prestige, and all that nonsense.


This is just the wrong way to look at it. Clearly, there is real demand for rankings by students. No one is stupid enough to think that there is some real difference between #47 and #48. But obviously #47 is very different from #26.

Just because you can't get an exact measurement does not mean that a metric does not exist or is not useful.


I would argue that the difference between 47 and 26 comes down largely to field of study or cost of attendance.

The top 10-15 offer an almost indisputable advantage, with the top 5 or so being a tier unto itself. Anything outside those groups is largely "it depends", and 50-100 forms another tier where total cost of attendance largely dictates whether one school is "better" than another.


I couldn’t agree more. The schools I got into were ranked something like 12, 13, 25, and 30. I went to 12 for only that reason and always regretted it. Was it my own dumb fault? Of course. I was 17.


I've been looking at the bias in rankings for a little while. I think one way to identify and raise awareness of the biases, is just put rankings together side-by-side. I did this for computer science programs, and there's some interesting differences that I noticed:

https://jeffhuang.com/computer-science-open-data/#bias-in-co...


The focus on best paper awards is odd as the major conferences of some CS subfields dole out awards as if they were party favors, and those of other subfields don't have best paper awards at all.

For instance SIGCHI 2021 had 28 best papers out of 747 accepted papers (or 3.7%) whereas CVPR 2021 had one best paper out of 1660 accepted papers (0.06%).
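For anyone who wants to check the arithmetic, here's a quick sketch (the paper counts are the ones quoted above; the ~62x figure just falls out of them):

```python
# Best-paper award rates quoted above: SIGCHI 2021 vs CVPR 2021.
sigchi = 28 / 747   # 28 best papers out of 747 accepted
cvpr = 1 / 1660     # 1 best paper out of 1660 accepted

print(f"SIGCHI: {sigchi:.1%}, CVPR: {cvpr:.2%}")
# A best paper at CVPR was roughly 62x rarer than at SIGCHI that year.
print(round(sigchi / cvpr))
```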

I have no opinion about whether it's "better" to be stingy or generous with best paper awards. But obviously any kind of ranking that doesn't account for differences between conferences and subfields is going to be quite suspect.


Fair point, it's something I'm aware of and it's a bit intentional to counterbalance the "normalization bias" already done in other rankings, if that's what you meant by "account for differences".

To put it another way: counting such that a CVPR best paper is worth 28 times a CHI best paper is also quite suspect. There's a rabbit hole you can go down to find the best conference to submit to, where the submissions-to-best-papers ratio is low, to optimize for this.

PS: note that there's an upper bound to best paper awards, which is "< 1% of submitted papers".


This is completely true of undergraduate studies. There is a very real reason to think that department (not university) rankings in graduate studies matter.


Anyone have a link to the source journal article? I can't find it.



Seems it was published in Proceedings of the Japan Academy, Series B, and the one in Science is "a separate study".

Found this abstract: https://www.jstage.jst.go.jp/article/pjab/98/6/98_PJA9806B-0...


Closest paper I could find is https://doi.org/10.1126/science.abn7850, but it makes no mention of amino acids in the main text or the supplement.


I think it is pretty standard practice to keep cheating confidential (indeed, I think for a lot of universities professors aren't allowed to publicize student names in incidents of cheating). I understand where you're coming from, but college can be an extremely stressful place that can lead students to do actions they otherwise wouldn't have done (or so goes the typical reasoning from universities).


While it won't be able to image more sharply on its own, JWST can still help to constrain certain factors in their modeling, thus obtaining better images.

See, e.g., https://www.stsci.edu/jwst/phase2-public/2235.pdf (although this was written when there was no image, certainly it would still be useful).



You will likely not have to wait long to see, given that the Supreme Court is most likely going to strike down affirmative action next year.


Here's a picture of the same color suit on the same person in 2015: https://twitter.com/OlegMKS/status/605693958015057921


You know what happened in 2014? We just don't know. Why does everyone feel the need to be right? No one can admit that, from an external point of view, sometimes you just can't know the real reason behind other people's actions.


> Can you do QM with just the real valued probability distribution?

You can't. The key fact is that other observables, such as momentum, depend on the complex phase of the wave function, not just its magnitude.
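A small numpy sketch of that point: two wave functions with identical |psi|^2 but different phases give different momentum distributions. The Gaussian packet and the momentum boost k=2 are my own illustrative choices, not from the thread:

```python
import numpy as np

x = np.linspace(-10, 10, 512)
g = np.exp(-x**2 / 2)
psi1 = g / np.linalg.norm(g)        # plain Gaussian wave packet
psi2 = psi1 * np.exp(1j * 2.0 * x)  # same envelope, boosted by momentum k=2

# Identical position distributions...
assert np.allclose(np.abs(psi1)**2, np.abs(psi2)**2)

# ...but different momentum distributions (the FFT gives the
# momentum-space amplitude, so the phase is what carries the difference).
p1 = np.abs(np.fft.fft(psi1))**2
p2 = np.abs(np.fft.fft(psi2))**2
assert not np.allclose(p1, p2)
```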


Complex amplitudes are a qubit the universe gives us for free.

If you were to append some fictitious spin system to whatever quantum state, you could put all the real amplitude on the spin down state and all the imaginary amplitude on the spin up state.

Perhaps "the simulation" is on a quantum computer and some of the qubits are not directly encoding things we can touch.
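A minimal sketch of the encoding described above, assuming a generic two-level state (the amplitudes are made up for illustration): put the real parts on the "spin down" half of the ancilla and the imaginary parts on the "spin up" half.

```python
import numpy as np

# An arbitrary normalized complex state on one system.
v = np.array([1 + 2j, 3 - 1j])
psi = v / np.linalg.norm(v)

# Real-encoded state on an extra fictitious spin:
# spin down carries Re(psi), spin up carries Im(psi).
real_psi = np.concatenate([psi.real, psi.imag])
assert np.isclose(np.linalg.norm(real_psi), 1.0)

# Born probabilities for the original system are recovered
# by summing over the ancilla outcomes.
probs = real_psi[:2]**2 + real_psi[2:]**2
assert np.allclose(probs, np.abs(psi)**2)
```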


But those have real valued probability distributions too, right? Does modeling the evolution of the joint distribution of all the observables not count?


There is not really a meaningful joint distribution. Some quantum observables are incompatible with each other which essentially means they can't be assigned simultaneous values.

More precisely a quantum observable is a map (a function) that takes in a quantum state and outputs a probability distribution, representing the probabilities of the various outcomes you could get if you measure that observable on that state. The equivalent statement is also true of classical observables and classical states.

Under classical rules it turns out that if you have many observables acting on the same system you can come up with a joint observable, that maps a state to a joint probability distribution for all the observables. For incompatible quantum observables this is emphatically not the case. Given two quantum observables there is generally not a joint observable representing simultaneous measurement of them.
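A tiny numpy sketch of the incompatibility, using Pauli X and Z as the textbook example (my choice of observables, not from the thread): each one alone maps a state to a perfectly good distribution, but their nonzero commutator is the signature that no joint observable exists.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# Nonzero commutator: X and Z are incompatible observables.
comm = X @ Z - Z @ X
assert not np.allclose(comm, 0)

# Each observable alone still yields a probability distribution.
state = np.array([1.0, 0.0])        # |0>, an eigenstate of Z
evals, evecs = np.linalg.eigh(X)
p = np.abs(evecs.conj().T @ state)**2  # outcome probabilities for measuring X
assert np.allclose(p, [0.5, 0.5])
```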


Yes you're right, every superposition of quantum systems has a real valued probability distribution for some observable variable.

QM has very specific rules when it comes to how two random variables are dependent. That specific rule comes up when you have quantum interference, and it is best described by the squared modulus of a sum of two complex amplitudes: |A + B|^2 = |A|^2 + |B|^2 + A·B* + A*·B. This is analogous to the union probability P(A) + P(B) - P(AB), with the cross terms playing the role of the correction term.
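A quick numerical check of those cross terms, with made-up amplitudes: the quantum probability adds the amplitudes first, and the difference from the classical sum is exactly the interference term.

```python
import numpy as np

A = 0.6 * np.exp(1j * 0.0)      # two complex amplitudes (arbitrary choices)
B = 0.8 * np.exp(1j * np.pi)

p_interfere = abs(A + B)**2                  # quantum: amplitudes add first
p_classical = abs(A)**2 + abs(B)**2          # classical: probabilities add
cross = (A * np.conj(B) + np.conj(A) * B).real  # the interference cross terms

# Destructive interference here: 0.04 instead of the classical 1.0.
assert np.isclose(p_interfere, p_classical + cross)
```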


Yes, but you have more variables than necessary. Why model all the observables independently when you can model the system with one complex wave function?

It's the interaction between variables that is key.


1. What do you mean by simulator? The wavelength range/sensitivity of the various instruments has been simulated.

2. I think you use this tool to submit proposals: https://jwst-docs.stsci.edu/jwst-astronomers-proposal-tool-o...

3. They go through a panel to be peer-reviewed and compared to other proposals (similar to how grants are allocated by the government). I'm fairly certain this process also includes review by instrumentation experts. The process is double-blind and the first cycle has already been allocated: https://www.stsci.edu/jwst/science-execution/approved-progra....


A simulator which would show you what your selected configuration would return for an image.

Like whether the exposure time etc. is correct.



Ok, we've changed to that from https://twitter.com/NASAWebb/status/1479880178021060609 above. Thanks!

