Seems like this is written by a conspiracy nut. Facts are wrong (Michael Mann isn't a professor at UPenn) and the story is one-sided. There's no mention of what else Tallbloke wrote on his blog or the fact that the equipment seizure was due to the email hacking that occurred (http://www.washingtonpost.com/world/europe/uk-police-seize-e...), not because of any comments.
I was reading on my phone in bed, but your comment so missed the point that I was compelled to get up in the middle of the night and write a full response.
1) Yes, the author confused Penn State with UPenn. The Financial Post should probably correct it, but I find it excusable that a Canadian author writing in a Canadian paper might not properly distinguish them. This did not cause me to discount the rest of the article.
2) Yes, it's one sided. It's an opinion piece in the comment section, and labelled as such. This is indicated by the breadcrumbs at the top reading "Home / Opinion / FP Comment".
3) No mention of email hacking? What about the "link to a zip file posted online at a Russian Web address" that "contained 5,000 emails written by some of the most prominent names in climate science." Sure, it didn't use the word hacking, but for all we know it might have been an inside job. Unless you have personal knowledge to the contrary? (cue the police coming to seize your computers)
4) Wait, they didn't seize his computers because of one particular comment left on his blog by someone unknown? You do realize that Tallbloke is not a suspect in the case, and that his computers were seized _explicitly_ in an effort to track down the source of that comment containing that link? Is this the first time you've come across this story?
5) "Seems like this is written by a conspiracy nut". Well, that's a matter of opinion, but "seems" is an odd word choice. As the byline mentions, she's the proud author of the "recently published exposé of the Intergovernmental Panel on Climate Change, The Delinquent Teenager Who Was Mistaken for the World’s Top Climate Expert." I haven't read it, and so can't comment on its quality. Have you? I've read some of Matt Ridley's books, though, and he called it "one of the most important pieces of investigative journalism in recent years". (http://www.rationaloptimist.com/blog/delinquent-teenager)
6) I do appreciate the link. In particular the part where he retracted his clearly libelous statements, in which he coyly refers to "certain things that could be misinterpreted", as if "the seizure of computer equipment that appears to be linked to the storage and dissemination of the stolen documents" might have a completely different meaning in Mr Tattersall's legal jurisdiction? And the part where he offers Mr Tattersall space for a response in return for not "pursuing legal action that was previously suggested"? Was this perhaps added after you read it? Or is that the part you are calling attention to?
> Yes, it's one sided. It's an opinion piece in the comment section, and labelled as such. This is indicated by the breadcrumbs at the top reading "Home / Opinion / FP Comment".
This strikes me as a rather poor excuse (if it's intended as such). I agree nobody should expect an opinion piece to be rigorously researched, cited, etc....but all opinions are not equal. There's a difference between someone who is just casually writing their thoughts with the understanding that they are not fully vetting them...and someone who is clearly biased and pushing a particular agenda.
> Wait, they didn't seize his computers because of one particular comment left on his blog by someone unknown?
Well, we can't know all their reasons...but based on the facts so far...no they didn't. They seized them because he referred to a possible hacker as "our old friend" and wrote an article/linked to the hacked emails....AND because a potential hacker commented on his blog, possibly leaving evidence in his account/on the server...then refused to cooperate with the investigation.
Are they justified doing so? I don't know. I think the "old friend" comment is innocent. I also think based on what we know it's pretty frivolous. Unless they have some clear evidence linking this guy, I think they are overstepping their bounds. Of course they might have it...they're not gonna say.
One thing I would also say is that I don't think this guy is being treated any differently than anyone else who gets in the way of any government investigation (that is to say, poorly).... I hope climate change denialists don't hold this up as some sort of proof of their conspiracy theories.
> This strikes me as a rather poor excuse (if it's intended as such).
I didn't really intend it to excuse the quality of the content, only to assert that it's allowed to be one-sided. My view (biased, but not self-interested) is that the piece is both well researched and biased, and that the research produced the bias. The author isn't casually writing scattered thoughts, but rather reiterating the book that she's spent the last year writing. She may be wrong or misleading, but she's spent a lot of effort learning about the subject.
> Well, we can't know all their reasons...but based on the facts so far...no they didn't.
You're right. I'm presuming that Tallbloke was merely running a blog that allowed comments, rather than being a conspirator. I conclude this because there were three blogs being investigated simultaneously, and I find it highly unlikely that all of them are colluding. But I guess we'll have to wait to see how it turns out. If they have inside knowledge not yet released, I'd have to revise my thinking.
> I hope climate change denialists don't hold this up as some sort of proof of their conspiracy theories.
I'm sure some will, but I don't think there's any monolithic set of denialists. On the blogs in question (http://climateaudit.org/, http://noconsensus.wordpress.com, and http://tallbloke.wordpress.com, all more skeptical than denialist), most of the commenters seem to be concentrating on the civil liberties aspect. It's seen as a precursor of the potential downsides of SOPA. There are a lot of libertarian beliefs among the skeptics.
You're right, I was probably frothing a bit too much. I disagree a little bit with the concept of "both sides". Rather, I think there are a multiplicity of viewpoints, and no binary split is possible.
But I did genuinely appreciate the link. As you might gather from my response, I've been seeing mostly a different side, and I think it's good to have contrast. For contrast of contrast, here's Tallbloke's response: http://tallbloke.wordpress.com/2011/12/21/winter-solstice-pa...
Reading the two, I'm struck by the difference in tone. I like Tallbloke's better, but am uncertain if that's a subconscious reaction to his message, or simply a preference for a different style of prose. But I agree that the combination is better than any single source.
I'm starting to believe that memory usage may be a red herring in modern operating systems. Memory prices have been crashing, every day on Slickdeals I see 8GB of notebook (and netbook) memory for less than $30. Is this a problem worth solving anymore?
Memory usage is reflected in general performance, even when the whole system fits into RAM, because RAM doesn't have infinite bandwidth. Larger memory usage -> more traffic between CPU and RAM. And cache effects make the issue even more important. Modern systems may have gigabytes of RAM, but still only a couple of megabytes of CPU cache.
Memory usage is pretty significant. All those tens of megabytes add up. Remember, 50MB less for the OS = 50MB more for your running app.
Bigger issue: the low-priority memory. AVs are a big memory hog and cause problems; this will mitigate it. MS is acknowledging that AVs are a necessary evil (and AVs are not the only ones that do this), and this allows you to write friendlier programs.
> Larger memory usage -> more traffic between CPU and RAM.
Well, if there were less memory, the access would have gone CPU->HDD instead, which is orders of magnitude slower, so you'd expect more RAM to necessarily improve general performance.
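To put rough numbers on that gap (the latencies below are ballpark assumptions, not measurements from any particular machine):

```python
# Ballpark latencies (assumptions, not measurements):
ram_access_ns = 100            # main-memory access, ~100 ns
hdd_seek_ns = 10_000_000       # average disk seek, ~10 ms

# How many RAM accesses fit in the time of one disk seek?
gap = hdd_seek_ns // ram_access_ns
print(gap)  # 100000 -- about five orders of magnitude
```

So any working set that spills from RAM to disk pays a penalty that dwarfs everything else in the memory hierarchy.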
Yes, more RAM improves performance, but using less RAM in your program also improves performance independently of how much RAM you have. As an extreme example, a program that fits in L2 will flay the (excuse my French) living sh*t out of a 2GB program loaded in RAM.
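A rough CPython sketch of the same idea: do an identical amount of work over a cache-sized buffer versus a much larger one. Interpreter overhead blunts the effect and absolute timings vary by machine, so treat this only as an illustration, not a benchmark.

```python
import array
import time

SMALL = 1 << 12   # 4096 doubles ~= 32 KB: fits comfortably in L1/L2 cache
LARGE = 1 << 22   # ~4M doubles ~= 32 MB: larger than typical CPU caches

small_buf = array.array('d', bytes(8 * SMALL))  # zero-filled buffer
large_buf = array.array('d', bytes(8 * LARGE))  # zero-filled buffer

def sweep(buf, passes):
    """Sum the buffer `passes` times; elements touched = len(buf) * passes."""
    total = 0.0
    for _ in range(passes):
        total += sum(buf)
    return total

t0 = time.perf_counter()
hot = sweep(small_buf, LARGE // SMALL)   # same element count, stays hot in cache
t_small = time.perf_counter() - t0

t0 = time.perf_counter()
cold = sweep(large_buf, 1)               # streams through main memory
t_large = time.perf_counter() - t0

# On most machines t_small comes out lower, despite identical work.
print(t_small, t_large)
```

The two sweeps touch the same number of elements; the only difference is whether the data keeps fitting in cache between passes.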
When I read that, I wondered how long we will keep using DRAM in mobile devices. If Moore's law holds, we will easily be able to put 10GB of DRAM in a phone in ten years. I think having 1 GB of static RAM might be preferable.
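Back-of-envelope for that projection, assuming a 2011 high-end phone ships with about 1 GB of DRAM and capacity doubles every two years (both figures are assumptions):

```python
base_gb = 1              # assumed DRAM in a high-end 2011 phone
years = 10
doubling_period = 2      # Moore's-law-style doubling cadence (assumption)

projected_gb = base_gb * 2 ** (years // doubling_period)
print(projected_gb)  # 32 -- comfortably past the 10 GB figure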
If 1 GB of static RAM were even remotely cheap enough and dense enough, our processors would have more than a few megs of combined L1-L3 cache. Especially since high-end processors already cost hundreds of dollars.
CPUs have a few megs of L1-L3 cache because any more would slow them down. If you made the caches larger, it would increase the distance a signal has to travel, meaning more latency. With the clock speeds of modern processors, this does matter.
The best possibility would be fully realising the NUMA architecture, and giving each core a stack of dedicated SRAM or DRAM at sizes of 1GB (these would have to be off-die, though).
Yes, that is the thing about SRAM. It is not 10x more expensive than DRAM. Oh no. No, no, no, no, no. If it were to ever fall to only 10x more, it would be like the 2nd coming of Memory.
On modern CPUs, half or more of the silicon is used to afford 4-16MB L3 caches. A CPU die is not much smaller than a DRAM chip, and a 1GB DRAM chip is less than $10 these days, judging by the prices of 16GB, 16-chip sticks of DRAM.
- CPU caches are wired much more complexly than SRAM memory modules would need to be. A cache may be shared between CPUs, and being n-way associative surely requires silicon, too.
- if the ratio is way more than 10, why, then, do I find zillions of references stating that a) DRAM needs a transistor per bit, and b) SRAM can be built with 6 transistors per bit?
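The textbook cell counts those references give work out like this (transistor count only; differences in die area per cell, process, and yield are why the real cost ratio ends up far larger than this):

```python
bits_per_gib = 8 * 1024**3

dram_transistors = bits_per_gib * 1   # 1T1C cell: one transistor (plus a capacitor) per bit
sram_transistors = bits_per_gib * 6   # classic six-transistor (6T) cell per bit

ratio = sram_transistors / dram_transistors
print(ratio)  # 6.0
```

So the per-bit transistor ratio alone is only 6x; the price gap comes from everything the transistor count doesn't capture.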
Besides the direct performance issues already discussed, it is my opinion that:
1) A programmer who writes something of decent size and ceases to concern themselves with memory entirely will write code that continues to bloat unnecessarily for the life of the software. At least some attention to memory is necessary to keep usage reined in. You don't have to fight for KiB, but think about it. It seems to be a resource you could use 5% of, but without proper attention rapidly wind up consuming 100% of.
2) A programmer who disposes of the idea of using memory efficiently has probably discarded the idea of algorithmic efficiency in any form whatsoever. Pursuing memory optimization is a decent proxy for all forms of optimization.
The copy path button is only necessary because Explorer hides the real path behind the fake address bar they use now. I'd bet that it's mostly power users who need to copy the path anyway, and they'll be able to figure out how to click on the address bar and Ctrl+C.
It may feel gimmicky to you, but it's very functional. Each area of the breadcrumb bar has a distinct and useful function (jump up the hierarchy, list other folders at that hierarchy level, enter textbox mode). It's insanely useful in comparison to any other method I've seen.
In the visual hierarchy on the page, the links at the top are secondary to the main content. So it's better to have them faded out so your eyes aren't drawn to them.
Fading out the links is all well and good, but that only works when the bar itself doesn't draw your eyes to it. In my browser (uncustomized Google Chrome), it is by far the most distinctive element on my screen.
I find Facebook's blue header far more distracting than Google's dark grey one. But no one really complains about that. Maybe because they are used to it.
That's useful to see how something is done, but not why or the different options available. For that, there's still a definite lack of a clear guide. The guide on the Express website reads more like an API reference and doesn't go into detail on how to use the different features, which I think would be very helpful.
Right now I'm working on my first application using Express. Over the summer, when I have free time, I'd like to write a tutorial series for Express and put it up on GitHub so it can be updated as Express evolves. That's really one of the main issues right now: all this stuff is changing so rapidly that a tutorial can become outdated in a few months.
They actually came out and said "building a new browser for the ten-year old version of Windows that came with IE6 didn’t make sense to us because of the limitations of its graphics and security architectures." Considering IE 9 dropped support for XP, I don't think they consider it "news" that IE 10 won't support XP.
Why? Everyone else is supporting XP with no major problems. Meanwhile, MS has the source code for XP, and for some reason can't replicate that success?
It's an over-ten-year-old operating system already. Software has a shelf life. I can't blame them for not wanting to spread their support matrix too wide.
I don't blame them for trying to kill it off. Truly. It's been long enough, especially now that 7 is actually a solid upgrade for nearly every use.
But I do blame them for all the under-handed, customer-spiteful tactics they've used. DirectX 10 on Vista only, though it's almost 100% compatible with DX9 (having 9 report version 10 allows many DX10 games to run). IE9+ on Vista/7 only. It's complete bullshit, through and through, and I see no reason to defend their methods.
> (having 9 report version 10 allows many DX10 games to run)
The only games with which this will work are games that support DX9 explicitly and, for some reason, disable it. The D3D9 and D3D10 APIs are completely different. Having worked on an implementation of DX10 for XP, I can say for sure that this is 100% incorrect.
That was my project, and not very well haha. It worked, if you consider a lack of shaders, lighting (IIRC), and other critical things to be "working". Shaders were a PITA for a few reasons, not the least of which being that the D3D10 shader bytecode was completely undocumented. Spent a couple months doing nothing but reversing the bytecode format, and things sort of fell apart after that. All the code's out there, though, as is the complete story of the project and the company around it: http://daeken.com/alky-postmortem
I'm not entirely convinced mimicking anything Microsoft puts out is a good idea, even under the best circumstances. I've lately been getting pretty far into .NET, and the more I see, the more it terrifies me.
I'm on the fence about them supporting XP. One comparison to take into account: Safari releases have dropped support for older OS versions much less than ten years old, though Apple did change processor architecture and has fewer business users to support. Also, Microsoft offers free support.
I don't think they are criticising, I think they are making the point that they aren't the only ones who think you can't do hardware acceleration in XP. Whether that statement is true or not is irrelevant.
Here's the opposite side of the story with its own biases: http://scienceblogs.com/gregladen/2011/12/computers_of_crimi...