
Hopefully we'll see improvements in videogames, which I understand are among the harder types of programs to multithread.


You may find this presentation[1] interesting, written by Tim Sweeney of Epic Games (whose Unreal Engine powers a lot of games).

In the section on concurrency, he notes that the hardest part is updating the game logic (tens of thousands of interacting objects) and that manually synchronizing it is hopeless. Instead, STM:

"~2-4X STM performance overhead is acceptable: if it enables our state-intensive code to scale to many threads, it’s still a win. Claim: Transactions are the only plausible solution to concurrent mutable state"

It's also a neat presentation because it comes from the perspective of someone running a real large-scale, commercial, time-sensitive, _real world_ software project. Yet he talks about the most common types of bugs, and about how FP and other advanced concepts (like dependent typing) would help.

1: http://www.st.cs.uni-saarland.de/edu/seminare/2005/advanced-...


Thanks for the link. It's interesting to see a game developer's view on FP, though the presentation is a little undermined by the conclusion that includes statements like:

> By 2009, game developers will face CPUs with 20+ cores.


I think that line should be prefaced with the assumption from the previous slide that CPU and GPU would merge.

Anyway, the fact that we only have ~16-core CPUs today just means he was off on the timeline, and single-threaded code got a bit more "free lunch" for a few more years: a single-threaded program gets ~12% instead of ~5% of a CPU's capability.

The underlying point, "we're screwed without better tools", still stands. Besides, the work required to take advantage of 8-way concurrency on mutable state is the same as that required for 32-way.


Doubtful. Most games are GPU-bound, not CPU-bound, so CPU threading really isn't the issue.


Yes and no. If you were playing the kind of games we used to have in the Core 2 Duo days, then yes, that kind of game would be GPU-limited. (Note: I don't mean grab a 10-year-old game and try playing it; I mean those games updated to run on today's graphics engines.)

But today's games involve a completely different paradigm. Almost all of them put a huge focus on open worlds and on modeling the interaction between hundreds of thousands of objects in real time. Even the most GPU-intensive of these games can still realize bigger FPS gains from a CPU upgrade than from a GPU upgrade.


Has the pendulum swung back? I'm pretty sure the GPU overtook the CPU as the bottleneck in the past 5 years, and before the i7 was released the CPU was the bottleneck.


Depends on the game, really. Take a look at Minecraft and Dwarf Fortress: they are both CPU-bound due to

1. Simulating a large world using complex entities and voxels

2. Being single-threaded with no clear way to make them multi-threaded

Minecraft in particular is a pretty interesting problem. Since it only simulates the part of the world that's within a radius of a player, it would make sense to have each player's machine simulate their own part of the world and the server somehow merge those together.


> Minecraft in particular is a pretty interesting problem. Since it only simulates the part of the world that's within a radius of a player, it would make sense to have each player's machine simulate their own part of the world and the server somehow merge those together.

I can't imagine people would trust each other enough to let each player's machine simulate its own part of the world.


Good point. This is most obvious in Minecraft.

Minecraft, to me, is a visual 3D database: digging dirt doesn't remove something, it merely changes the value for that block in the database from "dirt" to "air", which changes how the game's algorithms act. And the game still ships with "developer graphics" simple enough that even something like a TNT explosion's burst of block updates stays cheap to render.
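To illustrate that framing with a toy Haskell sketch (the types here are mine, not anything from the actual game): the world is a map from coordinates to block values, and digging is an update, not a delete.

    import qualified Data.Map.Strict as M

    data Block = Dirt | Air | Stone deriving (Eq, Show)

    type World = M.Map (Int, Int, Int) Block

    -- Digging doesn't remove the block's entry; it rewrites its value,
    -- and rendering/physics simply read the new value on the next tick.
    dig :: (Int, Int, Int) -> World -> World
    dig pos = M.insert pos Air

    main :: IO ()
    main = print (dig (0, 0, 0) (M.fromList [((0, 0, 0), Dirt)]))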


I never got the impression that DF was single-threaded due to theoretical constraints, only that the game was originally single-threaded and that refactoring to make use of multiple threads is just too difficult for a lone developer on such a sprawling codebase.


Simulating a large world seems like a textbook example of an "embarrassingly parallel" problem, doesn't it? At least that's true for cellular automata.
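To make that concrete, here is a rough Haskell sketch (using the parallel and vector packages; the update rule is made up): each cell's next state reads only an immutable snapshot of the previous generation, so chunks of cells can be evaluated in parallel with no locking.

    import Control.Parallel.Strategies (parListChunk, rseq, withStrategy)
    import Data.Vector (Vector)
    import qualified Data.Vector as V

    -- A toy 1-D automaton: the next state of each cell depends only on
    -- the current states of itself and its two neighbors.
    step :: Vector Int -> Vector Int
    step world = V.fromList (withStrategy (parListChunk 1024 rseq) next)
      where
        n    = V.length world
        at i = world V.! (i `mod` n)            -- wrap-around edges
        rule l c r = (l + c + r) `mod` 2        -- made-up update rule
        next = [ rule (at (i - 1)) (at i) (at (i + 1)) | i <- [0 .. n - 1] ]

    main :: IO ()
    main = print (V.sum (step (V.fromList (replicate 10000 1))))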


I think the problem is that the world is a large mass of interconnected mutable state. You don't know whether updating a particular object will end up updating another.

Think of a player crouching on a plank that another player just phasered. If you simulate the first player's "crawl forward" movement by itself, how do you integrate the result of the plank being disintegrated, which causes the player to fall? The first player's simulation outcome depends on multiple inputs, but they aren't discoverable directly from that player's perspective. You have to first simulate the phaser beam to know the plank is gone, and only then do you know the player is falling now, not crawling.

And then imagine that with far more complex rules and a few hundred thousand objects having similar interactions, each one possibly modifying any other one.
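Tying it back to the STM comment upthread, here is a hedged Haskell sketch of that plank scenario (the types are invented): the player's update must read the plank's state in the same atomic step that writes the player, otherwise a phaser update on another thread can interleave and leave the player crawling on a plank that no longer exists.

    import Control.Concurrent.STM

    data Plank  = Plank  { intact :: Bool }
    data Player = Player { crawling :: Bool, falling :: Bool }

    -- The other player's update: disintegrate the plank.
    fire :: TVar Plank -> STM ()
    fire plankVar = writeTVar plankVar (Plank False)

    -- The first player's update: its outcome depends on the plank, so
    -- the read of the plank and the write of the player are one step.
    stepPlayer :: TVar Plank -> TVar Player -> STM ()
    stepPlayer plankVar playerVar = do
      Plank ok <- readTVar plankVar
      writeTVar playerVar (Player { crawling = ok, falling = not ok })

    main :: IO ()
    main = do
      plankVar  <- newTVarIO (Plank True)
      playerVar <- newTVarIO (Player True False)
      -- Even from different threads, these transactions serialize: the
      -- player either crawls on an intact plank or falls, never a mix.
      atomically (fire plankVar)
      atomically (stepPlayer plankVar playerVar)
      p <- readTVarIO playerVar
      print (crawling p, falling p)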


If you limit communication to the "speed of light", then each tick involves only local message passing, which is easily parallelizable.
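A minimal Haskell sketch of that scheme (all names and the update rule are invented): split the world into regions, and each tick let every region compute its next state purely from its own previous state plus the messages its neighbors sent last tick, so all regions can step in parallel.

    import Control.Parallel.Strategies (parMap, rseq)
    import qualified Data.Map.Strict as M

    type RegionId = Int
    type Msg      = String

    data Region = Region { rid :: !RegionId, state :: !Int }

    -- One tick of one region: consume last tick's inbound messages,
    -- produce a new state plus messages addressed only to neighbors.
    stepRegion :: Region -> [Msg] -> (Region, [(RegionId, Msg)])
    stepRegion (Region i s) msgs =
      ( Region i (s + length msgs)                 -- made-up update rule
      , [(i - 1, "ping"), (i + 1, "ping")] )       -- neighbors only

    -- One tick of the whole world: regions step independently (in
    -- parallel; rseq is shallow, a real version would use rdeepseq),
    -- then messages are routed for the next tick.
    tick :: [Region] -> M.Map RegionId [Msg] -> ([Region], M.Map RegionId [Msg])
    tick regions inbox = (regions', outbox)
      where
        results  = parMap rseq
                     (\r -> stepRegion r (M.findWithDefault [] (rid r) inbox))
                     regions
        regions' = map fst results
        outbox   = M.fromListWith (++)
                     [ (to, [m]) | (_, out) <- results, (to, m) <- out ]

    main :: IO ()
    main = do
      let regions = [Region i 0 | i <- [0 .. 3]]
          (regions', _) = tick regions M.empty
      print (map state regions')

Because nothing can influence a region more than one neighbor away per tick, no region ever needs to peek at another region's in-progress state.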



