It was so much fun. When I first went to grad school, Pat, Peter, and I figured out the fundamentals of how to separate the lighting from the surface reflection using multi-resolution texture maps. And it was a blast sitting in the Princeton graphics lab hacking up the RenderMan shaders to implement it for the first time. Great memories, and what amazing insights and tools built by so many others to make what we have now possible. :)
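In modern terms, the core of it is sampling the surface reflectance at full resolution and the lighting at a much coarser one, then modulating. A minimal C sketch of the idea (the names and the 16:1 resolution ratio are illustrative, not from our original RenderMan shaders):

    #include <stdint.h>

    #define LIGHT_SHIFT 4  /* lighting map is 16x coarser than the texture */

    /* Combine a full-resolution surface-reflectance texture with a
     * low-resolution lighting map at a given texel. Both maps are
     * assumed single-channel, values 0..255. */
    uint8_t shade_texel(const uint8_t *texture, const uint8_t *lightmap,
                        int tex_width, int u, int v)
    {
        int light_width = tex_width >> LIGHT_SHIFT;
        uint8_t reflectance = texture[v * tex_width + u];
        uint8_t light = lightmap[(v >> LIGHT_SHIFT) * light_width
                                 + (u >> LIGHT_SHIFT)];
        return (uint8_t)((reflectance * light) >> 8);  /* modulate */
    }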
Yep. For radiosity-based global illumination, it's the most common way of framing and solving how light energy is passed around an environment. Most of the work is in minimizing the computationally expensive visibility operations (shadows, etc.) by determining how much light energy is going to pass between two surfaces (based on distance, size, orientation, visibility, etc.), then computing that with the fewest rays and the right resolution of the energy, to use compute resources efficiently. Basically... :)
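To make the "how much energy passes between two surfaces" part concrete, here's a rough C sketch of the differential form factor between two small patches; the struct and names are mine, and the expensive visibility term (the ray casts) is left as a stub:

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    typedef struct { double pos[3], normal[3], area; } Patch;

    static double dot3(const double a[3], const double b[3]) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    /* Fraction of energy leaving a small patch 'a' that arrives at
     * patch 'b': falls off with squared distance and with how squarely
     * each patch faces the line between them. */
    double form_factor(const Patch *a, const Patch *b)
    {
        double d[3] = { b->pos[0] - a->pos[0],
                        b->pos[1] - a->pos[1],
                        b->pos[2] - a->pos[2] };
        double r2 = dot3(d, d);
        double r  = sqrt(r2);
        double cos_a =  dot3(a->normal, d) / r;  /* sender orientation   */
        double cos_b = -dot3(b->normal, d) / r;  /* receiver orientation */
        if (cos_a <= 0.0 || cos_b <= 0.0)
            return 0.0;           /* the patches face away from each other */
        /* a visibility(a, b) term -- the costly part -- would multiply in here */
        return (cos_a * cos_b * b->area) / (M_PI * r2);
    }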
I'm currently working on a cute little rasterizer in JS; a software radiosity engine would be really fun! Would you mind shooting me an email (link in profile) with any links/resources you think might be helpful for a noob?
Does anyone remember a magazine spread that came out before Quake's release that featured a bunch of high-res widescreen screenshots of the early game? They were big, full-bleed images. I believe the article also made a big deal about scanning the bodies of id employees and mapping them onto the player, saying that when the game was released players would be able to do the same.
What's confusing to me is that all the early screenshots I can find online are 320x240. I'm remembering high-res screenshots, and they were some of the first widescreen shots of a game I ever saw. Carmack was an early adopter of widescreen monitors. I remember the shots being kind of blue and castle/cathedral-y, in the style of Quake 1, but the resolution was very high and the textures were pretty high-res too, so it seems impossible that it was Quake 1.
For some reason, I feel like it was levels that were never released.
It's been bugging me that I can't resolve the errors in this memory. Does anyone else have any memory of this? It's right on the cusp of pre-internet, so it's hard to track down.
I found out about Blue's News from an issue of PC Gamer and was a loyal visitor for many years in the late '90s. Video games are what got me sucked into the PC world. Modding and map making were my introductions to programming. I had no idea it was still around. Surely it's got to be one of the oldest continuously updated web properties by now?
Please explain? I haven't heard of Blue's News before, and the FAQ doesn't explain much... (I gather it's some kind of games-related blog/news-aggregation site?)
From what I understand it's great but has a lot of information that is outdated and downright counterproductive on today's hardware. I kinda wish people would take old gems like this and bring out an annotated version that adds updated information.
The technical knowledge in it is both out of date, and more relevant than ever.
The particular details are long since irrelevant - even in 1996 optimizing for the 8086 was irrelevant. But the thought process, approach and lessons learnt from optimizing the 8086 are still relevant today.
In fact, they're probably more easily demonstrated through an example using the 8086 than in modern environments. I still use Abrash's S3 FIFO buffer story when I'm mentoring people.
For more up-to-date technical knowledge, I really liked his software 3D renderer articles. He talks about Pentium 4s and such, which are old of course, but he gives good tips on optimizing for a cache-hungry, superscalar processor (which is pretty much still relevant today).
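As a toy illustration of the kind of tip that still applies (function names made up), traversal order alone can dominate on a cache-hungry machine:

    #include <stddef.h>

    /* Row-major traversal: the inner loop walks memory contiguously,
     * so nearly every access hits the cache line just fetched. */
    void scale_image_fast(float *img, size_t w, size_t h, float k)
    {
        for (size_t y = 0; y < h; y++)
            for (size_t x = 0; x < w; x++)
                img[y * w + x] *= k;
    }

    /* Column-major traversal of the same data: each inner-loop step
     * strides w floats ahead, missing the cache on almost every access. */
    void scale_image_slow(float *img, size_t w, size_t h, float k)
    {
        for (size_t x = 0; x < w; x++)
            for (size_t y = 0; y < h; y++)
                img[y * w + x] *= k;
    }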
I have an old Pentium M notebook with 256 MB of RAM lying around, running Linux, for optimization work. If you get something to run fast on it, it should run pretty much anywhere. I'm looking at moving to an old netbook, though I'm not sure because of the architecture of the first Atoms.
If I were optimizing for ARM, the RPi would be a great machine :)
The Pentium M is sort of like "the missing link" between old and newer architectures. From it "evolved" the Core Duo brand (I also have one of those around), Core 2 Duo, and the Core i3/5/7 architectures. And while there are differences, the M is still similar enough to be representative.
I feel the same way about "Peter Norton's Guide to Assembly for the IBM PC". I once contacted Penguin(?) about getting some rights to annotate it but I was scared away by the forms I was sent to sign.
Isn't it sad that every generation of programmers forgets, probably not deliberately, that previous generations of programmers existed and were just as smart and talented as we are today?
More likely, most programmers never hear about the previous generation...
I remember reading Abrash's articles in Dr. Dobb's back in 1995 and 1996. They were written as Quake was being finished, and it was brilliant to follow the progress month by month.
I was in high school, but based on the articles, I was able to build my own Quake-like rendering engine at about the same time as Quake was released. I knew x86 assembler and had picked up various tricks for texture mapping beforehand. I implemented a BSP builder and a sorted-edge rasterizer inspired by id. It got pretty good. I remember struggling with perspective-correct texture mapping a lot (I couldn't find a reference to it, and had no internet). I cracked it solely based on a sentence in one of Abrash's articles about the "distance division". Suddenly his paragraphs about using the FPU in parallel with the CPU for texture mapping fell into place. I even developed various tools for lighting, level editing, and scripting. I always wanted to do a game with it, but never managed to actually build it. Within a few years, software rendering was obsolete anyway, and the Quake engine far, far behind -- I think my primary interest was in the algorithms, not building the game.
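For the curious, the trick is that u/z, v/z, and 1/z are linear in screen space, so you can interpolate those across a span and divide back per pixel. A minimal C sketch of the idea (my naming, not Quake's actual code):

    typedef struct { float u_over_z, v_over_z, inv_z; } SpanPoint;

    /* Walk a horizontal span, interpolating u/z, v/z and 1/z linearly
     * and doing the "distance division" per pixel to recover
     * perspective-correct u and v. Quake amortized the divide over
     * 16-pixel runs, overlapping the FPU divide with integer work;
     * this sketch divides every pixel for clarity. */
    void texture_span(SpanPoint left, SpanPoint right, int len,
                      float *out_u, float *out_v)
    {
        float du = (right.u_over_z - left.u_over_z) / len;
        float dv = (right.v_over_z - left.v_over_z) / len;
        float dz = (right.inv_z    - left.inv_z)    / len;

        for (int i = 0; i < len; i++) {
            float z = 1.0f / left.inv_z;   /* the per-pixel divide */
            out_u[i] = left.u_over_z * z;  /* perspective-correct u */
            out_v[i] = left.v_over_z * z;  /* perspective-correct v */
            left.u_over_z += du;
            left.v_over_z += dv;
            left.inv_z    += dz;
        }
    }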
I've implemented a lot of algorithms since then, and every time I work on a new difficult problem, I think back to these articles and the challenges described.
To me, Abrash's articles were an inspiration for at least the following reasons:
1) They show that even really hard problems can be solved by tenacious individuals or small teams.
2) He presents and compares various solution approaches (with pros and cons) -- Don't settle for the first idea.
3) He and Carmack compromised and "cheated" (e.g. BSP and surface caching) for performance reasons -- Something I've ended up doing myself many times. If the problem is too difficult, try to simplify it.
4) That smart people struggle with hard problems too. Don't give up! Eventually you find a solution -- Carmack hadn't slept much the weekend he implemented the Potentially Visible Set (PVS), which was the big breakthrough when developing Quake. (A rough sketch of the PVS idea follows this list.)
5) Overall friendly and accessible explanations of clever ideas.
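To give a feel for point 4, this is roughly what consuming a PVS looks like at render time, based on the released Quake source rather than the articles themselves (names simplified): each BSP leaf stores a bit vector of which other leaves could possibly be visible from it, with runs of zero bytes run-length compressed, and rendering only touches leaves whose bit is set.

    #include <stdint.h>

    /* Decompress one leaf's visibility row: a nonzero byte is literal;
     * a zero byte is followed by a count of zero bytes to emit. */
    void decompress_vis(const uint8_t *in, uint8_t *out, int row_bytes)
    {
        int emitted = 0;
        while (emitted < row_bytes) {
            if (*in) {
                out[emitted++] = *in++;
            } else {
                int zeros = in[1];
                in += 2;
                while (zeros-- && emitted < row_bytes)
                    out[emitted++] = 0;
            }
        }
    }

    /* One bit per leaf: skip rendering any leaf whose bit is clear. */
    int leaf_potentially_visible(const uint8_t *vis, int leaf_index)
    {
        return vis[leaf_index >> 3] & (1 << (leaf_index & 7));
    }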
Sadly, the Quake engine is irrelevant today, and I wish someone would write a similar series of articles about their development efforts and battles.
Does anyone know of anything similar from the past 10 years?
> Sadly, the Quake engine is irrelevant today, and I wish someone would write a similar series of articles about their development efforts and battles.
The original one perhaps, but there are several updated and improved versions out there that are far from irrelevant.
Take a look at DarkPlaces and especially FTE, for instance. FTE has OpenGL 4.x/DX9/DX11 and even a Vulkan renderer, and can be used as a base engine to create your own game on top of. These engines stem from Quake, and although most of the things have been improved or rewritten, FTE can run Quake, Quake 2, Quake 3, Hexen, etc.
Wow, I never knew the articles were written concurrently with writing the engine. It's truly inspirational to hear that you as a high school student could create a similar engine at the same time by following along; favourited!
I really should read the Black Book, the snippets I've read are great, and anyway I've started writing a software rasteriser for fun...
I'm not a 3D graphics programmer, so can you (or someone) elaborate on this? Aren't modern engines just the same as Quake, but pushing more pixels? Thanks!
The problems that were challenging and interesting back in the Quake era are now just standard commands built into your GPU, and so modern engines have very little in common with Quake's engine.
I always like to think about the fact that they went to the moon in the 1960s. We're all a bunch of babies compared to what they pulled off then.
"The on-board Apollo Guidance Computer (AGC) was about 1 cubic foot with 2K of 16-bit RAM and 36K of hard-wired core-rope memory with copper wires threaded or not threaded through tiny magnetic cores. The 16-bit words were generally 14 bits of data (or two op-codes), 1 sign bit, and 1 parity bit. The cycle time was 11.7 micro-seconds. Programming was done in assembly language and in an interpretive language, in reverse Polish."
"Oh gee, people were so much more talented when they had to build a chair by hand rather than run a whole factory of chair building machines. If only those modern programmers working on LCH/ATLAS had the chops to compute some moon landing trajectories."
I would say on the whole, yes, that is actually true. Now you only need one "smart" person to program up the machinery, whereas in the past you needed multiple skilled craftsmen in a variety of roles. Not exactly apples to apples, but I think one of the unfortunate side effects of too much specialization is the lack of a broad knowledge base that lets you apply skills from one field in a different area.
You want to make your way in the Computer Science field? Simple. Calculate rough time of amnesia (hell, 10 years is plenty, probably 10 months is plenty), go to the dusty archives, dig out something fun, and go for it. It’s worked for many people, and it can work for you.
http://dl.acm.org/citation.cfm?id=192171