It generates a (random) UUID. According to Wikipedia "the intent of UUIDs is to enable distributed systems to uniquely identify information without significant central coordination. In this context the word unique should be taken to mean "practically unique" rather than "guaranteed unique". Since the identifiers have a finite size it is possible for two differing items to share the same identifier. The identifier size and generation process need to be selected so as to make this sufficiently improbable in practice. Anyone can create a UUID and use it to identify something with reasonable confidence that the same identifier will never be unintentionally created by anyone to identify something else. Information labeled with UUIDs can therefore be later combined into a single database without needing to resolve identifier (ID) conflicts.
One widespread use of this standard is in Microsoft's globally unique identifiers (GUIDs). Other significant uses include ext2/ext3/ext4 filesystem UUIDs, LUKS encrypted partitions, GNOME, KDE, and Mac OS X, all of which use implementations derived from the uuid library found in the e2fsprogs package."
Source: http://en.wikipedia.org/wiki/Universally_unique_identifier
One related topic I find interesting is how some programmers feel compelled to draw a hard distinction between "unique" and "practically unique" when it comes to random UUIDs. They refuse to insert UUIDs into a database without checking for duplicates, even when that check carries a heavy performance penalty or rules out scaling altogether.
Personally, I compare this distinction to something like the odds of the Earth being destroyed by a meteor. It could happen and it would be a disaster, but the probability is so low that I just decide not to worry about it.
So the interesting question is: Which one is the better programmer, the one who trusts "practically unique" or the one who always requires "unique"?
So... with type 1 UUIDs, which use the MAC address and a timestamp, you can generate 10,000 globally unique IDs per millisecond per system without conflict (provided you don't duplicate your MAC address). With type 4, going purely random, the chance of a collision is somewhere up there with having two of your data centers simultaneously hit by meteorites (provided the numbers really are random). I suppose a service like this could be useful on a limited embedded system with no RTC and no mechanism for generating static/random data, where timestamps and random number generation are very unreliable, but otherwise I assume this is a joke?
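To put a rough number on the meteorite comparison, here's a back-of-the-envelope birthday-bound estimate (my own sketch, not from the original comment): a version 4 UUID has 122 random bits, so for n UUIDs the collision probability is roughly n^2 / (2 * 2^122).

// Rough birthday bound for random v4 UUIDs (122 random bits).
var randomSpace = Math.pow(2, 122);
[1e9, 1e12].forEach(function (n) {
    var p = (n * n) / (2 * randomSpace);     // approximate collision probability
    console.log(n + ' UUIDs -> ~' + p.toExponential(2));
});
// Even at a trillion UUIDs this prints something around 1e-13.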
// Fill each x with a random hex digit; the fixed 4 is the version field and
// the y position gets the RFC 4122 variant bits.
'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function (c) {
    var r = Math.random() * 16 | 0;           // random integer 0..15
    var v = c === 'x' ? r : (r & 0x3 | 0x8);  // force variant bits for 'y'
    return v.toString(16);
});
That should be pretty good, right? Well, it's pretty good depending on your needs. If you need a UUID that is (practically) guaranteed to be unique within a single HTML document, and that UUID never leaves the scope of that page, then this function is a great solution. But if your client-side-generated UUID is sent to the server where it meets up with many other UUIDs generated from the same JavaScript code that ran in other browsers, then this function won't cut it. Why not? Because generating a random UUID in JavaScript relies on the use of Math.random(), which in most browsers uses the current datetime as a seed, and that's only a fine seed if you're building Tetris.
Given enough time, two browsers will eventually generate a UUID at the same moment, meaning they both use the same seed and therefore generate the same UUID.
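As a rough illustration of that (my own sketch, using a toy linear congruential generator, since Math.random itself can't be seeded by hand):

// Two PRNGs seeded with the same timestamp produce identical "random" streams.
function lcg(seed) {
    var state = seed >>> 0;
    return function () {
        state = (state * 1664525 + 1013904223) >>> 0;  // classic LCG constants
        return state / 4294967296;                      // map to [0, 1)
    };
}
var seed = Date.now();                  // what a datetime-based seed boils down to
var browserA = lcg(seed), browserB = lcg(seed);
console.log(browserA() === browserB()); // true -- same seed, same sequence, same "UUID"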
So, why not seed a pseudo-random number generator in JavaScript yourself with something better than the current time? Because client-side JavaScript doesn't have access to good sources of entropy. Within the browser, your sources of entropy are limited to things like the current time, the window dimensions, the user agent string, the number of plugins installed, etc. You could capture mouse movements and keyboard clicks over time, but it would take a while to generate sufficient entropy for a cryptographically secure random number. Also, if you need to generate a UUID on page load you can't wait for the user to jiggle their mouse.
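Just to make that last point concrete, here's a sketch of what harvesting mouse entropy might look like (the pool size and event choice are arbitrary; the point is that it takes time and user activity):

// Accumulate a little entropy per mouse move; each event is only worth a few
// unpredictable bits, so filling a 128-bit pool takes a while.
var pool = [];
document.addEventListener('mousemove', function (e) {
    pool.push(e.clientX & 0xff, e.clientY & 0xff, Date.now() & 0xff);
    if (pool.length >= 64) {
        console.log('pool full -- it would still need to be hashed into a seed');
    }
});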
Meanwhile, the server has access to better sources of entropy. For example, many /dev/random implementations use the time between hard drive seeks as a source of entropy. Of course this entropy pool would be exhausted quickly, but you could replenish the pool with outside sources of entropy such as white noise from a radio ( https://www.random.org/history/ ) or even radioactive decay ( http://www.fourmilab.ch/hotbits/ ).
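For comparison, a server-side sketch in Node.js (crypto.randomBytes is seeded from the operating system's entropy sources, so it sidesteps the Math.random problem entirely):

var crypto = require('crypto');

// Build a v4 UUID from 16 OS-provided random bytes.
function serverUuid() {
    var b = crypto.randomBytes(16);
    b[6] = (b[6] & 0x0f) | 0x40;   // version 4
    b[8] = (b[8] & 0x3f) | 0x80;   // RFC 4122 variant
    var hex = b.toString('hex');
    return hex.slice(0, 8) + '-' + hex.slice(8, 12) + '-' + hex.slice(12, 16) +
           '-' + hex.slice(16, 20) + '-' + hex.slice(20);
}
console.log(serverUuid());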
I don't know what sources of entropy http://uuid.me is using to generate random UUIDs, but it might be better than what JavaScript is capable of on its own. If uuid.me served its UUIDs in JSON, then you could make a JSONP call in JavaScript, providing you with a UUID that is much less likely to ever collide with another client's UUID.
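Something like this is what I have in mind; the endpoint, callback parameter, and response shape are all invented for illustration and say nothing about uuid.me's actual API:

// Hypothetical JSONP request for a server-generated UUID.
function requestUuid(cb) {
    window.receiveUuid = cb;                 // global hook the injected script calls
    var s = document.createElement('script');
    s.src = 'https://uuid.example/v4?callback=receiveUuid';   // made-up URL
    document.head.appendChild(s);
}
requestUuid(function (data) { console.log('got', data.uuid); });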
At least v4 UUIDs will use Java's SecureRandom, and my server is running a recent Oracle JVM 7, so I would argue that the randomness should be decent enough. Maybe not perfect, but good enough to make collisions extremely unlikely.
That being said, this service really was written as a joke. Particularly the JSON and XML outputs.
As to /dev/random, no, it's not simply derived from time between hard drive seeks (at least for the operating systems I care about).
Actually, I find pittsburgh's comment interesting. Nowadays every new piece of technology seems to offer HTTP and JSON support, but UUID support might be lacking, or its quality might be poor. Funnily enough, that's the case for the Go standard library (compensated for by an external package).
I could actually imagine someone needing to create UUIDs and not bothering to implement that functionality correctly in their own software.
I hope it'll never happen, as relying on a third-party web service for this has really bad implications, but the world is ready now :)
FWIW, I added support for UUID sets in the API, so now one can ask for thousands of UUIDs at once, if one ever needs to :)
As entertaining as this project has been for a few hours, it's time for me to move on.
Can you expand on your definition of "wrapper culture" (not to be confused with rapper culture)?
Because if I understand your meaning, your comment could be applied to pretty much the entire "www". Almost everything offered by a "server" or "as a service" is something that anyone can run on their own machine. Windows is on the decline, UNIX is taking over. It has come to pass. OSX, iOS, Android, ... all UNIX. All UNIX machines can be clients, servers, or both.
The idea that someone would believe such machines are limited only to being "clients" is... PATHETIC.