That could have interesting implications for the Chrome Notebook (I haven't really used one, but based on what I've heard).
The potential appeal of the Chrome notebook is its security-by-simplicity. If the only thing you can do is use Chromium, then your notebook is basically as secure as Chromium itself is (in theory, at least).
The irony is that we're essentially moving back to the way computers were a few decades ago - people used terminals to access a central computer. We moved to personal desktop computers for a while, then laptops (and mobile phones). It'll be interesting to see if the rise of interest in 'cloud computing' means that, in five years, our primary computers will just be portable terminals to access some main computer remotely.
Personally, I might be fine with that. Network speeds/reliability aside, everything I do could be accomplished easily over SSH, with X forwarding for a few applications. It's possible that these simple tools that we've had for decades could be packaged in a way that appeals to a more general use case.
I hope we go the way of the dumb terminal from an ease-of-use standpoint; I want my data to just "be", and not have to worry about syncing/backups/etc.
From the freedom/privacy standpoint, however, I'm not sure. I find it surprisingly easy to imagine a US government agency empowered by anti-terror laws to get access to my data.
This is why I want encrypted cloud computing for all my stuff. I dunno if the large players could actually do that and have the US govt be ok with it.
From the privacy standpoint, I'm still uncomfortable with the idea of having my data in the cloud, even if it's encrypted, and would prefer to keep it local. Of course, my rational side points out that I don't have any data worth my cloud provider running brute-force attacks to decrypt....
But then there's also the fact that I have access to a decent set of HDDs and can store it locally with relative ease, the only cost being the attention and time of running smartctl once in a while to check the health of the drives and making a backup...
The encryption key must reside only on trusted (client side) devices to allow your data to be hosted by a third party without them actually seeing your data.
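As a toy illustration of that principle (a one-time pad built on the stdlib `secrets` module; real client-side encryption would use an authenticated cipher such as AES-GCM with proper key management, not this):

```python
import secrets

# Toy client-side encryption: a one-time pad. The key is generated on
# the client and never leaves it; the hosting provider only ever sees
# unintelligible ciphertext. (Illustration only -- not a deployable
# scheme; a real system would use an authenticated cipher.)
def encrypt(plaintext: bytes):
    key = secrets.token_bytes(len(plaintext))   # stays on the client
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, ct = encrypt(b"my private notes")   # upload ct, keep key locally
```

The point is the data flow, not the cipher: only `ct` ever crosses the wire, so the third party can store and serve your data without being able to read it.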
There's an interesting area of research called homomorphic encryption (http://en.wikipedia.org/wiki/Homomorphic_encryption). In this scheme, certain properties of the data are preserved even after encryption. In the general case you would be able to store encrypted data in the cloud and perform operations on it without ever needing to decrypt it.
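Textbook (unpadded) RSA is one simple example of a partially homomorphic scheme: multiplying two ciphertexts multiplies the underlying plaintexts. A toy sketch with deliberately tiny parameters:

```python
# Demo of multiplicative homomorphism in unpadded RSA: a server holding
# only ciphertexts can compute enc(a) * enc(b) mod n, and the key holder
# decrypts the result to a * b -- no server-side decryption needed.
# (Tiny textbook parameters; real RSA is never used unpadded.)
p, q = 61, 53
n = p * q                          # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

a, b = 7, 9
product_ct = (enc(a) * enc(b)) % n   # computed on ciphertexts only
```

Fully homomorphic schemes (supporting both addition and multiplication, hence arbitrary computation) exist but are still far too slow for general cloud workloads.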
Client side encryption doesn't seem to fit with the ChromeOS concept. All your information is in your Google account, all your documents on Google Docs and none of it is encrypted as far as I know (Open to Correction). An engineer at Google could theoretically look at all your bookmarks, documents, browsing history etc. Add aggressive US "anti-terror" laws into the mix and it looks like ChromeOS is not a good choice for those looking for real privacy.
If the government wants to read your documents, it's just as easy for them to get a warrant to search your house as it is for them to get a warrant to compel Google to give them your docs.
Except, as we've seen with wiretapping and so many other data breaches, if it's 'just there' and can easily be indexed and searched with the click of a mouse, then it will be. Searching someone's house requires proper channels to be followed. Looking at files on a PC is so much easier, can be done without the person knowing, and is, imho, too great a temptation to resist. The emergence of laws in the US to make such access as easy and automated as possible proves this.
I'm not actually sure that's true. It's not a full-fledged ssh client; it may well be a shell builtin. (It can't be the standard openssh client, because that lets you escape to a real shell.)
Back in the early days of ChromeOS, the security model was much less restricted than I presume it is today, based on what I'm reading. I haven't used it in a while, though, so I don't know how it works these days.
Basically when I installed it, it had to be compiled by you and installed on an SD card. You could easily change a setting on the system and drop to a shell. It was running on a stripped-down Ubuntu installation, so you could actually install Firefox inside of ChromeOS. It was nice having the base, fast-boot installation combined with the ability to basically run a full system if you needed to.
This just in: there are a massive number of developers working on a single project. Not all of them are on the stability team. Just because they're adding feature X doesn't mean that they're taking any amount of resources away from bugfix Y. It's a false dichotomy typically imagined by those who have never worked on medium to large-sized products.
It would be false if he had said no work on stability was happening. However, new features come with a cost in stability (both through introduced errors and less manpower on the stability teams). Google has been putting a lot of features in (nacl, dart, etc) and it is coming at the expense of the basic browsing experience.
Interesting that as Mozilla figures out it needs to focus on the basics after getting slammed by the smaller, faster Chrome, Google does the exact same thing they did.
It's just Zawinski's law in a cycle. “Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.”
"Various studies have shown that software broadly contains something like 6-16 bugs per 1000 lines of code."
As more features are added, the number of bugs will increase, so in a sense everyone who is not on the stability team is on the instability team. Which means the stability team would need more developers as a counterweight...
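Taking the quoted defect rates at face value, the scale is easy to estimate (the codebase size below is an assumed round figure for a large browser, not Chromium's actual line count):

```python
# Back-of-the-envelope estimate of latent bugs at the quoted rate of
# 6-16 defects per 1000 lines of code. 5,000,000 lines is an assumed
# round number for illustration only.
lines_of_code = 5_000_000
kloc = lines_of_code / 1000
low, high = 6 * kloc, 16 * kloc
print(f"expected latent bugs: {low:,.0f} to {high:,.0f}")
```

Even if testing catches 90% of those, tens of thousands of lines of new feature code each release keep refilling the pool.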
Here are a couple of details about programming at Google. First, every commit requires a code review. Secondly, in addition to writing your own tests and testing tools, there is a separate set of developers that do nothing but write testing infrastructure, ensuring that it's as easy as possible to test everything as it's written.
Bugs still happen, of course, but they're going to be of the more obscure variety. My chromium 14.0.835.202 has never crashed. Always good to get a backtrace and read the code nearby, and if nothing makes sense, run memtest86.
Not necessarily. It only takes one bureaucrat to reject code that doesn't pass tests or follow required conventions. For example, a good rule of thumb is that no new code will be accepted without documentation.
"It's a false dichotomy typically imagined by those who have never worked on medium to large-sized products"
In my experience it's not necessarily false, especially when the test cycle has a weak regression suite.
One project I worked on was an embedded system with tight resource management requirements. The teams were split into features and stability/performance scrums. Each new feature would have the performance people crying out loud at the new overheads introduced.
A performance/stability-minded code reviewer on each team alongside a strong performance regression test cycle might have seen the project succeed but the pace imposed by a fast agile dev cycle precluded this sort of discipline.
Marketing wanted features and wanted them now - The demo is Friday, chop chop!
Not a false dichotomy but a manufactured one, perhaps.
Yeah, Chrome represents a pretty different model from most desktop apps we run. Chrome's seamless auto-update has brought web-app style continuous iteration to the desktop. That means continuous change, and presumably improvement, but also no real ability to opt-out of new features/bloat and newly introduced bugs.
This was a brilliant move for Google because it removes the friction they face to push web-client functionality where they want it to go. For example, I woke up one day and suddenly I could run all these new webgl apps. We have all granted them implicit license to push code out to all our desktops -- at the cost of no longer controlling what version and bug/feature tradeoff you are running.
Whereas App Store apps, where the user controls updates, are somewhere in the middle of this shrinkwrap vs. web app continuum. You get the constant nagging to download version N.epsilon.epsilon of iTunes, with 20 pages of block text describing updates that are only interesting if you are the person driving Apple's grand product strategy, but you at least get the ability to say no thanks, don't ask me again. (Of course they will ask you again, but that's just due to inevitably skewed incentives.)
I use a portable version of Chrome running from a TrueCrypt partition for commerce-related things (along with Firefox as my main browser), and that way Chrome's auto-updates are frozen.
There's probably some kind of law in there. My browser history is Mosaic -> Netscape -> Galeon -> Firefox [1], every time jumping to a new, slender browser because the old one got too bloated or too old. Firefox lost its slenderness a long time ago, but I never found anything to replace it with.
I had the exact same problem. I was very disappointed in Chrome until I reformatted my computer; when Chrome updated the bookmarks it became slow and buggier... that's when I deleted all of the bookmarks from the Bookmarks Bar, and voilà, Chrome was reborn for me.
>Chrome should try and remove features, not add things nobody is asking for.
Can you be a little more specific as to what kind of thing should be removed? I've noticed a definite difference in speed, but I wouldn't say the browser is bloated.
I agree, this is exciting. I just hope that if they implement a terminal within Chrome, pressing ctrl+p will select the previous command from the buffer, and not print the page. I enjoy working in Chrome devtools; I use it just like a shell and love it... and then I press ctrl+p and get all frustrated...
For those that can't wait for Google, there are already some pretty mature browser-based terminals out there. For example, shellinabox (https://code.google.com/p/shellinabox/), a full JavaScript implementation of a vt100-compatible terminal plus a server written in C that provides pty streams over HTTP. Web Shell (http://code.google.com/p/web-shell/) is another one, doing all the terminal emulation on the server (written in Python) and transferring full lines to the browser.
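The server side of tools like these boils down to attaching a command to a pseudo-terminal and shuttling the bytes to the browser. A minimal stdlib-only sketch of that core primitive (Unix-only; a real server would wrap this in HTTP or WebSocket transport):

```python
import os
import pty
import select

# Run a command attached to a pseudo-terminal and capture its output --
# the primitive that shellinabox/web-shell wrap in an HTTP transport.
def run_in_pty(argv):
    pid, master_fd = pty.fork()
    if pid == 0:                       # child: exec the command on the pty slave
        os.execvp(argv[0], argv)
    chunks = []
    while True:
        ready, _, _ = select.select([master_fd], [], [], 5)
        if not ready:
            break
        try:
            data = os.read(master_fd, 1024)
        except OSError:                # EIO when the child exits (Linux)
            break
        if not data:
            break
        chunks.append(data)
    os.close(master_fd)
    os.waitpid(pid, 0)
    return b"".join(chunks)

out = run_in_pty(["echo", "hello from a pty"])
```

Because the command sees a real tty (not a pipe), programs like vim or top behave as they would in a native terminal, which is exactly why these tools use ptys rather than plain subprocess pipes.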
If you're interested in writing your own terminal emulator, have a look at pyte (https://github.com/selectel/pyte), a very clean and beautiful Python library for writing vtxxx-compatible terminal emulators.
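The core idea behind a library like pyte is simple: feed a byte stream into a screen model and read back the rendered lines. Here's a toy stdlib-only version that handles only printable characters, \r, \n, and the CSI "clear screen" sequence (real emulators cover the full vt100/vt220 escape repertoire):

```python
# Minimal terminal screen model: a grid of cells plus a cursor, updated
# as input is fed in. Toy sketch -- handles printables, \r, \n, and
# ESC[2J only; nothing like the full vtxxx escape set pyte implements.
class TinyScreen:
    def __init__(self, cols=20, rows=4):
        self.cols, self.rows = cols, rows
        self.clear()

    def clear(self):
        self.buf = [[" "] * self.cols for _ in range(self.rows)]
        self.x = self.y = 0

    def feed(self, data):
        i = 0
        while i < len(data):
            if data[i:i + 4] == "\x1b[2J":            # CSI 2J: erase display
                self.clear()
                i += 4
                continue
            ch = data[i]
            if ch == "\r":
                self.x = 0
            elif ch == "\n":
                self.y = min(self.y + 1, self.rows - 1)
            elif ch.isprintable():
                self.buf[self.y][self.x] = ch
                self.x = min(self.x + 1, self.cols - 1)
            i += 1

    @property
    def display(self):
        return ["".join(row) for row in self.buf]

s = TinyScreen()
s.feed("hi\r\nthere")
```

Feeding "hi\r\nthere" leaves "hi" on the first row and "there" on the second, which is the same display-list interface pyte exposes via `Screen.display`.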
There's actually a really interesting article/email I read a long time ago saying that Emacs's main competitor was the browser, and that it was likely to be overtaken by it unless Emacs reacted in some way.
I can't find the article with a cursory Google search right now, but maybe someone else knows it and can find it.
Scroll down a bit to the heading "The bad news: the competition isn't the IDEs"
"IDEs are draining users away, but it's not the classic fat-client IDEs that are ultimately going to kill Emacs. It's the browsers. They have all the power of a fat-client platform and all the flexibility of a dynamic system. I said earlier that Firefox wants to be Emacs. It should be obvious that Emacs also wants to be Firefox. Each has what the other lacks, and together they're pretty damn close to the ultimate software package."
Yup, that's it! Thank you. I love being able to tap into collective memory :)
It strikes me as true though, step by step we're moving functionality to the browser. In some ways it feels great, but in others it's terrifying - the thought of this (and the next) generation poorly re-implementing emacs in the browser...
This announcement reminded me too of when Linus commented (upon the integration of virtual memory into Linux) that Linux had become the Emacs of terminal emulators.
Pretty funny that by integrating a Terminal emulator, Chrome becomes the Emacs of web browsers.
NaCl doesn't have the low level networking capabilities needed for SSH, Telnet, etc. because they would be a security risk (a "game" could run a portscanner or look for open SMB shares on the network).
NaCl intentionally has the same sandbox, same security model, and same I/O capabilities as JS. In this case, NaCl would still be limited to XHR or WebSocket.
I'm not saying open it up to just anything, but if this is for ChromeOS, they could certainly grant access to manually-installed extensions (not extension gallery extensions) and/or to OS components.
Shhh, now that embedded Java does everything we need, we must pretend it no longer exists — how else could we justify all the insane technological contortions that we'll need in order to justify profiting through sequestering users' data, renting 1980s-era features to them, and selling their attention, intimate habits, and social bonds to advertisers?
Makes sense. Using the console to query information about what's going on inside Chromium's processes is already similar to the "shell experience". (This might explain why recent Chromium builds have gone from 45MB to 235MB!!)
Google or someone at Google? I cannot figure that out from that page, and I do not see a business reason for doing it. Or is this intended to be the software equivalent of JTAG on Chrome OS? What am I missing?
Does anyone remember that Konqueror had a web-browser/file-browser way of handling protocols such as SFTP through IOSlaves like FISH?
I know Konqueror shares ancestry with Chromium through the KHTML renderer (WebKit was forked from it), and maybe more, but maybe Google's Chrome is dipping into the well of ideas that comes from the KDE project, just with a different implementation, of course.
I have a vague memory of using a telnet URL in a browser to play a text-only MUD back in the 90's. Was exotic compared to the default way of doing it in a terminal shell session.