Hacker News | new | past | comments | ask | show | jobs | submit | skidooer's comments

> When bugs are discovered in the old version most likely they will not be fixed because everybody has already moved on.

Is this a real problem? If you encounter a bug in an older version of Rails, why not just fix it and carry on as usual? Fixing bugs is part of the job of being a developer. If you are afraid to fix bugs, your application isn't going to last long no matter what language, framework, or platform you choose.


I'm talking about bugs in the Rails framework. I think very few people would want to have to maintain an outdated framework. And for the ones that do decide to maintain a framework they did not develop themselves, it will cost them a lot of money.


> I'm talking about bugs in the Rails framework.

I think skidooer was as well.

> I think very few people would like to be having to maintain an outdated framework. And for the ones that actually do decide to maintain a framework that they did not develop themselves [...]

How is maintaining an open-source framework that somebody else made different from maintaining a framework built in-house? At the end of the day, bugs will still be found and need to be fixed. Yes, your devs might be more familiar with your framework than with a third-party one, but I think that is only the case if you can keep your team small and prevent churn and specialization. Regardless of which direction you go, devs need to understand what is happening inside the framework - it can't be a magical black box.

> [...] it will cost them a lot of money.

More than writing something from scratch?


Well, I guess we just have to agree to disagree. When choosing a framework one of my criteria is that somebody else is doing all the work to maintain it so that I can benefit from their work. If I have to maintain it then what is the point? I'm trying to save time here, not add more time to my schedule.


You certainly make a valid point, but I must add:

Rails is a fast moving target because they are always looking for new ways to save you time. As I mentioned in a previous post, the Rails 1.0 API is painful compared to the current generation. You are saving massive amounts of time during development because the project has evolved so far.

Spending a few minutes patching a framework bug once every five years pales in comparison to the gains you are seeing in development time.

To each their own, but I'd rather have a framework that is better than a framework that knows it could be better but won't make the changes because it might break some several-year-old app.


I too was talking about the Rails framework. It's actually a really nice read. I've fixed a couple of bugs in it myself.

The thing to remember is that each major release of Rails is fairly well tested. The number of actual bugs you are going to encounter is low. The investment in fixing them is therefore going to be low, should you actually encounter any to begin with.

The Rails codebase is simply an extension of your own codebase. While it is not fun to fix bugs in any capacity, there is no reason to fear fixing bugs in third party code. It is no worse than fixing your own, which also costs a lot of money.

I've been using Rails since around the 0.8 release timeframe. I have seen many major changes along the way. They have all been positive improvements to my workflow and the theoretical issues you describe have never been real issues. I still have one app chugging away on Rails 1.1 and it works just fine. The only problem it has is that it is not nearly as fun to maintain because it doesn't have all the newer major improvements. It would be a sad day if we had to go back to, or were still using, the 1.0 API. The Rails people are doing the right thing.


The article states that only the filename needs to be passed for CSS url() resources. However, using that method, there appears to be no means to append the MD5 to the precompiled asset filename while in production. The result is an unnecessary trip to the application to generate the asset, bypassing the compiled assets completely.

I believe the preferred method is to generate your CSS with ERB, using the asset_path helper. Though hopefully someone can correct me if I'm wrong.
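To make that concrete, here is a minimal sketch of the ERB approach. The `asset_path` stub below is hypothetical; in a real app Sprockets supplies the helper and looks the digest up in its compiled manifest:

```ruby
require "erb"
require "digest/md5"

# Hypothetical stand-in for the Rails asset_path helper. The real one
# comes from Sprockets and reads the digest from the asset manifest;
# here we just hash some placeholder contents to show the shape.
def asset_path(name)
  digest = Digest::MD5.hexdigest("placeholder contents of #{name}")
  base, ext = name.split(".", 2)
  "/assets/#{base}-#{digest}.#{ext}"
end

# A stylesheet named e.g. application.css.erb is run through ERB first,
# so url() references pick up the fingerprinted filename.
template = ERB.new("body { background: url(<%= asset_path 'logo.png' %>); }")
puts template.result
```

With something like this in place, the compiled CSS points at the digested file directly, so requests never fall through to the application.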


Unfortunately, you're correct. This initially worked fine for us, as all resources were served through a CDN. However, when images were changed and deployed, the results weren't what we expected.

ERB helpers seem to be the only way currently.


There should be a way to have a Sass mixin that generates the MD5 fingerprint for the asset filename within the Sass files, instead of having ERB process the whole Sass file.
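For what it's worth, any such mixin would have to reproduce the fingerprinting scheme Sprockets uses: an MD5 digest of the file's contents appended to the base name. A rough Ruby sketch of that scheme (the helper name here is made up):

```ruby
require "digest/md5"

# Hypothetical helper mirroring how Sprockets fingerprints an asset:
# the MD5 digest of the file's contents is appended to the base name.
def fingerprinted_name(path, contents)
  digest = Digest::MD5.hexdigest(contents)
  dir  = File.dirname(path)
  base = File.basename(path, ".*")
  ext  = File.extname(path)
  File.join(dir, "#{base}-#{digest}#{ext}")
end

puts fingerprinted_name("images/logo.png", "fake image bytes")
```

Because the digest depends on the file's contents, the mixin would need filesystem access at compile time, which is why a custom function registered from Ruby is a more plausible route than pure Sass.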


Colleges transitioned from being a place of higher learning to a place "where dreams come true" several years ago. Preparing your students to build the next Angry Birds or Facebook instead of reimplementing a search algorithm that has been implemented a million times before is much more in line with the goals of both parties. A strong foundation in CS is not necessary, or even important, when your goal is financial success.

I do agree that the fundamentals are very important for those who are interested in pure academic pursuits. It is very unfortunate that college has become the go-to place to get a job, not a place to learn. But the truth of the matter is that the vast majority are only in class because they are looking for future wealth. Colleges, being businesses, will naturally tend towards catering to their customers.

The good news is that with the proliferation of the internet, academics can now learn CS fundamentals even if the formal CS programs go into decline. Of course, it is not too late to fix the education system; we just have to get past the idea that college equals job and return school to its roots as a place to research and study.


No one is going to get past the idea that "college equals job" until it's not true anymore.

The conundrum we're in now didn't start with higher education catering to customers who wanted future wealth. It started when employers offering high-paying jobs realized that higher education was a sign of all the qualities they wanted, so they started mandating it. Universities adjusted accordingly.

I'm pointing out the obvious, of course, but it's because I've seen a lot of people say things like "we need to get over this idea." That's not going to happen until there is a ready supply of high-paying jobs that don't require college degrees, or until someone finds a method for achieving a high-paying job that doesn't require college but is just as straightforward and successful.


I'm not sure that it is true. There is no data I have been able to find that supports the claim, and there have been several articles on HN lately that strongly support the opposite view.

The best I have been able to find on the matter is one study showing a loose correlation: those with a formal education tend to have a higher income. Which, of course, says nothing about the effect of the education on the resulting job.


The effects of this vocationalization of the university extend beyond school, too. It used to be that just having a college degree meant a good chance of getting a job in a wide range of fields. Now, though, if you don't specialize or target your degree at a specific field, you're out-competed by people who did.

(Yes, I'm whining about my Comparative Literature degree again, but I pursued that field seriously and rigorously, unlike many of my colleagues, and I feel like I'm being judged unfairly because of it.)


> Colleges, being businesses, will naturally tend towards catering to their customers.

There was a time when people thought about universities as not being a business. Of course, in the present age everything is a business so I shouldn't be surprised at all.


I admittedly haven't spent much time with Rails 3.1, but isn't that what rake assets:precompile is for?

Automatic compilation of coffee scripts is nice in development, but not overhead you want to add for production anyway.


The Web 2.0 movement was about opening up the data in computer consumable formats and APIs. Nothing more.

While exchanging data over the internet is obviously nothing new, having large organizations provide easy programmable access to their private databases was somewhat revolutionary. The whole App craze was born out of being able to create new interfaces to existing services, thanks to Web 2.0.

Web 2.0 was nothing new from a technical perspective, but it was a revolutionary social shift.


Web 2.0 for me was mainly about user-generated content like blogs, wikis, social networking etc.

You could do a lot, if not all, of that before, whether you were technical or not so technical, but chances were that you did not. "Blog" made it hip and cool for everyone to write a lot about usually not a lot, but if you wanted to have a website you could very well have done so before.

I assume you mean RSS and what followed the blogs back then - but this was not really the main idea of Web 2.0. Web services were happening at the same time but they are not "the" web 2.0, in my opinion.


> how come not all great programmers are great writers?

I imagine most programmers are great writers from a structural point of view. The rest is emotion. Programmers know how to appeal to machines, but often are not able to connect with people in the same way; something that extends beyond writing, if stereotypes are any indication.


Ruby essentially is Objective-C without the C.

If Apple was looking for a language that moved away from the low level C, what benefit would an "Objective" language bring over MacRuby, which is already using the native Objective-C system frameworks?


A non-runtime type system.

First class familiar Objective-C-style messaging and block syntax.

Stable, mature, well defined language invariants on par with Apple's requirements for its own APIs and languages.

"Objective-C without the C" would look more like Smalltalk or Strongtalk with a near identical syntax, not Ruby.


I may be misunderstanding you, but:

When you remove the C bits, Objective-C and Ruby use the same type system, conceptually speaking. They are both descendants of Smalltalk; the main feature differences really come down to syntax alone.

I really like the idea of a higher level Objective-C-based language, I just don't necessarily see the business appeal of creating and maintaining a brand new language that only brings a more familiar, to Objective-C developers at least, syntax. Especially given the amount of effort Apple has been putting into MacRuby.


> When you remove the C bits, Objective-C and Ruby use the same type system, conceptually speaking. They are both descendants of Smalltalk; the main feature differences really come down to syntax alone.

Objective-C is typed -- not just the C part, but the 'Objective' part too. You can cast around the type system, but it's there.

The new compiler even uses inference of those types in order to implement ARC.

> Especially given the amount of effort Apple has been putting into MacRuby.

Not Apple, just a few people that also work for Apple.


The type system exists, but when working with the Objective half of the language, it is mostly meaningless, at least from a developer's point of view. With the C parts removed, you could replace every type definition with id and your program would run just fine.

The article says that MacRuby is bundled with Lion as a private framework. Surely they wouldn't bundle it if they weren't using it? And being a private framework, it is not there for the benefit of third-party developers.


Objective-C is typed, and that's considered (by Apple, and most developers) as a feature, not just a legacy inheritance from C.


Objective-C types are optional and the language is dynamic, and that's considered (by Apple and most Objective-C developers) a feature.

    id something = nil;
    // Messaging nil in Objective-C is a no-op; the result is nil/zero.
    [something countForObject: nil];

Completely valid, won't crash your program, and carries only enough type information to satisfy the compiler (but largely meaningless for anything but the most basic static analysis). The only requirement for the above to compile is that countForObject: is a selector defined somewhere in the include path for the file. Even that is a relatively soft requirement, since you can pass arbitrary selectors to any object.

And none of this has anything to do with ARC, as far as I can tell.

There was a first class language on Mac OS X which was fully statically typed. It was deprecated with Leopard and never introduced on iOS. The dual nature of Objective-C is one of its attractive properties.


Most of what you just said is simply not true. Without the method types, the compiler will print a warning, infer the wrong ABI, and generate the wrong code. If an ambiguous match is made, the wrong code will be generated. What you just wrote may work, but only because the compiler works to match against defined method types, and even then it can and will get it wrong.

The support for 'id' is only intended to serve as a mechanism to get around the lack of parameterized types, and as part of ARC, the compiler does now infer the types for alloc/init.


> Without the method types, the compiler will print a warning, infer the wrong ABI and generate the wrong code.

Did you even try my example? No compiler warnings are generated (nor should there be). What do you mean by defined method types? It is simply looking for any selector which matches on any class, because there is not enough statically available information to know any different. Messages are always passed dynamically.

Are we talking about Objective-C? Are you familiar with NSInvocation? Or performSelector:, performSelector:withObject:, performSelector:withObject:withObject:? Or NSNotificationCenter's addObserver:selector:name:object:? This is all done at runtime. No special type information is available to the compiler when using these. Objective-C messages are always sent dynamically, so the only ABI concerns are how the stack is prepared, not the interface of the class of an object. You can define methods and swap them out at runtime; this feature would be useless if everything had to be known at compile time.

ARC needs to know the types of Objective-C objects; id still works fine. Beyond that, it needs no other type information, from what I can tell.

It seems we are talking past each other. Objective-C is not like C++, though. All methods are virtual, always. The runtime goes to great pains to make that efficient while still allowing complete dynamism. This is orthogonal to ARC.
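Since the thread has been comparing the two languages, the same call-time dispatch is easy to demonstrate in Ruby. This is a sketch of the analogous Ruby behavior, not Objective-C semantics, but the dynamism is the same in spirit:

```ruby
class Greeter
  def greet
    "hello"
  end
end

g = Greeter.new

# Methods are looked up at call time, so swapping an implementation on
# the class takes effect immediately, even for existing instances.
Greeter.class_eval do
  define_method(:greet) { "hi there" }
end

puts g.greet  # => "hi there"

# Selectors can also be chosen at runtime, much like performSelector:.
selector = :greet
puts g.send(selector)  # => "hi there"
```

Nothing about the call site needs to know the final implementation at compile time; the lookup happens per message, per call.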


> Did you even try my example? No compiler warnings are generated (nor should the be).

Only because it managed to match on a defined method type. If a class declaration hadn't been found at compile time with the given declared method, it would have issued a warning.

If the match was ambiguous and the types incorrect, it would have emitted incorrect code, and possibly a warning (or always, with -Wstrict-selector-match).

> What do you mean by defined method types? It is simply looking for any selector which matches on any class because there is not enough statically available information to know any different.

By 'defined method types', I mean methods declared on visible classes that match the given selector.

If it matches on the wrong one, the wrong dispatch function and/or the wrong function call epilogue will be emitted.

Method calls are ABSOLUTELY NOT ABI identical for all possible types. I can't possibly emphasize this enough.

For example:

  - (void) performWithObject: (NSObject *) object;

  - (void) performWithObject: (NSObject *) firstObj, ...;

The instructions emitted for a vararg dispatch ARE NOT the same as the non-vararg dispatch on all platforms, and incorrect method selection will result in undefined behavior on dispatch.

> Are you familiar with NSInvocation? Or performSelector:, performSelector:withObject:, performSelector:withObject:withObject:? Or NSNotificationCenter's addObserver:selector:name:object:? This is all done at runtime.

> No special type information is available to the compiler when using these.

Yes, it is. Methods have associated type encodings that describe the return and argument types, and that's used to perform runtime dispatch with NSInvocation. This is why NSInvocation is so slow -- similar to libffi, it must evaluate the types and construct the call frame at runtime. It does this by evaluating the type data associated with method implementations by the compiler.

Methods such as performSelector rely on specific type conventions (such as void return, optional single object argument) and will fail if used with targets that do not match the expected convention.


You're right. I hadn't realized the compiler not only ensures the selector exists, but does C-style type checking on dynamic calls as well. I was surprised to see that two messages with the same selector but different parameter types required a type cast to use.

Of course IMPs aren't identical if they take different parameters. This doesn't affect interchanging Objective-C types though. Yes, the arity and order are important, but the compiler doesn't enforce anything beyond that a pointer is passed for id types.


Cheers to the peer comment regarding HN discourse. Unfortunately (?) I have more :)

> This doesn't affect interchanging Objective-C types though. Yes, the arity and order are important, but the compiler doesn't enforce anything beyond that a pointer is passed for id types.

This is true prior to ARC: all ObjC pointers are the same size, and hence ABI-compatible given equivalent arity/order. It's theoretically possible that a future ABI could be incompatible between two methods returning void vs pointer return value, but currently, all supported ABIs return pointer-sized values in a register.

However, with ARC, this changes. The type system has been effectively extended to denote the required referencing behavior for calling code. This means that for a given arity/order, you must also have equivalent referencing attributes.


I still find it disconcerting on HN when an argument ends with someone saying, "you're right". It's like the normal rules of the Internet just don't apply here. It's one of my favourite things about this community. To you, personally: kudos for maintaining that spirit!


I estimate 90% of the people who use my app have pirated it. That is based on server communication with the app that would not be easily duplicated by outside sources like Google crawlers. There is some margin of error: each purchased copy can be installed on up to five devices, for example. But I still find it fairly telling.

What is more interesting is that I noticed a dramatic drop in sales when it started to be distributed on the pirate websites. I'm not quite sure what to make of that, but it does give some indication that piracy did affect me.

For what it is worth, my app appeals to the geek crowd. From the data I have been able to acquire, I would say the majority of my users, including legitimate ones, are running jailbroken devices. That may help skew the piracy rates towards the higher end.

Ultimately, I'm not worried about it. I wasn't banking my livelihood on the success of the app. It has made enough money to recoup my time investment, which is an added bonus. I do feel for the users who did pay for my app, however; I feel that piracy has prevented me from spending more time making the product better for them. That is the truly unfortunate side.


Seems like a better way to put it is: Design first, then develop. Which, in my experience, is a good idea whether you are responsible for the design or not.

If you have no data structures or other programming limitations to work with, you can focus on perfecting the interface and then worry about making it happen. If you already have the program written, it is natural to want to take the easy way out and just slap up a form that matches the code.

As an aside, being someone who enjoys playing both roles, I find the design phase goes a long way toward improving the structure of my application because I have time to get a better understanding of the requirements, program flow, etc. Every element I draw automatically turns into code in my head, as I think about how it is going to be implemented as well as possible. The program is already written long before I ever touch a text editor.

Furthermore, programming is design. A programmer's job is to write code that is not only functional, but visually appealing. Visual appeal is the factor that makes code maintainable or not. It is basic human nature to want to work on pretty code and reject ugly code. As such, it is wrong to say programmers do not have artistic talent. They exercise it each and every day. The only thing many programmers lack is practice in designing visual interfaces.

Given all of that, I find it very unfortunate that we try to separate the design and development jobs. I understand the business appeal of trying to do the job twice as quickly with two people, but from a fundamental point of view, the separation only goes to hinder the quality of our software, in my opinion.


Not sure I agree with the sequencing. The basic message is that designing your presentation with the expected viewer in mind is really important. For code, it's designing it such that another programmer can grok it easily. For an interface, it's an end user.

To that end, I think it's a good thing that we recognize these are different roles. The fact that each role has a different set of audiences means they can specialize in catering to those audiences, which improves effectiveness. The issue is when we DON'T recognize the difference and try to lump them together.


> just that they shouldn't attempt to make a project look like employment if it isn't.

What is the difference? I am having a little trouble drawing the distinction.

- If a self-funded project is not employment, does it become employment when someone else funds it?

- If you self fund a project and you are able to sell the result of your project, does that make the work leading up to the sale employment? What if you are never able to make a sale?

- Do you have to be just another cog in the wheel of a big business to be considered employed?

- Why is searching for a job not the same as searching for new customers (i.e. sales, a real profession)?

As far as I can tell, they are all exactly the same. Where does the line get drawn?


> What is the difference? I am having a little trouble drawing the distinction.

Employment means you are employed. If you start a company, you are an employee of said company ("self-employed" is a misnomer if you have a corporation). "A project," implies that there is not a company, otherwise you would say that you started a company. "A project," does not constitute employment.

You can talk about edge cases of pet projects making sales, but it doesn't change the fact that claiming a project as employment experience is unlikely to get you very far.

Again, I am not saying that people should not work on projects. Just don't confuse projects with employment.


On a resume, you would state that you worked under "Your Name", and you can elaborate by describing "A project" to stick with generally accepted formatting rules, but in day to day dialog I have to disagree with your assertions.

A project is exactly how I describe what I am working on, whether I am being paid by someone else or if I am paying myself – someone is always paying for your time, even if that someone is you. I'm sure even you would agree that my day job projects are employment.

I believe my question still remains. If not all projects are employment, when does working on a project become employment? What criteria need to be met?


> I believe my question still remains. If not all projects are employment, when does working on a project become employment? What criteria need to be met?

I believe I answered that very clearly in my last response. One likely works on projects as part of employment. One may work on projects outside of employment. 'Project' does not imply 'employment' though typically 'employment' does imply 'project'.

If you start a business, you legally become employed by that business. If you work on a project without a business, you have not constituted employment. As for interviewing, even if you start a business and work on a project, with no completed product or sales to speak of, I think you'll still have a difficult time claiming legitimacy.


> One likely works on projects as part of employment. One may work on projects outside of employment.

I guess my confusion over your original response comes from my belief that all work is employment. Although I do believe I have a bit of a better understanding of where you are coming from now.

With that said, even when I'm hacking away for fun on purely personal projects, I still consider that an act under the umbrella of my business – which does happen to be a corporation in my case, but it need not be. If the project turns into something that is marketable, it will be sold under my business. That also adds to the confusion of where to draw the line.

Ultimately, I strongly believe the employer is going to be interested in what you have been doing no matter what the circumstances. If it is interesting and applicable to the job, it is not going to matter who commissioned the work or how much you were paid to do it and it is certainly going to look a lot better than a job at McDonalds.


Agreed.


What about when people are employed to work on a project?

It's not a semantic game; contractors are hired this way all the time. Programming skills, like most crafts, can be applied equally to paid and unpaid work. When you get hired for a programming job, presumably the main concern is whether you can program, which has nothing to do with whether you were previously employed to do it (that has more to do with your cash-flow situation).


> What about when people are employed to work on a project?

Then they are employed. Working on a project without an employer is not employment. Working on contract for an employer is generally referred to as "self-employment" although, technically, if you're operating under your own business, you're an employee of your own business. I really don't understand why HN is so upset by this distinction. Everyone here is perfectly happy to distinguish between "project" and "start-up" but not, apparently, "project" and "employment".

