I write Ruby for my day job (about a year and a half now) and C# for my own stuff (about ten years) and I'd go with Ruby as a pedagogical language ten times out of ten. C# is a good (not great, but good) language for building stuff you want to put in a production environment, but there's so much drag in the language that I would seriously worry about discouraging novices long before they get to the point where Ruby would bite them in the ass. C#, meanwhile, forces a novice programmer to understand types at a fairly deep level before they actually know how to make things work.
Ruby, on the other hand--well, SublimeLinter or an equivalent provides detailed, well-explained warnings when Ruby code doesn't make sense or is potentially problematic. And while, IIRC, RubyMine offers a good interactive debugger if you're willing to pay for it, Pry freely offers an interactive REPL at any point in the codebase--less mousey, but IMO a better exploratory tool than anything you get with C#, where you can't just drop into your app at a specific breakpoint and start whacking at things in a shell. That's a huge demerit for exploratory programming. And I would gently suggest--not trying to rip on you in saying this--that it would be worth introspecting to determine whether Ruby's perceived inconsistencies stem more from your preconceptions than from the language, because I felt the same way until I internalized it a little bit and went "whoa, that makes total sense, I was just looking at it backwards".
I think you have to understand Types when working with Ruby. Otherwise it's just syntax voodoo.
That's a big part of what helps a language like c# continuously reinforce a mental model that leads to better/easier understanding. IMO.
You can call `open` and pass it a URL in Ruby. But what does that return? What's the Type? Where do I look up documentation for it? What's available to me _right at this point in my code_?
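To make that concrete, here's a sketch of the problem (using a `StringIO` as a stand-in for the response body, to avoid a network call--OpenURI hands back a `StringIO` for small responses and a `Tempfile` for large ones, so the "type" genuinely varies at runtime):

```ruby
require 'stringio'

# Stand-in for what `open(url)` would return via OpenURI for a small
# response. The only way to answer "what's the type?" is to ask at runtime.
body = StringIO.new("<html>hello</html>")

body.class               # => StringIO -- discovered by asking, not declared
body.is_a?(IO)           # => false -- it quacks like an IO but isn't one
body.respond_to?(:read)  # => true -- the duck-typed contract you actually rely on
```

Which is exactly the point: `.class`, `.is_a?`, and `.respond_to?` at a REPL are how you answer "what's available to me right here", in place of a declared type.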
Ruby makes copying examples a bit easier, but you have to hold a lot more in your head before you can be considered "proficient".
And then there's all kinds of caveats. You want to write an O/RM in .NET? Pick up the PoEAA. You want to do the same in Ruby? Well, one of the first things I did was write DataObjects. Because there isn't a consistent database access API for you already.
From there you want to map Rows into Objects? Be prepared to play Ruby Golf. Because your first shot will be unusably slow. Not because it's wrong. But because it turns out Ruby's performance has real world implications. So you memoize anonymous classes. You cache method handles. You run a thousand different micro-benchmarks on the performance difference between re-binding a cached method handle for a setter, or just calling instance_variable_set.
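The two mapping strategies in question look roughly like this (illustrative names, not DataObjects' actual internals):

```ruby
# Two ways to pour a database row into an object.
class Row
  attr_accessor :id, :name
end

# Strategy 1: cache unbound method handles for each setter once,
# then rebind them per row.
SETTERS = {
  id:   Row.instance_method(:id=),
  name: Row.instance_method(:name=)
}.freeze

def map_with_setters(attrs)
  row = Row.new
  attrs.each { |key, value| SETTERS[key].bind(row).call(value) }
  row
end

# Strategy 2: skip the setters entirely and poke the ivars directly.
def map_with_ivars(attrs)
  row = Row.new
  attrs.each { |key, value| row.instance_variable_set(:"@#{key}", value) }
  row
end
```

Both produce identical objects; which one is faster on your Ruby version is exactly the kind of thing you end up micro-benchmarking.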
I think the complete lack of type declarations actually makes developing in Ruby much more complex. Even experienced programmers can end up debating whether a breaking change from "truthy" to an actual Boolean is a good or a bad thing.
With as much experience as I have in Ruby, `extend` and the `class << self` stanza are still just weird.
Even after developing in Ruby for years, I'd still run into code where it was genuinely difficult to understand how it worked at all.
I guess what it comes down to is I'd argue declaring all your method parameters as `Object` in c# is not going to make writing working code easier. It may make compiling easier, but that's not really the goal.
It sounds like you're a fan of Pry. I never cared for it personally. I found it much less intuitive than clicking in the gutter, running my program, and mousing over a variable to see its value, or looking at a panel to see the full program state at that point. As a learning tool, I feel like that's got to be light years better than the solutions I've seen in Ruby. And I managed it in c# without any help at all.
As far as exploring, Types generally tell me all I need. In Scala and IntelliJ I just hit ^J. Or I'll jump to the source of a method I'm calling with COMMAND+B. Or the implementation of an interface with COMMAND+SHIFT+B. These are just things I got out of the daily tips popup.
Seeing a function typed as `generateDownloadUrl: Photo => String` tells me more in less time and space than the equivalent Ruby method or lambda ever did, because in Ruby you don't know the requirements until you read the source or the documentation. Whereas in Scala (which I wouldn't actually recommend to a beginner, but the same is true in c#) you have to resort to documentation or reading the source far, far less frequently. For me at least, that's a much lower cognitive load.
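The contrast, sketched on the Ruby side (`Photo`, the method names, and the URL scheme are all invented for the example):

```ruby
# The Ruby counterpart of `generateDownloadUrl: Photo => String`.
# Nothing in the signature says what `photo` must be; the real contract
# ("responds to #bucket and #key") only surfaces when you read the body.
generate_download_url = lambda do |photo|
  "https://cdn.example.com/#{photo.bucket}/#{photo.key}"
end

Photo = Struct.new(:bucket, :key)
generate_download_url.call(Photo.new("images", "cat.jpg"))
# => "https://cdn.example.com/images/cat.jpg"
```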
Because of checked exceptions, switch statements and FactoryFactoryFactories I probably wouldn't suggest Java. But I think that languages that self-document the Types at declaration points are much easier to grasp than languages that still have the types, still require an understanding of them to be proficient, but omit those declarations (like Ruby). In Ruby you basically have to memorize a large chunk of the standard library before you feel proficient. The same isn't really true for Scala, Java or c#.
It's easier to build a mental model (for me) in those languages. And that's the biggest barrier to understanding and feeling like you grok it (at least for me).
It's interesting that you say all that--because none of it rings true to me. Quite literally none of it. Even the idea of "understanding types" is foreign to me, because I think of types only as collections of messages that objects respond to, and use them only as shorthand for exactly that; my yardocs are full of [#to_sym] as a "type" instead. Things like `extend` are trivial to me, and I can explain both their semantic behavior and their implementation in three sentences. I find Ruby fairly consistent and its libraries no more difficult than .NET's--and given that plenty of bad decisions mean you can't really trust IntelliSense in the first place (like, say, arrays implementing IList<T> but throwing an exception on Add()), I find myself Googling for documentation no less often in .NET. And much, much less than in Scala, though I'm very comfortable there, too.
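That message-oriented notion of "type" looks like this in practice (a made-up example, in the same [#to_sym] style):

```ruby
class Registry
  def initialize
    @entries = {}
  end

  # @param name [#to_sym] -- the "type" is just "anything that responds
  #   to #to_sym": a String, a Symbol, or your own object all qualify
  def register(name)
    @entries[name.to_sym] = true
  end

  def registered?(name)
    @entries.key?(name.to_sym)
  end
end

registry = Registry.new
registry.register("users")    # a String works...
registry.registered?(:users)  # => true -- ...and so does a Symbol
```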
I'm curious, though. When you say "Ruby", how much of that was outside of Rails? I don't intend that as an ad hominem, but rather in the exploration of a theory I've had for a while: I wonder if the approach one takes to learning the language and the ecosystem influences how much "magic" there is to Ruby. What you describe sounds familiar from friends and colleagues who learned Rails, and Ruby incidental to it. I only vaguely know Rails at all; I don't use Ruby for web applications beyond a Sinatra server as a dumb API.
(And, as I said, RubyMine has a stop-the-world, click-around debugger, much like Visual Studio. I've only used it once or twice, because the REPL is comfortable to me, but it does exist.)