Huge spikes over a short period of time are hard to deal with for anyone, and payment processors have their own infra whose scalability Steam can't necessarily control.
Some LLM APIs let you give a schema or regex for the answer. I think it works because LLMs give a probability for every possible next token, and you can filter that list by what the schema/regex allows next.
It sounds like they are describing a regex filter being applied to the model's beam search. LLMs generate the most probable words, but they are frequently tracking several candidate phrases at a time and revising their combined probability. That lets them self-correct if a high-probability word leads to a low-probability phrase.
I think they are saying that if the highest-probability phrase fails the regex, the LLM is able to substitute the next most likely candidate.
You're actually applying a grammar to the token stream. If you're outputting, for example, JSON, you know which characters are valid next (because of the grammar), so you just filter out the tokens that don't fit the grammar.
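A minimal sketch of that filtering step, assuming a toy next-token distribution and a made-up grammar rule (real implementations mask the model's logits before sampling, but the idea is the same):

```rust
// Toy sketch of grammar-constrained decoding: drop the tokens the grammar
// forbids, then take the most probable token that remains.

// Hypothetical grammar rule: a JSON value here must open with '{' or '['.
fn allowed(tok: &str) -> bool {
    tok.starts_with('{') || tok.starts_with('[')
}

// Pick the highest-probability token that the grammar still permits.
fn constrained_pick<'a>(dist: &[(&'a str, f64)]) -> Option<&'a str> {
    dist.iter()
        .filter(|&&(t, _)| allowed(t))
        .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
        .map(|&(t, _)| t)
}

fn main() {
    // Made-up next-token distribution from a model.
    let dist = [("The", 0.55), ("{", 0.30), ("[", 0.10), ("\"", 0.05)];
    // Unconstrained, the model would emit "The"; the grammar forces "{".
    assert_eq!(constrained_pick(&dist), Some("{"));
    println!("next token: {:?}", constrained_pick(&dist));
}
```

Because the mask is applied at every step, the output is guaranteed to match the grammar even when the model's raw preference would have violated it.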
The design of generics in this one seems rather well balanced between simplicity and power.
The part about interface-typed values [1] is interesting. They do dynamic dispatch as a fallback when the function is too polymorphic to be specialized.
In Rust terms it's as if it picks between `dyn` and `impl` automatically. It looks convenient, but also a bit of a non-obvious performance pitfall.
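A minimal Rust sketch of the two forms that would be chosen between automatically (function names are made up for illustration):

```rust
use std::fmt::Display;

// `impl Trait`: the compiler specializes (monomorphizes) a copy of this
// function for each concrete argument type, so the call is static.
fn label_static(x: impl Display) -> String {
    format!("value: {x}")
}

// `dyn Trait`: a single compiled body; the Display call goes through a
// vtable at runtime.
fn label_dyn(x: &dyn Display) -> String {
    format!("value: {x}")
}

fn main() {
    assert_eq!(label_static(42), "value: 42");
    assert_eq!(label_dyn(&"hi"), "value: hi");
}
```

In Rust the caller sees the choice in the signature; if the compiler picks silently, you lose that signal, which is the performance pitfall mentioned above.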
It seems like the type checker could call out such overly polymorphic functions, both at compile time and by highlighting them in the editor when checking for type errors.
> Even after mapping the entire visual cortex to all visual features, it just doesn't get us any closer to understanding that raw experience of "sight" that happens when you look at something. It's not the same thing.
Seems plausible that a detailed enough mapping of correlates one day uncovers a single thing/feature/location/architecture/pattern/whatever that is present only when qualia are, and absent in all other processes. That would at least point us in a concrete direction.
Yeah but that's like saying what propels a car forward is a green stoplight, it's just a correlation you can observe that could have nothing to do with it - might not even be indirectly causal.
When you think deeply about the Hard Problem, it's really The Impossible Problem because there's no way to solve it within our subject:object and "self" paradigms. Yet the problem persists.
Take something like the memory of the first time I tied my shoes - maybe you can find the exact neuronal correlations to that experience I have when I think of that, but they won't be that memory, nor do they even give up the ingredients to engineer that in some alternate setting. The memory itself seems like a completely different phenomenon that the neurons (themselves, concepts in consciousness) can never explain on their own.
Qualia are not that hard for me to explain when you consider that the outside world, as you experience it, is just a mental representation. A very faithful one, but still just a mapping. Why there is a seeming dichotomy between the inner and outer world in terms of quality can be attributed to how the brain compartmentalizes it. Both are in your head. This is the qualia of the brain; it seems apart from the 'real' world because it's made so, because it's useful. So in short, it's as simple as 'the brain makes some transformations to the signal'. I find that acceptable.
How about the real real outside world? We don't know it. No one knows the thing in itself. So far, to see the world you have to be embedded in it and see something from a specific POV.
Why there is consciousness at all, I don't know. Why not? Consciousness seems functional... because humans can't really do anything while unconscious. So it seems to me very much a part of evolution.
Yes, it exists and is probably some biological product of evolution, but the brain is apparently not like a heart that pumps, or a lung that is essentially a bag that pulls oxygen out of the air; it's doing something else that isn't explained by any mechanical analogy (AKA the materialist worldview).
In materialism everything we know is machines and mechanisms: From atoms to trees to buildings. It works for explaining most things, but maybe there are some things that aren't machine-like, leaving us with no way to understand it (at least not using those concepts).
Think of it like - you can program a computer to do all kinds of things, but it will never "feel pain" because that is a totally different class of phenomenon that we have no idea how to produce - no amount of code will ever make pain or joy happen to the computer - it's not a code problem. "Feeling of pain" etc. are the primitives of our experience, these little experiences add up to make up our lives and selves, but we have no idea what they are or how to produce them. We only know adjacent physical things that happen alongside it like neurons firing - which not only doesn't even begin to explain it directly but is itself a concept of the consciousness machine we're trying to explain.
A human body can be in a zombie state while doing something. So a human can do something while unconscious, although we may rightfully call such a body not a real human. Conscious experience begins when that human connects to his body.
Absent-mindedness is still conscious, though. And sleepwalking is not the level of function I'm talking about. You can't do your job while sleepwalking. I'm not convinced humans can do intelligent work without being conscious. No such states that I've seen, anyway.
I agree in the sense that answering "what really are qualia" may be as impossible as answering "what really are quantum fields". At some point we may be forced to accept "it just is", but that doesn't mean we can't make meaningful progress first, just like in physics.
For instance, either new physics is involved or it isn't. Finding solid evidence for or against it does not seem impossible, and it would definitely constitute meaningful progress, at the very least by ruling some theories out.
Interestingly enough, in practice, there are various declarative SwiftUI-style nuget packages for Avalonia[0][1][2] and Uno[3]. You don't actually have to touch XML if you don't want to.
Edit: as someone else noted, the website itself links to a built-in option to do so as well - https://platform.uno/c-markup/
I agree. It's just quite a lot of verbose-looking XML. "I have to write that much XML" is probably not the first impression you want to give. I did scroll down and see that you don't actually have to write such XML by hand, but many will not.
My project isn't large or even mid-sized, but it has over a hundred dependencies. Building the dependencies certainly takes some time on my Raspberry Pi 4, but after that initial hit, every change to the project builds a release in about 15 seconds, and a debug build in about 10.
And on my MacBook Air M2, where I actually develop, these things happen fast enough to call them instant. Perhaps I'm a bit spoiled there by the excellent hardware. As a comparison, a TypeScript project I'm working on using a more powerful MacBook always takes about 5-10 seconds to build.
I don't doubt that actually large Rust projects take a long time to build, though, but even small and mid-sized ones were rather slow to build a few years ago.
To clarify, dependencies significantly affect incremental builds too. It seems that loading information about compiled dependencies into the compiler, and/or resolving things about them, can take significant time.
I know you said 4s is good but have you tried changing the linker?
The number of dependencies likely won't affect incremental build times except through linking, and replacing the linker might offer some good gains for incremental builds.
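For what it's worth, one common way to swap the linker for a Rust project on Linux is a `.cargo/config.toml` entry like this (assumes `clang` and `lld` are installed; `mold` is set up similarly with `-fuse-ld=mold`):

```toml
# .cargo/config.toml — use lld instead of the default system linker
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=lld"]
```

Since linking happens on every incremental build, a faster linker shaves time off every edit-compile cycle, not just clean builds.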