You might want to try exporting your work from Resolve as ProRes 422 HQ or DNxHR HQ and then encoding to H.264/H.265 with Compressor (costs $; it's the encoder part of Final Cut sold as a separate piece of software) on a Mac, or with Shutter Encoder. Also, I'm making a big assumption that you're using the paid version of Resolve; disregard otherwise. It might not be worth it if your input material is video game capture, but if you have something like a camera that records H.264 4:2:2 10-bit in a log gamma, then it can help preserve some quality.
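If you end up in Shutter Encoder (which, as far as I know, is a front end over ffmpeg), the equivalent command looks roughly like the sketch below. File names and quality settings are placeholders, so treat it as a starting point rather than a recipe.

```python
# Rough sketch only: encode a ProRes/DNxHR master to 10-bit HEVC via ffmpeg.
# File names and quality values are illustrative placeholders.
import subprocess

master = "resolve_master_prores422hq.mov"   # hypothetical export from Resolve
output = "delivery_hevc.mp4"

subprocess.run([
    "ffmpeg", "-i", master,
    "-c:v", "libx265",
    "-crf", "18",                    # lower CRF = higher quality, bigger file
    "-preset", "slow",
    "-pix_fmt", "yuv420p10le",       # keep 10-bit through the final encode
    "-tag:v", "hvc1",                # helps Apple players recognize the HEVC track
    "-c:a", "aac", "-b:a", "192k",
    output,
], check=True)
```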
> Even now, barreling towards 40, there’s aspects of social capabilities where I come up quite short relative to my peers.
I identify with your post as a rural kid who mostly didn't socialize with classmates after school. I went to public school, and I'm 40 now. I think the human experience is that you are inevitably going to encounter social situations where you feel outmatched or simply don't belong. I do agree with making sure your kids experience public school, but I think that's about the bare minimum of what you can offer your kids.
Yeah, there are always going to be outliers, that can't be avoided. The problem lies with the kid never having been given the opportunity in the first place.
> because photos don't require highest speed cards
That hypothesis is certainly getting tested these days in specific niches. With high megapixel sensors, pre-capture, and cameras capable of pushing between 30fps and 120fps worth of compressed raws or high quality JPEGs, you can obliterate your camera's write buffer and CFExpress write bandwidth. You can make many bad photos of an animal, bird, or athlete with extreme ease -- and hopefully find that one winner in the haystack.
I would say the line between movies and photos is getting blurred, but it's unlikely you're using a shutter speed that allows for motion blur with these bursts of photos!
> high megapixel sensors, pre-capture, and cameras capable of pushing between 30fps and 120fps worth of compressed raws or high quality JPEGs
Surely those are buffered in the RAM first, then flushed to the card. When the buffer is full, cameras either stop recording or have to flush continuously, which reduces the burst rate.
Yes, that's correct. Buffer sizes are also all over the place, so if you want to shoot continuously, you need to pick carefully. Check https://www.fredmiranda.com/forum/topic/1856860/0 for a thorough analysis starting with the Sony A9iii (which can fill its buffer incredibly quickly with its headline feature, 120fps 14-bit raw output from a global shutter). Deeper in the thread there's a comparison to the Nikon Z9.
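To put rough numbers on the buffer behavior, here's a back-of-the-envelope sketch. Every figure in it (frame size, buffer size, card speed) is an illustrative assumption, not a spec for the A9iii or any other body.

```python
# Back-of-the-envelope sketch of how fast a burst fills the in-camera buffer.
# All numbers are assumptions for illustration, not specs for any camera.
frame_mb       = 30.0    # assumed size of one compressed raw frame, MB
fps            = 120     # burst rate
buffer_gb      = 4.0     # assumed in-camera RAM buffer
card_write_mbs = 1500.0  # assumed sustained CFexpress write speed, MB/s

ingest_mbs      = frame_mb * fps                 # 3600 MB/s pouring into the buffer
net_fill_mbs    = ingest_mbs - card_write_mbs    # buffer grows by the difference
seconds_to_full = (buffer_gb * 1024) / net_fill_mbs

print(f"Buffer fills in ~{seconds_to_full:.1f} s at the full burst rate")
# Once full, the sustainable rate drops to whatever the card can absorb:
print(f"Sustained rate afterwards: ~{card_write_mbs / frame_mb:.0f} fps")
```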
I realized if someone were to assign me the ticket for fixing this behavior, I would have no idea where to begin with solving it even with this blog post explaining the problem, so I'm very curious to know what the most practical solution is. (They obviously aren't adding "If someone asks you about a seahorse emoji, there isn't one available yet, no matter how strongly you believe one exists." to the system prompt.)
I bet they are adding that to the system prompt, at least in the short term while people are paying attention, before looking for a longer-term answer.
The system prompts I've seen are absolutely massive.
> This attention scarcity stems from architectural constraints of LLMs. LLMs are based on the transformer architecture, which enables every token to attend to every other token across the entire context. This results in n² pairwise relationships for n tokens.
The n² time complexity smells like it could be reduced by algorithm engineering. Maybe a preprocessing pass that filters out tokens (not sure what the right term of art is here) that don't contribute significantly to the meaning of the input, so they never get attended to. Basically some sort of context compression mechanism.
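Here's a toy sketch of that pruning idea, purely illustrative and nothing like how production models actually work: score tokens cheaply, keep the top k, and only run full attention over the survivors, so the quadratic term shrinks from n² to k².

```python
# Toy illustration of "prune before attending"; not a real model component.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 1024, 64, 256            # sequence length, embedding dim, tokens kept

x = rng.standard_normal((n, d))    # stand-in for token embeddings

# Cheap relevance proxy: L2 norm of each embedding (purely illustrative).
scores = np.linalg.norm(x, axis=1)
keep = np.argsort(scores)[-k:]     # indices of the k highest-scoring tokens
x_small = x[keep]

def attention(q, kmat, v):
    """Plain scaled dot-product attention over whatever it is given."""
    w = q @ kmat.T / np.sqrt(q.shape[1])
    w = np.exp(w - w.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ v

full    = attention(x, x, x)                    # n^2 ≈ 1.0M score pairs
reduced = attention(x_small, x_small, x_small)  # k^2 ≈ 65K score pairs
print(full.shape, reduced.shape)
```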
People really really want LLMs to output a highly reliable finished product, and I suspect we're probably never gonna get there. Lots of progress over the past couple years, but not on that.
I think it's much more interesting to focus on use cases which don't require that, where gen AI is an intermediate step, a creator of input (whether for humans or for other programs).
I agree, but I still suspect OpenAI and other LLM companies do stuff like that, when an example of a hallucination becomes popular.
If I see some example of an LLM saying dumb stuff here, I know it's going to be fixed quickly. If I encounter an example myself and refuse to share it, it may be fixed with a model upgrade in a few years. Or it may still exist.
Something about how you have to keep repeating "There is no seahorse emoji" or something similar reminded me of the Local 58 horror web series where it seems like the program is trying to get you to repeat "There are no faces" while showing the viewer faces: https://www.youtube.com/watch?v=NZ-vBhGk9F4&t=221
"This behavior is a function of the core AI technology we use, we are unable to resolve this issue with a standard software patch or update at this time.
For the time being this issue can be mitigated by not asking about seahorse emoji.
We are closing this support ticket as the issue is an inherent limitation of the underlying technology and not a bug in our specific implementation."
Sucks that the Blu-ray experience is dreadful for 4K content. You've gotta find specific Blu-ray drives with specific firmware versions to do rips, or watch on a PlayStation or similar locked-down console. There isn't even a non-pirate way to watch on a laptop or desktop anymore since Intel SGX is dead.
If you want a fast and easy way to rip 4K Blu-rays, buy a drive that's ready for it. People sell prepatched drives to rip with, and they don't mark them up much. I grabbed a couple, and after two years I still haven't worn through the first one yet.
I bought a Gigabyte X870E board with 3 PCIe slots (PCIe5 16x, PCIe4 4x, PCIe3 4x) and 4 M.2 slots (3x PCIe5, 1x PCIe 4). Three of the M.2 slots are connected to the CPU, and one is connected to the chipset. Using the 2nd and 3rd M.2 CPU-connected slots causes the board to bifurcate the lanes assigned to the GPU's PCIe slot, so you get 8x GPU, 4x M.2, 4x M.2.
I wish you didn't have to buy Xeon or Threadripper to get considerably more PCIe lanes, but for most people I suspect this split is acceptable. The penalty for gaming going from 16x to 8x is pretty small.
IIRC, X870 boards are required to spend some of their PCIe lanes on providing USB4/Thunderbolt ports. If you don't want those, you can get an X670 board that uses the same chipset silicon but provides a better allocation of PCIe lanes to internal M.2 and PCIe slots.
Even with a Threadripper you're at the mercy of the motherboard design.
I use a ROG board that has 4 PCIe slots. While each can physically seat an x16 card, only one of them has 16 lanes -- the rest are x4. I had to demote my GPU to a slower slot in order to get full throughput from my 100GbE card. All this despite having a CPU with 64 lanes available.
I don't think the Threadripper platform is to blame when you bought a board with potentially the worst possible PCIe lane routing. The latest generation has 88 usable lanes at minimum, most boards have four x16 slots, and Pro supports 7x Gen 5.0 x16 links, an absolutely insane amount of IO. "At the mercy of motherboard design" -- do the absolute minimum amount of research and pick any other board?
Okay, but then I need to ask what kind of use case doesn't mind the extra latency from ethernet but does care about the difference between 40Gbps and 70Gbps.
Though for the most part the performance cost of going down to 8x PCIe is often pretty tiny - only a couple of percent at most.
[0] shows a pretty "worst case" impact of 1-4% - and that's on the absolute highest-end card possible (a GeForce 5090) pushed down to 16x PCIe 3.0. A lower-end card would likely show an even smaller difference. They even showed zero impact at 16x PCIe 4.0, which is the same bandwidth as 8 of the PCIe 5.0 lanes supported on X870E boards like you mentioned.
Though if you're not on a gaming use case and know you're already PCIe limited, it could be larger - but people who have that sort of use case likely already know what to look for, and have systems tuned to it rather than a "generic consumer gamer board".
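For a sanity check on that bandwidth equivalence, here's the per-lane arithmetic (raw link rate with 128b/130b encoding; real-world throughput runs a bit lower):

```python
# Rough per-lane PCIe bandwidth, ignoring protocol overhead beyond line encoding.
def lane_gbs(gt_per_s: float) -> float:
    """GB/s per lane for PCIe 3.0+ generations (128b/130b encoding)."""
    return gt_per_s * (128 / 130) / 8

gen3, gen4, gen5 = lane_gbs(8), lane_gbs(16), lane_gbs(32)  # ~0.98, ~1.97, ~3.94 GB/s

print(f"PCIe 4.0 x16 ~ {16 * gen4:.0f} GB/s")   # ~32 GB/s
print(f"PCIe 5.0 x8  ~ {8 * gen5:.0f} GB/s")    # ~32 GB/s, i.e. the same budget
print(f"PCIe 3.0 x16 ~ {16 * gen3:.0f} GB/s")   # ~16 GB/s, the 'worst case' tested above
```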
For Skylake, Intel ran 16 lanes of PCIe to the CPU, and ran DMI to the chipset, which had PCIe lanes behind it. Depending on the chipset, there would be anywhere from 6 lanes at PCIe 2.0 to 20 lanes at PCIe 3.0. My wild guess is that a board from back then would have put M.2 behind the chipset, with no CPU-attached SSD for you; that fits with your report of the GPU having all 16 lanes.
But if you had one of the nicer chipsets, Wikipedia says your board could split the 16 CPU lanes into two x8 slots, or one x8 and two x4 slots, which would fit. This would usually be dynamic at boot time, not at runtime; the firmware would typically check whether anything is in the x4 slots and, if so, set bifurcation; otherwise the x16 slot gets all the lanes. Some motherboards do have PCIe switches to use the bandwidth more flexibly, but those got really expensive; I think at the transition to PCIe 4.0, but maybe 3.0?
Indeed. I dug out the manual (MSI H170 Gaming M3), which has a block diagram showing the M.2 port behind the chipset, which is connected via DMI 3.0 to the CPU. In my mind, the chipset was connected via actual PCIe, but apparently it's counted separately from the "actual" PCIe lanes.
Intel's DMI connection between the CPU and the chipset is little more than another PCIe x4 link. For consumer CPUs, they don't usually include it in the total lane count, but they have sometimes done so for Xeon parts based off the consumer silicon, giving the false impression that those Xeons have more PCIe lanes.
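Rough numbers for that, assuming DMI 3.0 behaves like a PCIe 3.0 x4-class link and using the 20-lane chipset figure mentioned above:

```python
# Illustrative only: everything behind the chipset shares one DMI 3.0 uplink,
# roughly a PCIe 3.0 x4 link, no matter how many lanes the chipset fans out.
pcie3_lane_gbs = 8 * (128 / 130) / 8     # ~0.98 GB/s per lane at 8 GT/s
dmi3_uplink    = 4 * pcie3_lane_gbs      # ~3.9 GB/s shared by all chipset devices
chipset_fanout = 20 * pcie3_lane_gbs     # nominal capacity of 20 downstream Gen3 lanes

print(f"Chipset lanes could nominally move ~{chipset_fanout:.0f} GB/s, "
      f"but only ~{dmi3_uplink:.1f} GB/s fits through the DMI uplink at once")
# A single Gen3 x4 NVMe drive behind the chipset can already saturate that uplink.
```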
Yeah… I stopped doing Stable Horde after accepting that my electric rate was never going down, and it's already about 55 cents per kWh. I try to put as many electric things to sleep or turn them off now.
The UK can get pretty rude, for example. ~40p/kWh is not unheard of for residential. (Natural gas price shocks, unfettered greed post-privatization, badgers in the transformers, idkwtf, etc.)
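For anyone curious about the arithmetic, here's roughly what a GPU crunching around the clock costs at that kind of rate; the 300 W draw is just an assumed average, not a measured figure.

```python
# Quick cost check with assumed numbers: one GPU left running 24/7.
gpu_watts       = 300           # assumed average draw under load
price_per_kwh   = 0.55          # the rate mentioned above, in USD
hours_per_month = 24 * 30

kwh_per_month = gpu_watts / 1000 * hours_per_month   # ~216 kWh
cost = kwh_per_month * price_per_kwh
print(f"~{kwh_per_month:.0f} kWh/month -> ~${cost:.0f}/month at that rate")
```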
> Not sure the details of Omaha, but where I live in Los Angeles, insane building codes make it incredibly difficult to profitably develop affordable housing. The very nice apartments I lived in when I lived in Tokyo would all have been illegal in Los Angeles.
That's not what affordable housing is. You probably didn't live in affordable housing in Tokyo (edit: of course I could be wrong, but it's not my first guess). Yes, your housing was affordable. But affordable housing in US cities is subsidized and price capped, with income restrictions.
Well, it wasn't affordable housing in that sense, but it was affordable in the more general sense: my rent in Tokyo was around 15% of the median Tokyo salary. Good luck finding that in California.