{"video_id":"fp_udQOfjhWzn","title":"Nvidia RTX 5090 Review","channel":"Linus Tech Tips","show":"Linus Tech Tips","published_at":"2025-01-23T14:01:00.027Z","duration_s":1310,"segments":[{"start_s":0.0,"end_s":4.5600000000000005,"text":"It's here, it is ripping fast, and it's $2,000 US dollars.","speaker":null,"is_sponsor":0},{"start_s":4.5600000000000005,"end_s":7.8,"text":"But in exchange for your least favorite of kidneys,","speaker":null,"is_sponsor":0},{"start_s":7.8,"end_s":11.96,"text":"NVIDIA promises that their brand new GeForce RTX 5090","speaker":null,"is_sponsor":0},{"start_s":11.96,"end_s":17.48,"text":"will deliver a level of performance that obliterates their only real competition.","speaker":null,"is_sponsor":0},{"start_s":17.48,"end_s":20.56,"text":"NVIDIA, more GPU cores, boom.","speaker":null,"is_sponsor":0},{"start_s":20.56,"end_s":23.76,"text":"More VRAM, and faster VRAM, boom, boom.","speaker":null,"is_sponsor":0},{"start_s":23.76,"end_s":31.8,"text":"Enhanced RT and AI cores, wider memory bus, and PCIe Gen 5, boom, boom, boom.","speaker":null,"is_sponsor":0},{"start_s":31.8,"end_s":37.0,"text":"On top of that, NVIDIA has packed in a deep learning super shed load of new features that I would","speaker":null,"is_sponsor":0},{"start_s":37.0,"end_s":41.56,"text":"love to tell you about. But unless NVIDIA also invented AI teeth extraction,","speaker":null,"is_sponsor":0},{"start_s":41.56,"end_s":46.52,"text":"I won't be able to. So instead, I leave this review in your capable hands.","speaker":null,"is_sponsor":0},{"start_s":46.52,"end_s":47.4,"text":"See you later.","speaker":null,"is_sponsor":0},{"start_s":49.92,"end_s":52.92,"text":"I got it, I got it, I got it. 
First up, graphics performance.","speaker":null,"is_sponsor":0},{"start_s":52.92,"end_s":54.0,"text":"Let's cut to the chase.","speaker":null,"is_sponsor":0},{"start_s":56.52,"end_s":64.76,"text":"Let's get right to raw gaming results.","speaker":null,"is_sponsor":0},{"start_s":64.76,"end_s":68.36,"text":"No ray tracing, no upscaling, and we're starting with 1440p.","speaker":null,"is_sponsor":0},{"start_s":68.36,"end_s":73.0,"text":"Across our suite of games at 1440p, the 5090 never falls flat on its face, obviously,","speaker":null,"is_sponsor":0},{"start_s":73.0,"end_s":76.12,"text":"but still manages to be underwhelming.","speaker":null,"is_sponsor":0},{"start_s":76.12,"end_s":81.52,"text":"In the Vulkan-based Red Dead Redemption 2, we see less than a 10% improvement over the 4090,","speaker":null,"is_sponsor":0},{"start_s":81.52,"end_s":84.88,"text":"and that lackluster uplift is repeated in F1 24.","speaker":null,"is_sponsor":0},{"start_s":84.92,"end_s":90.96,"text":"More problematic is the embarrassingly small 30% lead over the 7900XTX, which frequently","speaker":null,"is_sponsor":0},{"start_s":90.96,"end_s":94.52,"text":"goes for a little over two-fifths of the price.","speaker":null,"is_sponsor":0},{"start_s":94.52,"end_s":100.04,"text":"Oof. And Returnal doesn't bring better news. But as we move on to newer, more graphically intensive games,","speaker":null,"is_sponsor":0},{"start_s":100.04,"end_s":104.64,"text":"the 5090 does start to pull away from the pack. 
In the gorgeous thriller, Alan Wake 2,","speaker":null,"is_sponsor":0},{"start_s":104.64,"end_s":112.52,"text":"it beats the 3090 Ti by more than double and looks great in Black Myth: Wukong, beating the 4090 by 27%.","speaker":null,"is_sponsor":0},{"start_s":112.56,"end_s":116.8,"text":"Cyberpunk is another strong point compared to the previous generations, but as low as they are","speaker":null,"is_sponsor":0},{"start_s":116.8,"end_s":122.24,"text":"on the chart, it's worth noting AMD's strong performance per dollar in this game, at least when ray tracing","speaker":null,"is_sponsor":0},{"start_s":122.24,"end_s":127.96,"text":"isn't enabled. We'll get to that later. For now, this might be obvious, but if you are a 1440p player,","speaker":null,"is_sponsor":0},{"start_s":127.96,"end_s":131.56,"text":"the 5090 is overkill with the current crop of CPUs.","speaker":null,"is_sponsor":0},{"start_s":131.56,"end_s":135.96,"text":"If you're on the latest 9800X3D, you might find that the 5090 exerts a little bit more","speaker":null,"is_sponsor":0},{"start_s":135.96,"end_s":139.0,"text":"of a commanding lead, but I think that anyone with this setup","speaker":null,"is_sponsor":0},{"start_s":139.0,"end_s":142.72,"text":"should be putting their money into a new monitor rather than a new CPU.","speaker":null,"is_sponsor":0},{"start_s":142.72,"end_s":147.32,"text":"Let's move on to 4K testing, where the CPU bottlenecks are less likely to rear their ugly heads.","speaker":null,"is_sponsor":0},{"start_s":147.32,"end_s":150.4,"text":"In Cyberpunk, the 5090 is the first card","speaker":null,"is_sponsor":0},{"start_s":150.4,"end_s":153.56,"text":"to ever crack triple digits at our ultra preset,","speaker":null,"is_sponsor":0},{"start_s":153.56,"end_s":157.52,"text":"scoring a 30 FPS lead over the 4090.","speaker":null,"is_sponsor":0},{"start_s":157.52,"end_s":162.44,"text":"In Alan Wake 2, the story remains largely the same, offering a noticeable difference in 
playability","speaker":null,"is_sponsor":0},{"start_s":162.44,"end_s":167.04,"text":"compared to any previous flagship. Black Myth: Wukong at cinematic settings, however,","speaker":null,"is_sponsor":0},{"start_s":167.04,"end_s":170.52,"text":"is the Everest-like summit, where even the 5090","speaker":null,"is_sponsor":0},{"start_s":170.56,"end_s":176.44,"text":"falls short of 60 FPS average. Perhaps an overclocking Sherpa could get us to the peak,","speaker":null,"is_sponsor":0},{"start_s":176.44,"end_s":182.0,"text":"but that's a subject for another day. In Red Dead Redemption 2, the 5090 does not impress,","speaker":null,"is_sponsor":0},{"start_s":182.0,"end_s":186.16,"text":"especially when you consider its price. And in F1 24, the 5090 continues to operate","speaker":null,"is_sponsor":0},{"start_s":186.16,"end_s":191.36,"text":"at that kind of level of performance that no one else can touch. The problem is that for all the hype,","speaker":null,"is_sponsor":0},{"start_s":191.36,"end_s":198.32,"text":"the performance bump is roughly on par with the price bump, making the 5090 look less like a truly next-generation product","speaker":null,"is_sponsor":0},{"start_s":198.36,"end_s":202.04,"text":"and more like a 4090 Super GT Zikaiburkei.","speaker":null,"is_sponsor":0},{"start_s":202.04,"end_s":207.36,"text":"But what's the deal? I thought that Blackwell was supposed to be the giant leap forward with all that flip metering","speaker":null,"is_sponsor":0},{"start_s":207.36,"end_s":210.8,"text":"and neural rendering and increased ray-triangle intersections.","speaker":null,"is_sponsor":0},{"start_s":210.8,"end_s":214.8,"text":"What the fuck? Whoa, whoa, whoa, whoa, all right, hold on. 
That's a lot of words,","speaker":null,"is_sponsor":0},{"start_s":214.8,"end_s":217.92,"text":"but to understand how they're going to impact performance,","speaker":null,"is_sponsor":0},{"start_s":217.92,"end_s":221.48,"text":"and they will, we need to understand what they mean.","speaker":null,"is_sponsor":0},{"start_s":221.48,"end_s":225.36,"text":"See, Blackwell brings so many new enhancements","speaker":null,"is_sponsor":0},{"start_s":225.4,"end_s":230.0,"text":"that NVIDIA marketing doesn't even call it a GPU architecture.","speaker":null,"is_sponsor":0},{"start_s":230.0,"end_s":234.2,"text":"No, it's called a neural rendering architecture.","speaker":null,"is_sponsor":0},{"start_s":234.2,"end_s":238.24,"text":"So what is that? As far as we can tell,","speaker":null,"is_sponsor":0},{"start_s":238.24,"end_s":242.2,"text":"it's about equal parts genuine innovation and marketing fluff.","speaker":null,"is_sponsor":0},{"start_s":242.2,"end_s":247.52,"text":"We'll start with the innovation. 
Up until this point, NVIDIA's AI-accelerating tensor cores","speaker":null,"is_sponsor":0},{"start_s":247.52,"end_s":252.48,"text":"could not be accessed by a graphics API like Vulkan or DirectX.","speaker":null,"is_sponsor":0},{"start_s":252.48,"end_s":258.04,"text":"But through collaboration with Microsoft, DirectX now has the Cooperative Vectors API,","speaker":null,"is_sponsor":0},{"start_s":258.04,"end_s":261.44,"text":"which means that gamers can use neural shaders.","speaker":null,"is_sponsor":0},{"start_s":261.44,"end_s":266.4,"text":"Like typical shaders, this allows geometry to be imbued with extra properties.","speaker":null,"is_sponsor":0},{"start_s":266.4,"end_s":270.2,"text":"But now that extra property could be a small neural network","speaker":null,"is_sponsor":0},{"start_s":270.2,"end_s":274.84,"text":"that could generate more geometry or help ease ray tracing calculations.","speaker":null,"is_sponsor":0},{"start_s":274.84,"end_s":280.0,"text":"For instance, Mega Geometry. This one allows for real-time generation","speaker":null,"is_sponsor":0},{"start_s":280.0,"end_s":284.64,"text":"of level of detail steps without requiring any normal maps.","speaker":null,"is_sponsor":0},{"start_s":284.64,"end_s":289.4,"text":"Think of it like UE5's Nanite, which helps ease jarring LOD change effects","speaker":null,"is_sponsor":0},{"start_s":289.4,"end_s":292.88,"text":"and saves developer time, but with, you know, AI.","speaker":null,"is_sponsor":0},{"start_s":292.88,"end_s":297.24,"text":"To take advantage of these features, NVIDIA loaded the 5090 with the hardware it needs","speaker":null,"is_sponsor":0},{"start_s":297.24,"end_s":300.88,"text":"to accelerate them. 
It's got fifth gen tensor cores,","speaker":null,"is_sponsor":0},{"start_s":300.88,"end_s":307.44,"text":"which drastically reduce memory usage for simpler AI models that don't require high precision.","speaker":null,"is_sponsor":0},{"start_s":307.44,"end_s":310.96,"text":"As for the non-AI stuff, we get upgraded fourth gen RT cores,","speaker":null,"is_sponsor":0},{"start_s":310.96,"end_s":316.48,"text":"which now double the ray-triangle intersection rate with just 75% of the memory footprint.","speaker":null,"is_sponsor":0},{"start_s":316.48,"end_s":321.28,"text":"And as for the regular old CUDA cores, well, those just don't seem to have changed very much.","speaker":null,"is_sponsor":0},{"start_s":321.28,"end_s":324.64,"text":"So far, the 5090 has managed a best case scenario","speaker":null,"is_sponsor":0},{"start_s":324.64,"end_s":331.72,"text":"of about 30% faster than its predecessor, seemingly entirely thanks to the 33% higher GPU core count.","speaker":null,"is_sponsor":0},{"start_s":331.72,"end_s":335.16,"text":"This, combined with their reuse of TSMC's 4N process node","speaker":null,"is_sponsor":0},{"start_s":335.16,"end_s":340.08,"text":"from last gen, explains why the new chip is so big and why NVIDIA had to sacrifice some clock speed","speaker":null,"is_sponsor":0},{"start_s":340.12,"end_s":344.44,"text":"to keep their yields, and therefore pricing still attainable to the 1%.","speaker":null,"is_sponsor":0},{"start_s":344.44,"end_s":349.72,"text":"GDDR7, on the other hand, is kind of a big deal. It boasts double the data rate of GDDR6","speaker":null,"is_sponsor":0},{"start_s":349.72,"end_s":354.4,"text":"while using half as much power per bit. This is in large part thanks to the shift","speaker":null,"is_sponsor":0},{"start_s":354.4,"end_s":359.1,"text":"to PAM3 signaling. 
PAM, short for pulse amplitude modulation,","speaker":null,"is_sponsor":0},{"start_s":359.1,"end_s":362.52,"text":"is akin to how we store data in multi-level cell flash storage.","speaker":null,"is_sponsor":0},{"start_s":362.52,"end_s":365.8,"text":"GDDR6X uses PAM4, meaning that each clock","speaker":null,"is_sponsor":0},{"start_s":365.8,"end_s":369.52,"text":"can be encoded for four different states, rather than just two.","speaker":null,"is_sponsor":0},{"start_s":369.52,"end_s":375.12,"text":"But it came with a big trade-off, the error rate, since the signals are so similar in amplitude","speaker":null,"is_sponsor":0},{"start_s":375.12,"end_s":378.72,"text":"that sometimes they can be hard to tell apart, especially when there's interference.","speaker":null,"is_sponsor":0},{"start_s":378.72,"end_s":383.24,"text":"PAM3 improves the situation by just trying to handle three states instead of four,","speaker":null,"is_sponsor":0},{"start_s":383.24,"end_s":387.32,"text":"giving a little bit more room between each of them. This improves signal integrity,","speaker":null,"is_sponsor":0},{"start_s":387.32,"end_s":390.48,"text":"allowing GDDR7 to run at higher frequency","speaker":null,"is_sponsor":0},{"start_s":390.48,"end_s":397.04,"text":"while consuming less power to make up for the trade-offs. 
And let's not forget that we finally got 32 gigs of VRAM.","speaker":null,"is_sponsor":0},{"start_s":397.04,"end_s":402.04,"text":"This will be a huge jump for AI dorks, and maybe gamers someday.","speaker":null,"is_sponsor":0},{"start_s":402.04,"end_s":406.96,"text":"But there are some other cool things, like NVIDIA's new ninth-gen NVENC hardware video encoders,","speaker":null,"is_sponsor":0},{"start_s":406.96,"end_s":410.16,"text":"which support higher-quality 4:2:2 10-bit video.","speaker":null,"is_sponsor":0},{"start_s":410.16,"end_s":413.32,"text":"This, for the right people, is a huge deal,","speaker":null,"is_sponsor":0},{"start_s":413.32,"end_s":417.96,"text":"and might make Blackwell a must-have upgrade. And for the folks out there who own monitors,","speaker":null,"is_sponsor":0},{"start_s":417.96,"end_s":423.52,"text":"hi, Plouffe, we finally get a card that can actually take advantage of DP 2.1 UHBR20,","speaker":null,"is_sponsor":0},{"start_s":423.52,"end_s":427.28,"text":"a new DisplayPort standard that can drive 4K 240Hz","speaker":null,"is_sponsor":0},{"start_s":427.28,"end_s":431.28,"text":"without display-stream compression. And all of this while talking to your computer","speaker":null,"is_sponsor":0},{"start_s":431.28,"end_s":436.94,"text":"at PCIe Gen 5. It's 2025, and ray-tracing is no longer an afterthought,","speaker":null,"is_sponsor":0},{"start_s":436.94,"end_s":440.68,"text":"or even a choice in some cases, with the new Indiana Jones being the first game","speaker":null,"is_sponsor":0},{"start_s":440.68,"end_s":444.58,"text":"to outright require support. So let's talk about it.","speaker":null,"is_sponsor":0},{"start_s":444.58,"end_s":450.0,"text":"For RT testing, we use the highest settings, starting at 1440p, and I want to get this out of the way.","speaker":null,"is_sponsor":0},{"start_s":450.0,"end_s":454.2,"text":"AMD does not ray-trace well. 
Alan Wake 2 makes for a very playable experience","speaker":null,"is_sponsor":0},{"start_s":454.2,"end_s":457.6,"text":"on the 5090, with 1% lows well above 60 FPS.","speaker":null,"is_sponsor":0},{"start_s":457.6,"end_s":461.58,"text":"Numbers it can't quite hit yet on the absolutely brutal Black Myth: Wukong,","speaker":null,"is_sponsor":0},{"start_s":461.58,"end_s":465.44,"text":"though it is playable, unlike the poor 7900XTX.","speaker":null,"is_sponsor":0},{"start_s":465.44,"end_s":470.8,"text":"Ouch! In Cyberpunk, the 5090 has just a 20% lead over the 4090,","speaker":null,"is_sponsor":0},{"start_s":470.8,"end_s":474.82,"text":"but compared to the 4080 Super, it maintains its price-to-performance ratio,","speaker":null,"is_sponsor":0},{"start_s":474.82,"end_s":478.48,"text":"which I generally consider to be pretty darn acceptable for a halo-class card.","speaker":null,"is_sponsor":0},{"start_s":478.52,"end_s":483.32,"text":"In the lightly ray-traced F1 24, AMD comes back to life a little,","speaker":null,"is_sponsor":0},{"start_s":483.32,"end_s":487.52,"text":"performing well against the 4080 Super, and the same can be said for Returnal,","speaker":null,"is_sponsor":0},{"start_s":487.52,"end_s":492.52,"text":"but there's no question that the 5090 is king for RT at 1440p,","speaker":null,"is_sponsor":0},{"start_s":493.6,"end_s":499.16,"text":"with a crown that only gets more dazzling at 4K. Black Myth: Wukong falls below what we consider playable","speaker":null,"is_sponsor":0},{"start_s":499.16,"end_s":503.0,"text":"for an intense action game. Don't worry, we'll talk about AI upscaling later,","speaker":null,"is_sponsor":0},{"start_s":503.0,"end_s":508.32,"text":"because first, dang, look at this thing! Maintaining performance in the 50s at these settings,","speaker":null,"is_sponsor":0},{"start_s":508.48,"end_s":512.16,"text":"in this economy? 
Dang, NVIDIA, that's pretty impressive.","speaker":null,"is_sponsor":0},{"start_s":512.16,"end_s":515.3,"text":"And if you care more about absolute cinema than framerate,","speaker":null,"is_sponsor":0},{"start_s":515.3,"end_s":520.48,"text":"well, it holds above 30 FPS in Alan Wake 2, which should go great with your popcorn.","speaker":null,"is_sponsor":0},{"start_s":520.48,"end_s":524.08,"text":"F1 24 and Returnal are similar stories to the 1440p results,","speaker":null,"is_sponsor":0},{"start_s":524.08,"end_s":528.88,"text":"just with more pixels and fewer FPS. All of this taken together means we're looking at","speaker":null,"is_sponsor":0},{"start_s":528.88,"end_s":533.6,"text":"a greater than 30% lead over last gen at a 25% higher price,","speaker":null,"is_sponsor":0},{"start_s":533.6,"end_s":537.02,"text":"meaning the new RT cores are providing some benefit,","speaker":null,"is_sponsor":0},{"start_s":537.02,"end_s":541.42,"text":"but it's pretty small compared to the impact of NVIDIA just plunking in more of them.","speaker":null,"is_sponsor":0},{"start_s":541.42,"end_s":544.54,"text":"This is obviously a downer compared to the good old days","speaker":null,"is_sponsor":0},{"start_s":544.54,"end_s":549.14,"text":"when we used to get yearly GPU refreshes with dramatic improvements to performance per dollar.","speaker":null,"is_sponsor":0},{"start_s":549.14,"end_s":553.84,"text":"But it's clear that unless cutting-edge semiconductor manufacturing miraculously gets cheaper,","speaker":null,"is_sponsor":0},{"start_s":553.84,"end_s":558.94,"text":"those days are never coming back. 
So if we compare this more to, say,","speaker":null,"is_sponsor":0},{"start_s":558.94,"end_s":563.24,"text":"adding a second card in SLI, a feature NVIDIA no longer supports,","speaker":null,"is_sponsor":0},{"start_s":563.24,"end_s":569.54,"text":"then the glass-half-full take is, hey, at least it costs less than two 4090s.","speaker":null,"is_sponsor":0},{"start_s":569.54,"end_s":572.9,"text":"But NVIDIA still has some tricks up their sleeve.","speaker":null,"is_sponsor":0},{"start_s":572.9,"end_s":577.06,"text":"Holy heck, that's one tight leather jacket. How the heck did you fit that stuff in those sleeves?","speaker":null,"is_sponsor":0},{"start_s":577.06,"end_s":581.82,"text":"With DLSS4 and multi-frame gen, by making use of flip metering and swapping them","speaker":null,"is_sponsor":0},{"start_s":581.82,"end_s":586.02,"text":"from convolutional neural networks to transformer-based models.","speaker":null,"is_sponsor":0},{"start_s":586.02,"end_s":589.9,"text":"There's a lot of words again, lots of words again. Let's break it down.","speaker":null,"is_sponsor":0},{"start_s":589.9,"end_s":593.98,"text":"DLSS4 is NVIDIA's latest suite of AI enhancements,","speaker":null,"is_sponsor":0},{"start_s":593.98,"end_s":598.26,"text":"and it's the biggest change in years. Previous versions of DLSS included","speaker":null,"is_sponsor":0},{"start_s":598.26,"end_s":601.46,"text":"a convolutional neural network or CNN.","speaker":null,"is_sponsor":0},{"start_s":601.46,"end_s":604.94,"text":"A CNN can be thought of as a series of filters","speaker":null,"is_sponsor":0},{"start_s":604.94,"end_s":608.74,"text":"that look for specific details. 
When used for image processing,","speaker":null,"is_sponsor":0},{"start_s":608.74,"end_s":615.22,"text":"one layer could be looking for vertical edges, one for horizontal edges, and one for contrast, et cetera.","speaker":null,"is_sponsor":0},{"start_s":615.22,"end_s":619.98,"text":"The neural network then observes the results from the filters and can use that information","speaker":null,"is_sponsor":0},{"start_s":619.98,"end_s":624.34,"text":"to identify things, like if an image contains a dog or a stop sign.","speaker":null,"is_sponsor":0},{"start_s":624.34,"end_s":627.34,"text":"If that seems convoluted, well, it literally is.","speaker":null,"is_sponsor":0},{"start_s":627.34,"end_s":630.86,"text":"On 4,000 series GPUs, this information was combined","speaker":null,"is_sponsor":0},{"start_s":630.86,"end_s":634.78,"text":"with an optical flow accelerator that interpreted the motion in the scene","speaker":null,"is_sponsor":0},{"start_s":634.78,"end_s":638.46,"text":"to upscale or generate frames. So why the switch?","speaker":null,"is_sponsor":0},{"start_s":638.46,"end_s":641.94,"text":"Scaling. Each filter can only scan and compute","speaker":null,"is_sponsor":0},{"start_s":641.94,"end_s":646.34,"text":"a small number of pixels at a time. 
When you have millions of pixels,","speaker":null,"is_sponsor":0},{"start_s":646.34,"end_s":650.14,"text":"dozens of times per second, increasing performance can be tough.","speaker":null,"is_sponsor":0},{"start_s":650.14,"end_s":654.94,"text":"So DLSS uses a new transformer model, which, as NVIDIA explains,","speaker":null,"is_sponsor":0},{"start_s":654.94,"end_s":660.14,"text":"allows them to evaluate the relative importance of each pixel across an entire frame","speaker":null,"is_sponsor":0},{"start_s":660.14,"end_s":664.58,"text":"and over multiple frames to achieve a deeper understanding of the scenes","speaker":null,"is_sponsor":0},{"start_s":664.58,"end_s":668.58,"text":"that offers greater stability, reduced ghosting, higher detail in motion,","speaker":null,"is_sponsor":0},{"start_s":668.58,"end_s":671.78,"text":"and smoother edges. They also scale better,","speaker":null,"is_sponsor":0},{"start_s":671.78,"end_s":676.66,"text":"which is part of why they have become so heavily used in things like large language models.","speaker":null,"is_sponsor":0},{"start_s":676.66,"end_s":682.5,"text":"The transformer is the T in ChatGPT, so while a CNN can see this picture and say,","speaker":null,"is_sponsor":0},{"start_s":682.5,"end_s":685.82,"text":"there is a cat, there is a product from LTTstore.com.","speaker":null,"is_sponsor":0},{"start_s":685.82,"end_s":691.34,"text":"A transformer might say, there is a cat enjoying the premium CRT-themed pet cave","speaker":null,"is_sponsor":0},{"start_s":691.34,"end_s":696.38,"text":"from LTTstore.com. 
However, while a transformer can process","speaker":null,"is_sponsor":0},{"start_s":696.38,"end_s":700.34,"text":"complete images faster, it does require more data for training,","speaker":null,"is_sponsor":0},{"start_s":700.34,"end_s":706.1,"text":"and honestly, with the side-by-side comparisons, it is tough to tell the difference in image quality","speaker":null,"is_sponsor":0},{"start_s":706.1,"end_s":709.5,"text":"between the two models, like really tough.","speaker":null,"is_sponsor":0},{"start_s":709.5,"end_s":713.54,"text":"There are clear benefits in specific areas that NVIDIA points out,","speaker":null,"is_sponsor":0},{"start_s":713.54,"end_s":716.58,"text":"things like fences, power lines, and barbed wires,","speaker":null,"is_sponsor":0},{"start_s":716.58,"end_s":721.62,"text":"but there's still obvious artifacts when dealing with semi-transparent objects,","speaker":null,"is_sponsor":0},{"start_s":721.62,"end_s":727.3,"text":"or just very busy scenes. A lot of the artifacts are different than DLSS3,","speaker":null,"is_sponsor":0},{"start_s":727.3,"end_s":731.66,"text":"but are still present. 
On the bright side, at least on high-end cards,","speaker":null,"is_sponsor":0},{"start_s":731.66,"end_s":735.9,"text":"the transformer models don't show a substantial performance hit.","speaker":null,"is_sponsor":0},{"start_s":735.9,"end_s":739.3,"text":"Enough theory, I wanna talk about MF-ing G.","speaker":null,"is_sponsor":0},{"start_s":739.3,"end_s":743.58,"text":"Multi-frame Gen is perhaps the most game-changing tech landing with these new cards.","speaker":null,"is_sponsor":0},{"start_s":743.58,"end_s":747.94,"text":"Like the previous version of Framegen, it uses AI to generate in-between frames","speaker":null,"is_sponsor":0},{"start_s":747.94,"end_s":752.7,"text":"based on optical flow data and rendered frames, but Multi-frame Gen now allows users","speaker":null,"is_sponsor":0},{"start_s":752.7,"end_s":755.98,"text":"to generate up to three in-betweens, rather than just one,","speaker":null,"is_sponsor":0},{"start_s":755.98,"end_s":759.3,"text":"boosting frame rates to up to four times native.","speaker":null,"is_sponsor":0},{"start_s":759.3,"end_s":763.98,"text":"Does it work? Well, according to our charts, yes, very yes.","speaker":null,"is_sponsor":0},{"start_s":763.98,"end_s":769.42,"text":"The numbers double, triple, and quadruple, and make the 5090 look absolutely ridiculous,","speaker":null,"is_sponsor":0},{"start_s":769.42,"end_s":772.62,"text":"at least in the charts. But as big as that bar is,","speaker":null,"is_sponsor":0},{"start_s":772.62,"end_s":776.42,"text":"the real frames haven't changed. So what's the deal?","speaker":null,"is_sponsor":0},{"start_s":776.42,"end_s":781.54,"text":"Well, MFG's pretty wild. DLSS3 Framegen required specific optical flow","speaker":null,"is_sponsor":0},{"start_s":781.54,"end_s":785.38,"text":"accelerating hardware on GPUs, and combined that with the game data,","speaker":null,"is_sponsor":0},{"start_s":785.38,"end_s":789.66,"text":"like depth and motion vectors to generate in-between frames. 
And it was an okay solution,","speaker":null,"is_sponsor":0},{"start_s":789.66,"end_s":793.9,"text":"but you had to have two bits of hardware processing each frame,","speaker":null,"is_sponsor":0},{"start_s":793.9,"end_s":797.52,"text":"and that's just inefficient, and could even cause the GPU to throttle,","speaker":null,"is_sponsor":0},{"start_s":797.52,"end_s":800.62,"text":"resulting in a lower base frame rate to multiply off of.","speaker":null,"is_sponsor":0},{"start_s":800.66,"end_s":804.38,"text":"That's why you didn't just see a straight doubling of frame rate when you turned Framegen on.","speaker":null,"is_sponsor":0},{"start_s":804.38,"end_s":808.5,"text":"The 5090 and multi-FrameGen eschew Ada's optical flow accelerator,","speaker":null,"is_sponsor":0},{"start_s":808.5,"end_s":812.62,"text":"and instead utilize tightly integrated Tensor and CUDA cores in Blackwell","speaker":null,"is_sponsor":0},{"start_s":812.62,"end_s":815.66,"text":"to run a lightweight AI optical flow model.","speaker":null,"is_sponsor":0},{"start_s":815.66,"end_s":819.98,"text":"Not hardware accelerated, it's just an AI model. This means that single Framegen","speaker":null,"is_sponsor":0},{"start_s":819.98,"end_s":823.98,"text":"should now run 40% faster while using 30% less VRAM.","speaker":null,"is_sponsor":0},{"start_s":823.98,"end_s":828.7,"text":"We know it works, but how does it look? 
Well, it depends on who you ask.","speaker":null,"is_sponsor":0},{"start_s":828.7,"end_s":831.82,"text":"If you want to see your FPS number go much higher, then it works great.","speaker":null,"is_sponsor":0},{"start_s":831.82,"end_s":836.9,"text":"Nothing else short of hacking your FPS counter will let you get nearly 600 FPS in Cyberpunk.","speaker":null,"is_sponsor":0},{"start_s":836.9,"end_s":840.86,"text":"But visually, it's not perfect, and Framegen weirdness still persists.","speaker":null,"is_sponsor":0},{"start_s":840.86,"end_s":845.26,"text":"Look at the combing on these crosswalks in Cyberpunk, or this bottle phasing in and out in the benchmark,","speaker":null,"is_sponsor":0},{"start_s":845.26,"end_s":849.14,"text":"or the doubling of the fan blades in these large HVAC units. In Alan Wake 2,","speaker":null,"is_sponsor":0},{"start_s":849.14,"end_s":852.9,"text":"there's obvious artifacting around the player model and around the edge of your flashlight,","speaker":null,"is_sponsor":0},{"start_s":852.9,"end_s":857.5,"text":"which is sadly exactly where you will be looking 100% of the time.","speaker":null,"is_sponsor":0},{"start_s":857.5,"end_s":862.1,"text":"And it's worth noting that these artifacts are not present when we're just using DLSS for upscaling.","speaker":null,"is_sponsor":0},{"start_s":862.1,"end_s":866.78,"text":"Curiously, while Cyberpunk and Alan Wake were both updated with explicit support for multi Framegen,","speaker":null,"is_sponsor":0},{"start_s":866.78,"end_s":870.78,"text":"the feature can be forced in any DLSS3 single Framegen","speaker":null,"is_sponsor":0},{"start_s":870.78,"end_s":876.34,"text":"supported games through the NVIDIA driver. 
And the game that fared the best was Dragon Age: The Veilguard.","speaker":null,"is_sponsor":0},{"start_s":876.34,"end_s":879.74,"text":"The world's full of magic, so who's to say exactly what a vortex of shadow","speaker":null,"is_sponsor":0},{"start_s":879.74,"end_s":884.22,"text":"is supposed to look like? And since it already ran at a solid 70 FPS","speaker":null,"is_sponsor":0},{"start_s":884.22,"end_s":888.82,"text":"with all the settings cranked, no Framegen, it kept input latency manageable,","speaker":null,"is_sponsor":0},{"start_s":888.82,"end_s":892.86,"text":"which is really important, because if your base FPS is only 30 frames,","speaker":null,"is_sponsor":0},{"start_s":892.86,"end_s":897.22,"text":"well, Framegen will make it look smooth, but you'll observe many visual anomalies,","speaker":null,"is_sponsor":0},{"start_s":897.22,"end_s":901.06,"text":"and latency is still dictated by your true frame rate,","speaker":null,"is_sponsor":0},{"start_s":901.06,"end_s":905.46,"text":"meaning that the game feels far less responsive than it looks like it should be.","speaker":null,"is_sponsor":0},{"start_s":905.46,"end_s":909.46,"text":"The good news, says NVIDIA, is that MFG at least isn't adding any latency","speaker":null,"is_sponsor":0},{"start_s":909.46,"end_s":913.5,"text":"compared to the base frame rate, but we felt like it would be a good idea to verify that.","speaker":null,"is_sponsor":0},{"start_s":913.5,"end_s":917.7,"text":"And verify we did, using our trusty LDAT, our click-to-photon test results","speaker":null,"is_sponsor":0},{"start_s":917.7,"end_s":922.26,"text":"showed that Framegen does not increase latency over native with Reflex on.","speaker":null,"is_sponsor":0},{"start_s":922.26,"end_s":925.3,"text":"In fact, it seems to actually lower the latency slightly.","speaker":null,"is_sponsor":0},{"start_s":925.3,"end_s":928.42,"text":"It doesn't really make sense to us, so we're gonna chalk that up to sampling 
error,","speaker":null,"is_sponsor":0},{"start_s":928.42,"end_s":932.58,"text":"but if there is any effect on latency, it's so minor that it's not noticeable","speaker":null,"is_sponsor":0},{"start_s":932.58,"end_s":936.18,"text":"compared to our total system latency, and that is mighty impressive.","speaker":null,"is_sponsor":0},{"start_s":936.18,"end_s":940.26,"text":"But the issue remains, you can't beat your base frame rate's latency,","speaker":null,"is_sponsor":0},{"start_s":940.26,"end_s":944.86,"text":"and when it's disconnected from the motion you see on screen, it almost makes it worse.","speaker":null,"is_sponsor":0},{"start_s":944.86,"end_s":948.34,"text":"Perhaps the situation will improve with Reflex 2, but that's not here yet,","speaker":null,"is_sponsor":0},{"start_s":948.34,"end_s":951.5,"text":"so we aren't gonna dwell on it, even if the tech is really cool.","speaker":null,"is_sponsor":0},{"start_s":951.5,"end_s":955.62,"text":"Our take on multi-frame gen right now is that it has the same key flaws as before.","speaker":null,"is_sponsor":0},{"start_s":955.62,"end_s":958.74,"text":"It's a win-more feature that works the best","speaker":null,"is_sponsor":0},{"start_s":958.74,"end_s":963.42,"text":"when it makes the least sense to use, which means it is definitely not the silver bullet","speaker":null,"is_sponsor":0},{"start_s":963.42,"end_s":966.94,"text":"that NVIDIA's graphs make it out to be. With all this power,","speaker":null,"is_sponsor":0},{"start_s":966.94,"end_s":971.82,"text":"it would make a lot more sense to use the 5090 to make some money, right? 
So let's talk productivity.","speaker":null,"is_sponsor":0},{"start_s":971.82,"end_s":976.34,"text":"Hey, you might not know me, but I do this kind of thing and this kind of thing.","speaker":null,"is_sponsor":0},{"start_s":976.34,"end_s":979.34,"text":"And NVIDIA's new architecture has benefits for me too.","speaker":null,"is_sponsor":0},{"start_s":979.34,"end_s":982.66,"text":"The encoders and decoders provide support for 4:2:2 chroma subsampling,","speaker":null,"is_sponsor":0},{"start_s":982.66,"end_s":987.74,"text":"which will make working with high-end video files much faster, especially for multi-camera video edits.","speaker":null,"is_sponsor":0},{"start_s":987.74,"end_s":991.22,"text":"The encoders also provide better quality at smaller file sizes.","speaker":null,"is_sponsor":0},{"start_s":991.22,"end_s":996.46,"text":"Sadly, we can't verify that for you today as we are currently re-evaluating our encoding benchmarks,","speaker":null,"is_sponsor":0},{"start_s":996.46,"end_s":1000.22,"text":"but the new media engine is almost certainly playing a role in Puget Bench,","speaker":null,"is_sponsor":0},{"start_s":1000.22,"end_s":1003.54,"text":"where we see a nice 9% bump in Premiere Pro performance","speaker":null,"is_sponsor":0},{"start_s":1003.54,"end_s":1007.5,"text":"and an even nicer, nearly 20% improvement in DaVinci Resolve","speaker":null,"is_sponsor":0},{"start_s":1007.5,"end_s":1011.14,"text":"when compared to the 4090. In Blender, NVIDIA has us considering","speaker":null,"is_sponsor":0},{"start_s":1011.14,"end_s":1015.14,"text":"finding a new benchmark as the 5090 has brought Barbershop render times","speaker":null,"is_sponsor":0},{"start_s":1015.14,"end_s":1019.1,"text":"to less than half a minute, more than double the speed of the 3090 Ti.","speaker":null,"is_sponsor":0},{"start_s":1019.1,"end_s":1023.82,"text":"Nice. Overwrought editing transition here.
Double the 3090 Ti, you say?","speaker":null,"is_sponsor":0},{"start_s":1023.82,"end_s":1028.9,"text":"AI nerds, rejoice! If you're like me (and for your sake, I hope you're not),","speaker":null,"is_sponsor":0},{"start_s":1028.9,"end_s":1032.38,"text":"you've been dying for NVIDIA to release a new 32-gig consumer card.","speaker":null,"is_sponsor":0},{"start_s":1032.38,"end_s":1037.7,"text":"So with all the bragging NVIDIA's been doing about AI TOPS, I'm expecting some big numbers.","speaker":null,"is_sponsor":0},{"start_s":1037.7,"end_s":1042.34,"text":"And in the Procyon text benchmarks, what the fuck? Number is not big.","speaker":null,"is_sponsor":0},{"start_s":1042.34,"end_s":1047.26,"text":"Sure, the 5090 is still the best card on the charts, but I was expecting more than this.","speaker":null,"is_sponsor":0},{"start_s":1047.26,"end_s":1050.62,"text":"We see roughly 20 to 30% improvement over the 4090,","speaker":null,"is_sponsor":0},{"start_s":1050.62,"end_s":1055.18,"text":"depending on the benchmark, and 60 to 70% over the 3090 Ti.","speaker":null,"is_sponsor":0},{"start_s":1055.18,"end_s":1060.06,"text":"I can see why they keep talking about AI TOPS and not specific performance.","speaker":null,"is_sponsor":0},{"start_s":1060.06,"end_s":1064.58,"text":"In MLPerf, the story remains largely the same in the time to first token","speaker":null,"is_sponsor":0},{"start_s":1064.58,"end_s":1070.02,"text":"and the token generation rate benchmarks. For image generation, our preferred Procyon benchmark","speaker":null,"is_sponsor":0},{"start_s":1070.02,"end_s":1074.62,"text":"doesn't support the 5090 yet. So we tested using the benchmark provided by NVIDIA.","speaker":null,"is_sponsor":0},{"start_s":1074.62,"end_s":1080.26,"text":"Prepare your salt grains for the taking.
In the Procyon Flux FP8 image generation,","speaker":null,"is_sponsor":0},{"start_s":1080.26,"end_s":1083.78,"text":"the 5090 leads by a margin in line with the rest of our benchmarks.","speaker":null,"is_sponsor":0},{"start_s":1083.78,"end_s":1088.5,"text":"But when we switch to FP4 precision, the 5090 shows how powerful the native hardware support","speaker":null,"is_sponsor":0},{"start_s":1088.5,"end_s":1093.42,"text":"can be, taking less than one quarter of the time to generate images compared to the 4090.","speaker":null,"is_sponsor":0},{"start_s":1093.42,"end_s":1097.9,"text":"We'd love to see how this fares against the 3090 Ti, but this NVIDIA-provided benchmark","speaker":null,"is_sponsor":0},{"start_s":1097.9,"end_s":1101.44,"text":"doesn't support older cards. I was expecting AI to be the place","speaker":null,"is_sponsor":0},{"start_s":1101.44,"end_s":1104.62,"text":"where this card really shines, but I guess potential buyers will have to settle","speaker":null,"is_sponsor":0},{"start_s":1104.62,"end_s":1109.14,"text":"for just having the best consumer-grade card for AI.","speaker":null,"is_sponsor":0},{"start_s":1109.14,"end_s":1114.46,"text":"We'll call it the nifty feinty. Somehow there is still more review to go.","speaker":null,"is_sponsor":0},{"start_s":1114.46,"end_s":1119.66,"text":"Good thing this is so digestible and uncomplicated so far, right?
Blackwell's main efficiency improvements over Ada","speaker":null,"is_sponsor":0},{"start_s":1119.66,"end_s":1122.66,"text":"seem to come from what they're calling Max-Q functionality.","speaker":null,"is_sponsor":0},{"start_s":1122.66,"end_s":1126.22,"text":"It boils down to a few small but significant changes.","speaker":null,"is_sponsor":0},{"start_s":1126.22,"end_s":1130.36,"text":"Improvements to power gating, thanks to an additional power rail and improved logic,","speaker":null,"is_sponsor":0},{"start_s":1130.36,"end_s":1134.08,"text":"allow more of the GPU to switch to a low power state more rapidly.","speaker":null,"is_sponsor":0},{"start_s":1134.08,"end_s":1137.66,"text":"In CPU-bound or frame-capped scenarios, this could help save some power,","speaker":null,"is_sponsor":0},{"start_s":1137.66,"end_s":1141.54,"text":"especially on mobile chips. But in GPU-bound full load scenarios,","speaker":null,"is_sponsor":0},{"start_s":1141.58,"end_s":1146.52,"text":"this monster will absolutely draw its fully rated 575 watts,","speaker":null,"is_sponsor":0},{"start_s":1146.52,"end_s":1150.94,"text":"including transient spikes of as high as 637 watts.","speaker":null,"is_sponsor":0},{"start_s":1150.94,"end_s":1154.9,"text":"And you can see even in real world gaming, it will pull space heater levels of power","speaker":null,"is_sponsor":0},{"start_s":1154.9,"end_s":1158.62,"text":"with an average of 554 watts in F1 24.","speaker":null,"is_sponsor":0},{"start_s":1158.62,"end_s":1162.46,"text":"And to manage all that power, they had to make a unique cooler design.","speaker":null,"is_sponsor":0},{"start_s":1162.46,"end_s":1166.1,"text":"While everyone else was making four-slot behemoths, NVIDIA built something very different.","speaker":null,"is_sponsor":0},{"start_s":1166.1,"end_s":1170.66,"text":"And man, does it look good. Not only is it classy, it's also innovative.","speaker":null,"is_sponsor":0},{"start_s":1170.7,"end_s":1174.7,"text":"Overambitious, even.
The main board of the video card is just the middle section","speaker":null,"is_sponsor":0},{"start_s":1174.7,"end_s":1179.5,"text":"and the outputs and PCIe connector are on daughterboards connected via what they call a flexible PCB.","speaker":null,"is_sponsor":0},{"start_s":1179.5,"end_s":1184.22,"text":"It's like a stiff ribbon cable, I guess. This allows for a double flow-through design","speaker":null,"is_sponsor":0},{"start_s":1184.22,"end_s":1190.38,"text":"with fans blowing through dense heat sinks. Anecdotally, the fans run quiet, too quiet even.","speaker":null,"is_sponsor":0},{"start_s":1190.38,"end_s":1194.66,"text":"So like the 4090, most of the time, the loudest thing about the card will be its coil whine.","speaker":null,"is_sponsor":0},{"start_s":1194.66,"end_s":1200.42,"text":"It's not the worst we've heard, but it's noticeable. And if you expected it to run cooler than the 4090 Founders Edition,","speaker":null,"is_sponsor":0},{"start_s":1200.42,"end_s":1205.58,"text":"it doesn't. But given how much smaller it is, not to mention the enormous thermal load it's dealing with,","speaker":null,"is_sponsor":0},{"start_s":1205.58,"end_s":1210.06,"text":"I'd say it's doing a great job. But the new cooler style brings new build considerations.","speaker":null,"is_sponsor":0},{"start_s":1210.06,"end_s":1213.54,"text":"We know that flow-through coolers can have a noticeable impact on CPU temps,","speaker":null,"is_sponsor":0},{"start_s":1213.54,"end_s":1217.3,"text":"especially for those using tower heat sinks.
So is a double flow-through double bad?","speaker":null,"is_sponsor":0},{"start_s":1217.3,"end_s":1221.34,"text":"We took the 5090 and the 4090 FE and put them in a Corsair 4000D.","speaker":null,"is_sponsor":0},{"start_s":1221.34,"end_s":1224.5,"text":"And even with both running at 450 watts to control the experiment,","speaker":null,"is_sponsor":0},{"start_s":1224.5,"end_s":1229.1,"text":"our poor Noctua NH-D15 saw CPU temps that are roughly three degrees higher","speaker":null,"is_sponsor":0},{"start_s":1229.14,"end_s":1234.3,"text":"in both synthetic and gaming workloads. A CPU cooler upgrade, perhaps an intake-mounted radiator,","speaker":null,"is_sponsor":0},{"start_s":1234.3,"end_s":1238.22,"text":"could be in order for some folks. Wow, that was a lot to talk about.","speaker":null,"is_sponsor":0},{"start_s":1238.22,"end_s":1241.5,"text":"So what's our conclusion? Well, if you're a professional game developer","speaker":null,"is_sponsor":0},{"start_s":1241.5,"end_s":1245.78,"text":"or any other professional, or basically anyone who can use the new performance","speaker":null,"is_sponsor":0},{"start_s":1245.78,"end_s":1248.82,"text":"and especially the new features to make money,","speaker":null,"is_sponsor":0},{"start_s":1248.82,"end_s":1252.9,"text":"it's a no-brainer. As for the gamers, well, if the 4090 was stupid,","speaker":null,"is_sponsor":0},{"start_s":1252.9,"end_s":1257.54,"text":"stupid price, but stupid performance, the 5090 is stupider.","speaker":null,"is_sponsor":0},{"start_s":1257.54,"end_s":1263.66,"text":"It provides 30-ish percent more performance by using 33% more hardware and 30-ish percent more power,","speaker":null,"is_sponsor":0},{"start_s":1263.66,"end_s":1268.98,"text":"and it gets a roughly 25% price increase.
It's a 4090 plus plus.","speaker":null,"is_sponsor":0},{"start_s":1268.98,"end_s":1273.06,"text":"On the one hand, you could look at this and say, wow, this doesn't look good","speaker":null,"is_sponsor":0},{"start_s":1273.06,"end_s":1276.38,"text":"for the rest of the 50 series lineup, but it's also worth considering","speaker":null,"is_sponsor":0},{"start_s":1276.38,"end_s":1279.38,"text":"that the 4090 was kind of an outlier for 40 series,","speaker":null,"is_sponsor":0},{"start_s":1279.38,"end_s":1284.34,"text":"offering a huge boost over its predecessor with the rest offering smaller upgrades.","speaker":null,"is_sponsor":0},{"start_s":1284.34,"end_s":1288.82,"text":"So if you're on 30 series and still not in the mega, mega baller income bracket,","speaker":null,"is_sponsor":0},{"start_s":1288.82,"end_s":1294.18,"text":"I guess all we can do is wait and see. See if Linus has the strength to do the segue.","speaker":null,"is_sponsor":0},{"start_s":1294.18,"end_s":1299.18,"text":"Who are you? If you like this video,","speaker":null,"is_sponsor":0},{"start_s":1299.18,"end_s":1303.54,"text":"check out the one you get on ACCS chronging,","speaker":null,"is_sponsor":0},{"start_s":1303.54,"end_s":1305.74,"text":"watch this good nugget for now.","speaker":null,"is_sponsor":0},{"start_s":1306.98,"end_s":1309.66,"text":"I wish I could eat a nugget right now.","speaker":null,"is_sponsor":0}],"full_text":"It's here, it is ripping fast, and it's $2,000 US dollars. But in exchange for your least favorite of kidneys, NVIDIA promises that their brand new GeForce RTX 5090 will deliver a level of performance that obliterates their only real competition. NVIDIA, more GPU cores, boom. More VRAM, and faster VRAM, boom, boom. Enhanced RT and AI cores, wider memory bus, and PCIe Gen 5, boom, boom, boom. On top of that, NVIDIA has packed in a deep learning super shed load of new features that I would love to tell you about. 
But unless NVIDIA also invented AI teeth extraction, I won't be able to. So instead, I leave this review in your capable hands. See you later. I got it, I got it, I got it. First up, graphics performance. Let's cut to the chase. Let's get right to raw gaming results. No ray tracing, no upscaling, and we're starting with 1440p. Across our suite of games at 1440p, the 5090 never falls flat on its face, obviously, but still manages to be underwhelming. In the Vulkan-based Red Dead Redemption 2, we see less than a 10% improvement over the 4090, and that lackluster uplift is repeated in F1 24. More problematic is the embarrassingly small 30% lead over the 7900XTX, which frequently goes for a little over two-fifths of the price. Oof. And Returnal doesn't bring better news. But as we move on to newer, more graphically intensive games, the 5090 does start to pull away from the pack. In the gorgeous thriller, Alan Wake 2, it more than doubles the 3090 Ti and looks great in Black Myth: Wukong, beating the 4090 by 27%. Cyberpunk is another strong point compared to the previous generations, but as low as they are on the chart, it's worth noting AMD's strong performance per dollar in this game, at least when ray tracing isn't enabled. We'll get to that later. For now, this might be obvious, but if you are a 1440p player, the 5090 is overkill with the current crop of CPUs. If you're on the latest 9800X3D, you might find that the 5090 exerts a little bit more of a commanding lead, but I think that anyone with this setup should be putting their money into a new monitor rather than a new CPU. Let's move on to 4K testing, where the CPU bottlenecks are less likely to rear their ugly heads. In Cyberpunk, the 5090 is the first card to ever crack triple digits at our ultra preset, scoring a 30 FPS lead over the 4090. In Alan Wake 2, the story remains largely the same, offering a noticeable difference in playability compared to any previous flagship. 
Black Myth: Wukong at cinematic settings, however, is the Everest-like summit, where even the 5090 falls short of 60 FPS average. Perhaps an overclocking Sherpa could get us to the peak, but that's a subject for another day. In Red Dead Redemption 2, the 5090 does not impress, especially when you consider its price. And in F1 24, the 5090 continues to operate at a level of performance that no one else can touch. The problem is that for all the hype, the performance bump is roughly on par with the price bump, making the 5090 look less like a truly next-generation product and more like a 4090 Super GT Zikaiburkei. But what's the deal? I thought that Blackwell was supposed to be the giant leap forward with all that flip metering and neural rendering and increased ray-triangle intersections. What the fuck? Whoa, whoa, whoa, whoa, all right, hold on. That's a lot of words, but to understand how they're going to impact performance, and they will, we need to understand what they mean. See, Blackwell brings so many new enhancements that NVIDIA marketing doesn't even call it a GPU architecture. No, it's called a neural rendering architecture. So what is that? As far as we can tell, it's about equal parts genuine innovation and marketing fluff. We'll start with the innovation. Up until this point, NVIDIA's AI-accelerating Tensor cores could not be accessed by a graphics API like Vulkan or DirectX. But through collaboration with Microsoft, DirectX now has the Cooperative Vectors API, which means that gamers can use neural shaders. Like typical shaders, these allow geometry to be imbued with extra properties. But now that extra property could be a small neural network that could generate more geometry or help ease ray tracing calculations. For instance, Mega Geometry. This one allows for real time generation of level of detail steps without requiring any normal maps. 
Think of it like UE5's Nanite, which helps ease jarring LOD change effects and saves developer time, but with, you know, AI. To take advantage of these features, NVIDIA loaded the 5090 with the hardware it needs to accelerate them. It's got fifth-gen Tensor cores, which drastically reduce memory usage for simpler AI models that don't require high precision. As for the non-AI stuff, we get upgraded fourth-gen RT cores, which now double the ray-triangle intersection rate with just 75% of the memory footprint. And as for the regular old CUDA cores, well, those just don't seem to have changed very much. So far, the 5090 has managed a best case scenario of about 30% faster than its predecessor, seemingly entirely thanks to the 33% higher GPU core count. This, combined with their reuse of TSMC's 4N process node from last gen, explains why the new chip is so big and why NVIDIA had to sacrifice some clock speed to keep their yields, and therefore pricing, still attainable to the 1%. GDDR7, on the other hand, is kind of a big deal. It boasts double the data rate of GDDR6 while using half as much power per bit. This is in large part thanks to the shift to PAM3 signaling. PAM, short for pulse amplitude modulation, is akin to how we store data in multi-level cell flash storage. GDDR6X uses PAM4, meaning that each clock can be encoded for four different states, rather than just two. But it came with a big trade-off, the error rate, since the signals are so similar in amplitude that sometimes they can be hard to tell apart, especially when there's interference. PAM3 improves the situation by just trying to handle three states instead of four, giving a little bit more room between each of them. This improves signal integrity, allowing GDDR7 to run at higher frequency while consuming less power to make up for the trade-offs. And let's not forget that we finally got 32 gigs of VRAM. This will be a huge jump for AI dorks, and maybe gamers someday. 
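The PAM trade-off described above can be sanity-checked with a little arithmetic: bits per symbol is log2 of the number of amplitude levels, and with a fixed voltage swing, the margin between adjacent levels shrinks as you add levels. A minimal sketch with illustrative numbers (a normalized 1.0 swing, not actual GDDR electrical values):

```python
import math

def bits_per_symbol(levels: int) -> float:
    # Each symbol distinguishes `levels` amplitude states -> log2(levels) bits.
    # NRZ/PAM2 = 1 bit, PAM4 = 2 bits, PAM3 ~ 1.58 bits (GDDR7 recovers the
    # fraction by packing 3 bits into pairs of PAM3 symbols, since 3^2 = 9 >= 2^3).
    return math.log2(levels)

def level_spacing(levels: int, swing: float = 1.0) -> float:
    # Adjacent levels sit swing/(levels-1) apart: fewer levels, more noise margin.
    return swing / (levels - 1)

for name, levels in [("NRZ/PAM2", 2), ("PAM3", 3), ("PAM4", 4)]:
    print(f"{name}: {bits_per_symbol(levels):.2f} bits/symbol, "
          f"{level_spacing(levels):.2f} level spacing")
```

So PAM3 gives up a little density per symbol versus PAM4 (1.58 vs 2 bits) in exchange for 50% more spacing between levels (0.50 vs 0.33 of the swing), which is the signal-integrity win the transcript is describing.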
But there are some other cool things, like NVIDIA's new ninth-gen NVENC hardware video encoders, which support higher-quality 4:2:2 10-bit video. This, for the right people, is a huge deal, and might make Blackwell a must-have upgrade. And for the folks out there who own monitors, hi, Plouffe, we finally get a card that can actually take advantage of DP 2.1 UHBR20, a new DisplayPort standard that can drive 4K 240Hz without Display Stream Compression. And all of this while talking to your computer at PCIe Gen 5. It's 2025, and ray tracing is no longer an afterthought, or even a choice in some cases, with the new Indiana Jones being the first game to outright require support. So let's talk about it. For RT testing, we use the highest settings, starting at 1440p, and I want to get this out of the way. AMD does not ray-trace well. Alan Wake 2, in fact, makes for a very playable experience on the 5090, with 1% lows well above 60 FPS. Numbers it can't quite hit yet on the absolutely brutal Black Myth: Wukong, though it is playable, unlike the poor 7900XTX. Ouch! In Cyberpunk, the 5090 has just a 20% lead over the 4090, but compared to the 4080 Super, it maintains its price-to-performance ratio, which I generally consider to be pretty darn acceptable for a halo-class card. In the lightly ray-traced F1 24, AMD comes back to life a little, performing well against the 4080 Super, and the same can be said for Returnal, but there's no question that the 5090 is king for RT at 1440p, with a crown that only gets more dazzling at 4K. Black Myth: Wukong falls below what we consider playable for an intense action game. Don't worry, we'll talk about AI upscaling later, because first, dang, look at this thing! Maintaining performance in the 50s at these settings, in this economy? Dang, NVIDIA, that's pretty impressive. And if you care more about absolute cinema than framerate, well, it holds above 30 FPS in Alan Wake 2, which should go great with your popcorn. 
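The 4:2:2 and UHBR20 claims above both reduce to simple bandwidth arithmetic. A rough back-of-the-envelope sketch (active pixels only, ignoring blanking intervals and DisplayPort protocol overhead, so real link budgets are somewhat tighter):

```python
def bits_per_pixel(scheme: str, bit_depth: int = 10) -> float:
    # Average bits per pixel for common Y'CbCr subsampling schemes:
    # 4:4:4 keeps 3 samples per pixel, 4:2:2 averages 2, 4:2:0 averages 1.5.
    samples_per_pixel = {"4:4:4": 3.0, "4:2:2": 2.0, "4:2:0": 1.5}[scheme]
    return samples_per_pixel * bit_depth

def raw_gbps(width: int, height: int, fps: int, bpp: float) -> float:
    # Raw active-pixel video bandwidth in Gbit/s, no blanking or coding overhead.
    return width * height * fps * bpp / 1e9

# 4K 240 Hz at 10-bit full RGB/4:4:4 (30 bits per pixel):
print(raw_gbps(3840, 2160, 240, bits_per_pixel("4:4:4")))  # ~59.7 Gbit/s

# UHBR20 is 4 lanes x 20 Gbit/s = 80 Gbit/s of raw link rate, so 4K240 10-bit
# fits without Display Stream Compression; 4:2:2 cuts the payload by a third again.
```

The point of the sketch: ~59.7 Gbit/s of pixel data sits comfortably under UHBR20's 80 Gbit/s link rate, which is why this is the first DisplayPort tier that doesn't need DSC for that mode.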
F1 24 and Returnal are similar stories to the 1440p results, just with more pixels and fewer FPS. All of this taken together means we're looking at a greater than 30% lead over last gen at a 25% higher price, meaning the new RT cores are providing some benefit, but it's pretty small compared to the impact of NVIDIA just plunking in more of them. This is obviously a downer compared to the good old days when we used to get yearly GPU refreshes with dramatic improvements to performance per dollar. But it's clear that unless cutting-edge semiconductor manufacturing miraculously gets cheaper, those days are never coming back. So if we compare this more to, say, adding a second card in SLI, a feature NVIDIA no longer supports, then the glass-half-full take is, hey, at least it costs less than two 4090s. But NVIDIA still has some tricks up their sleeve. Holy heck, that's one tight leather jacket. How the heck did you fit that stuff in those sleeves? With DLSS4 and multi-frame gen, by making use of flip metering and swapping from convolutional neural networks to transformer-based models. There's a lot of words again, lots of words again. Let's break it down. DLSS4 is NVIDIA's latest suite of AI enhancements, and it's the biggest change in years. Previous versions of DLSS included a convolutional neural network, or CNN. A CNN can be thought of as a series of filters that look for specific details. When used for image processing, one layer could be looking for vertical edges, one for horizontal edges, and one for contrast, et cetera. The neural network then observes the results from the filters and can use that information to identify things, like if an image contains a dog or a stop sign. And if that seems convoluted, well, it literally is. On 4,000 series GPUs, this information was combined with an optical flow accelerator that interpreted the motion in the scene to upscale or generate frames. So why the switch? Scaling. 
Each filter can only scan and compute a small number of pixels at a time. When you have millions of pixels, dozens of times per second, increasing performance can be tough. So DLSS uses a new transformer model, which, as NVIDIA explains, allows them to evaluate the relative importance of each pixel across an entire frame and over multiple frames to achieve a deeper understanding of the scenes that offers greater stability, reduced ghosting, higher detail in motion, and smoother edges. They also scale better, which is part of why they have become so heavily used in things like large language models. The transformer is the T in ChatGPT, so while a CNN can see this picture and say, there is a cat, there is a product from LTTstore.com, a transformer might say, there is a cat enjoying the premium CRT-themed pet cave from LTTstore.com. However, while a transformer can process complete images faster, it does require more data for training, and honestly, with the side-by-side comparisons, it is tough to tell the difference in image quality between the two models, like really tough. There are clear benefits in specific areas that NVIDIA points out, things like fences, power lines, and barbed wire, but there are still obvious artifacts when dealing with semi-transparent objects, or just very busy scenes. A lot of the artifacts are different from DLSS 3's, but they are still present. On the bright side, at least on high-end cards, the transformer models don't show a substantial performance hit. Enough theory, I wanna talk about MF-ing G. Multi-frame Gen is perhaps the most game-changing tech landing with these new cards. Like the previous version of Framegen, it uses AI to generate in-between frames based on optical flow data and rendered frames, but Multi-frame Gen now allows users to generate up to three in-betweens, rather than just one, boosting frame rates to up to four times native. Does it work? Well, according to our charts, yes, very yes. 
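The "series of filters" idea, and the locality limitation that motivates the transformer switch, can be illustrated with a toy convolution. This is not DLSS's actual model, just a minimal sketch: each output value is computed from only the small 3x3 patch under the filter, which is exactly why a CNN must slide over millions of pixels to see a whole frame.

```python
def conv2d(img, kernel):
    # Valid 2D convolution (strictly, cross-correlation): each output pixel
    # depends only on the kh x kw neighborhood under the kernel -- purely local.
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(img[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A classic vertical-edge filter (Sobel): responds where brightness changes
# left-to-right, like the "one layer looks for vertical edges" example above.
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
img = [[0, 0, 9, 9]] * 4  # a tiny image with a hard vertical edge in the middle
print(conv2d(img, sobel_x))  # -> [[36, 36], [36, 36]]: strong edge response
```

A transformer layer, by contrast, computes weights between every pair of positions in the frame, which is what lets it judge "the relative importance of each pixel across an entire frame" at the cost of needing far more training data.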
The numbers double, triple, and quadruple, and make the 5090 look absolutely ridiculous, at least in the charts. But as big as that bar is, the real frames haven't changed. So what's the deal? Well, MFG's pretty wild. DLSS3 Framegen required specific optical flow accelerating hardware on GPUs, and combined that with game data, like depth and motion vectors, to generate in-between frames. And it was an okay solution, but you had to have two bits of hardware processing each frame, and that's just inefficient, and could even cause the GPU to throttle, resulting in a lower base frame rate to multiply off of. That's why you didn't just see a straight doubling of frame rate when you turned Framegen on. The 5090 and Multi-frame Gen ditch Ada's optical flow accelerator, and instead utilize the tightly integrated Tensor and CUDA cores in Blackwell to run a lightweight AI optical flow model. Not hardware accelerated, it's just an AI model. This means that single Framegen should now run 40% faster while using 30% less VRAM. We know it works, but how does it look? Well, it depends on who you ask. If you just want to see a much higher FPS number, then it works great. Nothing else short of hacking your FPS counter will let you get nearly 600 FPS in Cyberpunk. But visually, it's not perfect, and Framegen weirdness still persists. Look at the combing on these crosswalks in Cyberpunk, or this bottle phasing in and out in the benchmark, or the doubling of the fan blades in these large HVAC units. In Alan Wake 2, there's obvious artifacting around the player model and around the edge of your flashlight, which is sadly exactly where you will be looking 100% of the time. And it's worth noting that these artifacts are not present when we're just using DLSS for upscaling. Curiously, while Cyberpunk and Alan Wake were both updated with explicit support for Multi-frame Gen, the feature can be forced in any DLSS3 single Framegen supported game through the NVIDIA driver. 
And the game that fared the best was Dragon Age: The Veilguard. The world's full of magic, so who's to say exactly what a vortex of shadow is supposed to look like? And since it already ran at a solid 70 FPS with all the settings cranked, no Framegen, it kept input latency manageable, which is really important, because if your base FPS is only 30 frames, well, Framegen will make it look smooth, but you'll observe many visual anomalies, and latency is still dictated by your true frame rate, meaning that the game feels far less responsive than it looks like it should be. The good news, says NVIDIA, is that MFG at least isn't adding any latency compared to the base frame rate, but we felt like it would be a good idea to verify that. And verify we did. Using our trusty LDAT, our click-to-photon test results showed that Framegen does not increase latency over native with Reflex on. In fact, it seems to actually lower the latency slightly. It doesn't really make sense to us, so we're gonna chalk that up to sampling error, but if there is any effect on latency, it's so minor that it's not noticeable compared to our total system latency, and that is mighty impressive. But the issue remains, you can't beat your base frame rate's latency, and when it's disconnected from the motion you see on screen, it almost makes it worse. Perhaps the situation will improve with Reflex 2, but that's not here yet, so we aren't gonna dwell on it, even if the tech is really cool. Our take on multi-frame gen right now is that it has the same key flaws as before. It's a win-more feature that works the best when it makes the least sense to use, which means it is definitely not the silver bullet that NVIDIA's graphs make it out to be. With all this power, it would make a lot more sense to use the 5090 to make some money, right? So let's talk productivity. Hey, you might not know me, but I do this kind of thing and this kind of thing. And NVIDIA's new architecture has benefits for me too. 
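The "latency is dictated by your true frame rate" point above reduces to simple arithmetic. A deliberately idealized sketch with the review's own numbers (70 and 30 FPS bases, 3 generated frames per rendered frame); it ignores the real-world overhead that lowers the base frame rate when frame gen is enabled:

```python
def mfg_estimate(base_fps: float, generated_per_rendered: int):
    # Idealized Multi Frame Gen arithmetic: displayed FPS multiplies,
    # but responsiveness still tracks the rendered frames only.
    displayed_fps = base_fps * (1 + generated_per_rendered)
    frame_latency_ms = 1000.0 / base_fps  # time between frames the game reacts to
    return displayed_fps, frame_latency_ms

print(mfg_estimate(70, 3))  # (280, ~14.3 ms): looks smooth AND feels responsive
print(mfg_estimate(30, 3))  # (120, ~33.3 ms): looks like 120 FPS, feels like 30
```

Which is the "win-more" problem in one line: 4x MFG turns 70 FPS into a great experience, while the 30 FPS case that most needs the help still carries 30 FPS responsiveness under a 120 FPS picture.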
The encoders and decoders provide support for 4:2:2 chroma subsampling, which will make working with high-end video files much faster, especially for multi-camera video edits. The encoders also provide better quality at smaller file sizes. Sadly, we can't verify that for you today as we are currently re-evaluating our encoding benchmarks, but the new media engine is almost certainly playing a role in Puget Bench, where we see a nice 9% bump in Premiere Pro performance and an even nicer, nearly 20% improvement in DaVinci Resolve when compared to the 4090. In Blender, NVIDIA has us considering finding a new benchmark as the 5090 has brought Barbershop render times to less than half a minute, more than double the speed of the 3090 Ti. Nice. Overwrought editing transition here. Double the 3090 Ti, you say? AI nerds, rejoice! If you're like me (and for your sake, I hope you're not), you've been dying for NVIDIA to release a new 32-gig consumer card. So with all the bragging NVIDIA's been doing about AI TOPS, I'm expecting some big numbers. And in the Procyon text benchmarks, what the fuck? Number is not big. Sure, the 5090 is still the best card on the charts, but I was expecting more than this. We see roughly 20 to 30% improvement over the 4090, depending on the benchmark, and 60 to 70% over the 3090 Ti. I can see why they keep talking about AI TOPS and not specific performance. In MLPerf, the story remains largely the same in the time to first token and the token generation rate benchmarks. For image generation, our preferred Procyon benchmark doesn't support the 5090 yet. So we tested using the benchmark provided by NVIDIA. Prepare your salt grains for the taking. In the Procyon Flux FP8 image generation, the 5090 leads by a margin in line with the rest of our benchmarks. But when we switch to FP4 precision, the 5090 shows how powerful the native hardware support can be, taking less than one quarter of the time to generate images compared to the 4090. 
We'd love to see how this fares against the 3090 Ti, but this NVIDIA-provided benchmark doesn't support older cards. I was expecting AI to be the place where this card really shines, but I guess potential buyers will have to settle for just having the best consumer-grade card for AI. We'll call it the nifty feinty. Somehow there is still more review to go. Good thing this is so digestible and uncomplicated so far, right? Blackwell's main efficiency improvements over Ada seem to come from what they're calling Max-Q functionality. It boils down to a few small but significant changes. Improvements to power gating, thanks to an additional power rail and improved logic, allow more of the GPU to switch to a low power state more rapidly. In CPU-bound or frame-capped scenarios, this could help save some power, especially on mobile chips. But in GPU-bound full load scenarios, this monster will absolutely draw its fully rated 575 watts, including transient spikes of as high as 637 watts. And you can see even in real world gaming, it will pull space heater levels of power with an average of 554 watts in F1 24. And to manage all that power, they had to make a unique cooler design. While everyone else was making four-slot behemoths, NVIDIA built something very different. And man, does it look good. Not only is it classy, it's also innovative. Overambitious, even. The main board of the video card is just the middle section and the outputs and PCIe connector are on daughterboards connected via what they call a flexible PCB. It's like a stiff ribbon cable, I guess. This allows for a double flow-through design with fans blowing through dense heat sinks. Anecdotally, the fans run quiet, too quiet even. So like the 4090, most of the time, the loudest thing about the card will be its coil whine. It's not the worst we've heard, but it's noticeable. And if you expected it to run cooler than the 4090 Founders Edition, it doesn't. 
But given how much smaller it is, not to mention the enormous thermal load it's dealing with, I'd say it's doing a great job. But the new cooler style brings new build considerations. We know that flow-through coolers can have a noticeable impact on CPU temps, especially for those using tower heat sinks. So is a double flow-through double bad? We took the 5090 and the 4090 FE and put them in a Corsair 4000D. And even with both running at 450 watts to control the experiment, our poor Noctua NH-D15 saw CPU temps that are roughly three degrees higher in both synthetic and gaming workloads. A CPU cooler upgrade, perhaps an intake-mounted radiator, could be in order for some folks. Wow, that was a lot to talk about. So what's our conclusion? Well, if you're a professional game developer or any other professional, or basically anyone who can use the new performance and especially the new features to make money, it's a no-brainer. As for the gamers, well, if the 4090 was stupid, stupid price, but stupid performance, the 5090 is stupider. It provides 30-ish percent more performance by using 33% more hardware and 30-ish percent more power, and it gets a roughly 25% price increase. It's a 4090 plus plus. On the one hand, you could look at this and say, wow, this doesn't look good for the rest of the 50 series lineup, but it's also worth considering that the 4090 was kind of an outlier for 40 series, offering a huge boost over its predecessor with the rest offering smaller upgrades. So if you're on 30 series and still not in the mega, mega baller income bracket, I guess all we can do is wait and see. See if Linus has the strength to do the segue. Who are you? If you like this video, check out the one you get on ACCS chronging, watch this good nugget for now. I wish I could eat a nugget right now."}
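The review's verdict of "~30% more performance for ~25% more money" can be collapsed into a single performance-per-dollar ratio. A minimal sketch using the review's own rough numbers:

```python
def value_ratio(perf_gain: float, price_gain: float) -> float:
    # Performance-per-dollar relative to the predecessor:
    # > 1.0 means better value; exactly 1.0 means you pay linearly for the gain.
    return (1 + perf_gain) / (1 + price_gain)

# ~30% more performance at a ~25% higher price than the 4090:
print(round(value_ratio(0.30, 0.25), 3))  # ~1.04, barely better value per dollar
```

A ratio of roughly 1.04 is the arithmetic behind the "4090 plus plus" conclusion: almost none of the generational gain comes free; you are essentially buying the extra performance at the old rate.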