{"video_id":"fp_HdSyKqcjsc","title":"TQ: Your Graphics Card Just Got SMARTER","channel":"Techquickie","show":"Techquickie","published_at":"2020-05-19T18:02:18.621Z","duration_s":200,"segments":[{"start_s":0.0,"end_s":4.8,"text":"Deep Learning Super Sampling. This probably sounds like some kind of hokey way to balance","speaker":null,"is_sponsor":0},{"start_s":4.8,"end_s":9.84,"text":"your energy field or something, but it's actually a new AI-powered method from NVIDIA to make your","speaker":null,"is_sponsor":0},{"start_s":9.84,"end_s":15.44,"text":"games look better. Now, you might already be familiar with traditional super sampling. That is,","speaker":null,"is_sponsor":0},{"start_s":15.44,"end_s":20.48,"text":"when your graphics card renders frames at a higher resolution than your monitor can support,","speaker":null,"is_sponsor":0},{"start_s":20.48,"end_s":25.12,"text":"but then downscales the image to fit your display. Although it's very computationally taxing,","speaker":null,"is_sponsor":0},{"start_s":25.12,"end_s":29.52,"text":"it can provide a significant quality boost that other anti-aliasing methods may not be able to","speaker":null,"is_sponsor":0},{"start_s":30.08,"end_s":34.24,"text":"achieve. DLSS, however, is quite different from standard super sampling, and it's actually","speaker":null,"is_sponsor":0},{"start_s":34.24,"end_s":39.52,"text":"less demanding on your GPU. Instead of simply forcing your GPU to render higher resolution","speaker":null,"is_sponsor":0},{"start_s":39.52,"end_s":44.64,"text":"frames from scratch, it uses a neural network to predict what the frames should look like.","speaker":null,"is_sponsor":0},{"start_s":44.64,"end_s":49.04,"text":"The neural network is trained by an NVIDIA supercomputer that feeds it correct frames","speaker":null,"is_sponsor":0},{"start_s":49.04,"end_s":53.84,"text":"from certain games to help it learn how to generate extra pixels accurately. 
These","speaker":null,"is_sponsor":0},{"start_s":53.92,"end_s":59.68,"text":"correct frames are actually 16k images, so the AI will have a very granular level of detail to","speaker":null,"is_sponsor":0},{"start_s":59.68,"end_s":64.8,"text":"learn from. The AI model is then sent to your GPU via driver updates so it can be run locally. The","speaker":null,"is_sponsor":0},{"start_s":64.8,"end_s":69.44,"text":"idea here is that it's easier for your GPU to run this neural network when you're playing a","speaker":null,"is_sponsor":0},{"start_s":69.44,"end_s":73.92,"text":"graphically intense game running at a low frame rate than it is to keep drawing new frames from","speaker":null,"is_sponsor":0},{"start_s":73.92,"end_s":80.32,"text":"scratch. Regardless of how difficult the game is to run, DLSS uses a fixed amount of time per frame,","speaker":null,"is_sponsor":0},{"start_s":80.32,"end_s":85.12,"text":"so it often doesn't take as long for your GPU to spit out a DLSS-assisted frame","speaker":null,"is_sponsor":0},{"start_s":85.12,"end_s":90.48,"text":"as it does to render one the old-fashioned way, partly thanks to newer RTX GPUs having specialized","speaker":null,"is_sponsor":0},{"start_s":90.48,"end_s":94.8,"text":"tensor cores that are supposed to be optimized for running AI. They kind of are.","speaker":null,"is_sponsor":0},{"start_s":94.8,"end_s":100.08,"text":"And a new version, DLSS 2.0, was recently rolled out and features a few key improvements. For","speaker":null,"is_sponsor":0},{"start_s":100.08,"end_s":105.36,"text":"starters, DLSS 2.0 aims to provide near native resolution quality while the GPU renders well","speaker":null,"is_sponsor":0},{"start_s":105.36,"end_s":109.84,"text":"under half the pixels it would otherwise need to handle. 
There are also efficiency improvements to","speaker":null,"is_sponsor":0},{"start_s":109.84,"end_s":114.0,"text":"the neural network that should help it process images faster, ultimately boosting frame rates.","speaker":null,"is_sponsor":0},{"start_s":114.0,"end_s":118.32,"text":"Another big strength is that now the neural network is much more generalized. Instead of","speaker":null,"is_sponsor":0},{"start_s":118.32,"end_s":123.36,"text":"needing to separately train it for every game, NVIDIA now uses more general visual content that's","speaker":null,"is_sponsor":0},{"start_s":123.36,"end_s":127.6,"text":"supposed to be more representative of a variety of different games, which means improvements can","speaker":null,"is_sponsor":0},{"start_s":127.6,"end_s":133.28,"text":"be delivered to users more quickly and more games will end up supporting DLSS. Users can now decide","speaker":null,"is_sponsor":0},{"start_s":133.28,"end_s":137.6,"text":"how much they want to utilize DLSS technology rather than just leaving it completely up to","speaker":null,"is_sponsor":0},{"start_s":137.6,"end_s":142.96,"text":"the GPU and driver. Gamers can now choose between three modes for image quality, prioritizing either","speaker":null,"is_sponsor":0},{"start_s":142.96,"end_s":147.76,"text":"greater image quality or higher frame rates. The benefits of all of this can mostly be seen when","speaker":null,"is_sponsor":0},{"start_s":147.76,"end_s":152.48,"text":"looking at the finer details in a scene, not just at the edges of objects where you traditionally","speaker":null,"is_sponsor":0},{"start_s":152.48,"end_s":157.52,"text":"see jaggies, if the anti-aliasing isn't up to par. 
Things like text, chain link fences, and","speaker":null,"is_sponsor":0},{"start_s":157.52,"end_s":162.8,"text":"details on faraway buildings should be a lot clearer with DLSS without game-breaking slowdowns.","speaker":null,"is_sponsor":0},{"start_s":162.8,"end_s":167.68,"text":"And in some cases, DLSS even appears to be able to make visual elements look more detailed","speaker":null,"is_sponsor":0},{"start_s":167.68,"end_s":172.4,"text":"than they would be at native resolution, despite rendering fewer pixels from scratch,","speaker":null,"is_sponsor":0},{"start_s":172.4,"end_s":176.8,"text":"if the AI figures out how to enhance a texture or shadow to a greater extent than what the","speaker":null,"is_sponsor":0},{"start_s":176.8,"end_s":181.12,"text":"game's code originally calls for. Of course, with other parts of an image, it might still be","speaker":null,"is_sponsor":0},{"start_s":181.12,"end_s":185.76,"text":"inferior to standard rendering. But the hope is that as processing power increases and the algorithm","speaker":null,"is_sponsor":0},{"start_s":185.76,"end_s":190.96,"text":"continues to be fine-tuned, it'll become easier and easier to run more and more games at buttery","speaker":null,"is_sponsor":0},{"start_s":190.96,"end_s":196.08,"text":"smooth frame rates without compromising quality. Thanks for watching. Like, dislike,","speaker":null,"is_sponsor":0},{"start_s":196.08,"end_s":199.76,"text":"check out our other videos, comment with video suggestions, and don't forget to subscribe and","speaker":null,"is_sponsor":0},{"start_s":199.76,"end_s":200.56,"text":"follow below.","speaker":null,"is_sponsor":0}],"full_text":"Deep Learning Super Sampling. This probably sounds like some kind of hokey way to balance your energy field or something, but it's actually a new AI-powered method from NVIDIA to make your games look better. Now, you might already be familiar with traditional super sampling. 
That is, when your graphics card renders frames at a higher resolution than your monitor can support, but then downscales the image to fit your display. Although it's very computationally taxing, it can provide a significant quality boost that other anti-aliasing methods may not be able to achieve. DLSS, however, is quite different from standard super sampling, and it's actually less demanding on your GPU. Instead of simply forcing your GPU to render higher resolution frames from scratch, it uses a neural network to predict what the frames should look like. The neural network is trained by an NVIDIA supercomputer that feeds it correct frames from certain games to help it learn how to generate extra pixels accurately. These correct frames are actually 16k images, so the AI will have a very granular level of detail to learn from. The AI model is then sent to your GPU via driver updates so it can be run locally. The idea here is that it's easier for your GPU to run this neural network when you're playing a graphically intense game running at a low frame rate than it is to keep drawing new frames from scratch. Regardless of how difficult the game is to run, DLSS uses a fixed amount of time per frame, so it often doesn't take as long for your GPU to spit out a DLSS-assisted frame as it does to render one the old-fashioned way, partly thanks to newer RTX GPUs having specialized tensor cores that are supposed to be optimized for running AI. They kind of are. And a new version, DLSS 2.0, was recently rolled out and features a few key improvements. For starters, DLSS 2.0 aims to provide near native resolution quality while the GPU renders well under half the pixels it would otherwise need to handle. There are also efficiency improvements to the neural network that should help it process images faster, ultimately boosting frame rates. Another big strength is that now the neural network is much more generalized. 
Instead of needing to separately train it for every game, NVIDIA now uses more general visual content that's supposed to be more representative of a variety of different games, which means improvements can be delivered to users more quickly and more games will end up supporting DLSS. Users can now decide how much they want to utilize DLSS technology rather than just leaving it completely up to the GPU and driver. Gamers can now choose between three modes for image quality, prioritizing either greater image quality or higher frame rates. The benefits of all of this can mostly be seen when looking at the finer details in a scene, not just at the edges of objects where you traditionally see jaggies, if the anti-aliasing isn't up to par. Things like text, chain link fences, and details on faraway buildings should be a lot clearer with DLSS without game-breaking slowdowns. And in some cases, DLSS even appears to be able to make visual elements look more detailed than they would be at native resolution, despite rendering fewer pixels from scratch, if the AI figures out how to enhance a texture or shadow to a greater extent than what the game's code originally calls for. Of course, with other parts of an image, it might still be inferior to standard rendering. But the hope is that as processing power increases and the algorithm continues to be fine-tuned, it'll become easier and easier to run more and more games at buttery smooth frame rates without compromising quality. Thanks for watching. Like, dislike, check out our other videos, comment with video suggestions, and don't forget to subscribe and follow below."}