I would assume they're speaking at a bigger scope, and it just happens to divide down to a ratio of 1 to 32.
Like rendering at 480p (~307k pixels) and then generating 4K (~8.3M pixels), which works out to roughly 1:27, close enough to what he's saying. AI upscalers like DLSS and FSR are doing exactly that, just at a less extreme upscale factor.
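As a quick sanity check on that arithmetic (assuming 640×480 for "480p" and 3840×2160 for 4K; those exact resolutions are my assumption, not from the original claim):

    # Pixel-count ratio check; 640x480 and 3840x2160 are assumed resolutions.
    rendered = 640 * 480         # 307,200 pixels actually rendered
    displayed = 3840 * 2160      # 8,294,400 pixels shown at 4K
    print(displayed / rendered)  # 27.0 -> roughly 1 rendered : 27 displayed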
Maybe I don’t know enough about computer graphics, but in what world would you have/want to display a group of 33 pixels (one computed, 32 inferred)?!
Are we inferring five pixels to the left and right, plus the rows above and below, in weird 3 × 11 strips?