Increasing CPU optimization by 0.02% does seem crazy to me. If you’re going to spend time working on something, make it worthwhile. Also, isn’t while(true) {print(money)} Microsoft, Apple and Amazon’s business model?
Only if you’d already removed or fixed every other bottleneck that would gain you more than 0.02%. And I’m not convinced there are many projects of any reasonable size, if any, where that has been the case.
I mean, if the CPU that’s running these instructions is super low power, then 0.02% might be worth it.
Or if you’re scaling a large cluster of CPUs for parallel computations where a 0.02% increase can make a tangible difference in runtimes.
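To put some rough numbers on that: here’s a back-of-the-envelope sketch of what a 0.02% CPU-time reduction adds up to at scale. The cluster size and utilization are made-up assumptions for illustration, not real figures.

```python
# Back-of-the-envelope: what a 0.02% CPU-time reduction saves at scale.
# All figures below are hypothetical assumptions for illustration.

cores = 10_000            # assumed cluster size
hours_per_year = 8_760    # assumed 24/7 operation
speedup = 0.0002          # 0.02% reduction in CPU time

saved_core_hours = cores * hours_per_year * speedup
print(f"Saved per year: {saved_core_hours:,.0f} core-hours")
# → Saved per year: 17,520 core-hours
```

Whether ~17,500 core-hours a year pays for the engineering time depends entirely on your per-core cost, which is exactly the trade-off being argued here.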