That’s one of those paradoxes with human behavior around problems. If you put in effort to resolve the problem before it becomes significant, either no one notices, or they claim your effort was unnecessary because it wasn’t a problem in the first place.
Y2K bugs are a great example. Lots of effort, time, and money was spent ahead of time to prevent it from becoming a problem…and you get people claiming the whole thing was just nothing to be worried about at all and the expense was pointless.
What are Y2K bugs, exactly?
Dates had the year stored as two digits only (say, 1995 was stored as “95”). That worked fine for things like comparisons (for example: “is the year in entry A before or after the year in entry B?”), which were done as plain numerical comparisons (i.e. 98 > 95, hence a date with a year ending in 98 is after a date with a year ending in 95). But in 2000 the stored year would become “00”, and all those assumptions that you could compare stored years as numbers would break, as would all the maths done on two-digit years (i.e. a loan taken in 1995 would, in 1998, be on its 98 - 95 = 3rd year with that system, but in 2000 it would be on its 00 - 95 = -95th year, so negative, which would further break the maths downstream, with interesting results like the computer telling the bank it would have to give money to the borrower to close the loan).
Ultimately a lot of work was done (I myself worked on some of that stuff) and very few important things blew up or started producing erroneous numbers when the year 2000 came.
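For anyone curious, here’s a minimal Python sketch of how that two-digit arithmetic breaks (the function names and the window pivot of 30 are purely illustrative, not from any particular system):

```python
def loan_age(start_yy: int, current_yy: int) -> int:
    """Loan age the way a two-digit-year system computed it."""
    return current_yy - start_yy

# A loan taken out in 1995, checked in 1998: works as expected.
print(loan_age(95, 98))  # 3

# The same loan checked in 2000, when the stored year becomes "00":
print(loan_age(95, 0))   # -95, and the downstream interest maths goes negative

# One common fix was "windowing": interpret low two-digit years as 20xx
# and high ones as 19xx (the pivot of 30 here is just an example value).
def widen(yy: int, pivot: int = 30) -> int:
    return 2000 + yy if yy < pivot else 1900 + yy

print(widen(0) - widen(95))  # 5, a sensible loan age again
```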
https://en.m.wikipedia.org/wiki/Year_2000_problem
Generic summary: two-digit clocks hitting 00 and thinking it’s 1900, not 2000.
I wonder why they didn’t think about making computers and clocks count past 100 when creating them? Did they not expect to ever get to the year 2000?
Early computers had very limited resources: RAM, storage, etc. (the first computer I worked with only had 4k of RAM, for example). It often made sense to only use the last two digits of the year as an optimization in many common tasks computers were used for, as both the 1800s and the 2000s were far enough away that most basic date calculations worked fine. Also, the industry was changing rapidly, and few people expected their software to be used for more than a few years - certainly not for decades - so the focus was usually on solving the immediate tasks as efficiently as possible, without much consideration for the distant future.
However, it turned out that a lot of the code written in this period (the 70s and 80s) became “legacy code” that companies kept relying on for far longer than expected, to the point that old retired COBOL programmers were being hired for big $$ in the late 90s to come and fix Y2K issues in code written decades earlier. Many large systems had some critical ancient mainframe code somewhere along the dependency chains. On top of that, even code that was meant to handle Y2K was not always tested well, and all kinds of unexpected dependencies crept in, where a small bug here or some forgotten non-compliant library there could wreak havoc once the date rolled over into the 2000s.
A lot of the Y2K work was testing all the systems and finding all the places such bugs were hiding.
that’s interesting, thank you!
It’s similar to the Y2K38 bug:
https://en.m.wikipedia.org/wiki/Year_2038_problem
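For context, 2038 is when a signed 32-bit count of seconds since the Unix epoch (1970-01-01 UTC) runs out. A quick Python sketch of the arithmetic (Python’s own integers don’t overflow, so this just computes where a 32-bit counter would wrap):

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

# The largest value a signed 32-bit counter can hold: 2**31 - 1 seconds.
print(EPOCH + timedelta(seconds=2**31 - 1))
# 2038-01-19 03:14:07+00:00  (the last representable second)

# One second later the counter wraps around to -2**31,
# which reads back as a date in 1901:
print(EPOCH + timedelta(seconds=-2**31))
# 1901-12-13 20:45:52+00:00
```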
damn someone should fix that
The year is 2038, nothing happened. Seems like a lot of nothing. (Meanwhile, behind the scenes, developers are happy they prevented a major problem.)
en.wikipedia.org/wiki/preparedness_paradox
Happens at work so often.
Put energy into building robust systems organically (a lot of problems get solved because they were experienced, not because they were predicted) and then a year later you have folks asking “Can’t we just simplify this and remove XYZ? Do these problems even exist? Can you show us how often edge cases a, b, c happen to justify why this needs to operate this way?”…etc
Should have just let it fail and fixed the issues once pagerduty got involved instead 😒