Mainly because JavaScript was designed to work alongside HTTP in a browser. Most of its input will be text, so defaulting common behavior to strings makes some sense.
That’s misleading at best and most likely just false, and it’s worrying it’s so upvoted.
There’s no historical record explaining why this was designed this way, but we can infer some things. HTTP is very unlikely to be a factor; XHR/AJAX was added years after the .sort() function. Additionally, it doesn’t fit with the rest of the language, where other comparisons are not string-wise (sort()/quicksort is basically a series of comparisons).
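A quick illustration of the inconsistency, using nothing but the default behavior:

  // The < operator compares numbers numerically...
  2 < 10;          // true
  // ...but the default sort compares string representations,
  // and "10" < "2" because "1" sorts before "2".
  [2, 10].sort();  // [10, 2]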
The trouble with JS arrays is that they can contain any values - e.g. [false, undefined, 1567, 10, "Hello world", { x: 1 }]. How do you sort those? There would have to be one function that can compare every combination of values, but how do you compare booleans and objects?
There’s no such function that would give reasonable results. In that context, doing .toString() and then string-wise comparison/sorting doesn’t seem that crazy - every object has .toString(), it will compute something, and often it will work well enough.
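To make that concrete, here’s roughly what the default sort does with the mixed array above (this is my reading of the spec: non-undefined elements are stringified and compared as strings, undefined is pushed to the end):

  [false, undefined, 1567, 10, "Hello world", { x: 1 }].sort();
  // compares "false", "1567", "10", "Hello world" and "[object Object]" as strings,
  // moves undefined to the end, and yields roughly:
  // [10, 1567, "Hello world", { x: 1 }, false, undefined]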
There could be some additional smartness - if the array contains numbers only, it could choose to use a number-wise comparison function. But that would require a) extra implementation complexity (JS was famously designed in a short time) and b) reduced performance - since the JS runtime doesn’t know what types of values are present in the array, it would have to scan the whole array before starting the sort. But I guess a) was the decisive factor in the beginning, and backwards compatibility prevented improving the function later.
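For what it’s worth, that kind of smartness could be sketched in user land like this (purely hypothetical - no engine does this for the default sort). The up-front scan is exactly the extra cost mentioned above:

  // Hypothetical "smarter" sort: pick a numeric comparator only if a full
  // pre-scan (the extra O(n) cost) shows the array is all numbers.
  function smartSort(arr) {
    const allNumbers = arr.every(v => typeof v === "number");
    return allNumbers
      ? arr.sort((a, b) => a - b) // numeric comparison
      : arr.sort();               // default string-wise comparison
  }

  smartSort([10, 9, 100]);   // [9, 10, 100]
  smartSort([10, 9, "100"]); // falls back to string order: [10, "100", 9]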
You are probably correct. I don’t know if that’s true; it’s probably more likely it was a way for it not to fail.
I said HTTP mainly because HTML is plaintext because of it. JavaScript 1.0’s main purpose was to manipulate the page. Of course, Array objects weren’t added until 1.1, when Netscape Navigator 3.0 was released, but it was still mostly 1.0 code. I felt like having everything be coercible to a string made it easy to just assign it to the document. If you assigned the wrong thing, it wouldn’t crash.
I originally thought there was a precursor to Microsoft’s XMLHTTP in an earlier version, since the 1997 ECMAScript documentation specifically talks about using it both client- and server-side to distribute computations, but it was far more static. So I’m probably just wrong.
Thank you for the explanation, that does clarify things.