Jason Knight
3 min read · Jun 22, 2020


First off, if you cared about performance you probably wouldn't be using the pointless "let" statement. If you need what "let" or "const" does there's probably something wrong with where you're drawing the line at making functions.

When it comes to fast loops, don't forget the power of commas so you don't need a let outside the loop:

for (var i = 0, iLen = arr.length; i < iLen; i++) {
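Filled in as a runnable sketch (the array contents are hypothetical), showing how both counters live inside the for() header so nothing leaks outside the loop:

```javascript
// Comma operator keeps i and the cached length in one declaration,
// so the length lookup happens once instead of every iteration.
var arr = ['a', 'b', 'c'];
var out = [];
for (var i = 0, iLen = arr.length; i < iLen; i++) {
	out.push(arr[i].toUpperCase());
}
console.log(out.join(',')); // prints "A,B,C"
```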

When looping through nodeLists or array-likes where you know all values are not loose false, it can be even faster to do:

for (var i = 0, value; value = arrLike[i]; i++) {

The more you use “value” inside the loop body, the faster this loop gets compared to the alternatives. And in cases where you do have loose-false values, even this can be faster overall:

for (var i = 0, value; 'undefined' !== typeof (value = arrLike[i]); i++) {
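Both patterns filled in as a runnable sketch against hypothetical array-likes. The first loop relies on every entry being loosely true; the second survives 0 and '' because only a genuine out-of-bounds undefined ends it:

```javascript
// Assignment-in-condition: reads AND tests the value in one go,
// so the body never has to index clean[i] again.
var clean = { 0: 'one', 1: 'two', 2: 'three', length: 3 };
var names = [];
for (var i = 0, value; value = clean[i]; i++) {
	names.push(value);
}
// names is ['one', 'two', 'three']

// typeof guard: 0 and '' no longer terminate the loop early.
var mixed = ['a', 0, '', 'b'];
var count = 0;
for (var j = 0, item; 'undefined' !== typeof (item = mixed[j]); j++) {
	count++;
}
// count is 4; the plain loose-false test would have stopped at the 0
```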

Good on pointing out for/of though. It’s scary watching the bloated disasters of .forEach and arrow functions people vomit up because they don’t even seem to know it exists. It's a shame M$ dragged their heels on implementing it in IE and I look forward to the day when I can drop IE support to use many of these new features everywhere.
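A minimal for/of sketch (assumes an ES2015+ engine, so no IE): the same iteration .forEach gives you, without a callback invocation per element:

```javascript
// for/of walks the iterable directly -- no function call overhead,
// and break/continue/return all work normally inside it.
var total = 0;
for (var word of ['foo', 'bar', 'baz']) {
	total += word.length;
}
// total is 9
```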

Sadly for now my clients say no. Usually the most I can get a client to agree to right now for client-side development is IE10 as a cut-off.

What you call "reducing DOM access" to me isn't DOM access at all. Why? Because you're side-stepping the DOM using getElement(s)By___ or querySelector/querySelectorAll. Whilst you're accessing DOM elements, you're not really leveraging the DOM itself.

Though yes, reduce the number of lookups. Element.getElement(s)By___ and Element.querySelector(All) are both very slow operations. It’s part of what makes jQuery such a bloated slow train wreck laundry list of how NOT to write JavaScript.

Oh, and don't use querySelector if you're just getting a single ID. It parses the entire document until it finds a match, whilst most browsers maintain a separate indexed list of IDs to speed up getElementById. Sometimes doing a getElementById followed by a getElementsByClassName can in fact be faster than querySelectorAll, because it narrows the section of the DOM being searched.
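A sketch of that narrowing (the id and class names here are hypothetical): grab the container through the browser's id index first, then class-search only inside it, instead of making querySelectorAll walk the whole document.

```javascript
// Narrowed lookup: getElementById hits the browser's id index,
// then the class search only scans the container's subtree.
function menuItems() {
	var container = document.getElementById('mainMenu');
	return container
		? container.getElementsByClassName('menuItem')
		: [];
}
// slower document-wide equivalent:
// document.querySelectorAll('#mainMenu .menuItem');
```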

Likewise the junk (no offense) complexity of promises on that iterated setText isn’t even real-world deployable yet (again, thanks IE) for many people, and honestly feels like taking something simple and making it more complex than it needs to be.

To that end, I would still consider the old-school answer simpler and easier to understand than this promise nonsense... which wouldn't be nonsense if JavaScript were multithreaded, but it isn't... so live with it.

function textTimeout(target, txt, delay) {
	setTimeout(function() { target.textContent = txt; }, delay);
}

function textFromArray(target, data) {
	// txt declared in the header; delay grows 3000ms per entry
	for (var i = 0, txt, delay = 3000; txt = data[i++]; delay += 3000) {
		textTimeout(target, txt, delay);
	}
}

textFromArray(
	document.getElementById('foo'),
	[ 'foo', 'bar', 'baz' ]
);

Which is basically the same amount of code without the added complexity and processing time of promises. Since JS is inherently single threaded, there’s no reason to get so complex about it.

Much like this declaring functions via CONST trash, or the mind-numbingly silly and painfully cryptic arrow functions… it all reeks of trying to shoe-horn bad concepts from other languages in where they just don’t fit. ESPECIALLY if you care about performance, given the extra code the engine has to implement to do all this new stuff. Writing as much if not more code to then make the engine work harder, whilst making it more cryptic and harder to follow, is not an improvement!


Written by Jason Knight

Accessibility and Efficiency Consultant, Web Developer, Musician, and just general pain in the arse