Sunday, January 8, 2012

By The Way, CSS Is a Bad Idea

Updated Safari the other day. Safari 5.1.2, specifically. Issues:
  • CPU usage seems unreasonably high for some changes to attributes matched by CSS rules.
  • Now, it seems, when updating a DOM element attribute, the change is not re-rendered until mouseout.
This used to work nicely: changes re-rendered immediately, with no CPU overuse. Odd (original blog title: Safari 5.1.2 - What Just Happened?). I dearly hope it's not a change to WebKit, and that Chrome keeps working the way it does right now.
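If it really is just deferred re-rendering, one workaround might be to force a style/layout flush right after the attribute change. A minimal sketch, with a hypothetical element; reading a layout property such as offsetHeight is a well-known way to make the browser recompute styles and layout synchronously:

var el = document.getElementById('my-component'); // hypothetical element
el.setAttribute('var_attr', 'on');
void el.offsetHeight; // reading a layout property forces a synchronous style/layout pass

I haven't verified this against Safari 5.1.2; it's just the first thing I'd try.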

I offer you the following hypothesis: this change has to do with CSS complexity; maybe adding new types of rules (CSS3, CSS4?) or changing the CSS rule-matching algorithm increases the computational complexity of style calculation for specific patterns.

And these are useful patterns, I'd say. The pattern I suspect looks like this in CSS:
[var_attr=X] [attr=X] {...}
This is useful in many situations, for instance for GUI components with multiple parts that should all reflect a particular state, like On/Off or Active/Inactive:
.component[var_attr=off] .part1[attr=off] {...}
.component[var_attr=on] .part1[attr=on] {...}
.component[var_attr=off] .part2[attr=off] {...}
.component[var_attr=on] .part2[attr=on] {...}
And then you can set the state of the whole component by changing a single attribute. There are alternatives, of course: removing and adding classes, but that's even uglier; or using JavaScript, which is probably the right thing to do anyway.
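To make the one-attribute toggle concrete, here is a minimal sketch; the markup and state names are my own illustration, matching the selectors above:

// Flipping a single attribute on the container switches which of the
// rules above match, restyling every part at once.
function setComponentState(component, state) { // state: 'on' or 'off'
  component.setAttribute('var_attr', state);
}
setComponentState(document.querySelector('.component'), 'on');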

Back to guessing the Safari inner workings: they added a new rule-matching engine to accommodate new CSS rule patterns, perhaps the 'parent' matching rule (so far, everything has been matching inwards only, if you see what I mean), using a new algorithm, which breaks badly on the above patterns. Then they added the "don't re-render until mouseout" rule to cope with the slower algorithm. IE8 seems to do the same, so maybe deferred re-rendering is suggested standard behavior for coping with slowness? Remember, this is just a guess of mine, of course.
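One crude way to test this guess: load a stylesheet full of the pattern above and time repeated attribute flips. A sketch of my own probe (nothing from WebKit itself):

var c = document.querySelector('.component');
var t0 = Date.now();
for (var i = 0; i < 1000; i++) {
  c.setAttribute('var_attr', i % 2 ? 'on' : 'off');
  void c.offsetHeight; // force a style recalculation on every flip
}
console.log('1000 flips: ' + (Date.now() - t0) + ' ms');

If the matching algorithm really did get slower on these patterns, the numbers should differ noticeably between Safari 5.1.2 and, say, Chrome.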

I've googled around a little for information about CSS; in particular, I'd like to find something on its algorithmic complexity. It's hard to find. I did find some papers on parallelizing web page rendering, which might tell you something.

This, http://www.eecs.berkeley.edu/~lmeyerov/projects/pbrowser/pubfiles/paper.pdf, is a good paper from 2010. A quote, to support the CSS denouncing to follow: "Due to the weak abstraction mechanisms in the selector language, multiple rules often use the same selectors." CSS has abstraction? :) Well, within itself it does not; it only provides one layer of indirection, presumably to make CSS intuitive to non-programmers.

My opinion on CSS: it is a big mistake, still unfolding. It has un-over-come-able problems. It is not scoped and has no definitions, only rules, which is bad for composability. I suspect it has terrible worst-case algorithmic complexity. It is declarative, but in an unstructured way.

It will cost the web dearly. I could of course go on and write about what are, to my mind, better solutions; maybe some day soon. But for now, a prediction: just as NoSQL arose (and I really do see it as having some good points, especially as far as the SQL language goes), we'll soon see a NoCSS movement. I, for one, am about to rewrite some CSS into JavaScript right about now.
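For the component pattern above, the JavaScript version might look something like this; the display toggling is a made-up stand-in for whatever the CSS rules did:

function applyComponentState(component, state) {
  var parts = component.querySelectorAll('[attr]');
  for (var i = 0; i < parts.length; i++) {
    // Show the parts whose attr matches the state, hide the rest;
    // no CSS attribute selectors involved, so nothing to re-match.
    parts[i].style.display = parts[i].getAttribute('attr') === state ? '' : 'none';
  }
}
applyComponentState(document.querySelector('.component'), 'on');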

PS. More pages found (searching for "computational complexity" instead of "algorithmic complexity"):