Documents and applications

There are static documents: those composed in some kind of markup language, usually structured, and intended to serve as a source of information. One can manipulate them using document viewers or editors, which can be efficient and configured to serve one's needs. Static documents are usually also easy to process automatically.

Then there are applications: programs one uses to perform tasks – for instance, to manipulate those documents. A program has to be studied in order to be used efficiently, if it can be used that way at all (newbie-oriented programs often can't); it also has to be configured to serve one's needs and fit one's preferences.

And there is the web: a mixture of hypertext documents and mostly poor applications, with a web browser acting as both a thin and a fat client, with no strict separation. That's a big part of what makes it so annoying: if you try to configure a web browser, the websites that are closer to applications break very easily (or, rather, their numerous bugs get exposed). Yet accepting those poor applications as they are is hard if you are used to better ones: more efficient, less broken, configurable.

Here's an observation: the law of the instrument seems to be actively encouraged in computing. That is, common technologies (such as the web and programming languages) grow according to how they are used, even when they are used for unintended purposes – and then they turn into semi-broken multi-instruments. It is somewhat similar to how dictionaries adjust the definitions of frequently misused words.

Quite often there are client-side applications in places where automatically generated (or even manually composed) documents would have served well. There's also a whole world of broken CSS and HTML (which very few people write directly nowadays). Overall, those applications are awful and unpleasant to use.
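To illustrate the "generated document" alternative: instead of shipping a client-side application, a server or a build step can render plain HTML from data. This is a minimal sketch; the function and data here are made up for illustration, not taken from any particular site generator.

```python
from html import escape

def render_page(title, items):
    """Render a list of (name, url) pairs as a static HTML document.

    Hypothetical example: the output needs no JavaScript and can be
    read, processed, or archived like any other static document.
    """
    links = "\n".join(
        f'<li><a href="{escape(url)}">{escape(name)}</a></li>'
        for name, url in items
    )
    return (
        "<!DOCTYPE html>\n"
        f"<html><head><title>{escape(title)}</title></head>\n"
        f"<body><h1>{escape(title)}</h1>\n<ul>\n{links}\n</ul>\n"
        "</body></html>\n"
    )

page = render_page("Links", [("Example", "https://example.com/")])
print(page)
```

The output is an ordinary document: any browser, pager, or script can consume it without running anything.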

But hyperlinked documents are still a great idea. The problem is mostly that the hyperlinks often lead to information that is hard to retrieve or use (even ignoring connection-, censorship-, captcha-, and paywall-related issues). That's one of the things I like about Gopher: it consists only of lightweight, hyperlinked documents. Each document can be manipulated in the same way; after learning and configuring one client well, you can browse gopherspace comfortably.
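The wire protocol behind that uniformity is tiny. Per RFC 1436, a client opens a TCP connection (port 70 by default), sends a selector string followed by CRLF, and reads until the server closes the connection; menu documents are tab-separated lines. A minimal sketch (the host name in the usage example is hypothetical):

```python
import socket

def gopher_fetch(host, selector="", port=70, timeout=10):
    """Fetch one Gopher document: send the selector, read until EOF."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(selector.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:  # server closes the connection when done
                break
            chunks.append(data)
    return b"".join(chunks)

def parse_menu_line(line):
    """Split one Gopher menu line into (type, display, selector, host, port).

    The first character is the item type; the rest is tab-separated.
    """
    item_type, rest = line[0], line[1:]
    display, selector, host, port = rest.split("\t")[:4]
    return item_type, display, selector, host, int(port)

# Usage (network access required; host is illustrative):
#   menu = gopher_fetch("gopher.example.org").decode("ascii", "replace")
#   for line in menu.splitlines():
#       print(parse_menu_line(line))
```

Because every server speaks exactly this, one well-configured client really does cover all of gopherspace.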

Maybe specialized search engines or webrings could be used to collect and find accessible, JS-free websites. Though even techy news aggregators seem to do a good job of that: the pages they link to tend to be relatively accessible.