But in the middle of making the point, some inflammatory language appears:
The term "web application" was designed to disregard the fundamental distinction between software delivered to users and software running on the server.
I use the term web application myself, and I don't use it in order to disregard the fundamental distinction between local and remote software. I use it because it accurately describes most of the programs I work on: programs that are complicated enough that "web site" is misleading, and which users interact with via web browsers rather than local desktop applications. I don't feel like a useful idiot or a dupe of nefarious forces who "designed" the term to obscure an innate distinction with implications for freedom. I suspect the people who coined and popularized the term had the same motive that I do in using it: it is an accurate and terse description of a type of software. It wasn't designed to obscure a distinction; rather, it describes a situation in which changes in technology have blurred what was previously a fairly sharp distinction.
I find it useful to be reminded that many web applications have a substantial downloaded component (or any downloaded component at all), and that that component may not be free. The rest of the article is very helpful and very reasonably worded; why did this slip through?