With the rise of these retro-looking websites, I feel it's possible again to start using a browser from the '90s. Someone should make a static-site social media platform for full compatibility.
Not so much. While a lot of these websites use classic approaches (handcrafted HTML/CSS, server-side includes, etc.) and aesthetics, the actual versions of those technologies used are often rather modern. For example, TFA looks like a page I'd have browsed in IE5 as a kid, but if you look at the markup, it's using HTML5 tags and Flexbox (which became a W3C CR in 2017), while a period site would have used an HTML table to get the same effect. Of course, you wouldn't want to do it that way nowadays, because it wouldn't be responsive or mobile-friendly.
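To illustrate, here's a minimal sketch of my own (not markup taken from TFA) of the same two-column effect done both ways:

    <!-- circa-1999: layout via a presentational table -->
    <table width="100%" cellspacing="0" cellpadding="0">
      <tr>
        <td width="200" valign="top">sidebar</td>
        <td valign="top">main content</td>
      </tr>
    </table>

    <!-- today: semantic HTML5 tags plus Flexbox -->
    <div style="display: flex; gap: 1em">
      <nav style="flex: 0 0 200px">sidebar</nav>
      <main style="flex: 1">main content</main>
    </div>

Visually near-identical, but the second version degrades and reflows gracefully on narrow screens.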
(I don't think this detracts from such sites, to be clear; they're adopting new technologies where those provide practical benefits to the reader, and many indieweb proponents push the movement as a progressive, rather than reactionary, praxis.)
> For example, TFA looks like a page I'd have browsed in IE5 as a kid, but if you look at the markup, it's using HTML5 tags and Flexbox (which became a W3C CR in 2017), while a period site would have used an HTML table to get the same effect.
Are they going out of their way to recreate an aesthetic that was originally the easiest thing to create given the language specs of the past, or is there something about this look and feel that is so fundamental to the idea of making websites that basically anything that looks like any era or variety of HTML will converge on it?
I'm happy they didn't choose to go full authentic with quirks mode and table-based layouts, because Firefox has some truly ancient bugs in nested table rendering... that'll never get fixed, because... no one uses them anymore!
I think it’s the former. Many of these retro layouts are pretty terrible. They existed because they were the best option at the time, but using modern HTML features to recreate bad layouts from the past is just missing the point completely.
I loaded up Windows 98SE SP2 in a VM and tried to use it to browse the modern web, but it was basically impossible since it only supported HTTP/1.1 websites. I was only able to find maybe 3-4 websites that still supported it and would load.
Having studied, and attempted to build, a few taxonomies / information hierarchies myself (a fraught endeavour, perhaps information is not in fact hierarchical? (Blasphemy!!!)), I'm wondering how stable the present organisational schema will prove, and how future migrations might be handled. (Whether for this or comparable projects.)
Unexpectedly related to the problem of perfect classification is McGilchrist’s The Master and His Emissary. It shows that the human mind is a duet where each part exhibits a different mode of attending to reality: one seeks patterns and classifies, while the other experiences reality as an indivisible whole. The former is impossible to do “correctly”[0]; the latter is impossible to communicate.
(Naturally, one might notice that this argument itself has to use the classifying approach, but the meta point stands.)
Notably, the classifying mode was shown in other animals (as this is common to probably every creature with two eyes and a brain) to engage when seeking food or interacting with friendly creatures. This highlights its ultimate purposes—consumption and communication, not truth.
In a healthy human both parts act in tandem by selectively inhibiting each other; I believe in later sections he goes a bit into the dangers of over-prioritizing the classifying part.
Due to the unattainability of comprehensive and lossless classification, presenting information in ways that allow for the coexistence of different competing taxonomies (e.g., tagging) is perhaps a worthy compromise: it still serves the communication requirement, but without locking into a local optimum.
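As a rough sketch of what I mean (hypothetical markup using the microformats rel-tag convention, not any particular directory's format), a single entry can carry several tags, so the same site can live in multiple competing taxonomies at once:

    <!-- one entry, several coexisting classifications;
         the tag names here are invented for illustration -->
    <article class="h-entry">
      <a class="u-url" href="https://example.com/">Example site</a>
      <a rel="tag" href="/tags/retro-web">retro-web</a>
      <a rel="tag" href="/tags/personal-sites">personal-sites</a>
      <a rel="tag" href="/tags/css">css</a>
    </article>

A strict directory forces exactly one category per site; tags defer that choice to whoever is browsing.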
[0] I don’t exactly recall off the top of my head how Iain gets there (there is plenty of material), but similar arguments were made elsewhere—e.g., Clay Shirky’s points about the inherent lossiness of any ontology and the impossible requirement to be capable of mind reading and fortune telling, or I personally would extrapolate a point from the incompleteness theorem: we cannot pick apart and formally classify a system which we ourselves are part of in a way that is complete and provably correct.
Yes, the seeming hierarchy in information is a bit shallow. Yahoo, Altavista, and others tried this and it soon became unmanageable. Google realized that keywords and page-ranking were the way to go. I think keywords are sort of the same as dimensions in multi-dimensional embeddings.
Information is basically about relating something to other known things. A closer relation is interpreted as proximity in a taxonomy space.
This is cute, but I absolutely do not care about buying an omg.lol URL for $20/yr. I'm not trying to be a hater; the concept is fine. But anybody who falls into this same boat should know that this is explicitly "not for them".
I don't think pointing out "this is a web directory full of links submitted by people willing to spend $20/yr" is being cheap, per se, the same way I don't think paying to be "verified" on Twitter means your content is worth paying attention to
There was a time where "willing to pay for access" was a decent spam control mechanism, but that was long ago
Someone wants to add it enough to click the button that adds the site. Sometimes you need to REALLY want to add it because no category is applicable so you also click the button to add the category.
Sadly it's the same for Sci-Fi art. I had a link to submit, but you need to sign up and it's $20. Fair enough if they want to set some minimum barrier for the site to filter out suggestions from every Tom, Dick, and Harry (and Jane?), but I don't feel invested enough in this to give them $20 just to make a suggestion.
https://portal.mozz.us/gopher/gopher.somnolescent.net/9/w2kr...
with these NEW values in about:config set to true:
Also, set these to false:
<https://en.wikipedia.org/wiki/Taxonomy>
<https://en.wikipedia.org/wiki/Library_classification>
https://web.archive.org/web/20191117161738/http://shirky.com...
Even if it was $10/year, people would still cry foul.
https://www.simonstalenhag.se/
^ The link is for the sci-fi art, not the hookers.