Web Push
... turning browsers back into browsers.
"It'd be an OK daily driver if it had a modern browser."
This is true for... a whole lot of alternative OS projects. SerenityOS is just in the process of upgrading its browser so that it can log into Discord (... implying that it's not capable of doing that quite yet). ReactOS has... Firefox 3.x-ish? Not to speak of the many retro OSes: classic Macs, Win95, Amigas, etc.
For some of these, you can get something based on WebKit or Chrome etc. working; for many cases, you'd want to actually write a new one that is a better fit for your OS / hardware / etc.
Too bad that writing a new, reasonably functional browser is, at this point in history, basically impossible.
... remember when...
... The Web was the thing that provided all the hope against Microsoft's Evil Windows Monopoly?
After all, it was based on open standards, and (almost) all the code was running on the server side! Not only did this allow you to write your web app in any programming language you wanted; it also let you use whatever platform you preferred, as long as it had a browser that could render HTML reasonably. This was a whole lot easier than emulating Windows and its APIs (all of them, since otherwise apps would crash or just plain not work). Wine was trying; it had a really hard job though.
No wonder; it's a lot easier to implement a document viewer than a VM / platform.
And then, of course, the Web turned into a VM / platform. Sure, "websites" keep calling themselves that, but many of them have more lines of code in them than... entire 8-bit OSes I guess. And... if you get a crucial part of e.g. the DOM API wrong: nothing actually shows up.
At this point, "why doesn't [OS] have a modern browser" is basically equivalent to "why can't Linux run Win32 programs???". Or, better yet, "why can't your OS run a Windows VM"?
State
The Web started out as a stateless thing; you say hi to a server, you fetch an HTML page, you say bye, that's it. It mostly stayed this way even with JS and webapps: if you have e.g. a login state, it's stored in a session cookie that gets thrown back at the server with every request, so that the server can dig out of a database who it is talking to. If you restart the server between two requests, or replace it with another box altogether... no one cares.
Which makes web servers really nicely scalable.
It also makes webapps a lot more complex than they could be.
Compare that with the vast majority of terminal-based systems, ranging from UNIX to AS/400 (... I think possibly also mainframes). For these, there is a program that is running on the server; you're interacting with this one, either character by character (UNIX) or screen by screen (AS/400). In your program, you can print stuff to the user's screen, request input, process input, all this by just going down your program line by line, in a really straightforward way.
Web things replicate all this with carefully placed pages, URIs, forms, etc. Results are similar; the implementation isn't.
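To make the contrast concrete, here's a sketch of what that terminal-style model looks like as code: one process per user, written top to bottom. The guestbook example and all the names in it are made up for illustration; the I/O is injected as functions so the same code could be driven by a real terminal or by anything else.

```python
# A sketch of the terminal-style model: one process per user,
# plain sequential code, no URIs or forms in sight.

def guestbook_session(read_line, write_line):
    """One user's whole interaction, written line by line, top to bottom."""
    write_line("Welcome! What's your name?")
    name = read_line()
    write_line(f"Hi, {name}. Leave a message:")
    message = read_line()
    write_line(f"Saved. Bye, {name}!")
    return {"name": name, "message": message}

# On an actual terminal you'd just call guestbook_session(input, print);
# the injected I/O is only there so the flow is easy to test.
```

The point isn't the guestbook; it's that the whole interaction is one function, and the "state" is just local variables.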
For a relatively long time, you couldn't even try replicating the "server-side process" model of programming. Before Javascript... well, all you had was forms. Even afterwards, until websockets, you needed to resort to hacks like long polling. And even with this... having a server-side process running for each client is, well, weird, and less scalable, and URIs won't work, etc.
It's especially not worth it given how, these days, you could just do this entire thing on the client side, all in Javascript! No more need for juggling state into URIs when the user clicks something; you can just write your app as a single page, just as in the olden times, except it's Web magic now!
The simplicity of this is probably one of the reasons why SPAs are so popular; you wouldn't need or even want most of them as a user, but... they're just easier to write, because they let you have your stateful process somewhere.
...but... solutions?
So... we declared that it'd be nice if webapps weren't that complicated, and then we concluded that it's easier for them to be complicated. We're fairly doomed then, right?
It's interesting to think about what these things are trying to do though: have that single process that can push stuff onto the screen, whenever it wants / when the user does something, etc. The client side is the only place on the Web where this reasonably fits.
Imagine the following protocol though!
The Pushable Web Protocol
You download an HTML page. (... so far, it's the same as always.) Your browser renders it.
And then... it keeps a connection open.
There is no JavaScript.
You click on a button. It has an event handler. But... remember: there is no JS. If you click it... it sends the event down the pipe, towards the server.
Some time later, a message emerges from the pipe. "Replace the contents of .main-article-body with this HTML: [... HTML follows.]". The browser replaces the contents and renders it.
There is CSS. Some messages from the pipe modify CSS. That way you can have neat animations.
Messages from the pipe can also tell you to navigate to another page entirely. Or to update parts of the page. You can use CSS selectors for this. Browsers, after all, can find elements fairly well, even if they render them slightly incorrectly sometimes.
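Here's a guess at what a few of those messages could look like, as JSON. The message names and fields are invented, and the "page" is a toy model (a flat selector-to-content mapping) rather than a real DOM; it's only here to show the shape of the thing, not a spec.

```python
import json

def apply_message(page, raw):
    """Apply one server-to-browser message to a toy page model."""
    msg = json.loads(raw)
    if msg["type"] == "replace":
        # "Replace the contents of <selector> with this HTML."
        page["content"][msg["selector"]] = msg["html"]
    elif msg["type"] == "style":
        # Tweak CSS on an element; this is where the neat animations live.
        page["styles"][msg["selector"]] = msg["css"]
    elif msg["type"] == "navigate":
        # Throw the whole page away, go somewhere else entirely.
        page["location"] = msg["url"]
    return page

def click_event(selector):
    """What the browser might send up the pipe when a button is clicked."""
    return json.dumps({"type": "event", "name": "click", "selector": selector})
```

A browser that handles a dozen message types like these correctly is "done", protocol-wise; that's the whole appeal.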
Writing a good browser, of course, is hard. Writing a working one, on the other hand, is not that bad; as long as you implement these 10-20 protocol messages correctly, you're good.
Too utopian?
Well, currently, with HTML5, you can't even load HTML content into a DIV without resorting to JS. You could kinda pretend, with websockets, that the Web does work like this, and still do everything on the server side; latency probably wouldn't get much worse for most things (... after all, JS needs to load content, too), but... you'd need a bit more memory in your servers for sure.
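The websocket-pretend version might look something like this: one stateful object per connected client on the server, answering each browser event with a DOM patch. Everything here is invented for illustration, and a Queue stands in for the actual socket (plus the thin client-side JS shim that would apply the patches).

```python
from queue import Queue

class CounterSession:
    """Server-side state for one connected client; the SPA-ish logic
    lives here instead of in client-side JS."""

    def __init__(self, send):
        self.count = 0
        self.send = send  # pushes a patch "down the pipe" to the client

    def on_event(self, event):
        # The client-side shim would forward DOM events as strings like this.
        if event == "click:#increment":
            self.count += 1
            # (message-kind, selector, new contents)
            self.send(("replace", "#counter", str(self.count)))

pipe = Queue()  # stand-in for a websocket
session = CounterSession(pipe.put)
session.on_event("click:#increment")
session.on_event("click:#increment")
# pipe now holds two patches for the client-side shim to apply
```

The "bit more memory" cost is visible right there: one CounterSession object sticks around per client for as long as they're connected.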
Yet... if this was an actual thing, you could write browsers a lot more easily, reverting back to the model of terminals. (As an extra, you'd get to not write JS, anywhere.)
The good news is that... this looks... kinda doable still?
Proxies!!!
Imagine a proxy server. It has an instance of Chrome running.
When it gets a request, it opens a tab, loads the requested site, runs all the JS, waits until the results are reasonably stable, packs up the resulting HTML / CSS, and sends it down to its client. "... here, this is your web page."
The resulting HTML has event handlers. The kinds that send messages down a pipe.
On the other end of the pipe, there sits the Chrome tab. User events get emulated; JS gets called; requests get sent out. And whenever parts of the page update... the updates go down the pipe, towards our no-JS browser.
It doesn't even have to be a specially designed, pipe-listening no-JS browser. It could be just... IE 4.0 on your retro Win98 machine, running some JS that pushes events into something resembling a pipe, towards the proxy, which then could also rewrite the HTML and CSS to show up on IE 4.0 in a reasonable way.
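The proxy's inner loop is simple enough to sketch. The browser side is stubbed out below (a real version might drive a headless Chrome via something like the DevTools protocol); everything here, FakeTab included, is invented to show the shape: event in, JS runs on the proxy, HTML diff goes back down the pipe.

```python
def proxy_loop(browser, events, send_patch):
    """Relay client events into the browser tab; push resulting diffs back."""
    previous = browser.snapshot()
    for event in events:
        browser.dispatch(event)  # emulate the user event; the tab's JS runs
        current = browser.snapshot()
        # Send down only what actually changed, keyed by selector.
        for selector, html in current.items():
            if previous.get(selector) != html:
                send_patch((selector, html))
        previous = current

class FakeTab:
    """Stand-in for a real Chrome tab: one button that toggles a label."""

    def __init__(self):
        self.dom = {"#label": "off"}

    def dispatch(self, event):
        if event == "click:#toggle":
            self.dom["#label"] = "on" if self.dom["#label"] == "off" else "off"

    def snapshot(self):
        return dict(self.dom)

patches = []
proxy_loop(FakeTab(), ["click:#toggle", "click:#toggle"], patches.append)
```

Diffing full snapshots like this is the naive approach; a real proxy would probably want mutation events from the browser instead, but the traffic going down the pipe looks the same either way.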
If you've heard of Browservice: this is pretty much the same story, but we're using HTML instead of tons of bitmaps, making this take a lot less bandwidth, and the user experience a lot closer to what you'd generally expect from a browser (vs. a remote desktop client).
... comments welcome, either in email or on the (eventual) Mastodon post on Fosstodon.