Hacker News: ux266478's comments

Type annotations mix poorly with s-expressions imo. Try an ML, which answers the same question of "How do we represent the lambda calculus as a programming language?"

Most devices in that class I see run some vendor flavor of Android or ChromeOS and not Windows, so definitionally speaking they do run Linux out of the box.

Yes but it's a bit academic. The problem is that getting a FOSS distro of Linux onto low-end general-purpose computing hardware is harder now than it was a decade ago. I speak from bitter recent experience.

Oh, I know perfectly well what you mean. The move to the SoC paradigm has serious implications for the future of computing freedom. I can't imagine how we might be able to fight this crap, realistically.

> this simplicity also comes specifically because there's less contributions.

Not entirely. A rather large amount of Linux's mess stems from the fact that it was a hobbyist project in its foundational years. It was never clean or well designed at any point in its life. Go look at Linux 1.2.0 vs. FreeBSD 2.0.

Even when Linux began to gain traction, it had already developed an ingrained culture that didn't particularly care about "nice" code or architectural solutions. The BSDs inherited a culture where such things were prioritized. You're right that things get messier as they get larger, but the gap between the two is much, much larger than scale alone can account for. Things like Linux not respecting nice values run far deeper than surface-level problems like stylistic inconsistencies in the source code.


Why do you think so?

It's sad because it removes users' choice over what OS to run. People who only use Windows are throwing their old computers away.

I think of it more in reverse: the choice being removed is the hardware you can use. It has been the case since the dawn of computing that you start from a use case (which correlates to software, which maps to an operating system) and then look at your options for hardware. The more specific your use case, the more specific your software, which correlates to a specific choice of hardware. There is no, and can be no, "have it all". It's a fundamental principle of mathematics: the postulates you choose radically change the set of proofs you have access to, and the proofs you want entail the axioms and structures you must take on.

Now it can be better or worse, and right now it's never been better. There was a time when your language, your shell and your operating system were specific to the exact model of computer (not just a patched kernel; everything was fully bespoke) and you had a very limited set of peripherals. That we suffer from more esoteric operating systems lagging behind the bleeding edge of extremely complicated peripherals is a very good place to be in. That there's always room for improvement shouldn't be cause for sadness.


> Now it can be better or worse, and right now it's never been better. There was a time when your language, your shell and your operating system were specific to the exact model of computer

No, it is not. There was a small period of time between the 90s and the 2010s where you could grab almost every 386 OS and have it run your hardware mostly decently, and if not, drivers could easily be written from manufacturer specifications. That time was definitely better than what we have today, or what we had before then. I am writing this as someone who has written serial port controller drivers for the BeOS.

> That we suffer from more esoteric operating systems lagging behind the bleeding edge of extremely complicated peripherals is a very good place to be in.

This is the wrong logic: operating systems become esoteric since they can't support the hardware, and hardware becomes complicated and under-specified because there's basically only one operating system to cater to. You may _think_ you have no reason to be sad if you're a user of Windows or Linux, but you have plenty anyway.


> There was a small period of time between the 90s and the 2010s where you could grab almost every 386 OS and have it run your hardware mostly decently

And prior to that, you could grab every OS running on IBM clones and not have to worry about graphics drivers at all, because graphics acceleration wasn't a thing. The era you refer to had already introduced software contingency on hardware within x86. This disparity was further compounded in the mid-2010s as GPUs exploded in complexity and their drivers screamed into tens of millions of lines of code, eclipsing kernels themselves. In any generalized sense, this is indistinguishable from the original introduction of graphics drivers; both were driven by the same process.

An important thing I want to point out as well: you're doing a lot of heavy lifting by limiting the pool to x86 computers, which already concedes a very strong restriction of hardware choice. Don't take that as pedantry; it's a very well hidden assumption that you've accidentally overlooked, or, in case you think it's irrelevant, I'm letting you know that I don't consider it irrelevant in the slightest. When I think of computers, I'm not just thinking of x86 PCs. In the 90s I'm thinking of SGI workstations, Acorns, Amigas, Macs. I'm thinking of mainframes and supercomputers and everything else.

> This is the wrong logic, because operating systems become esoteric since they can't support the hardware

On the contrary, I assure you that this logic rests on faulty premises. As a general principle it's clearly false, since most operating systems (now long forgotten) predate the dynamic by decades; and in the specific context of Linux winning over FreeBSD, it's still not applicable, as that happened smack dab in this era you describe.

> You may _think_ you have no reason to be sad if you're a user of Windows or Linux, but you have plenty anyway.

I'm a user of Linux, FreeBSD and 9Front. I just don't buy (and never have bought) hardware at random. You can reason your way into sadness any which way, but rationalization isn't always meaningfully justified. I just don't find it sad that my second desktop can't have an RX 9000 whatever in it. Where's the cutoff line for that? Why not be sad that I can't jam a Fujitsu ARM processor into a PCIe slot as another type of satellite processor? The incompatibility is of the same effect, but I don't see you lamenting or even considering the latter, as though mounting a processor to a PCB were somehow fundamentally less possible than writing a modern graphics driver.


> And prior to that, you could grab every OS running on IBM clones and not have to worry about graphics drivers at all, because graphics acceleration wasn't a thing. The era you refer to had already introduced software contingency on hardware within x86. This disparity was further compounded in the mid-2010s as GPUs exploded in complexity and their drivers screamed into tens of millions of lines of code, eclipsing kernels themselves.

Not at all; I excluded this early era because you could _not_ be sure to find an OS that would support your graphics card at all, beyond maybe what the BIOS supported. I am talking about the 90s because GPUs already had plenty of non-BIOS-supported features, like multiple CRTCs, weird fixed acceleration pipelines, weird coprocessors with random ISAs, and yet you could still find operating systems with 3rd-party drivers supporting them.

It is a _perfectly_ distinguishable era. See how many OSes support 3D cards of the era, like the i9xx. Heck, FreeBSD itself qualifies, but so do BeOS and many others.

In addition, I am talking about the _kernel_ part, which by any logic should be ridiculously simple. E.g. this is not a compiler for a random ISA or anything like that. It is what in Linux you would call a DRM driver, and the only reason they are complex and millions of LoC is that they are under-specified, by AMD and the rest. Most of the lines of AMD driver code in Linux are the register indices for each and every card submodel (yes, really, header files!), when clearly they would previously have just standardized on one set and abstracted over it. Compare AtomBIOS support in cards from a decade ago with cards from today. It is literally easier today for a 3rd party to implement support for the more complicated parts of the GPU (e.g. running compute code!), which AMD more or less documents, than it is to support basic modesetting as it was in the 00s. This has happened!

Hardware may be more complicated, but interfaces needn't be more complicated. This, I believe, is a symptom, not the cause.

> I just don't find it sad that my second desktop can't have an RX 9000 whatever in it. Where's the cutoff line for that? Why not be sad that I can't jam a Fujitsu ARM processor into a PCIe slot as another type of satellite processor?

You do not find it sad that there is no longer any operating system other than Linux supporting any amount of hardware, simple or not?

Also, you dismiss every non-Linux OS as "esoteric" as a counter-argument to my point, yet you try to use support for genuinely esoteric hardware (which would even be hard to acquire!) as an argument for yours? When I'm complaining that I can no longer rely on FreeBSD, literally the open OS with the 2nd-most hardware support, to support basic hardware (!) from this decade, when in the past I could more or less rely on _all_ BSDs supporting it, as well as a myriad of other OSes, the argument that "oh well, it never supported hardware that is impossible to find in stores anyway, so I don't care" rings pretty hollow.

Certainly, even slightly deviating from the popular hardware has always brought diminishing returns, but today it is much worse, _except_ on Linux.


> but sadly it is saddled with the perception that it is Windows-only, which hasn't been true for a decade

In my experience it does not work very well outside of the sanctioned Linux distributions. Quirky heisenbugs and nonsensical crashes made it virtually unusable for me on Void. I doubt that's changed in the years that have since passed.

> not necessarily a negative, because Windows is a decent OS

Is a language runtime worth an operating system? I think that's a paradigm we left behind in the 1970s, when the two were effectively inseparable (and interwoven with the hardware!). I wouldn't expect someone to switch to a Unix system because they really want a better Haskell experience.

I just don't see any interesting or meaningful reason to care about .NET; I feel much the same way about it as I do about Go. It just isn't something that solves any problem I have, and it doesn't have anything that interests me. Although, effectively, I did try it, so it's a moot point considering that's one of the outcomes you're wishing for.


> In my experience it does not work very well outside of the sanctioned Linux distributions. Quirky heisenbugs and nonsensical crashes made it virtually unusable for me on Void. I doubt that's changed in the years that have since passed.

It's open source. Did you follow the spirit of Linux and file a bug report making as much sense of the crashes as you could? Most OSS only supports as many distros as people are willing to test and file accurate bug reports (and/or scratch the itch themselves and solve it). It seems a bit unfair to expect .NET to magically have a test matrix covering every possible distro when almost nothing else does. (It's what keeps distro maintainers employed: testing other people's apps, too.)

It probably has gotten better since then, for what it is worth. .NET has gotten a lot of hardening on Linux and a lot of companies are relying on Linux servers for .NET apps now.

At the very least there are very tiny Alpine-based containers that run .NET remarkably well and are very well tested, so Docker is always a strong option for .NET today, no matter what Linux distro you want on the "bare metal" running Docker.


> Most OSS only supports as many distros as people are willing to test

Linux distros don't differ too significantly from each other nowadays (systemd plus a different package manager, most of the time), so I'm almost sure this is not the source of the problems.

Nonetheless, I can only add that we have ridiculous slowdowns in some standard library network calls on Linux, and at that point it is just not true that it will "seamlessly run on Linux", unfortunately.


> Did you follow the spirit of Linux and file a bug report making as much sense of the crashes as you could?

No, because the only reason I needed C#/.NET to work was to use an internal tool someone before me had written in C#/.NET. It was not really to explore C# or make it usable. I just threw out the old tool, wrote a new one in Scheme so I could do my job, and moved on with my life. I don't particularly care about this spirit of Linux, and Microsoft's tooling being weirdly fragile isn't my problem. I assume they already know this is an architectural issue, hence why they specify supported distributions. On principle, I believe solving the architectural issue is what they should be concerned with, rather than applying new bandaids.

> Most OSS only supports as many distros as people are willing to test and file accurate bug reports

The problem is that most runtimes and standard libraries don't need a notion of a "supported" distribution. At best, they refer to platforms with pre-made packages while happily pointing other distributions to the git repo. Even complicated, highly abstract and weird ones don't make this kind of distinction. SWI-Prolog and its myriad frameworks (which include a full-blown GNU Emacs clone) work out of the box anywhere. GHC and its RTS work flawlessly out of the box.

I understand (even if I don't feel the same way) why a comprehensive abstraction layer like .NET is evangelized. All the same, I have to consider that it's the product of a multi-trillion-dollar corporation, made to compete with the thing whose marketing tagline is "write once, run anywhere". That only makes the distro dependency stand out in even harsher relief, frankly.

You like .NET? Perfectly fine and valid, and I assume it actually works for you. I'm just pointing out that "cross platform" is contingent on more than kernel and CPU architecture here, which is fairly unusual for this type of software. That's before we get into comparisons with OCaml, which I know is miserable on Windows and thus often considered not something you'd seriously use there. The .NET ecosystem essentially has the same problem outside of Windows, where the grain and expectations of the tooling run counter to the operating system and the usual modus operandi of its users.


I think there is an architectural problem, but not where you seem to expect it. I got caught up in some low-level distro nonsense and drama from smashing my head against horrors deep in autoconf and automake, got a deep look into the Distribution Maintainer lifestyle, and saw how much Linux distributions are individual snowflakes despite presumably all being the same OS. As the old joke goes, "the only stable ABI on Linux is Win32".

.NET has a huge kitchen-sink standard library. Maybe the closer parallel is Python, and Python has had periods where it only supported a few named distributions, too. That's not currently the case, but "how the sausage is made" is still a lot grosser than you might expect, with some Distribution Maintainers maintaining entire forks and a lot of the work not done by Python directly. Python is everywhere because it became one of the favorite shell-scripting languages of Distribution Maintainers. (Which also exacerbated the Python 2 to 3 migration, because entire distros got stuck on 2 while all their shell scripts got rewritten.)

(But also, if you want to compare Java's cross-platform story to .NET's, I think we need a long digression into how many Java runtimes there are and the strange and subtle incompatibilities of different distros' affiliations with one or another. I also made the mistake of trying to use a Java application as a regular application in my youth of accidentally dealing with deep distro incompatibilities. That was also not fun.)

I get it, you don't have to like .NET. I just think you have an inflated view of what "cross platform" means when it comes to Linux. Linux isn't just one platform. Most things are rebuilt from source constantly because the binary interfaces beneath them, especially libc's/glibc's, are constantly shifting like quicksand. See also the messes that are Flatpak and Snap, and how much work they've done to try to build around distro incompatibilities (by building increasingly complex wish-they-were-VMs).


> There are no maintenance costs for the open sea.

There are massive maintenance costs for the open sea with how we utilize it. Maritime security and policing, navigational infrastructure, weather reporting, radio repeaters, international bureaucracy, etc.

Global maritime trade is extremely costly; the cost is simply hidden behind opaque public spending on things you don't think about. In all likelihood it's in the ballpark of a few hundred billion dollars annually, invisible money spent just to keep things running at the scale and reliability that they do.

Now the maritime traffic passing through the Strait of Hormuz may only partially overlap with this spending, but people greatly overestimate just how "cheap" maritime activity actually is.


Very Large Crude Carriers carry ~2 million barrels of oil; Ultra Large Crude Carriers double that. Even if oil went down to $50/bbl, that $2 million fee would amount to a ~2% tax per ship, given their cargo capacity. It's not particularly exorbitant, especially given that the entire stated reason for the toll was to fund rebuilding efforts (the Americans and Israelis did a lot of damage that's been under-reported and ignored).
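As a sanity check on that ~2% figure (a sketch only: the ~2-million-barrel VLCC capacity, $50/bbl price, and $2 million fee are the numbers assumed above):

```python
# Effective "tax rate" of a flat transit fee on a tanker's cargo value.
barrels = 2_000_000        # approximate VLCC capacity, per the figure above
price_per_bbl = 50         # USD, the hypothetical low oil price
fee = 2_000_000            # USD, the proposed per-transit toll

cargo_value = barrels * price_per_bbl   # 100,000,000 USD
tax_rate = fee / cargo_value
print(f"{tax_rate:.1%}")   # prints "2.0%"
```

An ULCC at double the cargo capacity would halve the effective rate to ~1%, and a higher oil price lowers it further.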

This conflict has been an interesting case of watching mass hysteria interact with propaganda at the novel, rapid pace of media in the internet age. The amount of wild conjecture, speculation, and misinformation is the most extreme I've ever seen, eclipsing even the six months of nonsense spurred on by the Russian invasion of Ukraine.


The 2% is the camel's nose. They are establishing that they tax the Strait traffic and there is no longer freedom of navigation. Once it is a done deal, the deal will be altered...

If that’s right, 2% indeed doesn’t sound bad. Especially since it’s supposed to be split with Oman.

The way forward for what, though? It remains to be seen whether this level of infrastructure and complexity has any kind of resilience. I seriously doubt it does, looking back on history. I think it's far more likely that the post-industrial population contraction (which hasn't even really begun), as well as climate change (anthropogenic or not), will see this model of "everybody uses a computer" end up in the junk bin of history. Can't say I'd be sad to see it go. Somebody who has no interest in computers shouldn't ever have to touch one.

> Great job, we took something that was meant to be a next frontier in humanity and let anyone connect with anyone else without gatekeepers/intermediaries

We already had that; it's called shortwave radio. The internet, especially as it's implemented and used, is a terrible way to achieve this. It's service providers the whole way down.


There are definitely problems, but IRC in the 90s had strong ham radio vibes imo.

It would be funny if ham radio came back because the social filter imposed by its limitations wound up mattering more than the technological capability.

The problem is that ham radio also has social filters: you broadcast to everyone, and you don't know who is listening. Encrypted communication is not allowed in ham radio.

You are not supposed to use it for "communication" as on Facebook. You are supposed to use the spectrum to test your gear and keep transmissions short to leave space for others.

I was in a local ham club and passed the licensing exam, but never got a license to transmit, mostly because you are not supposed to chat frivolously over the radio.


> It's service providers the whole way down.

And still likely better than heavily regulated airwaves.


It doesn't have to be anything so extreme as novel work. Frontier models still struggle when faced with moderately complex semantics. They've gotten quite good at gluing dependencies together, but it was a rather disappointing nothingburger watching Claude choke on a large xterm project I tried to give him. He spent a month getting absolutely nowhere, just building stuff out until it was so broken the codebase had to be reset and he'd start over from square one. We've come a long way in certain respects, but honestly we're just as far from the silver bullet as we were 3 years ago (for the shit I care about). I'm already bundling up for the next winter.
