• 0 Posts
  • 25 Comments
Joined 7 months ago
Cake day: January 26th, 2025



  • SQLite is one of the very few open source projects with a reasonable plan for monetisation.

    • Do you want to use one of the proprietary extensions? Fork out a few thousand. No biggie.
    • Do you operate in a regulated industry (aviation) and need access to the 100% coverage test suite along with a paper trail? Fork out at ”call us” pricing.
    • Is your company insisting that you only use licensed or supported software? Well, you can apparently pay them for a licence to their public domain software.

    Basically, squeeze regulated industries, hard.

    I’m all for open source, but at some point developers should stop acting surprised when people use their work at the edges of the licence terms (looking at you, Mongo, Redis and Hashicorp). As for developers running projects in their free time: maybe start setting boundaries? The only reason companies aren’t paying is that they know they can get away with withholding money, because some sucker will step up and do it for free, ”for the greater good”. Stop letting them have it for free.

    Looks like RedHat is kinda going in this direction (pay to get a paper trail saying a CVE number is patched), and has basically always been squeezing regulated industries. Say what you want about that strategy, it’s at least financially viable long term. (Again, looking at you, Hashicorp, Redis, Mongo, Minio and friends)


  • Computational biochemistry is slowly getting there. AlphaFold was a big breakthrough, and there is plenty of ongoing research simulating more and more.

    We can probably never get rid of animal testing entirely for clinical research, we’ll always need to validate simulations in animals before moving on to humans.

    I do however agree that animal testing outside of clinical research approved by a competent independent ethics committee can fuck right off. (Looking at you, cosmetics industry)



  • It’s 2025. Any internet connected machine on any EOL OS or without updates applied in a timely manner should get nuked from orbit.

    And that goes for all Linux and Android users out there too. Update your bloody phones.

    I have a Windows 10 machine with firewalls, updates and antivirus all turned off, for a single specific software. Works fine, and will keep working fine for a long time, but that installation will never again see a route to the internet.



  • I know it’s possible to run music production on Linux, in fact it’s better than ever.

    But:

    • OP explicitly asks for keeping his Cakewalk and Ableton files working.
    • OP has a small child and just wants a working music production machine with minimal fuss and time investment.
    • Like 95% of people doing any kind of music production (outside of our Linux bubble) will have an iLok-licensed favourite plugin somewhere. Never seen a professional without several.

    Please stop recommending Linux to people who aren’t ready for it yet. Find the people who are, get them over. The rest will follow.




  • For music production on a hobby level? Linux is not what you want.

    The VST availability is abysmal. For a DAW, you can choose between Reaper and Ardour. Both are reasonably good, but without decent third party VSTs you’ll suffer. You won’t get iLok working, you won’t get any commercial plugins working. Your old project files won’t open.

    Now, if you are exclusively working with Airwindows plugins (look it up!) in Reaper, you could get away with a Linux migration. Cakewalk and Ableton? Not a chance in hell.

    Go buy a cheap used 16GB M1 Mac Mini. Music production stuff ”just works”. Given your config, looks like that could be within budget. Or upgrade your old machine to Windows 11, pick your poison.


  • Fine, take the structured approach to ”Linux”:

    • 3-5 years of university studies with a well designed curriculum, including operating systems basics, networking, security, data structures and compilers. This will get you the basic stuff you need to know to further delve into ”Linux”.
    • Add MIT’s ”Missing Semester” online course. This will get you more proficient in practice.
    • Go grab a RedHat certification (or don’t, it’s not worth the paper it’s printed on). This will ensure you have a paper certifying you are sufficiently indoctrinated. It’s also a structured course in Linux.
    • Go do stuff with your newly acquired knowledge and gradually build up your competences.

    If that investment seems a bit steep, take only the last step, build a homelab and take a structured approach to any interesting subjects you encounter doing that.


  • Structured approach to what? You don’t take a structured approach to a hammer, you use it as a tool to accomplish something.

    ”The Linux Programming Interface” is an excellent book, if you are interested in interacting with the Linux kernel directly, but somehow I doubt that’s what OP wants to do. I doubt OP knows what he wants to do.

    Besides, please note that I did encourage taking a structured approach to stuff discovered on the way. But taking a structured approach to ”Linux” is just a bad idea; it’s far too broad a topic.

    Edit: RedHat has their certification programs. These are certainly structured. You’ll get to know RedHat and the RedHat™ certified way of doing things. That’s probably the closest thing to what OP wants. You even get a paper at the end if you pay up. This is not the most efficient way to get proficient.



  • Every time someone says something positive about BTRFS, I’m compelled to check whether RAID6 is usable yet.

    The RAID 5 and RAID 6 modes of Btrfs are fatally flawed, and should not be used for “anything but testing with throw-away data.”

    Alas, no. The Arch wiki still contains the same quote, and friends don’t let friends store data without parity.

    So in the end, the best BTRFS can do right now is running RAID10 for a storage efficiency of 50%. Running dedup on that feels a bit wasteful…
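    The efficiency comparison is plain arithmetic; a quick sketch (illustrative only, not tied to any particular filesystem tool):

    ```python
    def usable_fraction(disks: int, layout: str) -> float:
        """Fraction of raw capacity left as usable storage."""
        if layout == "raid10":  # everything mirrored once
            return 0.5
        if layout == "raid5":   # one disk's worth of parity
            return (disks - 1) / disks
        if layout == "raid6":   # two disks' worth of parity
            return (disks - 2) / disks
        raise ValueError(f"unknown layout: {layout}")

    # With 8 disks: RAID6 would give 75% usable, RAID10 only 50%
    print(usable_fraction(8, "raid6"))   # 0.75
    print(usable_fraction(8, "raid10"))  # 0.5
    ```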

    (Sidenote: ZFS actually runs dedup after per-block compression, so it can only dedup blocks that compress to identical output. That still works, though, unlike when people do user-level .tar.gz-style compression, where one changed byte reshapes the whole stream. Then it’s game over.)
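    The compress-then-dedup interaction can be shown in a few lines, with zlib standing in for the filesystem compressor (purely illustrative; ZFS obviously doesn’t work like this Python sketch):

    ```python
    import zlib

    block = b"A" * 4096

    # Per-block compression: identical blocks yield identical compressed
    # output, so block-level dedup still finds them.
    assert zlib.compress(block) == zlib.compress(block)

    # User-level .tar.gz-style compression: a single differing byte early
    # in the stream changes everything after it, so block dedup on the
    # compressed archives finds (almost) nothing.
    stream_a = zlib.compress(b"X" + block + block)
    stream_b = zlib.compress(b"Y" + block + block)
    print(stream_a == stream_b)  # False
    ```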





  • Lustre 2.16 got released recently, so in a year or so you may actually be able to run commercially supported Lustre with IPv6 support. Yay!

    After that, it’s only a matter of time before it’s finally possible to start testing supercomputers with IPv6! (And finally building a production system with IPv6 a few more years after that, when all the bugs have been squashed)

    Look at the Top500 list. Fucking everyone runs Lustre somewhere, and usually old versions. The US strategic nuclear weapons research is practically all on Lustre. My guess is most weather forecasting globally runs on Lustre. (Oh, and a shitton of AI of course.)

    Up until now, you were stuck with mounting your filesystem over IPv4 (well, kinda IPv4 over RDMA, ish). If you want commercial support for your hundreds of petabytes (you do), you still can’t migrate. And this isn’t a small indie project without testers, it’s commercially supported with billions in revenue, supporting compute hardware for even more money.

    My point with this rambling is that an open source project this widely deployed, this depended upon and this well funded still failed to roll out IPv6 support until now. The long tail of migrating the world to IPv6 hasn’t even begun yet; we are still in the early days. Soon someone will start looking at the widely deployed, depended-upon and badly funded stuff.

    And maybe, if IPv6 hadn’t tried to change a bunch of extra stuff, we’d be further along. (Though, in the specific case of Lustre, I’ll gladly accuse DDN and Whamcloud of being incompetent…)


  • In the real world, addresses are an abstraction to provide knowledge needed to move something from point A to point B. We could use coordinates or refer to the exact office the recipient sits in, but we don’t. Actually, we usually try to keep it at a fairly high level of abstraction.

    The analogy is broken, because in the real world, we don’t want extremely exact addressing and transport without middlemen. We want abstract addresses, with transport routing partially to fully decoupled from the addressing scheme. GP provides a nice argument for IPv4.

    I know how NAT works, but we are working within the constraints of a very broken analogy here. Also yes, internal logistics can and will be the harbinger of unnecessary bureaucracy, especially when implemented correctly.