Formerly known as arc@lemm.ee / server shuts down end June 25

  • 0 Posts
  • 28 Comments
Joined 2 months ago
Cake day: June 10th, 2025

  • I just took an old Optiplex with a GTX 1650 and got it going with Ubuntu 24.04, and my experience was mostly okay, but I saw a number of issues which could confound a newbie. Firstly, I had to go to the command line and run ubuntu-drivers autoinstall because the card wasn’t set up properly. If I hadn’t, games wouldn’t have run properly. But then I was able to install Steam and get some games going. Acceleration looked okay and I tested games running under Windows emulation and natively with some success - however there was a long delay launching some games, like it was having to compile shaders or something. Still, when they worked they seemed to work well.

    The most egregious issue I had is that Ubuntu defaults to an X11 desktop: the desktop is slightly off but the games run well. If I change to a Wayland desktop, the desktop is buttery smooth but the games are very choppy. I suspect the driver for this old card just doesn’t work properly with the window manager in that mode, and that the compositor is either not giving the game a proper surface to render into or is somehow interfering with performance.


  • I don’t really buy the “small incompatibilities” argument. The project strives for total compatibility, even down to the most esoteric parameters that nobody has ever heard of. And even that seems like overkill to me - there are alternative implementations of the core commands on Linux and other *nix systems like BSD, Solaris etc. where the compatibility is way worse. For example, busybox is used in embedded Linux and in containerized images like Alpine Linux.

    It also seems a bit rich to complain that uutils might get extended. GNU coreutils came into being because of dissatisfaction with the commands that shipped with the *nix systems of the day. Same for bash (vs sh), GNU cc (vs cc), GNU emacs (vs emacs) and so on. Was there somebody back then complaining about devs “spamming commits” that extended functionality?

    And the idea that other Rust applications will only work with uutils is absurd. They’ll test the capabilities of the OS they’re built to run on, either at build time with feature flags or at runtime by probing commands, just like any other high level application.
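
    To make that concrete, here’s a minimal sketch of the runtime-probing idea in Rust. The choice of flag (--reflink=auto) and the version-string matching are purely illustrative assumptions on my part, not anything GNU or uutils document as a detection mechanism:

    ```rust
    // Minimal sketch of runtime probing: ask the installed `cp` which
    // implementation it is before relying on a GNU-style extension. The flag
    // (--reflink=auto) and the strings matched below are illustrative
    // assumptions; the point is that a program tests the tool it finds
    // rather than assuming a particular coreutils implementation.
    use std::process::Command;

    fn cp_looks_like_gnu_or_uutils() -> bool {
        // GNU coreutils prints e.g. "cp (GNU coreutils) 9.4" as the first
        // line of `cp --version`; uutils identifies itself similarly.
        // Treat any failure to run the command as "unknown implementation".
        let out = match Command::new("cp").arg("--version").output() {
            Ok(out) => out,
            Err(_) => return false,
        };
        let text = String::from_utf8_lossy(&out.stdout);
        let first = text.lines().next().unwrap_or("");
        first.contains("GNU coreutils") || first.contains("uutils")
    }

    fn main() {
        if cp_looks_like_gnu_or_uutils() {
            println!("cp advertises GNU/uutils behaviour; extensions like --reflink=auto are worth trying");
        } else {
            println!("unknown cp implementation; stick to portable POSIX options");
        }
    }
    ```

    The same idea works at build time: a Cargo feature flag can simply compile the probe and its fallback path out entirely.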

    As for the license, MIT is already used for plenty of other things in a typical Linux dist, e.g. X11.

    The biggest point of concern for a Rust rewrite is dependency integrity. Rust uses cargo to manage dependencies, and absolutely everything in the Cargo.toml/Cargo.lock files has to be reviewed. The crates.io repository is beginning to support package signing and The Update Framework initiative, but every single dependency of uutils would need to be carefully reviewed and signature-validated for it to be considered trustworthy. Basically everything needs to get locked down, and wherever possible dependencies should be expunged altogether.
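
    Just to give a sense of the scale of that review, here’s a rough sketch (mine, not anything the uutils project ships) that walks a Cargo.lock and prints every pinned package with its checksum so a human can eyeball the full list. It assumes the standard lockfile layout of [[package]] tables, and listing checksums is only the first step of an audit, not a substitute for one:

    ```rust
    // Sketch: enumerate every dependency pinned in Cargo.lock so each entry
    // (name, version, checksum) can be reviewed. Assumes the usual lockfile
    // layout of `[[package]]` tables with quoted values; a real audit would
    // verify the checksums against the registry and review the code itself.
    use std::fs;

    // Pull the quoted value out of a line like `name = "foo"`.
    fn field<'a>(line: &'a str, key: &str) -> Option<&'a str> {
        line.strip_prefix(key)?.strip_prefix(" = \"")?.strip_suffix('"')
    }

    fn main() -> std::io::Result<()> {
        let lock = fs::read_to_string("Cargo.lock")?;
        let (mut name, mut version) = (None, None);
        for line in lock.lines().map(str::trim) {
            if line == "[[package]]" {
                // New package table: reset the fields we collect.
                name = None;
                version = None;
            } else if let Some(v) = field(line, "name") {
                name = Some(v);
            } else if let Some(v) = field(line, "version") {
                version = Some(v);
            } else if let Some(sum) = field(line, "checksum") {
                // Registry packages carry a sha256 of the .crate file.
                println!("{} {}  sha256:{}", name.unwrap_or("?"), version.unwrap_or("?"), sum);
            }
        }
        Ok(())
    }
    ```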






  • My experience with Linux with Nvidia drivers was basically: hey, execute this “.run” file and you get drivers. Okay, that worked, but then if the kernel updated, the drivers broke and had to be reinstalled. And if the dist upgraded to a new version, the drivers broke completely. And Nvidia gave up providing drivers at all for their older GPUs, so I was stuck with Nouveau, which is better than nothing but useless for gaming.

    Conversely, some dists are supported by graphics manufacturers with proper packages but there is always that gap where the driver dependencies and the kernel dependencies are out of sync. Or the graphics driver only works on the last couple of dists and support disappears after that. Or you upgrade the dist and then discover there are no drivers for it yet.

    I know it rankles some purists, but really there should be a long-term, versioned ABI for graphics drivers on Linux. There sort of is one with Gallium3D, but it’s still not supported properly by all vendors.


  • The success of Steam Deck has helped a lot. Prior to that, Linux ports tended to be very perfunctory and they weren’t tested or supported very well. I guess that now there are actual Linux gamers (via Steam Deck), that support has improved. That said, I think outside of Steam Deck and SteamOS, your experience of gaming is going to be extremely dependent on your GPU, driver support and a number of other factors. Things are far more likely to work well on Windows than they are on Linux.


  • No, YOU don’t understand end-to-end encryption, and you don’t understand browsers. You say you could “write down a base64 encoded binary blob on a website”. Yes you could, and how do you decrypt it? The answer is with a key (asymmetric or symmetric) that the recipient must hold in the memory of the receiving software, i.e. the browser that the filter has already intercepted and compromised. So “moar layers” is not protection, since the filter could inject any JS it likes to reveal the inner key and/or conversation. It could do this ad nauseam, and the only limit is how determined the filter is.

    But this is also a nonsense argument on a practical level. The problem is kids connecting to adult websites, or to websites with some adult content. The filter doesn’t need to do much: either block a domain outright, or do some DPI to determine from the path what part of the website the browser is calling. The government thinks it reasonable that every single website that potentially hosts adult content should capture proof of identity from adults. I contend that the real issue is kids having access to those websites at all, and that proxies can and would be a far more effective way to control it without imposing on adults. No solution is perfect, but a filter is a far more effective approach than entrusting some random website with personal information. Only this week somebody found an app that was storing IDs in a public S3 bucket, compromising all of its users. Multiply that by the hundreds or thousands of websites all needing verification, and this will not be the last compromise by any means.











  • Actually it can be done and is being done. Appliances like FortiGate firewalls can do deep packet inspection on encrypted connections by replacing certs with their own and doing man-in-the-middle inspection. It requires that the browser trust a root CA cert so the certs issued by the proxy are accepted, but that’s about it. Filtering software could onboard a new device by installing that root cert on it.

    And if FortiGate can do it, then any filtering software can too. E.g. a kid uses their filtered device to go to reddit.com; the filter software substitutes Reddit’s cert with its own and proxies the connection. Then it looks at the paths to see whether the kid is visiting an innocuous group or an 18+ group. So basic filtering rules could be (roughly sketched in code after the list):

    1. If the domain is entirely blocked, just block it.
    2. If the domain hosts mixed content, do deep packet inspection and block if necessary.
    3. If the domain is innocuous, allow it through.
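
    For illustration only, those three rules might look something like the sketch below. The domains and path prefixes are made up, and in reality this decision would run inside the TLS-intercepting proxy described above, with the lists coming from actual policy data:

    ```rust
    // Rough sketch of the three filtering rules above. The example domains
    // and path prefixes are invented; a real filter would load its policy
    // lists from elsewhere and run this on the proxied, decrypted request.
    #[derive(Debug, PartialEq)]
    enum DomainRule {
        Blocked, // rule 1: block the whole domain outright
        Mixed,   // rule 2: proxy the connection and inspect individual paths
        Allowed, // rule 3: pass through untouched
    }

    fn domain_rule(domain: &str) -> DomainRule {
        // Hypothetical policy lists, for illustration only.
        const BLOCKED: &[&str] = &["adult-site.example"];
        const MIXED: &[&str] = &["reddit.com"];
        if BLOCKED.contains(&domain) {
            DomainRule::Blocked
        } else if MIXED.contains(&domain) {
            DomainRule::Mixed
        } else {
            DomainRule::Allowed
        }
    }

    // Only reached for Mixed domains, once the proxy can see the path
    // actually being requested.
    fn allow_path(path: &str) -> bool {
        const BLOCKED_PREFIXES: &[&str] = &["/over18", "/r/nsfw"]; // made-up examples
        !BLOCKED_PREFIXES.iter().any(|p| path.starts_with(*p))
    }

    fn main() {
        assert_eq!(domain_rule("example.org"), DomainRule::Allowed);
        assert_eq!(domain_rule("adult-site.example"), DomainRule::Blocked);
        assert_eq!(domain_rule("reddit.com"), DomainRule::Mixed);
        assert!(allow_path("/r/aww"));
        assert!(!allow_path("/over18?dest=frontpage"));
    }
    ```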

    This is eminently possible for an ISP to implement, and to do so in a way that it ONLY happens when a user opts in on a registered device, while leaving everything open for users who did not opt in.

    And like I said, this is an ISP problem to figure out. The government could have set the rules and walked away. And as a solution it would be far simpler than requiring every website to implement age verification.


  • That’s a problem for ISPs and content providers to figure out. I don’t see why the government has to care other than laying out the ground rules: you must offer and implement a parental filter, for free, for people who want it as part of your service. If ISPs have to do deep packet inspection and proxy certs for protected devices / accounts, then that’s what they’ll have to do.

    As far as the government is concerned it’s not their problem. They’ve said what should happen, and the choice is offered without being assholes to people over 18 who are exercising their right to use the internet as they see fit.