I’m not sure either; Win 10/11 are pretty quick to get going, and Ubuntu isn’t much slower. If I have to hard-reset the MBP for work, it’s a nice block of slacker time :)
Behohippy
Gamer, rider, dev. Interested in anything AI.
- 3 Posts
- 10 Comments
Behohippy@lemmy.world to Technology@lemmy.ml • Anybody have good guides on repurposing old 32bit laptops? (English)
9 · 2 years ago
For the really old stuff, I used to do NetBSD. I’m sure their 32-bit x86 support is still top notch.
Behohippy@lemmy.world to Games@lemmy.world • The Weekly 'What are you playing?' Discussion - 20-07-2023 (English)
2 · 2 years ago
Halls of Torment. A $5 game on Steam that’s like a Vampire Survivors clone, but with more RPG elements to it.
Behohippy@lemmy.world to Selfhosted@lemmy.world • Intel is quitting on its adorable, powerful, and upgradable mini NUC computers
4 · 2 years ago
These are amazing. Dell, Lenovo and I think HP made these tiny things, and they were so much easier to get than Pis during the shortage. Plus they’re incredibly fast in comparison.
I’ve got a background in deep learning and I still struggle to understand the attention mechanism. I know it’s a key/value store but I’m not sure what it’s doing to the tensor when it passes through different layers.
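For anyone in the same spot, the mechanics are smaller than they look. Below is a toy single-head scaled dot-product attention in NumPy (my own sketch, with random matrices standing in for learned weights): each token’s query is scored against every token’s key, the softmaxed scores act like a soft key/value lookup, and each output row is a weighted mix of the value vectors. Stacked layers just repeat this, so the tensor keeps its (seq_len, d_model) shape while each position gets re-mixed with context.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for numerical stability
    return e / e.sum(axis=axis, keepdims=True)

def attention(x, Wq, Wk, Wv):
    Q, K, V = x @ Wq, x @ Wk, x @ Wv           # project tokens into queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # how much each token attends to each other token
    weights = softmax(scores, axis=-1)         # rows sum to 1: a soft lookup per token
    return weights @ V                         # each output row is a weighted mix of values

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
x = rng.normal(size=(seq_len, d_model))        # toy "token embeddings"
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(attention(x, Wq, Wk, Wv).shape)          # (4, 8): same shape in, same shape out
```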
Behohippy@lemmy.world to Android@lemmy.ml • New android podcast "android faithful" to fill the hole the cancelled twit show, All about Android, left in our hearts.
4 · 2 years ago
Subscribed. That last episode of AAA was heartbreaking.
Behohippy@lemmy.world to Rimworld@lemmy.world • Are there any Vanilla Expanded mods you avoid? (English)
2 · 2 years ago
Same. I loved the idea of what VE does, but playing the game was just a confusing mess for me. I stick to the same 8 mods I always use.
Behohippy@lemmy.world to AI@lemmy.ml • The AI Feedback Loop: Researchers Warn Of "Model Collapse" As AI Trains on AI-Generated Content (English)
1 · 2 years ago
Any data sets produced before 2022 will be very valuable compared to anything after. Maybe the only way we avoid this is to stick to training LLMs on older data and prompt-inject anything newer, rather than training on it.
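If that route won out, a sketch of it might look like the toy below (every name here is hypothetical, not any real API): the weights stay frozen at the cutoff, and anything newer rides along in the context window instead of being trained in.

```python
# Hedged sketch (all names made up): keep the model's weights frozen at a
# training cutoff, and pass anything newer through the prompt at inference time.
CUTOFF_YEAR = 2022

def prompt_with_recent_facts(question: str, recent_facts: list[str]) -> str:
    facts = "\n".join(f"- {fact}" for fact in recent_facts)  # post-cutoff info rides in context
    return (
        f"Training data cutoff: {CUTOFF_YEAR}.\n"
        f"Verified post-cutoff information:\n{facts}\n\n"
        f"Question: {question}"
    )

print(prompt_with_recent_facts("What changed in 2023?", ["Example post-2022 fact goes here."]))
```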
Looks like the original base is suffering from some toxic fallout… might be a while. Enjoy building a new colony here!


The advancements in this space have moved so fast that it’s hard to form a predictive model of where we’ll end up, or how fast we’ll get there.
Meta releasing LLaMA produced a ton of open-source innovation, showing you could run models nearly at the level of ChatGPT with far fewer parameters, on smaller and smaller hardware. At the same time, almost every large company you can think of has made integrating generative AI a high strategic priority, with blank-cheque budgets. Whole industries (also deeply funded) are popping up around solving context-window memory deficiencies, prompt stuffing for better steerability, and better summarization and embedding of your personal or corporate data.
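As a rough illustration of what those prompt-stuffing/embedding products are doing under the hood, here’s a toy retrieval loop in NumPy. embed() is a random stand-in for a real embedding model, and the documents are made up; the real part is the shape of the pipeline: embed the corpus, find the nearest documents to the question, and stuff them into the prompt.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical stand-in: a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)                 # unit vector, so dot product = cosine similarity

docs = [
    "meeting notes from last week",              # made-up corporate documents
    "2023 product roadmap",
    "old design doc",
]
doc_vecs = np.stack([embed(d) for d in docs])    # embed the corpus once, up front

def build_prompt(question: str, k: int = 2) -> str:
    sims = doc_vecs @ embed(question)            # similarity of the question to each doc
    top = np.argsort(sims)[::-1][:k]             # indices of the k closest docs
    context = "\n".join(docs[i] for i in top)
    return f"Context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What is on the roadmap?"))
```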
We’re going to see LLM tech everywhere in everything, even if it makes no sense and becomes annoying. After a few years, maybe it’ll seem normal to have a conversation with your shoes?