Blóðbók
- 3 Posts
- 40 Comments
Blóðbók@slrpnk.net to Technology@lemmy.world • Sam Altman Says AI Using Too Much Energy, Will Require Breakthrough Energy Source • English • 31 • 2 years ago
It’s not so much the hardware as it is the software and utilisation, and by software I don’t necessarily mean any specific algorithm, because I know they give much thought to optimisation strategies when it comes to implementation and design of machine learning architectures. What I mean by software is the full stack considered as a whole, and by utilisation I mean the way services advertise and make use of ill-suited architectures.
The full stack consists of general purpose computing devices with an unreasonable number of layers of abstraction between the hardware and the languages used in implementations of machine learning. A lot of this stuff is written in Python! While algorithmic complexity is naturally a major factor, how it is compiled and executed matters a lot, too.
Once AI implementations stabilise, the theoretically most energy efficient way to run it would be on custom hardware made to only run that code, and that code would be written in the lowest possible level of abstraction. The closer we get to the metal (or the closer the metal gets to our program), the more efficient we can make it go. I don’t think we take bespoke hardware seriously enough; we’re stuck in this mindset of everything being general-purpose.
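To make the abstraction-overhead point concrete, here is a small micro-benchmark of my own (not from the comment): the same reduction computed once in pure Python bytecode and once delegated to the C-implemented builtin. The exact numbers vary by machine; the point is only that each layer of abstraction between the program and the metal has a cost.

```python
# Compare the same computation at two abstraction levels:
# a pure-Python loop (interpreted bytecode) vs the builtin sum (C level).
import timeit

data = list(range(100_000))

def python_sum(xs):
    total = 0
    for x in xs:        # each iteration runs through the interpreter loop
        total += x
    return total

t_py = timeit.timeit(lambda: python_sum(data), number=50)
t_c = timeit.timeit(lambda: sum(data), number=50)   # dispatches to C

print(f"pure-Python loop: {t_py:.3f}s, builtin sum: {t_c:.3f}s")
```

On a typical CPython install the builtin is several times faster for identical results, which is the "closer to the metal" effect in miniature; custom hardware extends the same logic below the C level.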
As for utilisation: LLMs are not fit for, or even capable of, dealing with logical problems or anything involving reasoning based on knowledge; they can’t even reliably regurgitate knowledge. Yet, as far as I can tell, this constitutes a significant portion of their current use.
If the usage of LLMs was reserved for solving linguistic problems, then we wouldn’t be wasting so much energy generating text and expecting it to contain wisdom. A language model should serve as a surface layer – an interface – on top of bespoke tools, including other domain-specific types of models. I know we’re seeing this idea being iterated on, but I don’t see this being pushed nearly enough.[1]
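The "surface layer" idea above can be sketched in a few lines. Everything here is my own hypothetical illustration (the names `parse_intent`, `TOOLS`, and the stub tools are invented, not any real API): the language model’s only job is to map language to a tool call, and the bespoke tools produce the actual answers.

```python
# Sketch: a language model as an interface over domain-specific tools.
# The "model" is stubbed out with a trivial keyword router.
from typing import Callable

def solve_arithmetic(expr: str) -> str:
    # a domain-specific tool: exact evaluation, not generated text
    # (toy only; real code would use a proper expression parser)
    return str(eval(expr, {"__builtins__": {}}))

def lookup_fact(topic: str) -> str:
    # a stand-in for a curated knowledge base
    facts = {"boiling point of water": "100 °C at 1 atm"}
    return facts.get(topic, "unknown")

TOOLS: dict[str, Callable[[str], str]] = {
    "arithmetic": solve_arithmetic,
    "fact": lookup_fact,
}

def parse_intent(user_text: str) -> tuple[str, str]:
    # stand-in for the LM's only job: map language -> (tool, arguments)
    if any(op in user_text for op in "+-*/"):
        return "arithmetic", user_text
    return "fact", user_text

def answer(user_text: str) -> str:
    tool, args = parse_intent(user_text)
    return TOOLS[tool](args)

print(answer("12*7"))   # the arithmetic tool answers, not the language model
```

The design point is the division of labour: the linguistic layer is cheap and fallible, but correctness lives in the tools behind it, so a wrong routing is recoverable in a way a hallucinated answer is not.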
When it comes to image generation models, I think it’s wrong to focus on generating derivative art/remixes of existing works instead of on tools to help artists express themselves. All these image generation sites we have now consume so much power just so that artistically wanting people can generate 20 versions (give or take an order of magnitude) of the same generic thing. I would like to see AI technology made specifically for integration into professional workflows and tools, enabling creative people to enhance and iterate on their work through specific instructions.[2] The AI we have now are made for people who can’t tell (or don’t care about) the difference between remixing and creating and just want to tell the computer to make something nice so they can use it to sell their products.
The end result in all these cases is that fewer people can live off of being creative and/or knowledgeable while energy consumption spikes as computers generate shitty substitutes. After all, capitalism is all about efficient allocation of resources. Just so happens that quality (of life; art; anything) is inefficient and exploiting the planet is cheap.
For example, why does OpenAI gate external tool integration behind a payment plan while offering simple text generation for free? That just encourages people to rely on text generation for all kinds of tasks it’s not suitable for. Other examples include companies offering AI “assistants” or even AI “teachers”(!), all of which are incapable of even remembering the topic being discussed 2 minutes into a conversation. ↩︎
I get incredibly frustrated when I try to use image generation tools because I go into it with a vision, but since the models are incapable of creating anything new based on actual concepts I only ever end up with something incredibly artistically compromised and derivative. I can generate hundreds of images based on various contortions of the same prompt, reference image, masking, etc and still not get what I want. THAT is inefficient use of resources, and it’s all because the tools are just not made to help me do art. ↩︎
Blóðbók@slrpnk.net to Technology@lemmy.world • "There are thousands of volunteers who donated their labour to Duo... Bit by bit all of our work was hidden from us as Duolingo became a publicly-traded company." • English • 5 • 2 years ago

> It’s not like corporations are some animal who can’t help but be who they are.
That’s exactly what they are. They are composed of people only to the extent that a car is composed of wheels.
If it’s otherwise in working order, a flat tire will be replaced and the car will be going wherever it’s meant to go. Profit city is where all roads lead to, and a flat tire (or four) can only delay for so long.
If you want to hold corporations to moral standards, you have to change the incentives (destinations) and restructure corporations to be actually owned and controlled by people who are then held to those moral standards (put more of the car into the wheels).
I think of it as a problem of “attention dysregulation”. At least that feels like a closer description, since attention is a very central component in many of the difficulties we experience - it just can’t be reduced to a “deficit” (whatever that could even mean).
You probably know this already, but I like to (re)phrase existing knowledge in several ways, even if just for myself, because one can know something in more than one way: attention regulation is how a brain prioritises, filters, and emphasises information about the external world, and I believe it also plays a big (and interesting) part in executive function.
I understand the general concept of ‘attention’ as an allocation/distribution mechanism for cognitive resources, so calling it “deficient” feels a bit like a category error. It’s like reducing the challenges faced by a governing body responsible for mismanaging an economy to an “economy deficit problem”. It just doesn’t make much sense, even if the end result looks like a deficit of resources (analogous to focus) in some areas.
Blóðbók@slrpnk.net to Technology@lemmy.world • Pika Labs new generative AI video tool unveiled — and it looks like a big deal • English • 69 • 2 years ago
Is this going to be available for free? And if so, to what extent? I’m not paying for AI, but it would be cool to try it out.
I’ve also been burnt a few times by registering for some “free” AI service only to realise after putting in some actual effort into trying to create something that literally any actual value you might extract from it is gated behind a payment plan. This was the case when I tried generating voices, for example: spend an hour crafting something I like; generating any actual audio with it? Pay up. It’s like trying out a free MMO where you spend a long time creating your character just the way you want it only to be greeted by “trial over - subscribe now!”
Blóðbók@slrpnk.net to Technology@lemmy.world • Elon Musk’s Grok Twitter AI Is Actually ‘Woke,’ Hilarity Ensues • English • 2 • 2 years ago
True, I could have identified those as suggested solutions (albeit rather broad and unspecific, which is perfectly fine). I also sympathise on both counts.
I have this personal intuition that a lot of social friction could be mitigated if we took some inspiration from the principle of locality physics when designing social networks and structuring society in general. The idea of locality in physics is that physical systems interact only with their adjacent neighbours. The analogous social principle I have in mind is that interactions between people that understand and respect each other should be facilitated and emphasised, and (direct) interactions between people far apart from each other on (some notion of) a “compatibility spectrum” should be limited and de-emphasised. The idea here is that this would enable political and cultural ideas to be propagated and shared with proportionate friction, resulting in a gradual dissipation of truly incompatible views and norms, which would hopefully reduce polarisation.
The way it works today is that people are constantly exposed directly to strangers’ unpalatable ideas and cultures, and there is zero reason for someone to seriously consider any of that since no trust or understanding exists between the (often largely unconsenting) audience and the (often loud) proponents. If some sentiment was instead communicated to a person after having passed through a series of increasingly trusted people (and after likely having undergone some revisions and filtering), that would make the person more likely to consider and extract value from it, and that would bring them a little bit closer to the opposite end of that chain.
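The locality idea above can be illustrated with a toy diffusion model of my own construction (nothing here comes from the comment itself): agents hold opinions on a line, each interacts only with immediate neighbours, and each exchange nudges them slightly toward adjacent views. Repeated local mixing dissipates the gap gradually rather than confronting everyone with everyone at once.

```python
# Toy sketch of "social locality": opinions diffuse only between
# adjacent agents, never jumping directly across the spectrum.
def step(opinions: list[float], rate: float = 0.1) -> list[float]:
    new = opinions[:]
    for i, x in enumerate(opinions):
        # immediate neighbours only: left (if any) and right (if any)
        neighbours = opinions[max(0, i - 1):i] + opinions[i + 1:i + 2]
        avg = sum(neighbours) / len(neighbours)
        new[i] = x + rate * (avg - x)   # small nudge toward adjacent views
    return new

ops = [0.0, 0.0, 1.0, 1.0]   # two initially incompatible camps
for _ in range(200):
    ops = step(ops)
spread = max(ops) - min(ops)
print(f"opinion spread after local mixing: {spread:.4f}")
```

The spread shrinks toward zero over many small steps, which is the "gradual dissipation" being argued for; a fully connected version would instead expose each agent to the far end of the spectrum in a single step.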
Anyway, those are my musings on this matter.
Blóðbók@slrpnk.net to Technology@lemmy.world • Elon Musk’s Grok Twitter AI Is Actually ‘Woke,’ Hilarity Ensues • English • 1 • 2 years ago
We don’t have to prove that the brain isn’t puppeted from some external realm of “consciousness” in order to say we can be quite confident that it isn’t, because positing that there is such a thing as free will in the traditional notion of the term is magical thinking, which most of us might agree isn’t particularly respectable.
What we can do is take a compatibilist approach and say there is something that is “effectively indeterministic” about human decision making, because we can’t ever ourselves predict our own actions any faster than we observe them. I don’t have any moral contribution to make here; I just wanted to add this reflection.
Blóðbók@slrpnk.net to Technology@lemmy.world • Elon Musk’s Grok Twitter AI Is Actually ‘Woke,’ Hilarity Ensues • English • 21 • 2 years ago
I don’t see em suggesting any particular solutions, so I’m not sure what you are criticizing or why you think it would result in Elon remaining at large any more than from figurative fruit throwing.
I agree that social repercussions have a place, but I also agree that it is only “good enough” for many – but not all – situations. Seeking a more sophisticated approach based on studying and identifying potential root causes seems to me like it would be more sustainable, not to mention an opportunity for individual growth.
Blóðbók@slrpnk.net to Technology@lemmy.world • Google announces April 2024 shutdown date for Google Podcasts • English • 1 • 2 years ago
I don’t know if it’s actually a setting; I’ve only noticed the behaviour. Neat little feature!
Blóðbók@slrpnk.net to Technology@lemmy.world • Google announces April 2024 shutdown date for Google Podcasts • English • 4 • 2 years ago
What you describe is also a feature of AntennaPod.
Edit: AntennaPod is also open source.
Blóðbók@slrpnk.net to Science@lemmy.ml • Antimatter falls down, not up: CERN experiment confirms theory • 1 • 2 years ago
deleted by creator
Blóðbók@slrpnk.net to Gaming@beehaw.org • Why Baldur’s Gate III is an accidental PS5 console exclusive • 4 • 2 years ago
and a console
Blóðbók@slrpnk.net to Technology@lemmy.ml • The only way to avoid Grammarly using your data for AI is to pay for 500 accounts • 1 • 2 years ago
Thank you for this! I’ve been reluctantly using Grammarly because I thought there were no alternatives.
Blóðbók@slrpnk.net to Meta (slrpnk.net)@slrpnk.net • SLRPNK community discussion - August 2023 • 6 • 2 years ago
Maybe c/crumbling
edit: c/rumbling
Blóðbók@slrpnk.net to Autism@lemmy.world • Thoughts on why small talk is so uniquely painful • English • 4 • 2 years ago
Your description reflects my experience quite accurately (and I am diagnosed, AuDD). I usually try to be vague on purpose when answering how I am, or give non-answers (such as “I [simply] am”).
Blóðbók@slrpnk.net to Technology@beehaw.org • [Fortune] Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds • 3 • 2 years ago
I’ve unironically done something like this
Blóðbók@slrpnk.net to Technology@beehaw.org • [Fortune] Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds • 6 • 2 years ago
I see. Thanks for clarifying
Blóðbók@slrpnk.net to Asklemmy@lemmy.ml • What is the definition of capitalism? Is it compatible with people owning the things they produce? • English • 41 • 2 years ago
Specifically conflating private and personal property.
Blóðbók@slrpnk.net to Technology@beehaw.org • [Fortune] Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds • 19 • 2 years ago
You shouldn’t need to be a prompt engineer just to get answers to math questions that are not blatantly wrong. I believe the prompts are included in the paper so that you don’t have to guess if they were badly formatted.
I updated to 1.19 and have two app updates listed as available. They are not updated automatically and there is no F-Droid setting for background updates that I can find. In order to install the two aforementioned updates I am required to first download them and then, for each one, I have to press install and then confirm on a popup.
To be fair, those updates were available before I updated F-Droid, so whatever mechanism is supposed to trigger may not have fired, since the updates were not new?
Nevertheless, I am excited about the prospect, because updating my apps has been such a pain that I constantly procrastinate dealing with it. Sitting with the phone in front of me, clicking a few times, waiting, clicking a few times, waiting, then repeating… never leaving the app and making sure it doesn’t fall asleep… it is not a fun activity.