- 232 Posts
- 334 Comments
Even_Adder@lemmy.dbzer0.com (OP, mod) to Stable Diffusion@lemmy.dbzer0.com • OmniSVG: A Unified Scalable Vector Graphics Generation Model • English • 4 · 22 days ago
The project page didn’t have a link to it, but there is a demo on HF.
Even_Adder@lemmy.dbzer0.com (OP, mod) to Stable Diffusion@lemmy.dbzer0.com • OmniSVG: A Unified Scalable Vector Graphics Generation Model • English • 1 · 22 days ago
Your name is crazy. 🤣
Even_Adder@lemmy.dbzer0.com (OP, mod) to Stable Diffusion@lemmy.dbzer0.com • OmniSVG: A Unified Scalable Vector Graphics Generation Model • English • 2 · 22 days ago
It can do IMG to SVG. Check out the right side of this image:
Even_Adder@lemmy.dbzer0.com to 196@lemmy.blahaj.zone • YouTube’s dumbest new feature yet • English • 4 · 2 years ago
That’s pretty funny.
Even_Adder@lemmy.dbzer0.com to 196@lemmy.blahaj.zone • YouTube’s dumbest new feature yet • English • 4 · 2 years ago
Does it work?
I like this one.
Even_Adder@lemmy.dbzer0.com to AI Generated Images@sh.itjust.works • Spider - Broken Pixel? • English • 2 · 2 years ago
You can do a lot of cool things by raising and lowering the strength on LoRAs once you figure it out. There are some good guides in the articles section on Civitai.
Even_Adder@lemmy.dbzer0.com to AI Generated Images@sh.itjust.works • Spider - Broken Pixel? • English • 2 · 2 years ago
You didn’t use any LoRA or anything? That’s impressive.
Even_Adder@lemmy.dbzer0.com to AI Generated Images@sh.itjust.works • Spider - Broken Pixel? • English • 2 · 2 years ago
If you have the original output PNG, the parameters are all stored in it. You can drag the PNG into the prompt text box in A1111 and press the button that reads generation parameters into the UI, or use the PNG Info tab.
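For reference, here is a minimal Python sketch of how those embedded parameters can be read outside the UI. It assumes A1111’s usual behaviour of writing them into a PNG text chunk named "parameters"; the file name is just a placeholder.

```python
# Minimal sketch: read Stable Diffusion generation parameters from a PNG.
# Assumes the A1111 convention of a text chunk keyed "parameters";
# "output.png" is a placeholder file name.
from PIL import Image


def read_generation_parameters(path: str) -> str | None:
    """Return the embedded parameters string, or None if the PNG has none."""
    with Image.open(path) as img:
        # Pillow exposes PNG text chunks through the .info mapping.
        return img.info.get("parameters")


if __name__ == "__main__":
    params = read_generation_parameters("output.png")
    print(params if params else "No parameters found in this PNG.")
```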
Even_Adder@lemmy.dbzer0.com to AI Generated Images@sh.itjust.works • Spider - Broken Pixel? • English • 2 · 2 years ago
What’re the generation parameters? It’ll be a lot easier for people to help if you share them.
Even_Adder@lemmy.dbzer0.com to AI Generated Images@sh.itjust.works • negative prompt • English • 9 · 2 years ago
You’ve opened the gate. You aren’t meant to reverse the polarities.
Even_Adder@lemmy.dbzer0.com to Technology@lemmy.world • 4chan daily challenge sparked deluge of explicit AI Taylor Swift images • English • 105 · 2 years ago
Underestimate them at your peril.
Even_Adder@lemmy.dbzer0.com to Technology@lemmy.world • OpenAI’s GPT Trademark Request Has Been Denied • English • 5 · 2 years ago
You love to see it.
Sousou no Frieren.
Even_Adder@lemmy.dbzer0.com to Memes@lemmy.ml • In the near future, it is projected that contrarians will gain self awareness. • English • 11 · 2 years ago
Bringing physically or mentally disabled people into the discussion does not add or prove anything; I think we both agree they understand and experience the world, as they are conscious beings.
This has, as usual, descended into a discussion about the word “understanding”. We differ in that I actually do consider it mystical to some degree, as it is poorly defined and, to me and others, implies some aspect of consciousness.
I’d appreciate it if you could share evidence to support these claims.
That’s language for you, I’m afraid; it’s a tool to convey concepts that can easily be misinterpreted. As I’ve previously alluded to, this comes down to definitions, and you can’t really argue your point without reducing the complexity of how living things experience the world.
What definitions? Cite them.
I’m not overstating anything (it’s difficult to overstate the complexities of the mind), but I can see how it could be interpreted that way given your propensity to oversimplify all aspects of a conscious being.
Explain how I’m oversimplifying, don’t simply state that I’m doing it.
The burden of proof here rests on your shoulders, and my view is certainly not just a personal belief; it’s the default scientific position. Repeating my point about the definition of “understanding”, which you failed to counter, does not make it an argument from incredulity.
I’ve already provided my proof. I apologize if I missed it, but I haven’t seen your proof yet. Show me the default scientific position.
If you offer your definition of the word “understanding”, I might be able to agree, as long as it does not evoke human or even animal conscious experience. There’s literally no evidence for that, and as we know, extraordinary claims require extraordinary evidence.
I’ve already shared it previously, multiple times. Now, I’m eager to hear any supporting information you might have.
If you have evidence to support your claims, I’d be happy to consider it. However, without any, I won’t be returning to this discussion.
I’ve watched it twice and still don’t fully understand what’s going on.
Even_Adder@lemmy.dbzer0.com to Memes@lemmy.ml • In the near future, it is projected that contrarians will gain self awareness. • English • 11 · 2 years ago
Nope, that does not mean it experienced the world; that’s the reductionist view. It’s reductionist because you said it learnt from a human perspective, which it didn’t. A human’s perspective is much more than a camera and a microphone in a cot, and experience is much more than being able to link words to pictures.
What about people without fully able bodies or minds? Do they not experience the world? Is theirs not a human perspective? Some people experience the world in profoundly unique ways, enriching our understanding of what it means to be human. This highlights the limitations of defining “experience” this way.
Also, please explain how experience is much more than being able to link words to pictures.
In general, you (and others with a similar view) reduce the complexity of words used to describe consciousness, like “understanding”, “experience”, and “perspective”, so they no longer carry the weight they were intended to have. At that point you attribute them to neural networks, which are just categorisation algorithms.
You’re overstating the significance of things like “understanding” and imbuing them with mystical properties without defining what you actually mean. This is an argument from incredulity, repeatedly asserting that neural networks lack “true” understanding without any explanation or evidence. This is a personal belief disguised as a logical or philosophical claim. If a neural network can reliably connect images with their meanings, even for unseen examples, it demonstrates a level of understanding on its own terms.
I don’t think being alive is necessarily essential for understanding, I just can’t think of any examples of non-living things that understand at present. I’d posit that there is something more we are yet to discover about consciousness and the inner workings of living brains that cannot be fully captured in the mathematics of neural networks as yet. Otherwise we’d have already solved the hard problem of consciousness.
Your definitions are remarkably vague and lack clear boundaries. This is a false dilemma; you leave no room for alternatives. Perhaps we haven’t solved the hard problem of consciousness, but neural networks can still exhibit a form of understanding. You also haven’t explained how the hard problem of consciousness is even relevant to this conversation in the first place.
I’m not trying to shift the goalposts; it’s just difficult to convey this concisely without writing a wall of text. Neither of the links you provided is actual evidence for your view, because this isn’t really a discussion that evidence can be provided for. It’s really a philosophical one about the nature of understanding.
Understanding isn’t a mystical concept. We acknowledge understanding in animals when they react meaningfully to the unfamiliar, like a mark on their body. Similarly, when an LLM can assess skill levels in a complex game like chess, it demonstrates a form of understanding, even if it differs from our own. There’s no need to overcomplicate it; like you said, it’s a sliding scale, and both animals and LLMs exhibit it in ways that are relevant.
Here’s a comparison from r*ddit. Hopefully people are able to fine-tune on this. It’s got a way better license than Flux.