

Would you look at that, education reducing drug use. Who would have thought? Maybe this is how we reduce the other types of drug use? Nah send them to jail.
Dude that is the worst, I pretty much stopped using my headset because of that.
Everyone else’s but theirs.
I agree, but this is a hot hot take on a place like lemmy.
I’m actually super zealous about this. I literally wash my glasses every time I’m about to watch TV, go to the movies, or do anything with a screen generally. So I wash my glasses a minimum of three times a day.
I mean, that would be my instinct, but what if they have a pressure point there that requires a specific amount of force that happens to be approximately the same as a light tap?
Tap it or punch it? I mean there’s a difference in both movements and intensity.
Noooooo once more AI takes away the jobs of unpaid interns, when will this stop?!
/pol/ is like 90% brown lel
To me it seems you are the one with a black and white view of the world. In your worldview, a tool used for bad = a bad tool. That’s never the case. Tools are tools; they are neither good nor bad. The moral agency lies in the wielder of the tool. Hence my argument: because technologies cannot be uninvented, and all technologies have potentially beneficial uses, we need to focus and shape policy so that AI is used for those beneficial purposes. For example, nukes are deterrents as much as they are destroyers. Would it be better if they had never been invented? Sure, but they were invented, they exist, and once the tech exists you need it in order to remain competitive. Meaning: not being invaded willy-nilly by a nuclear power, like Ukraine is right now, which would not have happened if Ukraine had been a nuclear power itself.
Were they developed and pushed for that explicit reason? No. LLMs are. The only reason that they receive as much funding as they do is that billionaires want to keep everything for themselves, end any democratic rule, and indirectly (and sometimes directly) cause near extinction-level deaths, so that there are fewer people to resist the new feudalism that they want. It sounds insane but it is literally what a number of tech billionaires have stated.
They have not stated it in those terms, that’s your interpretation of it. I am aware of Curtis Yarvin, Thiel et al. But they are hardly the only ones in control of the tech. But that’s not even the point. The tech exists, even if that was the express intention it doesn’t matter because China will keep pursuing the tech. Which means that we will keep pursuing it because otherwise they could get an advantage that could become an existential threat for us. And even if we did stop pursuing it for whatever reason (which would be illogical) the tech would not stop existing in the world as with nukes, except now all the billionaires will hire their AI workers from China instead of the US. Hardly an appealing proposition.
Not this time. As many at the Church of Accelerationism fail to see, we’re at a point where there are practically no social safety nets left (at least in the US), which has not been the case in over a century, and people are actively dying because of anthropogenic climate change, which is something that has never happened in recorded history. When people lost jobs before, they could at least get training or find some other path that would allow them to make a living.
So your solution is ban the tech instead of changing policies? Jesus Christ my guy. Arguments need to be logical you understand that right? This entire worldview and rhetoric is so detached from reality that it is downright absurd.
The problem with the environment, for example, is not that AI exists, but rather that we do not produce enough energy from renewables. Why would the logical solution be to uninvent AI (or ban it entirely, which is essentially the same) instead of changing policy so that energy production comes from renewables? Which, FYI, is happening at a faster rate than ever.
I understand the moral imperative and the lack of patience, but the way the world works is that one thing leads to another; we cannot reach a goal without going through the necessary process to reach it.
动态网自由门 天安門 天安门 法輪功 李洪志 Free Tibet 六四天安門事件 The Tiananmen Square protests of 1989 天安門大屠殺 The Tiananmen Square Massacre 反右派鬥爭 The Anti-Rightist Struggle 大躍進政策 The Great Leap Forward 文化大革命 The Great Proletarian Cultural Revolution 人權 Human Rights 民運 Democratization 自由 Freedom 獨立 Independence 多黨制 Multi-party system 台灣 臺灣 Taiwan Formosa 中華民國 Republic of China 西藏 土伯特 唐古特 Tibet 達賴喇嘛 Dalai Lama 法輪功 Falun Dafa 新疆維吾爾自治區 The Xinjiang Uyghur Autonomous Region 諾貝爾和平獎 Nobel Peace Prize 劉暁波 Liu Xiaobo 民主 言論 思想 反共 反革命 抗議 運動 騷亂 暴亂 騷擾 擾亂 抗暴 平反 維權 示威游行 李洪志 法輪大法 大法弟子 強制斷種 強制堕胎 民族淨化 人體實驗 肅清 胡耀邦 趙紫陽 魏京生 王丹 還政於民 和平演變 激流中國 北京之春 大紀元時報 九評論共産黨 獨裁 專制 壓制 統一 監視 鎮壓 迫害 侵略 掠奪 破壞 拷問 屠殺 活摘器官 誘拐 買賣人口 遊進 走私 毒品 賣淫 春畫 賭博 六合彩 天安門 天安门 法輪功 李洪志 Winnie the Pooh 劉曉波动态网自由门
I mean, they did years ago already. Palantir has been in business for a good while now, working with both Democratic and Republican administrations. The only difference is that the current administration is more transparent about its lack of respect for its own citizens.
Yes indeed. But it is the best solution and not an impossible one, just very difficult.
But that’s not what’s being discussed at all. It feels like you’re not following the comments well or maybe you’re not seeing all of them.
The discussion was whether AI is creative or not, and whether its creativity is materially different from that of a human. Because someone else brought up a very good blog post, I’ve now shifted my stance a little: AI at this point is simply an extension of human creativity, so it does not matter whether it is conscious or not; it’s a tool. No one is coddling it, but this is like saying we should make guns disappear from existence. A technology cannot be uninvented! I wish we could uninvent nukes, for example, but we can’t, and they still proliferate around the world no matter the moral or legal posturing around them, because if you don’t have them you are at a disadvantage. Therefore you need to have them or be at risk of being destroyed by your enemies.
UBI is a billionaire solution to capitalism’s internal contradictions. It simply solidifies their own position further.
The best solution is taxation, but we still want the benefits of capitalism, so the heavy taxation needs to come when a person dies. Everyone is entitled to enjoy the fruits of their innovations and hard work while they live; once they die, it belongs to society.
I agree with that.
That is true of every tool.
Laws, morals, guns, religion, a pointy stick, a hammer, a knife, a computer. All of them able to liberate or oppress.
The gun doesn’t need to exist for me to be shot at; if they didn’t have guns they would use the pointy stick. A technology has no intention of its own; the intention lies in the wielder. Do you not understand how tools work?
So I ask: should we then “freeze” technological progress, so to speak? Because tools can be used for very bad things, we should not develop new tools? Should we raze all of civilization and go back to the caves? How do we stop ourselves from progressing technologically again? We will make tools no matter what; we evolved for that. So is the logical conclusion that we should end the human species so that tools cannot be used for wrong?
Cory’s take is excellent, thanks for bringing this up, because it does highlight what I try to communicate to a lot of people: it’s a tool. It needs a human behind the wheel to produce anything good, and the more effort the human puts into describing what they want, the better the result, because, as Cory so eloquently puts it, it gets imbued with meaning. So I think my position is now something like: AI is not creative by itself; it’s a tool to facilitate the communication of an idea that a human has in their head and lacks the time or skill to communicate properly.
Now, I don’t think this really answers our question of whether the mechanics of an AI synthesizing information are materially different from how a human synthesizes information. Furthermore, it is made murkier by the fact that the “creativity” of it is powered by a human.
Maybe it is a sliding scale? Which is actually sort of aligned with what I was saying: if AI is producing 1:1 reproductions, then it is infringing rights. But if the prompt is a paragraph long, giving it many details about the image or paragraph/song/art/video etc., such that the output is unique because of the specificity achieved in the prompt, then it is clear that not only is the result a product of human creativity, but also that the AI is merely using references in the same way a human does.
The concept is easiest for me to explain with music. If a user describes a song, its length, its BPM, every note and its pitch, would that not be an act of human creativity? In essence the song is being written by the human, and the AI is simply “playing it,” like when a composer writes music and a musician plays it. How creative is a human who replays a song 1:1 as it was written?
What if LLMs came untrained and the end user was responsible for giving them the data? Any books, images, etc. you give it, you must own. That way the AI is even more an extension of you. Would that be the maximally IP-respecting and ethical AI? Possibly, but it puts too much of the burden on the user for it to be useful for 99% of people. It also shifts the responsibility for IP infringement onto the individual, something I do not think anyone is too keen on.
Woof mothafucka
I can’t understand people who believe international law has any legitimacy. If you are not willing to bomb another country to enforce the law, the law doesn’t exist.