• 1 Post
  • 64 Comments
Joined 2 years ago
Cake day: July 16th, 2023

  • scratchee@feddit.uktoTechnology@lemmy.worldWhat If There’s No AGI?
    1 day ago

    Modern LLMs were a left-field development.

    Most AI research has hit serious and obvious scaling problems: approaches did well at first, but scaling up the training didn’t significantly improve the results. LLMs went from more of the same to a gold rush the day it was revealed that they scaled “well” (relatively speaking). They then went through orders-of-magnitude improvements very quickly, simply because they could (unlike previous AI models, which wouldn’t have benefited from that extra scale).

    We’ve had chatbots for decades, but with the same low capability ceiling that most other old techniques had; they really were a different beast from modern LLMs with their stupidly excessive training regimes.



  • The same logic would suggest we’d never compete with an eyeball, but we went from 10-minute photos to outperforming most of the eye’s abilities in cheap consumer hardware in little more than a century.

    And the eye is almost as crucial to survival as the brain.

    That said, I do agree it seems likely we’ll borrow from biology on the computing problem. Brains have very impressive parallelism despite how terrible the design of neurons is. If we could grow a brain in the lab, that would be very useful indeed. It would be more useful still if we could skip the chemical messaging somehow and get signals around at a speed that wasn’t embarrassingly slow; then we’d be way ahead of biology in the hardware performance game and would have a real chance of coming up with something like AGI, even without the level of problem-solving that billions of years of evolution can provide.


  • Oh sure, the current AI craze is just a hype train based on one seemingly effective trick.

    We have outperformed biology in a number of areas, and cannot compete in a number of others (yet), so at the moment I see it as a bit of a wash whether we’re better engineers than nature or worse.

    The brain looks to be a tricky thing to compete with, but it has some really big limitations we don’t need to deal with (chemical neuron messaging really sucks by most measures).

    So yeah, I’m not saying we’ll achieve AGI in the next few decades (and not with just LLMs, for sure), but I’d be surprised if we don’t figure something out once we get computers a couple of orders of magnitude faster, so that more than a handful of companies can afford to experiment.



  • scratchee@feddit.uktoTechnology@lemmy.worldWhat If There’s No AGI?
    3 days ago

    Possible, but seems unlikely.

    Evolution managed it, and evolution isn’t as smart as us; it just got many, many chances to guess right.

    If we can’t figure it out ourselves, we can find a way to get lucky like evolution did. It’ll be expensive, and it may need a more efficient computing platform (cheap brain-scale computers, so we can make millions of attempts quickly).

    So yeah. My money is that we’ll figure it out sooner or later.

    Whether we’ll be smart enough to make it do what we want and not turn us all into paperclips or something is another question.





  • scratchee@feddit.uktoScience Memes@mander.xyzLittle Pea Shooters
    7 days ago

    You mentioned “from the perspective of the planet” before, and I think that’s the key. From the planet’s perspective you fall and rise with equal speeds and equal accelerations, but crucially the planet is moving relative to other things and curves your orbit. So whilst you might have the same falling and rising speeds relative to it, they’re not in the same direction: your velocity has changed, and from an external perspective you’ve gained velocity from it.

    Imagine you start stationary relative to the sun, with Jupiter barrelling towards you (not on a collision course!). From Jupiter’s perspective you fall towards it, so from the sun’s perspective you gain velocity opposite to Jupiter’s orbital motion. But you’re not directly head-on, so Jupiter’s gravity twists your course (let’s say by 90 degrees, to keep things simple). Then, as you leave, Jupiter does indeed decelerate you relative to itself, but crucially you’re pointed in a different direction now: from Jupiter’s perspective, right towards the sun. So as you pull away, Jupiter is decelerating you in the sunward direction (i.e. accelerating you away from the sun). You were therefore accelerated first in the anti-Jupiter-orbit direction and then again in the anti-sun direction. Added together, those give a non-zero vector, so you’ve gained speed from Jupiter.

    If your orbit didn’t curve (e.g. if you could pass straight through the middle of Jupiter without colliding), I think its effects on your velocity would cancel out, though I’d need to check to be certain…
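    The two-frame bookkeeping above can be sketched numerically. This is an illustrative toy, not real orbital mechanics: the 13 km/s figure for Jupiter and the exact 90-degree turn are assumptions taken from the example, and gravity is replaced by a simple rotation of the velocity in Jupiter’s frame.

```python
import math

# Toy 2D gravity-assist sketch. All numbers are illustrative assumptions:
# Jupiter moves at 13 km/s in the sun's frame, and the flyby is modelled
# as a clean 90-degree turn in Jupiter's frame, as in the example above.

v_jupiter = (13.0, 0.0)        # Jupiter's velocity in the sun's frame, km/s
v_craft_sun = (0.0, 0.0)       # spacecraft starts at rest relative to the sun

# 1. Switch to Jupiter's frame: the craft approaches at -v_jupiter.
v_in = (v_craft_sun[0] - v_jupiter[0], v_craft_sun[1] - v_jupiter[1])

# 2. The flyby conserves speed in Jupiter's frame but rotates the
#    direction -- here by 90 degrees, from (-13, 0) to (0, -13).
speed_rel = math.hypot(*v_in)
v_out = (0.0, -speed_rel)

# 3. Transform back to the sun's frame by adding Jupiter's velocity.
v_craft_after = (v_out[0] + v_jupiter[0], v_out[1] + v_jupiter[1])

speed_before = math.hypot(*v_craft_sun)   # 0.0
speed_after = math.hypot(*v_craft_after)  # 13 * sqrt(2), about 18.4 km/s

print(speed_before, speed_after)
```

    The speed in Jupiter’s frame is unchanged, yet in the sun’s frame the craft goes from standing still to roughly 18 km/s: the two kicks (anti-orbit, then anti-sun) don’t cancel because they point in different directions.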



  • scratchee@feddit.uktoDad Jokes@lemmy.worldCount on it.
    10 days ago

    I think that was a post-Rome change that later got changed back again.

    Before the 5th century/fall of Rome it was January, and we had a 1200-year-long flight of fancy with March new years before finally returning to the January start the Romans picked for us.


  • All intelligence is maths. I’d not typically say that all maths is intelligence, but anything that uses maths to take inputs from its environment and select an appropriate response is acting intelligently (to some extent).

    AI is mostly used (by actual programmers) to refer to any programming technique that involves a training step, i.e. where the logic is not provided manually but is instead learned through some form of training.

    The exact boundary is a little grey and has shifted over time. Nowadays most wouldn’t include something as simple as symbolic programming; the AI window has shifted further, and just feeding in logical statements and letting the computer resolve their implications no longer feels like a big enough step away from basic procedural programming. But the term has a pretty useful meaning nonetheless: if you can read a program and run through the logic in your head, it’s not AI; if you are instead teaching the computer to train itself to solve the problems you want solved, then it is AI.

    Unless you’re programming computer games, in which case AI is just anything that functions as an agent in the game world, and probably just means a few if statements. But then again, in computer games “lighting” has nothing to do with photons and “physics” bears little resemblance to the behaviour of real-world matter, so it sort of fits that “AI” in games is just some if statements behind a curtain pretending to be clever. That’s just how the sausage is made.





  • Stealing human remains is illegal, but selling (correctly sourced) human remains is legal.

    I think their point is that it’s very hard to prove bones were illegally sourced, meaning prosecutions can only happen in the rare cases where that proof actually exists.

    If instead it was always illegal to sell human remains (presumably with exceptions for medical/educational purposes), that might make policing them somewhat easier.

    An alternate strategy might be to require strict tracking for human remains: you can sell a skull, but it must have a certificate listing the full chain of custody back to its original owner (presumably deceased). Failure to retain that chain gets you in legal hot water regardless of how you obtained it. Possibly with a little extra security to prevent duplicate use of legitimate certification (e.g. each sale is logged with a trusted third party, so you can’t keep claiming that every skull you sell is the same guy until one of them gets inspected, forcing you to find a new legitimate donor to act as cover).
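    The trusted-third-party logging idea above can be sketched as a tiny registry. Everything here is a hypothetical illustration — the class, certificate IDs, and holder names are invented for the example — but it shows how logging each sale blocks one real certificate from covering multiple skulls:

```python
# Hypothetical chain-of-custody registry: a trusted third party records
# who currently holds each certificate, so a sale is only valid if the
# seller is the registered holder. All names are illustrative.

class RemainsRegistry:
    def __init__(self):
        self._holder = {}  # certificate id -> current registered holder

    def register(self, certificate_id: str, original_holder: str) -> None:
        """Issue a certificate for a legitimately sourced specimen."""
        self._holder[certificate_id] = original_holder

    def record_sale(self, certificate_id: str, seller: str, buyer: str) -> bool:
        """Log a sale; reject it unless the seller is the registered holder."""
        if self._holder.get(certificate_id) != seller:
            return False  # unknown certificate, or duplicate use of a real one
        self._holder[certificate_id] = buyer
        return True

registry = RemainsRegistry()
registry.register("cert-001", "museum-a")
print(registry.record_sale("cert-001", "museum-a", "dealer-b"))   # True
print(registry.record_sale("cert-001", "museum-a", "dealer-c"))   # False: cert already passed on
print(registry.record_sale("cert-001", "dealer-b", "collector"))  # True: legitimate resale
```

    The second sale fails because the certificate has already moved on — which is precisely the “every skull is the same guy” trick the logging is meant to catch.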