• MystikIncarnate@lemmy.ca
    11 hours ago

    I generally explain to people that, in their current state, LLMs are much larger and more complicated versions of the text prediction on your phone: the feature that guesses which word you want next, but extended to whole sentences, with the entirety of the public Internet as its frame of reference.
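
    To make the analogy concrete, here's a toy sketch (not how an LLM actually works internally, just an illustration of the "predict the next word from what came before" idea, using simple word-pair counts):

```python
from collections import Counter, defaultdict

# Toy illustration of "predict the next word": count which word tends to
# follow which in a tiny corpus, then suggest the most common follower.
# Phone keyboards do something like this; LLMs do it at vastly larger
# scale, token by token, over far more context than a single word.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Build bigram counts: for each word, tally the words that follow it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the most frequently observed follower of `word`, if any.
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

    An LLM replaces these raw counts with a learned statistical model and conditions on the whole preceding text rather than one word, but the output is still "the most plausible continuation," not a lookup of verified facts.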

    And each inquiry burns through a nontrivial amount of energy, commonly estimated at a few watt-hours per response.