• 0 Posts
  • 7 Comments
Joined 6 years ago
Cake day: January 18th, 2020



  • Yep that’s right, below the Planck length you can’t make position measurements without destroying what you’re trying to measure.

    And you are right that it can be fully explained without needing to be in a simulation; that is how these effects were discovered in the first place, after all. The simulation angle is pretty far outside the math of respectable physics.

    The reason the simulation hypothesis bleeds into the discussion is that it’s natural to ask “why” things break down at that specific size. Humans don’t like vague answers like “because god likes that number”; we prefer to tell ourselves stories that fit the numbers into physical pictures in our minds. Just like the Bohr model of the atom was a useful story about how atoms are structured for decades despite never being rigorously proven (and being firmly disproven nowadays), one story we can tell ourselves to make sense of quantization is to view it as a deliberate limit on precision, imposed to keep computation cheap.

    It is only a story at the end of the day, though. We don’t really know why physics was set up exactly this way any more than we know why the big bang happened in the first place. Just lots of different people’s guesses, telling plausible stories about the math.


  • Yep that’s right!

    I was using “grid” to be more easily understood, but what we really have is quantization. We get the Planck length from the uncertainty principle, as well as the Planck time, which is reminiscent of common strategies for reducing the computational requirements of simulations (reduced-precision calculations). The pattern repeats in the quantization of charge, etc.

    So you’re right, it’s not a grid. Just a cap on precision. But the point stands: it looks familiar to programmers and is a fair reason to suspect we’re in a simulation. Not proof of course, just a neat little hint.
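    To make the analogy concrete, here’s a toy sketch (purely illustrative, not actual physics; the `quantize` helper and the use of the Planck length as a rounding step are my own framing) of how a simulation might cut costs by snapping continuous values to a smallest representable step:

    ```python
    # Illustrative analogy only: simulations often reduce cost by limiting
    # numeric precision, e.g. snapping values to a fixed smallest step.
    PLANCK_LENGTH = 1.616e-35  # meters; smallest meaningful length scale

    def quantize(value, step):
        """Round a continuous value to the nearest multiple of `step`."""
        return round(value / step) * step

    # Two positions closer together than one step become indistinguishable,
    # much like sub-Planck distances carry no measurable difference.
    a = quantize(3.0e-35, PLANCK_LENGTH)
    b = quantize(2.9e-35, PLANCK_LENGTH)
    print(a == b)  # both snap to the same multiple of PLANCK_LENGTH
    ```

    Same trick as using float16 instead of float64 in a numerical simulation: you trade resolution below some scale for a cheaper computation.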


  • The high level idea (uncertainty principle looks like we’re on a grid, which feels familiar to programmers) is fine, that’s a common and pretty solid argument for living in a simulation.

    The code you’ve written, though, has very little connection to that idea (actually none, as far as I can see). It’s like you’ve tried to do an experiment in physics but all your code is for a modern art exhibit.

    AI is leading you astray here. I have a bachelor’s in physics, and none of this is actual physics. I suggest setting AI aside and doing this the old-fashioned way with textbooks and paper, because the code here isn’t doing anything even remotely related to what you want.


  • I have been doing a bit of compute work on NixOS with both AMD and Nvidia, and I’d say it depends on what you’re doing.

    If you’re doing your compute via compute shaders, you’ll have a great experience on AMD. Zero hiccups for me, I just wrote my shaders and ran them no problem. Vulkan is incredible.

    If you have to interact with other people’s compute crap though, it might be a bad time. Most folks do GPU compute with CUDA, and that won’t be fun for you on AMD. Yes, there are translation layers, and you can make them work for some use cases, but it’s a bad experience. And yeah, ROCm exists… but does it really? Not many cards actually support ROCm, and software support for it is just as sparse.


  • Well yeah sure if you want a set algorithm to perfectly reproduce this exact universe deterministically, that’s not gonna work out so well.

    But a simulation doesn’t have to be perfectly consistent and deterministic to “work”. If anything, the fact that some things can’t be predicted is evidence in favor of us being in a simulation, not against.

    This paper just rules out a class of algorithms: we’re not in one specific type of simulation. That doesn’t mean we’re not in a simulation at all.
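    A quick sketch of the point that a simulation doesn’t need to be deterministic to “work” (a toy random walk of my own invention, nothing from the paper): each individual run is unpredictable, yet the simulation as a whole still behaves lawfully.

    ```python
    import random

    def noisy_step(x):
        # Each step injects irreducible randomness, loosely analogous to a
        # quantum measurement: no observer can predict a single trajectory.
        return x + random.choice([-1, 1])

    def run_simulation(steps=1000):
        trajectory = [0]
        for _ in range(steps):
            trajectory.append(noisy_step(trajectory[-1]))
        return trajectory

    # No fixed seed: two runs will almost surely differ step by step,
    # but both obey the same rule (every step moves exactly +1 or -1).
    path = run_simulation()
    ```

    A set algorithm that deterministically reproduces `path` is impossible here by construction, yet the simulation runs fine; unpredictability and “being simulated” aren’t in conflict.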