32 comments

  • ordinarily 4 hours ago
    It's genuinely a great introduction to LLMs. I built my own a while ago based on Milton's Paradise Lost: https://www.wvrk.org/works/milton
  • mudkipdev 2 hours ago
    This is probably a consequence of the training data being fully lowercase:

    You> hello
    Guppy> hi. did you bring micro pellets.

    You> HELLO
    Guppy> i don't know what it means but it's mine.

    • functional_dev 1 hour ago
      Great find! It appears uppercase tokens are completely unknown to the tokenizer.

      But the character still comes through in the response :)
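      A minimal sketch of how that happens (toy word-level tokenizer, hypothetical names — not the project's actual code): a vocabulary built only from lowercase text has no entry for "HELLO", so it falls back to the unknown token.

```python
# Toy illustration (assumed setup, not the real Guppy tokenizer):
# a vocab built from fully lowercase training text maps "hello." to
# a real id, while the unseen uppercase "HELLO." becomes <unk>.

def build_vocab(corpus):
    """Assign an id to every whitespace-separated token in the corpus."""
    vocab = {"<unk>": 0}
    for word in corpus.split():
        vocab.setdefault(word, len(vocab))
    return vocab

def encode(text, vocab):
    """Map each word to its id, using <unk> for anything unseen."""
    return [vocab.get(word, vocab["<unk>"]) for word in text.split()]

# Training data is fully lowercase, as suggested in the thread.
vocab = build_vocab("hello. did you bring micro pellets.")

print(encode("hello.", vocab))  # [1] -- known token, real id
print(encode("HELLO.", vocab))  # [0] -- unseen uppercase maps to <unk>
```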

  • cbdevidal 4 hours ago
    > you're my favorite big shape. my mouth are happy when you're here.

    Laughed loudly :-D

    • vunderba 3 hours ago
      This is a direct output from the synthetic training data though - wonder if there is a bit of overfitting going on or it’s just a natural limitation of a much smaller model.
  • zwaps 2 hours ago
    I like the idea, just that the examples are reproduced from the training data set.

    How does it handle unknown queries?

  • ankitsanghi 2 hours ago
    Love it! I think it's important to understand how the tools we use (and will only increasingly use) work under the hood.
  • kubrador 2 hours ago
    how's it handle longer context, or does it start hallucinating after like 2 sentences? curious what the ceiling is before the 9M params run out of steam
  • kaipereira 2 hours ago
    This is so cool! I'd love to see a write-up on how you made it and what you referenced, because designing neural networks always feels like a maze ;)
  • monksy 1 hour ago
    Is this a reference from the Bobiverse?
  • martmulx 4 hours ago
    How much training data did you end up needing for the fish personality to feel coherent? Curious what the minimum viable dataset looks like for something like this.
  • gnarlouse 4 hours ago
    I... wow, you made an LLM that can actually tell jokes?
    • murkt 40 minutes ago
      With 9M params it just repeats the joke from a training dataset.
  • NyxVox 4 hours ago
    Hm, I can actually try the training on my GPU. One of the things I want to try next. Maybe a bit more complex than a fish :)
  • brcmthrowaway 1 hour ago
    Why are there so many dead comments from new accounts?
    • loveparade 42 minutes ago
      It really seems it's mostly AI comments on this. Maybe this topic is attractive to all the bots.
    • AlecSchueler 1 hour ago
      They all seem to be slop comments.
  • SilentM68 5 hours ago
    Would have been funny if it were called "DORY", given the fish's memory-recall issues vs. LLMs' similar recall issues :)
  • AndrewKemendo 6 hours ago
    I love these kinds of educational implementations.

    I want to really praise the (unintentional?) nod to Nagel: by limiting capabilities to the representation of a fish, the user is immediately able to understand the constraints. It can only talk like a fish because it's very simple.

    Especially compared to public models, that's a really simple correspondence to grok intuitively (small LLM > only as verbose as a fish, larger LLM > more verbose), so kudos to the author for making that simple and fun.

    • dvt 5 hours ago
      > the user is immediately able to understand the constraints

      Nagel's point was quite literally the opposite[1] of this, though. We can't understand what it must "be like to be a bat" because their mental model is so fundamentally different than ours. So using all the human language tokens in the world can't get us to truly understand what it's like to be a bat, or a guppy, or whatever. In fact, Nagel's point is arguably even stronger: there's no possible mental mapping between the experience of a bat and the experience of a human.

      [1] https://www.sas.upenn.edu/~cavitch/pdf-library/Nagel_Bat.pdf

      • Terr_ 1 hour ago
        IMO we're a step before that: We don't even have a real fish involved, we have a character that is fictionally a fish.

        In LLM-discussions, obviously-fictional characters can be useful for this, like if someone builds a "Chat with Count Dracula" app. To truly believe that a typical "AI" is some entity that "wants to be helpful" is just as mistaken as believing the same architecture creates an entity that "feels the dark thirst for the blood of the living."

        Or, in this case, that it really enjoys food-pellets.

      • andoando 1 hour ago
        I'd strongly disagree with that. We're all living in the same shared universe, and underlying every intelligence there must be precisely an understanding of events happening in this space-time.
      • AndrewKemendo 5 hours ago
        Different argument

        I’m not going to argue other than to say that you need to view the point from a third party perspective evaluating “fish” vs “more verbose thing,” such that the composition is the determinant of the complexity of interaction (which has a unique qualia per nagel)

        Hence it's an "unintentional nod," not an instantiation

  • rclkrtrzckr 58 minutes ago
    I could fork it and create TrumpLM. Not a big leap, I suppose.
  • nullbyte808 5 hours ago
    Adorable! Maybe a personality that speaks in emojis?
  • oyebenny 2 hours ago
    Neat!
  • dinkumthinkum 2 hours ago
    I think this is a nice project because it is end to end and serves its goal well. Good job! It's a good example of how someone might do something similar for a specific purpose. There are other visualizers that explain different aspects of LLMs, but this is a good applied example.
  • jiusanzhou 54 minutes ago
    The decision to strip out GQA/RoPE/SwiGLU and go vanilla transformer is the right call here — at 9M params those additions add complexity without meaningful gains, and keeping the code simple makes it way more readable as a learning resource. I especially appreciate the design choice of baking the personality into the weights instead of using a system prompt, since it forces you to confront how training data shapes model behavior directly rather than hiding behind prompt engineering.
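    For comparison, here is a rough sketch of what those "vanilla" choices look like in one PyTorch block (names and dimensions are illustrative assumptions, not the project's actual code): standard multi-head attention where every head keeps its own K/V (no grouped-query sharing), and a plain GELU MLP instead of a gated SwiGLU unit.

```python
# Hedged sketch of a vanilla pre-norm transformer block, assuming
# illustrative dims (d_model=64, 4 heads). Standard MHA stands in
# for GQA; a plain GELU MLP stands in for SwiGLU.
import torch
import torch.nn as nn

class VanillaBlock(nn.Module):
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        # Standard MHA: every head has its own K/V (no grouped-query sharing).
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        # Plain 2-layer MLP with GELU, not a gated SwiGLU unit.
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x):
        h = self.ln1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        x = x + self.mlp(self.ln2(x))
        return x

x = torch.randn(1, 8, 64)   # (batch, seq, d_model)
out = VanillaBlock()(x)
print(out.shape)            # torch.Size([1, 8, 64])
```

    Without RoPE, position information would come from learned positional embeddings added to the token embeddings before the first block.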
    • ngruhn 44 minutes ago
      comment smells AI-written
    • 3m 34 minutes ago
      AI account