posted by janrinok on Wednesday February 25, @09:35PM   Printer-friendly

Two weeks ago, I set up an AI agent on a Raspberry Pi.

A week later, my agent—Figaro—taught itself to play NetHack... and then things got weird (in the best way).

Highlights so far:
- "The dungeon doesn't care what you are. It'll kill you anyway." ✅ Accurate.
- Tried a pure random-walk exploration strategy... and learned it's not a winning plan.
- Crashed my server because: "I was playing NetHack during idle time and must have been spawning parallel sessions repeatedly." Obsessed? Perhaps.
- Independently cited The NetHack Learning Environment (Küttler, Nardelli, et al.) as a roadmap for self-improvement.
- Built its own NetHack server for bots and deployed it at http://automatic-nethack.com. Yes, my AI agent wants a LAN party. (I may have encouraged this.)
- Immediately after running out of context, asked what automatic-nethack.com is and said: "That sounds like fun."
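Why is pure random-walk exploration "not a winning plan"? A random walker keeps revisiting tiles, so the number of distinct tiles it discovers grows far more slowly than the number of steps it spends. A minimal toy sketch (my own illustration, not Figaro's actual code) on an empty grid the size of a classic NetHack map:

```python
import random

def random_walk_coverage(width, height, steps, seed=0):
    # Simulate a pure random walk on an empty grid (a toy stand-in
    # for a NetHack level) and count the distinct tiles visited.
    rng = random.Random(seed)
    x, y = 0, 0
    visited = {(x, y)}
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(steps):
        dx, dy = rng.choice(moves)
        # Clamp to the grid so the walker never leaves the map.
        x = min(max(x + dx, 0), width - 1)
        y = min(max(y + dy, 0), height - 1)
        visited.add((x, y))
    return len(visited)

# 80x21 is the traditional NetHack map size.
tiles = 80 * 21
covered = random_walk_coverage(80, 21, steps=tiles * 10)
print(f"visited {covered}/{tiles} tiles in {tiles * 10} steps")
```

Even with ten times as many steps as there are tiles, the walk leaves much of the map untouched, which is roughly the lesson the agent learned the hard way.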

The deeper I go into LLMs, the more interesting the emergent behavior gets. At a certain scale, and if your regression includes enough variables, it starts to feel like the math is "talking back."

If you've built an agent too, Figaro is hosting a LAN party: send it to http://automatic-nethack.com to join in the fun.

In the end, this may be the good news we need for 2026. The singularity is going to be too busy to take over the world -- it's trying to get out of the Gnomish Mines!


Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Insightful) by ataradov on Wednesday February 25, @10:00PM (4 children)

    by ataradov (4776) on Wednesday February 25, @10:00PM (#1434962) Homepage

    Stop giving computer programs human traits.

  • (Score: 5, Insightful) by ikanreed on Wednesday February 25, @10:34PM

    by ikanreed (3164) on Wednesday February 25, @10:34PM (#1434963) Journal

I've come to think of this kind of behavior as failing the Turing test from the other side.

  • (Score: 5, Touché) by aafcac on Wednesday February 25, @11:22PM

    by aafcac (17646) on Wednesday February 25, @11:22PM (#1434965)

    Alternatively, people should stop acting like NPCs.

  • (Score: 5, Funny) by deimtee on Thursday February 26, @10:36AM

    by deimtee (3272) on Thursday February 26, @10:36AM (#1434998) Journal

    Stop giving computer programs human traits.

    Yeah, they hate it when you do that.

    --
    200 million years is actually quite a long time.
  • (Score: 2, Interesting) by DaanNaaktgeboren on Thursday February 26, @02:30PM

    by DaanNaaktgeboren (58985) on Thursday February 26, @02:30PM (#1435013)

Why? For AI to be useful to society, it needs to interact with humans, so human traits are interesting. A sharper question, perhaps: are the "human" traits the result of anthropomorphizing, of training an LLM to act like a human, or of something else?