Thursday, December 25, 2025
New top story on Hacker News: Show HN: Lamp Carousel – DIY kinetic sculpture powered by lamp heat
Show HN: Lamp Carousel – DIY kinetic sculpture powered by lamp heat
5 by Evidlo | 0 comments on Hacker News.
I wanted to share this fun craft activity for the holidays that I've been doing with my family over the last few years. I came up with these while cutting up some cans, trying to make an aluminum version of paper spinners. A variety of shapes work, but bigger, lighter spinners generally work better. Incandescent bulbs work best, though LEDs work too. They remind me of the candle carousels I would see at my grandparents' house during Christmas. Let me know what you think!
Wednesday, December 24, 2025
New top story on Hacker News: Show HN: Vibium – Browser automation for AI and humans, by Selenium's creator
Show HN: Vibium – Browser automation for AI and humans, by Selenium's creator
35 by hugs | 22 comments on Hacker News.
I started the Selenium project 21 years ago. Vibium is what I'd build if I started over today with AI agents in mind. There's a Go binary under the hood (it handles the browser, BiDi, and MCP), but devs never see it; you just npm install vibium. Python and Java support are coming. For Claude Code: claude mcp add vibium -- npx -y vibium. v1 ships today. AMA.
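The two commands mentioned in the post, set apart for readability (the second one registers Vibium as an MCP server for Claude Code):

    # Install the Node package; the Go binary is bundled and managed behind the scenes
    npm install vibium

    # Register Vibium as an MCP server for Claude Code
    claude mcp add vibium -- npx -y vibium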
Tuesday, December 23, 2025
Monday, December 22, 2025
Sunday, December 21, 2025
Saturday, December 20, 2025
New top story on Hacker News: Show HN: HN Wrapped 2025 - an LLM reviews your year on HN
Show HN: HN Wrapped 2025 - an LLM reviews your year on HN
17 by hubraumhugo | 4 comments on Hacker News.
I was looking for a fun project to play around with the latest Gemini models and ended up building this :) Enter your username and get:
- Generated roasts and stats based on your HN activity in 2025
- Your personalized HN front page from 2035 (inspired by a recent Show HN [0])
- An xkcd-style comic of your HN persona
It uses the latest gemini-3-flash and gemini-3-pro-image (nano banana pro) models, which deliver pretty impressive and funny results. A few examples:
- dang: https://ift.tt/0ex3JPt
- myself: https://ift.tt/4lCGLsg
Give it a try and share yours :) Happy holidays!
[0] https://ift.tt/meKfCbi
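For readers curious what calling these models looks like, here is a minimal, hypothetical sketch using the google-genai Python SDK. Only the model name comes from the post; the prompt and setup are illustrative, not HN Wrapped's actual code:

    # pip install google-genai
    from google import genai

    client = genai.Client()  # reads the API key from the environment

    # Hypothetical roast prompt; the app's real prompts are not published in the post.
    response = client.models.generate_content(
        model="gemini-3-flash",  # model named in the post
        contents="Write a short, friendly roast of this HN comment history: ...",
    )
    print(response.text)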
Friday, December 19, 2025
New top story on Hacker News: Show HN: Linggen – A local-first memory layer for your AI (Cursor, Zed, Claude)
Show HN: Linggen – A local-first memory layer for your AI (Cursor, Zed, Claude)
4 by linggen | 2 comments on Hacker News.
Hi HN, working with multiple projects, I got tired of re-explaining our complex multi-node system to LLMs. Documentation helped, but plain text is hard to search without indexing and doesn't work across projects. I built Linggen to solve this.
My Workflow: I use the Linggen VS Code extension to "init my day." It calls the Linggen MCP server to load memory instantly. Linggen indexes all my docs as if it were remembering them, which is awesome. One click loads the full architectural context, removing the "cold start" problem.
The Tech:
- Local-First: Rust + LanceDB. Code and embeddings stay on your machine. No accounts required.
- Team Memory: Index knowledge so teammates' LLMs get context automatically.
- Visual Map: See file dependencies and the refactor "blast radius."
- MCP-Native: Supports Cursor, Zed, and Claude Desktop.
Linggen saves me hours. I'd love to hear how you manage complex system context!
Repo: https://ift.tt/KM7Vo1R
Website: https://linggen.dev
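The post doesn't include code, but to illustrate the local-first idea (an on-disk LanceDB index you can vector-search without any server), here is a minimal Python sketch. It is only an illustration of the concept; Linggen itself is written in Rust, and the table name and vectors here are made up:

    # pip install lancedb
    import lancedb

    # Hypothetical local index of project notes; in practice the vectors would
    # come from an embedding model, not be hard-coded.
    db = lancedb.connect("./memory")
    table = db.create_table(
        "docs",
        data=[
            {"vector": [0.1, 0.2, 0.3], "text": "Node A talks to Node B over gRPC."},
            {"vector": [0.2, 0.1, 0.0], "text": "Node B persists events in Postgres."},
        ],
    )

    # Nearest-neighbour search over the local index.
    hits = table.search([0.1, 0.2, 0.25]).limit(1).to_list()
    print(hits[0]["text"])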
Thursday, December 18, 2025
New top story on Hacker News: Show HN: Paper2Any – Open tool to generate editable PPTs from research papers
Show HN: Paper2Any – Open tool to generate editable PPTs from research papers
7 by Mey0320 | 0 comments on Hacker News.
Hi HN, we are the OpenDCAI group from Peking University. We built Paper2Any, an open-source tool designed to automate the "Paper to Slides" workflow, based on our DataFlow-Agent framework.
The Problem: Writing papers is hard, but creating professional architecture diagrams and slides (PPTs) is often more tedious. Most AI tools just generate static images (PNGs) that are impossible to tweak for final publication.
The Solution: Paper2Any takes a PDF, text, or sketch as input, understands the research logic, and generates fully editable PPTX (PowerPoint) files and SVGs. We prioritize flexibility and fidelity, allowing you to specify page ranges, switch visual styles, and preserve original assets.
How it works:
1. Multimodal Reading: Extracts text and visual elements from the paper. You can now specify page ranges (e.g., the Method section only) to focus the context and reduce token usage.
2. Content Understanding: Identifies core contributions and structural logic.
3. PPT Generation: Instead of generating one flat image, it generates independent elements (blocks, arrows, text) with selectable visual styles and organizes them into a slide layout.
Links:
- Demo: http://dcai-paper2any.cpolar.top/
- Code (DataFlow-Agent): https://ift.tt/Kav359Z
We'd love to hear your feedback on the generation quality and the agent workflow!
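To make the "editable elements instead of a flat image" point concrete, here is a minimal python-pptx sketch of the kind of output described in step 3. It is only an illustration, not Paper2Any's actual generation code, and the shapes and labels are invented:

    # pip install python-pptx
    from pptx import Presentation
    from pptx.util import Inches
    from pptx.enum.shapes import MSO_SHAPE

    prs = Presentation()
    slide = prs.slides.add_slide(prs.slide_layouts[6])  # blank layout

    # Each diagram element is a separate, editable shape rather than pixels in a PNG.
    encoder = slide.shapes.add_shape(
        MSO_SHAPE.ROUNDED_RECTANGLE, Inches(1), Inches(1), Inches(3), Inches(1))
    encoder.text_frame.text = "Encoder"

    slide.shapes.add_shape(
        MSO_SHAPE.RIGHT_ARROW, Inches(4.2), Inches(1.25), Inches(1), Inches(0.5))

    decoder = slide.shapes.add_shape(
        MSO_SHAPE.ROUNDED_RECTANGLE, Inches(5.5), Inches(1), Inches(3), Inches(1))
    decoder.text_frame.text = "Decoder"

    prs.save("architecture.pptx")  # every element stays selectable and editable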
Wednesday, December 17, 2025
Tuesday, December 16, 2025
Monday, December 15, 2025
Sunday, December 14, 2025
Saturday, December 13, 2025
Friday, December 12, 2025
Thursday, December 11, 2025
New top story on Hacker News: Show HN: SIM – Apache-2.0 n8n alternative
Show HN: SIM – Apache-2.0 n8n alternative
25 by waleedlatif1 | 2 comments on Hacker News.
Hey HN, Waleed here. We're building Sim ( https://sim.ai/ ), an open-source visual editor for building agentic workflows. Repo here: https://ift.tt/bTt6Hli . Docs here: https://docs.sim.ai . You can run Sim locally using Docker, with no execution limits or other restrictions.
We started building Sim almost a year ago after repeatedly troubleshooting why our agents failed in production. Code-first frameworks felt hard to debug because of implicit control flow, and workflow platforms added more overhead than they removed. We wanted granular control and easy observability without piecing everything together ourselves.
We launched Sim [1][2] as a drag-and-drop canvas around 6 months ago. Since then, we've added:
- 138 blocks: Slack, GitHub, Linear, Notion, Supabase, SSH, TTS, SFTP, MongoDB, S3, Pinecone, ...
- Tool calling with granular control: forced, auto
- Agent memory: conversation memory with sliding-window support (by last n messages or tokens)
- Trace spans: detailed logging and observability for nested workflows and tool calling
- Native RAG: upload documents, we chunk, embed with pgvector, and expose vector search to agents
- Workflow deployment versioning with rollbacks
- MCP support and a human-in-the-loop block
- Copilot to build workflows using natural language (we just shipped a new version that also acts as a superagent and can call into any of your connected services directly, not just build workflows)
Under the hood, the workflow is a DAG with concurrent execution by default: nodes run as soon as their dependencies (upstream blocks) are satisfied. Loops (for, forEach, while, do-while) and parallel fan-out/join are also first-class primitives.
Agent blocks are pass-through to the provider. You pick your model (OpenAI, Anthropic, Gemini, Ollama, vLLM), and we pass prompts, tools, and response format directly through to the provider API. We normalize response shapes for block interoperability, but we're not adding layers that obscure what's happening.
We're currently working on our own MCP server and the ability to deploy workflows as MCP servers. Would love to hear your thoughts and where we should take it next :)
[1] https://ift.tt/QmXcniK
[2] https://ift.tt/T8iIoWb
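The post doesn't show code, but the dependency-driven scheduling it describes (each node fires as soon as its upstream blocks finish) can be sketched in a few lines of Python with asyncio. This is a generic illustration of the idea, not Sim's implementation:

    import asyncio

    async def run_dag(nodes, deps, run_node):
        """Run each node as soon as all of its upstream dependencies have finished."""
        done = {name: asyncio.Event() for name in nodes}

        async def runner(name):
            # Block until every upstream node has completed, then run this one.
            await asyncio.gather(*(done[d].wait() for d in deps.get(name, [])))
            await run_node(name)
            done[name].set()

        # Launch every node concurrently; the events enforce the DAG ordering.
        await asyncio.gather(*(runner(n) for n in nodes))

    async def demo_node(name):
        print("running", name)
        await asyncio.sleep(0.1)

    # "summarize" waits for "fetch"; "notify" waits for "summarize";
    # independent branches would run in parallel.
    asyncio.run(run_dag(
        nodes=["fetch", "summarize", "notify"],
        deps={"summarize": ["fetch"], "notify": ["summarize"]},
        run_node=demo_node,
    ))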