n8n Beginners' Walkthrough: Step-by-Step Assembly of an AI Agent Brain, Instructions, and Scheduled Reddit Summarizer

Key Takeaways
- You can build a personal AI agent to automate information gathering using n8n, an open-source workflow builder.
- The core process involves four steps: a Schedule Trigger to start, an HTTP Request to fetch data (e.g., from Reddit), an AI Node to summarize, and a Notification Node (e.g., Discord) to deliver the result.
- This moves beyond simple automation into agentic workflow orchestration, creating a system where an AI can reason about and act on live data for you.
I almost missed the boat on Devin, the AI software engineer. The announcement was buried under a mountain of memes and random posts in my favorite AI subreddits. By the time I caught up, I was already a day behind.
It was a classic case of information overload, and frankly, I was tired of it. I don’t want to just automate tasks; I want to orchestrate an entire workflow that thinks for me.
That's when I decided to stop being a passive consumer and start building my own digital scouts. My tool of choice? n8n. n8n is an open-source, visual workflow builder that lets you connect pretty much anything to anything else, making it the perfect workshop for assembling AI helpers that sift through the noise.
Today, I'm walking you through how to build your very first AI agent in n8n. We're going to create a scheduled scout that reads a subreddit for you and delivers a concise summary every single morning. This isn't just about saving time; it's about building your first digital employee.
What We're Building: Your Personal AI Reddit Scout
The Goal: A daily summary from your favorite subreddit.
The mission is simple: every morning at 9 AM, an automated workflow will wake up and scan the top posts from a subreddit of our choice. It will then use an AI model to summarize the key points and deliver that intelligence report directly to us. No more endless scrolling.
The Tools: n8n, Reddit, and an AI model.
We'll be using three main components:
1. n8n: The platform where we'll assemble our agent.
2. Reddit: Our source of raw data (specifically, its public JSON feed).
3. An LLM (like OpenAI's GPT): The "brain" that will perform the summarization.
A Quick Look at the Final Workflow.
Visually, our assembly line will look incredibly simple, which is the beauty of n8n's node-based system:
Schedule Trigger → HTTP Request (to Reddit) → AI Agent (Summarizer) → Discord Message
Prerequisites: Gathering Your Components
Before we start building, let's get our parts in order.
Setting up your n8n instance (Cloud or Self-Hosted).
You can use n8n's cloud service for a quick start or self-host it as I do. Being open-source and self-hostable gives you total control over your data and workflows. You can get it running on a server in minutes.
Getting your OpenAI API Key.
You'll need an API key from a service like OpenAI, Anthropic, or Groq. This will allow our n8n workflow to make calls to the large language model that will act as our agent's brain.
Preparing Reddit credentials (if needed for the node).
For this simple project, we don't need special credentials. We'll be hitting Reddit's public .json endpoint, which is a neat trick for grabbing data without authentication.
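To make the trick concrete, here's a small sketch of how that URL is put together. The helper name and defaults are my own; the only facts it relies on are Reddit's public pattern of appending `.json` to a listing URL and the `limit` query parameter.

```python
# Any public subreddit listing becomes machine-readable by appending ".json".
# reddit_json_url is a hypothetical helper, not part of n8n or Reddit's API.
def reddit_json_url(subreddit: str, listing: str = "new", limit: int = 5) -> str:
    """Build the unauthenticated JSON feed URL for a subreddit listing."""
    return f"https://www.reddit.com/r/{subreddit}/{listing}.json?limit={limit}"

url = reddit_json_url("artificialintelligence")
# -> https://www.reddit.com/r/artificialintelligence/new.json?limit=5
```

This is exactly the URL we'll paste into the HTTP Request node later.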
Step 1: The 'Wake-Up' Call - Scheduling the Agent
Every agent needs a trigger, a command to start its work. Ours won't be a button press, but the clock itself.
Adding the 'Schedule' Trigger Node.
In your new n8n workflow, click the '+' and search for the Schedule Trigger node. This node acts as the alarm clock for your workflow.
Configuring it to run every morning.
I want my briefing at 9 AM UTC. To do that, set the trigger interval to "Days" and the trigger hour to 9. For finer control, you can switch the node to a cron expression like 0 9 * * *.
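If cron syntax is new to you, the five fields read left to right as minute, hour, day-of-month, month, and day-of-week, with `*` meaning "every". A quick sketch:

```python
# "0 9 * * *" = minute 0 of hour 9, every day of every month, any weekday.
cron = "0 9 * * *"
minute, hour, day_of_month, month, day_of_week = cron.split()
```

So our expression fires once a day, at exactly 09:00.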
Step 2: The 'Eyes & Ears' - Fetching Data from Reddit
With our wake-up call set, our agent needs to look at the world. We'll point its eyes toward Reddit.
Adding and configuring the HTTP Request Node.
Click the '+' on the Schedule node and add an HTTP Request node. This is a universal tool for interacting with almost any API on the web.
Targeting a specific subreddit (e.g., r/artificialintelligence).
In the URL field, enter the address for the subreddit's JSON feed. For example, to fetch the 5 newest posts from the r/artificialintelligence subreddit, the URL would be: https://www.reddit.com/r/artificialintelligence/new.json?limit=5.
Setting it to grab the top posts of the day.
If you'd rather have the day's highest-voted posts than the newest ones, swap new.json for top.json?t=day&limit=5. Either way, run a quick test by clicking "Execute Node." You should see a JSON output containing the titles, text, and other data from the posts. We now have our raw material.
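Reddit's listing payload nests each post under `data.children[i].data`, which is why the expression in the next step references `$json.data.children`. Here's a trimmed, illustrative stand-in for that payload and a sketch of how the fields we care about are pulled out (the post titles and scores are invented for the example):

```python
# Miniature stand-in for Reddit's listing response; real payloads carry many
# more fields, but the data.children[i].data nesting is the same.
sample = {
    "data": {
        "children": [
            {"data": {"title": "Devin announced", "score": 512, "selftext": "..."}},
            {"data": {"title": "Weekly open thread", "score": 42, "selftext": ""}},
        ]
    }
}

def extract_posts(payload: dict) -> list[dict]:
    """Pull title and score out of each post in a Reddit listing response."""
    return [
        {"title": c["data"]["title"], "score": c["data"]["score"]}
        for c in payload["data"]["children"]
    ]

posts = extract_posts(sample)
```

In the workflow itself, n8n does this navigation for us via expressions, but knowing the shape makes the next step much less mysterious.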
Step 3: The 'Brain' - Processing Information with AI
This is where the magic happens. We're giving our workflow a mind of its own.
Introducing the OpenAI / LLM Node.
Add an OpenAI node (or your preferred LLM provider's node) after the HTTP Request. You'll need to connect your API key credentials the first time you do this.
Crafting the 'Instructions': The summarization prompt.
In the prompt field, we'll write our instructions, telling the AI exactly what we want. We can dynamically pull in data from the previous node using n8n's expressions.
Here's a prompt I like:
You are an expert analyst. Your job is to summarize the key information from the following Reddit posts. Please provide a brief, bulleted summary of the top 3-5 most important topics.
Here is the data:
{{ $json.data.children }}
Wiring the Reddit data into your AI prompt.
That {{ $json.data.children }} part is key. It's an n8n expression that takes the entire output from the previous HTTP node and injects it directly into our prompt. The AI now has the context it needs to do its job.
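Under the hood, that expression amounts to serializing the previous node's output and splicing it into the prompt text. A sketch of the equivalent in plain Python, with an invented one-post payload standing in for the HTTP node's output:

```python
import json

# The same prompt as above, with a placeholder where n8n injects the data.
PROMPT_TEMPLATE = (
    "You are an expert analyst. Your job is to summarize the key information "
    "from the following Reddit posts. Please provide a brief, bulleted summary "
    "of the top 3-5 most important topics.\n\nHere is the data:\n{data}"
)

# Illustrative stand-in for $json.data.children from the HTTP Request node.
children = [{"data": {"title": "Devin announced", "score": 512}}]

prompt = PROMPT_TEMPLATE.format(data=json.dumps(children, indent=2))
```

The model receives one long string: instructions on top, raw post data underneath.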
This step exemplifies the evolution from simple task automation to what I call agentic workflow orchestration. We're not just moving data; we're creating a system where an AI can reason and act upon it.
For a more advanced setup where your agent needs to use tools or remember past conversations, you'd use the AI Agent node. This node can be equipped with memory and the ability to make its own API calls.
Step 4: The 'Mouthpiece' - Delivering the Summary
Our agent has done its thinking. Now it needs to report back to us.
Choosing your delivery method (e.g., Discord, Slack, Email Node).
I live in Discord, so I'll use the Discord node. You could just as easily use the Slack, Telegram, or Email node. Connect it after the OpenAI node.
Formatting the output from the AI Node into a clean message.
In the "Text" field of the Discord node, we'll use an expression to pull in the result from the AI node. The expression will look something like this: {{ $('OpenAI').item.json.choices[0].message.content }}.
I like to add a little flair:
**Good Morning! Here is your AI Reddit Summary for the day:**
{{ $('OpenAI').item.json.choices[0].message.content }}
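To see what that expression actually walks, here's a minimal stand-in for the OpenAI chat-completion response shape, with an invented summary as the model's reply. This assumes the node returns the raw API response; some n8n node versions simplify the output, so check your node's actual output in the test run:

```python
# Illustrative stand-in for the OpenAI response the expression navigates:
# the reply lives at choices[0].message.content.
ai_response = {
    "choices": [
        {"message": {"role": "assistant",
                     "content": "- Devin was announced.\n- Reaction is mixed."}}
    ]
}

summary = ai_response["choices"][0]["message"]["content"]
message = "**Good Morning! Here is your AI Reddit Summary for the day:**\n\n" + summary
```

The Discord node resolves its expression to exactly this kind of string before sending.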
Final Assembly: Activating Your AI Agent
The components are connected. It's time to flip the switch.
Testing each step of the workflow.
The best feature of n8n is the ability to run each node individually. Execute the Schedule node, then the HTTP node, and so on. This makes debugging incredibly intuitive.
Turning on the 'Active' toggle.
Once you're happy with the test run, save the workflow and click the "Active" toggle in the top-right corner.
Congratulations, your agent is now live!
That's it. Your agent is now operational. Tomorrow morning at 9 AM, it will execute its mission and deliver your summary.
Conclusion: What's Next for Your Agent?
You've just built more than a simple automation; you've assembled a basic AI agent. You learned how to schedule a task, fetch data from an API, process that data with an AI brain, and deliver the output. This is a fundamental building block for creating a powerful, personalized system of digital workers.
What's next? The possibilities are endless.
- Expand the scope: Add another workflow that summarizes Hacker News or a specific blog's RSS feed.
- Add filtering: Insert an "IF" node after the Reddit step to only summarize posts with more than 100 upvotes.
- Create alerts: Change the prompt to look for specific keywords and only send a notification if a keyword is found.
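The filtering and alerting ideas can be sketched in a few lines, assuming the title-and-score post shape from the Reddit step; the posts, threshold, and keyword list are all illustrative:

```python
# Sample posts in the shape extracted from Reddit's listing payload.
posts = [
    {"title": "Devin announced", "score": 512},
    {"title": "Low-effort meme", "score": 12},
]
KEYWORDS = {"devin", "gpt"}  # hypothetical watch list

# The "IF" node's test: keep only well-upvoted posts.
popular = [p for p in posts if p["score"] > 100]

# The alerting idea: flag popular posts whose title mentions a keyword.
alerts = [p for p in popular
          if any(k in p["title"].lower() for k in KEYWORDS)]
```

In n8n you'd express the first filter as an IF node condition on the score field rather than writing code, but the logic is identical.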
By chaining together these simple nodes, you can build surprisingly complex systems. This is the new frontier for solo founders and tech enthusiasts. We're not just using tools; we're building intelligent systems that can scale our abilities far beyond what was previously possible.
Go on, build your first scout. The age of information overload is over. The age of personal intelligence agents has begun.