In this blog, I dive into the recently launched AI agent system from Make.com and compare it against the established AI agents from n8n. By breaking down their features, usability, and overall performance, I aim to help you choose the best platform for your automation needs.
User Experience (UX)
Creating AI agents in n8n is incredibly straightforward. You start by clicking “Create Workflow” and adding your first step. A trigger is essential, so you might opt for a chat trigger, allowing you to interact with the agent through a chat interface. After that, simply click the plus sign, type in “agent,” and you’re ready to go. The AI agent node appears, enabling you to add your system message or prompt.
In n8n, you can also add an LLM chat model, a memory retention mechanism, and various tools. For instance, adding OpenAI is easy if you already have your credentials set up. Just click “Open Chat,” say hello, and the AI agent responds immediately. The setup process in n8n is intuitive, making it easy for anyone to get started with AI agents.
Conversely, Make.com approaches things differently, with a dedicated tab for AI agents. When you click it, you can create an agent, defining its attributes upfront before embedding it into your scenarios. For example, you might label your agent a “research agent” and choose a model. This functionality is still in beta, and it shows: there’s plenty of room for improvement.
Once you save your initial system prompt, you enter the agent edit page, where you can refine the prompt further. However, the system prompt area has limitations; for instance, you can’t easily expand it. You can add tools to the agent, but each tool is essentially a scenario. This means that while you have some flexibility, you don’t get the range of options available in n8n, where you can trigger HTTP requests directly.
In n8n, you have a long list of tools that can be hooked into your agents directly, and they don’t need to be wrapped in scenarios. This flexibility makes the creation of agents simpler. If you want to add a Google Calendar tool, just select it and let the model decide what data to inject. This saves time and effort.
Needing to create a scenario for every tool can feel cumbersome in Make.com. You can only choose from existing scenarios, which adds an extra layer of complexity that isn’t present in n8n. The intuitive nature of n8n shines through here, as it allows for easier visualization of the various tools connected to the agent node.
In summary, n8n offers a more mature and user-friendly experience for setting up agents. The ease with which you can configure and test agents makes it the clear winner in this category. I look forward to seeing how Make.com evolves its workflows, but for now, n8n is simply more intuitive.
Interfaces and Triggers
n8n shines when it comes to interfaces and triggers. The embedded chat interface is particularly effective for testing agents. However, you’re not limited to just that; there are multiple ways to trigger actions. You can initiate triggers via webhooks, on a schedule, or when executed by other workflows. This flexibility enables the creation of multi-agent teams and adds depth to your automation capabilities.
Form submission triggers offer another layer of versatility. For instance, you can create a custom form that, when filled out, triggers your AI agent. The chat interface can even be made publicly available, allowing for its embedding on websites as a customer service chatbot.
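As a sketch of how a publicly exposed chat trigger could be called from your own front end, here’s a minimal Python example. The webhook URL is a placeholder, and the payload field names (`action`, `sessionId`, `chatInput`) are assumptions based on how n8n’s chat widget typically communicates; check your own instance’s Chat Trigger settings before relying on them.

```python
import json

# Hypothetical webhook URL -- replace with the production URL your
# n8n Chat Trigger node displays.
WEBHOOK_URL = "https://your-n8n-host/webhook/abc123/chat"

def build_chat_payload(session_id: str, message: str) -> dict:
    """Build the JSON body for an n8n chat-trigger call.

    Field names are assumptions modeled on n8n's chat widget traffic;
    verify them against your instance before relying on them.
    """
    return {
        "action": "sendMessage",
        "sessionId": session_id,   # ties this turn to a memory session
        "chatInput": message,      # the user's message text
    }

payload = build_chat_payload("customer-42", "Hello, agent!")
print(json.dumps(payload))

# To actually fire the trigger you would POST the payload, e.g.:
#   requests.post(WEBHOOK_URL, json=payload, timeout=30)
```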
In contrast, triggering an AI agent in Make.com feels more cumbersome. You might set a variable for a message, but this process lacks the fluidity and back-and-forth conversation style offered by n8n’s chat interface. While it’s possible to run a scenario and get a response back, the interaction feels clunky and doesn’t provide the same user experience.
Make.com has its own set of triggers, but the limitation of only having one trigger per scenario is a significant drawback. You can’t connect multiple triggers as you can in n8n. This limitation, coupled with the absence of a native chat interface, means you miss out on the ease of embedding functionalities. A workaround involves using webhooks, but this requires building a custom front end, complicating the entire process.
Make.com promotes its AI agents more as reasoning nodes within workflows rather than as traditional chatbot interfaces. This shift in focus is evident in how they structure their scenarios, aiming for a more autonomous decision-making process rather than simple user interactions.
To summarize, n8n offers a more versatile and user-friendly approach to interfaces and triggers. The ability to easily create, test, and embed agents as chatbots gives it a significant edge over Make.com.
LLMs and Reasoning
When looking at LLMs and reasoning, Make.com offers a variety of model choices. However, a notable limitation is that once you’ve created an agent, you can’t change the model provider. This lack of flexibility can be a barrier for users who want to experiment with different models. For instance, if you choose the Gemini connection initially, switching to another model later requires creating a new agent.
Reasoning agents are essential for generating intelligent responses. While they may take longer and cost more, they significantly enhance the output quality, especially in multi-agent systems. When I tested the reasoning capabilities of different models, I found that while some worked well, others lacked the necessary thinking abilities.
In contrast, n8n offers a more flexible environment for working with LLMs. You can easily enable or disable thinking features for the models, allowing for better customization based on your needs. Additionally, the variety of models available, including enterprise-level options, provides a broader range of possibilities for users.
In the end, while both platforms have strengths, n8n provides a more comprehensive and flexible approach to LLMs and reasoning. The added options and ease of use make it a preferable choice for many users.
Prompt Engineering
Prompt engineering is another area where the two platforms diverge significantly. In Make.com, the system prompt is static; you cannot incorporate dynamic variables. This limitation restricts the customization of your AI agents. When I attempted to add dynamic elements like the current date, I had to do so at the scenario level rather than within the agent settings.
On the other hand, n8n allows for greater flexibility in prompt engineering. You can set prompts to accept dynamic information, enabling a more tailored approach. The use of logical operators and expressions further enhances the customization options available to users.
Moreover, n8n’s low-code environment allows for the integration of complex logic using JavaScript or Python, making it easier to create sophisticated prompts. This level of flexibility is crucial for users looking to maximize the potential of their AI agents.
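To illustrate the idea of dynamic prompts outside of any particular platform, here’s a small Python sketch that fills a system prompt template at runtime, roughly what an n8n expression such as `{{ $now }}` does inside an agent’s prompt. The template and variable names are purely illustrative, not anything either platform defines:

```python
from datetime import date

def render_system_prompt(template: str, **variables) -> str:
    """Fill a prompt template with runtime values -- a rough stand-in
    for dynamic expressions inside an agent's system prompt."""
    return template.format(**variables)

template = (
    "You are a research assistant. Today's date is {today}. "
    "Address the user as {user_name} and answer in {language}."
)

prompt = render_system_prompt(
    template,
    today=date(2025, 1, 15).isoformat(),  # fixed date so the output is reproducible
    user_name="Sam",
    language="English",
)
print(prompt)
```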
Overall, n8n stands out in the realm of prompt engineering. The ability to incorporate dynamic elements and complex logic makes it a clear winner in this category.
Tools
When it comes to tools, Make.com’s agents rely on scenarios that the agent has access to. This setup means that each tool is essentially a scenario, which can limit flexibility. In contrast, n8n’s agents are more straightforward and can be directly embedded on a canvas. If you need to use an agent across multiple workflows, it’s as simple as copying and pasting.
However, Make.com does offer a broader range of integrations with various services. This means that while n8n may have a simpler structure, Make.com provides more out-of-the-box modules that can be utilized. If you frequently need to hit specific APIs, Make.com might be the better choice.
Both platforms allow for HTTP requests to custom endpoints, but the ease of use varies. Users often find themselves needing to interact with APIs directly in n8n due to a lack of native tools for certain services.
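As an illustration of the kind of custom-endpoint call you would configure in an n8n HTTP Request tool or a Make.com HTTP module, here’s a small Python sketch that assembles an authenticated GET request. The base URL, path, and token are hypothetical:

```python
import urllib.parse
import urllib.request

def build_api_request(base_url: str, path: str, token: str, params: dict) -> urllib.request.Request:
    """Assemble an authenticated GET request for a hypothetical JSON API,
    the kind of call an HTTP Request tool issues on the agent's behalf."""
    query = urllib.parse.urlencode(params)
    return urllib.request.Request(
        f"{base_url}{path}?{query}",
        headers={
            "Authorization": f"Bearer {token}",  # hypothetical token
            "Accept": "application/json",
        },
    )

req = build_api_request("https://api.example.com", "/v1/search", "sk-demo", {"q": "n8n"})
print(req.full_url)
```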
In summary, while Make.com has more integrations, n8n’s straightforward approach to tools gives it an advantage in usability. The choice between the two often depends on what specific requirements you have for your projects.
Memory and Sessions
Memory management is crucial for creating effective AI agents. In n8n, you can choose between a simple memory that loads into RAM, which won’t persist beyond that session, and more durable options like Redis or Postgres chat memory. With simple memory, you can set a session key, either fixed or variable, to help track different interactions. You can also set a context window length, which dictates how many interactions the agent retains, a vital feature for maintaining conversation flow.
However, Make.com offers minimal flexibility in this area. While you can set a thread ID or session ID in the agent’s canvas configuration, the overall control remains basic. For instance, if WhatsApp is your trigger, the user’s phone number could serve as the session identifier, so the agent retains the context of the conversation across messages. The documentation is unclear about the “iterations from history” setting, but it appears to refer to the number of past interactions stored.
For beginners, Make.com’s approach is user-friendly. Many new users might not require complex memory management systems, so this simplicity works well. You can create sessions based on IDs or generate new ones if no ID is set. While Make.com isn’t bad in this aspect, it abstracts much of the process away. In contrast, n8n provides greater control but introduces more complexity. For sophisticated agents, n8n’s flexibility is likely more beneficial.
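To make the session-key and context-window ideas concrete, here’s a minimal Python sketch of per-session memory with a bounded window. It’s an illustration of the concept, not either platform’s implementation; the phone-number session key mirrors the WhatsApp example above:

```python
from collections import defaultdict, deque

class SessionMemory:
    """Per-session chat memory with a bounded context window,
    a toy version of a 'simple memory' keyed by session."""

    def __init__(self, context_window: int = 5):
        # Each session keeps only its last `context_window` turns.
        self._sessions = defaultdict(lambda: deque(maxlen=context_window))

    def add(self, session_key: str, role: str, text: str) -> None:
        self._sessions[session_key].append((role, text))

    def history(self, session_key: str) -> list:
        return list(self._sessions[session_key])

memory = SessionMemory(context_window=2)
# A WhatsApp-style setup might use the phone number as the session key.
memory.add("+15551234567", "user", "Hi!")
memory.add("+15551234567", "assistant", "Hello, how can I help?")
memory.add("+15551234567", "user", "What's on my calendar?")
print(memory.history("+15551234567"))  # only the last 2 turns survive
```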
Knowledge and RAG
Knowledge management is another area where Make.com falls short. Surprisingly, their announcements regarding AI agents didn’t even mention retrieval-augmented generation (RAG). An effective AI agent combines an LLM brain, tools for triggering actions, knowledge for informed responses, and memory. Omitting knowledge from the conversation feels like a significant oversight.
In n8n, knowledge is delivered through tools rather than being a separate building block. The vector stores are particularly useful here. For example, if you want your agent to access a Pinecone vector store, you simply click to connect and retrieve documents. This process is straightforward and offers a wealth of functionality. You can select embedding models, such as OpenAI’s text embedding model, to control how the agent processes queries.
When a query comes in, it’s transformed into an embedding, which is sent to the vector store to fetch relevant results. This data is then utilized to formulate the agent’s response. n8n also provides extensive features for loading documents into vector stores. For instance, I previously demonstrated how to inject web pages into a vector store using a specific chunking strategy. This involves breaking down the content into manageable segments, which enhances retrieval accuracy.
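The query-to-answer flow described above can be sketched in a few lines of Python. The toy character-frequency “embedding” stands in for a real embedding model, and the in-memory list stands in for a vector store like Pinecone; only the flow itself (embed the query, rank stored vectors by similarity, feed the best match to the agent) reflects the real pipeline:

```python
import math

def embed(text: str) -> list:
    """Toy embedding: a character-frequency vector. A real setup would
    call an embedding model such as OpenAI's text-embedding endpoint."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - 97] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# A stand-in for a vector store: (embedding, document) pairs.
store = [(embed(doc), doc) for doc in [
    "n8n supports vector stores as agent tools",
    "Bananas are rich in potassium",
]]

query = "which platform supports vector stores?"
ranked = sorted(store, key=lambda item: cosine(embed(query), item[0]), reverse=True)
print(ranked[0][1])  # the best match feeds the agent's final answer
```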
While Make.com has some functionality for RAG, it lacks robust chunking options. My attempts to upload Google Sheets data into a vector store resulted in each row treated as a vector without any effective chunking. The native chunking features are underwhelming, making it difficult to achieve decent retrieval results. To enhance this, Make.com should implement native modules that aid in embedding and chunking documents efficiently.
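For readers unfamiliar with chunking, here’s a minimal Python sketch of the fixed-size, overlapping strategy mentioned above. The sizes are illustrative; real document loaders expose several strategies and tunable parameters:

```python
def chunk_text(text: str, chunk_size: int = 40, overlap: int = 10) -> list:
    """Split text into fixed-size, overlapping chunks -- a minimal
    version of the chunking done before embedding documents."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping some overlap
    return chunks

doc = "Retrieval quality improves when documents are split into overlapping chunks."
for piece in chunk_text(doc, chunk_size=30, overlap=8):
    print(repr(piece))
```

The overlap means neighboring chunks share a margin of text, so a sentence cut at a chunk boundary is still retrievable in one piece.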
Output Formats
Output formats are vital for ensuring that the data produced by AI agents can be effectively utilized in workflows. Make.com positions its agents to be integrated within specific scenarios, which often demand structured outputs in JSON format. While you can define the required JSON schema in the system instructions, there’s no option to enforce it strictly.
In contrast, n8n allows you to specify an output format directly when configuring the agent. This includes the ability to add an output parser, which can handle different parsing options. For example, if you need to produce a structured JSON object, you can set that up easily. Once the output parser is configured, the agent is required to generate outputs in that specified format.
Moreover, n8n offers an auto-fixing output parser feature. If the initial output doesn’t meet the specified format, another LLM can be employed to correct it. This setup significantly enhances the reliability of outputs, which can then seamlessly integrate into subsequent modules.
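The auto-fixing pattern can be sketched in Python. The `call_llm` stub stands in for real model calls (it deliberately returns malformed JSON on the first attempt); the retry loop that feeds the error message back to the model is the essence of what an auto-fixing output parser does:

```python
import json

def call_llm(prompt: str, attempt: int) -> str:
    """Stub LLM: malformed JSON first, valid JSON on retry.
    In a real workflow this would be an actual model call."""
    if attempt == 0:
        return "Sure! Here is the data: {'city': 'Oslo'}"   # not valid JSON
    return '{"city": "Oslo", "temperature_c": 4}'

def parse_with_autofix(prompt: str, required_keys: set, max_retries: int = 2) -> dict:
    """If parsing or validation fails, re-ask the model with the
    error appended to the prompt -- the auto-fixing parser idea."""
    last_error = None
    for attempt in range(max_retries + 1):
        extra = f"\nFix this error: {last_error}" if last_error else ""
        raw = call_llm(prompt + extra, attempt)
        try:
            data = json.loads(raw)
            missing = required_keys - data.keys()
            if missing:
                raise ValueError(f"missing keys: {missing}")
            return data
        except (json.JSONDecodeError, ValueError) as exc:
            last_error = exc
    raise RuntimeError(f"could not coerce output after retries: {last_error}")

result = parse_with_autofix("Report the weather as JSON.", {"city", "temperature_c"})
print(result["city"])  # -> Oslo, recovered on the second attempt
```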
Make.com needs to improve its functionality regarding enforced output formats. Providing users with the ability to create flexible schemas would enhance the reliability of the data generated by its agents. For now, n8n has the upper hand in this category.
Multi-Agent Teams
Creating multi-agent teams can be a game-changer for complex automations. I developed a personal assistant agent that manages 25 sub-agents. This architecture allows for a highly organized and efficient workflow. For example, the main agent, named Hal 9001, uses Telegram as a trigger and can respond in the iconic voice from the movie “2001: A Space Odyssey.” This setup utilizes text-to-speech technology from the Speechify API.
The structure of this multi-agent system uses agents as tools. Each supervisor agent can trigger various sub-agents, such as email, Slack, and Twitter agents. This hierarchical architecture makes it easy to manage multiple tasks simultaneously. In theory, you could replicate this setup in Make.com by creating a director agent and embedding subordinate agents within scenarios.
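Here’s a deliberately tiny Python sketch of the supervisor pattern. A real supervisor lets the LLM choose the sub-agent; this stub routes by keyword just to show the shape of the hierarchy, and the agent names are invented:

```python
def email_agent(task: str) -> str:
    return f"[email agent] drafted: {task}"

def slack_agent(task: str) -> str:
    return f"[slack agent] posted: {task}"

# The supervisor sees sub-agents as callable tools.
SUB_AGENTS = {"email": email_agent, "slack": slack_agent}

def supervisor(request: str) -> str:
    """Toy supervisor: route by keyword. In the 'agents as tools'
    pattern, an LLM would make this routing decision instead."""
    for name, agent in SUB_AGENTS.items():
        if name in request.lower():
            return agent(request)
    return "[supervisor] no matching sub-agent"

print(supervisor("Send an email about the launch"))
```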
However, the setup in Make.com is more cumbersome due to the abstraction of AI agents outside the workflow canvas. Additionally, timeouts can pose challenges. For instance, if sub-agents take too long to execute their tasks, the main agent workflow may timeout. Make.com allows you to continue the scenario while the agent is working, but this requires specifying a webhook URL for longer tasks.
Despite these challenges, both platforms face issues with timeouts for multi-agent systems. While n8n offers more flexibility, particularly in self-hosted versions, both platforms have limitations. As I continue to explore Make.com’s AI agents, I aim to provide more insights into their capabilities in the context of multi-agent systems.
Deployment and Privacy
For deployment and privacy, I can confidently say that n8n takes the lead. You have options with n8n: run it in the cloud, self-host on your own server, or use services like Render or Railway. You can also host it with AWS or Google Cloud, or even run it locally in Docker. This flexibility offers significant advantages for data privacy. You can completely isolate your setup behind firewalls.
In contrast, Make.com restricts you to their cloud platform. While this simplicity may appeal to some, it limits your options. Make.com does have a privacy section on their website discussing privacy by design. They offer decent privacy features, like the ability to turn off logging once a scenario is active. This ensures that sensitive information isn’t saved during execution.
On their more expensive enterprise plan, Make.com offers enhanced security features such as audit logs, compliance support, and single sign-on with company-specific identity management systems. However, the lack of flexibility in deployment options makes n8n the clear winner here.
MCP
Moving on to MCP (Model Context Protocol): I previously created a video on MCP agents in n8n, using a community module that gained significant traction. Since then, n8n has introduced its own MCP client and server nodes for integration with tools like Claude Desktop or Cursor. If you’re unfamiliar with MCP, I recommend checking out my video for a deeper understanding.
With industry support, MCP is becoming the standard for building agents and connecting them with tools. Even Zapier has developed an MCP product, allowing agents to link to their vast array of modules. Make.com, however, is lagging behind and needs to roll out an MCP solution to remain competitive. Another point for n8n.
Pricing
Now let’s talk about pricing. Make.com’s pricing model is based on operations; for example, 10,000 operations on the Core plan cost $9 a month. Within their AI agents, you’ll also pay per inference to your LLM provider, and the same applies to n8n: model usage is billed by the provider on either platform.
For n8n, the open-source version is free to download and self-host, which means you only pay for server costs and are never charged per operation. On n8n Cloud, the starter plan gives you a monthly budget of 2,500 workflow executions, with unlimited steps per execution, and up to five active workflows.
Make.com allows for an unlimited number of active workflows, but if you’re running n8n agents at scale, it’s wise to set up your own server on platforms like Render or Railway. You can get your instance running for as low as five dollars a month, which is more economical than the pay-per-operation model of Make.com. Once again, n8n takes this round.
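To make the comparison concrete, here’s a back-of-the-envelope calculation in Python using the figures quoted above ($9 per 10,000 operations, $5 a month for a small server). It’s a simplification: real Make.com plans are sold in fixed tiers, and neither figure includes LLM inference costs, which you pay on both platforms:

```python
import math

def make_monthly_cost(operations: int, price_per_10k: float = 9.0) -> float:
    """Rough Make.com cost from the $9-per-10,000-operations figure.
    Real plans come in fixed tiers; this is a simplification."""
    return math.ceil(operations / 10_000) * price_per_10k

def n8n_selfhosted_cost(server_per_month: float = 5.0) -> float:
    """Self-hosted n8n: you pay only for the server, never per operation."""
    return server_per_month

# Example: 30,000 operations a month.
print(make_monthly_cost(30_000))  # 3 blocks of 10k operations at $9 each
print(n8n_selfhosted_cost())      # flat server cost, regardless of volume
```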
The Results!
So, what’s the verdict? n8n agents clearly come out on top. It’s a more mature platform with greater flexibility and a wider range of features. n8n is also keeping pace with the latest in MCP developments, while Make.com trails behind. The only area where Make.com excels is its extensive tool integrations, which n8n can’t match.
Surprisingly, I found n8n easier to use than Make.com, which I initially thought would be more beginner-friendly. Setting up agents in Make.com feels clunky, while n8n allows for a straightforward process: drop an agent node onto a canvas, hook up a chat model, connect a few tools, and you’re good to go.
Make.com might need to rethink how they structure agent creation. Integrating everything into scenarios could simplify usage significantly. Overall, n8n offers a more seamless experience, and I look forward to exploring its capabilities further.