N3uron Academy | Configuring MCP Server


Configuring MCP Server

Description

In this video of our N3uron Academy, we are going to focus on the configuration process of the MCP Server module. Let’s get started!

  • [11:09] Configuring MCP Server

Transcription

[00:00] Hello everyone, and welcome back. In the previous video, we introduced the N3uron MCP Server and reviewed its purpose and main capabilities. In this video, we are going to continue from that introduction and focus on the configuration process. As we mentioned earlier, we will use the PVDemo project for this tutorial. This project can be downloaded from the Web Vision introduction article available in our Knowledge Base. The idea is to provide a practical environment where anyone interested in testing the N3uron MCP Server can do so without needing access to a real plant. At the same time, we encourage you to take everything we cover here and apply it to your own real assets and production environments. The PVDemo project is a self-contained simulation of a photovoltaic plant, built using datasets from a real PV installation. This makes it a very useful example for learning and testing, while still reflecting realistic operating conditions.

[01:00] Once the backup file has been downloaded, go to System, then Config, then This Node, and open the Backup Manager. From there, import the backup and load it into the node. In our case, this step has already been completed, so we are ready to continue with the MCP configuration. Now let’s move into the Tags configuration so we can take a closer look at the tag model used in this project. Here, it is important to highlight that having a well-structured tag model is essential, especially when working with AI assistants through the MCP Server. The quality of the results we obtain is closely related to the quality of the context we provide. This means that tags should not only be organized in a clear and logical way, but they should also include meaningful metadata. Elements such as descriptions, engineering units, scaling information, alarm descriptions, and consistent naming conventions all play a very important role. The more context we provide through the tag model, the better the AI can reason about our installation, understand the purpose of the assets, and interpret the data correctly. This reduces ambiguity and minimizes the need for guessing, which ultimately leads to more accurate and more useful results.
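As a rough illustration of what "meaningful metadata" can look like, here is a hypothetical tag record. The field names, paths, and values are ours for illustration only, not N3uron's internal schema; the point is simply that descriptions, units, scaling, and alarm context give an AI assistant something concrete to reason about.

```python
# Hypothetical tag record with the kind of metadata that helps an AI
# assistant interpret the data (illustrative schema, not N3uron's own).
tag = {
    "path": "PVDemo/Station01/Inverter1/ActivePower",
    "description": "Inverter 1 AC active power output",
    "engineeringUnits": "kW",
    "scaling": {"rawMin": 0, "rawMax": 32767, "scaledMin": 0.0, "scaledMax": 500.0},
    "alarms": [{"name": "HighPower", "description": "Active power above rated limit"}],
}

def scale(raw, s):
    """Linear scaling from raw counts to engineering units."""
    span_raw = s["rawMax"] - s["rawMin"]
    span_eng = s["scaledMax"] - s["scaledMin"]
    return s["scaledMin"] + (raw - s["rawMin"]) * span_eng / span_raw

# A raw reading of 32767 counts maps to the 500 kW full-scale value.
print(scale(32767, tag["scaling"]))
```

With metadata like this, a value of 32767 raw counts is unambiguous: it is 500 kW of active power on a specific inverter, not just a number.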

[02:06] So in short, a strong tag structure is not only a good practice from a data modeling perspective, but also a key factor in helping AI assistants understand our assets more effectively. Next, we are going to instantiate the MCP Server module, following the usual procedure for adding a new module in N3uron. To do that, go to Config and then click on the Modules section. From there, click the menu button, create a new module, assign it a meaningful name, and then select MCP Server as the module type. As always, using a clear and descriptive module name is recommended. In this case, I have already instantiated the module beforehand, so I am not going to create it again. Instead, I will simply discard the changes and continue with the existing configuration. Now we navigate to the MCP Server connection settings. In this area, we can choose whether the server will use HTTP or HTTPS. If we enable HTTPS, we secure the communication using TLS and the corresponding certificates.

[03:01] We can also decide which network interface the server will listen on, either a specific network interface or all available network interfaces, depending on the deployment requirements. For certificate management, N3uron gives us different options. We can embed the certificates directly in the configuration, or we can provide the file path to the certificate files. Using file paths can be especially useful in scenarios where certificates are rotated periodically and need to be updated without manually embedding them again. We also have the Certificate Helper available, which simplifies the process of managing certificates. With this tool, we can generate a certificate signing request, or even create self-signed certificates automatically for testing and internal validation purposes. In this example, we are using a self-signed certificate. Once it is created, we will download it and install it in the Windows Trusted Root Certificate Store. This way, the MCP client that we will use later will rely on the operating system trust store to validate and trust the server connection. Once the certificate is installed, we move to the Access section and configure Bearer tokens.
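The client-side counterpart of this trust decision can be sketched in Python. Instead of installing the certificate in the operating system trust store, a script can also trust the downloaded certificate file explicitly; the file name below is a placeholder, and this is only one way a client might validate a self-signed server certificate.

```python
import ssl

def make_context(ca_file=None):
    """Build a client-side TLS context for connecting to the MCP Server.

    If ca_file is given (e.g. the exported self-signed certificate in PEM
    format), it is added as a trusted root, so the connection validates
    without touching the OS trust store. Without it, the default system
    trust store is used, which is what installing the certificate in the
    Windows Trusted Root store relies on.
    """
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    if ca_file:
        ctx.load_verify_locations(cafile=ca_file)
    return ctx
```

Either way, certificate verification and hostname checking stay enabled, which is the behavior you want even for self-signed test certificates.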

[04:03] Here, we define what each token can do and create different access profiles if needed. MCP is based on three primitives: tools, prompts, and resources. Resources can include files, tags, and alarms, including subscription-based resources. N3uron supports all MCP primitives, including subscriptions. Even if some clients do not fully support the whole specification yet, N3uron is already ready for it. Here we can also grant access to custom tools, prompts, and resources. For built-in categories like Tags, Alarms, System, or Modules, we can assign no access, read, or write permissions. Finally, we also add tag filters to define which parts of the tag model the MCP Client can access. Each setting includes inline help, and more information is available in our Knowledge Base. Now we move to MCP Inspector, a useful tool for testing and troubleshooting MCP servers, especially when working with custom tools. You can find the installation steps and the command we use here in our Knowledge Base.

[05:00] In this section, the goal is to explore the tools, prompts, and resources exposed by the server. Once we run MCP Inspector, we can connect to our MCP Server by selecting the Streamable HTTP transport, entering the server URL, and adding the Authorization header with the Bearer token we created in N3uron. After connecting, MCP Inspector lets us explore everything the server exposes through the protocol itself, including the available resources, prompts, and tools, along with the context and documentation needed for an AI agent to understand how to use them. We start in the Resources section. Here we can list the available resources and inspect examples such as log files and uploaded file resources, like datasheets. We can also open tag-based resources and subscribe to them, which lets us receive notifications when values change. This is especially useful for real-time monitoring scenarios, where the client needs to react to updates instead of repeatedly polling for data. Next, we move to Prompts. These are predefined instructions and queries that we can create at the server level for our specific use cases and business logic.
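Under the hood, the Streamable HTTP transport carries JSON-RPC 2.0 messages, and method names such as resources/list come from the MCP specification. A minimal sketch of how a client could build such a request in Python follows; the URL, port, and token are placeholders for your own environment, and the request is only constructed here, not sent.

```python
import json
import urllib.request

# Placeholders: replace with your server's address, port, and Bearer token.
MCP_URL = "https://localhost:3003/mcp"
TOKEN = "YOUR_BEARER_TOKEN"

def jsonrpc_request(method, params, req_id):
    """Build a JSON-RPC 2.0 POST request for the Streamable HTTP transport."""
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params,
    }).encode()
    return urllib.request.Request(
        MCP_URL,
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json, text/event-stream",
            "Authorization": f"Bearer {TOKEN}",
        },
    )

# List the resources the server exposes; a real client would send this
# with urllib.request.urlopen(req) using a suitable TLS context.
req = jsonrpc_request("resources/list", {}, 1)
```

MCP Inspector performs this same exchange for you, which is why it only needs the server URL and the Authorization header to start exploring.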

[06:00] This means users working with a conversational AI assistant do not need to write the same prompt every time. Instead, the client can reuse prompts that are already standardized and optimized for the task. This is also valuable for autonomous agents, since they can retrieve and use these prompts directly, when supported, to carry out tasks more consistently. Finally, we open the Tools section, where we can inspect the available operations exposed by the server. In this example, we use the tag_describe tool to retrieve tags under a given path, together with their metadata and, optionally, their current value, quality, and timestamp. One important detail is that tag tools support pagination. This is necessary because real tag models can contain thousands of tags, and returning everything in a single response would be inefficient and harder for clients to process. With pagination, results are delivered in manageable chunks, improving performance and making navigation through large tag models much more practical. The Knowledge Base also provides comprehensive documentation for all the available tools exposed by the MCP Server.

[07:02] There, you can find a detailed explanation of what each tool does, which parameters it accepts, how those parameters should be used, and what kind of output you can expect in return. This is very helpful not only for understanding the capabilities available to the client, but also for designing reliable integrations and troubleshooting interactions more easily. In other words, the MCP protocol already exposes a lot of self-describing context, and the Knowledge Base complements that with deeper documentation so users and developers can understand the behavior of each tool with much more clarity. Now we move to our first MCP client example using AnythingLLM, which we chose for its simplicity and user-friendly setup. Throughout these videos, we will mainly use Claude Desktop, but we will also show AnythingLLM and other clients as alternative options. To connect it to the N3uron MCP Server, we go to the Agent Skills section, open the MCP Servers area, and click the wrench icon. This opens the configuration file where we can enter the MCP connection settings, including the server URL and the authorization token.
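As a rough orientation, such a configuration entry typically follows the mcpServers convention shared by several MCP clients. Every field below is a placeholder, and the exact schema AnythingLLM expects may differ, so treat the copy-and-paste example in the Knowledge Base as the authoritative starting point.

```json
{
  "mcpServers": {
    "n3uron": {
      "type": "streamable",
      "url": "https://192.168.0.10:3003/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_TOKEN"
      }
    }
  }
}
```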

[08:02] In our Knowledge Base, you can find a configuration example that you can copy and paste as a starting point. Then, you only need to adjust the IP address, port, and token to match your own environment. In that same configuration window, users can also enable additional capabilities such as chart generation, RAG, and other supported features, depending on the use case. This makes AnythingLLM a very practical option to quickly start testing and interacting with the N3uron MCP Server. Next, we go to the LLM Provider section, where we choose whether to use a cloud-based model or a local LLM. In this example, we are using Anthropic Sonnet 4.6 with an API key. It is important to remember that the quality of the results will depend greatly on the model we choose. In general, the better the model fits our use case, the better the outcome will be. In other words, it is about choosing the right tool for the right job. For more demanding use cases, a more capable model may provide better reasoning, context handling, and tool usage. At the same time, many local LLMs can do a very good job when the use case is more specific, more predictable, or less complex.

[09:06] This gives us flexibility to balance performance, cost, privacy, and infrastructure requirements depending on the application. Now we start with a simple first prompt. In the next videos, we will gradually move to more advanced queries and more agentic setups, but for now this is enough to get started. Here, the assistant generates a clear report based on the last seven days of data. The main result is a critical fleet-wide issue: all inverter 1 units are underperforming compared to their paired inverter 2 units, with an estimated energy loss of about 5,675 kilowatt-hours. The report also suggests a likely cause. Based on the startup behavior, it points to a possible misconfiguration of the photovoltaic startup voltage parameter, which would make inverter 1 start later in the morning and stop earlier in the evening.

[10:02] In addition, the assistant provides practical on-site checks ranked by priority, so the result is not only descriptive but also actionable. We also see lower-severity findings, including a station-level spread and a KPI-related anomaly that appears to come from the derived tag calculation rather than from a real production issue. So even with a simple prompt, we already get a useful report with findings, likely causes, and recommended actions. At this point, we encourage you to verify the results by yourself and compare the findings with the actual data, to confirm that the conclusions align with what is really happening in the process. Now that your AI assistant is connected to the N3uron MCP Server, take some time to explore it, experiment with it, and get familiar with how it behaves. This is the best way to discover your own use cases and understand how to get the most value from it. We also recommend reading the documentation in more detail, so you can better understand the available tools, resources, prompts, and configuration options. Take care, and see you in the next videos, where we will continue with more advanced and exciting use cases.
