Block Editor
Key Concepts
Blocks are drag-and-drop modules that can be individually configured and manipulated through a visual editor called the Block Editor. A combination of blocks is referred to as a stack, and every block consists of a combination of nodes.
- Combination of nodes → Block
- Combination of blocks → Stack
Within the Block Editor, you have:
- LLMs: Mostly open-source, Hugging Face-compatible models.
- Data Connectors: These connect to various data sources to retrieve data dynamically during pipeline execution.
- Agents: Includes ReAct Agents and Function Calling Agents.
- Supervised Machine Learning: Models trained via the “Train a Model” section, but available in the Block Editor like other modules.
- INTELLITHING Router Engine: Every stack uses the INTELLITHING Router Engine by default to route queries to the appropriate block. However, workflows allow you to override this default behavior and create a custom flow.
Key Definitions
Blocks fall into several categories:
- LLM: Large Language Models are Hugging Face-compatible models packaged with an additional IntelliConfig file.
- TML: Traditional Machine Learning blocks are created by training a model in the "Train a Model" section. Upon successful training, the model is treated as a block and can be used in your workflow.
- Data Connectors: INTELLITHING's legacy connection mechanism, which lets a model access and integrate with external data sources and retrieve data as and when needed.
- Crawler: The Crawler/Scraper indexes multiple web pages and gives an LLM access to their content.
- RAG: Retrieval-Augmented Generation is a category with a single block that accepts multiple data formats.
- ReAct Agents: A ReAct agent is an approach in which a language model reasons through a problem step by step, breaking the task down logically; acts by calling tools or APIs, or by interacting with an environment (such as a search engine, calculator, or code executor); and observes the result of each action. This reason-act-observe cycle repeats until the task is complete. ReAct agents represent a broader shift in AI toward tool-augmented large language models and agentic workflows, enabling systems that not only generate text but also interact intelligently with their surroundings to solve complex problems.
- Function Calling Agents: Function-calling agents are AI systems powered by language models that decide when and how to call predefined functions to complete a task. Rather than generating only plain text, these agents are given access to a set of tools or APIs, each represented as a "function" with a specific name, parameters, and expected output. When prompted with a task, the agent can choose to call one or more of these functions, supply the required inputs, and then use the results to continue reasoning or to generate a final response. This lets language models move beyond static text generation and interact with external systems, such as databases, web services, or calculators, in a structured, reliable way. Function-calling agents are a key enabler of practical AI applications, such as personal assistants, data query systems, and autonomous workflows, where the model acts not only as a conversational interface but as an orchestrator of real-world actions.
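The reasoning loop behind a ReAct agent can be sketched in a few lines. This is a toy illustration, not INTELLITHING's agent implementation: the tool, the hard-coded "reasoning" rule, and the stopping condition are all hypothetical stand-ins for decisions a real agent would delegate to an LLM.

```python
# Toy ReAct loop: reason -> act (call a tool) -> observe, until done.
# A real ReAct agent uses an LLM to produce each thought and tool call; here
# the "policy" is a hard-coded rule so the example stays self-contained.

TOOLS = {
    "calculator": lambda expr: str(eval(expr)),  # hypothetical tool
}

def react_agent(task: str, max_steps: int = 5) -> str:
    observation = task
    for _ in range(max_steps):
        # Reason: decide whether a tool is needed (an LLM call in a real agent).
        if all(ch in "0123456789+-*/ ." for ch in observation):
            action, action_input = "calculator", observation
        else:
            return observation  # no tool applies; treat as the final answer
        # Act: invoke the chosen tool.
        result = TOOLS[action](action_input)
        # Observe: feed the result back into the loop.
        observation = f"The result is {result}"
    return observation

print(react_agent("2 + 3 * 4"))  # -> The result is 14
```

The loop body mirrors the cycle described above: each pass reasons about the current observation, acts through a tool, and observes the outcome before deciding whether to continue.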
Each category contains multiple blocks that can simply be dragged, dropped, and configured as needed.
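The function-calling flow described above can also be sketched end to end. The schema shape, the mocked "model", and the `get_weather` tool are all illustrative assumptions, not a specific provider's API or an INTELLITHING block.

```python
# Toy function-calling flow: tools are declared with a name, parameters, and a
# handler; a (mocked) model emits a structured function call, the runtime
# executes it, and the result feeds into the final response.
import json

def get_weather(city: str) -> str:
    """Hypothetical tool: a real one would call a weather service."""
    return f"18C and cloudy in {city}"

# Each tool is exposed as a "function" with a name, parameters, and a handler.
FUNCTIONS = {
    "get_weather": {
        "parameters": {"city": "string"},
        "handler": get_weather,
    },
}

def mock_model(prompt: str) -> str:
    """Stands in for an LLM: returns the chosen function call as JSON."""
    return json.dumps({"name": "get_weather", "arguments": {"city": "London"}})

def run_agent(prompt: str) -> str:
    call = json.loads(mock_model(prompt))          # model chooses a function
    handler = FUNCTIONS[call["name"]]["handler"]   # runtime looks it up
    result = handler(**call["arguments"])          # execute with its inputs
    return f"Answer based on tool output: {result}"

print(run_agent("What's the weather in London?"))
```

The key design point is the structured contract between model and runtime: because the model emits a named function plus typed arguments rather than free text, the runtime can execute the call reliably and return the result for further reasoning.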