Models
INTELLITHING hosts models locally. These models are Hugging Face-compatible and originate from the Hugging Face ecosystem, but the platform currently requires an IntelliConfig file for compatibility and configuration. For this reason, the team continuously curates and hosts the most reliable and well-known models in the model directory.
INTELLITHING also supports the Ollama framework, which enables powerful local workflows and seamless model deployment.
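For example, a model served by a local Ollama instance can be queried over Ollama's standard REST API. The sketch below is minimal; the model name llama3 and the default port 11434 depend on your local setup:

```python
import requests

# Query a locally running Ollama server (default port 11434).
# Assumes the model has already been pulled, e.g. via `ollama pull llama3`.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Summarise the benefits of local model hosting.",
        "stream": False,  # return a single JSON object instead of a stream
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])
```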
📚 Key Concepts
There are two categories of models supported in INTELLITHING:
- Large Language Models (LLMs)
- TML (Traditional Machine Learning)
INTELLITHING's hybrid philosophy is grounded in enterprise-grade reliability. Relying solely on LLMs for decision-making is not recommended in critical or regulated environments. Therefore, TML models are treated as tools within LLM workflows, just like other blocks or utilities. This allows for precise decision-making, auditability, and enhanced explainability when required.
INTELLITHING automates the integration, input, and output management between LLMs and TML models within your workflows.
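Conceptually, this is similar to exposing a trained model as a callable tool that an orchestrating LLM can invoke. The sketch below is purely illustrative; the Tool interface and the credit_risk_model block are hypothetical, not INTELLITHING's actual API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str        # the LLM uses this to decide when to call the tool
    fn: Callable[..., dict]

def credit_risk_model(income: float, debt: float) -> dict:
    # Stand-in for a trained TML model; a real block would load the
    # fitted model and return an auditable, explainable prediction.
    ratio = debt / max(income, 1.0)
    return {"risk": "high" if ratio > 0.4 else "low", "debt_to_income": ratio}

tools = [Tool("credit_risk", "Scores an applicant's credit risk.", credit_risk_model)]

# The orchestrating LLM selects a tool; the final decision comes from
# the explainable TML model, not from the LLM itself.
result = tools[0].fn(income=52_000, debt=26_000)
print(result)  # {'risk': 'high', 'debt_to_income': 0.5}
```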
⚠️ Unlike pre-trained LLMs, TML models require training on your own data. You can use real datasets or leverage INTELLITHING's powerful data synthesizer to generate synthetic data. For details, refer to the "Train a Model" section.
📚 Key Definitions
- LLM (Large Language Model):
  Pre-trained transformer-based models used within the block editor. In INTELLITHING workflows, LLMs primarily serve as orchestrators, integrating multiple tools and blocks, rather than being used for critical or final decision-making.
- TML (Traditional Machine Learning):
  Explainable models trained on structured, domain-specific data. TML models are optimized for narrowly scoped tasks, require feature engineering, and offer fine control over input-output behavior. They are evaluated using standard metrics such as error rates, correlation scores, and confusion matrices, offering transparency and interpretability.
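For instance, the error rate and confusion matrix of a trained classifier can be computed with standard scikit-learn calls. This is a generic sketch with toy labels; INTELLITHING reports these metrics for you during training:

```python
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # model predictions

# Error rate is simply 1 - accuracy.
print("error rate:", 1 - accuracy_score(y_true, y_pred))  # 0.25

# Rows are the true classes, columns the predicted classes.
print(confusion_matrix(y_true, y_pred))  # [[3 1], [1 3]]
```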
INTELLITHING's AutoML engine automates the entire ML pipeline, from data preprocessing and model selection to hyperparameter optimization, model explainability, and resource management. Once trained, the TML model is available as a drag-and-drop block in the block editor.
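A hand-rolled scikit-learn pipeline gives a rough feel for the steps the AutoML engine automates. This is a simplified illustration, not the engine itself:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Preprocessing and model bundled into one pipeline.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", RandomForestClassifier(random_state=0)),
])

# Hyperparameter optimization over a small grid with cross-validation.
param_grid = {"model__n_estimators": [50, 100], "model__max_depth": [3, None]}
search = GridSearchCV(pipeline, param_grid, cv=5)
search.fit(X_train, y_train)

print(search.best_params_)
print("held-out accuracy:", search.score(X_test, y_test))
```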
⚙️ LLM Configuration
Just like any other block, an LLM block can be configured by clicking on it within the block editor. This opens a configuration prompt with several customizable options.
ℹ️ If you're using INTELLITHING v4.0 or later, instance selection (CPU/GPU) is done via the project card under the deployment & utilities section.
Configuration Options
- GPU/CPU Selection
  Define the number and type of compute resources to use for this deployment.
- Chat Engine Toggle
  Select this if you are building a chat application instead of a general-purpose workflow automation. When enabled, INTELLITHING will automatically attach the Chat Engine and Chat Memory blocks to your deployment.
a. Chat Engine
A high-level interface enabling multi-turn conversations with your data. It supports context-aware interactions, allowing users to follow up on previous queries. It works by managing message flow and integrating with LLMs and data sources to generate coherent, contextual responses.
b. Chat Memory
Manages the history of interactions within a session. It ensures that previous messages are stored and retrievable, allowing the chat engine to incorporate past context into its replies.
Memory types like ChatMemoryBuffer allow token-limited history retention and message summarization to manage long conversations.
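If the Chat Memory block is backed by LlamaIndex's ChatMemoryBuffer (an assumption based on the class name), token-limited retention is configured roughly as follows:

```python
from llama_index.core.memory import ChatMemoryBuffer

# Keep at most ~1500 tokens of history; older turns are dropped
# (or summarized, depending on the memory type) once the limit is hit.
memory = ChatMemoryBuffer.from_defaults(token_limit=1500)

# A chat engine built over your data can then reuse this memory, e.g.:
# chat_engine = index.as_chat_engine(chat_mode="context", memory=memory)
```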
⚙️ TML Configuration
Once a Traditional Machine Learning model has been trained, you can drag and drop it as a block in the editor. Clicking on the block opens a configuration prompt that dynamically displays the model's input and output variables along with editable description fields.
INTELLITHING's auto-formatting system uses this metadata to intelligently route inputs and outputs, even in complex, chat-heavy environments where data may be unstructured or semi-structured.
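As a purely illustrative sketch of metadata-driven routing (all names here are hypothetical, not the platform's internals):

```python
# Hypothetical block metadata, as edited in the configuration prompt.
block_meta = {
    "inputs": {
        "income": "Applicant's gross annual income in USD",
        "debt":   "Applicant's total outstanding debt in USD",
    },
    "outputs": {"risk": "Predicted risk band: low / high"},
}

def route_inputs(message_fields: dict, meta: dict) -> dict:
    # Match fields extracted from an unstructured chat message
    # against the declared input variables of the TML block.
    return {name: message_fields[name] for name in meta["inputs"] if name in message_fields}

print(route_inputs({"income": 52000, "debt": 26000, "note": "ignored"}, block_meta))
# {'income': 52000, 'debt': 26000}
```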