Logs
🧠 Philosophy
While Tracing gives you a structured, span-based timeline of what happened during a request, the Log View complements it by exposing the raw stream of system logs, straight from the deployed pod running your workflow or application.
Logs provide immediate visibility into what your app is printing, whether it's debug output, tracebacks, status updates, or standard `print()` output.
Where tracing answers "What did INTELLITHING do?", logs answer:
- What did my app say?
- Are there errors, crashes, or warnings in real-time?
- Is my custom code running as expected?
The Logs tab gives you a live connection to the underlying pod, so you can watch logs as they happen, with no need to SSH in, run kubectl, or connect to external logging tools.
📘 Key Concepts
Concept | Description |
---|---|
Pod | The container (running on Kubernetes) that hosts your application instance |
Log Stream | The real-time output of stdout and stderr from your running app |
Search Loop | The system automatically searches for the correct pod and connects to it |
Reconnection | If the pod crashes or restarts, Helm attempts to reconnect automatically |
🔑 Key Definitions
- Live Logs: Real-time messages from your app, streamed via WebSocket.
- Search Loop: The client periodically looks for a `Running` pod behind your deployment.
- Pod Binding: Once a pod is found, the system streams logs from that specific container.
- Ping: A heartbeat system ensures that logs are still streaming correctly.
- Auto-Retry: If the pod dies, the system searches again and reconnects to the next available one (sketched below).
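To make these concepts concrete, here is a rough sketch, written with the official `kubernetes` Python client, of what the search loop, pod binding, and auto-retry amount to. It is purely illustrative: INTELLITHING does all of this for you, and the function name, namespace, and label selector below are made up for the example.

```python
# Illustrative sketch only: INTELLITHING performs this discovery and streaming for you.
# The function name, namespace, and label selector are hypothetical.
import time

from kubernetes import client, config


def stream_deployment_logs(namespace: str, label_selector: str) -> None:
    config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
    core = client.CoreV1Api()

    while True:  # auto-retry: never give up on the deployment
        pods = core.list_namespaced_pod(namespace, label_selector=label_selector)
        running = [p for p in pods.items if p.status.phase == "Running"]

        if not running:  # search loop: keep looking until a Running pod exists
            time.sleep(2)
            continue

        pod_name = running[0].metadata.name  # pod binding: stream from this specific container
        try:
            resp = core.read_namespaced_pod_log(
                pod_name, namespace, follow=True, _preload_content=False
            )
            for chunk in resp.stream():  # live stdout/stderr stream
                print(chunk.decode(errors="replace"), end="")
        except Exception:
            time.sleep(2)  # pod died or restarted: go back to searching
```

The key idea is that the loop never binds to a pod name permanently; it always re-discovers the current `Running` pod, which is why restarts are survivable.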
Real Example
- You click the Logs tab for your deployed app.
- INTELLITHING searches for a running pod.
- Once found, a connection message appears in the log panel.
- You now see lines like:
```
2025-09-03 13:14:02 | INFO | Starting model inference...
2025-09-03 13:14:02 | WARNING | Deprecated param 'temperature' used.
2025-09-03 13:14:03 | ERROR | Exception during handler execution: TimeoutError
```
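Lines in this shape would come out of an ordinary Python `logging` setup; the format string below is just one way to reproduce them and is not something INTELLITHING requires:

```python
# One way (of many) to produce log lines in the format shown above.
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s | %(levelname)s | %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
)

logging.info("Starting model inference...")
logging.warning("Deprecated param 'temperature' used.")
```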
If the pod dies or restarts, the stream pauses briefly while INTELLITHING searches for the new pod and reconnects, then logs continue from the replacement pod.
⚙️ How the Log View Fits into INTELLITHING
The Logs tab is available next to Trace in Helm.
It is designed for:
- Live debugging: Watch logs stream as your app handles real traffic
- Crash inspection: See `print()` output or error stack traces right after a failure
- Validation: Confirm your model, script, or DSL behavior matches expectations
- Minimal friction: No kubectl, no DevOps setup; logs work out of the box
The Logs tab is tightly integrated into the deployment lifecycle:
- It auto-discovers the right pod behind your deployment
- It keeps the connection alive using heartbeats (see the sketch after this list)
- If the pod changes (e.g. restarts), it reconnects seamlessly
- You don't have to know Kubernetes at all; it's abstracted away
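For intuition, the client side of such a stream behaves roughly like the asyncio sketch below. The endpoint URL and the one-message-per-log-line framing are assumptions for illustration, not INTELLITHING's actual API.

```python
# Hypothetical client-side sketch of a heartbeat-protected log stream.
# The URL and message framing are assumptions, not INTELLITHING's real protocol.
import asyncio

import websockets


async def follow_logs(url: str) -> None:
    while True:  # if the pod (and its stream) goes away, loop around and reconnect
        try:
            async with websockets.connect(
                url, ping_interval=20, ping_timeout=10  # heartbeat pings detect dead connections
            ) as ws:
                async for line in ws:  # treat each message as one log line
                    print(line)
        except (websockets.ConnectionClosed, OSError):
            await asyncio.sleep(2)  # brief pause, then try again


# asyncio.run(follow_logs("wss://example.invalid/logs/my-app"))  # placeholder URL
```

The outer `while True` is what makes reconnection seamless: when the pod goes away the connection closes, the loop pauses briefly, and the next connect attempt binds to whichever pod is now running.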
🚀 Why It Matters
- No Setup Required: You don't need to deploy Loki, Prometheus, or any infra tools.
- Live Feedback: Watch your app's log lines update in real time.
- Faster Debugging: See tracebacks, warnings, and printouts instantly.
- Tied to Deployment: You're always seeing the logs for the correct running pod.
📝 How to Use It
- Open your app/workflow in Helm
- Navigate to the Logs tab (next to Trace)
- Wait for the connection message
- Logs start appearing immediately
💡 Logs reflect your app's `print()`, `logging.info()`, `logging.error()`, etc., as in the short example below.
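For example, every line emitted by a hypothetical handler like this one would appear in the Logs tab (the function name and messages are made up):

```python
# Hypothetical handler: everything it writes to stdout/stderr shows up in the Logs tab.
import logging

logging.basicConfig(level=logging.INFO)  # minimal config so INFO lines are emitted
logger = logging.getLogger(__name__)


def handle_request(payload: dict) -> dict:
    print("received payload:", payload)   # plain print() output is streamed
    logger.info("starting inference")     # logging output is streamed too
    logger.warning("this is a warning")   # including warnings and errors
    return {"echo": payload}              # stand-in for real model work
```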
🛠️ Best Practices
Tip | Why |
---|---|
Use `logging`, not `print()` | Structured logs give you timestamps, levels, and file info |
Add try/except around model handlers | You'll catch tracebacks that show up in logs |
Write one line per log | Avoid multiline logs for easier reading |
Use different levels (INFO, WARNING, ERROR) | This makes scanning logs much easier |
Tail logs live while testing | Run a sample request, then watch logs stream in |
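Put together, those tips might look like this inside a handler. `run_model` and `handle` are hypothetical names, and the format string is just one sensible choice:

```python
# Sketch of the practices above: structured format, explicit levels, one line per
# message, and a try/except that turns crashes into readable tracebacks.
# run_model() and handle() are hypothetical names for this example.
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s | %(levelname)s | %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
)
logger = logging.getLogger(__name__)


def run_model(prompt: str) -> str:
    return prompt.upper()  # stand-in for real model work


def handle(prompt: str) -> str:
    logger.info("handling request")  # INFO for normal progress
    if not prompt:
        logger.warning("empty prompt received")  # WARNING for suspicious input
    try:
        return run_model(prompt)
    except Exception:
        logger.exception("model handler failed")  # ERROR level plus the full traceback
        raise
```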
🧭 When to Use Trace vs Logs
Situation | Use Trace | Use Logs |
---|---|---|
See input/output structure | ✅ | |
Investigate crash inside a model | | ✅ |
Debug DSL behavior | ✅ | ✅ |
Look at custom print statements | | ✅ |
Validate workflow routing decisions | ✅ | |
Monitor model stream response chunks | ✅ | |
Spot uncaught Python exceptions | | ✅ |