Integrating Signals with Cost Tracking
Overview
This guide shows you how to integrate cost tracking with signals in your AI agent workflow. By linking these two together, you can:
- Automatically attribute AI provider costs (OpenAI, Anthropic, etc.) to specific business events
- Track the true cost of achieving business outcomes (e.g., booking a meeting, sending an email, processing an image or a document)
- Generate accurate invoices that reflect both usage-based pricing and actual AI costs
Prerequisites
Before you begin, make sure you have:
- Created an API key in the Paid dashboard
- Installed the Paid SDK for your language
- Configured at least one Agent with signals
- Set up your customer and order records
Note: This feature is currently available in Python, Node.js, and Java SDKs. Support for Go and Ruby is coming soon.
How It Works
In a typical AI agent workflow, your event processing logic might look like this:
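For example, a bare handler might call the provider SDK directly and act on the result. Here is an illustrative sketch (the event fields and the `book_meeting` helper are stand-ins), with no cost tracking yet:

```python
from openai import OpenAI

openai_client = OpenAI()

def book_meeting(sender: str) -> None:
    """Illustrative stub for the downstream business action."""
    print(f"Booking a meeting with {sender}")

def process_email_reply(event: dict) -> None:
    # Classify the reply with an AI call -- not cost-tracked yet.
    completion = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Classify this reply: {event['body']}"}],
    )
    if "meeting" in completion.choices[0].message.content.lower():
        book_meeting(event["sender"])
```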
To link AI costs with signals, you need to:
- Wrap your AI provider calls with Paid wrappers or hooks (e.g., `PaidOpenAI`, `PaidAnthropic`)
- Wrap your entire event processing function in a tracing context (the `@paid_tracing()` decorator or context manager)
- Emit the signal within the same trace with the `signal()` function and `enable_cost_tracing=True`
This ensures all AI calls and the signal share the same OpenTelemetry trace, allowing Paid to link costs to your business outcome.
Important: Cost Tracing Rules
Only enable cost tracing for ONE signal per trace
If you emit multiple signals within a single trace and enable cost tracing for more than one, the costs will be double-counted across multiple signals. The pattern should be:
- Multiple AI calls + ONE signal with cost tracing = Costs attributed to that signal
- Multiple signals = Only one should have `enable_cost_tracing=True` (see the sketch below)
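For example, a minimal sketch of a trace that emits two signals but enables cost tracing on only one (the import path and exact signatures are assumptions; check the SDK reference):

```python
from paid.tracing import paid_tracing, signal  # illustrative import path

def run_ai_steps(event: dict) -> None:
    """Placeholder for several wrapped AI calls."""

@paid_tracing(external_customer_id="cust_123", external_agent_id="agent_sdr")
def handle_event(event: dict) -> None:
    run_ai_steps(event)
    signal("draft_created")                         # intermediate: no costs attached
    signal("email_sent", enable_cost_tracing=True)  # final: all trace costs attach here
```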
Auto-Instrumentation (OpenTelemetry Instrumentors)
For maximum convenience, you can use OpenTelemetry auto-instrumentation to automatically track costs without modifying your AI library calls. This approach uses official OpenTelemetry instrumentors for supported AI libraries.
Quick Start
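A minimal Python sketch. The `paid_autoinstrument()` entry point is the one named in Troubleshooting below; its import path and the decorator kwargs are assumptions to verify against the SDK reference:

```python
from openai import OpenAI
from paid.tracing import paid_tracing, signal, paid_autoinstrument  # illustrative paths

# Call once at startup, before creating AI client instances.
paid_autoinstrument()

client = OpenAI()  # plain client -- no Paid wrapper needed

@paid_tracing(external_customer_id="cust_123", external_agent_id="agent_sdr")
def summarize(text: str) -> str:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Summarize: {text}"}],
    )
    signal("summary_generated", enable_cost_tracing=True)
    return completion.choices[0].message.content
```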
Supported Libraries
Auto-instrumentation supports the following AI libraries:
Python:
Java: AWS Bedrock (via `BedrockInstrumentor`; supports Converse and InvokeModel)
Note: Node.js auto-instrumentation support is coming soon.
Selective Instrumentation
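If you only need some libraries instrumented, the call might take a filter. Whether `paid_autoinstrument()` accepts such an argument, and what it is named, is an assumption; confirm against the SDK reference:

```python
from paid.tracing import paid_autoinstrument  # illustrative import path

# Hypothetical filter parameter -- verify the real signature in the SDK docs.
paid_autoinstrument(libraries=["openai", "anthropic"])
```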
How It Works
- Auto-instrumentation uses official OpenTelemetry instrumentors for each AI library
- It automatically wraps library calls without requiring you to use Paid wrapper classes
- Works seamlessly with the `@paid_tracing()` decorator or context manager
- Costs are tracked in the same way as when using manual wrappers
- Should be called once during application startup, typically before creating AI client instances
Basic Integration Pattern
Here’s the recommended pattern for integrating cost tracking with signals:
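A Python sketch of the pattern in decorator and context-manager form. Import paths, decorator kwargs, and the `signal()` signature are assumptions drawn from the names used in this guide:

```python
from openai import OpenAI
from paid.tracing import paid_tracing, signal     # illustrative import paths
from paid.tracing.wrappers import PaidOpenAI

client = PaidOpenAI(OpenAI())  # wrapped client: costs land in the active trace

@paid_tracing(external_customer_id="cust_123", external_agent_id="agent_sdr")
def process_event(event: dict) -> None:
    # 1. Every AI call inside the trace is cost-tracked via the wrapper.
    client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": event["prompt"]}],
    )
    # 2. Emit ONE signal with cost tracing so the costs attach to it.
    signal("event_processed", enable_cost_tracing=True)

# Equivalent context-manager form, for code you cannot decorate:
def process_event_cm(event: dict) -> None:
    with paid_tracing(external_customer_id="cust_123", external_agent_id="agent_sdr"):
        client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": event["prompt"]}],
        )
        signal("event_processed", enable_cost_tracing=True)
```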
Real-World Example: AI SDR Agent
Let’s walk through a complete example of an AI SDR agent that processes email replies and attributes costs to the appropriate signal.
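A Python sketch of such an agent. The classification prompt, event fields, and `send_email` stub are illustrative; note that each branch emits exactly one cost-enabled signal:

```python
from openai import OpenAI
from paid.tracing import paid_tracing, signal     # illustrative import paths
from paid.tracing.wrappers import PaidOpenAI

client = PaidOpenAI(OpenAI())  # wrapped client: token costs land in the active trace

def send_email(to: str, body: str) -> None:
    """Illustrative stub for your outbound email integration."""
    print(f"Sending to {to}: {body[:60]}...")

@paid_tracing(external_customer_id="cust_123", external_agent_id="agent_sdr")
def handle_email_reply(event: dict) -> None:
    # Step 1: classify the reply (AI call #1, cost-tracked).
    intent = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Classify as meeting_requested/question/not_interested: "
                       + event["body"],
        }],
    ).choices[0].message.content.strip()

    if intent == "meeting_requested":
        # All costs in this trace attach to the booking outcome.
        signal("meeting_booked", enable_cost_tracing=True)
    elif intent == "question":
        # Step 2: draft an answer (AI call #2, also cost-tracked).
        answer = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": "Answer: " + event["body"]}],
        ).choices[0].message.content
        send_email(event["sender"], answer)
        signal("reply_sent", enable_cost_tracing=True)
```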
Event Loop Integration
Most AI agents run in an event loop, processing events continuously. Here’s how to integrate cost tracking in that pattern:
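Because the handler is decorated, every call opens a fresh trace, so costs never leak between events. The `poll_events()` source below is a stand-in for your queue or webhook feed:

```python
import time

from paid.tracing import paid_tracing, signal  # illustrative import paths

def poll_events() -> list[dict]:
    """Illustrative stub -- replace with your queue/webhook source."""
    return []

@paid_tracing(external_customer_id="cust_123", external_agent_id="agent_sdr")
def process_event(event: dict) -> None:
    # ... wrapped AI calls for this event go here ...
    signal("event_processed", enable_cost_tracing=True)

while True:
    for event in poll_events():
        process_event(event)  # one trace per event
    time.sleep(1)
```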
Passing User Metadata
You can attach custom metadata to your traces by passing a metadata dictionary to the paid_tracing() decorator or context manager. This metadata will be stored with the trace and can be used to filter and query traces later.
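A sketch of both forms, passing the metadata dictionary described above (the other kwargs are assumptions):

```python
from paid.tracing import paid_tracing, signal  # illustrative import paths

@paid_tracing(
    external_customer_id="cust_123",
    external_agent_id="agent_sdr",
    metadata={"campaign": "q3_outbound", "region": "emea"},
)
def process_event(event: dict) -> None:
    # ... AI calls ...
    signal("event_processed", enable_cost_tracing=True)

# Context-manager form:
def process_event_cm(event: dict) -> None:
    with paid_tracing(
        external_customer_id="cust_123",
        metadata={"campaign": "q3_outbound", "region": "emea"},
    ):
        signal("event_processed", enable_cost_tracing=True)
```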
Querying Traces by Metadata
Once you’ve added metadata to your traces, you can filter traces using the metadata parameter in the traces API endpoint:
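For example, with Python's `requests`. The endpoint URL and filter syntax below are placeholders; take the real path, auth scheme, and parameter format from the Paid API reference:

```python
import requests

resp = requests.get(
    "https://api.paid.ai/v1/traces",                     # placeholder URL
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    params={"metadata": '{"campaign": "q3_outbound"}'},  # placeholder filter syntax
)
print(resp.json())
```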
Advanced: Distributed Tracing
Sometimes your agent workflow cannot fit into a single traceable function because the work is split into disjoint parts:
- Complex concurrency logic
- Asynchronous workflows with callbacks
- Work distributed across multiple processes or machines
- Event-driven architectures with message queues
For these cases, Paid provides tracing tokens that allow you to link costs and signals across distributed parts of your application.
How Distributed Tracing Works
- Generate a tracing token in one part of your system
- Pass the token to other parts (via API, message queue, database, etc.)
- Pass the token to the `@paid_tracing()` decorator or context manager when executing traced code
- All traces that share the same token are linked together, so signals can be associated with the costs
Note: This feature is currently available in the Python SDK only. Node.js support is coming soon.
Using tracing_token Parameter (Recommended)
The simplest approach is to pass the tracing token directly to the `@paid_tracing()` decorator or context manager. This scopes the token to just that specific trace.
Process 1: Generate Token
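A sketch of the first process. The `generate_tracing_token()` helper name is an assumption, and `save_to_queue()` stands in for whatever transport you use:

```python
from paid.tracing import paid_tracing, generate_tracing_token  # hypothetical helper name

def save_to_queue(message: dict) -> None:
    """Illustrative transport -- use your real queue, database, or API."""
    print("enqueued:", message)

token = generate_tracing_token()

# Stage one runs under the token; later stages reuse the same token.
with paid_tracing(external_customer_id="cust_123", tracing_token=token):
    pass  # wrapped AI calls for stage one go here

save_to_queue({"workflow_id": "wf_42", "tracing_token": token})
```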
Process 2: Use Token
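And a sketch of the second process, reusing the token via the `tracing_token` parameter and ending with the single cost-enabled signal:

```python
from paid.tracing import paid_tracing, signal  # illustrative import paths

def handle_message(message: dict) -> None:
    # The shared token links this trace with the one from process 1.
    with paid_tracing(
        external_customer_id="cust_123",
        external_agent_id="agent_sdr",
        tracing_token=message["tracing_token"],
    ):
        # ... wrapped AI calls for stage two ...
        signal("workflow_completed", enable_cost_tracing=True)
```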
Best Practices for Distributed Tracing
- **Generate once, use everywhere**: Generate the token in the first part of your workflow, then pass and set it in all other parts
- **Store tokens with workflow identifiers**: Use a correlation ID to link the token with your workflow
- **Link costs to the final signal**: Emit one signal with `enable_cost_tracing=True` at the end of your distributed workflow
- **Use descriptive workflow IDs**: Make it easy to debug by using meaningful identifiers
Supported AI Provider Wrappers and Hooks
The following AI provider wrappers and hooks are available for cost tracking:
Python SDK
- `PaidOpenAI` / `PaidAsyncOpenAI` - OpenAI API
- `PaidOpenAIAgentsHook` - OpenAI Agents hook
- `PaidAnthropic` / `PaidAsyncAnthropic` - Anthropic Claude API
- `PaidMistral` - Mistral AI API
- `PaidBedrock` - AWS Bedrock
- `PaidGemini` - Google Gemini
- `PaidLlamaIndexOpenAI` - LlamaIndex with OpenAI
- `PaidLangChainCallback` - LangChain hook
Node.js SDK
- `PaidOpenAI` - OpenAI API (import from `@paid-ai/paid-node/openai`)
- `PaidAnthropic` - Anthropic Claude API (import from `@paid-ai/paid-node/anthropic`)
- `PaidMistral` - Mistral AI API (import from `@paid-ai/paid-node/mistral`)
- `PaidLangChainCallback` - LangChain callback handler (import from `@paid-ai/paid-node/langchain`)
- Vercel AI SDK wrapper (import from `@paid-ai/paid-node/vercel`)
Java SDK
- `BedrockInstrumentor` - AWS Bedrock auto-instrumentation (supports Converse, InvokeModel)
Best Practices
1. One Signal, One Outcome
Each trace should represent a single logical business outcome with one cost-enabled signal:
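A sketch of the "good" shape: one traced function per outcome, ending in a single cost-enabled signal (imports and kwargs are illustrative):

```python
from paid.tracing import paid_tracing, signal  # illustrative import paths

# Good: one trace == one business outcome == one cost-enabled signal.
@paid_tracing(external_customer_id="cust_123", external_agent_id="agent_sdr")
def book_meeting(event: dict) -> None:
    # ... all AI calls needed for this outcome ...
    signal("meeting_booked", enable_cost_tracing=True)

# Avoid bundling two outcomes (e.g., booking AND invoicing) into one trace;
# give each outcome its own traced function instead.
```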
2. Wrap the Entire Event Handler
Make sure your tracing context includes all AI calls and the signal:
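A sketch with both the AI call and the signal inside the same traced scope (imports and kwargs are illustrative, as before):

```python
from openai import OpenAI
from paid.tracing import paid_tracing, signal     # illustrative import paths
from paid.tracing.wrappers import PaidOpenAI

client = PaidOpenAI(OpenAI())

@paid_tracing(external_customer_id="cust_123", external_agent_id="agent_sdr")
def handle_event(event: dict) -> None:
    # The AI call happens inside the traced scope, so it is captured...
    client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Draft a reply to: {event['body']}"}],
    )
    # ...and the signal shares its trace, linking the costs to it.
    signal("email_sent", enable_cost_tracing=True)

# Avoid making AI calls before entering the trace -- those costs would not
# be linked to the signal.
```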
Troubleshooting
Costs Not Appearing in Dashboard
If costs aren’t showing up linked to your signals:
- Check tracing is initialized:
  - Python: Use the `@paid_tracing` decorator or context manager (tracing auto-initializes)
  - Java: Call `PaidTracing.initialize()` before making any traced calls
- Verify instrumentation is enabled:
  - Python: Make sure you're using the `PaidOpenAI` wrapper or `paid_autoinstrument()`
  - Java: Call `BedrockInstrumentor.instrument()` after initializing tracing
- Confirm signal is in trace: The signal must be emitted within the traced function/scope
- Check the `enable_cost_tracing` flag:
  - Python: Make sure it's set to `True` for the signal
  - Java: Make sure the second parameter is `true` in `PaidTracing.signal()`
- Verify `external_agent_id`: This is required when emitting signals
Double-Counted Costs
If costs appear multiple times:
- Check that only ONE signal per trace has `enable_cost_tracing=True`
- Make sure you're not accidentally calling the traced function multiple times
Missing AI Provider Costs
If some AI calls aren’t being tracked:
- Python: Ensure all AI client calls are using the Paid wrapper or auto-instrumentation
- Java: Ensure `BedrockInstrumentor.instrument()` is called before creating the Bedrock client
- Check that the wrapper/instrumentor supports your AI provider (see supported wrappers above)
- Verify instrumentation is set up before entering the trace
Need Help?
- Email: support@paid.ai
- Community: Join `#paid-developers` on Slack