Integrating Signals with Cost Tracking

Overview

This guide shows you how to integrate cost tracking with signals in your AI agent workflow. By linking these two together, you can:

  • Automatically attribute AI provider costs (OpenAI, Anthropic, etc.) to specific business events
  • Track the true cost of achieving business outcomes (e.g., booking a meeting, sending an email, processing an image or a document)
  • Generate accurate invoices that reflect both usage-based pricing and actual AI costs

Prerequisites

Before you begin, make sure you have:

  • Created an API key in the Paid dashboard
  • Installed the Paid SDK for your language
  • Configured at least one Agent with signals
  • Set up your customer and order records

Note: This feature is currently available in Python, Node.js, and Java SDKs. Support for Go and Ruby is coming soon.

How It Works

In a typical AI agent workflow, your event processing logic might look like this:

1. Receive event (e.g., "prospect replied to email")
2. Process event → make AI calls to generate response
3. Send response
4. Send signal to Paid

To link AI costs with signals, you need to:

  1. Wrap your AI provider calls with Paid wrappers or hooks (e.g., PaidOpenAI, PaidAnthropic)
  2. Wrap your entire event processing function in a tracing context (@paid_tracing() decorator or context manager)
  3. Emit the signal within the same trace using the signal() function with enable_cost_tracing=True

This ensures all AI calls and the signal share the same OpenTelemetry trace, allowing Paid to link costs to your business outcome.
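In skeleton form, the pattern looks like this (a minimal sketch; the full version is shown under Basic Integration Pattern below):

from paid.tracing import paid_tracing, signal
from paid.tracing.wrappers import PaidOpenAI
from openai import OpenAI

openai_client = PaidOpenAI(OpenAI())  # step 1: wrapped client

@paid_tracing("customer_id", "agent_id")  # step 2: tracing context
def handle_event(event):
    ...  # AI calls made through openai_client are traced
    signal("outcome_reached", enable_cost_tracing=True)  # step 3: one cost-enabled signal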

Important: Cost Tracing Rules

Only enable cost tracing for ONE signal per trace

If you emit multiple signals within a single trace and enable cost tracing for more than one, the costs will be double-counted across multiple signals. The pattern should be:

  • Multiple AI calls + ONE signal with cost tracing = Costs attributed to that signal
  • Multiple signals = Only one should have enable_cost_tracing=True
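For example, when a single trace emits several signals, only the outcome signal should carry the costs:

from paid.tracing import signal

# Inside one traced function:
signal("draft_generated", enable_cost_tracing=False)  # progress event, no costs
signal("email_sent", enable_cost_tracing=True)        # outcome event, all trace costs attributed here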

Auto-Instrumentation (OpenTelemetry Instrumentors)

For maximum convenience, you can use OpenTelemetry auto-instrumentation to automatically track costs without modifying your AI library calls. This approach uses official OpenTelemetry instrumentors for supported AI libraries.

Quick Start

from paid import Paid
from paid.tracing import paid_tracing, paid_autoinstrument, initialize_tracing
from openai import OpenAI

# Initialize Paid SDK
client = Paid(token="PAID_API_KEY")
initialize_tracing()

# Instruments all available libraries: anthropic, gemini, openai, openai-agents, bedrock, langchain
paid_autoinstrument()

# Now all OpenAI calls will be automatically traced
openai_client = OpenAI(api_key="<OPENAI_API_KEY>")

@paid_tracing("your_external_customer_id", "your_external_agent_id")
def chat_with_gpt():
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    return response

chat_with_gpt()  # Costs are automatically tracked!

Supported Libraries

Auto-instrumentation supports the following AI libraries:

Python:

  • anthropic - Anthropic SDK
  • gemini - Google Generative AI (google-generativeai)
  • openai - OpenAI Python SDK
  • openai-agents - OpenAI Agents SDK
  • bedrock - AWS Bedrock (boto3)
  • langchain - LangChain framework

Java:

  • bedrock - AWS Bedrock

Note: Node.js auto-instrumentation support is coming soon.

Selective Instrumentation

from paid.tracing import paid_autoinstrument

# Instrument only Anthropic and OpenAI
paid_autoinstrument(libraries=["anthropic", "openai"])

How It Works

  • Auto-instrumentation uses official OpenTelemetry instrumentors for each AI library
  • It automatically wraps library calls without requiring you to use Paid wrapper classes
  • Works seamlessly with @paid_tracing() decorator or context manager
  • Costs are tracked in the same way as when using manual wrappers
  • Should be called once during application startup, typically before creating AI client instances

Basic Integration Pattern

Here’s the recommended pattern for integrating cost tracking with signals:

from paid.tracing import paid_tracing, signal, initialize_tracing
from paid.tracing.wrappers import PaidOpenAI
from openai import OpenAI

initialize_tracing()

# Wrap your AI provider client
openai_client = PaidOpenAI(OpenAI(api_key="<OPENAI_API_KEY>"))

@paid_tracing("customer_123", "agent_123")
def process_event(event):
    """
    Process an agent event - wrapped in tracing context
    """
    # Step 1: Make AI calls (automatically traced)
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are an AI SDR assistant"},
            {"role": "user", "content": event.content}
        ]
    )

    # Step 2: Your business logic
    send_response(response.choices[0].message.content)

    # Step 3: Emit signal with cost tracing enabled
    signal(
        event_name="email_sent",
        data={"response_length": len(response.choices[0].message.content)},
        enable_cost_tracing=True  # Link costs to this signal
    )

# Call your event processor
process_event(incoming_event)

Real-World Example: AI SDR Agent

Let’s walk through a complete example of an AI SDR agent that processes email replies and attributes costs to the appropriate signal.

from paid.tracing import paid_tracing, signal, paid_autoinstrument, initialize_tracing
from openai import OpenAI

initialize_tracing()

# Instrument OpenAI before creating the OpenAI client
paid_autoinstrument()
openai_client = OpenAI(api_key="<OPENAI_API_KEY>")
# Alternatively, use the OpenAI wrapper instead of auto-instrumentation:
# from paid.tracing.wrappers import PaidOpenAI
# openai_client = PaidOpenAI(OpenAI(api_key="<OPENAI_API_KEY>"))

@paid_tracing("acme_account_123", "ai_sdr_agent")
def handle_email_reply(email):
    """
    Handle incoming email reply from prospect
    """
    # AI call #1: Analyze sentiment
    sentiment_response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Analyze email sentiment"},
            {"role": "user", "content": email.body}
        ]
    )

    sentiment = sentiment_response.choices[0].message.content

    # AI call #2: Generate response
    reply_response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": f"Generate reply. Sentiment: {sentiment}"},
            {"role": "user", "content": email.body}
        ]
    )

    reply_text = reply_response.choices[0].message.content

    # Business logic: Send the email
    send_email(
        to=email.sender,
        subject=f"Re: {email.subject}",
        body=reply_text
    )

    # Emit signal - costs from BOTH AI calls will be attributed here
    signal(
        event_name="email_reply_processed",
        data={
            "sentiment": sentiment,
            "reply_length": len(reply_text)
        },
        enable_cost_tracing=True
    )

    # Optional: Emit an additional signal WITHOUT cost tracing
    # (to track the event without duplicating costs)
    signal(
        event_name="email_sentiment_analyzed",
        data={"sentiment": sentiment},
        enable_cost_tracing=False  # Don't link costs to this one
    )

# Process incoming email
handle_email_reply(incoming_email)

Event Loop Integration

Most AI agents run in an event loop, processing events continuously. Here’s how to integrate cost tracking in that pattern:

from paid.tracing import paid_tracing, signal, initialize_tracing
from paid.tracing.wrappers import PaidOpenAI
from openai import OpenAI

initialize_tracing()

# Initialize once
openai_client = PaidOpenAI(OpenAI(api_key="<OPENAI_API_KEY>"))

@paid_tracing("customer_123", "agent_123")
def process_agent_event(event):
    """Process a single agent event with cost tracking"""

    # Make AI calls as needed
    if event.type == "email_received":
        response = openai_client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": event.content}]
        )
        handle_email_response(response)

        # Emit signal with cost tracing
        signal(
            event_name="email_processed",
            enable_cost_tracing=True
        )

    elif event.type == "call_completed":
        # Different AI calls for call processing
        summary = openai_client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": f"Summarize: {event.transcript}"}]
        )

        signal(
            event_name="call_summarized",
            enable_cost_tracing=True
        )

# Event loop
while True:
    event = get_next_event()  # Your event queue
    if event:
        process_agent_event(event)  # Each call creates a new trace

Passing User Metadata

You can attach custom metadata to your traces by passing a metadata dictionary to the paid_tracing() decorator or context manager. This metadata will be stored with the trace and can be used to filter and query traces later.

from paid.tracing import paid_tracing, signal, initialize_tracing
from paid.tracing.wrappers import PaidOpenAI
from openai import OpenAI

initialize_tracing()

openai_client = PaidOpenAI(OpenAI(api_key="<OPENAI_API_KEY>"))

@paid_tracing(
    "customer_123",
    "agent_123",
    metadata={
        "campaign_id": "campaign_456",
        "environment": "production",
        "user_tier": "enterprise"
    }
)
def process_event(event):
    """Process event with custom metadata"""
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": event.content}]
    )

    signal("event_processed", enable_cost_tracing=True)
    return response

process_event(incoming_event)

Querying Traces by Metadata

Once you’ve added metadata to your traces, you can filter traces using the metadata parameter in the traces API endpoint:

# Filter by single metadata field
curl -G "https://api.paid.ai/api/organizations/{orgId}/traces" \
  --data-urlencode 'metadata={"campaign_id":"campaign_456"}' \
  -H "Authorization: Bearer YOUR_API_KEY"

# Filter by multiple metadata fields (all must match)
curl -G "https://api.paid.ai/api/organizations/{orgId}/traces" \
  --data-urlencode 'metadata={"campaign_id":"campaign_456","environment":"production"}' \
  -H "Authorization: Bearer YOUR_API_KEY"

Advanced: Distributed Tracing

Sometimes your agent workflow cannot fit into a single traceable function because the work is split into disjoint parts:

  • Complex concurrency logic
  • Asynchronous workflows with callbacks
  • Work distributed across multiple processes or machines
  • Event-driven architectures with message queues

For these cases, Paid provides tracing tokens that allow you to link costs and signals across distributed parts of your application.

How Distributed Tracing Works

  1. Generate a tracing token in one part of your system
  2. Pass the token to other parts (via API, message queue, database, etc.)
  3. Pass the token to the @paid_tracing() decorator or context manager when executing traced code
  4. All traces that share the same token are linked together, so signals can be associated with costs from any part of the workflow

Note: This feature is currently available in the Python SDK only. Node.js support is coming soon.

The simplest approach is to pass the tracing token directly to the @paid_tracing() decorator or context manager. This scopes the token to just that specific trace.

from paid.tracing import paid_tracing, signal, generate_tracing_token, initialize_tracing
from paid.tracing.wrappers.openai import PaidOpenAI
from openai import OpenAI

initialize_tracing()

openai_client = PaidOpenAI(OpenAI(api_key="<OPENAI_API_KEY>"))

# Generate a tracing token
tracing_token = generate_tracing_token()
print(f"Tracing token: {tracing_token}")

# Store this token somewhere accessible to other processes
# (database, message queue, Redis, etc.)
save_to_redis("workflow_123", tracing_token)

@paid_tracing("customer_123", tracing_token=tracing_token, external_agent_id="agent_123")
def initial_processing(email):
    """First part of workflow - email analysis"""
    analysis = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Analyze: {email}"}]
    )

    # Emit signal WITHOUT cost tracing (costs will be linked later)
    signal(
        event_name="email_analyzed",
        enable_cost_tracing=False
    )

    return analysis

# Process the email
result = initial_processing(incoming_email)

# Send result to another process for response generation
enqueue_task("generate_response", {"analysis": result, "workflow_id": "workflow_123"})

Best Practices for Distributed Tracing

  1. Generate once, use everywhere: Generate the token in the first part of your workflow, then pass and set it in all other parts

  2. Store tokens with workflow identifiers: Use a correlation ID to link the token with your workflow

    # Store with workflow ID
    redis.set(f"trace_token:{workflow_id}", token)

    # Retrieve later
    token = redis.get(f"trace_token:{workflow_id}")
  3. Link costs to the final signal: Emit one signal with enable_cost_tracing=True at the end of your distributed workflow

    from paid.tracing import signal

    # Process 1: AI calls, signal without costs
    signal("step_1_complete", enable_cost_tracing=False)

    # Process 2: More AI calls, signal without costs
    signal("step_2_complete", enable_cost_tracing=False)

    # Process 3: Final step, signal with costs (links all AI calls)
    signal("workflow_complete", enable_cost_tracing=True)
  4. Use descriptive workflow IDs: Make it easy to debug by using meaningful identifiers

    workflow_id = f"email_response_{customer_id}_{timestamp}"

Supported AI Provider Wrappers and Hooks

The following AI provider wrappers and hooks are available for cost tracking:

Python SDK

  • PaidOpenAI / PaidAsyncOpenAI - OpenAI API
  • PaidOpenAIAgentsHook - OpenAI Agents SDK hook
  • PaidAnthropic / PaidAsyncAnthropic - Anthropic Claude API
  • PaidMistral - Mistral AI API
  • PaidBedrock - AWS Bedrock
  • PaidGemini - Google Gemini
  • PaidLlamaIndexOpenAI - LlamaIndex with OpenAI
  • PaidLangChainCallback - LangChain hook
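All wrappers follow the same pattern as PaidOpenAI: wrap the native client and call it as usual inside a @paid_tracing() scope. A sketch with the Anthropic wrapper, assuming PaidAnthropic is exported from the same wrappers module as PaidOpenAI:

from paid.tracing.wrappers import PaidAnthropic  # assumed export path
from anthropic import Anthropic

anthropic_client = PaidAnthropic(Anthropic(api_key="<ANTHROPIC_API_KEY>"))

# Inside a traced scope, this call is cost-tracked like any wrapped OpenAI call
message = anthropic_client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}]
)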

Node.js SDK

  • PaidOpenAI - OpenAI API (import from @paid-ai/paid-node/openai)
  • PaidAnthropic - Anthropic Claude API (import from @paid-ai/paid-node/anthropic)
  • PaidMistral - Mistral AI API (import from @paid-ai/paid-node/mistral)
  • PaidLangChainCallback - LangChain callback handler (import from @paid-ai/paid-node/langchain)
  • Vercel AI SDK wrapper (import from @paid-ai/paid-node/vercel)

Java SDK

  • BedrockInstrumentor - AWS Bedrock auto-instrumentation (supports Converse, InvokeModel)

Best Practices

1. One Signal, One Outcome

Each trace should represent a single logical business outcome with one cost-enabled signal:

from paid.tracing import paid_tracing, signal

# ✅ Good: One trace = One outcome
@paid_tracing("customer_123", "agent_123")
def book_meeting(prospect):
    # Multiple AI calls OK
    analyze_calendar(prospect)
    generate_invite(prospect)
    send_confirmation(prospect)

    # Single signal with cost tracing
    signal("meeting_booked", enable_cost_tracing=True)

# ❌ Bad: Multiple cost-enabled signals in one trace
@paid_tracing("customer_123", "agent_123")
def process_workflow(prospect):
    analyze_calendar(prospect)
    signal("calendar_analyzed", enable_cost_tracing=True)  # Costs counted

    generate_invite(prospect)
    signal("invite_generated", enable_cost_tracing=True)  # Costs counted AGAIN!

2. Wrap the Entire Event Handler

Make sure your tracing context includes all AI calls and the signal:

from paid.tracing import paid_tracing, signal

# ✅ Good: Everything in one trace
@paid_tracing("customer_123", "agent_123")
def process_event(event):
    ai_call_1()
    ai_call_2()
    signal("event_processed", enable_cost_tracing=True)

# ❌ Bad: AI calls outside trace
def process_event(event):
    ai_call_1()  # Cost not tracked!

    @paid_tracing("customer_123", "agent_123")
    def emit_signal():
        signal("event_processed", enable_cost_tracing=True)

    emit_signal()

Troubleshooting

Costs Not Appearing in Dashboard

If costs aren’t showing up linked to your signals:

  1. Check tracing is initialized:
    • Python: Use @paid_tracing decorator or context manager (tracing auto-initializes)
    • Java: Call PaidTracing.initialize() before making any traced calls
  2. Verify instrumentation is enabled:
    • Python: Make sure you’re using PaidOpenAI wrapper or paid_autoinstrument()
    • Java: Call BedrockInstrumentor.instrument() after initializing tracing
  3. Confirm signal is in trace: The signal must be emitted within the traced function/scope
  4. Check enable_cost_tracing flag:
    • Python: Make sure it’s set to True for the signal
    • Java: Make sure the second parameter is true in PaidTracing.signal()
  5. Verify external_agent_id: This is required when emitting signals

Double-Counted Costs

If costs appear multiple times:

  • Check that only ONE signal per trace has enable_cost_tracing=True
  • Make sure you’re not accidentally calling the traced function multiple times

Missing AI Provider Costs

If some AI calls aren’t being tracked:

  • Python: Ensure all AI client calls are using the Paid wrapper or auto-instrumentation
  • Java: Ensure BedrockInstrumentor.instrument() is called before creating the Bedrock client
  • Check that the wrapper/instrumentor supports your AI provider (see supported wrappers above)
  • Verify instrumentation is set up before entering the trace
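For Python, a minimal sketch of the safe setup ordering implied by the checks above:

from paid.tracing import paid_tracing, paid_autoinstrument, initialize_tracing
from openai import OpenAI

initialize_tracing()      # 1. Initialize tracing first
paid_autoinstrument()     # 2. Instrument before creating AI clients
openai_client = OpenAI()  # 3. Create clients after instrumentation

@paid_tracing("customer_123", "agent_123")
def handler(event):       # 4. Keep all AI calls and the signal inside the traced scope
    ...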
