
feat: add event emission support for OpenAI Agents instrumentation#3645

Open
LingduoKong wants to merge 7 commits into traceloop:main from LingduoKong:feat/openai-agents-event-emission

Conversation


@LingduoKong LingduoKong commented Jan 28, 2026

Summary

Add event emission support for OpenAI Agents instrumentation following OpenTelemetry GenAI semantic conventions.

Closes #3441

Changes

  • Add event models (MessageEvent, ChoiceEvent, ToolStartEvent, ToolEndEvent)
  • Add event emitter with proper semantic convention compliance
  • Add Config class with use_legacy_attributes flag and event_logger
  • Integrate event emission in hooks while maintaining backward compatibility
  • Add test coverage for legacy mode

Backward Compatibility

  • Default use_legacy_attributes=True preserves existing span attribute behavior
  • Set use_legacy_attributes=False to enable event emission mode
  • Respects TRACELOOP_TRACE_CONTENT setting for content redaction
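The toggle described above can be sketched as a module-level Config singleton gated by a small helper. This is an illustrative sketch only; the names mirror the PR description (use_legacy_attributes, event_logger, should_emit_events()), but the exact signatures and internals are assumptions, not the PR's actual code:

```python
from typing import Optional


class Config:
    """Global instrumentation settings (an intentional singleton, per the PR)."""

    exception_logger = None
    use_legacy_attributes: bool = True  # default preserves span-attribute behavior
    event_logger: Optional[object] = None  # set when event emission is enabled


def should_emit_events() -> bool:
    """Events are emitted only when legacy mode is off and a logger is configured."""
    return not Config.use_legacy_attributes and Config.event_logger is not None


# Default: legacy span attributes, no events.
assert not should_emit_events()

# Opting in to event emission mode:
Config.use_legacy_attributes = False
Config.event_logger = object()  # stand-in for a real event logger instance
assert should_emit_events()
```

The key design point is that both conditions must hold: flipping the flag without wiring up an event logger still falls back to legacy behavior rather than silently dropping telemetry.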

Important

Adds event emission support for OpenAI Agents instrumentation with backward compatibility and comprehensive testing.

  • Behavior:
    • Adds event emission support for OpenAI Agents instrumentation using OpenTelemetry GenAI semantic conventions.
    • Introduces MessageEvent, ChoiceEvent, ToolStartEvent, and ToolEndEvent in event_models.py.
    • Implements event emission in _hooks.py with backward compatibility.
    • Adds Config class in config.py with use_legacy_attributes and event_logger.
    • Default use_legacy_attributes=True maintains existing span attribute behavior.
    • Set use_legacy_attributes=False to enable event emission mode.
    • Respects TRACELOOP_TRACE_CONTENT for content redaction.
  • Event Emitter:
    • Adds event_emitter.py to handle event emission based on event type.
    • Uses should_emit_events() and should_send_prompts() from utils.py.
  • Testing:
    • Adds tests in test_events.py for both legacy and event emission modes.
    • Includes VCR cassettes for testing different scenarios in tests/cassettes/test_events/.
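As a rough illustration of the four event models listed above, the classes in event_models.py might look like the following. Field names beyond the class names given in the summary are assumptions for the sake of a runnable sketch:

```python
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class MessageEvent:
    """A prompt message sent to the model (emitted as e.g. gen_ai.user.message)."""

    content: Any
    role: str = "user"
    tool_calls: Optional[list] = None


@dataclass
class ChoiceEvent:
    """A model completion choice (emitted as gen_ai.choice)."""

    index: int
    message: dict
    finish_reason: str = "unknown"


@dataclass
class ToolStartEvent:
    """Tool invocation start, carrying the (possibly redacted) input."""

    message: str = ""


@dataclass
class ToolEndEvent:
    """Tool invocation end, carrying the (possibly redacted) output."""

    message: str = ""


event = MessageEvent(content="What is the weather in Paris?")
assert event.role == "user" and event.tool_calls is None
```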

This description was created by Ellipsis for fb27b90.

Summary by CodeRabbit

  • New Features

    • Event-based telemetry for OpenAI agent interactions (messages, choices, tool start/end) with optional event logger
    • Global configuration to toggle event emission vs. legacy span-attribute mode
    • Structured event models and emission flow for richer GenAI observability
  • Tests

    • New test cassettes and tests covering event emission scenarios, content/no-content modes, and function/tool interactions


…trumentation

Add support for emitting OpenTelemetry events following GenAI semantic
conventions. This includes:

- New event models (MessageEvent, ChoiceEvent, ToolStartEvent, ToolEndEvent)
- Event emitter implementation with proper semantic convention compliance
- Config support for event_logger and use_legacy_attributes flag
- Integration with hooks to emit events for messages, choices, and tool calls
- Respects TRACELOOP_TRACE_CONTENT setting for content redaction

The implementation maintains backward compatibility through the
use_legacy_attributes flag (default: True), which uses span attributes
when enabled and events when disabled.

Closes traceloop#3441

CLAassistant commented Jan 28, 2026

CLA assistant check
All committers have signed the CLA.


coderabbitai bot commented Jan 28, 2026


📝 Walkthrough

Walkthrough

Adds event-emission support to the OpenAI Agents instrumentation: new Config class and instrumentor options, event models and emitter, hooks updated to emit GenAI semantic-convention events (or fall back to legacy span attributes), utilities and tests plus VCR cassettes for event scenarios.

Changes

Cohort / File(s) Summary
Instrumentation init & config
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/__init__.py, .../config.py, .../utils.py
Adds Config class with exception_logger, use_legacy_attributes, event_logger; instrumentor ctor accepts exception_logger and use_legacy_attributes; should_emit_events() utility gates event mode.
Event models
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/event_models.py
New TypedDicts and dataclasses: _FunctionToolCall, ToolCall, CompletionMessage, MessageEvent, ChoiceEvent, ToolStartEvent, ToolEndEvent.
Event emitter
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/event_emitter.py
New emitter with role enum, event name mapping, and handlers to emit MessageEvent, ChoiceEvent, ToolStartEvent, ToolEndEvent using configured EventLogger and GenAI semantic names; content redaction and tool-call normalization applied.
Hooks & span handling
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py
Extended span start/end processing to parse/normalize tool calls, emit Message/Choice/Tool events when should_emit_events() is true, and preserve legacy span-attribute behavior otherwise. Adds helpers _parse_tool_calls_for_event and _handle_function_span_end.
Tests & fixtures
packages/opentelemetry-instrumentation-openai-agents/tests/conftest.py, packages/opentelemetry-instrumentation-openai-agents/tests/test_events.py
New fixtures for EventLoggerProvider, log/span exporters, instrument_with_content/instrument_with_no_content; tests for legacy mode, event mode (with/without content), and function/tool events; VCR scrubbing helpers.
VCR cassettes
packages/opentelemetry-instrumentation-openai-agents/tests/cassettes/test_events/*
Adds multiple YAML cassettes capturing HTTP interactions for legacy, event-with-content, event-no-content, and function-tool scenarios.
Project metadata
manifest_file, requirements.txt, pyproject.toml
Updated project metadata / requirements referenced by diffs.
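The "role enum and event name mapping" in the emitter presumably routes each message role to one of the GenAI semantic-convention event names referenced elsewhere in this PR (gen_ai.user.message, gen_ai.choice, and so on). One plausible shape, as a hedged sketch rather than the actual implementation:

```python
from enum import Enum


class Roles(Enum):
    SYSTEM = "system"
    USER = "user"
    ASSISTANT = "assistant"
    TOOL = "tool"


# Map a message role to its GenAI semantic-convention event name.
ROLE_TO_EVENT_NAME = {
    Roles.SYSTEM.value: "gen_ai.system.message",
    Roles.USER.value: "gen_ai.user.message",
    Roles.ASSISTANT.value: "gen_ai.assistant.message",
    Roles.TOOL.value: "gen_ai.tool.message",
}


def event_name_for(role: str) -> str:
    # Unknown roles fall back to a generic event name; per the review
    # discussion in this PR, the role is then kept in the event body because
    # it differs from the event name.
    return ROLE_TO_EVENT_NAME.get(role, "gen_ai.user.message")


assert event_name_for("assistant") == "gen_ai.assistant.message"
```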

Sequence Diagram

sequenceDiagram
    participant App as Application
    participant Instr as OpenAI Agents Instrumentor
    participant Hook as Hooks (_hooks.py)
    participant Emitter as Event Emitter
    participant Logger as EventLogger

    App->>Instr: Initialize instrumentor(use_legacy_attributes=False)
    Instr->>Instr: Set Config.use_legacy_attributes=False\nConfig.event_logger via EventLoggerProvider

    App->>Hook: Agent processes input
    Hook->>Hook: should_emit_events()?
    alt Event mode
        Hook->>Emitter: emit_event(MessageEvent)
        Emitter->>Emitter: format per GenAI semantic conventions
        Emitter->>Logger: emit(gen_ai.user.message)
    else Legacy mode
        Hook->>Hook: attach prompt as span attributes
    end

    App->>Hook: Agent receives model response
    Hook->>Hook: should_emit_events()?
    alt Event mode
        Hook->>Emitter: emit_event(ChoiceEvent)
        Emitter->>Logger: emit(gen_ai.choice)
    else Legacy mode
        Hook->>Hook: attach completion attributes to span
    end

    Note right of Hook: Tool/function spans\n-> emit ToolStart/ToolEnd or set attributes

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 I hopped through spans and events today,
GenAI names now lead the merry way.
Tools start, choices sing, messages show—
Config minded, legacy mode still in tow.
A carrot of telemetry, neatly in a row. 🥕✨

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 58.06%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4 passed)
  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The title 'feat: add event emission support for OpenAI Agents instrumentation' accurately and clearly describes the main change. It is concise, specific, and directly reflects the primary objective of the PR.
  • Linked Issues Check ✅ Passed: All coding requirements from issue #3441 are met: event models (MessageEvent, ChoiceEvent, ToolStartEvent, ToolEndEvent) are defined; the event emitter module emits semantic-convention-compliant events (gen_ai.user.message, gen_ai.choice, gen_ai.tool.start, gen_ai.tool.end); the Config class with the use_legacy_attributes flag is implemented; event emission is integrated into hooks with backward compatibility; utility functions (should_emit_events) are provided; TRACELOOP_TRACE_CONTENT is respected; comprehensive tests with VCR cassettes cover legacy and event-emission scenarios.
  • Out of Scope Changes Check ✅ Passed: All changes are directly aligned with the stated objectives. Files modified or added (config.py, event_models.py, event_emitter.py, utils.py, _hooks.py, __init__.py, test_events.py, conftest.py, and cassette files) are all necessary for implementing event emission support and testing. No extraneous changes detected.




…tering

Bug fixes:
- Fix messages with tool_calls but no content being silently skipped
  (now emits events when role is set AND either content or tool_calls exist)
- Fix tool events not being emitted when content tracing is disabled
  (now emits tool events when events mode is enabled, with empty message
  when content tracing is disabled)
- Add docstring to Config class explaining the intentional singleton
  pattern and warning about multiple instrumentor instances
- Add comments in _emit_message_event clarifying that unknown roles
  are kept in the body per semantic conventions (role is required
  when it differs from the event name)
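The first bug fix above changes the emission guard so that assistant messages carrying only tool_calls are no longer skipped. The corrected predicate might look like the following; the function name and signature are illustrative, not the PR's actual helper:

```python
def should_emit_message(role, content, tool_calls) -> bool:
    """Emit when a role is set AND either content or tool_calls exist.

    The earlier guard effectively required content, silently dropping
    assistant messages that carried only tool calls.
    """
    return bool(role) and (bool(content) or bool(tool_calls))


# A tool-call-only assistant message is now emitted:
assert should_emit_message("assistant", None, [{"name": "get_weather"}])
# Messages with no role are still skipped:
assert not should_emit_message(None, "hello", None)
```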
@LingduoKong LingduoKong marked this pull request as ready for review January 28, 2026 15:56
Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed everything up to fb27b90 in 32 seconds.
  • Reviewed 1824 lines of code in 12 files
  • Skipped 1 file when reviewing
  • Skipped posting 0 draft comments
  • Modify your settings and rules to customize what types of comments Ellipsis leaves, and react with 👍 or 👎 to teach Ellipsis.

Workflow ID: wflow_ThxQEFUs2B2D4itG

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Fix all issues with AI agents
In
`@packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/config.py`:
- Around line 1-4: The module currently imports the private EventLogger symbol
(EventLogger) from opentelemetry._events; replace that with the public Logs API
by importing the public logger factory (e.g., get_logger) from the OpenTelemetry
logs API and update any uses of EventLogger in this module to obtain and use a
logger via get_logger (or the appropriate public API), ensuring instrumentation
uses only public opentelemetry API surfaces.

In
`@packages/opentelemetry-instrumentation-openai-agents/tests/cassettes/test_events/test_agent_with_function_tool_events.yaml`:
- Around line 17-19: The cassette in
tests/cassettes/test_events/test_agent_with_function_tool_events.yaml contains
sensitive/volatile cookies and project identifiers (e.g., cookie entries
"__cf_bm" and "_cfuvid" and OpenAI project IDs) that must be redacted; replace
the real values with deterministic placeholders (e.g., "<REDACTED_COOKIE>" /
"<REDACTED_PROJECT_ID>") or configure the VCR filter to scrub these keys so
replays are deterministic and identifiers are not leaked, and apply the same
redaction to the other occurrences noted (around the other listed line ranges).

In `@packages/opentelemetry-instrumentation-openai-agents/tests/conftest.py`:
- Around line 304-307: The session-scoped fixture span_exporter
(InMemorySpanExporter) can retain spans between tests; update the
function-scoped fixtures instrument_with_content and instrument_with_no_content
to call span_exporter.clear() at the start (or before yielding) so each test
starts with an empty exporter; locate the fixtures instrument_with_content and
instrument_with_no_content and add a span_exporter.clear() invocation (using the
InMemorySpanExporter.clear method) to prevent span leakage across tests.

In `@packages/opentelemetry-instrumentation-openai-agents/tests/test_events.py`:
- Around line 52-55: The test fails because SpanAttributes.LLM_PROMPTS is
referenced but not defined; add a new constant named LLM_PROMPTS to the
SpanAttributes class (opentelemetry.semconv_ai.SpanAttributes) with the
canonical attribute key (e.g., "llm.prompts") and ensure it is
exported/available from the module so tests can access
SpanAttributes.LLM_PROMPTS; keep the name and casing consistent with other
constants in the class and update any module exports or __all__ if needed.
🧹 Nitpick comments (4)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/config.py (1)

22-24: Consider adding type annotation for exception_logger.

For consistency with the other class attributes, consider adding a type annotation to exception_logger.

Suggested change
-    exception_logger = None
+    exception_logger: Optional[Any] = None  # or a more specific type if known

Note: You'll need to import Any from typing if the specific type is unknown.

packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (2)

57-111: Consider consolidating duplicate tool_call parsing logic.

The _parse_tool_calls_for_event function (lines 57-111) duplicates much of the tool_call normalization logic that also exists in _extract_prompt_attributes (lines 221-261). Both handle:

  • Converting objects to dicts via hasattr checks
  • Extracting nested function fields
  • Parsing JSON arguments

This duplication increases maintenance burden and risk of divergence.

♻️ Suggested approach

Extract a shared _normalize_tool_call(tool_call) -> dict helper that both the event path and legacy path can use:

def _normalize_tool_call(tool_call) -> dict:
    """Normalize tool_call from various formats to a consistent dict."""
    if not isinstance(tool_call, dict):
        tc_dict = {}
        if hasattr(tool_call, "id"):
            tc_dict["id"] = tool_call.id
        if hasattr(tool_call, "function"):
            func = tool_call.function
            tc_dict["name"] = getattr(func, "name", None)
            tc_dict["arguments"] = getattr(func, "arguments", None)
        elif hasattr(tool_call, "name"):
            tc_dict["name"] = tool_call.name
        if hasattr(tool_call, "arguments"):
            tc_dict["arguments"] = tool_call.arguments
        tool_call = tc_dict

    if "function" in tool_call:
        function = tool_call["function"]
        tool_call = {
            "id": tool_call.get("id"),
            "name": function.get("name"),
            "arguments": function.get("arguments"),
        }
    return tool_call

746-752: Comment is misleading - content filtering happens here, not in event emitter.

The comment on line 747 says "Event emitter handles content filtering internally," but the filtering actually happens right here with input_str if should_send_prompts() else "". The _emit_tool_start_event and _emit_tool_end_event functions don't perform any content filtering (they just emit the event as-is).

Consider updating the comment for accuracy:

             if should_emit_events():
-                # Event emitter handles content filtering internally
+                # Filter content here based on TRACELOOP_TRACE_CONTENT setting
                 emit_event(ToolStartEvent(message=input_str if should_send_prompts() else ""))
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/event_emitter.py (1)

48-49: Redundant should_emit_events() check.

The emit_event function checks should_emit_events() at line 48, but all callers in _hooks.py already guard with if should_emit_events(): before calling emit_event. This double-check is defensive but adds unnecessary overhead on the hot path.

Consider removing the check here if you want to reduce overhead, or document it as intentional defensive programming.

Comment on lines 1 to 4
from typing import Optional

from opentelemetry._events import EventLogger


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

OpenTelemetry Python EventLogger import documentation official API

💡 Result:

As of the current official OpenTelemetry Python API docs, there isn’t an EventLogger class to import. “Events” are emitted via the Logs API (a Logger.emit(..., event_name=...) style API), using opentelemetry._logs. [1]

Official Python Logs API imports (docs)

from opentelemetry._logs import get_logger

get_logger(...) returns a Logger from the global LoggerProvider. [1]

Official Python Logs SDK imports (docs)

from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler

This is the SDK LoggerProvider / handler used to collect/export log records. [2][3]

Where “EventLogger” is defined (spec, not Python API docs)

EventLogger is defined by the OpenTelemetry Event API specification, but that spec does not imply there is a corresponding Python EventLogger import available today. [4]

Sources: [1] [2] [3] [4]


🌐 Web query:

OpenTelemetry Python _events module private API stability

💡 Result:

opentelemetry._events in OpenTelemetry Python should be treated as private/unstable API:

  • In OpenTelemetry Python, Logs are still “Development” and explicitly expected to involve deprecations and breaking changes while being stabilized*—and the Events API is tied to that same “logs/events” area (for example, the logging API docs describe generating “logs and events”). [2][4]
  • Per the OpenTelemetry project’s versioning & stability policy, only APIs that have reached the appropriate stability level get compatibility guarantees; unstable/experimental areas do not provide the same stability guarantees. [1]

Practical implication: don’t build a library/public integration that depends on opentelemetry._events remaining stable across minor releases. If you must use it, pin exact versions (and plan for breakage when upgrading) until the corresponding signal/API is declared stable. [1][2]

Sources
[1] OpenTelemetry “Versioning and stability” spec: https://opentelemetry.io/docs/specs/otel/versioning-and-stability/
[2] OpenTelemetry Python repo README (Logs “Development*”, breaking changes while stabilizing): https://github.com/open-telemetry/opentelemetry-python
[4] OpenTelemetry Python docs for opentelemetry._logs (“generate logs and events”): https://opentelemetry-python.readthedocs.io/en/stable/api/_logs.html


🌐 Web query:

OpenTelemetry Python public API vs internal modules underscore convention

💡 Result:

In OpenTelemetry Python, “public API” means the stable, supported import surface intended for application and library code—primarily what’s exposed by the opentelemetry-api package (for example opentelemetry.trace, opentelemetry.metrics, opentelemetry.context). The opentelemetry-sdk package provides the reference implementation and is something applications (not reusable libraries) may choose to depend on. [1]

Practical rules (Python underscore convention applied to OTel)

  • Avoid importing anything with a leading underscore (module or symbol), e.g. opentelemetry.context._something or from ... import _FOO. In OTel this is explicitly used for internal-only hooks/keys that are not part of the public API and may change without notice. [2]
  • For reusable libraries/instrumentation, depend on and import from opentelemetry-api only (e.g. from opentelemetry import trace / trace.get_tracer(...)). This keeps your library compatible regardless of which SDK (or even no SDK) the application installs. [1]
  • For applications/agents, it’s normal to import from the SDK (e.g. opentelemetry.sdk.trace) to configure providers/processors/exporters, but treat SDK internals (especially underscore modules) as non-public. [3]

Quick examples

  • Prefer (public API): from opentelemetry import trace / from opentelemetry.context import attach, detach [1]
  • App config (SDK): from opentelemetry.sdk.trace import TracerProvider [3]
  • Avoid (internal): from opentelemetry.context import _SUPPRESS_INSTRUMENTATION_KEY (called out as intentionally non-public) [2]

Sources:
[1] OpenTelemetry Python repo README (API vs SDK guidance) (github.com)
[2] opentelemetry-python issue discussing underscore internal key not being public API (github.com)
[3] OpenTelemetry Python SDK trace docs (opentelemetry-python.readthedocs.io)

Citations:


Use the public OpenTelemetry Logs API instead of the private opentelemetry._events module.

The EventLogger is imported from opentelemetry._events, which is a private/unstable module. OpenTelemetry's versioning policy explicitly does not provide stability guarantees for internal APIs, and the Events API is currently in development with expected breaking changes. For reusable instrumentation libraries, imports must use only the public API surface (e.g., opentelemetry-api). Replace this with the public Logs API: from opentelemetry._logs import get_logger (or a higher-level public API equivalent) to avoid breaking changes in future releases.

🤖 Prompt for AI Agents
In
`@packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/config.py`
around lines 1 - 4, The module currently imports the private EventLogger symbol
(EventLogger) from opentelemetry._events; replace that with the public Logs API
by importing the public logger factory (e.g., get_logger) from the OpenTelemetry
logs API and update any uses of EventLogger in this module to obtain and use a
logger via get_logger (or the appropriate public API), ensuring instrumentation
uses only public opentelemetry API surfaces.

Comment on lines 52 to 55
assert response_span.attributes.get(f"{SpanAttributes.LLM_PROMPTS}.0.role") == "user"
assert (
response_span.attributes.get(f"{SpanAttributes.LLM_PROMPTS}.0.content") == query
)

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash

# Search for LLM_PROMPTS definition in the semantic conventions package
echo "=== Searching for LLM_PROMPTS in semantic conventions ==="
rg -n "LLM_PROMPTS" packages/opentelemetry-semantic-conventions-ai/

echo -e "\n=== Searching for LLM_PROMPTS in entire repo ==="
rg -n "LLM_PROMPTS" --type py

echo -e "\n=== Searching for SpanAttributes class definition ==="
rg -n "class SpanAttributes" packages/ -A 20

Repository: traceloop/openllmetry

Length of output: 25138


🏁 Script executed:

#!/bin/bash

# Get the full SpanAttributes class definition
echo "=== Full SpanAttributes class definition ==="
sed -n '64,200p' packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py

Repository: traceloop/openllmetry

Length of output: 7151


🏁 Script executed:

#!/bin/bash

# Search for LLM_PROMPTS definition line in the semantic conventions file
grep -n "LLM_PROMPTS" packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py | head -5

# Get the full content around that area
line_num=$(grep -n "LLM_PROMPTS" packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py | head -1 | cut -d: -f1)
if [ ! -z "$line_num" ]; then
  echo "=== LLM_PROMPTS definition around line $line_num ==="
  sed -n "$((line_num - 5)),$((line_num + 5))p" packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py
fi

Repository: traceloop/openllmetry

Length of output: 47


🏁 Script executed:

#!/bin/bash

# Check the complete SpanAttributes class and its inheritance
echo "=== SpanAttributes class definition (full) ==="
sed -n '64,500p' packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py | grep -n "class SpanAttributes\|LLM_PROMPTS\|def __"

# Also check if there's a parent class or imports
echo -e "\n=== File header and imports ==="
head -70 packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py | tail -20

# Check how the test file imports SpanAttributes
echo -e "\n=== test_events.py imports ==="
head -20 packages/opentelemetry-instrumentation-openai-agents/tests/test_events.py

Repository: traceloop/openllmetry

Length of output: 2028


🏁 Script executed:

#!/bin/bash

# Get file size first
wc -l packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py

# Search for LLM_PROMPTS in the entire file
echo "=== Searching for LLM_PROMPTS in entire file ==="
grep -c "LLM_PROMPTS" packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py || echo "Not found (exit code: $?)"

# Get complete file and search manually
echo -e "\n=== Full semconv_ai file checking for all LLM_ attributes ==="
cat packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py | grep "LLM_" | head -20

Repository: traceloop/openllmetry

Length of output: 1642


🏁 Script executed:

#!/bin/bash

# Get the entire file content and extract just the SpanAttributes class
python3 << 'EOF'
with open('packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py', 'r') as f:
    content = f.read()
    
# Find where SpanAttributes class starts
start = content.find('class SpanAttributes:')
if start != -1:
    # Get the rest of the file from that point
    class_content = content[start:]
    
    # Count attributes
    lines = class_content.split('\n')
    attributes = [line.strip() for line in lines if '=' in line and not line.strip().startswith('#')]
    
    print(f"SpanAttributes class found at character position: {start}")
    print(f"Total attribute assignments in class: {len(attributes)}")
    print("\nSearching for LLM_PROMPTS in attributes:")
    found = False
    for i, attr in enumerate(attributes):
        if 'PROMPTS' in attr.upper():
            print(f"  {i}: {attr}")
            found = True
    if not found:
        print("  LLM_PROMPTS attribute NOT FOUND")
    
    print("\nFirst 10 attributes:")
    for attr in attributes[:10]:
        print(f"  {attr}")
    
    print("\nLast 10 attributes:")
    for attr in attributes[-10:]:
        print(f"  {attr}")
EOF

Repository: traceloop/openllmetry

Length of output: 1038


🏁 Script executed:

#!/bin/bash

# Check if SpanAttributes has __getattr__ or other special methods
echo "=== Checking for special methods in SpanAttributes class ==="
grep -A 300 "class SpanAttributes:" packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py | grep "def __"

# Also check if there's a parent class
echo -e "\n=== SpanAttributes class declaration ==="
grep "class SpanAttributes" packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py

# Check if there are other files that might extend SpanAttributes
echo -e "\n=== Searching for other SpanAttributes definitions ==="
find packages -name "*.py" -type f -exec grep -l "class SpanAttributes" {} \;

Repository: traceloop/openllmetry

Length of output: 329


🏁 Script executed:

#!/bin/bash

# Get the complete test function
sed -n '38,60p' packages/opentelemetry-instrumentation-openai-agents/tests/test_events.py

Repository: traceloop/openllmetry

Length of output: 938


🏁 Script executed:

#!/bin/bash

# Check what value LLM_PROMPTS should have based on usage pattern
echo "=== Checking the value pattern used with LLM_PROMPTS ==="
grep -h "SpanAttributes.LLM_PROMPTS" packages/opentelemetry-instrumentation-openai-agents/tests/test_openai_agents.py | head -3

# Check if there's a GEN_AI_PROMPT or similar that should be LLM_PROMPTS
echo -e "\n=== Checking semantic conventions for PROMPT attributes ==="
grep -E "(PROMPT|prompt)" packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py | grep -v "#"

Repository: traceloop/openllmetry

Length of output: 858


SpanAttributes.LLM_PROMPTS attribute does not exist and will cause AttributeError.

The test references SpanAttributes.LLM_PROMPTS (line 52), but this attribute is not defined in the SpanAttributes class in packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py. The class contains 177 attribute definitions but LLM_PROMPTS is not among them. This will raise an AttributeError at runtime when the test executes on line 52.

🤖 Prompt for AI Agents
In `@packages/opentelemetry-instrumentation-openai-agents/tests/test_events.py`
around lines 52 - 55, The test fails because SpanAttributes.LLM_PROMPTS is
referenced but not defined; add a new constant named LLM_PROMPTS to the
SpanAttributes class (opentelemetry.semconv_ai.SpanAttributes) with the
canonical attribute key (e.g., "llm.prompts") and ensure it is
exported/available from the module so tests can access
SpanAttributes.LLM_PROMPTS; keep the name and casing consistent with other
constants in the class and update any module exports or __all__ if needed.

- Add comment explaining opentelemetry._events is the official
  incubating Events API (not private)
- Update vcr_config to scrub sensitive cookies and project IDs
  from cassette recordings
- Add span_exporter.clear() to test fixtures to prevent span
  leakage between tests
- Fix test_events.py to use GenAIAttributes.GEN_AI_PROMPT instead
  of non-existent SpanAttributes.LLM_PROMPTS
- Add type annotation for exception_logger (Optional[Any])
- Fix misleading comments: content filtering happens at call site,
  not in event emitter functions
- Replace cookies with <REDACTED_COOKIE>
- Replace project IDs with <REDACTED_PROJECT_ID>
- Replace organization names with <REDACTED_ORGANIZATION>
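The cookie and project-ID scrubbing described above boils down to rewriting volatile header values into deterministic placeholders before a cassette is written. A self-contained sketch of that scrubbing step (the placeholder strings follow the review comment; the header names and proj_ pattern are assumptions, and the wiring into vcr_config is omitted):

```python
import re


def scrub_headers(headers: dict) -> dict:
    """Replace volatile cookies and OpenAI project IDs with placeholders."""
    scrubbed = {}
    for name, value in headers.items():
        if name.lower() in ("set-cookie", "cookie"):
            # __cf_bm / _cfuvid cookies change per recording; drop them entirely.
            scrubbed[name] = "<REDACTED_COOKIE>"
        elif name.lower() == "openai-project":
            scrubbed[name] = "<REDACTED_PROJECT_ID>"
        else:
            # Project IDs can also appear embedded in other header values.
            scrubbed[name] = re.sub(
                r"proj_[A-Za-z0-9]+", "<REDACTED_PROJECT_ID>", value
            )
    return scrubbed


out = scrub_headers({"Set-Cookie": "__cf_bm=abc123", "openai-project": "proj_XYZ"})
assert out["Set-Cookie"] == "<REDACTED_COOKIE>"
assert out["openai-project"] == "<REDACTED_PROJECT_ID>"
```

In a real test suite this function would typically be hooked into VCR via a before_record_response callback so that replays stay deterministic and no identifiers leak into committed cassettes.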


Development

Successfully merging this pull request may close these issues.

🚀 Feature: Event emission support for OpenAI Agents instrumentation
