feat: auto name lists for ai mode #933
base: develop
Conversation
Walkthrough

A new private static method, `generateListName`, is added to `WorkflowEnricher`. It selects an LLM provider based on configuration, prompts it with page and field context to name the captured list data, validates and normalizes the response, and falls back to "List 1" on failure.
Sequence Diagram

```mermaid
sequenceDiagram
    participant WE as WorkflowEnricher
    participant GN as generateListName
    participant LLM as LLM Provider<br/>(Anthropic/OpenAI/Ollama)
    participant Val as Validation & Normalization

    WE->>GN: captureList decision detected<br/>(prompt, url, fieldNames, llmConfig)
    activate GN
    GN->>GN: Select LLM provider<br/>based on config
    GN->>GN: Construct system prompt<br/>with field context
    GN->>LLM: Call API with prompts
    activate LLM
    LLM-->>GN: Response (raw list name)
    deactivate LLM
    GN->>Val: Validate & normalize<br/>(trim, strip quotes, title case)
    activate Val
    Val->>Val: Check length ≤ 50 chars
    Val-->>GN: Normalized name
    deactivate Val
    alt Success
        GN-->>WE: Generated list name
    else Error/Validation fails
        GN-->>WE: Fallback "List 1"
    end
    deactivate GN
    WE->>WE: Use returned name for<br/>scrapeList action
```
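The validate-and-normalize step in the diagram can be sketched as a pure function. This is a hypothetical sketch, not the PR's actual code: the function name and exact stripping rules are assumptions, while the 50-character limit and the "List 1" fallback come from the review notes.

```typescript
// Hypothetical sketch of the validate-and-normalize step from the diagram.
// The "List 1" fallback and 50-char limit are taken from the review notes;
// everything else is an assumption for illustration.
function normalizeListName(raw: string | null | undefined): string {
  const FALLBACK = 'List 1';
  if (!raw) return FALLBACK;

  // Trim whitespace and strip surrounding quotes the LLM sometimes adds
  const name = raw.trim().replace(/^["']+|["']+$/g, '').trim();
  if (name.length === 0 || name.length > 50) return FALLBACK;

  // Title-case each word
  return name
    .split(' ')
    .map((w) => w.charAt(0).toUpperCase() + w.slice(1).toLowerCase())
    .join(' ');
}
```

Keeping this step pure makes it easy to unit-test independently of any LLM provider.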
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 0
🧹 Nitpick comments (2)
server/src/sdk/workflowEnricher.ts (2)
1347-1375: Consider adding a timeout for external API calls.

The axios calls to OpenAI (and similarly the Anthropic SDK calls) don't specify a timeout, so a request could hang indefinitely in production if the API is unresponsive. Other similar methods in this file also lack timeouts, so this may be a broader pattern worth addressing.
🔎 Proposed fix: Add timeout to axios call

```diff
  const response = await axios.post(`${openaiBaseUrl}/chat/completions`, {
    model: openaiModel,
    messages: [
      { role: 'system', content: systemPrompt },
      { role: 'user', content: userPrompt }
    ],
    max_tokens: 20,
    temperature: 0.1
  }, {
    headers: {
      'Authorization': `Bearer ${llmConfig?.apiKey || process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json'
-   }
+   },
+   timeout: 30000 // 30 second timeout
  });
```
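Where a call can't easily be given a client-level timeout option, one generic alternative is a Promise-race wrapper. This is a hypothetical helper for illustration, not part of the PR:

```typescript
// Hypothetical helper: settle with the wrapped promise, or reject after `ms`.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${ms}ms`)), ms);
  });
  // Clear the timer either way so the process can exit promptly
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

Note that this rejects the caller's promise but does not abort the underlying HTTP request; a client-level timeout (as in the diff above) is preferable when the client supports it.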
1392-1394: Title Case normalization may break acronyms.

The current normalization lowercases every character after the first in each word, which turns "API Products" into "Api Products" or "USA Jobs" into "Usa Jobs". This is minor since the LLM is instructed to use Title Case, but worth noting.
🔎 Alternative: Preserve all-caps words

```diff
- listName = listName.split(' ')
-   .map((word: string) => word.charAt(0).toUpperCase() + word.slice(1).toLowerCase())
-   .join(' ');
+ listName = listName.split(' ')
+   .map((word: string) => {
+     // Preserve all-caps words (likely acronyms)
+     if (word === word.toUpperCase() && word.length > 1) {
+       return word;
+     }
+     return word.charAt(0).toUpperCase() + word.slice(1).toLowerCase();
+   })
+   .join(' ');
```
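Pulled out as a standalone function, the acronym-preserving variant looks like this (the function name is hypothetical; the logic mirrors the suggested diff):

```typescript
// Title-case each word, but keep all-caps words (likely acronyms) intact.
function toTitleCasePreservingAcronyms(name: string): string {
  return name
    .split(' ')
    .map((word) => {
      // A multi-character word that equals its own uppercase form is
      // treated as an acronym and left unchanged
      if (word === word.toUpperCase() && word.length > 1) return word;
      return word.charAt(0).toUpperCase() + word.slice(1).toLowerCase();
    })
    .join(' ');
}
// toTitleCasePreservingAcronyms('API products') → 'API Products'
```

The length check excludes single letters like "A", which should still be title-cased rather than treated as acronyms.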
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
server/src/sdk/workflowEnricher.ts
🧰 Additional context used
🧬 Code graph analysis (1)
server/src/sdk/workflowEnricher.ts (2)
server/src/sdk/browserSide/pageAnalyzer.js (1)
  finalFields (999-999)
server/src/routes/index.ts (1)
  workflow (10-10)
🔇 Additional comments (3)
server/src/sdk/workflowEnricher.ts (3)
1243-1263: Method signature and setup look good.

The method follows the established patterns in this class for LLM-based operations. Limiting field context to 10 items is a reasonable way to avoid token limits while still providing useful context.
1265-1292: Well-crafted prompts for list naming.

The system prompt is clear and provides good examples for domain adaptation. Explicitly requesting "ONLY the list name, nothing else" helps ensure a consistent output format across providers.
1498-1510: Clean integration of list name generation.

Placing the call after `finalFields` is determined ensures the list name reflects the actual fields being extracted. Using `Object.keys(finalFields)` gives the LLM relevant context for generating meaningful names.
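The integration point can be illustrated with a small sketch. The field names here are invented examples, and the cap of 10 reflects the review note above rather than code quoted in this review:

```typescript
// Hypothetical illustration: derive LLM prompt context from the final fields.
const finalFields: Record<string, unknown> = {
  title: {},
  price: {},
  rating: {},
  link: {},
};
// Cap at 10 names to keep the prompt small, per the review note
const fieldNames = Object.keys(finalFields).slice(0, 10);
```

Deriving the names from `finalFields` (rather than the raw page) keeps the generated list name aligned with what the scraper will actually extract.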
What does this PR do?
Auto names the list data captured for the AI mode.
Summary by CodeRabbit