
feat: add provider-aware llm suggestion runtime

master
Jan Svabenik, 1 month ago
parent commit cb551bed1a
8 changed files with 613 additions and 13 deletions
1. README.md (+3 -3)
2. docs/TARGET_STATE_AND_ROADMAP.md (+5 -5)
3. internal/app/app.go (+7 -2)
4. internal/domain/models.go (+1 -0)
5. internal/llmruntime/runtime.go (+249 -0)
6. internal/mapping/provider_suggestion_generator.go (+228 -0)
7. internal/mapping/provider_suggestion_generator_test.go (+106 -0)
8. internal/mapping/suggestion_generator.go (+14 -3)

README.md (+3 -3)

@@ -14,15 +14,15 @@ Today the app can:
- Focus the user flow in the draft/build UI on master data, intake/website context, style selection, and template fields; prompt internals live in Settings.
- Map internal semantic target slots (e.g. `hero.title`, `service_items[n].description`) to template fields in preparation for later LLM autofill.
- Repeated areas in semantic slots are separated per block/role (e.g. services/team/testimonials per item instead of one collective slot).
- LLM-first autofill suggestions (via the existing QC provider path), with structured field mapping to `fieldPath`/slot and a rule-based fallback for outage/test cases.
- LLM-first autofill suggestions via the provider-aware runtime (OpenAI, Anthropic, Google, xAI, Ollama/compatible) with the active provider/model selection from Settings, structured field mapping to `fieldPath`/slot, and a rule-based fallback for outage/test cases.
- Suggestion workflow kept separate from field values (preview), including `Generate all`, `Regenerate all`, `Apply all to empty`, plus per-field `Apply`/`Regenerate` in the draft/build UI.
- Technical field details (e.g. `fieldPath`, suggestion metadata, slot preview) are hidden in the UI by default and only visible via the debug toggle.
- Start builds from reviewed data, poll job status, and fetch the editor URL.

Important:
- Leadharvester delivers only intake data (master data plus optional context) into drafts.
- LLM autofill remains assistance in the review flow: suggestions are stored separately and applied manually; if the LLM fails, the deterministic rule-based fallback takes over.
- The new provider/model configuration is the Phase A foundation for later routing; the existing LLM suggestions runtime path remains unchanged in this step.
- LLM autofill remains assistance in the review flow: suggestions are stored separately and applied manually; on provider failure a fallback path takes over (QC-compatible first, then deterministic rule-based).
- Provider/model/base-URL/API-key settings drive the primary suggestion runtime path in production.

## Local start



docs/TARGET_STATE_AND_ROADMAP.md (+5 -5)

@@ -41,8 +41,8 @@ Current state:
- Prompt/system control lives globally in Settings; the normal build/review flow stays focused on content and field editing.
- Semantic target slots (e.g. `hero.title`, `service_items[n].description`) are mapped internally to concrete template fields in preparation for later LLM autofill.
- Repeated sections (services/team/testimonials, among others) are separated per item by block and role type in the slot preview instead of collapsing into collective slots.
- LLM-first suggestion state for the draft/build UI is in place: suggestions are stored separately from field values and controlled explicitly via Generate/Regenerate/Apply (globally and per field); rule-based stays active as a fallback/test path.
- The Settings foundation for later provider selection is in place: active LLM provider, active model, base URL for Ollama/compatible endpoints, and separate API-key fields per provider (OpenAI, Anthropic, Google, xAI, Ollama) are persisted in `app_settings`.
- LLM-first suggestion state for the draft/build UI is in place: suggestions are stored separately from field values and controlled explicitly via Generate/Regenerate/Apply (globally and per field); rule-based stays active as the last fallback/test path.
- The provider-aware suggestion runtime is active: Settings (`llm_active_provider`, `llm_active_model`, the provider-specific API key, `llm_base_url` for Ollama/compatible endpoints) drive the primary runtime path; the existing QC path is kept as a compatibility fallback.
- Technical field details (e.g. field paths/slots/suggestion metadata) can optionally be shown in the UI via the debug toggle.
- Starting a build already requires a template manifest status of `reviewed`/`validated`.
- Process-level review gates (e.g. approval policy, roles, mandatory per-field checks) are not yet fully built out.
@@ -104,12 +104,12 @@ Status markers:
- [ ] Monitoring/error picture for intake quality and rework rate.

### E) LLM assistance
- [-] Field suggestions in the draft as an explicit preview/apply/regenerate workflow (LLM-first via the existing provider path; rule-based as fallback/test only).
- [x] Draft autofill with traceable per-field origin (`llm` vs `fallback-rule-based` in the suggestion state).
- [x] Field suggestions in the draft as an explicit preview/apply/regenerate workflow (LLM-first via the provider-aware runtime; rule-based as fallback/test only).
- [x] Draft autofill with traceable per-field origin (provider label such as `openai`/`anthropic`/`google`/`xai`/`ollama`, `qc-llm` as compatibility fallback, `fallback-rule-based` as last resort).
- [-] Style-profile logic taking `businessType` + tonality into account (context is passed into the LLM path; quality/governance polish still open).
- [-] Prompt/system control (master prompt + prompt blocks) in Settings wired into the LLM suggestion path; build flow without prominent prompt internals.
- [x] Semantic slot mappings between template fields and target roles actively used as a bridge for LLM autofill (including improved separation in repeated areas).
- [-] Phase A provider/model settings foundation implemented in Settings/UI/persistence (including provider-specific key storage); production runtime switching per provider/model follows in later phases.
- [x] Phase A/B provider/model settings foundation including production runtime switching implemented (provider/model selection + provider-specific keys + base URL for Ollama/compatible endpoints drive suggestions directly).

### F) Security and operational readiness
- [ ] Binding secret strategy (encrypted storage instead of simple placeholder logic).


internal/app/app.go (+7 -2)

@@ -17,6 +17,7 @@ import (
    "qctextbuilder/internal/httpserver"
    "qctextbuilder/internal/httpserver/handlers"
    "qctextbuilder/internal/httpserver/views"
    "qctextbuilder/internal/llmruntime"
    "qctextbuilder/internal/logging"
    "qctextbuilder/internal/mapping"
    "qctextbuilder/internal/onboarding"
@@ -71,9 +72,13 @@ func New(cfg config.Config) (*App, error) {
    draftSvc := draftsvc.New(draftStore, templateStore, manifestStore)
    mappingSvc := mapping.New()
    buildSvc := buildsvc.New(qc, templateStore, manifestStore, buildStore, mappingSvc, time.Duration(cfg.PollTimeoutSeconds)*time.Second)
    providerRuntime := llmruntime.NewFactory(45 * time.Second)
    suggestionGenerator := mapping.NewCompositeSuggestionGenerator(
        mapping.NewLLMSuggestionGenerator(qc),
        mapping.NewRuleBasedSuggestionGenerator(),
        mapping.NewProviderAwareSuggestionGenerator(settingsStore, providerRuntime),
        mapping.NewCompositeSuggestionGenerator(
            mapping.NewQCLLMSuggestionGenerator(qc),
            mapping.NewRuleBasedSuggestionGenerator(),
        ),
    )
    pollingSvc := polling.New(buildSvc, buildStore, time.Duration(cfg.PollIntervalSeconds)*time.Second, cfg.PollMaxConcurrent, logger)
    api := handlers.NewAPI(templateSvc, onboardSvc, draftSvc, buildSvc)


internal/domain/models.go (+1 -0)

@@ -89,6 +89,7 @@ type BuildDraft struct {

const (
    DraftSuggestionSourceLLM               = "llm"
    DraftSuggestionSourceQCLLM             = "qc-llm"
    DraftSuggestionSourceFallbackRuleBased = "fallback-rule-based"
    DraftSuggestionSourceRuleBased         = DraftSuggestionSourceFallbackRuleBased



internal/llmruntime/runtime.go (+249 -0)

@@ -0,0 +1,249 @@
package llmruntime

import (
    "bytes"
    "context"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
    "net/url"
    "strings"
    "time"
)

type Request struct {
    Provider     string
    Model        string
    BaseURL      string
    APIKey       string
    SystemPrompt string
    UserPrompt   string
}

type Client interface {
    Generate(ctx context.Context, req Request) (string, error)
}

type Factory struct {
    httpClient *http.Client
}

func NewFactory(timeout time.Duration) *Factory {
    if timeout <= 0 {
        timeout = 45 * time.Second
    }
    return &Factory{
        httpClient: &http.Client{Timeout: timeout},
    }
}

func (f *Factory) ClientFor(provider string) (Client, error) {
    normalized := strings.ToLower(strings.TrimSpace(provider))
    switch normalized {
    case "openai", "xai", "ollama":
        return &openAICompatibleClient{httpClient: f.httpClient}, nil
    case "anthropic":
        return &anthropicClient{httpClient: f.httpClient}, nil
    case "google":
        return &googleClient{httpClient: f.httpClient}, nil
    default:
        return nil, fmt.Errorf("unsupported llm provider: %s", normalized)
    }
}

type openAICompatibleClient struct {
    httpClient *http.Client
}

func (c *openAICompatibleClient) Generate(ctx context.Context, req Request) (string, error) {
    baseURL := strings.TrimRight(strings.TrimSpace(req.BaseURL), "/")
    if baseURL == "" {
        switch strings.ToLower(strings.TrimSpace(req.Provider)) {
        case "xai":
            baseURL = "https://api.x.ai"
        case "ollama":
            baseURL = "http://localhost:11434"
        default:
            baseURL = "https://api.openai.com"
        }
    }

    payload := map[string]any{
        "model":       strings.TrimSpace(req.Model),
        "temperature": 0,
        "messages": []map[string]string{
            {"role": "system", "content": strings.TrimSpace(req.SystemPrompt)},
            {"role": "user", "content": strings.TrimSpace(req.UserPrompt)},
        },
    }

    body, err := doJSON(ctx, c.httpClient, http.MethodPost, baseURL+"/v1/chat/completions", req.APIKey, nil, payload)
    if err != nil {
        return "", err
    }

    var response struct {
        Choices []struct {
            Message struct {
                Content string `json:"content"`
            } `json:"message"`
        } `json:"choices"`
    }
    if err := json.Unmarshal(body, &response); err != nil {
        return "", fmt.Errorf("decode openai-compatible response: %w", err)
    }
    if len(response.Choices) == 0 {
        return "", fmt.Errorf("empty openai-compatible response")
    }
    return strings.TrimSpace(response.Choices[0].Message.Content), nil
}

type anthropicClient struct {
    httpClient *http.Client
}

func (c *anthropicClient) Generate(ctx context.Context, req Request) (string, error) {
    baseURL := strings.TrimRight(strings.TrimSpace(req.BaseURL), "/")
    if baseURL == "" {
        baseURL = "https://api.anthropic.com"
    }
    payload := map[string]any{
        "model":       strings.TrimSpace(req.Model),
        "max_tokens":  1200,
        "temperature": 0,
        "system":      strings.TrimSpace(req.SystemPrompt),
        "messages": []map[string]any{
            {"role": "user", "content": strings.TrimSpace(req.UserPrompt)},
        },
    }
    headers := map[string]string{"anthropic-version": "2023-06-01"}
    body, err := doJSON(ctx, c.httpClient, http.MethodPost, baseURL+"/v1/messages", req.APIKey, headers, payload)
    if err != nil {
        return "", err
    }

    var response struct {
        Content []struct {
            Type string `json:"type"`
            Text string `json:"text"`
        } `json:"content"`
    }
    if err := json.Unmarshal(body, &response); err != nil {
        return "", fmt.Errorf("decode anthropic response: %w", err)
    }
    for _, item := range response.Content {
        if strings.EqualFold(strings.TrimSpace(item.Type), "text") && strings.TrimSpace(item.Text) != "" {
            return strings.TrimSpace(item.Text), nil
        }
    }
    return "", fmt.Errorf("empty anthropic response")
}

type googleClient struct {
    httpClient *http.Client
}

func (c *googleClient) Generate(ctx context.Context, req Request) (string, error) {
    baseURL := strings.TrimRight(strings.TrimSpace(req.BaseURL), "/")
    if baseURL == "" {
        baseURL = "https://generativelanguage.googleapis.com"
    }
    model := strings.TrimSpace(req.Model)
    if model == "" {
        return "", fmt.Errorf("google model is required")
    }
    apiKey := strings.TrimSpace(req.APIKey)
    if apiKey == "" {
        return "", fmt.Errorf("google api key is required")
    }

    endpoint := fmt.Sprintf("%s/v1beta/models/%s:generateContent?key=%s", baseURL, url.PathEscape(model), url.QueryEscape(apiKey))
    payload := map[string]any{
        "contents": []map[string]any{
            {"parts": []map[string]string{{"text": strings.TrimSpace(req.UserPrompt)}}},
        },
        "generationConfig": map[string]any{
            "temperature": 0,
        },
    }
    if strings.TrimSpace(req.SystemPrompt) != "" {
        payload["systemInstruction"] = map[string]any{
            "parts": []map[string]string{{"text": strings.TrimSpace(req.SystemPrompt)}},
        }
    }

    body, err := doJSON(ctx, c.httpClient, http.MethodPost, endpoint, "", nil, payload)
    if err != nil {
        return "", err
    }

    var response struct {
        Candidates []struct {
            Content struct {
                Parts []struct {
                    Text string `json:"text"`
                } `json:"parts"`
            } `json:"content"`
        } `json:"candidates"`
    }
    if err := json.Unmarshal(body, &response); err != nil {
        return "", fmt.Errorf("decode google response: %w", err)
    }
    if len(response.Candidates) == 0 {
        return "", fmt.Errorf("empty google response")
    }
    parts := make([]string, 0, len(response.Candidates[0].Content.Parts))
    for _, part := range response.Candidates[0].Content.Parts {
        if text := strings.TrimSpace(part.Text); text != "" {
            parts = append(parts, text)
        }
    }
    if len(parts) == 0 {
        return "", fmt.Errorf("google response has no text parts")
    }
    return strings.Join(parts, "\n"), nil
}

func doJSON(ctx context.Context, httpClient *http.Client, method, endpoint, apiKey string, headers map[string]string, payload any) ([]byte, error) {
    body, err := json.Marshal(payload)
    if err != nil {
        return nil, fmt.Errorf("marshal request: %w", err)
    }

    req, err := http.NewRequestWithContext(ctx, method, endpoint, bytes.NewReader(body))
    if err != nil {
        return nil, fmt.Errorf("build request: %w", err)
    }
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("Accept", "application/json")
    if strings.TrimSpace(apiKey) != "" {
        req.Header.Set("Authorization", "Bearer "+strings.TrimSpace(apiKey))
        req.Header.Set("x-api-key", strings.TrimSpace(apiKey))
    }
    for key, value := range headers {
        if strings.TrimSpace(key) == "" {
            continue
        }
        req.Header.Set(key, value)
    }

    resp, err := httpClient.Do(req)
    if err != nil {
        return nil, fmt.Errorf("do request: %w", err)
    }
    defer resp.Body.Close()

    respBody, err := io.ReadAll(resp.Body)
    if err != nil {
        return nil, fmt.Errorf("read response: %w", err)
    }
    if resp.StatusCode >= 400 {
        message := strings.TrimSpace(string(respBody))
        if len(message) > 500 {
            message = message[:500]
        }
        return nil, fmt.Errorf("provider http %d: %s", resp.StatusCode, message)
    }
    return respBody, nil
}

internal/mapping/provider_suggestion_generator.go (+228 -0)

@@ -0,0 +1,228 @@
package mapping

import (
    "context"
    "encoding/json"
    "fmt"
    "strings"

    "qctextbuilder/internal/domain"
    "qctextbuilder/internal/llmruntime"
)

type SettingsReader interface {
    GetSettings(ctx context.Context) (*domain.AppSettings, error)
}

type ProviderAwareSuggestionGenerator struct {
    settings       SettingsReader
    runtimeFactory *llmruntime.Factory
}

func NewProviderAwareSuggestionGenerator(settings SettingsReader, runtimeFactory *llmruntime.Factory) *ProviderAwareSuggestionGenerator {
    return &ProviderAwareSuggestionGenerator{
        settings:       settings,
        runtimeFactory: runtimeFactory,
    }
}

func (g *ProviderAwareSuggestionGenerator) Generate(ctx context.Context, req SuggestionRequest) (SuggestionResult, error) {
    if g == nil || g.settings == nil || g.runtimeFactory == nil {
        return SuggestionResult{}, fmt.Errorf("provider-aware generator is not configured")
    }

    settings, err := g.settings.GetSettings(ctx)
    if err != nil || settings == nil {
        return SuggestionResult{}, fmt.Errorf("llm settings are not available")
    }
    provider := domain.NormalizeLLMProvider(settings.LLMActiveProvider)
    model := domain.NormalizeLLMModel(provider, settings.LLMActiveModel)
    if strings.TrimSpace(model) == "" {
        return SuggestionResult{}, fmt.Errorf("no active model configured")
    }
    apiKey := apiKeyForProvider(provider, *settings)
    if provider != domain.LLMProviderOllama && strings.TrimSpace(apiKey) == "" {
        return SuggestionResult{}, fmt.Errorf("api key for provider %s is not configured", provider)
    }

    targets := collectSuggestionTargets(req.Fields, req.Existing, req.IncludeFilled)
    if len(targets) == 0 {
        return SuggestionResult{Suggestions: []Suggestion{}, ByFieldPath: map[string]Suggestion{}}, nil
    }
    allowed := make(map[string]SemanticSlotTarget, len(targets))
    for _, target := range targets {
        allowed[target.FieldPath] = target
    }

    providerClient, err := g.runtimeFactory.ClientFor(provider)
    if err != nil {
        return SuggestionResult{}, err
    }
    systemPrompt, userPrompt := buildProviderPrompts(req, targets)
    raw, err := providerClient.Generate(ctx, llmruntime.Request{
        Provider:     provider,
        Model:        model,
        BaseURL:      strings.TrimSpace(settings.LLMBaseURL),
        APIKey:       strings.TrimSpace(apiKey),
        SystemPrompt: systemPrompt,
        UserPrompt:   userPrompt,
    })
    if err != nil {
        return SuggestionResult{}, err
    }

    parsed, err := parseProviderSuggestions(raw)
    if err != nil {
        return SuggestionResult{}, err
    }

    out := SuggestionResult{
        Suggestions: make([]Suggestion, 0, len(parsed)),
        ByFieldPath: map[string]Suggestion{},
    }
    for _, item := range parsed {
        fieldPath := strings.TrimSpace(item.FieldPath)
        target, ok := allowed[fieldPath]
        if !ok {
            continue
        }
        value := strings.TrimSpace(item.Value)
        if value == "" {
            continue
        }
        suggestion := Suggestion{
            FieldPath: fieldPath,
            Slot:      firstNonEmpty(strings.TrimSpace(item.Slot), target.Slot),
            Value:     value,
            Reason:    firstNonEmpty(strings.TrimSpace(item.Reason), "provider suggestion"),
            Source:    provider,
        }
        if _, exists := out.ByFieldPath[fieldPath]; exists {
            continue
        }
        out.Suggestions = append(out.Suggestions, suggestion)
        out.ByFieldPath[fieldPath] = suggestion
    }
    return out, nil
}

type providerSuggestion struct {
    FieldPath string `json:"fieldPath"`
    Slot      string `json:"slot,omitempty"`
    Value     string `json:"value"`
    Reason    string `json:"reason,omitempty"`
}

func parseProviderSuggestions(raw string) ([]providerSuggestion, error) {
    content := strings.TrimSpace(raw)
    if content == "" {
        return nil, fmt.Errorf("empty provider response")
    }

    candidates := []string{content}
    if fence := extractFencedJSON(content); fence != "" {
        candidates = append([]string{fence}, candidates...)
    }
    if object := extractJSONObject(content); object != "" {
        candidates = append(candidates, object)
    }

    for _, candidate := range candidates {
        items, ok := parseSuggestionsCandidate(candidate)
        if ok {
            return items, nil
        }
    }
    return nil, fmt.Errorf("provider response is not valid suggestions json")
}

func parseSuggestionsCandidate(raw string) ([]providerSuggestion, bool) {
    var objectPayload struct {
        Suggestions []providerSuggestion `json:"suggestions"`
    }
    if err := json.Unmarshal([]byte(raw), &objectPayload); err == nil && len(objectPayload.Suggestions) > 0 {
        return objectPayload.Suggestions, true
    }
    var listPayload []providerSuggestion
    if err := json.Unmarshal([]byte(raw), &listPayload); err == nil && len(listPayload) > 0 {
        return listPayload, true
    }
    return nil, false
}

func extractFencedJSON(value string) string {
    const fence = "```"
    start := strings.Index(value, fence)
    for start >= 0 {
        rest := value[start+len(fence):]
        end := strings.Index(rest, fence)
        if end < 0 {
            return ""
        }
        block := strings.TrimSpace(rest[:end])
        block = strings.TrimPrefix(block, "json")
        block = strings.TrimPrefix(block, "JSON")
        block = strings.TrimSpace(block)
        if strings.HasPrefix(block, "{") || strings.HasPrefix(block, "[") {
            return block
        }
        nextOffset := start + len(fence) + end + len(fence)
        nextStart := strings.Index(value[nextOffset:], fence)
        if nextStart < 0 {
            break
        }
        start = nextOffset + nextStart
    }
    return ""
}

func extractJSONObject(value string) string {
    start := strings.IndexAny(value, "{[")
    if start < 0 {
        return ""
    }
    end := strings.LastIndexAny(value, "}]")
    if end <= start {
        return ""
    }
    return strings.TrimSpace(value[start : end+1])
}

func buildProviderPrompts(req SuggestionRequest, targets []SemanticSlotTarget) (string, string) {
    targetPayload := make([]map[string]string, 0, len(targets))
    for _, target := range targets {
        targetPayload = append(targetPayload, map[string]string{
            "fieldPath": strings.TrimSpace(target.FieldPath),
            "slot":      strings.TrimSpace(target.Slot),
        })
    }
    contextPayload := map[string]any{
        "globalData":   req.GlobalData,
        "draftContext": llmDraftContextMap(req.DraftContext),
        "masterPrompt": strings.TrimSpace(req.MasterPrompt),
        "promptBlocks": enabledPromptBlocks(req.PromptBlocks),
        "targets":      targetPayload,
    }
    contextJSON, _ := json.MarshalIndent(contextPayload, "", " ")

    system := "You generate website text suggestions. Return JSON only. Format: {\"suggestions\":[{\"fieldPath\":\"...\",\"slot\":\"...\",\"value\":\"...\",\"reason\":\"...\"}]}. Use only provided field paths. Keep values concise and in input language."
    user := "Generate suggestions for each target field using the provided context. Do not include markdown.\n\n" + string(contextJSON)
    return system, user
}

func apiKeyForProvider(provider string, settings domain.AppSettings) string {
    switch provider {
    case domain.LLMProviderOpenAI:
        return strings.TrimSpace(settings.OpenAIAPIKeyEncrypted)
    case domain.LLMProviderAnthropic:
        return strings.TrimSpace(settings.AnthropicAPIKeyEncrypted)
    case domain.LLMProviderGoogle:
        return strings.TrimSpace(settings.GoogleAPIKeyEncrypted)
    case domain.LLMProviderXAI:
        return strings.TrimSpace(settings.XAIAPIKeyEncrypted)
    case domain.LLMProviderOllama:
        return strings.TrimSpace(settings.OllamaAPIKeyEncrypted)
    default:
        return ""
    }
}

internal/mapping/provider_suggestion_generator_test.go (+106 -0)

@@ -0,0 +1,106 @@
package mapping

import (
    "context"
    "encoding/json"
    "net/http"
    "net/http/httptest"
    "strings"
    "testing"
    "time"

    "qctextbuilder/internal/domain"
    "qctextbuilder/internal/llmruntime"
)

type stubSettingsReader struct {
    settings *domain.AppSettings
    err      error
}

func (s *stubSettingsReader) GetSettings(context.Context) (*domain.AppSettings, error) {
    if s.err != nil {
        return nil, s.err
    }
    return s.settings, nil
}

func TestProviderAwareSuggestionGenerator_UsesActiveProviderModelAndKey(t *testing.T) {
    t.Parallel()

    var (
        gotPath  string
        gotAuth  string
        gotModel string
    )
    server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        gotPath = r.URL.Path
        gotAuth = r.Header.Get("Authorization")
        var payload map[string]any
        _ = json.NewDecoder(r.Body).Decode(&payload)
        gotModel, _ = payload["model"].(string)
        _, _ = w.Write([]byte(`{"choices":[{"message":{"content":"{\"suggestions\":[{\"fieldPath\":\"text.textTitle_m1710_1\",\"value\":\"Provider Hero\",\"reason\":\"focused hero\"}]}"}}]}`))
    }))
    defer server.Close()

    generator := NewProviderAwareSuggestionGenerator(&stubSettingsReader{settings: &domain.AppSettings{
        LLMActiveProvider:     domain.LLMProviderOpenAI,
        LLMActiveModel:        "gpt-5.4",
        LLMBaseURL:            server.URL,
        OpenAIAPIKeyEncrypted: "openai-key",
    }}, llmruntime.NewFactory(5*time.Second))

    result, err := generator.Generate(context.Background(), SuggestionRequest{
        Fields: []domain.TemplateField{
            {Path: "text.textTitle_m1710_1", KeyName: "textTitle_m1710_1", FieldKind: "text", IsEnabled: true, WebsiteSection: domain.WebsiteSectionHero},
        },
        GlobalData: map[string]any{"companyName": "Muster AG"},
        Existing:   map[string]string{},
    })
    if err != nil {
        t.Fatalf("generate failed: %v", err)
    }
    if got := result.ByFieldPath["text.textTitle_m1710_1"].Value; got != "Provider Hero" {
        t.Fatalf("unexpected value: %q", got)
    }
    if got := result.ByFieldPath["text.textTitle_m1710_1"].Source; got != domain.LLMProviderOpenAI {
        t.Fatalf("unexpected source: %q", got)
    }
    if gotPath != "/v1/chat/completions" {
        t.Fatalf("unexpected path: %s", gotPath)
    }
    if gotAuth != "Bearer openai-key" {
        t.Fatalf("unexpected auth header: %q", gotAuth)
    }
    if gotModel != "gpt-5.4" {
        t.Fatalf("unexpected model: %q", gotModel)
    }
}

func TestProviderAwareSuggestionGenerator_RequiresAPIKeyForNonOllama(t *testing.T) {
    t.Parallel()

    generator := NewProviderAwareSuggestionGenerator(&stubSettingsReader{settings: &domain.AppSettings{
        LLMActiveProvider: domain.LLMProviderAnthropic,
        LLMActiveModel:    "claude-sonnet-4-5",
    }}, llmruntime.NewFactory(5*time.Second))

    _, err := generator.Generate(context.Background(), SuggestionRequest{
        Fields: []domain.TemplateField{{Path: "text.textTitle_m1710_1", KeyName: "textTitle_m1710_1", FieldKind: "text", IsEnabled: true, WebsiteSection: domain.WebsiteSectionHero}},
    })
    if err == nil || !strings.Contains(err.Error(), "api key") {
        t.Fatalf("expected api key error, got: %v", err)
    }
}

func TestParseProviderSuggestions_AcceptsFencedJSON(t *testing.T) {
    t.Parallel()

    items, err := parseProviderSuggestions("```json\n{\"suggestions\":[{\"fieldPath\":\"a\",\"value\":\"b\"}]}\n```")
    if err != nil {
        t.Fatalf("parse failed: %v", err)
    }
    if len(items) != 1 || items[0].FieldPath != "a" || items[0].Value != "b" {
        t.Fatalf("unexpected parsed result: %+v", items)
    }
}

internal/mapping/suggestion_generator.go (+14 -3)

@@ -26,11 +26,22 @@ func (g *RuleBasedSuggestionGenerator) Generate(_ context.Context, req Suggestio
}

type LLMSuggestionGenerator struct {
    qc qcclient.Client
    qc     qcclient.Client
    source string
}

func NewLLMSuggestionGenerator(qc qcclient.Client) *LLMSuggestionGenerator {
    return &LLMSuggestionGenerator{qc: qc}
    return &LLMSuggestionGenerator{
        qc:     qc,
        source: domain.DraftSuggestionSourceLLM,
    }
}

func NewQCLLMSuggestionGenerator(qc qcclient.Client) *LLMSuggestionGenerator {
    return &LLMSuggestionGenerator{
        qc:     qc,
        source: domain.DraftSuggestionSourceQCLLM,
    }
}

func (g *LLMSuggestionGenerator) Generate(ctx context.Context, req SuggestionRequest) (SuggestionResult, error) {
@@ -102,7 +113,7 @@ func (g *LLMSuggestionGenerator) Generate(ctx context.Context, req SuggestionReq
            Slot:   target.Slot,
            Value:  value,
            Reason: "llm suggestion from template content generation",
            Source: domain.DraftSuggestionSourceLLM,
            Source: firstNonEmpty(strings.TrimSpace(g.source), domain.DraftSuggestionSourceLLM),
        }
        out.Suggestions = append(out.Suggestions, suggestion)
        out.ByFieldPath[target.FieldPath] = suggestion

