@@ -10,7 +10,8 @@ The app can currently:
 - Create and update drafts and move them through the statuses `draft` -> `reviewed` -> `submitted`.
 - Process external draft intake via `POST /api/drafts/intake` (master data + optional website/style context, no direct build).
 - Maintain a global master prompt in Settings and configure prompt blocks as defaults for the later LLM flow.
-- Maintain the base LLM configuration in the Settings/Config area: active provider, active model, base URL for Ollama/compatible endpoints, and separate API key stores per provider (OpenAI, Anthropic, Google, xAI, Ollama).
+- Maintain the base LLM configuration in the Settings/Config area: active provider, active model (provider-aware static selection list), base URL for Ollama/compatible endpoints, temperature/max tokens, and separate API key stores per provider (OpenAI, Anthropic, Google, xAI, Ollama).
+- Check the LLM provider configuration in Settings via a lightweight validate action (active provider/model/key/base URL verified with a short runtime request).
 - Focus the user flow in the draft/build UI on master data, intake/website context, style selection, and template fields; prompt internals live in Settings.
 - Map internal semantic target slots (e.g. `hero.title`, `service_items[n].description`) to template fields in preparation for later LLM autofill.
 - Separate repeated sections in semantic slots by block and role (e.g. services/team/testimonials per item instead of one combined slot).
@@ -22,7 +23,7 @@ The app can currently:
 Important:
 - Leadharvester only delivers intake data (master data + optional context) into drafts.
 - LLM autofill remains an assistive step in the review flow: suggestions are stored separately and applied manually; on provider failure a fallback path takes over (QC-compatible, then deterministic rule-based).
-- Provider/model/base-URL/API-key settings drive the primary suggestion runtime path in production.
+- Provider/model/base-URL/API-key/temperature/max-tokens settings drive the primary suggestion runtime path in production.
 
 ## Local startup
@@ -38,7 +39,7 @@ Important:
 ## Persistence
 
 The default is SQLite.
-Stored entities are settings (incl. prompt config and LLM provider/model/key basics), templates, manifests/fields, drafts, and site builds.
+Stored entities are settings (incl. prompt config and LLM provider/model/runtime/key basics), templates, manifests/fields, drafts, and site builds.
 
 ## Draft/Review Flow
@@ -42,7 +42,9 @@ Current state:
 - Semantic target slots (e.g. `hero.title`, `service_items[n].description`) are mapped internally to concrete template fields in preparation for later LLM autofill.
 - Repeated sections (incl. services/team/testimonials) are separated per item by block and role type in the slot preview instead of collapsing into combined slots.
 - An LLM-first suggestion state for the draft/build UI is in place: suggestions are stored separately from field values and controlled explicitly via generate/regenerate/apply (globally and per field); rule-based generation remains active as the last fallback/test path.
-- The provider-aware suggestion runtime is active: settings (`llm_active_provider`, `llm_active_model`, provider-specific API key, `llm_base_url` for Ollama/compatible endpoints) drive the primary runtime path; the existing QC path remains as a compatibility fallback.
+- The provider-aware suggestion runtime is active: settings (`llm_active_provider`, `llm_active_model`, `llm_temperature`, `llm_max_tokens`, provider-specific API key, `llm_base_url` for Ollama/compatible endpoints) drive the primary runtime path; the existing QC path remains as a compatibility fallback.
+- Settings include a lightweight validate action for the active provider configuration (short runtime check) that does not bypass the draft/review flow.
+- Model selection is implemented as provider-aware static lists, structured so that dynamic model lists/refresh can be attached later.
 - Technical field details (e.g. field paths/slots/suggestion metadata) can optionally be shown in the UI via a debug toggle.
 - Starting a build already requires a template manifest status of `reviewed`/`validated`.
 - Process-level review gates (e.g. approval policy, roles, mandatory checks per field) are not yet fully built out.
@@ -110,6 +112,7 @@ Status markers:
 - [-] Prompt/system control (master prompt + prompt blocks) in Settings wired into the LLM suggestion path; build flow without prominent prompt internals.
 - [x] Semantic slot mappings between template fields and target roles actively used as the bridge for LLM autofill (incl. improved separation in repeated sections).
 - [x] Phase A/B provider/model settings foundation incl. production runtime switching implemented (provider/model selection + provider-specific keys + base URL for Ollama/compatible endpoints drive suggestions directly).
+- [x] Phase C convenience/quality implemented: temperature/max tokens in settings + runtime, settings validate action, more robust provider response/error handling, and static provider-aware model UX with a later expansion path.
 
 ### F) Security and operational readiness
 - [ ] Binding secret strategy (encrypted storage instead of simple placeholder logic).
@@ -91,6 +91,8 @@ func New(cfg config.Config) (*App, error) {
 		JobPollTimeoutSeconds: cfg.PollTimeoutSeconds,
 		LLMActiveProvider:     domain.DefaultLLMProvider(),
 		LLMActiveModel:        domain.NormalizeLLMModel(domain.DefaultLLMProvider(), ""),
+		LLMTemperature:        domain.DefaultLLMTemperature(),
+		LLMMaxTokens:          domain.DefaultLLMMaxTokens(),
 		MasterPrompt:          domain.SeedMasterPrompt,
 		PromptBlocks:          domain.DefaultPromptBlocks(),
 	}
@@ -98,6 +100,8 @@ func New(cfg config.Config) (*App, error) {
 		baseSettings.LLMActiveProvider = existing.LLMActiveProvider
 		baseSettings.LLMActiveModel = existing.LLMActiveModel
 		baseSettings.LLMBaseURL = existing.LLMBaseURL
+		baseSettings.LLMTemperature = existing.LLMTemperature
+		baseSettings.LLMMaxTokens = existing.LLMMaxTokens
 		baseSettings.OpenAIAPIKeyEncrypted = existing.OpenAIAPIKeyEncrypted
 		baseSettings.AnthropicAPIKeyEncrypted = existing.AnthropicAPIKeyEncrypted
 		baseSettings.GoogleAPIKeyEncrypted = existing.GoogleAPIKeyEncrypted
@@ -118,6 +122,7 @@ func New(cfg config.Config) (*App, error) {
 	r.Get("/", ui.Home)
 	r.Get("/settings", ui.Settings)
 	r.Post("/settings/llm", ui.SaveLLMSettings)
+	r.Post("/settings/llm/validate", ui.ValidateLLMSettings)
 	r.Post("/settings/prompt", ui.SavePromptSettings)
 	r.Get("/templates", ui.Templates)
 	r.Post("/templates/sync", ui.SyncTemplates)
@@ -1,6 +1,9 @@
 package domain
 
-import "strings"
+import (
+	"math"
+	"strings"
+)
 
 const (
 	LLMProviderOpenAI    = "openai"
@@ -8,6 +11,13 @@ const (
 	LLMProviderGoogle    = "google"
 	LLMProviderXAI       = "xai"
 	LLMProviderOllama    = "ollama"
+
+	defaultLLMTemperature = 0.2
+	minLLMTemperature     = 0.0
+	maxLLMTemperature     = 2.0
+	defaultLLMMaxTokens   = 1200
+	minLLMMaxTokens       = 64
+	maxLLMMaxTokens       = 8192
 )
 
 type LLMModelOption struct {
@@ -106,3 +116,54 @@ func NormalizeLLMModel(provider, model string) string {
 	}
 	return models[0].Value
 }
+
+func DefaultLLMTemperature() float64 {
+	return defaultLLMTemperature
+}
+
+func NormalizeLLMTemperature(value float64) float64 {
+	if math.IsNaN(value) || math.IsInf(value, 0) {
+		return defaultLLMTemperature
+	}
+	if value < minLLMTemperature {
+		value = minLLMTemperature
+	}
+	if value > maxLLMTemperature {
+		value = maxLLMTemperature
+	}
+	return math.Round(value*100) / 100
+}
+
+func DefaultLLMMaxTokens() int {
+	return defaultLLMMaxTokens
+}
+
+func NormalizeLLMMaxTokens(value int) int {
+	if value <= 0 {
+		return defaultLLMMaxTokens
+	}
+	if value < minLLMMaxTokens {
+		return minLLMMaxTokens
+	}
+	if value > maxLLMMaxTokens {
+		return maxLLMMaxTokens
+	}
+	return value
+}
+
+func LLMAPIKeyForProvider(provider string, settings AppSettings) string {
+	switch NormalizeLLMProvider(provider) {
+	case LLMProviderOpenAI:
+		return strings.TrimSpace(settings.OpenAIAPIKeyEncrypted)
+	case LLMProviderAnthropic:
+		return strings.TrimSpace(settings.AnthropicAPIKeyEncrypted)
+	case LLMProviderGoogle:
+		return strings.TrimSpace(settings.GoogleAPIKeyEncrypted)
+	case LLMProviderXAI:
+		return strings.TrimSpace(settings.XAIAPIKeyEncrypted)
+	case LLMProviderOllama:
+		return strings.TrimSpace(settings.OllamaAPIKeyEncrypted)
+	default:
+		return ""
+	}
+}
@@ -155,6 +155,8 @@ type AppSettings struct {
 	LLMActiveProvider        string  `json:"llmActiveProvider,omitempty"`
 	LLMActiveModel           string  `json:"llmActiveModel,omitempty"`
 	LLMBaseURL               string  `json:"llmBaseUrl,omitempty"`
+	LLMTemperature           float64 `json:"llmTemperature,omitempty"`
+	LLMMaxTokens             int     `json:"llmMaxTokens,omitempty"`
 	OpenAIAPIKeyEncrypted    string  `json:"openAiApiKeyEncrypted,omitempty"`
 	AnthropicAPIKeyEncrypted string  `json:"anthropicApiKeyEncrypted,omitempty"`
 	GoogleAPIKeyEncrypted    string  `json:"googleApiKeyEncrypted,omitempty"`
@@ -19,6 +19,7 @@ import (
 	"qctextbuilder/internal/config"
 	"qctextbuilder/internal/domain"
 	"qctextbuilder/internal/draftsvc"
+	"qctextbuilder/internal/llmruntime"
 	"qctextbuilder/internal/mapping"
 	"qctextbuilder/internal/onboarding"
 	"qctextbuilder/internal/store"
@@ -65,6 +66,8 @@ type settingsPageData struct {
 	LLMActiveProvider      string
 	LLMActiveModel         string
 	LLMBaseURL             string
+	LLMTemperature         float64
+	LLMMaxTokens           int
 	OpenAIKeyConfigured    bool
 	AnthropicKeyConfigured bool
 	GoogleKeyConfigured    bool
@@ -259,6 +262,8 @@ func (u *UI) Settings(w http.ResponseWriter, r *http.Request) {
 		LLMActiveProvider:      activeProvider,
 		LLMActiveModel:         domain.NormalizeLLMModel(activeProvider, settings.LLMActiveModel),
 		LLMBaseURL:             strings.TrimSpace(settings.LLMBaseURL),
+		LLMTemperature:         domain.NormalizeLLMTemperature(settings.LLMTemperature),
+		LLMMaxTokens:           domain.NormalizeLLMMaxTokens(settings.LLMMaxTokens),
 		OpenAIKeyConfigured:    strings.TrimSpace(settings.OpenAIAPIKeyEncrypted) != "",
 		AnthropicKeyConfigured: strings.TrimSpace(settings.AnthropicAPIKeyEncrypted) != "",
 		GoogleKeyConfigured:    strings.TrimSpace(settings.GoogleAPIKeyEncrypted) != "",
@@ -289,30 +294,34 @@ func (u *UI) SaveLLMSettings(w http.ResponseWriter, r *http.Request) {
 		http.Redirect(w, r, "/settings?err=invalid+form", http.StatusSeeOther)
 		return
 	}
-	settings := u.loadPromptSettings(r.Context())
-	settings.LLMActiveProvider = domain.NormalizeLLMProvider(r.FormValue("llm_provider"))
-	settings.LLMActiveModel = domain.NormalizeLLMModel(settings.LLMActiveProvider, r.FormValue("llm_model"))
-	settings.LLMBaseURL = strings.TrimSpace(r.FormValue("llm_base_url"))
-	if value := strings.TrimSpace(r.FormValue("llm_api_key_openai")); value != "" {
-		settings.OpenAIAPIKeyEncrypted = value
-	}
-	if value := strings.TrimSpace(r.FormValue("llm_api_key_anthropic")); value != "" {
-		settings.AnthropicAPIKeyEncrypted = value
-	}
-	if value := strings.TrimSpace(r.FormValue("llm_api_key_google")); value != "" {
-		settings.GoogleAPIKeyEncrypted = value
-	}
-	if value := strings.TrimSpace(r.FormValue("llm_api_key_xai")); value != "" {
-		settings.XAIAPIKeyEncrypted = value
-	}
-	if value := strings.TrimSpace(r.FormValue("llm_api_key_ollama")); value != "" {
-		settings.OllamaAPIKeyEncrypted = value
-	}
+	settings, err := applyLLMSettingsForm(u.loadPromptSettings(r.Context()), r)
+	if err != nil {
+		http.Redirect(w, r, "/settings?err="+urlQuery(err.Error()), http.StatusSeeOther)
+		return
+	}
 	if err := u.settings.UpsertSettings(r.Context(), settings); err != nil {
 		http.Redirect(w, r, "/settings?err="+urlQuery(err.Error()), http.StatusSeeOther)
 		return
 	}
 	http.Redirect(w, r, "/settings?msg=llm+settings+saved", http.StatusSeeOther)
 }
+
+func (u *UI) ValidateLLMSettings(w http.ResponseWriter, r *http.Request) {
+	if err := r.ParseForm(); err != nil {
+		http.Redirect(w, r, "/settings?err=invalid+form", http.StatusSeeOther)
+		return
+	}
+	settings, err := applyLLMSettingsForm(u.loadPromptSettings(r.Context()), r)
+	if err != nil {
+		http.Redirect(w, r, "/settings?err="+urlQuery(err.Error()), http.StatusSeeOther)
+		return
+	}
+	if err := validateLLMProviderConfig(r.Context(), settings); err != nil {
+		http.Redirect(w, r, "/settings?err="+urlQuery(err.Error()), http.StatusSeeOther)
+		return
+	}
+	msg := fmt.Sprintf("llm provider config validated (%s / %s)", settings.LLMActiveProvider, settings.LLMActiveModel)
+	http.Redirect(w, r, "/settings?msg="+urlQuery(msg), http.StatusSeeOther)
+}
 
 func (u *UI) Templates(w http.ResponseWriter, r *http.Request) {
@@ -749,6 +758,104 @@ func urlQuery(s string) string {
 	return url.QueryEscape(s)
 }
 
+func applyLLMSettingsForm(settings domain.AppSettings, r *http.Request) (domain.AppSettings, error) {
+	next := settings
+	next.LLMActiveProvider = domain.NormalizeLLMProvider(r.FormValue("llm_provider"))
+	next.LLMActiveModel = domain.NormalizeLLMModel(next.LLMActiveProvider, r.FormValue("llm_model"))
+	next.LLMBaseURL = strings.TrimSpace(r.FormValue("llm_base_url"))
+	tempRaw := strings.TrimSpace(r.FormValue("llm_temperature"))
+	if tempRaw == "" {
+		next.LLMTemperature = domain.NormalizeLLMTemperature(next.LLMTemperature)
+	} else {
+		temp, err := strconv.ParseFloat(tempRaw, 64)
+		if err != nil {
+			return settings, fmt.Errorf("invalid llm temperature")
+		}
+		next.LLMTemperature = domain.NormalizeLLMTemperature(temp)
+	}
+	maxTokensRaw := strings.TrimSpace(r.FormValue("llm_max_tokens"))
+	if maxTokensRaw == "" {
+		next.LLMMaxTokens = domain.NormalizeLLMMaxTokens(next.LLMMaxTokens)
+	} else {
+		maxTokens, err := strconv.Atoi(maxTokensRaw)
+		if err != nil {
+			return settings, fmt.Errorf("invalid llm max tokens")
+		}
+		next.LLMMaxTokens = domain.NormalizeLLMMaxTokens(maxTokens)
+	}
+	if value := strings.TrimSpace(r.FormValue("llm_api_key_openai")); value != "" {
+		next.OpenAIAPIKeyEncrypted = value
+	}
+	if value := strings.TrimSpace(r.FormValue("llm_api_key_anthropic")); value != "" {
+		next.AnthropicAPIKeyEncrypted = value
+	}
+	if value := strings.TrimSpace(r.FormValue("llm_api_key_google")); value != "" {
+		next.GoogleAPIKeyEncrypted = value
+	}
+	if value := strings.TrimSpace(r.FormValue("llm_api_key_xai")); value != "" {
+		next.XAIAPIKeyEncrypted = value
+	}
+	if value := strings.TrimSpace(r.FormValue("llm_api_key_ollama")); value != "" {
+		next.OllamaAPIKeyEncrypted = value
+	}
+	return next, nil
+}
+
+func validateLLMProviderConfig(ctx context.Context, settings domain.AppSettings) error {
+	provider := domain.NormalizeLLMProvider(settings.LLMActiveProvider)
+	model := domain.NormalizeLLMModel(provider, settings.LLMActiveModel)
+	if strings.TrimSpace(model) == "" {
+		return fmt.Errorf("no active model configured")
+	}
+	baseURL := strings.TrimSpace(settings.LLMBaseURL)
+	if baseURL != "" {
+		parsed, err := url.Parse(baseURL)
+		if err != nil || strings.TrimSpace(parsed.Scheme) == "" || strings.TrimSpace(parsed.Host) == "" {
+			return fmt.Errorf("invalid llm base url")
+		}
+	}
+	apiKey := domain.LLMAPIKeyForProvider(provider, settings)
+	if provider != domain.LLMProviderOllama && strings.TrimSpace(apiKey) == "" {
+		return fmt.Errorf("api key for provider %s is not configured", provider)
+	}
+	runtimeFactory := llmruntime.NewFactory(10 * time.Second)
+	client, err := runtimeFactory.ClientFor(provider)
+	if err != nil {
+		return err
+	}
+	temperature := domain.NormalizeLLMTemperature(settings.LLMTemperature)
+	maxTokens := domain.NormalizeLLMMaxTokens(settings.LLMMaxTokens)
+	validationTokens := maxTokens
+	if validationTokens > 64 {
+		validationTokens = 64
+	}
+	if validationTokens < 16 {
+		validationTokens = 16
+	}
+	resp, err := client.Generate(ctx, llmruntime.Request{
+		Provider:     provider,
+		Model:        model,
+		BaseURL:      baseURL,
+		APIKey:       apiKey,
+		Temperature:  &temperature,
+		MaxTokens:    &validationTokens,
+		SystemPrompt: "You validate LLM connectivity for settings. Answer with plain text OK.",
+		UserPrompt:   "Return OK",
+	})
+	if err != nil {
+		return fmt.Errorf("provider validation failed (%s/%s): %w", provider, model, err)
+	}
+	if strings.TrimSpace(resp) == "" {
+		return fmt.Errorf("provider validation failed (%s/%s): empty response", provider, model)
+	}
+	return nil
+}
+
 func boolPtr(v bool) *bool { return &v }
 func intPtr(v int) *int { return &v }
 func int64Ptr(v int64) *int64 { return &v }
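`validateLLMProviderConfig` deliberately shrinks the configured max-tokens budget before sending its connectivity probe: the clamp keeps validation cheap even when a large production budget is configured, while leaving enough room for the provider to answer. The diff inlines this; a tiny standalone sketch of the same clamp (the helper name is hypothetical):

```go
package main

import "fmt"

// validationTokenBudget caps the probe request at 64 tokens so a large
// configured budget cannot make the settings check expensive, while
// keeping a floor of 16 so the model can still emit a short reply.
func validationTokenBudget(configuredMax int) int {
	budget := configuredMax
	if budget > 64 {
		budget = 64
	}
	if budget < 16 {
		budget = 16
	}
	return budget
}

func main() {
	fmt.Println(validationTokenBudget(1200)) // 64
	fmt.Println(validationTokenBudget(8))    // 16
}
```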
@@ -1660,6 +1767,8 @@ func (u *UI) loadPromptSettings(ctx context.Context) domain.AppSettings {
 		JobPollTimeoutSeconds: u.cfg.PollTimeoutSeconds,
 		LLMActiveProvider:     defaultProvider,
 		LLMActiveModel:        domain.NormalizeLLMModel(defaultProvider, ""),
+		LLMTemperature:        domain.DefaultLLMTemperature(),
+		LLMMaxTokens:          domain.DefaultLLMMaxTokens(),
 		MasterPrompt:          domain.SeedMasterPrompt,
 		PromptBlocks:          domain.DefaultPromptBlocks(),
 	}
@@ -1688,6 +1797,8 @@ func (u *UI) loadPromptSettings(ctx context.Context) domain.AppSettings {
 	settings.LLMActiveProvider = domain.NormalizeLLMProvider(stored.LLMActiveProvider)
 	settings.LLMActiveModel = domain.NormalizeLLMModel(settings.LLMActiveProvider, stored.LLMActiveModel)
 	settings.LLMBaseURL = strings.TrimSpace(stored.LLMBaseURL)
+	settings.LLMTemperature = domain.NormalizeLLMTemperature(stored.LLMTemperature)
+	settings.LLMMaxTokens = domain.NormalizeLLMMaxTokens(stored.LLMMaxTokens)
 	settings.OpenAIAPIKeyEncrypted = strings.TrimSpace(stored.OpenAIAPIKeyEncrypted)
 	settings.AnthropicAPIKeyEncrypted = strings.TrimSpace(stored.AnthropicAPIKeyEncrypted)
 	settings.GoogleAPIKeyEncrypted = strings.TrimSpace(stored.GoogleAPIKeyEncrypted)
@@ -17,6 +17,8 @@ type Request struct {
 	Model        string
 	BaseURL      string
 	APIKey       string
+	Temperature  *float64
+	MaxTokens    *int
 	SystemPrompt string
 	UserPrompt   string
 }
@@ -71,7 +73,8 @@ func (c *openAICompatibleClient) Generate(ctx context.Context, req Request) (str
 	payload := map[string]any{
 		"model":       strings.TrimSpace(req.Model),
-		"temperature": 0,
+		"temperature": optionalFloat64(req.Temperature, 0),
+		"max_tokens":  optionalInt(req.MaxTokens, 1200),
 		"messages": []map[string]string{
 			{"role": "system", "content": strings.TrimSpace(req.SystemPrompt)},
 			{"role": "user", "content": strings.TrimSpace(req.UserPrompt)},
@@ -110,8 +113,8 @@ func (c *anthropicClient) Generate(ctx context.Context, req Request) (string, er
 	}
 	payload := map[string]any{
 		"model":       strings.TrimSpace(req.Model),
-		"max_tokens":  1200,
-		"temperature": 0,
+		"max_tokens":  optionalInt(req.MaxTokens, 1200),
+		"temperature": optionalFloat64(req.Temperature, 0),
 		"system":      strings.TrimSpace(req.SystemPrompt),
 		"messages": []map[string]any{
 			{"role": "user", "content": strings.TrimSpace(req.UserPrompt)},
@@ -164,7 +167,8 @@ func (c *googleClient) Generate(ctx context.Context, req Request) (string, error
 			{"parts": []map[string]string{{"text": strings.TrimSpace(req.UserPrompt)}}},
 		},
 		"generationConfig": map[string]any{
-			"temperature": 0,
+			"temperature":     optionalFloat64(req.Temperature, 0),
+			"maxOutputTokens": optionalInt(req.MaxTokens, 1200),
 		},
 	}
 	if strings.TrimSpace(req.SystemPrompt) != "" {
@@ -239,11 +243,72 @@ func doJSON(ctx context.Context, httpClient *http.Client, method, endpoint, apiK
 		return nil, fmt.Errorf("read response: %w", err)
 	}
 	if resp.StatusCode >= 400 {
-		message := strings.TrimSpace(string(respBody))
-		if len(message) > 500 {
-			message = message[:500]
-		}
+		message := trimProviderErrorMessage(respBody)
 		return nil, fmt.Errorf("provider http %d: %s", resp.StatusCode, message)
 	}
 	return respBody, nil
 }
+
+func optionalFloat64(value *float64, fallback float64) float64 {
+	if value == nil {
+		return fallback
+	}
+	return *value
+}
+
+func optionalInt(value *int, fallback int) int {
+	if value == nil {
+		return fallback
+	}
+	return *value
+}
+
+func trimProviderErrorMessage(respBody []byte) string {
+	message := extractProviderErrorMessage(respBody)
+	if len(message) > 500 {
+		return message[:500]
+	}
+	return message
+}
+
+func extractProviderErrorMessage(respBody []byte) string {
+	raw := strings.TrimSpace(string(respBody))
+	if raw == "" {
+		return "empty error response"
+	}
+	var parsed map[string]any
+	if err := json.Unmarshal(respBody, &parsed); err == nil {
+		if value := nestedString(parsed, "error", "message"); value != "" {
+			return value
+		}
+		if value := nestedString(parsed, "error"); value != "" {
+			return value
+		}
+		if value := nestedString(parsed, "message"); value != "" {
+			return value
+		}
+	}
+	return raw
+}
+
+func nestedString(values map[string]any, path ...string) string {
+	if len(path) == 0 || values == nil {
+		return ""
+	}
+	current := any(values)
+	for _, key := range path {
+		nextMap, ok := current.(map[string]any)
+		if !ok {
+			return ""
+		}
+		current = nextMap[key]
+	}
+	switch value := current.(type) {
+	case string:
+		return strings.TrimSpace(value)
+	case fmt.Stringer:
+		return strings.TrimSpace(value.String())
+	default:
+		return ""
+	}
+}
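The error-extraction helpers above try the common provider error shapes — `{"error":{"message":...}}`, `{"error":"..."}`, `{"message":"..."}` — before falling back to the raw body. A condensed, self-contained sketch of that traversal (same idea, slightly compacted; the example payloads are illustrative, not captured provider responses):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// nestedString walks a decoded JSON object along path and returns the
// trimmed string at the end, or "" if any step is missing or non-string.
func nestedString(values map[string]any, path ...string) string {
	if len(path) == 0 || values == nil {
		return ""
	}
	current := any(values)
	for _, key := range path {
		m, ok := current.(map[string]any)
		if !ok {
			return ""
		}
		current = m[key]
	}
	if s, ok := current.(string); ok {
		return strings.TrimSpace(s)
	}
	return ""
}

// extractProviderErrorMessage prefers structured error fields and falls
// back to the raw (trimmed) body when the payload is not JSON.
func extractProviderErrorMessage(body []byte) string {
	raw := strings.TrimSpace(string(body))
	if raw == "" {
		return "empty error response"
	}
	var parsed map[string]any
	if err := json.Unmarshal(body, &parsed); err == nil {
		for _, path := range [][]string{{"error", "message"}, {"error"}, {"message"}} {
			if v := nestedString(parsed, path...); v != "" {
				return v
			}
		}
	}
	return raw
}

func main() {
	fmt.Println(extractProviderErrorMessage([]byte(`{"error":{"message":"invalid api key"}}`))) // invalid api key
	fmt.Println(extractProviderErrorMessage([]byte(`plain text failure`)))                      // plain text failure
}
```

Ordering matters: checking `error.message` before `error` means a structured error object never degrades to its map representation, and the raw-body fallback still caps at 500 characters in the diff's `trimProviderErrorMessage`.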
@@ -40,9 +40,9 @@ func (g *ProviderAwareSuggestionGenerator) Generate(ctx context.Context, req Sug
 	if strings.TrimSpace(model) == "" {
 		return SuggestionResult{}, fmt.Errorf("no active model configured")
 	}
-	apiKey := apiKeyForProvider(provider, *settings)
+	apiKey := domain.LLMAPIKeyForProvider(provider, *settings)
 	if provider != domain.LLMProviderOllama && strings.TrimSpace(apiKey) == "" {
-		return SuggestionResult{}, fmt.Errorf("api key for provider %s is not configured", provider)
+		return SuggestionResult{}, fmt.Errorf("api key for provider %s is not configured in settings", provider)
 	}
 	targets := collectSuggestionTargets(req.Fields, req.Existing, req.IncludeFilled)
| @@ -59,21 +59,25 @@ func (g *ProviderAwareSuggestionGenerator) Generate(ctx context.Context, req Sug | |||||
| return SuggestionResult{}, err | return SuggestionResult{}, err | ||||
| } | } | ||||
| systemPrompt, userPrompt := buildProviderPrompts(req, targets) | systemPrompt, userPrompt := buildProviderPrompts(req, targets) | ||||
| temperature := domain.NormalizeLLMTemperature(settings.LLMTemperature) | |||||
| maxTokens := domain.NormalizeLLMMaxTokens(settings.LLMMaxTokens) | |||||
| raw, err := providerClient.Generate(ctx, llmruntime.Request{ | raw, err := providerClient.Generate(ctx, llmruntime.Request{ | ||||
| Provider: provider, | Provider: provider, | ||||
| Model: model, | Model: model, | ||||
| BaseURL: strings.TrimSpace(settings.LLMBaseURL), | BaseURL: strings.TrimSpace(settings.LLMBaseURL), | ||||
| APIKey: strings.TrimSpace(apiKey), | APIKey: strings.TrimSpace(apiKey), | ||||
| Temperature: &temperature, | |||||
| MaxTokens: &maxTokens, | |||||
| SystemPrompt: systemPrompt, | SystemPrompt: systemPrompt, | ||||
| UserPrompt: userPrompt, | UserPrompt: userPrompt, | ||||
| }) | }) | ||||
| if err != nil { | if err != nil { | ||||
| return SuggestionResult{}, err | |||||
| return SuggestionResult{}, fmt.Errorf("provider request failed (provider=%s model=%s): %w", provider, model, err) | |||||
| } | } | ||||
| parsed, err := parseProviderSuggestions(raw) | parsed, err := parseProviderSuggestions(raw) | ||||
| if err != nil { | if err != nil { | ||||
| return SuggestionResult{}, err | |||||
| return SuggestionResult{}, fmt.Errorf("provider returned invalid suggestions json (provider=%s model=%s): %w", provider, model, err) | |||||
| } | } | ||||
| out := SuggestionResult{ | out := SuggestionResult{ | ||||
| @@ -127,27 +131,71 @@ func parseProviderSuggestions(raw string) ([]providerSuggestion, error) { | |||||
| candidates = append(candidates, object) | candidates = append(candidates, object) | ||||
| } | } | ||||
| var firstErr error | |||||
| for _, candidate := range candidates { | for _, candidate := range candidates { | ||||
| items, ok := parseSuggestionsCandidate(candidate) | |||||
| if ok { | |||||
| items, err := parseSuggestionsCandidate(candidate) | |||||
| if err == nil { | |||||
| return items, nil | return items, nil | ||||
| } | } | ||||
| if firstErr == nil { | |||||
| firstErr = err | |||||
| } | |||||
| } | |||||
| if firstErr != nil { | |||||
| return nil, firstErr | |||||
| } | } | ||||
| return nil, fmt.Errorf("provider response is not valid suggestions json") | return nil, fmt.Errorf("provider response is not valid suggestions json") | ||||
| } | } | ||||
| func parseSuggestionsCandidate(raw string) ([]providerSuggestion, bool) { | |||||
| var objectPayload struct { | |||||
| Suggestions []providerSuggestion `json:"suggestions"` | |||||
| func parseSuggestionsCandidate(raw string) ([]providerSuggestion, error) { | |||||
| var root any | |||||
| if err := json.Unmarshal([]byte(raw), &root); err != nil { | |||||
| return nil, fmt.Errorf("provider response is not valid json: %w", err) | |||||
| } | |||||
| var itemsRaw []any | |||||
| switch value := root.(type) { | |||||
| case map[string]any: | |||||
| suggestions, ok := value["suggestions"] | |||||
| if !ok { | |||||
| return nil, fmt.Errorf("provider json object must contain \"suggestions\" array") | |||||
| } | |||||
| list, ok := suggestions.([]any) | |||||
| if !ok { | |||||
| return nil, fmt.Errorf("provider \"suggestions\" must be an array") | |||||
| } | |||||
| itemsRaw = list | |||||
| case []any: | |||||
| itemsRaw = value | |||||
| default: | |||||
| return nil, fmt.Errorf("provider json payload must be an object or array") | |||||
| } | } | ||||
| if err := json.Unmarshal([]byte(raw), &objectPayload); err == nil && len(objectPayload.Suggestions) > 0 { | |||||
| return objectPayload.Suggestions, true | |||||
| if len(itemsRaw) == 0 { | |||||
| return nil, fmt.Errorf("provider returned an empty suggestions array") | |||||
| } | } | ||||
| var listPayload []providerSuggestion | |||||
| if err := json.Unmarshal([]byte(raw), &listPayload); err == nil && len(listPayload) > 0 { | |||||
| return listPayload, true | |||||
| out := make([]providerSuggestion, 0, len(itemsRaw)) | |||||
| for idx, rawItem := range itemsRaw { | |||||
| itemMap, ok := rawItem.(map[string]any) | |||||
| if !ok { | |||||
| return nil, fmt.Errorf("suggestion #%d is not an object", idx+1) | |||||
| } | |||||
| fieldPath := strings.TrimSpace(anyToString(itemMap["fieldPath"])) | |||||
| if fieldPath == "" { | |||||
| return nil, fmt.Errorf("suggestion #%d has empty fieldPath", idx+1) | |||||
| } | |||||
| value := strings.TrimSpace(anyToString(itemMap["value"])) | |||||
| if value == "" { | |||||
| return nil, fmt.Errorf("suggestion #%d for fieldPath %q has empty value", idx+1, fieldPath) | |||||
| } | |||||
| out = append(out, providerSuggestion{ | |||||
| FieldPath: fieldPath, | |||||
| Slot: strings.TrimSpace(anyToString(itemMap["slot"])), | |||||
| Value: value, | |||||
| Reason: strings.TrimSpace(anyToString(itemMap["reason"])), | |||||
| }) | |||||
| } | } | ||||
| return nil, false | |||||
| return out, nil | |||||
| } | } | ||||
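The stricter `parseSuggestionsCandidate` above accepts either an object with a `suggestions` array or a bare array, and rejects empty payloads and empty values. A simplified standalone sketch of the same parsing rules (the `suggestion` struct is trimmed to two fields for brevity):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

type suggestion struct {
	FieldPath string
	Value     string
}

// parseSuggestions condenses the rules from the diff: object with
// "suggestions" array OR bare array; no empty arrays, no empty values.
func parseSuggestions(raw string) ([]suggestion, error) {
	var root any
	if err := json.Unmarshal([]byte(raw), &root); err != nil {
		return nil, fmt.Errorf("not valid json: %w", err)
	}
	var items []any
	switch v := root.(type) {
	case map[string]any:
		list, ok := v["suggestions"].([]any)
		if !ok {
			return nil, fmt.Errorf(`object must contain a "suggestions" array`)
		}
		items = list
	case []any:
		items = v
	default:
		return nil, fmt.Errorf("payload must be an object or array")
	}
	if len(items) == 0 {
		return nil, fmt.Errorf("empty suggestions array")
	}
	out := make([]suggestion, 0, len(items))
	for i, it := range items {
		m, ok := it.(map[string]any)
		if !ok {
			return nil, fmt.Errorf("suggestion #%d is not an object", i+1)
		}
		fieldPath, _ := m["fieldPath"].(string)
		value, _ := m["value"].(string)
		fieldPath, value = strings.TrimSpace(fieldPath), strings.TrimSpace(value)
		if fieldPath == "" || value == "" {
			return nil, fmt.Errorf("suggestion #%d is missing fieldPath or value", i+1)
		}
		out = append(out, suggestion{FieldPath: fieldPath, Value: value})
	}
	return out, nil
}

func main() {
	got, err := parseSuggestions(`{"suggestions":[{"fieldPath":"hero.title","value":"Provider Hero"}]}`)
	fmt.Println(got, err)
	_, err = parseSuggestions(`[{"fieldPath":"a","value":""}]`)
	fmt.Println(err) // empty values are rejected with a per-item error
}
```

Compared with the previous `(items, bool)` version, returning a concrete error per candidate is what lets the caller surface `firstErr` instead of the generic "not valid suggestions json" message.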
| func extractFencedJSON(value string) string { | func extractFencedJSON(value string) string { | ||||
| @@ -210,19 +258,20 @@ func buildProviderPrompts(req SuggestionRequest, targets []SemanticSlotTarget) ( | |||||
| return system, user | return system, user | ||||
| } | } | ||||
| func apiKeyForProvider(provider string, settings domain.AppSettings) string { | |||||
| switch provider { | |||||
| case domain.LLMProviderOpenAI: | |||||
| return strings.TrimSpace(settings.OpenAIAPIKeyEncrypted) | |||||
| case domain.LLMProviderAnthropic: | |||||
| return strings.TrimSpace(settings.AnthropicAPIKeyEncrypted) | |||||
| case domain.LLMProviderGoogle: | |||||
| return strings.TrimSpace(settings.GoogleAPIKeyEncrypted) | |||||
| case domain.LLMProviderXAI: | |||||
| return strings.TrimSpace(settings.XAIAPIKeyEncrypted) | |||||
| case domain.LLMProviderOllama: | |||||
| return strings.TrimSpace(settings.OllamaAPIKeyEncrypted) | |||||
| default: | |||||
| func anyToString(raw any) string { | |||||
| switch value := raw.(type) { | |||||
| case string: | |||||
| return value | |||||
| 	case float64: | |||||
| 		// JSON numbers decode as float64; %v keeps fractional values (0.5) that %.0f would round to "0" | |||||
| 		return fmt.Sprintf("%v", value) | |||||
| case bool: | |||||
| if value { | |||||
| return "true" | |||||
| } | |||||
| return "false" | |||||
| case nil: | |||||
| return "" | return "" | ||||
| default: | |||||
| return fmt.Sprintf("%v", value) | |||||
| } | } | ||||
| } | } | ||||
| @@ -29,9 +29,11 @@ func TestProviderAwareSuggestionGenerator_UsesActiveProviderModelAndKey(t *testi | |||||
| t.Parallel() | t.Parallel() | ||||
| var ( | var ( | ||||
| gotPath string | |||||
| gotAuth string | |||||
| gotModel string | |||||
| gotPath string | |||||
| gotAuth string | |||||
| gotModel string | |||||
| gotTemperature float64 | |||||
| gotMaxTokens float64 | |||||
| ) | ) | ||||
| server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { | server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { | ||||
| gotPath = r.URL.Path | gotPath = r.URL.Path | ||||
| @@ -39,6 +41,8 @@ func TestProviderAwareSuggestionGenerator_UsesActiveProviderModelAndKey(t *testi | |||||
| var payload map[string]any | var payload map[string]any | ||||
| _ = json.NewDecoder(r.Body).Decode(&payload) | _ = json.NewDecoder(r.Body).Decode(&payload) | ||||
| gotModel, _ = payload["model"].(string) | gotModel, _ = payload["model"].(string) | ||||
| gotTemperature, _ = payload["temperature"].(float64) | |||||
| gotMaxTokens, _ = payload["max_tokens"].(float64) | |||||
| _, _ = w.Write([]byte(`{"choices":[{"message":{"content":"{\"suggestions\":[{\"fieldPath\":\"text.textTitle_m1710_1\",\"value\":\"Provider Hero\",\"reason\":\"focused hero\"}]}"}}]}`)) | _, _ = w.Write([]byte(`{"choices":[{"message":{"content":"{\"suggestions\":[{\"fieldPath\":\"text.textTitle_m1710_1\",\"value\":\"Provider Hero\",\"reason\":\"focused hero\"}]}"}}]}`)) | ||||
| })) | })) | ||||
| defer server.Close() | defer server.Close() | ||||
| @@ -47,6 +51,8 @@ func TestProviderAwareSuggestionGenerator_UsesActiveProviderModelAndKey(t *testi | |||||
| LLMActiveProvider: domain.LLMProviderOpenAI, | LLMActiveProvider: domain.LLMProviderOpenAI, | ||||
| LLMActiveModel: "gpt-5.4", | LLMActiveModel: "gpt-5.4", | ||||
| LLMBaseURL: server.URL, | LLMBaseURL: server.URL, | ||||
| LLMTemperature: 0.65, | |||||
| LLMMaxTokens: 333, | |||||
| OpenAIAPIKeyEncrypted: "openai-key", | OpenAIAPIKeyEncrypted: "openai-key", | ||||
| }}, llmruntime.NewFactory(5*time.Second)) | }}, llmruntime.NewFactory(5*time.Second)) | ||||
| @@ -75,6 +81,12 @@ func TestProviderAwareSuggestionGenerator_UsesActiveProviderModelAndKey(t *testi | |||||
| if gotModel != "gpt-5.4" { | if gotModel != "gpt-5.4" { | ||||
| t.Fatalf("unexpected model: %q", gotModel) | t.Fatalf("unexpected model: %q", gotModel) | ||||
| } | } | ||||
| if gotTemperature != 0.65 { | |||||
| t.Fatalf("unexpected temperature: %v", gotTemperature) | |||||
| } | |||||
| if gotMaxTokens != 333 { | |||||
| t.Fatalf("unexpected max_tokens: %v", gotMaxTokens) | |||||
| } | |||||
| } | } | ||||
| func TestProviderAwareSuggestionGenerator_RequiresAPIKeyForNonOllama(t *testing.T) { | func TestProviderAwareSuggestionGenerator_RequiresAPIKeyForNonOllama(t *testing.T) { | ||||
| @@ -104,3 +116,12 @@ func TestParseProviderSuggestions_AcceptsFencedJSON(t *testing.T) { | |||||
| t.Fatalf("unexpected parsed result: %+v", items) | t.Fatalf("unexpected parsed result: %+v", items) | ||||
| } | } | ||||
| } | } | ||||
| func TestParseProviderSuggestions_RejectsEmptyValue(t *testing.T) { | |||||
| t.Parallel() | |||||
| _, err := parseProviderSuggestions(`{"suggestions":[{"fieldPath":"a","value":""}]}`) | |||||
| if err == nil || !strings.Contains(err.Error(), "empty value") { | |||||
| t.Fatalf("expected empty value error, got: %v", err) | |||||
| } | |||||
| } | |||||
| @@ -0,0 +1,5 @@ | |||||
| ALTER TABLE app_settings | |||||
| ADD COLUMN llm_temperature REAL NOT NULL DEFAULT 0.2; | |||||
| ALTER TABLE app_settings | |||||
| ADD COLUMN llm_max_tokens INTEGER NOT NULL DEFAULT 1200; | |||||
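The migration and store code rely on `domain.NormalizeLLMTemperature` and `domain.NormalizeLLMMaxTokens`, whose bodies are not part of this diff. A plausible clamping sketch, assuming the 0.0-2.0 and 64-8192 bounds advertised by the settings form and the migration defaults (0.2, 1200) — the real implementations may differ:

```go
package main

import "fmt"

// Hypothetical normalizers; the real domain.NormalizeLLM* functions are
// not shown in this diff. Bounds follow the settings form inputs
// (temperature 0.0-2.0, max tokens 64-8192); the zero-value fallback
// for max tokens follows the migration default of 1200.
func normalizeLLMTemperature(v float64) float64 {
	if v < 0 {
		return 0
	}
	if v > 2 {
		return 2
	}
	return v
}

func normalizeLLMMaxTokens(v int) int {
	if v <= 0 {
		return 1200 // unset/invalid falls back to the migration default
	}
	if v < 64 {
		return 64
	}
	if v > 8192 {
		return 8192
	}
	return v
}

func main() {
	fmt.Println(normalizeLLMTemperature(2.5), normalizeLLMMaxTokens(0)) // prints "2 1200"
}
```

Normalizing both on write (`UpsertSettings`) and on read (`GetSettings`) means rows written before this migration, or edited by hand, still surface in-range values to the suggestion generator.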
| @@ -415,10 +415,10 @@ func (s *Store) UpsertSettings(ctx context.Context, settings domain.AppSettings) | |||||
| _, err = s.db.ExecContext(ctx, ` | _, err = s.db.ExecContext(ctx, ` | ||||
| INSERT INTO app_settings ( | INSERT INTO app_settings ( | ||||
| id, qc_base_url, qc_bearer_token_encrypted, language_output_mode, job_poll_interval_seconds, job_poll_timeout_seconds, | id, qc_base_url, qc_bearer_token_encrypted, language_output_mode, job_poll_interval_seconds, job_poll_timeout_seconds, | ||||
| llm_active_provider, llm_active_model, llm_base_url, | |||||
| llm_active_provider, llm_active_model, llm_base_url, llm_temperature, llm_max_tokens, | |||||
| openai_api_key_encrypted, anthropic_api_key_encrypted, google_api_key_encrypted, xai_api_key_encrypted, ollama_api_key_encrypted, | openai_api_key_encrypted, anthropic_api_key_encrypted, google_api_key_encrypted, xai_api_key_encrypted, ollama_api_key_encrypted, | ||||
| master_prompt, prompt_blocks_json, updated_at | master_prompt, prompt_blocks_json, updated_at | ||||
| ) VALUES (1, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) | |||||
| ) VALUES (1, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) | |||||
| ON CONFLICT(id) DO UPDATE SET | ON CONFLICT(id) DO UPDATE SET | ||||
| qc_base_url = excluded.qc_base_url, | qc_base_url = excluded.qc_base_url, | ||||
| qc_bearer_token_encrypted = excluded.qc_bearer_token_encrypted, | qc_bearer_token_encrypted = excluded.qc_bearer_token_encrypted, | ||||
| @@ -428,6 +428,8 @@ func (s *Store) UpsertSettings(ctx context.Context, settings domain.AppSettings) | |||||
| llm_active_provider = excluded.llm_active_provider, | llm_active_provider = excluded.llm_active_provider, | ||||
| llm_active_model = excluded.llm_active_model, | llm_active_model = excluded.llm_active_model, | ||||
| llm_base_url = excluded.llm_base_url, | llm_base_url = excluded.llm_base_url, | ||||
| llm_temperature = excluded.llm_temperature, | |||||
| llm_max_tokens = excluded.llm_max_tokens, | |||||
| openai_api_key_encrypted = excluded.openai_api_key_encrypted, | openai_api_key_encrypted = excluded.openai_api_key_encrypted, | ||||
| anthropic_api_key_encrypted = excluded.anthropic_api_key_encrypted, | anthropic_api_key_encrypted = excluded.anthropic_api_key_encrypted, | ||||
| google_api_key_encrypted = excluded.google_api_key_encrypted, | google_api_key_encrypted = excluded.google_api_key_encrypted, | ||||
| @@ -444,6 +446,8 @@ func (s *Store) UpsertSettings(ctx context.Context, settings domain.AppSettings) | |||||
| provider, | provider, | ||||
| model, | model, | ||||
| strings.TrimSpace(settings.LLMBaseURL), | strings.TrimSpace(settings.LLMBaseURL), | ||||
| domain.NormalizeLLMTemperature(settings.LLMTemperature), | |||||
| domain.NormalizeLLMMaxTokens(settings.LLMMaxTokens), | |||||
| strings.TrimSpace(settings.OpenAIAPIKeyEncrypted), | strings.TrimSpace(settings.OpenAIAPIKeyEncrypted), | ||||
| strings.TrimSpace(settings.AnthropicAPIKeyEncrypted), | strings.TrimSpace(settings.AnthropicAPIKeyEncrypted), | ||||
| strings.TrimSpace(settings.GoogleAPIKeyEncrypted), | strings.TrimSpace(settings.GoogleAPIKeyEncrypted), | ||||
| @@ -459,7 +463,7 @@ func (s *Store) UpsertSettings(ctx context.Context, settings domain.AppSettings) | |||||
| func (s *Store) GetSettings(ctx context.Context) (*domain.AppSettings, error) { | func (s *Store) GetSettings(ctx context.Context) (*domain.AppSettings, error) { | ||||
| row := s.db.QueryRowContext(ctx, ` | row := s.db.QueryRowContext(ctx, ` | ||||
| SELECT qc_base_url, qc_bearer_token_encrypted, language_output_mode, job_poll_interval_seconds, job_poll_timeout_seconds, | SELECT qc_base_url, qc_bearer_token_encrypted, language_output_mode, job_poll_interval_seconds, job_poll_timeout_seconds, | ||||
| llm_active_provider, llm_active_model, llm_base_url, | |||||
| llm_active_provider, llm_active_model, llm_base_url, llm_temperature, llm_max_tokens, | |||||
| openai_api_key_encrypted, anthropic_api_key_encrypted, google_api_key_encrypted, xai_api_key_encrypted, ollama_api_key_encrypted, | openai_api_key_encrypted, anthropic_api_key_encrypted, google_api_key_encrypted, xai_api_key_encrypted, ollama_api_key_encrypted, | ||||
| master_prompt, prompt_blocks_json | master_prompt, prompt_blocks_json | ||||
| FROM app_settings | FROM app_settings | ||||
| @@ -475,6 +479,8 @@ func (s *Store) GetSettings(ctx context.Context) (*domain.AppSettings, error) { | |||||
| &settings.LLMActiveProvider, | &settings.LLMActiveProvider, | ||||
| &settings.LLMActiveModel, | &settings.LLMActiveModel, | ||||
| &settings.LLMBaseURL, | &settings.LLMBaseURL, | ||||
| &settings.LLMTemperature, | |||||
| &settings.LLMMaxTokens, | |||||
| &settings.OpenAIAPIKeyEncrypted, | &settings.OpenAIAPIKeyEncrypted, | ||||
| &settings.AnthropicAPIKeyEncrypted, | &settings.AnthropicAPIKeyEncrypted, | ||||
| &settings.GoogleAPIKeyEncrypted, | &settings.GoogleAPIKeyEncrypted, | ||||
| @@ -495,6 +501,8 @@ func (s *Store) GetSettings(ctx context.Context) (*domain.AppSettings, error) { | |||||
| settings.LLMActiveProvider = domain.NormalizeLLMProvider(settings.LLMActiveProvider) | settings.LLMActiveProvider = domain.NormalizeLLMProvider(settings.LLMActiveProvider) | ||||
| settings.LLMActiveModel = domain.NormalizeLLMModel(settings.LLMActiveProvider, settings.LLMActiveModel) | settings.LLMActiveModel = domain.NormalizeLLMModel(settings.LLMActiveProvider, settings.LLMActiveModel) | ||||
| settings.LLMBaseURL = strings.TrimSpace(settings.LLMBaseURL) | settings.LLMBaseURL = strings.TrimSpace(settings.LLMBaseURL) | ||||
| settings.LLMTemperature = domain.NormalizeLLMTemperature(settings.LLMTemperature) | |||||
| settings.LLMMaxTokens = domain.NormalizeLLMMaxTokens(settings.LLMMaxTokens) | |||||
| settings.OpenAIAPIKeyEncrypted = strings.TrimSpace(settings.OpenAIAPIKeyEncrypted) | settings.OpenAIAPIKeyEncrypted = strings.TrimSpace(settings.OpenAIAPIKeyEncrypted) | ||||
| settings.AnthropicAPIKeyEncrypted = strings.TrimSpace(settings.AnthropicAPIKeyEncrypted) | settings.AnthropicAPIKeyEncrypted = strings.TrimSpace(settings.AnthropicAPIKeyEncrypted) | ||||
| settings.GoogleAPIKeyEncrypted = strings.TrimSpace(settings.GoogleAPIKeyEncrypted) | settings.GoogleAPIKeyEncrypted = strings.TrimSpace(settings.GoogleAPIKeyEncrypted) | ||||
| @@ -21,7 +21,7 @@ | |||||
| </table> | </table> | ||||
| <h2>LLM Provider / Modell</h2> | <h2>LLM Provider / Modell</h2> | ||||
| <p><small>Phase-A-Grundlage: Provider, Modell, optionale Base URL (Ollama/kompatibel) und provider-spezifische API-Keys.</small></p> | |||||
| <p><small>Provider-/Modellwahl mit statischem provider-aware Katalog (spaeter erweiterbar um dynamisches Refresh), Runtime-Tuning und provider-spezifischen Keys.</small></p> | |||||
| <form method="post" action="/settings/llm"> | <form method="post" action="/settings/llm"> | ||||
| <div> | <div> | ||||
| <label>Provider | <label>Provider | ||||
| @@ -34,7 +34,7 @@ | |||||
| </div> | </div> | ||||
| <div> | <div> | ||||
| <label>Model | <label>Model | ||||
| <select name="llm_model"> | |||||
| <select id="llm-model" name="llm_model" data-selected="{{.LLMActiveModel}}"> | |||||
| {{range .LLMModelOptions}} | {{range .LLMModelOptions}} | ||||
| <option value="{{.Value}}" {{if eq $.LLMActiveModel .Value}}selected{{end}}>{{.Label}}</option> | <option value="{{.Value}}" {{if eq $.LLMActiveModel .Value}}selected{{end}}>{{.Label}}</option> | ||||
| {{end}} | {{end}} | ||||
| @@ -46,6 +46,16 @@ | |||||
| <input type="url" name="llm_base_url" placeholder="http://localhost:11434/v1" value="{{.LLMBaseURL}}"> | <input type="url" name="llm_base_url" placeholder="http://localhost:11434/v1" value="{{.LLMBaseURL}}"> | ||||
| </label> | </label> | ||||
| </div> | </div> | ||||
| <div> | |||||
| <label>Temperature (0.0 - 2.0) | |||||
| <input type="number" name="llm_temperature" min="0" max="2" step="0.01" value="{{printf "%.2f" .LLMTemperature}}"> | |||||
| </label> | |||||
| </div> | |||||
| <div> | |||||
| <label>Max Tokens (64 - 8192) | |||||
| <input type="number" name="llm_max_tokens" min="64" max="8192" step="1" value="{{.LLMMaxTokens}}"> | |||||
| </label> | |||||
| </div> | |||||
| <div> | <div> | ||||
| <label>OpenAI API Key ({{if .OpenAIKeyConfigured}}configured{{else}}not configured{{end}}) | <label>OpenAI API Key ({{if .OpenAIKeyConfigured}}configured{{else}}not configured{{end}}) | ||||
| <input type="password" name="llm_api_key_openai" placeholder="leer lassen = unveraendert"> | <input type="password" name="llm_api_key_openai" placeholder="leer lassen = unveraendert"> | ||||
| @@ -71,6 +81,7 @@ | |||||
| <input type="password" name="llm_api_key_ollama" placeholder="leer lassen = unveraendert"> | <input type="password" name="llm_api_key_ollama" placeholder="leer lassen = unveraendert"> | ||||
| </label> | </label> | ||||
| </div> | </div> | ||||
| <button type="submit" formaction="/settings/llm/validate">Validate provider config</button> | |||||
| <button type="submit">LLM-Settings speichern</button> | <button type="submit">LLM-Settings speichern</button> | ||||
| </form> | </form> | ||||
| @@ -100,12 +111,44 @@ | |||||
| <script> | <script> | ||||
| (function () { | (function () { | ||||
| var provider = document.getElementById('llm-provider'); | var provider = document.getElementById('llm-provider'); | ||||
| var model = document.getElementById('llm-model'); | |||||
| var baseUrlWrap = document.getElementById('llm-base-url-wrap'); | var baseUrlWrap = document.getElementById('llm-base-url-wrap'); | ||||
| if (!provider || !baseUrlWrap) return; | |||||
| if (!provider || !baseUrlWrap || !model) return; | |||||
| var modelCatalog = { | |||||
| {{range $provider := .LLMProviderOptions}} | |||||
| "{{$provider.Value}}": [ | |||||
| {{range $idx, $model := $provider.Models}}{{if $idx}},{{end}}{"value":"{{$model.Value}}","label":"{{$model.Label}}"}{{end}} | |||||
| ], | |||||
| {{end}} | |||||
| }; | |||||
| var selectedByProvider = {}; | |||||
| selectedByProvider[provider.value] = model.dataset.selected || model.value; | |||||
| var syncModelOptions = function () { | |||||
| var providerValue = provider.value; | |||||
| var options = modelCatalog[providerValue] || []; | |||||
| var preferred = selectedByProvider[providerValue] || model.value; | |||||
| model.innerHTML = ""; | |||||
| options.forEach(function (entry, idx) { | |||||
| var option = document.createElement('option'); | |||||
| option.value = entry.value; | |||||
| option.textContent = entry.label; | |||||
| if (entry.value === preferred || (!preferred && idx === 0)) { | |||||
| option.selected = true; | |||||
| } | |||||
| model.appendChild(option); | |||||
| }); | |||||
| }; | |||||
| var syncBaseURLVisibility = function () { | var syncBaseURLVisibility = function () { | ||||
| baseUrlWrap.style.display = provider.value === 'ollama' ? '' : 'none'; | baseUrlWrap.style.display = provider.value === 'ollama' ? '' : 'none'; | ||||
| }; | }; | ||||
| provider.addEventListener('change', syncBaseURLVisibility); | |||||
| provider.addEventListener('change', function () { | |||||
| syncModelOptions(); | |||||
| syncBaseURLVisibility(); | |||||
| }); | |||||
| model.addEventListener('change', function () { | |||||
| selectedByProvider[provider.value] = model.value; | |||||
| }); | |||||
| syncModelOptions(); | |||||
| syncBaseURLVisibility(); | syncBaseURLVisibility(); | ||||
| })(); | })(); | ||||
| </script> | </script> | ||||