AI Impact Score Generation Process
This document details how the Impact Score configuration for verticals is generated using Generative AI.
Overview
The Impact Score is a relative, LCA-inspired score used to compare products within a specific vertical (category). The configuration for this score (weights, criteria, rationale) is generated by an AI model to ensure a "literature-first" and evidence-based approach.
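To make "relative, within-vertical" concrete, a weighted score can be combined from per-criterion scores like this. This is a minimal sketch only; the method name, criteria names, and 0–100 scale are assumptions for illustration, not the actual open4goods implementation:

```java
import java.util.Map;

// Illustrative sketch: combining normalized per-criterion scores into a
// single weighted impact score. Criterion names and scale are assumptions.
public class ImpactScoreSketch {

    // Weighted average of per-criterion scores (each assumed in [0, 100]).
    // Criteria missing for a given product are skipped and their weight
    // is excluded from the denominator.
    static double weightedScore(Map<String, Double> criterionScores,
                                Map<String, Double> weights) {
        double total = 0;
        double weightSum = 0;
        for (var e : weights.entrySet()) {
            Double score = criterionScores.get(e.getKey());
            if (score == null) continue; // criterion not available for this product
            total += score * e.getValue();
            weightSum += e.getValue();
        }
        return weightSum == 0 ? 0 : total / weightSum;
    }

    public static void main(String[] args) {
        Map<String, Double> weights =
                Map.of("POWER_CONSUMPTION", 0.6, "REPAIRABILITY_INDEX", 0.4);
        Map<String, Double> tv =
                Map.of("POWER_CONSUMPTION", 80.0, "REPAIRABILITY_INDEX", 50.0);
        System.out.println(weightedScore(tv, weights)); // prints 68.0
    }
}
```

Because the score is a weighted average within one vertical's criteria, it only supports comparisons between products of that vertical, which is why the gap analysis field discussed below exists.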
Components
1. Prompt Configuration (impactscore-generation.yml)
The generation process is driven by the `impactscore-generation` prompt template located in `open4goods-config/prompts/`.
It uses a structured prompt to guide the AI acting as an LCA expert.
Key Variables:
- `[[${VERTICAL_NAME}]]`: The localized name of the vertical (e.g., "Téléviseurs").
- `[[${AVAILABLE_CRITERIAS}]]`: A list of available criteria (attributes and scores) that can be used for weighting, along with their descriptions.
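A hypothetical fragment can illustrate how these placeholders sit inside the template. This sketch only shows the Thymeleaf placeholder syntax; the actual wording and structure of `impactscore-generation.yml` are not reproduced here:

```yaml
# Hypothetical fragment -- illustrates the [[${...}]] placeholder syntax only.
systemPrompt: |
  You are an LCA expert. Produce criteria weights for the vertical
  "[[${VERTICAL_NAME}]]" using a literature-first, evidence-based approach.
userPrompt: |
  Available criteria (attributes and scores) for weighting:
  [[${AVAILABLE_CRITERIAS}]]
```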
2. Data Model (ImpactScoreAiResult)
The AI response is strictly typed to the `org.open4goods.model.ai.ImpactScoreAiResult` class. This ensures:
- Consistency: The output always follows a defined schema.
- Auditability: Every field is annotated with `@AiGeneratedField`, providing specific instructions to the AI on how to populate it.
- Traceability: The result includes reasoning (`rationale`), sources, and search logs.
Key Fields:
- `criteria_weights`: The core output, assigning weights to criteria.
- `weighting_method`: Explains the approach (e.g., "literature-first").
- `gap_analysis_absolute_vs_relative`: Documents the limitations of the relative score.
- `sources`: Exhaustive list of sources used (standards, academic papers, regulations).
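The shape of this model can be sketched as follows. The real class is `org.open4goods.model.ai.ImpactScoreAiResult` and the real annotation lives in the open4goods codebase; the annotation definition, field types, and instruction texts below are assumptions made to keep the sketch self-contained:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.List;
import java.util.Map;

// Self-contained sketch of the AI result shape. Field names follow the
// document; types and the annotation definition are assumptions.
public class ImpactScoreAiResultSketch {

    // Stand-in for the real @AiGeneratedField annotation: each field carries
    // its own population instruction, which is appended to the system prompt.
    @Retention(RetentionPolicy.RUNTIME)
    @interface AiGeneratedField { String instruction(); }

    @AiGeneratedField(instruction = "Assign a weight to each criterion, justified by literature.")
    Map<String, Double> criteriaWeights;

    @AiGeneratedField(instruction = "Explain the weighting approach, e.g. literature-first.")
    String weightingMethod;

    @AiGeneratedField(instruction = "Document the limits of a relative, within-vertical score.")
    String gapAnalysisAbsoluteVsRelative;

    @AiGeneratedField(instruction = "List every standard, paper or regulation relied upon.")
    List<String> sources;
}
```

Keeping the instruction next to the field it governs is what makes the output auditable: the schema and the generation guidance cannot drift apart.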
3. Service Layer (VerticalsGenerationService)
The `VerticalsGenerationService` orchestrates the process:
- Context Preparation: It gathers the vertical name and available criteria from the configured `VerticalConfig` and `ProductRepository`.
- Prompt Execution: It calls `PromptService.objectPrompt` with the context.
  - `PromptService` evaluates the Thymeleaf template, injecting the variables.
  - It appends field-specific instructions (from `@AiGeneratedField`) to the system prompt.
  - It enforces JSON schema compliance.
- Result Processing:
  - The structured result is saved into `ImpactScoreConfig`.
  - `criteriasPonderation` is derived from `criteria_weights` for backward compatibility.
  - The raw prompt and JSON response are stored for audit purposes.
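The backward-compatibility step can be sketched as a small transform. The field names `criteria_weights` and `criteriasPonderation` come from the document; the normalization shown here is an assumption about how the derivation might work, not the actual `VerticalsGenerationService` code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the result-processing step: deriving the legacy ponderation map
// from the AI's criteria_weights. Normalization to a sum of 1 is an assumption.
public class ResultProcessingSketch {

    static Map<String, Double> toCriteriasPonderation(Map<String, Double> criteriaWeights) {
        double sum = criteriaWeights.values().stream()
                .mapToDouble(Double::doubleValue).sum();
        Map<String, Double> ponderation = new LinkedHashMap<>();
        for (var e : criteriaWeights.entrySet()) {
            // Guard against an all-zero weight map from the model.
            ponderation.put(e.getKey(), sum == 0 ? 0 : e.getValue() / sum);
        }
        return ponderation;
    }
}
```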
Variable Injection & Safety
The variables in the prompt (`[[${...}]]`) are evaluated using `EvaluationService`.
- Strict Evaluation: The service uses `StrictSpringStandardDialect`. If a variable is missing or unresolvable in the context, a `TemplateEvaluationException` is thrown, aborting the process. This prevents "silent failures" where placeholders remain in the text.
- Context Integrity: The `context` map passed to the prompt is explicitly built with `VERTICAL_NAME` and `AVAILABLE_CRITERIAS`.
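The fail-fast behaviour described above can be sketched without Thymeleaf, using plain string substitution. The exception name mirrors `TemplateEvaluationException` from the document, but this is not the real `EvaluationService`; it only demonstrates the "throw on unresolved variable" contract:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of strict template evaluation: every [[${NAME}]] placeholder must
// resolve from the context, otherwise evaluation aborts with an exception
// instead of leaving the raw placeholder in the generated prompt.
public class StrictEvaluationSketch {

    static class TemplateEvaluationException extends RuntimeException {
        TemplateEvaluationException(String msg) { super(msg); }
    }

    static String evaluate(String template, Map<String, String> context) {
        Pattern placeholder = Pattern.compile("\\[\\[\\$\\{(\\w+)}]]");
        Matcher m = placeholder.matcher(template);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String value = context.get(m.group(1));
            if (value == null) {
                throw new TemplateEvaluationException("Unresolved variable: " + m.group(1));
            }
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

The design choice matters: a lenient evaluator would emit a prompt containing the literal text `[[${VERTICAL_NAME}]]`, and the AI would happily generate weights for a nonexistent vertical.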
Audit & Traceability
The generation process emphasizes transparency:
- AI Audit Log: The full AI response, including the "reasoning" behind weights, is stored.
- Frontend Display: The `ecoscore.vue` page (and the API) exposes this audit data, allowing users (and developers) to see why a certain weight was chosen and what sources back it up.