
AI Impact Score Generation Process

This document details how the Impact Score configuration for verticals is generated using Generative AI.

Overview

The Impact Score is a relative, LCA-inspired score used to compare products within a specific vertical (category). The configuration for this score (weights, criteria, rationale) is generated by an AI model to ensure a "literature-first" and evidence-based approach.
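Conceptually, such a score is a weighted aggregation of per-criterion scores. The sketch below is a hypothetical illustration of that idea, not the actual open4goods scoring code; the class name, normalization to 0..1, and criterion names are all assumptions.

```java
import java.util.Map;

// Hypothetical sketch: an Impact Score as a weighted sum of normalized
// per-criterion scores (0..1), with AI-generated weights summing to 1.
// Names and normalization are assumptions, not the real open4goods code.
public class ImpactScoreSketch {

    // criterionScores: normalized scores per criterion within the vertical (0..1)
    // weights: criteria weights, expected to sum to 1
    public static double impactScore(Map<String, Double> criterionScores,
                                     Map<String, Double> weights) {
        double score = 0.0;
        for (Map.Entry<String, Double> w : weights.entrySet()) {
            score += w.getValue() * criterionScores.getOrDefault(w.getKey(), 0.0);
        }
        return score;
    }

    public static void main(String[] args) {
        Map<String, Double> scores = Map.of("ENERGY", 0.8, "REPAIRABILITY", 0.5);
        Map<String, Double> weights = Map.of("ENERGY", 0.6, "REPAIRABILITY", 0.4);
        // 0.6 * 0.8 + 0.4 * 0.5 ≈ 0.68
        System.out.println(impactScore(scores, weights));
    }
}
```

Because the score is relative, it only orders products within one vertical; the weights themselves come from the AI generation process described below.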

Components

1. Prompt Configuration (impactscore-generation.yml)

The generation process is driven by the impactscore-generation prompt template located in open4goods-config/prompts/. It uses a structured prompt to guide the AI acting as an LCA expert.

Key Variables:

  • [[${VERTICAL_NAME}]]: The localized name of the vertical (e.g., "Téléviseurs").
  • [[${AVAILABLE_CRITERIAS}]]: A list of available criteria (attributes and scores) that can be used for weighting, along with their descriptions.
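A template using these variables might look like the following sketch. The keys and wording here are illustrative assumptions; the real impactscore-generation.yml in open4goods-config/prompts/ may be structured differently.

```yaml
# Hypothetical sketch of impactscore-generation.yml — only the two
# documented Thymeleaf variables are taken from the source document.
systemPrompt: |
  You are an LCA (Life Cycle Assessment) expert. Use a literature-first,
  evidence-based approach to weight the criteria below.
userPrompt: |
  Vertical: [[${VERTICAL_NAME}]]
  Available criteria:
  [[${AVAILABLE_CRITERIAS}]]
```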

2. Data Model (ImpactScoreAiResult)

The AI response is strictly typed to the org.open4goods.model.ai.ImpactScoreAiResult class. This ensures:

  • Consistency: The output always follows a defined schema.
  • Auditability: Every field is annotated with @AiGeneratedField, providing specific instructions to the AI on how to populate it.
  • Traceability: The result includes reasoning (rationale), sources, and search logs.

Key Fields:

  • criteria_weights: The core output, assigning weights to criteria.
  • weighting_method: Explains the approach (e.g., "literature-first").
  • gap_analysis_absolute_vs_relative: Documents the limitations of the relative score.
  • sources: Exhaustive list of sources used (standards, academic papers, regulations).
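The shape of this data model can be sketched as below. The annotation definition, its attributes, and the field types are assumptions for illustration; the real class lives in org.open4goods.model.ai and may differ.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the ImpactScoreAiResult data model.
public class ImpactScoreAiResultSketch {

    // Stand-in for the real @AiGeneratedField annotation: its text becomes
    // a field-specific instruction appended to the system prompt.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    public @interface AiGeneratedField {
        String value();
    }

    @AiGeneratedField("Assign a weight to each criterion; weights must sum to 1.")
    public Map<String, Double> criteria_weights;

    @AiGeneratedField("Explain the weighting approach, e.g. literature-first.")
    public String weighting_method;

    @AiGeneratedField("Document the limitations of a relative (within-vertical) score.")
    public String gap_analysis_absolute_vs_relative;

    @AiGeneratedField("List every source used: standards, papers, regulations.")
    public List<String> sources;

    // Helper used for illustration: does a field carry an AI instruction?
    public static boolean instructed(String fieldName) {
        try {
            return ImpactScoreAiResultSketch.class.getField(fieldName)
                    .isAnnotationPresent(AiGeneratedField.class);
        } catch (NoSuchFieldException e) {
            return false;
        }
    }
}
```

Keeping the instructions on the fields themselves is what makes every field auditable: the prompt sent to the model can be reconstructed from the class.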

3. Service Layer (VerticalsGenerationService)

The VerticalsGenerationService orchestrates the process:

  1. Context Preparation: It gathers the vertical name and available criteria from the configured VerticalConfig and ProductRepository.
  2. Prompt Execution: It calls PromptService.objectPrompt with the context.
    • PromptService evaluates the Thymeleaf template, injecting the variables.
    • It appends field-specific instructions (from @AiGeneratedField) to the system prompt.
    • It enforces JSON schema compliance.
  3. Result Processing:
    • The structured result is saved into ImpactScoreConfig.
    • criteriasPonderation is derived from criteria_weights for backward compatibility.
    • The raw prompt and JSON response are stored for audit purposes.
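The three steps above can be sketched as follows. The PromptService stub, the result record, and the config fields are simplified assumptions; the real VerticalsGenerationService wires actual Spring beans and repositories.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the VerticalsGenerationService flow.
public class GenerationFlowSketch {

    // Simplified stand-in for the AI result (the real type is ImpactScoreAiResult)
    record AiResult(Map<String, Double> criteriaWeights, String rawJson) {}

    // Simplified stand-in for the real PromptService
    interface PromptService {
        AiResult objectPrompt(String templateId, Map<String, Object> context);
    }

    static class ImpactScoreConfig {
        Map<String, Double> criteriasPonderation; // derived for backward compatibility
        String auditJson;                          // raw response kept for audit
    }

    static ImpactScoreConfig generate(PromptService prompts,
                                      String verticalName,
                                      List<String> availableCriterias) {
        // 1. Context preparation
        Map<String, Object> context = Map.of(
                "VERTICAL_NAME", verticalName,
                "AVAILABLE_CRITERIAS", availableCriterias);

        // 2. Prompt execution (template evaluation and JSON-schema
        //    enforcement happen inside PromptService in the real service)
        AiResult result = prompts.objectPrompt("impactscore-generation", context);

        // 3. Result processing
        ImpactScoreConfig config = new ImpactScoreConfig();
        config.criteriasPonderation = result.criteriaWeights();
        config.auditJson = result.rawJson();
        return config;
    }

    public static void main(String[] args) {
        PromptService fake = (id, ctx) ->
                new AiResult(Map.of("ENERGY", 0.6, "REPAIRABILITY", 0.4), "{...}");
        ImpactScoreConfig cfg = generate(fake, "Téléviseurs",
                List.of("ENERGY", "REPAIRABILITY"));
        System.out.println(cfg.criteriasPonderation);
    }
}
```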

Variable Injection & Safety

The variables in the prompt ([[${...}]]) are evaluated using EvaluationService.

  • Strict Evaluation: The service uses StrictSpringStandardDialect. If a variable is missing or unresolvable in the context, a TemplateEvaluationException is thrown, aborting the process. This prevents "silent failures" where placeholders remain in the text.
  • Context Integrity: The context map passed to the prompt is explicitly built with VERTICAL_NAME and AVAILABLE_CRITERIAS.
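The fail-fast behaviour can be illustrated with a minimal stand-in. The real EvaluationService delegates to Thymeleaf's StrictSpringStandardDialect; this regex-based sketch only demonstrates the contract (resolve every placeholder or abort), and its names and exception type are assumptions.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Conceptual sketch of "strict" variable injection: every [[${...}]]
// placeholder must resolve from the context, otherwise we fail loudly
// instead of leaving the placeholder in the generated prompt.
public class StrictEvalSketch {

    private static final Pattern VAR = Pattern.compile("\\[\\[\\$\\{(\\w+)\\}\\]\\]");

    public static String evaluate(String template, Map<String, String> context) {
        Matcher m = VAR.matcher(template);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String name = m.group(1);
            String value = context.get(name);
            if (value == null) {
                // Abort the generation: no silent failures
                throw new IllegalStateException("Unresolved variable: " + name);
            }
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String template = "Vertical: [[${VERTICAL_NAME}]]";
        System.out.println(evaluate(template, Map.of("VERTICAL_NAME", "Téléviseurs")));
    }
}
```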

Audit & Traceability

The generation process emphasizes transparency:

  • AI Audit Log: The full AI response, including the "reasoning" behind weights, is stored.
  • Frontend Display: The ecoscore.vue page (and the API) exposes this audit data, allowing users (and developers) to see why a certain weight was chosen and what sources back it up.