
Building trust in AI for digital twins

The PROBONO project seeks to transform traditional neighbourhoods into green, sustainable environments using innovations such as digital twins for urban planning and decision-making. Deploying these technologies is challenging, however, particularly when it comes to gaining the trust of stakeholders such as inhabitants, architects, and contractors.
This challenge is compounded by the use of AI, which raises concerns over data privacy, data quality, and system transparency. The 'black box' nature of many AI models makes it difficult to understand how decisions are reached, complicating accountability and error correction.
To address this, the field of explainability focuses on making AI more transparent. Methods such as LIME (Local Interpretable Model-agnostic Explanations), which fits a simple surrogate model around an individual prediction to explain it, and SHAP (SHapley Additive exPlanations), which quantifies how much each input feature contributes to an output, help build trust by offering clearer insight into AI decision-making.
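The core idea behind LIME can be sketched in a few lines without the library itself: perturb the instance being explained, query the black-box model on the perturbed points, and fit a distance-weighted linear model whose coefficients serve as local feature importances. This is a minimal illustration, not the PROBONO implementation; the synthetic data, the function name `lime_style_explanation`, and the kernel settings are all assumptions for the sketch.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Hypothetical black-box classifier trained on synthetic data,
# standing in for an opaque model inside a digital twin.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def lime_style_explanation(model, x, n_samples=1000, kernel_width=0.75):
    """LIME-style sketch: fit a weighted linear surrogate around instance x."""
    rng = np.random.default_rng(0)
    # Sample the neighbourhood of x with Gaussian perturbations.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.shape[0]))
    # Query the black box for class-1 probabilities at the perturbed points.
    preds = model.predict_proba(Z)[:, 1]
    # Weight samples by proximity to x (exponential kernel on distance),
    # so the surrogate is faithful locally rather than globally.
    dists = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dists ** 2) / (kernel_width ** 2))
    # The surrogate's coefficients approximate local feature importance.
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_

coefs = lime_style_explanation(black_box, X[0])
print(coefs)  # one local importance score per input feature
```

SHAP pursues the same goal by a different route, attributing the prediction across features using Shapley values from cooperative game theory; in practice both are typically used via their published libraries (`lime`, `shap`) rather than hand-rolled as above.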