We are often asked how we select LLM providers, which criteria guide our decisions, and whether, beyond cost, runtime, and other technical factors, there are additional criteria. We group these considerations under the term trust.
What this is about:
This discussion is less about the end user or private customer and more about how an LLM behaves within an integrated system, for example in an agentic AI solution such as a file-handling assistant.
Admittedly, much depends on what is needed: the use case determines whether an LLM should be more talkative or, conversely, rather laconic. This must be factored into the evaluation as a criterion: talkativeness can be beneficial in a generative AI context, but less so for a document assistant in the insurance sector.
Up front:
Trust in AI is not a technical problem. It is an issue of intentions, governance, and the cultural infrastructure from which a model emerges.
The outcome:
The result is a classification of the LLMs currently available on the market, categorized by trust level: high, medium, or low.
Here, beyond cost and IT-related metrics, we evaluate whether and to what extent we trust the outputs, i.e., we assess them qualitatively. On the trust criterion, we tend to exclude LLMs with lower trust levels if they show strong ideological bias. For our use cases, we prefer LLMs that act pragmatically and whose underlying intentions we can understand.
The Trust Report on current LLMs can be requested directly from sol4data at info@sol4data.com. Please book an appointment with an AI architect for this purpose.
Summary:
Models differ less in their parameters than in the value system of the organization that builds them; their generated responses reflect this.
We provide guidance on selecting LLMs, even when the considerations go far beyond token pricing.