Abstract:
Problem & Motivation. The generative-AI boom is driving rapid deployment of diverse LLM software agents. Emerging standards such as the Model Context Protocol (MCP) and the Agent-to-Agent (A2A) protocol let agents share data and tasks, yet organizations still lack a rigorous way to keep those agents, and the legacy algorithms alongside them, aligned with organizational targets and values.
Objectives of the Solution. We aim to deliver a software reference architecture that (i) gives every stakeholder natural-language interaction with software agents and AI algorithmic logic across planning horizons, (ii) provides a multidimensional means of aligning stakeholder targets and values with algorithms and agents, (iii) offers an exemplar for jointly modelling AI algorithms, software agents, and LLMs, (iv) supports stakeholder interaction and alignment across time scales, (v) scales to thousands of algorithms and agents while remaining auditable, and (vi) remains framework-agnostic, allowing any underlying LLM, agent library, or orchestration stack to be used without redesign.
Design & Development. Guided by the Design Science Research Methodology (DSRM), we engineered HADA (Human-Algorithm Decision Alignment), a protocol-agnostic, multi-agent architecture that layers role-specific interaction agents over both large language models and legacy decision algorithms. Our reference implementation containerises a production credit-scoring model, getLoanDecision, and exposes it through stakeholder agents (business manager, data scientist, auditor, ethics lead and customer), enabling each role to steer, audit and contest every decision via natural-language dialogue. The resulting constructs, design principles and justificatory knowledge are synthesised into a mid-range design theory that generalises beyond the banking pilot.
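To make the layering idea concrete, here is a minimal Python sketch of a role-specific stakeholder agent wrapping a decision algorithm; the StakeholderAgent class, the get_loan_decision stand-in and the audit-log shape are illustrative assumptions, not HADA's actual interfaces.

```python
# Illustrative sketch only: a role-specific stakeholder agent layered
# over a legacy decision algorithm. StakeholderAgent, get_loan_decision
# and the audit-log shape are assumptions, not HADA's published API.
from dataclasses import dataclass, field
from typing import Callable


def get_loan_decision(income: float, zip_code: str) -> dict:
    """Stand-in for the containerised credit-scoring model."""
    approved = income >= 40_000  # placeholder rule, not the real model
    return {"approved": approved,
            "inputs": {"income": income, "zip_code": zip_code}}


@dataclass
class StakeholderAgent:
    """Lets one role steer, audit and contest an algorithm's decisions."""
    role: str                       # e.g. "auditor", "business manager"
    algorithm: Callable[..., dict]
    audit_log: list = field(default_factory=list)

    def ask(self, utterance: str, **inputs) -> dict:
        # A full implementation would have an LLM parse the utterance
        # into a structured intent; here it is only logged for lineage.
        decision = self.algorithm(**inputs)
        self.audit_log.append({"role": self.role,
                               "utterance": utterance,
                               "decision": decision})
        return decision


auditor = StakeholderAgent(role="auditor", algorithm=get_loan_decision)
auditor.ask("Why was applicant 17 declined?", income=35_000, zip_code="02139")
print(auditor.audit_log[-1])  # reproducible decision lineage
```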
Demonstration. HADA is instantiated on a cloud-native stack (Docker, Kubernetes and Python) and embedded in a retail-bank sandbox. Five scripted scenarios show how business targets, algorithmic parameters, decision explanations and ethics triggers propagate end-to-end through the HADA architecture.
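One scripted scenario can be condensed to a few lines: a business-manager agent tightens an approval target and the same applicant is re-scored under the new policy, so the change propagates straight through to the algorithm. The CreditPolicy class and its min_income parameter below are hypothetical, chosen only to show the propagation pattern.

```python
# Illustrative scenario sketch; CreditPolicy and min_income are
# hypothetical names, not HADA's actual configuration surface.
class CreditPolicy:
    """Mutable parameters shared by stakeholder agents and the algorithm."""
    def __init__(self, min_income: float = 40_000):
        self.min_income = min_income


policy = CreditPolicy()


def get_loan_decision(income: float) -> bool:
    return income >= policy.min_income  # placeholder decision rule


# "Raise the income floor to 50k," says the business-manager agent.
assert get_loan_decision(45_000) is True   # approved under the old target
policy.min_income = 50_000
assert get_loan_decision(45_000) is False  # the change propagated end-to-end
```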
Evaluation. Walkthrough observation and log inspection were used to gauge HADA against six predefined objectives. A stakeholder-objective coverage matrix showed 100% fulfilment: every role could invoke conversational control, trace KPIs and values, detect and correct bias (the ZIP-code case), and reproduce decision lineage, without dependence on a particular agent hierarchy or LLM provider.
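The ZIP-code case reduces to a counterfactual probe: score two applicants identical in everything except ZIP code and raise an ethics trigger if the decision flips. The sketch below assumes a hypothetical score function with a deliberately planted bias so that the probe has something to find.

```python
# Counterfactual bias probe; score() is a hypothetical, deliberately
# biased stand-in for the deployed model, used only for illustration.
def score(income: float, zip_code: str) -> bool:
    return income > 40_000 and zip_code != "02139"  # planted ZIP penalty


def zip_code_flips_decision(income: float, zip_a: str, zip_b: str) -> bool:
    """True when ZIP code alone changes the outcome (a proxy-bias signal)."""
    return score(income, zip_a) != score(income, zip_b)


print(zip_code_flips_decision(50_000, "02139", "94105"))  # True: trigger fires
```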
Contributions. The research delivers (i) an open-source HADA reference architecture, (ii) an evaluated mid-range design theory for human–AI alignment
in multi-agent settings, and (iii) empirical evidence that framework-agnostic, protocol-compliant stakeholder agents can simultaneously enhance accuracy,
transparency and ethical compliance in real-world decision pipelines.