An Ambiguity-Bearing Output (ABO) is an AI-generated output that passes all local validity checks but carries enough semantic latitude to trigger unintended, misaligned, or irreversible interpretations when it is consumed by downstream systems.
The concept was introduced by Myriam Ayada in 2026 to describe a failure mode specific to enterprise environments where multiple AI and non-AI systems exchange outputs across automated pipelines.
According to Ayada (2026), an output y is an Ambiguity-Bearing Output if:
1. Local validity: y belongs to the set of locally valid outputs.
2. Non-zero semantic latitude: y deviates from the nominal output by δ ≠ 0.
3. Downstream divergence: At least one downstream system produces a different decision than it would have produced for the nominal output.
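The three conditions can be expressed as a single predicate. Below is a minimal sketch with toy stand-ins for the local validity check, the latitude measure δ, and the downstream decision function; the names and the keyword-based router are illustrative assumptions, not definitions from Ayada (2026):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AboCheck:
    locally_valid: Callable[[str], bool]          # condition 1: local validity
    semantic_delta: Callable[[str, str], float]   # condition 2: latitude vs. nominal output
    downstream_decide: Callable[[str], str]       # condition 3: downstream decision

    def is_abo(self, y: str, y_nominal: str) -> bool:
        # y is an ABO iff it is locally valid, carries non-zero latitude,
        # and at least one downstream decision diverges from the nominal one.
        return (
            self.locally_valid(y)
            and self.semantic_delta(y, y_nominal) != 0
            and self.downstream_decide(y) != self.downstream_decide(y_nominal)
        )

# Toy instantiation: a keyword-sensitive downstream router (hypothetical).
check = AboCheck(
    locally_valid=lambda y: bool(y.strip()),
    semantic_delta=lambda y, n: 0.0 if y == n else 1.0,
    downstream_decide=lambda y: "REFER" if "enhanced verification" in y else "AUTO",
)
```

Here `"approve with enhanced verification"` passes the local check and yet routes differently than the nominal `"approve"`, satisfying all three conditions.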
ABOs are not errors or hallucinations. They are a structural consequence of deploying AI systems whose outputs are semantically open: multiple valid outputs exist for the same input, and local checks cannot distinguish between them.
In enterprise data engineering, the downstream effects of ABOs are frequently categorised as “semantic drift” or treated as unavoidable “LLM non-determinism.” However, semantic drift is the symptom, not the cause. Even when an LLM’s temperature is set to zero (making decoding effectively deterministic), it still produces outputs with non-zero semantic latitude. When these semantically open outputs are consumed by legacy systems, the resulting environment-level failure is caused by an Ambiguity-Bearing Output.
Enterprise AI outputs now flow into routing rules, eligibility checks, scoring pipelines, and retraining loops. An AI system can produce a locally acceptable output that is sufficiently underdetermined that downstream systems interpret it inconsistently with organisational intent.
Result: every system appears healthy, but the environment drifts. In a simulation of a credit-scoring pipeline, a +0.1pp approval-rate shift produced 39 excess defaults and 2.5% P&L damage (Ayada, 2026).
Step 1: An LLM writes: “moderate risk; approve with enhanced verification.” Passes local checks, but “enhanced verification” admits multiple downstream interpretations.
Step 2: A rules engine discretises into a risk band. Subtle wording differences cross categorisation thresholds: a discretisation jump.
Step 3: Feedback loops respond with delay, reinforcing the shifted behaviour.
Step 4: No component is wrong, but portfolio metrics drift. Attribution to the original cause becomes difficult.
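Steps 1 and 2 can be reduced to a toy corridor: both phrasings pass local checks, but the rules engine's keyword discretisation sends them to different bands. The keywords and band names below are illustrative assumptions, not the actual rules of any production engine:

```python
def risk_band(assessment: str) -> str:
    """Toy rules engine: discretise free-text risk language into a band."""
    text = assessment.lower()
    if "high risk" in text:
        return "DECLINE"
    if "enhanced verification" in text:
        return "REFER"  # the extra clause crosses a categorisation threshold
    return "APPROVE"

# Two locally valid LLM outputs for the same applicant:
nominal = risk_band("moderate risk; approve")
shifted = risk_band("moderate risk; approve with enhanced verification")
```

The wording difference alone moves the case from `APPROVE` to `REFER`: a discretisation jump, with no component individually at fault.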
Generative AI places semantically open interfaces into high-throughput machine-to-machine corridors. ABOs that would have been caught by human interpretation now propagate at machine speed.
Agentic AI compounds this: autonomous decision chains across multiple systems without human checkpoints. Each agent-to-agent handoff is a potential ABO propagation point.
Discretisation jumps: corridors that map continuous AI outputs to categorical decisions. Small semantic latitude can produce categorically different outcomes.
Feedback reinforcement: deviation re-enters through calibration loops. In simulation, divergence persisted 400 timesteps after ABO cessation (Ayada, 2026).
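The reinforcement mechanism can be sketched with a toy calibration loop: a recalibration target tracks realised behaviour with delay, so a one-off ABO-induced shift is absorbed into the target and outlives the output that caused it. The dynamics below are an illustrative assumption, not the ISE simulator's model:

```python
def run_loop(steps: int, shock_at: int, shock: float = 0.001) -> list[float]:
    """Toy calibration loop: realised approval rate follows a target that is
    itself recalibrated toward realised behaviour each step (positive feedback)."""
    target = 0.50  # nominal approval rate
    history = []
    for t in range(steps):
        # a single ambiguous output perturbs one period's realised behaviour
        realised = target + (shock if t == shock_at else 0.0)
        # delayed feedback: the target is re-fit toward what was realised
        target = 0.99 * target + 0.01 * realised
        history.append(realised)
    return history

trajectory = run_loop(steps=500, shock_at=10)
```

After the shock at step 10, the realised rate never returns to its pre-shock level: the calibration loop has internalised the deviation, which is the persistence pattern the text describes.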
In a simulation replicating a realistic AI-integrated underwriting pipeline (four interconnected systems processing loan applications over 1,200 decision cycles), we measured what happens when a single AI output carries just enough ambiguity to pass local checks yet shift downstream behaviour. The headline results appear above: a +0.1pp approval-rate shift, 39 excess defaults, 2.5% P&L damage, and divergence persisting 400 timesteps after the ABO ceased.
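A minimal sketch of such a corridor, with four chained stages standing in for the interconnected systems. The stage logic, names, and thresholds below are illustrative assumptions and do not reproduce the published ISE_simulator:

```python
def llm_stage(application: dict) -> str:
    # Generative stage: free-text assessment with semantic latitude.
    if application["flagged"]:
        return "moderate risk; approve with enhanced verification"
    return "moderate risk; approve"

def rules_stage(assessment: str) -> str:
    # Rules engine: keyword discretisation of the free-text output.
    return "REFER" if "enhanced verification" in assessment else "APPROVE"

def scoring_stage(decision: str, score: float) -> bool:
    # Scoring pipeline: only straight approvals above a cutoff proceed.
    return decision == "APPROVE" and score >= 0.6

def booking_stage(proceed: bool) -> str:
    return "BOOKED" if proceed else "NOT BOOKED"

def run_cycle(application: dict) -> str:
    return booking_stage(
        scoring_stage(rules_stage(llm_stage(application)), application["score"])
    )
```

Every stage behaves correctly in isolation, yet the ambiguous phrasing in the first stage flips the final booking decision for an otherwise identical applicant.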
ISE Framework → /research/interconnected-systems-environment
ISCIL Architecture → /research/iscil-containment-architecture
Glossary → /research/glossary
Ayada, M. (2026). Propagation of Ambiguity-Bearing Outputs Across Interconnected Systems Environment. TechRxiv (in review).
Code: github.com/Myr-Aya/ISE_simulator
Archive: Zenodo DOI 10.5281/zenodo.18719967