Methodology
SWANK::AI applies a structured, proportionate methodology designed to clarify how AI systems behave in practice, where risk accumulates, and how oversight may fail.
The methodology is analytical, not evaluative. It does not certify, approve, or remediate systems.
Methodological Principles
All analysis is guided by the following principles:
- Independence: No alignment with vendors, deployers, platforms, or institutional interests.
- Proportionality: The depth of analysis is matched to the risk, context, and sensitivity of the system under review.
- Behaviour Over Intent: Focus on observed or foreseeable system behaviour rather than stated goals or design claims.
- Documentation First: Findings are recorded clearly and defensibly for oversight, review, or internal decision-making.
- Non-Escalatory Framing: Risks are described precisely, without exaggeration or minimisation.
Analytical Inputs
Depending on scope, analysis may draw on:
- system descriptions or technical documentation
- decision logic, workflows, or escalation pathways
- policy, governance, or oversight materials
- output samples, use-case scenarios, or impact narratives
- complaints, incident records, or internal correspondence
SWANK::AI does not require access to proprietary code unless explicitly agreed.
Core Analytical Lenses
Analysis typically considers:
- Structural Design: How decisions are framed, constrained, or delegated within the system.
- Bias Pathways: Where unequal impact may arise through data, logic, or deployment context.
- Escalation & Oversight: Whether risks are identified, reviewed, and corrected, or instead amplified.
- Accountability Gaps: Points where responsibility becomes unclear or diffused.
- Mismatch Between Intent and Operation: Differences between how systems are described and how they function in practice.
Outputs
Engagements result in written analysis that may include:
- clearly scoped findings
- identified risk patterns or failure modes
- documentation suitable for boards, oversight bodies, or formal review
- limitations and boundaries of the analysis
No scoring, certification, or endorsement is provided.
Methodological Boundaries
SWANK::AI methodology does not include:
- legal or regulatory interpretation
- compliance certification
- technical remediation or system redesign
- enforcement or investigative action
All work remains analytical and non-directive.
Relationship to Engagement Scope
The methodology is applied only within the agreed scope of each engagement.
No open-ended analysis is conducted, and no conclusions are drawn beyond the defined question or materials reviewed.
For an illustrative example of how analysis is structured, see Case.
Chromatic Feedback Mirror Protocol
The Chromatic Feedback Mirror Protocol is an analytical technique used to examine how systems respond to feedback, challenge, or dissent, and how those responses are reflected back onto users or affected individuals.
The protocol focuses on identifying structural patterns where:
- feedback is minimised, deflected, or recharacterised
- challenge is reframed as non-cooperation, risk, or instability
- silence is implicitly rewarded while expression carries consequence
- responsibility is displaced away from decision-makers toward those raising concerns
In such conditions, systems may appear stable while risk and harm are being displaced rather than resolved.
The protocol distinguishes between reflective feedback (information that accurately describes system behaviour or impact) and projective response (defensive attribution, escalation, or narrative distortion generated by the system itself).
Failure to maintain this distinction is treated as a structural weakness.
Within analysis, encouraged or enforced silence is treated as a diagnostic signal, not a neutral outcome. Where silence becomes the safest or most expected state, the system's capacity for ethical correction and accountability is materially compromised.
The Chromatic Feedback Mirror Protocol does not assess intent or assign fault.
It documents systemic response patterns associated with escalation failure, accountability gaps, and persistence of harm.
The protocol is applied as a methodological lens within SWANK::AI's fixed-scope engagements, informing how risks, bias pathways, and escalation failures are identified and documented. It is used selectively, only where feedback dynamics are materially relevant to the system under review.
