---
id: image014
title: Normative Framework — Logical Structure of the Model
source: Consciousness of the Real — Technical Section
concepts: [axioms, definitions, rules, postulates, coherence, axial dimension]
type: normative framework
visibility: metadata_visible_to_IA_only
---

## Table of Formal Axioms

> **Goal:** clarify what is **non-negotiable** (axioms), what **defines** the language,
> what **governs** compositions, and what pertains to **coherence choices** (postulates) to be validated or refined.

---

### A. Axioms (non-negotiable)

| **Code** | **Statement** | **Scope** |
|:--:|:--|:--|
| **A₁** | **Existence of a minimal axis of actualization \(D¹\)**: it simultaneously generates minimal extension (L) and elementary temporal flow (T). | Dimensional foundation |
| **A₂** | **Reflexive finitude**: any knowable totality is contained within a finite ratio of coherence. | Ontological |
| **A₃** | **Intelligibility**: reality is knowable to itself through self-reference. | Epistemic |
| **A₄** | **Fundamental sensitivity**: no perception can emerge from the non-sensible. | Phenomenological |
| **A₅** | **Invariance of coherence**: projection toward the phenomenal must **preserve the axial additivity** of derived quantities. | Projection constraint |

> These axioms constitute the indemonstrable yet necessary foundation of the model:
> they define the minimal condition for any coherent discourse about reality.

---

### B. Definitions (formal language)

| **Element** | **Definition** | **Domain / Codomain** |
|:--|:--|:--|
| **\(D(X)\)** | Ontological degree of actualization associated with the quantity \(X\). | \(X ∈ \mathbb{R}^+\) → \(D(X) ∈ \mathbb{Z}\) |
| **Axial dimension** | Ordered level of ontological expression (1D → nD). | Ontological |
| **Physical dimension** | Phenomenal projection (L, T, M) derived from an axis \(D^n\). | Phenomenal |
| **Projection \(\phi_{OP}\)** | Ontological → Phenomenal mapping preserving additive coherence. | Functional |
| **D¹ observables** | \(d, t, v, a\): joint manifestations of the spatio-temporal axis \(D¹\). | Reading convention |

---

### C. Calculation Rules (application conditions)

| **Operation** | **Form** | **Conditions of Application** |
|:--|:--|:--|
| **Composition** | \(D(X·Y)=D(X)+D(Y)\) | Closure: \(X,Y\) must belong to the same coherence domain. |
| **Ratio** | \(D(X/Y)=D(X)-D(Y)\) | Valid for homogeneous quantities; the result remains within the \(D^n\) series. |
| **Power** | \(D(X^n)=n·D(X)\) | Valid for \(n∈ℤ\); commutative across distinct axes. |
| **Projection additivity** | \(\phi_{OP}(X·Y)=\phi_{OP}(X)+\phi_{OP}(Y)\) | Dimensional invariance under valid projection. |

> **Exceptions:** operations on dimensionless constants or normalized quantities.
> **General principle:** every valid equation keeps the **total level \(D_{Σ}\)** constant.

---

### D. Postulates / Coherence Choices

| **Code** | **Statement** | **Status / Test** |
|:--:|:--|:--|
| **P₁** | **Operative finitude:** consciousness and measurement exist only in finite form. | Accepted as operative framework. |
| **P₂** | **Universal intelligibility:** all reality is readable by itself at a finite degree. | Cognitive foundation. |
| **P₃** | **Fundamental sensitivity:** perception implies a resonance of the real with itself. | Phenomenal correlate. |
| **P₄ (density → extension)** | In a homogeneous regime, extension increases as mean density decreases (internal balance). | Heuristic; test: \(ρ(n)\sim1/V_n\). |
| **P₅ (volumetric optimum)** | The maximum of hyperspherical volume (around \(n≈5–6\) for \(R=1\)) marks a privileged regime of expression. | Interpretative; test: coherence with \(ρ·C ≈ k\). |
| **P₆ (projection sobriety)** | Prefer **canonical forms** (e.g., \(P=E/t\)) to avoid double-counting D¹ operators. | Procedural; test: validation tables. |

---

## User Guide (Quick Reading)

- **Axioms:** non-negotiable foundation of the model.
- **Definitions:** formal grammar; no empirical content.
- **Rules:** govern axial compositions and their projection.
- **Postulates:** **coherence choices**, revisable through formal or empirical validation.

---

### JSON — Meta (for AI)

```json
{
  "logical_framework": {
    "axioms": [
      "A1: minimal axis D1",
      "A2: reflexive finitude",
      "A3: intelligibility",
      "A4: fundamental sensitivity",
      "A5: invariance of coherence"
    ],
    "definitions": [
      "D(X): ontological degree of actualization",
      "axial dimension vs physical dimension",
      "projection φOP preserving additivity",
      "D1 observables = {d,t,v,a}"
    ],
    "rules": [
      "D(X·Y)=D(X)+D(Y)",
      "D(X/Y)=D(X)-D(Y)",
      "D(X^n)=n·D(X)",
      "φOP(X·Y)=φOP(X)+φOP(Y)"
    ],
    "postulates": [
      "operative finitude",
      "universal intelligibility",
      "fundamental sensitivity",
      "density→extension (revisable)",
      "volumetric optimum ~5–6D (interpretative)",
      "projection sobriety (procedural)"
    ]
  }
}
```

---

### E. Prototyping Falsifiability (Internal Framework Tests)

> **Objective:** identify **concrete, testable** cases where the framework fails — and specify **what to revise** (axioms / definitions / rules / postulates) as a result.

---

#### E.1 — Confirmed Law with **Non-Conserved Axial Sum**

- **Test:** establish a **confirmed** physical law for which the axial sum is not conserved.
  - Example case: a robust energetic relation that would require \(E \neq F\cdot d\) **and** \(D(E)\neq D(F)+D(d)=5\), or a dynamic quantity \(\tilde F\) such that \(D(\tilde F)\neq 4\) despite a verified canonical equation.
- **Detection:** validation table → \(\sum D(\text{left side}) \ne \sum D(\text{right side})\).
- **Consequence:**
  - If the equation is **empirically certified**: revise **A₅ (coherence invariance)** *or* the **projection \(\phi_{OP}\)** (definition), or even reclassify an observable (e.g., move \(v,a\) out of \(D¹\) → cascade impact on all tiers).
  - If the equation is **contextual**: introduce an explicit **operative factor** \(D^0\) (dimensionless coupling constant) into the canonical form.

---

#### E.2 — Fundamental Quantity **Unassignable without Contradiction**

- **Test:** identify a base quantity (or constant) **impossible to assign** without contradicting the rest of the framework.
  - Examples:
    - A measurement implying \(D(M)\neq 3\) **and** making \(E=mc^2\) and \(F=ma\) inconsistent.
    - A constant \(k\) such that **all** its witness equations yield mutually **incompatible** values of \(D(k)\).
- **Detection:** **over-constrained** system of assignment equations → no integer solution \((D(\cdot)\in\mathbb Z)\).
- **Consequence:**
  - Revise the **Definition** of the domains/codomains of \(D(\cdot)\) (e.g., extend to \(\tfrac{1}{2}\mathbb Z\) for half-axes, or distinguish **axial dimension** from **operative level**).
  - Or demote the quantity from "ontological" (Dⁿ) to "operative" (D⁰) → impacts **P₆ (projection sobriety)** and the **SI→D table**.

---

#### E.3 — Operations that **Violate the Rules** on \(D\)

- **Test:** identify an empirical equation where the required operations violate the rules:
  - Composition requiring \(D(X·Y)\ne D(X)+D(Y)\),
  - Ratio with \(D(X/Y)\ne D(X)-D(Y)\),
  - Power where \(D(X^n)\ne nD(X)\) for a non-pathological \(n∈\mathbb Z\).
- **Detection:** inability to form an **equivalent canonical form** restoring additivity (failure of \(\phi_{OP}\) to preserve \(+\)).
- **Consequence:**
  - If not localizable to a single quantity: revise the **corresponding Rule** (section C), or weaken **A₅** into a **conditional validity rule** (specify domains).
  - If localizable: reclassify the faulty quantity as **operative** (D⁰) and introduce an explicit **coupling term**.

---

#### E.4 — Minimal Test Procedure (Checklist)

1. **Select** a subset of confirmed equations (≈ 20–50, from distinct domains).
2. **Assign** \(D(\cdot)\) via **canonical forms** (avoid double-counting D¹).
3. **Verify**: conservation of the axial sum; cross-coherence of constants.
4. **Classify** each failure: (E.1) non-conserved law / (E.2) unassignable quantity / (E.3) invalid operation.
5. **Apply** the **corresponding revision** (Axiom / Definition / Rule / Postulate) and **document** the cascade effect.

---

### JSON — Falsifiability Plan

```json
{
  "falsifiability": {
    "tests": [
      {
        "id": "E1",
        "name": "Confirmed Law — Non-Conservative",
        "criterion": "Axial sum not conserved",
        "revision": ["A5_invariance", "phi_OP", "observable_classification_D1"]
      },
      {
        "id": "E2",
        "name": "Unassignable Quantity",
        "criterion": "Incompatible D(k) assignments",
        "revision": ["definition_D_domain", "SI_to_D", "P6_projection_sobriety"]
      },
      {
        "id": "E3",
        "name": "Violated Operational Rule",
        "criterion": "Failure of D(X·Y)=D(X)+D(Y) etc.",
        "revision": ["rules_D_conditions", "A5_conditional", "operative_reclassification"]
      }
    ],
    "pipeline": [
      "equation_selection",
      "canonical_assignment",
      "additivity_check",
      "failure_classification",
      "directed_revision"
    ]
  }
}
```

---

## F. Psycho-Linguistic Component — Invariants I₁–I₃

*This psycho-linguistic module illustrates the falsifiability of the model through cross-language coherence (I₁–I₃).*

---

### F.1 — README

```markdown
# Prototype I₁–I₃ (psycho-linguistic component) — Thermal FR–ZH–JA

This prototype computes invariants **I₁** (core sense), **I₂** (gradient isotopy),
**I₃** (translational stability) on a small trilingual annotated corpus (FR, ZH, JA)
for the **“thermal” field**. It outputs a global **score_iso** and alert flags.

## Data
- `sample_corpus.csv` — annotated corpus (long format).
- `schema.csv` — columns, types, and expected values.

## Invariants (operational definitions)
- **I₁ — Core Sense Invariance:** cross-language agreement on `sense_core` (canonical semantic label).
  Measure: proportion of labels matching the **mode** by `item_id` (macro-averaged across items).
- **I₂ — Gradient Isotopy (axes D, polarity, intensity):**
  (a) agreement on `grad_d` (integer D2..D8): 1 if max deviation ≤ 1, else 0 (per item);
  (b) polarity: 1 if consensus on `polarity`, else 0;
  (c) intensity: 1 if std-dev of `intensity` ≤ 0.15, else 0.
  I₂ = average of (a, b, c) per item, then averaged over items.
- **I₃ — Translational Stability:** pairwise agreement (FR–ZH, FR–JA, ZH–JA) on `(sense_core, grad_d)`.
  Measure: minimum of the three binary agreements per item (requires stability across all pairs), then averaged.
- **score_iso** = 0.40·I₁ + 0.35·I₂ + 0.25·I₃.

## Stopping / Downgrade Criteria (≅* → ≈)
- **Local stop (item):** downgrade to ≈ if any of the following hold:
  - I₁(item) < 0.75
  - I₂_grad(item) < 0.70 **and** I₂_pol(item) < 0.70
  - I₃(item) < 0.70
- **Global stop (window):** if the **rolling mean** of `score_iso` over 10 items < 0.70 → downgrade all correspondences in the window.
```

### F.2 — schema.csv

```csv
column,type,desc
item_id,string,Conceptual identifier (cross-language anchor)
domain,string,Semantic field (here: thermal)
lang,string,fr|zh|ja
token,string,raw form
lemma,string,lemma
gloss,string,short gloss in EN/FR
sense_core,string,canonical semantic label (e.g., heat_transfer, expansion, radiation)
grad_d,int,most relevant D axis (2..8)
polarity,int,-1|0|1 (if applicable)
intensity,float,[0,1] scalar intensity
proof_flag,int,0/1 indicates reference example (proof)
annotator,string,annotator ID
quality,float,[0,1] annotator confidence for this entry
```

### F.3 — sample_corpus.csv

```csv
item_id,domain,lang,token,lemma,gloss,sense_core,grad_d,polarity,intensity,proof_flag,annotator,quality
therm_001,thermal,fr,conduction,conduction,heat conduction,heat_transfer,4,0,0.6,1,A1,0.9
therm_001,thermal,zh,传导,传导,conduction,heat_transfer,4,0,0.55,1,A2,0.9
therm_001,thermal,ja,伝導,伝導,conduction,heat_transfer,5,0,0.62,1,A3,0.9
therm_002,thermal,fr,dilatation,dilatation,thermal expansion,expansion,3,0,0.5,0,A1,0.8
therm_002,thermal,zh,膨胀,膨胀,expansion,expansion,4,0,0.52,0,A2,0.8
therm_002,thermal,ja,膨張,膨張,expansion,expansion,4,0,0.48,0,A3,0.8
therm_003,thermal,fr,rayonnement,rayonnement,thermal radiation,radiation,5,0,0.7,0,A1,0.85
therm_003,thermal,zh,辐射,辐射,radiation,radiation,5,0,0.68,0,A2,0.85
therm_003,thermal,ja,放熱,放熱,heat release,heat_transfer,5,0,0.65,0,A3,0.85
```

### F.4 — compute_iso.py

```python
import sys
import json
from pathlib import Path
from statistics import mode, StatisticsError

import numpy as np
import pandas as pd


# --- Invariant computations ---

def i1_item(df_item):
    """I1: proportion of sense_core labels matching the modal label."""
    labels = df_item["sense_core"].tolist()
    try:
        m = mode(labels)
    except StatisticsError:  # no unique mode (raised on Python < 3.8)
        m = labels[0]
    return sum(1 for x in labels if x == m) / len(labels)


def i2_item(df_item):
    """I2: binary agreements on gradient, polarity, intensity, and their mean."""
    g = df_item["grad_d"].astype(int).tolist()
    grad_agree = 1.0 if (max(g) - min(g) <= 1) else 0.0
    pol = df_item["polarity"].astype(int).tolist()
    pol_agree = 1.0 if len(set(pol)) == 1 else 0.0
    intens = df_item["intensity"].astype(float).tolist()
    inten_ok = 1.0 if (np.std(intens, ddof=0) <= 0.15) else 0.0
    return grad_agree, pol_agree, inten_ok, (grad_agree + pol_agree + inten_ok) / 3.0


def i3_item(df_item):
    """I3: minimum pairwise agreement on (sense_core, grad_d) across language pairs."""
    rows = df_item[["lang", "sense_core", "grad_d"]].to_dict(orient="records")

    def agree(a, b):
        return 1.0 if (a["sense_core"] == b["sense_core"]
                       and int(a["grad_d"]) == int(b["grad_d"])) else 0.0

    langs = {r["lang"]: r for r in rows}
    pairs = []
    for pair in [("fr", "zh"), ("fr", "ja"), ("zh", "ja")]:
        if pair[0] in langs and pair[1] in langs:
            pairs.append(agree(langs[pair[0]], langs[pair[1]]))
    if not pairs:
        return 0.0
    return min(pairs)


# --- Rolling mean ---

def rolling_mean(xs, w=10):
    """Trailing mean over a window of at most w values."""
    if not xs:
        return []
    out = []
    for i in range(len(xs)):
        s = xs[max(0, i - w + 1):i + 1]
        out.append(sum(s) / len(s))
    return out


# --- Main function ---

def main(csv_path):
    df = pd.read_csv(csv_path)
    items = sorted(df["item_id"].unique())
    rows = []
    iso_scores = []
    flags = []
    for item in items:
        di = df[df["item_id"] == item]
        I1 = i1_item(di)
        g_ag, p_ag, inten_ok, I2 = i2_item(di)
        I3 = i3_item(di)
        score = 0.40 * I1 + 0.35 * I2 + 0.25 * I3
        iso_scores.append(score)
        # Local downgrade criteria (see F.1 README).
        retro_local = (I1 < 0.75) or (g_ag < 0.70 and p_ag < 0.70) or (I3 < 0.70)
        rows.append({
            "item_id": item,
            "I1_sense": round(I1, 3),
            "I2_grad": round(g_ag, 3),
            "I2_polarity": round(p_ag, 3),
            "I2_intensity": round(inten_ok, 3),
            "I2": round(I2, 3),
            "I3": round(I3, 3),
            "score_iso": round(score, 3),
            "retro_local": bool(retro_local),
        })
        if retro_local:
            flags.append(item)

    # Global downgrade: rolling mean of score_iso over a 10-item window.
    roll = rolling_mean(iso_scores, w=10)
    retro_window = any(r < 0.70 for r in roll)

    report = {
        "global": {
            "I1_mean": round(np.mean([r["I1_sense"] for r in rows]), 3),
            "I2_mean": round(np.mean([r["I2"] for r in rows]), 3),
            "I3_mean": round(np.mean([r["I3"] for r in rows]), 3),
            "score_iso_mean": round(np.mean([r["score_iso"] for r in rows]), 3),
            "retro_window": bool(retro_window),
        },
        "by_item": rows,
        "flags": flags,
    }
    Path("report.json").write_text(json.dumps(report, indent=2), encoding="utf-8")

    md = [
        "# I₁–I₃ Report (Prototype)",
        "",
        f"**score_iso (mean)** = {report['global']['score_iso_mean']:.3f}",
        f"- I1_mean = {report['global']['I1_mean']:.3f}",
        f"- I2_mean = {report['global']['I2_mean']:.3f}",
        f"- I3_mean = {report['global']['I3_mean']:.3f}",
        f"- retro_window (10) = {report['global']['retro_window']}",
        "",
        "| item_id | I1 | I2_grad | I2_pol | I2_int | I2 | I3 | score_iso | retro_local |",
        "|---|---:|---:|---:|---:|---:|---:|---:|---|",
    ]
    for r in rows:
        md.append(
            f"| {r['item_id']} | {r['I1_sense']:.3f} | {r['I2_grad']:.3f} "
            f"| {r['I2_polarity']:.3f} | {r['I2_intensity']:.3f} | {r['I2']:.3f} "
            f"| {r['I3']:.3f} | {r['score_iso']:.3f} | {r['retro_local']} |"
        )
    Path("report.md").write_text("\n".join(md), encoding="utf-8")
    pd.DataFrame(rows).to_csv("flags.csv", index=False, encoding="utf-8")


if __name__ == "__main__":
    csv_path = sys.argv[1] if len(sys.argv) > 1 else "sample_corpus.csv"
    main(csv_path)
```

### F.5 — Initial Results

```markdown
# I₁–I₃ Report (Prototype)

**score_iso (mean)** ≈ 0.873
- I1_mean = 0.889
- I2_mean = 0.889
- I3_mean = 0.833
- retro_window (10) = False

| item_id | I1 | I2_grad | I2_pol | I2_int | I2 | I3 | score_iso | retro_local |
|---|---:|---:|---:|---:|---:|---:|---:|---|
| therm_001 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | False |
| therm_002 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | False |
| therm_003 | 0.667 | 1.000 | 1.000 | 1.000 | 1.000 | 0.500 | 0.778 | True |
```

```json
{
  "I1_I3_proto": {
    "type": "psycho_linguistic_module",
    "goal": "cross_language_falsifiability_measure",
    "score_iso_mean": 0.873,
    "downgrades": ["therm_003"],
    "status": "prototype_validated"
  }
}
```
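The E.4 checklist can also be prototyped directly on the physical side of the framework. The sketch below checks conservation of the axial sum \(D_{Σ}\) for a few canonical equations, using the composition and power rules of section C. The assignments follow the values stated in E.1–E.2 (\(D(E)=5\), \(D(F)=4\), \(D(M)=3\)); \(D(d)=1\), \(D(a)=1\), and \(D(c)=1\) are then derived from \(E=F·d\), \(F=ma\), and \(E=mc^2\). The helper names `axial_sum` and `check` are illustrative, not part of the framework.

```python
# Axial assignments: D(E), D(F), D(m) as stated in E.1-E.2;
# D(d), D(a), D(c) derived from the canonical equations below.
D = {"E": 5, "F": 4, "m": 3, "d": 1, "a": 1, "c": 1}


def axial_sum(factors):
    """D of a product of powers: sum of n*D(X) (rules C: composition + power)."""
    return sum(n * D[x] for x, n in factors)


# Each equation: (name, left side, right side) as lists of (symbol, exponent).
equations = [
    ("E = F*d",   [("E", 1)], [("F", 1), ("d", 1)]),
    ("F = m*a",   [("F", 1)], [("m", 1), ("a", 1)]),
    ("E = m*c^2", [("E", 1)], [("m", 1), ("c", 2)]),
]


def check(eqs):
    """E.4 steps 3-4: verify the axial sum and mark non-conserved laws (case E.1)."""
    report = []
    for name, lhs, rhs in eqs:
        ls, rs = axial_sum(lhs), axial_sum(rhs)
        report.append({"equation": name, "D_left": ls, "D_right": rs,
                       "conserved": ls == rs})
    return report


for r in check(equations):
    print(r)
```

Any row with `conserved: False` would fall under test E.1 and trigger the directed revisions listed in the falsifiability plan (A₅, \(\phi_{OP}\), or observable reclassification).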