Research Archive: What We Learned
Science advances by learning from failures AND by honestly interpreting valid data. This page documents what was overclaimed, what the data actually shows, and what we learned.
Where These Failures Fit
Each failure below is part of the longer research journey. The phases below show what led to each failure and what it enabled.
- Phase 2 (2013-2015), Single-Frame Hypothesis: bigeometric Christoffel failure (below)
- Phase 6 (2022), MOO + Cosmology: k(L) numerology and "92% tension" claim failures (below)
- Phase 7 (2023), Composite Discovery: scheme-invariance hypothesis, still unproven (below)
- Phase 9 (2024), Spectral Gap Verification: what finally worked after these failures
What This Framework CAN and CANNOT Do
CAN Do
- Detect and pre-condition power-law singularities (auto-logarithm)
- Guarantee stability when mixing discretization schemes (9/9 tests)
- Map Pareto trade-offs between competing objectives (MOO)
- Mimic the dark energy equation of state (100% curve-fit)
- Find unitarity-preserving quantum parameter regions
CANNOT Do
- Solve the H0 tension (requires a dynamic, not uniform, change)
- Create super-convergence (it averages gaps, it doesn't enhance them)
- Replace established numerical methods (diagnostic tool only)
- Prove new physics (it is a numerical tool, not a physics theory)
- Predict without fitting (k values are chosen, not derived)
Why Document Failures?
Most research only publishes successes. This creates a biased view of science where dead ends are hidden and the path to discovery looks smoother than it was.
This archive serves three purposes: (1) intellectual honesty, (2) helping others avoid the same dead ends, (3) documenting what was actually tried before claiming anything works.
Failed: Bigeometric Christoffel Symbols
What We Tried
The idea: replace classical derivatives with bigeometric derivatives in the Christoffel symbol definition, in the hope of obtaining a "bigeometric Riemann tensor" with better behavior at singularities.
Why It Failed
- The bigeometric derivative requires positive-valued functions, but metric components g_uv can be negative
- The exp() wrapper distorts the tensor transformation properties
- The resulting "Riemann tensor" doesn't satisfy the Bianchi identities
- There is no consistent variational principle
Lesson Learned
You can't just swap derivatives in established physics formulas. The structure of GR depends on specific properties of classical derivatives (Leibniz rule, linearity, chain rule) that NNC operators don't preserve.
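The positivity obstruction can be seen directly from the definition. A minimal sketch, using the standard Grossman-Katz form of the bigeometric derivative, D*f(x) = exp(x·f'(x)/f(x)); the helper names here are ours, not the project library's:

```python
import math

# Bigeometric derivative D*f(x) = exp(x * f'(x) / f(x)),
# defined only where f(x) > 0 (standard Grossman-Katz form).
def bigeometric_derivative(f, df, x):
    fx = f(x)
    if fx <= 0:
        raise ValueError("bigeometric derivative undefined for f(x) <= 0")
    return math.exp(x * df(x) / fx)

# Power law f(x) = x**3: the bigeometric derivative is the constant e**3
# for every x > 0 -- the "good at power-law singularities" behavior.
f = lambda x: x**3
df = lambda x: 3 * x**2
print(bigeometric_derivative(f, df, 2.0))  # e**3 for any x > 0

# A metric-like component that changes sign, e.g. g(t) = t - 1, breaks it:
g = lambda t: t - 1.0
dg = lambda t: 1.0
try:
    bigeometric_derivative(g, dg, 0.5)     # g(0.5) = -0.5 < 0
except ValueError as e:
    print("fails:", e)
```

This is exactly the failure mode above: the operator is fine as a numerical tool on positive data, but a Lorentzian metric hands it sign-changing inputs.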
Clarified: k(L) Pattern Overclaim
What IS Valid
- The data is reproducible; the simulations are verified
- The correlation is real across 61 orders of magnitude
- Classical calculus (k=0) emerges naturally at large scales
- The formula is a useful diagnostic tool for scheme selection
What We Overclaimed
We originally claimed this was a "universal physical law" explaining why quantum mechanics uses different math than classical physics. That interpretation went too far - the physical mechanism is unknown.
Current Status
The k(L) formula remains a valid empirical tool for selecting optimal calculus schemes. The data is correct; only the physics interpretation was overclaimed. See the full results page for honest assessment.
Failed: "92% Hubble Tension Resolved"
What We Claimed
Meta-Friedmann equations with k=0.077 at an early dark energy epoch (z=2263) could reduce the Hubble tension from 5-sigma to sub-sigma. We called this "92% resolved" and displayed it prominently on the homepage.
Why It's Invalid
- The k value was CHOSEN to fit the tension, not predicted
- No mechanism explains why k=0.077 at that specific redshift
- The BBN/CMB "constraints" were placeholder penalties, not real data
- The model has as many free parameters as data points to fit
- Circular reasoning: we used the tension data to find k, then claimed k "resolves" the tension
The Honest Assessment
This was curve-fitting dressed up as physics. We found parameter values that matched observed discrepancies, then claimed the discrepancies were "resolved." This is the opposite of the scientific method.
Lesson Learned
A theory must predict BEFORE you look at the data. Post-hoc fitting doesn't count as evidence, no matter how good the fit looks.
Unproven: "Physical = Scheme-Invariant"
The Claim
Observables that remain consistent across multiple calculus schemes are "physical" and trustworthy. High cross-scheme variance indicates representation artifacts.
The Problem
- This is a reasonable HEURISTIC, not a proven principle
- Scheme-invariance is necessary but not sufficient for "physical"
- A wrong answer could be scheme-invariant (all schemes wrong in the same way)
- There is no experimental validation of this criterion
Current Status
We've downgraded this from "core principle" to "diagnostic heuristic." Cross-scheme consistency is useful for flagging potential artifacts, but it doesn't validate physics. Experimental verification is still required.
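As a toy illustration of the heuristic (not the project's actual diagnostic), the same observable can be estimated under two calculus schemes and the spread used to flag representation artifacts; agreement, as noted above, is necessary but not sufficient:

```python
import math

# Classical central finite difference for f'(x).
def classical_fd(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

# Geometric-scheme estimate: the multiplicative derivative
# (f(x+h)/f(x-h))**(1/(2h)) approximates exp(f'(x)/f(x)) for f > 0;
# converting back gives f'(x) = f(x) * ln(multiplicative derivative).
def geometric_fd(f, x, h=1e-5):
    mult = (f(x + h) / f(x - h)) ** (1.0 / (2 * h))
    return f(x) * math.log(mult)

f = lambda x: math.exp(0.5 * x) + 1.0   # positive test function
x0 = 2.0
a, b = classical_fd(f, x0), geometric_fd(f, x0)
rel_spread = abs(a - b) / max(abs(a), abs(b))
print(a, b, rel_spread)  # tiny spread -> "scheme-invariant" for this observable
```

A large `rel_spread` would flag a potential artifact; a small one proves nothing by itself, which is precisely why this stays a heuristic.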
What We Actually Learned
About the Math
- NNC derivatives are well-defined numerical tools
- The meta-integration fundamental theorems hold
- The spectral gap of mixed operators is preserved (verified in 9/9 tests)
- But physics applications require more than good math
About Scientific Method
- Always document what you tried and why it failed
- Distinguish prediction from post-hoc fitting
- Mechanisms matter more than curve fits
- Intellectual honesty is more valuable than impressive claims
Simulations: Valid Data, Recontextualized Interpretation
The simulations below produced valid, reproducible data. Only the original interpretations were sometimes overclaimed. Here's what the results actually show.
Wheeler-DeWitt Factor Ordering MOO
What the data shows: a Pareto front of 20 ordering choices (p = -2 to +2). A trade-off exists between a low ground-state eigenvalue (p ~ -1.9) and low sensitivity (p ~ 1.0). The WKB semiclassical limit IS ordering-invariant.
Recontextualized: MOO is a useful tool for exploring quantum cosmology quantization ambiguities. The data doesn't pick a "correct" ordering - that requires experiments.
Chiral Anomaly Detection Framework
What the data shows: v1 naive fermions give a false negative (fermion doubling cancels the anomaly); v2 Wilson fermions detect the anomaly (with finite-lattice suppression, as expected).
Recontextualized: Framework correctly handles scheme-dependent physics. Excellent teaching demonstration of lattice artifacts - not new physics, but valid pedagogy.
Kramers-Wannier Duality Validation
What the data shows: Critical coupling K_c found to 12 decimal places. Duality intact at self-dual point, correctly broken by external field.
Recontextualized: G-scheme language correctly captures Kramers-Wannier duality. Confirms known physics - validates the framework, doesn't discover new results.
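For reference, the self-dual point the simulation recovers is a known closed form: Kramers-Wannier duality maps the coupling K to K* with sinh(2K)·sinh(2K*) = 1, so the self-dual K_c solves sinh(2K_c) = 1, giving K_c = ln(1 + √2)/2. A quick check of that standard result (not project code):

```python
import math

# Self-dual coupling of the 2D Ising model: sinh(2 * K_c) = 1.
K_c = 0.5 * math.log(1.0 + math.sqrt(2.0))
assert abs(math.sinh(2.0 * K_c) - 1.0) < 1e-12

# The duality map K -> K* defined by sinh(2K) * sinh(2K*) = 1.
def dual_coupling(K):
    return 0.5 * math.asinh(1.0 / math.sinh(2.0 * K))

# The map is an involution, and K_c is its fixed point.
K = 0.3
assert abs(dual_coupling(dual_coupling(K)) - K) < 1e-12
assert abs(dual_coupling(K_c) - K_c) < 1e-12
print(f"K_c = {K_c:.12f}")
```

The 12-decimal K_c found by the simulation should match this closed form, which is what "confirms known physics" means here.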
Dark Energy w(z) Form
What the data shows: meta-Friedmann CAN produce w = -0.909, matching DESI (100% curve-fit). BUT the H0 tension is 0% explained and the S8 tension is 0% explained.
Why "Mimicker": the framework changes the expansion rate uniformly; it cannot do "A today, B yesterday," which is what solving the H0 tension requires. This is a mathematical feature of the coordinate choice.
Open question: Is there a physical Lagrangian connecting meta-calculus to scalar-tensor gravity? Could be curve-fitting, could be deeper.
Wheeler-DeWitt Factor Ordering Trade-off
What the data shows: minimizing the ground-state eigenvalue requires p ~ -1.9; minimizing sensitivity (stability) requires p ~ 1.0. The two cannot be optimized simultaneously.
Interpretation: this defines a "Numerical Uncertainty Principle": there is no perfect calculus. You must choose precision OR stability, and the solution is a Pareto curve of trade-offs.
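The shape of such a trade-off can be sketched with two stand-in quadratic objectives whose minima sit at p ~ -1.9 and p ~ 1.0 (chosen only to mimic the reported optima; they are not the actual Wheeler-DeWitt objectives):

```python
import numpy as np

# Toy stand-in objectives: f1 minimized at p = -1.9 ("eigenvalue"),
# f2 minimized at p = 1.0 ("sensitivity").
p = np.linspace(-2.0, 2.0, 41)
f1 = (p + 1.9) ** 2
f2 = (p - 1.0) ** 2

# A point is Pareto-optimal if no other point is at least as good in
# both objectives and strictly better in one.
pareto = [
    i for i in range(len(p))
    if not any(
        f1[j] <= f1[i] and f2[j] <= f2[i] and (f1[j] < f1[i] or f2[j] < f2[i])
        for j in range(len(p))
    )
]
front = p[pareto]
print(front.min(), front.max())  # the front spans roughly [-1.9, 1.0]
```

Every p between the two minima is non-dominated, which is why the answer is a curve of trade-offs rather than a single best ordering.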
Quantum Compatibility MOO
Original failure: Naive componentwise meta-derivatives caused 65% norm drift.
What MOO found: Constrained parameter regions DO preserve unitarity (drift < 1e-10). Blind application fails; constrained application may work. Needs physical validation.
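The norm-drift diagnostic itself is easy to illustrate on a toy two-level system (an assumed setup, not the project's meta-derivative solver): an exact unitary step preserves the norm to machine precision, while a naive explicit Euler step drifts every step.

```python
import numpy as np

# Hermitian "Hamiltonian" for a toy two-level system.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])
dt, steps = 0.01, 1000

# Exact unitary step U = exp(-i H dt) via eigendecomposition.
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * dt)) @ V.T

psi_u = np.array([1.0 + 0j, 0.0 + 0j])   # normalized initial state
psi_e = psi_u.copy()
for _ in range(steps):
    psi_u = U @ psi_u                     # unitary: norm stays 1
    psi_e = psi_e - 1j * dt * (H @ psi_e) # naive Euler: norm grows each step

drift_u = abs(np.linalg.norm(psi_u) - 1.0)
drift_e = abs(np.linalg.norm(psi_e) - 1.0)
print(f"unitary drift {drift_u:.1e}, Euler drift {drift_e:.1e}")
```

Measuring drift against a threshold like this is the kind of check behind the "drift < 1e-10" criterion; here the naive scheme fails it by many orders of magnitude.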
Verified Contributions
1. The NNC Numerical Library
The core implementations of Grossman-Katz derivatives and meta-integration are mathematically correct and useful for numerical methods work. This is a tool, not a physics theory.
2. Spectral Gap STABILITY (9/9 Tests)
Composed operators maintain gap(P_mix) >= min(individual gaps). This is stability, NOT enhancement - the composed gap is BETWEEN component gaps (ratio ~1.003), providing a safety guarantee when mixing schemes. See verification
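The stability guarantee can be checked on a toy pair of schemes (two symmetric doubly stochastic matrices standing in for the project's operators): both share the uniform top eigenvector, so the gap of their 50/50 mixture lands between the component gaps; stability, not enhancement.

```python
import numpy as np

# Spectral gap = 1 - |second-largest eigenvalue| for a symmetric
# stochastic matrix (top eigenvalue is 1 with the uniform eigenvector).
def spectral_gap(P):
    ev = np.sort(np.abs(np.linalg.eigvalsh(P)))[::-1]
    return 1.0 - ev[1]

n = 6
A = 0.5 * np.eye(n)                     # lazy random walk on a 6-cycle
for i in range(n):
    A[i, (i + 1) % n] += 0.25
    A[i, (i - 1) % n] += 0.25

B = 0.5 * np.eye(n) + 0.5 * np.ones((n, n)) / n   # lazy walk on complete graph

P_mix = 0.5 * (A + B)
gaps = spectral_gap(A), spectral_gap(B)
print(gaps, spectral_gap(P_mix))
assert min(gaps) <= spectral_gap(P_mix) <= max(gaps)
```

The mixed gap sits strictly between the component gaps, which is the safety guarantee: mixing schemes never destroys the smaller gap, and never beats the larger one.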
3. Log-Space Transform (7.7x Speedup)
Bigeometric-inspired coordinates improve stiff ODE solving. Same physics, fewer computations. See benchmarks
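The transform can be sketched generically (an illustrative example of the idea, not the benchmarked implementation): for a positive solution of y' = -k·y, substituting u = ln y gives u' = -k, which forward Euler integrates exactly even at step sizes where direct Euler oscillates.

```python
import math

# Stiff decay y' = -k*y with h*k = 2: direct forward Euler sits exactly
# at its stability boundary and oscillates; in log space u = ln(y) the
# ODE becomes u' = -k (constant), which Euler integrates exactly.
k, h, steps = 1000.0, 0.002, 5
y_direct, u_log = 1.0, 0.0              # y(0) = 1, so u(0) = ln(1) = 0
for _ in range(steps):
    y_direct = y_direct + h * (-k * y_direct)  # factor (1 - h*k) = -1 per step
    u_log = u_log + h * (-k)                   # exact for constant u'

y_exact = math.exp(-k * h * steps)
y_log = math.exp(u_log)
print(y_direct, y_log, y_exact)
```

Same physics, fewer (and larger) steps: the log-space solver tolerates step sizes that break the direct scheme, which is the mechanism behind the reported speedup.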
4. k(L) Scheme Selection Heuristic
Valid empirical formula for selecting optimal calculus schemes by scale. Data reproducible, mechanism unknown. Useful diagnostic tool. See data
Timeline of Honest Assessment
- Initial framework development with ambitious claims
- First falsifications documented (bigeometric Christoffel)
- Website launched with "92% resolved" claims
- Critical advisor review revealed curve-fitting issues
- Pivot decision: reframe as a numerical methods tool, archive the cosmological claims
The core numerical library and visualizers remain useful. The cosmological applications are archived as exploratory work.