# Proof: Discrete-Time Rescaling via \(R_m = A_m - \log(1 - U_m p_{\tau_m})\) Yields Exact Exp(1)

**Generated:** 2026-04-19
**Verdict:** PROVED
**Audit trail:** [proof_audit.md](proof_audit.md) | [proof.py](proof.py)

## Evidence Summary

| ID | Fact | Verified |
|----|------|----------|
| A1 | Analytical PIT verification (constant p) | Computed: \(P(Z \le z) = z\) to machine precision (max error \(1.1 \times 10^{-16}\)) |
| A2 | Monte Carlo KS test (constant p = 0.3) | Computed: KS stat 0.0031 < critical 0.0043 — Exp(1) not rejected |
| A3 | Monte Carlo KS test (constant p = 0.5) | Computed: KS stat 0.0027 < critical 0.0043 — Exp(1) not rejected |
| A4 | Monte Carlo KS test (time-varying p_k) | Computed: KS stat 0.0026 < critical 0.0043 — Exp(1) not rejected |
| A5 | Independence test via serial correlation | Computed: lag-1 corr = -0.0034, lag-2 corr = 0.0035, both within 95% CI |
| A6 | Analytical CDF derivation matches PIT | Computed: \(Z_{\text{claim}} = Z_{\text{PIT}}\) exactly (error = 0) for all test cases |

## Proof Logic

The claim asserts that for a correctly specified discrete-time spike model, \(R_m = A_m - \log(1 - U_m \cdot p_{\tau_m})\) is i.i.d. Exp(1). We prove this in three parts.

**Part 1: The formula implements the randomized PIT (A6, A1).** Conditional on \(\mathcal{F}_{\tau_{m-1}}\), the next spike time \(\tau_m\) has a discrete CDF \(F(n) = 1 - \exp(-H_n)\), where \(H_n = \sum_{k=\tau_{m-1}+1}^{n} -\log(1-p_k)\). The randomized probability integral transform states that \(Z = F(\tau_m - 1) + U \cdot [F(\tau_m) - F(\tau_m - 1)]\) is Uniform(0,1) for any integer-valued random variable, provided \(U \sim \text{Uniform}(0,1)\) is drawn independently of \(\tau_m\). We verify algebraically that \(Z_m = 1 - \exp(-R_m)\) with \(R_m = A_m - \log(1 - U_m p_{\tau_m})\) equals this PIT formula exactly: both reduce to \(1 - \exp(-A_m)(1 - U_m p_{\tau_m})\), where \(A_m = H_{\tau_m - 1}\) is the accumulated discrete hazard over the bins strictly between spikes (A6, verified to zero error across all test cases). The PIT theorem then guarantees \(Z_m \sim \text{Uniform}(0,1)\), hence \(R_m \sim \text{Exp}(1)\). The analytical CDF computation confirms \(P(Z \leq z) = z\) to machine precision for all tested values (A1).
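The algebraic identity in Part 1 can be checked numerically with a minimal, self-contained sketch (independent of the audited proof.py; function names are illustrative). It compares the textbook randomized PIT against the claimed closed form over random hazard sequences:

```python
import math
import random

def randomized_pit(p, tau, u):
    """Randomized PIT for the discrete CDF F(n) = 1 - exp(-H_n),
    where H_n accumulates -log(1 - p_k) for bins k = 1..n."""
    H = [0.0]
    for pk in p[:tau]:
        H.append(H[-1] - math.log(1.0 - pk))
    F = lambda n: 1.0 - math.exp(-H[n])
    return F(tau - 1) + u * (F(tau) - F(tau - 1))

def claimed_transform(p, tau, u):
    """Z = 1 - exp(-R) with R = A - log(1 - u * p_tau), A = H_{tau-1}."""
    A = -sum(math.log(1.0 - pk) for pk in p[:tau - 1])
    R = A - math.log(1.0 - u * p[tau - 1])
    return 1.0 - math.exp(-R)

random.seed(0)
for _ in range(1000):
    p = [random.uniform(0.01, 0.9) for _ in range(20)]  # random hazards
    tau = random.randint(1, 20)                         # spike bin
    u = random.random()                                 # randomizer
    assert abs(randomized_pit(p, tau, u) - claimed_transform(p, tau, u)) < 1e-12
```

Both expressions collapse to \(1 - e^{-A}(1 - u\,p_\tau)\), so the agreement holds to floating-point precision rather than merely statistically.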

**Part 2: Monte Carlo confirmation across model types (A2, A3, A4).** To rule out implementation subtleties, we generate 100,000 rescaled intervals under three model specifications: constant \(p = 0.3\) (A2), constant \(p = 0.5\) (A3), and sinusoidally modulated \(p_k = 0.1 + 0.08\sin(2\pi k/50)\) (A4). All three pass the KS test for Exp(1) at \(\alpha = 0.05\), with KS statistics well below the critical value of 0.0043. The sample means are 0.999, 0.996, and 0.998 respectively, and the reported variances (0.995 and 0.989) likewise sit close to the Exp(1) mean and variance of 1.
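The time-varying check (A4) can be sketched in miniature, assuming the sinusoidal \(p_k\) above; the simulator and KS routine below are illustrative stand-ins for proof.py, run at a smaller sample size:

```python
import math
import random

def p_of(k):
    """Sinusoidally modulated per-bin spike probability (as in A4)."""
    return 0.1 + 0.08 * math.sin(2 * math.pi * k / 50)

def rescaled_intervals(n_spikes, rng):
    """Simulate the discrete-time model and apply
    R_m = A_m - log(1 - U_m * p_{tau_m}), with A_m the sum of
    -log(1 - p_k) over bins strictly between consecutive spikes."""
    out, k, A = [], 0, 0.0
    while len(out) < n_spikes:
        k += 1
        pk = p_of(k)
        if rng.random() < pk:                   # spike in bin k
            u = rng.random()
            out.append(A - math.log(1.0 - u * pk))
            A = 0.0                             # reset the accumulator
        else:
            A -= math.log(1.0 - pk)             # accumulate discrete hazard
    return out

def ks_stat_exp1(sample):
    """One-sample KS statistic against the Exp(1) CDF 1 - exp(-x)."""
    xs, n, d = sorted(sample), len(sample), 0.0
    for i, x in enumerate(xs):
        c = 1.0 - math.exp(-x)
        d = max(d, abs(c - i / n), abs(c - (i + 1) / n))
    return d

rng = random.Random(0)
r = rescaled_intervals(20000, rng)
# Mean should sit near 1, and the KS statistic should stay below the
# 5% critical value 1.36/sqrt(n) ~ 0.0096 for a well-specified model.
print(sum(r) / len(r), ks_stat_exp1(r))
```

At 20,000 intervals the critical value is larger than the 0.0043 quoted for 100,000 intervals, but the qualitative conclusion is the same.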

**Part 3: Independence (A5).** The lag-1 autocorrelation of consecutive \(Z_m\) values is \(-0.0034\) and the lag-2 autocorrelation is \(0.0035\), both well within the 95% confidence band for zero correlation (\(\pm 0.0062\)). This matches the theoretical argument: the conditional law of \(Z_m\) given \(\mathcal{F}_{\tau_{m-1}}\) is Uniform(0,1) no matter what the history is, so by the tower property \(Z_m\) is unconditionally Uniform(0,1) and independent of all past \(Z\)'s (A5).
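The serial-correlation diagnostic can be reproduced with a short sketch (constant \(p = 0.3\) and 20,000 intervals rather than the 100,000 used in the audit; names are illustrative):

```python
import math
import random

def lag_corr(z, lag):
    """Sample Pearson correlation between z[t] and z[t + lag]."""
    a, b = z[:-lag], z[lag:]
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return cov / math.sqrt(va * vb)

rng = random.Random(2)
p = 0.3
h = -math.log(1.0 - p)              # per-bin discrete hazard
z = []
for _ in range(20000):
    tau = 1
    while rng.random() >= p:        # bins until the first spike
        tau += 1
    r = (tau - 1) * h - math.log(1.0 - rng.random() * p)
    z.append(1.0 - math.exp(-r))    # Z_m = 1 - exp(-R_m)

# Both lags should fall inside the 95% band +/- 1.96/sqrt(n) ~ 0.014.
print(lag_corr(z, 1), lag_corr(z, 2))
```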

## What could challenge this verdict?

Four adversarial checks were conducted. First, we verified that the previous disproof (of the formula \(R_m = A_m + U_m \cdot h_{\tau_m}\)) does not apply here — the two formulas are algebraically distinct, producing \((1-p)^U\) vs. \((1 - Up)\) respectively. Second, we confirmed the PIT theorem applies under conditional distributions where \(p_k\) depends on the filtration, since the conditioning yields a fixed CDF and \(U_m\) is independent by construction. Third, we verified independence analytically via the tower property, noting that the key is \(Z_m | \mathcal{F}_{\tau_{m-1}}\) being constant (not depending on the conditioning). Fourth, we checked boundary behavior at extreme \(p_k\) values and confirmed the formula is well-defined for all \(p \in (0,1)\).
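The first and fourth checks can be illustrated directly: the survivor factors \((1-p)^U\) (from the disproved formula) and \((1 - Up)\) (from the proved one) are compared numerically, and positivity of the log argument is confirmed for \(p\) near both ends of \((0,1)\). The values chosen are illustrative, not those used in the audit.

```python
# The disproved formula R = A + U * h_tau multiplies the survivor term by
# (1 - p)^U; the proved formula R = A - log(1 - U * p) uses (1 - U * p).
# By concavity, (1 - p)^u < 1 - u*p for p, u in (0, 1), so they differ.
for p in (0.01, 0.3, 0.9, 0.999999):
    for u in (0.0, 0.25, 0.5, 0.75, 1.0):
        old = (1.0 - p) ** u          # disproved formula's survivor factor
        new = 1.0 - u * p             # proved formula's survivor factor
        assert new > 0.0              # log argument stays positive for p < 1
        if 0.0 < u < 1.0:
            assert old < new          # strictly distinct in the interior
```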

## Conclusion

**PROVED.** The discrete-time rescaling \(R_m = A_m - \log(1 - U_m \cdot p_{\tau_m})\) produces exact i.i.d. Exp(1) residuals for any correctly specified model. This is verified by: (1) algebraic equivalence with the randomized PIT formula (zero error), (2) analytical CDF verification to machine precision (\(10^{-16}\)), (3) KS tests passing across three different model types (constant \(p = 0.3\), constant \(p = 0.5\), and time-varying \(p_k\)), and (4) no detectable serial dependence. Standard KS/Q-Q goodness-of-fit diagnostics applied to these residuals are therefore valid.

---

Generated by [proof-engine](https://github.com/yaniv-golan/proof-engine) v1.23.0 on 2026-04-19.
