# Proof: Concavity of the Poisson Spike-Train Log-Likelihood

**Generated:** 2026-04-18
**Verdict:** PROVED
**Audit trail:** [proof_audit.md](proof_audit.md) | [proof.py](proof.py)

## Evidence Summary

| ID | Fact | Verified |
|----|------|----------|
| A1 | Composition: concave(affine) is concave | Computed: True (standard convex analysis composition rule) |
| A2 | Convex(affine) is convex ⇒ −f(affine) concave | Computed: True (negation of convex-of-affine) |
| A3 | Non-negative scalar times concave is concave | Computed: True (scaling preserves concavity for \(n_t \geq 0\)) |
| A4 | Log-likelihood is sum of concave terms ⇒ concave | Computed: True (combines A1–A3 over all time bins) |
| A5 | Numerical Hessian eigenvalues all ≤ 0 (f = exp) | Computed: True (max eigenvalue −6.71e-02 across 5 random test points) |
| A6 | MAP multi-start optimization converges to unique optimum | Computed: True (10 random starts, spread 1.17e-10) |
| A7 | ML multi-start optimization converges to unique optimum | Computed: True (10 random starts, spread 2.80e-06) |

## Proof Logic

The Poisson log-likelihood for a spike train with intensity \(\lambda_t = f(\eta_t)\) and linear predictor \(\eta_t = x_t^\top \beta + h_t^\top \gamma + b\) is:

\[\ell(\theta) = \sum_t \bigl[n_t \log f(\eta_t) - f(\eta_t)\,\Delta t\bigr] + \text{const}\]

where \(n_t \in \{0,1\}\) is the spike count in time bin \(t\) and \(\Delta t\) is the bin width. (The argument below uses only \(n_t \geq 0\), so it also covers bins with multiple spikes.)
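For concreteness, this objective can be sketched in a few lines of NumPy. The sketch assumes \(f = \exp\); `Z`, `n`, `dt`, and `theta` are illustrative names (not taken from proof.py), with `Z` stacking the rows \((x_t^\top, h_t^\top, 1)\) so that \(\eta = Z\theta\):

```python
import numpy as np

def log_likelihood(theta, Z, n, dt):
    """ell(theta) = sum_t [n_t * log f(eta_t) - f(eta_t) * dt] with f = exp."""
    eta = Z @ theta                            # affine predictor: eta_t = z_t^T theta
    return np.sum(n * eta - np.exp(eta) * dt)  # log f(eta) = eta when f = exp
```

With \(f = \exp\), \(\log f(\eta) = \eta\), which is why the first term simplifies to `n * eta` in the sketch above.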

**Step 1 — Each \(n_t \log f(\eta_t)\) is concave in \(\theta\).** Since \(f\) is log-concave, \(\log f\) is concave by definition. The linear predictor \(\eta_t(\theta)\) is affine in \(\theta\), and a concave function composed with an affine map remains concave (A1). Multiplying by \(n_t \geq 0\) preserves concavity (A3).

**Step 2 — Each \(-f(\eta_t)\,\Delta t\) is concave in \(\theta\).** Since \(f\) is convex and \(\eta_t(\theta)\) is affine, \(f(\eta_t(\theta))\) is convex in \(\theta\) (A2). Negation and multiplication by \(\Delta t > 0\) yield a concave function.

**Step 3 — The full log-likelihood is concave.** The log-likelihood is a sum of concave terms (from Steps 1–2), plus a constant. The sum of concave functions is concave (A4).
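Steps 1–3 can be spot-checked numerically with a midpoint test, \(\ell\bigl(\tfrac{a+b}{2}\bigr) \geq \tfrac{1}{2}\bigl[\ell(a)+\ell(b)\bigr]\), which every concave function must satisfy. A minimal sketch assuming \(f = \exp\); `Z`, `n`, and `dt` are synthetic illustrative data, not taken from proof.py:

```python
import numpy as np

rng = np.random.default_rng(2)
Z = rng.normal(size=(30, 3))           # illustrative design matrix
n = rng.poisson(0.2, size=30)          # synthetic spike counts
dt = 0.01

def ell(theta):
    """Poisson log-likelihood with f = exp (constant term dropped)."""
    eta = Z @ theta
    return np.sum(n * eta - np.exp(eta) * dt)

# Midpoint concavity: ell((a+b)/2) >= (ell(a) + ell(b)) / 2 for all a, b.
for _ in range(100):
    a, b = rng.normal(size=3), rng.normal(size=3)
    assert ell((a + b) / 2) >= (ell(a) + ell(b)) / 2 - 1e-9
```

For continuous functions, midpoint concavity is equivalent to concavity, so a violation at any random pair would falsify the conclusion of Steps 1–3.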

**Step 4 — Numerical confirmation.** For the concrete case \(f = \exp\) (which is positive, convex, and log-concave), numerical Hessian computation at 5 random parameter vectors confirmed all eigenvalues are strictly negative, with the largest being −6.71e-02 (A5). An independent analytical derivation shows the Hessian is \(-Z^\top \text{diag}(\exp(\eta) \cdot \Delta t)\, Z\), where \(Z\) is the design matrix stacking the rows \((x_t^\top, h_t^\top, 1)\) so that \(\eta = Z\theta\); this is the negation of a positive semidefinite matrix, hence guaranteed negative semidefinite.
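The analytical Hessian form makes this check easy to reproduce. A sketch assuming \(f = \exp\), with an illustrative design matrix `Z` and bin width `dt` (not the data from proof.py):

```python
import numpy as np

def hessian(theta, Z, dt):
    """Hessian of ell at theta for f = exp: -Z^T diag(exp(eta) * dt) Z."""
    eta = Z @ theta
    W = np.exp(eta) * dt               # strictly positive weights
    return -(Z * W[:, None]).T @ Z     # = -Z^T diag(W) Z

rng = np.random.default_rng(0)
Z = rng.normal(size=(50, 4))           # illustrative design matrix
for _ in range(5):                     # 5 random test points, mirroring A5
    theta = rng.normal(size=4)
    eigs = np.linalg.eigvalsh(hessian(theta, Z, dt=0.01))
    assert eigs.max() <= 1e-10         # all eigenvalues <= 0 (up to roundoff)
```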

**Step 5 — Multi-start optimization confirms unique global maximum.** Ten random starting points under BFGS optimization all converged to the same optimum within a log-likelihood spread of 2.80e-06 (A7), consistent with a concave objective having at most one maximum.
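The multi-start check can be sketched with SciPy's BFGS. As above, \(f = \exp\) is assumed, and `Z`, `n`, and `dt` are synthetic stand-ins for the data used in proof.py:

```python
import numpy as np
from scipy.optimize import minimize

def neg_ll(theta, Z, n, dt):
    """Negative log-likelihood and its gradient, with f = exp."""
    eta = Z @ theta
    lam = np.exp(eta) * dt
    return -np.sum(n * eta - lam), -(Z.T @ (n - lam))

rng = np.random.default_rng(1)
Z = rng.normal(size=(200, 3))                  # illustrative design matrix
n = rng.poisson(0.1, size=200).astype(float)   # synthetic spike counts
optima = [minimize(neg_ll, 0.5 * rng.normal(size=3), args=(Z, n, 0.01),
                   jac=True, method="BFGS").fun for _ in range(10)]
spread = max(optima) - min(optima)             # near zero for a concave objective
```

A concave objective can have at most one maximum value, so all ten runs should agree on the optimal negative log-likelihood up to solver tolerance.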

**Step 6 — MAP extension.** If the prior \(p(\theta)\) is log-concave, then \(\log p(\theta)\) is concave. The MAP objective \(\ell(\theta) + \log p(\theta)\) is the sum of two concave functions and is therefore concave. Numerical verification with a Gaussian prior (precision 1.0) confirmed unique convergence across 10 random starts with spread 1.17e-10 (A6).
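The MAP objective differs from Step 5 only by the added log-prior term. A sketch with the isotropic Gaussian prior of precision 1.0 described above (whose log-density is concave); `alpha`, `Z`, `n`, and `dt` are illustrative names, not taken from proof.py:

```python
import numpy as np
from scipy.optimize import minimize

def neg_map(theta, Z, n, dt, alpha=1.0):
    """Negative MAP objective -[ell(theta) + log p(theta)] and its gradient."""
    eta = Z @ theta
    lam = np.exp(eta) * dt
    val = -np.sum(n * eta - lam) + 0.5 * alpha * theta @ theta
    grad = -(Z.T @ (n - lam)) + alpha * theta
    return val, grad

rng = np.random.default_rng(3)
Z = rng.normal(size=(200, 3))                  # illustrative design matrix
n = rng.poisson(0.1, size=200).astype(float)   # synthetic spike counts
optima = [minimize(neg_map, 0.5 * rng.normal(size=3), args=(Z, n, 0.01),
                   jac=True, method="BFGS").fun for _ in range(10)]
spread = max(optima) - min(optima)             # near zero for a concave objective
```

The quadratic prior term also improves conditioning, which is consistent with the MAP spread (1.17e-10) being far tighter than the ML spread (2.80e-06).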

## What could challenge this verdict?

Three adversarial checks were investigated:

1. **Non-vacuous hypotheses.** The hypothesis set (positive + convex + log-concave) is not vacuous: the exponential, identity-on-positive-reals, and softplus functions all satisfy the three conditions simultaneously.
2. **Concavity, not existence.** The claim concerns concavity and the local-equals-global property, not existence of the MLE; existence may require additional compactness or coercivity conditions, but these are not part of the stated claim.
3. **No strict concavity needed.** The "every local maximum is global" result for concave functions does not require strict concavity; this was confirmed by a direct proof from the definition of concavity.
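The non-vacuity claim can be spot-checked numerically with central second differences on a positive grid (so the identity link is covered on its domain). A sketch; the tolerance and grid are illustrative choices:

```python
import numpy as np

def second_diff(g, x, h=1e-4):
    """Central second-difference approximation to g''(x)."""
    return (g(x + h) - 2 * g(x) + g(x - h)) / h**2

fns = {
    "exp": np.exp,
    "identity": lambda x: x,                     # restricted to x > 0
    "softplus": lambda x: np.log1p(np.exp(x)),
}
x = np.linspace(0.1, 5.0, 200)                   # positive domain
for name, f in fns.items():
    assert np.all(f(x) > 0)                                         # positive
    assert np.all(second_diff(f, x) >= -1e-6)                       # convex
    assert np.all(second_diff(lambda t: np.log(f(t)), x) <= 1e-6)   # log-concave
```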

## Conclusion

**PROVED.** The log-likelihood of the Poisson spike-train encoding model is concave in \(\theta\) whenever the link function \(f\) is positive, convex, and log-concave. This follows from standard composition rules of convex analysis: log-concavity makes \(\log f\) concave, convexity makes \(-f\) concave after negation, affine composition preserves both, and sums of concave functions are concave. Numerical verification with \(f = \exp\) confirmed negative semidefiniteness of the Hessian (max eigenvalue −6.71e-02) and unique convergence of multi-start optimization (spread < 3e-06). The MAP extension under any log-concave prior holds by the same summation rule.

---

Generated by [proof-engine](https://github.com/yaniv-golan/proof-engine) v1.23.0 on 2026-04-18.
