# compute_cdf_percentiles

`grizli.fitting.compute_cdf_percentiles(fit, cdf_sigmas=array([-5., -4.8, -4.6, -4.4, -4.2, -4., -3.8, -3.6, -3.4, -3.2, -3., -2.8, -2.6, -2.4, -2.2, -2., -1.8, -1.6, -1.4, -1.2, -1., -0.8, -0.6, -0.4, -0.2, 0., 0.2, 0.4, 0.6, 0.8, 1., 1.2, 1.4, 1.6, 1.8, 2., 2.2, 2.4, 2.6, 2.8, 3., 3.2, 3.4, 3.6, 3.8, 4., 4.2, 4.4, 4.6, 4.8, 5.]))`

Compute tabulated percentiles of the CDF for a (lossy) compressed version of the redshift PDF.

The `pdf` values from the `fit` table are interpolated onto a fine redshift grid (dz/(1+z) = 0.0001) before the full CDF is calculated; the output percentiles are then found by interpolating that CDF.
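The interpolate-accumulate-invert procedure described above can be pictured with a short sketch. This is an illustrative re-implementation under the stated assumptions (fine log grid, trapezoid-rule CDF, inversion at Normal percentiles), not grizli's internal code, and the function name `cdf_percentiles_sketch` is hypothetical:

```python
import numpy as np
from scipy.stats import norm


def cdf_percentiles_sketch(zgrid, pdf, cdf_sigmas, dz=1e-4):
    """Illustrative sketch of the computation (not grizli's actual code)."""
    # Fine grid with constant dz/(1+z): uniform steps in log(1+z)
    zfine = np.exp(np.arange(np.log(1 + zgrid[0]),
                             np.log(1 + zgrid[-1]), dz)) - 1

    # Interpolate the tabulated PDF onto the fine grid
    pfine = np.interp(zfine, zgrid, pdf)

    # Cumulative trapezoid-rule integral, normalized so the CDF ends at 1
    steps = np.diff(zfine) * (pfine[1:] + pfine[:-1]) / 2.0
    cdf = np.concatenate([[0.0], np.cumsum(steps)])
    cdf /= cdf[-1]

    # Invert the CDF at the Normal-distribution percentiles
    cdf_y = norm.cdf(cdf_sigmas)
    cdf_x = np.interp(cdf_y, cdf, zfine)
    return cdf_x, cdf_y


# Single Gaussian at z = 1 with sigma = 0.05: the 0-sigma percentile
# should recover the median, z ~ 1, and +/-1 sigma should give ~0.95, ~1.05
zgrid = np.linspace(0.5, 1.5, 301)
pdf = norm.pdf(zgrid, loc=1.0, scale=0.05)
cdf_x, cdf_y = cdf_percentiles_sketch(zgrid, pdf, np.array([-1.0, 0.0, 1.0]))
```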

The following example shows the usage, including how the PDF can be reconstructed from the compressed percentiles (the final `np.gradient` step and plot simply exploit the fact that the PDF is the derivative of the CDF):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

from grizli import utils
from grizli.fitting import compute_cdf_percentiles, CDF_SIGMAS

# Logarithmic redshift grid, though the spacing doesn't matter
zgrid = utils.log_zgrid([0.01, 3.4], 0.001)

# Fake PDF from a pair of Gaussians
peaks = [[1, 0.1], [1.5, 0.4]]
pdf = np.zeros_like(zgrid)
for p in peaks:
    pdf += norm.pdf(zgrid, loc=p[0], scale=p[1]) / len(peaks)

# Put it in a table
fit = utils.GTable()
fit['zgrid'], fit['pdf'] = zgrid, pdf

cdf_x, cdf_y = compute_cdf_percentiles(fit, cdf_sigmas=CDF_SIGMAS)

# PDF is the derivative of the CDF, so it can be recovered
# numerically from the tabulated percentiles
pdf_c = np.gradient(cdf_y, cdf_x)

plt.plot(zgrid, pdf, label='original PDF')
plt.plot(cdf_x, pdf_c, label='reconstructed from percentiles')
plt.legend()
```
## Parameters

**fit** : Table
Table that contains, at a minimum, columns `zgrid` and `pdf`, e.g., as output from `grizli.fitting.GroupFitter.xfit_redshift`.

**cdf_sigmas** : array-like
Places to evaluate the CDF, in terms of "sigma" of a Normal (Gaussian) distribution, i.e.,

```python
import scipy.stats
cdf_y = scipy.stats.norm.cdf(cdf_sigmas)
```

## Returns

**cdf_x** : array-like, size of `cdf_sigmas`
Redshifts where the CDF takes the values `cdf_y` corresponding to `cdf_sigmas` of a Normal distribution.

**cdf_y** : array-like
CDF values at `cdf_sigmas`.
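Because `cdf_x` pairs a redshift with each "sigma" of a Normal distribution, summary statistics follow directly by interpolating in sigma space: sigma = 0 gives the median, and ±1 and ±2 sigma bracket the 68% and 95% credible intervals. A minimal sketch, assuming a perfectly Gaussian redshift PDF z ~ N(1.0, 0.05), for which the percentiles have the closed form loc + scale × sigma:

```python
import numpy as np
from scipy.stats import norm

# Default sigma grid from the signature: -5 to 5 in steps of 0.2
cdf_sigmas = np.arange(-25, 26) * 0.2

# Assumed toy input: for an exactly Gaussian PDF, z ~ N(1.0, 0.05),
# the tabulated percentiles are simply loc + scale * sigma
cdf_x = 1.0 + 0.05 * cdf_sigmas
cdf_y = norm.cdf(cdf_sigmas)

# Median and 68% credible interval by interpolating at sigma = 0, +/-1
z_med = np.interp(0.0, cdf_sigmas, cdf_x)
z_lo, z_hi = np.interp([-1.0, 1.0], cdf_sigmas, cdf_x)
# z_med ~ 1.0, interval ~ [0.95, 1.05]
```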