The test compares a sample chi² value (x) against a chi² distribution with the appropriate degrees of freedom. One is typically interested in Pr(X ≥ x), which is the "p-value" of the test.
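As a minimal sketch, Pr(X ≥ x) is the survival function of the chi² distribution; the statistic and degrees of freedom below are made-up illustrative values, not taken from any particular test:

```python
# Compute the p-value Pr(X >= x) for an observed chi^2 statistic x
# against a chi^2 distribution with df degrees of freedom.
# x and df are hypothetical values chosen for illustration.
from scipy.stats import chi2

x = 7.81   # observed chi^2 statistic (hypothetical)
df = 3     # degrees of freedom (hypothetical)

p_value = chi2.sf(x, df)  # survival function: Pr(X >= x)
print(p_value)
```

Using `sf` rather than `1 - cdf` avoids loss of precision when the p-value is very small.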
Chi² distributions arise from sums of squared standard normally distributed variables. For this reason, the log likelihood ratio statistics of linear models follow a Chi² distribution*. This is known as Wilks' theorem.
* Exactly, when the distribution of the response variable is exactly normal, and approximately, when the distribution of the response is only approximately normal and/or the sample size is large, as a consequence of the central limit theorem. Therefore chi² tests can also be applied as approximate tests in generalized linear models. The prototypical example is the log-linear model and the analysis of contingency tables. When the response distribution has a nuisance parameter for the variance that must be estimated from the data, the approximation is poor for small samples. In that case it is better to assume an F-distribution for an appropriately scaled statistic, which is again exact for exactly normally distributed variables (with unknown variance) and approximate otherwise.
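The contingency-table case mentioned above can be sketched as follows; the counts are made up for illustration. Passing `lambda_="log-likelihood"` to `scipy.stats.chi2_contingency` selects the likelihood-ratio (G) statistic, which by Wilks' theorem is approximately chi²-distributed:

```python
# Approximate chi^2 (likelihood-ratio) test of independence on a
# 2x2 contingency table. The counts below are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[20, 30],
                  [25, 25]])

# lambda_="log-likelihood" gives the G statistic instead of Pearson's X^2
g, p, df, expected = chi2_contingency(table, lambda_="log-likelihood")
print(df)  # a 2x2 table has (2-1)*(2-1) = 1 degree of freedom
```

For large samples the G statistic and Pearson's X² are asymptotically equivalent; both are referred to the same chi² distribution.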
I watched the video. I think the largest stumbling point for a beginner trying to understand the test would be the terms "expected" and "predicted" that are used throughout the video but aren't explained. Maybe this is explained in the subsequent video (which isn't out at the time of writing).