Basically, a Type I error occurs when the null hypothesis is true but your ML model rejects it (a false positive). A Type II error occurs when the null hypothesis is false but the model fails to reject it (a false negative). The risks of these two errors are therefore inversely related: reducing one tends to increase the other. It is important to determine which error (I or II) has the more serious consequences in your context (case study).
A Type I error is equivalent to a false positive; a Type II error is equivalent to a false negative. A Type I error is the rejection of a hypothesis that ought to be accepted; a Type II error is the acceptance of a hypothesis that ought to be rejected. Let's take biometrics as an example: when someone scans their finger, a Type I error is the possibility of rejection even for an authorized match, while a Type II error is the possibility of acceptance even for a wrong/unauthorized match.
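To make the correspondence concrete, here is a minimal sketch (with made-up labels and a hypothetical `error_counts` helper) that counts Type I and Type II errors from binary predictions, treating 1 as the positive class ("authorized match"):

```python
def error_counts(y_true, y_pred):
    """Count Type I and Type II errors for binary labels (1 = positive)."""
    # Type I error (false positive): predict 1 when the truth is 0
    type1 = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    # Type II error (false negative): predict 0 when the truth is 1
    type2 = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return type1, type2

# Toy data: one unauthorized scan accepted, one authorized scan rejected
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]
print(error_counts(y_true, y_pred))  # (1, 1)
```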
To add to Jean-Claude Miroir's answer: since you asked about the relation between Type 1 and Type 2 errors in ML, the total error can be written as:
Type 1 error + Type 2 error = Total error.
As can be seen, the two types of errors are inversely related (keeping the total error fixed and > 0): decreasing the Type 1 error increases the Type 2 error, and vice versa. For example, take the extreme cases in a classification problem: classifying all test samples as positive yields the maximum Type 1 error but zero Type 2 error. Conversely, classifying all samples as negative yields zero Type 1 error but the maximum Type 2 error.
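The two extreme cases above can be checked directly. This sketch (using made-up labels and a hypothetical `error_counts` helper) classifies every sample as positive and then every sample as negative, and counts the resulting errors:

```python
def error_counts(y_true, y_pred):
    """Count Type 1 (false positive) and Type 2 (false negative) errors."""
    type1 = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    type2 = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return type1, type2

y_true = [1, 0, 1, 0, 0, 1]  # toy labels: 3 positives, 3 negatives

all_pos = [1] * len(y_true)  # classify everything as positive
all_neg = [0] * len(y_true)  # classify everything as negative

print(error_counts(y_true, all_pos))  # (3, 0): max Type 1, zero Type 2
print(error_counts(y_true, all_neg))  # (0, 3): zero Type 1, max Type 2
```

This illustrates the trade-off: pushing one error type to zero drives the other to its maximum, which is why the decision threshold must be chosen based on which error is costlier.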