We have a field for this: computability theory. We prove theorems that show problems are undecidable. If a problem isn't undecidable, then there exists an algorithm that solves it.
NP-complete problems are decidable (they are all in NP, so a brute-force search over candidate solutions decides them, and they all reduce to and from SAT in some way, shape, or form), hence the set of undecidable problems is disjoint from the set of NP-complete problems. If anything, I'd see a closer relationship between NP-hard problems and undecidable problems, as there are undecidable problems that are NP-hard (e.g., the Halting Problem). This makes sense, since NP-hard problems are at least as hard as NP-complete problems.
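To make the decidability point concrete, here is a minimal sketch (the clause representation and function name are mine, not from this discussion): SAT, the canonical NP-complete problem, can be decided by exhaustively trying every assignment. It takes exponential time, but it always terminates, which is all that decidability requires.

```python
from itertools import product

def brute_force_sat(clauses, num_vars):
    """Decide CNF satisfiability by trying all 2^num_vars assignments.

    A clause is a list of non-zero ints: k means variable k, -k its negation.
    Exponential time, but guaranteed to halt, hence the problem is decidable.
    """
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 or x2) and (not x1 or x2) is satisfiable (take x2 = True)
print(brute_force_sat([[1, 2], [-1, 2]], 2))  # True
# x1 and (not x1) is unsatisfiable
print(brute_force_sat([[1], [-1]], 1))        # False
```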
The descriptions of most NP-complete problems we know can be relatively short. Typically NP-complete problems are easy to describe mathematically (not always true, but the core ones are). I wouldn't use a phrase like "irreducibly complex" for NP-complete problems, because we don't know that they are. Also, in what way would they be irreducibly complex? Their description? The number of bits needed to encode an algorithm that solves them onto the tape of a Turing Machine? The time complexities of the algorithms that solve them? It isn't clear. Note that there are problems that aren't NP-complete but whose algorithms meet the known lower bounds for the problem being solved (e.g., merge sort for comparison-based sorting); are those irreducibly complex?
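To illustrate the merge sort remark, here is a small sketch (the comparison-counting instrumentation is my own, hypothetical) that counts the comparisons merge sort performs and sets them against the information-theoretic lower bound of ceil(log2(n!)) comparisons for any comparison-based sort.

```python
from math import ceil, log2
import random

def merge_sort(a, counter):
    """Sort a list while counting element comparisons in counter[0]."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid], counter), merge_sort(a[mid:], counter)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

n = 1000
data = random.sample(range(10 * n), n)
counter = [0]
assert merge_sort(data, counter) == sorted(data)

# log2(n!) computed as a sum to avoid huge intermediate integers
lower_bound = ceil(sum(log2(k) for k in range(2, n + 1)))
print("merge sort comparisons:", counter[0])
print("lower bound ceil(log2(n!)):", lower_bound)
```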
I've read Chaitin's work in the past (I've read some of it and own a couple of his books); most of the time, when he discusses these concepts directly, he is working in the realm of computability, not intractability. It would make sense to talk about these ideas in terms of computability theory. He also applies them to other fields, which can be interesting at times.
Thanks for your input. Now I grasp the clear distinction between the complex and the complicated through the ideas of computability and intractability. On the issue of "irreducibly complex", would information entropy be a good metric for it?
It appears to me that, in the paper you cited, Chaitin uses the phrase "irreducibly complex" as a synonym for "algorithmically random." This is quite different from computational complexity such as NP-completeness. Algorithmic randomness is defined in terms of Kolmogorov complexity.

The prefix-free Kolmogorov complexity K(s) of a binary string s is defined to be the length of the shortest program that outputs s. Prefix-free means that no program is a prefix of another program. We assume a fixed optimal computer. Changing optimal computers changes the Kolmogorov complexities of strings, but the change is bounded by a single constant for all strings. Results in Kolmogorov complexity are usually stated "up to an additive constant", just as results in the analysis of algorithms are usually stated "up to a multiplicative constant" (big O notation).

An infinite binary sequence x is algorithmically random if there is a constant b such that for all n, K(x[n]) >= n - b, where x[n] is the initial segment of x consisting of the first n bits of x. This means that the initial segments of x cannot be described using short programs. The programs must be almost as long as the strings themselves. (The programs can be shorter only by the additive constant b.)

Algorithmically random sequences cannot be computable. A computable sequence x can be described by a program of finite length, so an initial segment x[n] can be described by that finite program plus a description of n, which can be described using fewer than 2log(n) + O(1) bits. This implies x is not algorithmically random. However, although algorithmically random implies not computable, the converse implication doesn't hold. For instance, if x = x0, x1, x2, ... is algorithmically random, then x0, 0, x1, 0, x2, 0, ... is not computable, but it is also not algorithmically random, since an initial segment of length n contains only about n/2 bits of information.

Algorithmically random sequences are unpredictable in the sense that no program can predict the bits of the sequence with better than 50% accuracy.
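Kolmogorov complexity itself is uncomputable, but a rough, hedged illustration of the idea is possible with an ordinary compressor, which gives an upper bound on a string's description length. The sketch below (my own, not from Chaitin's paper; the use of zlib and the variable names are assumptions) compresses a trivially computable pattern, a random-looking bit string, and the zero-interleaved version of that string: the computable one compresses to almost nothing, and the interleaved one to roughly half the size of the random one, mirroring the points above.

```python
import os
import zlib

def compressed_len(bits):
    """Crude upper bound on description length: zlib-compressed size in bytes."""
    return len(zlib.compress(bits.encode("ascii"), 9))

n = 4096
computable = "01" * (n // 2)                                        # trivially computable pattern
random_bits = "".join(format(b, "08b") for b in os.urandom(n // 8))[:n]
interleaved = "".join(bit + "0" for bit in random_bits[: n // 2])   # x0 0 x1 0 x2 0 ...

# Bits are stored one per ASCII character, so even perfectly random data
# has a floor of about n/8 bytes plus compressor overhead.
for name, s in [("computable", computable),
                ("random", random_bits),
                ("interleaved", interleaved)]:
    print(f"{name:12s} length={len(s)}  compressed ~ {compressed_len(s)} bytes")
```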