Models like AlphaGo are built around reinforcement learning, so the system can improve with each training step. ChatGPT-style LLMs take a different direction, aiming at general-purpose capabilities learned from a large corpus of natural language. Although the two share certain similarities, AlphaGo is far more specialized and focused on a single task. The latest GPT update introduces a tool called GPTs, which tries to adapt more specifically to each user, but there is still some way to go.
As far as I know, AlphaGo relies heavily on Monte Carlo Tree Search. To choose a move, many games are simulated from the current board position, and the candidate move that turns out most successful across those simulations is the one actually played. In my understanding, the neural nets are used to bias the probability distribution over which moves get explored.
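Roughly, the core idea looks like the toy sketch below: flat Monte Carlo move selection with random playouts. This is a simplification, not AlphaGo's actual MCTS, which additionally builds a search tree and uses policy/value networks to guide it. `GameState` and its methods (`legal_moves`, `play`, `is_terminal`, `winner`) are hypothetical names used only for illustration.

```python
import random

def random_playout(state, player):
    """Play random moves until the game ends; return 1 if `player` won, else 0."""
    while not state.is_terminal():
        state = state.play(random.choice(state.legal_moves()))
    return 1 if state.winner() == player else 0

def choose_move(state, player, playouts_per_move=200):
    """Pick the candidate move whose random playouts win most often for `player`."""
    best_move, best_rate = None, -1.0
    for move in state.legal_moves():
        wins = sum(random_playout(state.play(move), player)
                   for _ in range(playouts_per_move))
        rate = wins / playouts_per_move
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move
```

In the full algorithm, the policy network replaces the uniform random choice with a learned prior over moves, and the value network can stand in for playing games out to the end.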
AlphaGo doesn't seem to reason in terms of Go proverbs or sentence-like wisdom, such as "Two eyes live" or "When you can't make two eyes, attack a neighbor or escape to the environment." These are things an LLM-based Go player could be taught.