I recommend using wavelets; they are still a good approach. Many learning-based approaches only help with choosing the wavelet basis functions. So, if you want a high compression ratio, you can combine wavelets with other learning-based approaches, such as deep learning.
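If it helps, here is a minimal sketch of the basic wavelet idea (decompose, keep only the largest coefficients, reconstruct). It assumes the PyWavelets (pywt) package is available; the wavelet name, decomposition level, and keep ratio are purely illustrative choices, not the combined wavelet + learning scheme itself.

```python
# Minimal wavelet-based lossy compression sketch using PyWavelets (assumed installed).
import numpy as np
import pywt

def wavelet_compress(signal, wavelet="db4", level=4, keep_ratio=0.1):
    """Keep only roughly the largest `keep_ratio` fraction of wavelet coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    flat = np.concatenate([np.abs(c) for c in coeffs])
    threshold = np.quantile(flat, 1.0 - keep_ratio)  # cut-off so ~keep_ratio survive
    coeffs = [pywt.threshold(c, threshold, mode="hard") for c in coeffs]
    return pywt.waverec(coeffs, wavelet)

# Example: compress a noisy sine wave and inspect the reconstruction error.
t = np.linspace(0, 1, 1024)
x = np.sin(2 * np.pi * 5 * t) + 0.05 * np.random.randn(t.size)
x_hat = wavelet_compress(x)
print("RMSE:", np.sqrt(np.mean((x - x_hat[: x.size]) ** 2)))
```

In practice you would store the surviving coefficients (quantized and entropy-coded) rather than reconstructing immediately; the sketch only shows where the information loss happens.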
Among common compression algorithms, Huffman encoding can be considered the most efficient when compression time, decompression time, and compression ratio are all taken into account.
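For reference, here is a small pure-Python sketch of Huffman coding (my own illustrative version, not taken from any particular library): symbol frequencies determine code lengths, which is where the compression comes from.

```python
# Illustrative Huffman coding: build a prefix code from symbol frequencies.
import heapq
from collections import Counter

def huffman_codes(data):
    """Return a dict mapping each symbol to its prefix-free bitstring."""
    freq = Counter(data)
    # Heap entries: (frequency, tie-breaker, tree); tree is a symbol or a (left, right) pair.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: only one distinct symbol
        return {heap[0][2]: "0"}
    counter = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (left, right)))
        counter += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

text = "abracadabra"
codes = huffman_codes(text)
encoded = "".join(codes[ch] for ch in text)
print(codes)
print(len(encoded), "bits vs", 8 * len(text), "bits uncompressed")
```

Frequent symbols get short codes and rare symbols get long ones, so the encoded length approaches the entropy of the symbol distribution.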
The future is never certain, but based on current trends some predictions can be made about where data compression is heading. Context mixing algorithms such as PAQ and its variants have started to gain popularity; they tend to achieve the highest compression ratios but are usually slow. As hardware speeds continue to increase, context mixing algorithms will likely flourish: their high compression ratios become more attractive as faster hardware overcomes the speed penalty. The algorithm that PAQ sought to improve, Prediction by Partial Matching (PPM), may also see new variants. Finally, the Lempel-Ziv Markov chain Algorithm (LZMA) has consistently shown an excellent compromise between speed and compression ratio and will likely spawn more variants in the future. LZMA may even be the "winner" as it is further developed, having already been adopted in numerous competing compression formats since it was introduced with the 7-Zip format. Another potential development is compression via sub-string enumeration (CSE), an up-and-coming technique that has not yet seen many software implementations. In its naive form it performs similarly to bzip2 and PPM, and researchers have been working to improve its efficiency.
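As a rough illustration of the speed/ratio trade-off mentioned above, the Python standard library already ships zlib, bz2, and lzma, so you can compare them on your own data. Note the toy input below is highly repetitive, so the ratios it produces are not representative of real corpora.

```python
# Quick, illustrative comparison of stdlib codecs; use representative data for real benchmarks.
import bz2
import lzma
import time
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 2000
for name, compress in [("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)]:
    t0 = time.perf_counter()
    out = compress(data)
    dt = time.perf_counter() - t0
    print(f"{name}: {len(out)} bytes ({len(out) / len(data):.3%} of original), {dt * 1e3:.1f} ms")
```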
There are competing definitions of deep learning. Some consider it a branch of machine learning; others define it differently.
Still, given your interest in data compression, reading this article should give you a good perspective on how to implement a deep learning algorithm for that purpose:
https://arxiv.org/pdf/1705.05823.pdf
And there are already many tools built on deep learning algorithms by small and large companies alike, making it rather simple to implement such methods or integrate them with your data.
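To make the idea concrete, here is a toy autoencoder sketch (assuming PyTorch is installed; this is not the method from the linked paper): the encoder squeezes inputs through a narrow bottleneck whose activations act as the "compressed" representation, and the decoder reconstructs the input from them. All layer sizes and hyperparameters are illustrative.

```python
# Toy autoencoder: the bottleneck activations serve as a learned "compressed" code.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, n_in=64, n_latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 32), nn.ReLU(), nn.Linear(32, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(), nn.Linear(32, n_in))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(256, 64)  # stand-in data; use real signals or image patches in practice
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)  # reconstruction error drives training
    loss.backward()
    opt.step()
print("reconstruction MSE:", loss.item())
```

A practical learned codec would additionally quantize and entropy-code the latent representation, which is where methods like the one in the linked paper go well beyond this sketch.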