I want to do some analyses on the edit history of Wikipedia articles, but the first big practical obstacle is simply handling the data. Would importing the data into an SQL database be a good way of working with it? Any other suggestions?
Try JWPL (Java Wikipedia Library); it parses the dump and loads it into a database.
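If you care about the full edit history rather than just the current article text, also look at JWPL's RevisionMachine component, which deals with the revision dumps. Once the import step has run, querying the database from Java looks roughly like the sketch below. This follows JWPL's getting-started examples from memory, so check the class and method names against the current JWPL docs; the connection settings are of course placeholders.

```java
import de.tudarmstadt.ukp.wikipedia.api.DatabaseConfiguration;
import de.tudarmstadt.ukp.wikipedia.api.Page;
import de.tudarmstadt.ukp.wikipedia.api.Wikipedia;
import de.tudarmstadt.ukp.wikipedia.api.WikiConstants.Language;

public class JwplExample {
    public static void main(String[] args) throws Exception {
        // Connection settings for the MySQL database that JWPL's import
        // tools have already populated (all values are placeholders).
        DatabaseConfiguration dbConfig = new DatabaseConfiguration();
        dbConfig.setHost("localhost");
        dbConfig.setDatabase("wikipedia_en");
        dbConfig.setUser("wikiuser");
        dbConfig.setPassword("secret");
        dbConfig.setLanguage(Language.english);

        // Open the API wrapper and fetch a single article.
        Wikipedia wiki = new Wikipedia(dbConfig);
        Page page = wiki.getPage("Data analysis");
        System.out.println(page.getTitle() + " : " + page.getText().length() + " chars of markup");
    }
}
```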
As long as you stay within a single Wikipedia dump, you don't need distributed processing like Hadoop; the data fits on a single decent machine (though probably not your MacBook Air).
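For a sense of what single-machine processing looks like: you can stream even a pages-meta-history dump with a pull parser and never hold more than one element in memory. A rough sketch, assuming a decompressed XML dump; the `page` and `revision` element names match the dump schema as far as I know, but verify against your export version.

```java
import java.io.FileInputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class RevisionCounter {
    public static void main(String[] args) throws Exception {
        // Path to a decompressed pages-meta-history dump (placeholder path).
        String dumpPath = args.length > 0 ? args[0] : "enwiki-pages-meta-history.xml";

        XMLInputFactory factory = XMLInputFactory.newInstance();
        XMLStreamReader reader = factory.createXMLStreamReader(new FileInputStream(dumpPath));

        long pages = 0, revisions = 0;
        while (reader.hasNext()) {
            // Stream over start tags only; no DOM is ever built.
            if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                String name = reader.getLocalName();
                if ("page".equals(name)) {
                    pages++;
                } else if ("revision".equals(name)) {
                    revisions++;
                }
            }
        }
        reader.close();
        System.out.printf("%d pages, %d revisions%n", pages, revisions);
    }
}
```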
Install a standalone Hadoop framework, either in a VMware virtual machine or directly on the host, then import the data into HBase (a columnar database). Once the data is in HBase, you can use the Hive query language (HiveQL) to run the analyses you want on it.
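If you go that route, the analysis step is ordinary SQL-style queries. Here is a sketch of running one from Java over HiveServer2's JDBC driver; the host, port, credentials and the hypothetical revisions(page_id, rev_id, editor, ts) table are all placeholders for whatever schema you end up loading.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveRevisionQuery {
    public static void main(String[] args) throws Exception {
        // HiveServer2 JDBC driver; connection details are placeholders.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://localhost:10000/default", "hive", "");
             Statement stmt = conn.createStatement();
             // Top ten editors by number of revisions, against a
             // placeholder "revisions" table.
             ResultSet rs = stmt.executeQuery(
                     "SELECT editor, COUNT(*) AS edits "
                   + "FROM revisions GROUP BY editor "
                   + "ORDER BY edits DESC LIMIT 10")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
            }
        }
    }
}
```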
In addition to the JWPL mentioned above, you may also want to look at the Lucene Search API. There is an ExtractWikipedia class (in the benchmark package) that creates a Lucene index in order to benchmark performance.
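Once an index exists, querying it takes only a few lines. A sketch against the post-4.x Lucene API (exact calls vary by version, and the index path plus the "body"/"title" field names are placeholders that depend on how the index was built):

```java
import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.FSDirectory;

public class SearchWikipediaIndex {
    public static void main(String[] args) throws Exception {
        // Open an existing on-disk index (placeholder directory name).
        try (DirectoryReader reader =
                 DirectoryReader.open(FSDirectory.open(Paths.get("wiki-index")))) {
            IndexSearcher searcher = new IndexSearcher(reader);

            // Parse a free-text query against a placeholder "body" field.
            Query query = new QueryParser("body", new StandardAnalyzer())
                              .parse("edit history");

            // Print the top ten hits with their scores.
            TopDocs hits = searcher.search(query, 10);
            for (ScoreDoc hit : hits.scoreDocs) {
                System.out.println(searcher.doc(hit.doc).get("title") + "  " + hit.score);
            }
        }
    }
}
```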