With the dataset below, I found that although there is 88% total agreement (4 items at 86%, 4 items at 100%, 2 items at 71%), Fleiss' kappa and Krippendorff's alpha give negative estimates. Is there a way to handle this?
If your goal is simply to find a test that will output a positive answer... then you can try as many as you wish!
If your goal is to understand why Krippendorff's alpha gives you a negative value, just remember that alpha measures agreement relative to what would be expected by chance GIVEN the observed proportions of agreement/disagreement categories in your table.
So the higher your raw percent agreement is driven by one dominant category, the more stringent the chance correction becomes: if you have 90% "1" and 10% "0" in your table, an overwhelming proportion of purely random assignments of the ratings will also reach a high percent agreement, pushing the chance-expected agreement very high, possibly higher than your observed value.
This does not mean that your data show no "agreement"; it means that, GIVEN the observed category proportions, your particular table is not that far from what chance alone would produce.
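To see the mechanism concretely, here is a minimal sketch in Python with made-up ratings (3 raters, 10 binary items, not your actual table): most items are rated "1" unanimously, one item is split, yet Fleiss' kappa comes out negative because the chance-expected agreement exceeds the observed agreement.

```python
import numpy as np

# Hypothetical ratings: 3 raters, 10 binary items.
# 9 items rated "1" unanimously, 1 item split 2 vs 1.
# counts[i, j] = number of raters putting item i in category j (columns: "0", "1")
counts = np.array([[0, 3]] * 9 + [[1, 2]])

n_raters = counts.sum(axis=1)[0]

# Observed agreement: per item, proportion of rater pairs that agree, then averaged
P_i = (np.sum(counts**2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
P_obs = P_i.mean()

# Chance-expected agreement from the overall category proportions
p_j = counts.sum(axis=0) / counts.sum()
P_exp = np.sum(p_j**2)

kappa = (P_obs - P_exp) / (1 - P_exp)
print(f"observed agreement = {P_obs:.3f}")   # ~0.933
print(f"chance agreement   = {P_exp:.3f}")   # ~0.936, higher than observed
print(f"Fleiss' kappa      = {kappa:.3f}")   # ~ -0.03, negative despite >93% raw agreement
```

Because almost every rating falls in one category, random raters would agree about as often as yours do, so the chance-corrected coefficient has almost no room left and can dip below zero.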
At the end of the day, it boils down to defining precisely what you mean by "agreement" and why you want to test it.