Losing about half your items is normal. Please refer to my study guide on EFA. It means that the items you have left measure these identified constructs reliably. What I suggest is that you reconsider the meaning of each of your factors based on the items that they include. This is a qualitative interpretation process.
@Simge As long as you do not lose the coherence between the items and the conceptual definition of the construct being measured, there is nothing wrong with dropping items. There are plenty of scale-development studies in which the author(s) started with 40-50 items and, by the end of validation, were left with about half of them. The important thing is that the items remaining after validation should measure what they are meant to measure.
When you drop items, you lose some content related to the constructs being measured. This is why we write multiple items per behaviour, for example 3 to 5 items each. So if EFA leads you to drop all the items written for a given behaviour, you have to add new items to restore content validity, which you can check with subject matter experts. In summary, you can create a scale with high factor loadings and a high rate of explained variance, but it may still not fully cover the structure you want to measure.
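The item-dropping step being discussed can be sketched in a few lines of Python. Everything here is illustrative: the loadings matrix, the item names, and the 0.40 cutoff are invented assumptions, not values from this thread (0.40 is just one common rule of thumb; conventions vary by field).

```python
import numpy as np

# Hypothetical EFA loadings: 6 items x 2 factors (values invented for illustration)
items = ["q1", "q2", "q3", "q4", "q5", "q6"]
loadings = np.array([
    [0.72, 0.10],
    [0.65, 0.05],
    [0.15, 0.20],   # loads weakly on both factors
    [0.08, 0.81],
    [0.30, 0.35],   # below the cutoff on both factors
    [0.12, 0.58],
])

CUTOFF = 0.40  # assumed rule-of-thumb threshold, not a universal standard

# Keep items whose strongest absolute loading on any factor reaches the cutoff
keep = np.abs(loadings).max(axis=1) >= CUTOFF
retained = [item for item, k in zip(items, keep) if k]
dropped = [item for item, k in zip(items, keep) if not k]

print("retained:", retained)  # q1, q2, q4, q6
print("dropped:", dropped)    # q3, q5
```

Note that this mechanical filter is exactly what the answer above warns about: if q3 and q5 were the only items covering one behaviour, dropping them leaves a content-validity gap that no loading statistic will reveal.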
If you change a questionnaire around, you are basically going back to the development stage. Fine, your re-analysed data may achieve good statistics on the reduced sample data, but you are simply 'modelling the data' at this stage. As a reviewer I would reject any research that relies for its conclusions on this re-analysis. What you now need to do is gather a fresh sample with your re-constructed questionnaire and run the whole gamut of statistics on the fresh sample. You will find (perhaps to your surprise, but not to the surprise of an experienced psychometrician) that those lovely values you derived from your previous, reduced and edited subset are weakened, if they hold at all.
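One common numerical check when re-running the analysis on a fresh sample is Tucker's congruence coefficient between the two sets of factor loadings. The sketch below uses invented loading vectors (this thread gives no data); values above roughly 0.95 are conventionally read as the factor having replicated. Note that congruence compares the pattern of loadings, so loadings can shrink in magnitude, as the answer above predicts, while the pattern still replicates.

```python
import numpy as np

def congruence(x, y):
    """Tucker's congruence coefficient between two factor-loading vectors."""
    return float(np.sum(x * y) / np.sqrt(np.sum(x**2) * np.sum(y**2)))

# Hypothetical loadings for the same retained items in two samples
original = np.array([0.72, 0.65, 0.81, 0.58])  # development sample
fresh    = np.array([0.61, 0.55, 0.70, 0.40])  # fresh sample: magnitudes shrink

phi = congruence(original, fresh)
print(round(phi, 3))
```

A high coefficient here says only that the same items still lead on the same factor; the shrinkage in the raw values is exactly the weakening the answer warns about.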