Yes, ethics in AI is a relevant topic for further research.
However, there are many angles on this topic, for example ethics in how AI is designed, constructed, used, or perhaps even 'killed'. Will an AI unit deserve the same legal standing as an unborn foetus, a domesticated animal, a farm animal, or a pest? Who decides? Who writes the ethical rules that define the ethics of the AI? Will there be an international standard, or will dictator X's AI machines be allowed different ethical rules from NATO's and some enterprise AIs? There are many AI ethics issues to explore, but only you can decide which research angle you want to pursue.
I drove past the unblinking stare of the red IR eye of a police speed camera the other night and shuddered, because I realised how future AI will be critically monitoring (and punishing) future human generations. What are the ethics of a future built on "cross this arbitrary, unquestionable line and you'll be zapped"? The most unethical part, I think, is that the majority of the global human population has little idea of AI's impact, and of how they will become outlaws if they actively avoid interacting with AI.
There are three types of people: those who make it happen, those who watch it happen, and those who ask "what the hell just happened?". I think that with AI, most humans will be in the last group, as we silently and obediently toe the line, waiting for the AI device to check us in/over/out and approve our freedom. Dystopian? Well, maybe for you. But it is like heaven for the AI. And who's running the show?
My hypothesis on the ethics of AI is that the medium-term eventuality will be zero ethics. A thought experiment suggests why: if A programs ethics into their AI and B doesn't, then B's unconstrained, unethical AI will probably win an unregulated fight. And the golden rule is "the winner makes the rules". Run that program a few times and see what the end result is. Hopefully my hypothesis can be proven wrong, or an intervention to somehow police and arrest unethical, rogue AI will be quickly designed and deployed!
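To make "run that program a few times" concrete, here is a minimal sketch of that thought experiment as a repeated contest. Everything in it is an assumption for illustration: the per-fight edge P_UNETHICAL_WINS of the unconstrained AI, the CONVERSION rate at which the winner absorbs the loser's share, and the round count are invented numbers, not measurements of any real system.

```python
import random

# Minimal sketch of the thought experiment, not a claim about real AI systems.
# Assumptions (all invented for illustration):
#   - each round pits an "ethical" AI (constrained in its moves) against an
#     "unethical" one in an unregulated fight,
#   - the unethical agent has a fixed edge P_UNETHICAL_WINS in any single fight,
#   - the winner converts a small fraction of the loser's population share to
#     its own side each round ("the winner makes the rules").

P_UNETHICAL_WINS = 0.6   # hypothetical per-fight edge of the unconstrained AI
CONVERSION = 0.05        # fraction of the loser's share captured per round
ROUNDS = 200

def simulate(ethical_share=0.9, seed=0):
    """Return the ethical AIs' population share after ROUNDS contests."""
    rng = random.Random(seed)
    unethical_share = 1.0 - ethical_share
    for _ in range(ROUNDS):
        if rng.random() < P_UNETHICAL_WINS:
            taken = ethical_share * CONVERSION      # unethical side wins
            ethical_share -= taken
            unethical_share += taken
        else:
            taken = unethical_share * CONVERSION    # ethical side wins
            unethical_share -= taken
            ethical_share += taken
    return ethical_share

if __name__ == "__main__":
    for seed in range(5):
        print(f"run {seed}: ethical share after {ROUNDS} rounds = "
              f"{simulate(seed=seed):.3f}")
```

The point of the sketch is the compounding: even a modest per-fight edge (0.6 here) drives the ethical share toward zero over enough rounds, which is the hypothesis. Pushing P_UNETHICAL_WINS below 0.5, i.e., some policing intervention that makes unethical behaviour costly, reverses the outcome.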
I think you risk hitting a dead end, as you would have to select a value or belief system against which to ground any discussion of ethics.
It seems that if you follow ethics-related developments, you are bound to eventually discover that it is primarily not about the "content" (in your case AI/Information Systems) but about the presuppositions of the adopted reference system.
Your challenge may be that in order to talk at all about "ethics" (even in the context of AI and Systems) you would have to somehow link the discussion to humans (i.e., the agents on the receiving end of the potential impacts of the Systems/AI) and their values, and these cannot be discussed without making almost metaphysical commitments.
Thus I would rather approach it from the other side, as also embedded in your questions. For example: considering that AI-based robotics and Deep Learning make it possible to "let go" of 50, 70, or 80% of the current workforce, what type of real consequences could that have on the actual population, irrespective of whether it would be justified as "ethical" or not? Good luck!