While working in usable security, we often get stuck at points where we need to decide what to prioritise: user experience or maintaining security. What rules of thumb or principles help in making such decisions faster?
Security, usability, and Total Cost of Ownership (TCO) are fundamental parameters that are the most difficult to reconcile. It is a volatile mixture of science (security), the tastes and impressions of users (usability), which vary greatly depending on whose interests are being protected (the employer's or their own), and the economic interests (TCO) of the vendors and customers of the security system.
There are rules of thumb grounded in reason, and there is a reality full of fools, prejudice, limited understanding, and self-interested pressure. User authentication systems are a clear example of these contradictions, often leading to massive breaches of user credentials at industry giants and in the governments of even the most advanced industrial nations. In a way, this is one of the central issues in advancing cyber security.
If I understand your question correctly, the key challenge you have is how to determine which one of your priorities is the top priority. You may find this either humorous or sad, but I once spent a whole year deeply contemplating this very question.
In the end I realized that the notion of a "priority of priorities" is a futile, yet necessary, conundrum. I resolved the question by accepting that, given a context X, its top priority X(p1) applies only within that context's systems hierarchy. In a complex adaptive system, though, it can happen that multiple value priorities share equal positions.
Although the above is true, a few heuristic rules emerged from years of applied analysis, in which a consistent scientific method was applied under field conditions. Some of these heuristics were:
1) Once normalized, there can only ever be a single instance of a P1 (Priority 1) for every context.
2) Every context is unique.
Here is a brief "how to" guideline: you could assign a GPID (Global Priority ID) to every component that carries the proven hierarchical weight of P1. It may be assigned as part of the key field in a relational database, or similar. In this manner the relationship between priorities and contexts becomes traceable, meaning you can see what you need to manage.
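To make that more concrete, here is a minimal sketch of what such GPID records keyed by context might look like. I am using a simple in-memory structure in place of a relational table, and the names (`PriorityRecord`, `GPID-001`, the context identifiers) are my own illustrative assumptions, not part of the original scheme:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PriorityRecord:
    gpid: str         # Global Priority ID, unique per (context, P1) pair
    context_id: str   # the context this priority belongs to
    component: str    # the component carrying the P1 weight
    priority: int     # 1 = P1 (highest); every context has exactly one P1
    criticality: int  # 1 = C1 (most critical to the context's existence)

# One P1 per context, keyed so priority-to-context links stay traceable.
registry = {
    ("auth-service", 1): PriorityRecord("GPID-001", "auth-service",
                                        "credential-store", 1, 1),
    ("checkout-flow", 1): PriorityRecord("GPID-002", "checkout-flow",
                                         "session-integrity", 1, 2),
}
```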
However, suppose two priorities were in conflict over resources, both being P1 but belonging to different contexts at the same moment in time? Then the hierarchy of criticality could be used to resolve the conflict. This is not the same as systems constraint resolution, which would call for instantiation of the constraint in question.
Although a relationship between the two can be proven to exist, a priority is not the same thing as a constraint. A constraint typically represents the core component of a systems context, whereas a priority generates robustness in the system via its hierarchical control.
In other words, your question might be rephrased as follows: "How is more control asserted over a system?"
What was learned from field studies was that any component within a systems context that was assigned the weights of P1 (highest priority) and C1 (most critical to the existence of the context) would always inherit the most resources.
Where two instances of GPIDs existed at the same moment, they should be logically merged and timesliced to share resources equally. However, the instant any one of the GPIDs moved to another position in its hierarchy, even fleetingly, the remaining GPID would be incremented in resource allocation on a per-instance basis.
For example, if during such a conflict one of the processes' P1 switched to any lower priority and then immediately switched back to P1, that process would lose 1% of its resources, and so on, down to a floor of 1% (never zero).
In this manner, the resource-to-priority allocation would service the core directly and thus optimally support the intent of the overall system. Neat, isn't it?
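Purely as an illustration of the rule just described (the equal timeslice, the 1% shift on each priority blip, and the 1% floor), here is a small Python sketch. The GPID names, the function names, and the choice to track shares as integer percentages are my own assumptions, not part of the original scheme:

```python
# Minimal sketch of the two-P1 conflict rule described above.
FLOOR = 1  # a demoted GPID never drops below 1% (never zero)

def start_conflict():
    """Two P1 GPIDs contend at the same moment: timeslice equally."""
    return {"GPID-001": 50, "GPID-002": 50}

def on_priority_blip(shares, demoted_gpid):
    """The demoted GPID left P1 (even fleetingly): shift 1% to the other."""
    other = next(g for g in shares if g != demoted_gpid)
    if shares[demoted_gpid] > FLOOR:
        shares[demoted_gpid] -= 1
        shares[other] += 1
    return shares

shares = start_conflict()
shares = on_priority_blip(shares, "GPID-002")
# -> {"GPID-001": 51, "GPID-002": 49}
```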
The structure I have shared with you here would enable more accurate systems-control decisions by either a machine or a human supervisor.
My main principles would be: first, what exactly are you trying to protect (the identity of users, the session itself, the type of message being sent, ...), and second, what are the consequences of a security breach within that system? These are the main considerations. The more severe the consequences, the more willing one should be to make security the higher priority and to accept that the user experience will get a lower priority.
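One crude way to make that trade-off explicit at project kickoff is to write the mapping down, even as a toy function. This is purely my own illustration (the severity labels, the weights, and the 0.3/0.15 constants are arbitrary assumptions), not a method proposed above:

```python
# Illustrative only: turn "how bad is a breach" into a security-vs-UX bias.
SEVERITY = {"low": 1, "moderate": 2, "severe": 3, "catastrophic": 4}

def security_weight(breach_severity: str) -> float:
    """Fraction of design effort to bias toward security (0..1)."""
    score = SEVERITY[breach_severity]
    return min(0.9, 0.3 + 0.15 * score)  # never 100%: usability still matters

print(security_weight("catastrophic"))  # 0.9  (e.g. user identities exposed)
print(security_weight("low"))           # 0.45 (e.g. cosmetic preferences leaked)
```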
I think you need to establish this at the beginning of your project, explain your criteria, and then move on from there.
I don't think you'll find simple answers, and I also think that the results will depend heavily on your going-in assumptions. There are entire books written on this subject. Take a look at the notes here: