That's understandable, and perhaps it means we need to rethink our computing model to replace yes/no popups with something a bit more involving.
By "involving" I do not necessarily mean harder; the goal isn't to make computers less accessible. But if we give users the ability to control what goes in and out of their devices, maybe that means making interactions a bit more deliberate: instead of clicking "yes" to allow keyboard access, you could drag a keyboard icon onto the app, or drag any other device you want to use as input instead.
Consent doesn't need to be presented as yes/no; there are multiple ways to make users understand what is being accessed.
There are, but anything that requires individual, separate decisions every time will induce fatigue. Addressing that problem isn't going to happen by changing the UI presented at each individual interaction; it happens by finding a way to make the vast majority of those decisions ahead of time, by stating your personal defaults and then overriding them as necessary. A process that can be done once, or occasionally, and applied 100 times can be involved and still be worthwhile. Making it involved and repeated ad nauseam is a good way to ensure nobody will actually bother every time.
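To make the "defaults plus overrides" idea concrete, here is a minimal sketch: the user states a personal default once, and per-app overrides are the exception rather than a fresh decision each time. All names here are hypothetical illustrations, not any real OS API.

```python
# The user's stated defaults: decided once, applied everywhere.
DEFAULTS = {
    "camera": "deny",
    "microphone": "deny",
    "keyboard": "allow",
    "network": "ask",
}

# Overrides the user has explicitly granted, per app.
OVERRIDES = {
    "video-call-app": {"camera": "allow", "microphone": "allow"},
}

def decide(app: str, resource: str) -> str:
    """Resolve a permission: app-specific override first, then the default."""
    return OVERRIDES.get(app, {}).get(resource, DEFAULTS.get(resource, "deny"))

print(decide("video-call-app", "camera"))  # override applies: "allow"
print(decide("text-editor", "camera"))     # falls back to default: "deny"
```

The point is that only the second dictionary ever grows through interaction; the everyday case is resolved silently by the defaults the user already chose.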
There is also a problem with how apps are made: they depend far too much on environment calls. And obviously adding 50 separate permissions isn't realistic.
Now, making the base process manual doesn't mean that the user will need to do the same thing over and over. I am not against automation, but I believe it should come from the user, not from a fancy system that some external entity decided is the best way to handle permissions.
By outsourcing responsibility for your data, you are effectively giving up on understanding it.
Ideally, I believe consumer computers should become interactive/programmable systems where the OS is responsible for exposing all the environment calls, and the apps are all stateless functions. The process of mapping environment access to apps is manual, but it creates consent AND customizability, plus additional benefits like easier development of cross-platform applications and better maintainability.
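A hypothetical sketch of that model: the app never reaches into the environment itself; it is a pure function of the capabilities the user explicitly wired to it. Every name below is illustrative, not a real OS interface.

```python
from typing import Callable

# Environment calls the OS could expose (stand-ins for real device access).
def read_keyboard() -> str:
    return "hello from keyboard"

def read_clipboard() -> str:
    return "hello from clipboard"

# A stateless app: a pure function of its inputs, with no hidden
# access to the environment.
def shout_app(read_input: Callable[[], str]) -> str:
    return read_input().upper()

# The user's manual mapping, e.g. the result of dragging a keyboard icon
# onto the app instead of answering a yes/no prompt.
wiring = {"shout_app": read_keyboard}

print(shout_app(wiring["shout_app"]))  # "HELLO FROM KEYBOARD"
```

Because the app only receives what the wiring grants, consent and customizability are the same mechanism: swapping `read_keyboard` for `read_clipboard` in the mapping changes the app's input without touching the app itself.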