This is common in restricted environments: egress is allowed only on 80/443, and DNS must go through local recursive resolvers. Queries to those internal DNS servers typically pass through a SIEM and other security appliances before being permitted out, usually to limit data exfiltration over DNS. In those setups, 80 and 443 are often routed through a MITM proxy as well for deep packet inspection; there are both commercial and open source MITM proxies. Fans of HTTP/3 and QUIC would not like most of them, since the proxy terminates the connection and negotiates its own protocol with the destination server, and that negotiation may well not involve QUIC.
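You can see this from the client side by checking which ALPN protocol actually got negotiated. A minimal sketch in Go (the hostname is just a stand-in; run it from inside and outside the proxied network to compare):

    package main

    import (
        "crypto/tls"
        "fmt"
    )

    func main() {
        // Offer h2 and http/1.1 via ALPN; the TLS endpoint that answers
        // (the real origin, or a MITM proxy terminating TLS) picks one.
        conf := &tls.Config{NextProtos: []string{"h2", "http/1.1"}}
        conn, err := tls.Dial("tcp", "example.com:443", conf)
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        fmt.Println("negotiated:", conn.ConnectionState().NegotiatedProtocol)
        // Behind an inspecting proxy this often prints "http/1.1"
        // even when the origin itself supports h2.
    }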
I worked in an environment with a similar setup. The first step for any device allowed onto the network was installing the company's custom CA root certificate. There are a lot of sharp edges in such a setup (like trying to get Charles or other debugging proxies to work reliably), but in highly sensitive environments the policy seems to be to MITM every packet that passes through the network.
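The sharp edges usually come from tools that bundle their own trust store instead of using the OS one, so each has to be pointed at the corporate root by hand. A rough sketch of what that looks like in Go ("corp-root.pem" and the URL are made-up placeholders):

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "net/http"
        "os"
    )

    func main() {
        // Load the company root CA; the filename here is hypothetical.
        caPEM, err := os.ReadFile("corp-root.pem")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        if !pool.AppendCertsFromPEM(caPEM) {
            panic("no certificates parsed from PEM")
        }
        // With this pool, the proxy's re-signed certificates validate;
        // connections that somehow bypass the proxy fail verification.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{RootCAs: pool},
        }}
        resp, err := client.Get("https://example.com/")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status)
    }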
I wasn't involved, but another team did some experimenting with HTTP/2 (which at the time was still very early in its rollout) and struggled with the network team to get it all working. Once they did, it actually resulted in slightly worse load times for our site, and the work was de-prioritized. I recall (maybe incorrectly) that it came down to the inefficiency of forcing all site resources through a single connection; we got better results when resources were served from multiple domains so the browser could keep several connections open. I only overheard this secondhand, though, so my memory of the details might be fuzzy.
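If anyone wants to reproduce that kind of comparison today, one rough way is to fetch the same asset list with HTTP/2 allowed and disallowed and time both. A sketch in Go (sequential fetches and a stand-in URL list, so it's only a crude probe, not a real page-load benchmark):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // fetchAll times sequential GETs with the given client and
    // prints the protocol each response actually used.
    func fetchAll(client *http.Client, urls []string) time.Duration {
        start := time.Now()
        for _, u := range urls {
            resp, err := client.Get(u)
            if err != nil {
                fmt.Println("error:", err)
                continue
            }
            fmt.Println(u, resp.Proto) // "HTTP/2.0" vs "HTTP/1.1"
            resp.Body.Close()
        }
        return time.Since(start)
    }

    func main() {
        urls := []string{"https://example.com/", "https://example.com/"}

        // A non-nil empty TLSNextProto map disables HTTP/2 in net/http.
        h1 := &http.Client{Transport: &http.Transport{
            TLSNextProto: map[string]func(string, *tls.Conn) http.RoundTripper{},
        }}
        // A custom Transport has to opt back in to HTTP/2.
        h2 := &http.Client{Transport: &http.Transport{ForceAttemptHTTP2: true}}

        fmt.Println("HTTP/1.1 total:", fetchAll(h1, urls))
        fmt.Println("HTTP/2 total:  ", fetchAll(h2, urls))
    }

Note that sequential fetches understate HTTP/1.1's advantage here: a browser opens several connections in parallel, which is exactly what the multiple-domains trick above exploited.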
Funnily enough, we did have to keep a backup network without the full snooping proxy for things like IoT test devices (including some smart speakers and TVs), since installing certs on those devices was sometimes impossible. I assume they were still proxying as much of that traffic as they reasonably could.