
I have a corporate laptop which funnels all traffic through Zscaler.

I was somewhat surprised when I was getting a different IP address on ipinfo.io (my home IP) compared with whatsmyip.org (a Zscaler datacentre IP).

Curling ipinfo.io, though, came back with the Zscaler address.

Turns out they don't funnel UDP via Zscaler, only TCP.
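
You can reproduce it with curl, assuming a curl build with HTTP/3 support and that ipinfo.io negotiates HTTP/3:

    # Over TCP, the request gets funneled through Zscaler:
    curl -s https://ipinfo.io/ip          # prints the Zscaler datacentre IP

    # Over QUIC (UDP/443), it bypasses the tunnel:
    curl -s --http3 https://ipinfo.io/ip  # prints the home IP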

Looking into Zscaler: https://help.zscaler.com/zia/managing-quic-protocol

> Zscaler best practice is to block QUIC. When it's blocked, QUIC has a failsafe to fall back to TCP. This enables SSL inspection without negatively impacting user experience.

Seems corporate IT is resurging after a decade of defeat.



Zscaler is pure crap; we use it at work too. It's especially hard to configure Docker containers for its proxy settings and SSL certificate.

When I test something new in our lab I spend 10 minutes installing it and half a day configuring the proxy.
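
For the record, this is the sort of incantation I end up with — a sketch with a hypothetical proxy host/port, assuming the Zscaler root CA has been exported to zscaler-root.pem:

    # Route the container's traffic through the corporate proxy and make
    # curl trust the Zscaler root CA (hostname/port are placeholders).
    docker run --rm \
      -e https_proxy=http://proxy.corp.example:9480 \
      -v "$PWD/zscaler-root.pem:/zscaler-root.pem:ro" \
      curlimages/curl:latest \
      --cacert /zscaler-root.pem https://example.com/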


Man, here I am reading this while fighting Zscaler when connecting to our new package repository (it breaks because the inspection of the downloads takes too long). No one feels responsible for helping developers. Same with setting up containers, git, Python, and everything else that comes with its own trust store: you have to figure everything out by yourself.

It also breaks a lot of web pages by redirecting HTTP requests in order to authenticate you (CSP broken). Teams GIFs and GitHub images have been broken for months now and no one cares.


Ahhhh, so that's why my Teams GIFs don't work. Thanks.

We use an external auth provider, which makes the config even more complex, yeah.


At least for me that’s the problem. When I open the redirect url manually it also fixes the problem for some time.

You can open the Teams developer tools to check this: click the taskbar icon 7 times, then right-click it. Use "Dev tools for select web contents" and choose the "experience renderer AAD" process. Search for GIFs in Teams and monitor the network tab.


amen, brother!


It is the single most annoying impediment in corporate IT. And you are on your own when you need to work around the issues it causes. Is it really providing value, or is it just to feel better about security?


It's not just an impediment. It's corporate spyware and possibly a prototype for Great Firewall 2.0.


It causes minor annoyances with SSL + Maven as well, which can be fixed by -Dmaven.wagon.http.ssl.insecure=true.

Well, at least they tried I guess.


No, setting any variable containing the line

"http.ssl.insecure=true"

is not a fix under any circumstances.
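
The actual fix is to import the Zscaler root CA into the JVM truststore — a sketch, assuming the cert was exported to zscaler-root.pem (path and storepass shown are the JDK defaults; adjust for your install):

    keytool -importcert -trustcacerts \
      -alias zscaler-root \
      -file zscaler-root.pem \
      -keystore "$JAVA_HOME/lib/security/cacerts" \
      -storepass changeit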


Sure it is. The org insists on making your life difficult, and you just want to get your work done. If they really cared about security they would prioritise fixing stuff like this, but they don't, so you know they don't really care, it's just for show and a need for control.

And if they don't really care about security, why should you?


Which zscaler products does your company use? Do you have an idea of what better solutions are out there?


The cloud service. I don't know what it's called exactly. It just says "Zscaler".

In terms of better solutions, I would prefer a completely different approach. Securing the endpoint instead of the network. Basically the idea of Google's "BeyondCorp".

What happens now is that people just turn off their VPN and Zscaler client to avoid issues, when they're working from a public hotspot or at home. In the office (our lab environment) we unfortunately don't have that option.

But by doing so they leave themselves much more exposed than when we didn't have Zscaler at all.


I hate corporate IT. Security with decades-old arcane practices. Killing user experience in any way possible. MITM all around...


Blame viruses, malware, phishing, ransomware, etc. IT has a responsibility to keep the network secure. Google is already experimenting with no Internet access for some employees, and that might be the endgame.


This has nothing to do with security and everything to do with ineffective practices done in the name of security, where nobody knows why it's done, just that it's done. Running MitM on connections breaks basic security mechanisms for some ineffective security theater. This is basically "90-day password change" 2.0.


> where nobody knows why it's done, just that it's done

Compliance. You think your IT dept wants to deploy this crap? However painful you think it is as an end user, multiply that by having to support hundreds/thousands of endpoints.

Look, I hate traffic inspection as much as the next person, but this is for security; it's just not the security you want it to be. This is so you have an audit trail of data exfiltration, and there's no way around it. You need the plaintext to do this, and the whole network stack is built around making this a huge giant pain in the ass. This is one situation where soulless enterprises and users should actually be aligned. Having the ability in your OS to inspect the plaintext of all incoming and outgoing traffic by forcing apps off raw sockets would be a massive win. People seem to understand how getting the plaintext of DNS requests is beneficial to the user, but not HTTP for some reason.

Be happy your setup is at least opportunistic and not "block any traffic we can't get the plaintext for."


> You think your IT dept wants to deploy this crap?

Yes, they do.


No, they really really don't. Source: I've worked in corporate IT for many years, and this kind of shit is always forced upon us just as much as it is on you guys. We hate it too.


Compliance with exactly what? Their own rules?


Not the OP, but currently I work in a regulated industry (financial) where Corporate Risk and Legal depts ask for this stuff (and much more) to satisfy external auditors. The IT people hate it just as much.

I had never experienced just how much power a single dept could hold until we got acquired by a large finance enterprise and had to interact with the Risk dept.


Still, what exactly has changed in the last year that made Zscaler/Netskope so prevalent? What law has changed? Can someone pinpoint it? I work for a telecom company, for example; two years ago there was no Zscaler/Netskope MITM in my requests from the corporate laptop to the Internet, today there is. What law, if any, has changed that mandates this? If it matters, the ISP is registered in NJ.


20 years ago I was configuring VPNs on work laptops that then had all the exit traffic routed to a Blue Coat system to MITM the traffic. The difference is that Zscaler is "Zero Trust", so you are actually not on a VPN anymore. It's intercepting the traffic locally and then determining what to do with it. At my current workplace we are using it to access internal services only, allowing all external traffic to exit directly.


Not in your country, but my point about compliance wasn't that a law requires it specifically (laws don't specify technical "solutions" anyway) - just that often the IT dept is compelled by other depts (eg Risk) to implement and support stuff that allows that other dept to show auditors that they are doing something rather than being negligent.


MitM can absolutely stop threats if done correctly. A properly configured Palo Alto firewall running SSL Decryption can stop a random user downloading a known zero-day package with WildFire. Not saying MitM is the end-all-be-all, but IMHO the more security layers you have the better.

At the end of the day, it's not your network/computer. There's always going to be some unsavvy user duped into something. If you don't like corporate IT, you're free to become a contractor and work at home.


"A properly configured Palo Alto firewall running SSL Decryption can stop a random user downloading a known zero-day package with Wildfire."

Instead, that corp IT should have put a transparently working antivirus/malware scanner on the workstation that would prevent that download from being run at all, no?

DPI/MITM are not security layers but more of a privacy nightmare.


> Instead, that corp IT should have put a transparently working antivirus/malware scanner on the workstation that would prevent that download from being run at all, no?

Sure. Then come the complaints that this slows down endpoint devices and has compatibility issues. Somebody gets the idea to do this in the network. Rinse. Repeat.


Our corp IT has that and fine-tuned it to perfection. No one complains now. So it's possible.

Unfortunately they still do MITM which breaks connections regularly.


It's a knife's edge. One OS patch, or one vendor change in product roadmap, and you can be right back to endpoint security software performance and compatibility hell. Stuff has gotten better but it's still fraught with peril.


I disagree; I think you should have both, as an endpoint scanner (either heuristics or process execution) may not catch everything (for example, malicious JavaScript from an advertisement).

Why do you care so much about your privacy while you're on company time using their computers, software, and network? If you don't like it, bring your own phone/tablet/laptop and use cellular data for your personal web browsing. FWIW, it's standard practice to exempt SSL decryption for banking, healthcare, government sites, etc.


Nobody complained that they couldn't browse Reddit privately. Everybody was complaining that they couldn't perform their work.


There are really valuable practices that help security, and there are practices that just break security.

In particular, the MITM practice is a net negative. Rolling password resets and bad password requirements are also net negatives. Scanners that don't work as intended, that aren't vetted at all, and that introduce slowness and feature breakage are possible negatives.

Also, some places introduce predatory privacy nightmares like keyloggers and screen recorders...


Full inspection of user traffic is required to implement:

* Data leakage policy (DLP; insider threat, data exfiltration)

* Malware scanning

* Domain blocking (Gambling, Malware)

* Other detection mechanisms (C2)

* Logging and auditing for forensic investigations

* Hunting generally

I don't see how this breaks security, and of course you also didn't elaborate on why it would. Assuming TLS MitM is implemented reasonably correctly.

Don't worry though, zero trust will expose the company laptops again to all the malicious shit out there.


> I don't see how this breaks security

You’re training users to ignore certificate errors – yes, even if you think you’re not – and you’re putting in a critical piece of infrastructure which is now able to view or forge traffic everywhere. Every vendor has a history of security vulnerabilities and you also need to put in robust administrative controls very few places are actually competent enough to implement, or now you have the risk that your security operators are one phish or act of malice away from damaging the company (better hope nobody in security is ever part of a harassment claim).

On the plus side, they’re only marginally effective at the sales points you mentioned. They’ll stop the sales guys from hitting sports betting sites, but attackers have been routinely bypassing these systems since the turn of the century so much of what you’re doing is taking on one of the most expensive challenges in the field to stop the least sophisticated attackers.

If you’re concerned about things like DLP, you should be focused on things like sandboxing and fine-grained access control long before doing SSL interception.


A competent organisation will have a root certificate trusted on all machines so you won't be ignoring certificate errors. You are right however that you are funnelling your entire corporate traffic unencrypted through a single system, break into that and you have hit the goldmine.


> A competent organisation will have a root certificate trusted on all machines so you won't be ignoring certificate errors.

You definitely need that, but ask anyone who's done it and they'll tell you that flushing out all of the different places interception causes problems (pinned certs, protocol-level incompatibilities, etc.) is painful, and inevitably someone will try to solve those problems by turning off some security measures. This will inevitably include things like your help desk people trying to be helpful and not realizing that the first hit on Stack Exchange suggesting adding "-k" is not actually a good idea.
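
The canonical example, sketched with curl (host and cert path are hypothetical):

    # The "first hit" fix: disables certificate verification entirely.
    curl -k https://internal.example/

    # What should be suggested instead: trust the corporate root explicitly.
    curl --cacert /usr/local/share/ca-certificates/corp-root.crt https://internal.example/

    # Or persistently, for everything curl-based in that shell:
    export CURL_CA_BUNDLE=/usr/local/share/ca-certificates/corp-root.crt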

This is exacerbated by the low quality of the vendor appliances most places use to implement these policies. For example, Palo Alto will break the Windows SChannel certificate revocation check - there's still no workaround, but I guarantee you won't know all of the places where that's been disabled. They also don't support the secure session renegotiation extension to TLS 1.2 (RFC 5746, from 2010), which I know because OpenSSL 3 started requiring it and I had to stop multiple teams from "solving" this with a terrible solution from the first hit on Google. Amusingly, they do correctly implement TLS 1.3, so I've been able to fix this for multiple open source projects by getting them to enable 1.3 in their CDN configuration.


Correct, this is table stakes to get SSL Decryption working for any vendor. Typically we're talking about Windows PCs joined to Active Directory, and they already trust the domain's CA. The firewall then gets its own CA cert issued by the domain CA, so when you go to www.facebook.com and inspect the certificate it says it is from the firewall.

Most orgs don't inspect sensitive things like banking, healthcare, government sites, etc. Also it's very common to make exceptions to get certain applications working (like Dropbox).


Yes, if you want/need to do those things, then you need to inspect user traffic. But why do you want/need to do those things in the first place? What's your threat model?

Doing this breaks the end-to-end encryption and mutual authentication that are the key benefits of modern cryptography. The security measures implemented in modern web browsers are significantly more advanced and up-to-date than what systems like Zscaler are offering, for example in terms of rejecting deprecated protocols, or enabling better and more secure protocols like QUIC. By using something like Zscaler, you're introducing a single point of failure and a high-value target for hackers.


Most of what you said is inaccurate in practice.

A competent org and good mitm device will have trusted internal root certs on all endpoints, so cert errors are not a problem. The proxy can be set to passthrough or block sites with cert errors (expired, invalid), so there isn't any "bad habits training" of users clicking through cert errors. Several vendors today support TLS 1.3 decryption.

I don't know what you mean by SPOF for a proxy: they are no more a SPOF than any properly redundant network hop.

A proxy doesn't break encryption. Endpoints trust the mitm.

Now, I think that someday the protocols of the web such as QUIC will get so locked down that the only feasible threat prevention will be heuristic analysis of network traffic, and running all threat scanning on endpoints (with some future OS that has secure methods of stopping malicious network traffic or executables before said traffic leaves some quarantine).

I'm a network guy, not an endpoint guy.


Everything I wrote earlier is based on the use of Zscaler proxy at work, so it's very much about practice, not theory.

Yes, of course the Zscaler root certs have been installed on our endpoints. The problem is that the proxy is replacing the TLS certificate of the origin server with its own certificate, which makes it impossible for the browser to verify the identity of the origin server and trust the communication. The browser can only verify that it is communicating with the proxy; it cannot verify anymore that it is communicating with the origin server.
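
You can see this directly, assuming openssl is available on the endpoint: with interception on, the issuer printed below is the Zscaler CA rather than the origin's public CA.

    openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
      | openssl x509 -noout -subject -issuer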

That's what makes Zscaler and similar solutions a SPOF. I know that Zscaler is using a distributed architecture with no hardware or network SPOF. But Zscaler is a SPOF from an organizational perspective. If you hack them, you get access to everything. That's what other commenters and I meant by SPOF in that context.

> A proxy doesn't break encryption. Endpoints trust the mitm.

I didn't write that it's breaking encryption. I wrote it's breaking end-to-end encryption and authentication. I'm sure you understand the difference.

> Now, I think that someday the protocols of the web such as QUIC will get so locked down that the only feasible threat prevention will be heuristic analysis of network traffic

We're already there. HTTP/3 (QUIC) already accounts for about 30% of the traffic served by Cloudflare to humans [1]. QUIC actually offers a higher level of security by encrypting more metadata than HTTP/1 and 2 (specifically the part within the TCP headers that can be leveraged by an attacker when it is in the clear).

> A competent org and good mitm device

That's the main problem. Those proxies are usually less scrutinized and have smaller engineering and security teams than major modern web browsers like Edge, Chrome, Firefox and Safari, and as a consequence have more vulnerabilities.

In general, major modern web browsers enforce stronger security requirements than Zscaler:

- For example, the following website, using a potentially insecure Diffie-Hellman key exchange over a 1024-bit group (Logjam attack), is blocked by Chrome and Firefox but not by Zscaler: https://dh1024.badssl.com/

- Same for that website using a revoked certificate: https://revoked.badssl.com/

- Same for that website requiring certificate transparency but not sending a Signed Certificate Timestamp: https://no-sct.badssl.com/
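
If you want to check your own proxy against these, a quick sketch assuming curl trusts the proxy's root CA (a 200 means the proxy passed something a browser would have blocked; 000 means the connection was cut):

    for host in dh1024 revoked no-sct; do
      printf '%-8s -> ' "$host"
      curl -so /dev/null -w '%{http_code}\n' "https://$host.badssl.com/"
    done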

[1] https://blog.cloudflare.com/http3-usage-one-year-on/


Oof, I’ve complained about practical problems in my developer life above, but that’s even worse than I thought. I was able to reproduce dh1024 and no-sct on my work laptop with Zscaler. Interestingly it blocks the revoked one by turning it into a self-signed one.

Also failing:

- pinning-test

- all dh*, except for dh480 and dh512


> Interestingly it blocks the revoked one by turning it into a self-signed one.

Well spotted! That's crazy...


> But why do you want/need to do those things in the first place? What's your threat model?

Not everyone in a company is savvy or hard at work. Randy in accounting might spend an hour or more a day browsing the internet, scooping up ads, and be enticed to download something to help speed up his PC which turns out to be ransomware.


This assumes Randy is incompetent, but not malicious. Nothing is stopping an attacker from contacting Randy out of band, say over a phone or personal email, and then blackmailing him to get him to hand out company information. The key here is to scope down Randy's access so that no matter what kind of an employee he is, the only access Randy has is the minimum necessary and that all of his accesses to company information is logged for audit and threat intelligence purposes.

That's the problem with these MITM approaches. They open up a new security SPOF (what happens if there's an exploit in your MITM proxy that an attacker uses to gain access to the entire firehose of corporate traffic?) while doing little to protect against malicious users.


I think the undertone of your comment says a lot - corporations that feel the need to MITM all traffic tend to not trust their employees (from my experience dealing with this area) - either their competence or their work ethic.

All round, full traffic inspection is generally a bad idea except for some very limited cases where there is a clearly defined need.


In which case, as Randy only has access to a few files, you simply restore the snapshot of those files and away you go.


* Data leaks are not prevented by a MITM attack. A sufficiently determined data leaker will easily find easier or more elaborate ways to circumvent it.

* Malware scanning can be done very efficiently at the end-user workstation. (But it's always done inefficiently.)

* How does domain blocking require a MITM?

* C2 scanning can be done efficiently at the end-user workstation.

* Audits do not require the "full contents of communication".

Is MITM ever the answer?

Stealing a valid communication channel and impersonating remote servers does in fact break basic internet security practices.


> IT has a responsibility to keep the network secure.

Yes, but TLS inspection is not the solution.

> Google is already experimenting with no Internet access for some employees, and that might be the endgame.

Source? And I'm pretty sure they are not considering disconnecting most of their employees who actually need Internet for their job.


Source: https://arstechnica.com/gadgets/2023/07/to-defeat-hackers-go...

Eventually I think the endgame here is that you use your own personal BYOD device to browse the internet that is not able to connect to the corporate network.


Thanks for the link. I’ve seen it done in the defense industry. Interesting to see Google doing this for a small subset of their employees who don't need Internet access for their job.


Link for more info? That seems impossible to make work.


I read this recently: for sysadmins at Google and Microsoft that have access to absolutely core services like authentication, it does make sense to keep these air-gapped.


This sounds like a misunderstanding of the model. Usually these companies have facilities that allow core teams to recover if prod gets completely fucked e.g. auth is broken so we need to bypass it. Those facilities are typically on separate, dedicated networks but that doesn’t mean the people who would use them operate in that environment day to day.


I know that they have a gigantic intranet, that might make the lack of internet during the workday less painful



Google disabling Internet access is very different from your typical company doing that. Watching a YouTube video? Intranet and not disabled. Checking your email on Gmail? Intranet and not disabled. Doing a web search? Intranet and not disabled. Clicking on a search result? Just use the search cache and it's intranet.


> IT has a responsibility to keep the network secure

By chopping the head off?


Blame the law. Companies are bound by it. Actually, blame terrible programming practices and the reluctance to tie the long tail of software maintenance and compliance to the programmers and product managers who write the software.


companies can be held liable for what people using their networks do, so they need a way to prove it's not their fault and provide the credentials of the malevolent actor.

it's like the call and message logs kept by phone companies.

nobody likes to keep them, but it's better than breaking the law and risking someone abusing your infrastructure.

it would also be great if my colleagues did not use the company network to check soccer stats every morning for 4 hours straight, forcing the company to put up some kind of domain blocking that prevents me from looking up some algorithm i cannot recall off the top of my head on gamedev.net because it's considered "gaming"


Looking up soccer stats is not illegal so the company doesn't have to block it.

Blocking the website instead of punishing them in their performance reviews (assuming it does impact their performance; if they're still productive, why even care) is useless; they'll use their phones and still spend time on it.


> Looking up soccer stats is not illegal so the company doesn't have to block it.

it's not because it's illegal, it's because they are wasting time on the job using the company's equipment for something not work related. and usually they end up clicking everywhere on shady ads, trackers, etc.

We are in Italy; there's no such thing as a performance review here. If you get hired, you get paid every month (actually 13 times a year, sometimes 14) and nobody can ever fire you again.

> they'll use their phones and still spend time on it.

their choice on their equipment


> without negatively impacting user experience

I can't stop laughing.


Hopefully, IT doesn't notice when I use my `kill-zscaler.sh` script. It's horrible to work around when you arrive at a new company that uses it.
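
Roughly, it's just a few lines that unload the Zscaler services — a macOS sketch, and the plist names vary by client version:

    # Unload every Zscaler launch daemon/agent.
    sudo find /Library/LaunchDaemons -iname '*zscaler*' -exec launchctl unload {} \;
    find /Library/LaunchAgents -iname '*zscaler*' -exec launchctl unload {} \;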


The only reason I have one is so I can prove the problem is with Zscaler and not my network.

I remember someone complaining about the speeds writing to a UNC file share over a 10G network, using a Blackmagic file-writing test tool.

It was really slow (about 100 MB/s), but iperf was fine.

I did a "while true; do pkill sophos; done" or similar. Speeds shot up to about 1 GB/s.

Killed the while loop, Sophos returned in a few seconds, and speeds reduced to a crawl again.

But who needs to write at a decent speed in a media production environment.

And people still wonder why shadow IT constantly wins with the business.


What all is in that script? I'd love to have it. Sincerely, another dev abused by Zscaler.



Care to share?



Should have tried searching for it first I guess, thanks!


You can try to block the Zscaler (or Netskope) IPs on your home router. Most of the time, IT laptops default to 'normal' web behaviour if Zscaler/Netskope is not available.
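
For example, on an nftables-based router, assuming the usual inet filter table with a forward chain (203.0.113.0/24 is a documentation placeholder; substitute the datacentre ranges Zscaler actually publishes):

    # Drop forwarded traffic to the proxy's ranges so the client fails open.
    nft add rule inet filter forward ip daddr 203.0.113.0/24 drop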



