When I have an AKS cluster behind an NVA (not Azure Firewall), LoadBalancer-type services shouldn't work, yet somehow they do

Zegan, Michał
2021-10-04T18:40:12.967+00:00

Hello,
I have the following problem... or rather, I expected something not to work, yet it does, and I don't know why.
This is an education/test setup, so note that I am intentionally not using things like Azure Firewall or the available NVA images, due to costs.

My setup:

I have a hub/spoke topology: a hub with some shared services and one spoke for a dev environment, peered to the hub. In the hub there is an NVA VM.
Note the NVA VM is custom: a plain Ubuntu Linux box with iptables rules that SNAT all packets going out of eth0 when the destination is not in the private range 10.0.0.0/8. There are no firewalling/filtering rules at all, so all forwarded traffic is accepted for now.
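For reference, the NAT configuration amounts to roughly the following (a minimal sketch; the interface name and address range are as described above):

```bash
# Let the NVA forward packets between networks
sysctl -w net.ipv4.ip_forward=1

# SNAT (masquerade) everything leaving eth0 unless the destination
# is in the private 10.0.0.0/8 range; no filter rules otherwise
iptables -t nat -A POSTROUTING -o eth0 ! -d 10.0.0.0/8 -j MASQUERADE
```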
Routing is set up so that all traffic from spoke to hub, through the hub to other spokes, or out to the internet goes through the NVA; the hub, of course, has a corresponding route so that traffic from hub to spokes also goes through it. All subnets in the spoke network are subject to this. And that generally works: traceroute shows traffic going through the NVA. The NVA has a public IP attached.
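The spoke side uses a user-defined route table along these lines (a hedged sketch; the resource names and the 10.0.1.4 NVA address are placeholders for my actual values):

```bash
# Route table that sends everything to the NVA as next hop
az network route-table create -g my-rg -n spoke-rt
az network route-table route create -g my-rg --route-table-name spoke-rt \
    -n default-via-nva --address-prefix 0.0.0.0/0 \
    --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.1.4

# Associate the table with the spoke subnet the AKS node lives in
az network vnet subnet update -g my-rg --vnet-name spoke-vnet \
    -n aks-subnet --route-table spoke-rt
```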
I have a small single-node AKS cluster set up in the spoke network. The node uses Azure CNI networking and sits in a spoke subnet behind the NVA. Testing outbound connectivity from a container running on the node shows traffic really does flow out through the NVA.
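The egress check was essentially this (the pod name and image are just an example; any "what is my IP" service works, and it reports the NVA's public IP):

```bash
# Throwaway pod that prints the public IP its traffic exits with
kubectl run egress-test --rm -it --restart=Never \
    --image=curlimages/curl -- curl -s https://ifconfig.me
```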

Now, the problem:

I tried to install the nginx ingress controller. It launches one nginx pod on the node and creates a Kubernetes service of type LoadBalancer, attached to a newly allocated public IP. This causes an Azure load balancer to be created with the inbound rules set up appropriately. All of that provisions correctly, as expected.
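Concretely, the install was along these lines (a sketch using the standard ingress-nginx Helm chart with default values):

```bash
# Install ingress-nginx; its controller Service is type LoadBalancer
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx

# After a minute the service shows the newly allocated public IP
kubectl get svc ingress-nginx-controller
```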
However, I would expect connecting to this IP on port 80 to fail. The reason: the inbound connection packet flows from the LB public IP directly to the node, but the reply should follow the default route through the NVA and exit with the NVA's public IP. So even though the NVA accepts the packet, the reply should never reach the client, or at least never be treated as a valid reply to the original connection (classic asymmetric routing).
Yet, for some weird reason, it works. Can anyone explain how that is possible? I believe I have tested that attaching a public IP directly to a VM in a spoke really doesn't work, so why does this?
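The way I would verify where the replies actually go is to watch the NVA while connecting to the frontend (a sketch; <lb-public-ip> is a placeholder for the allocated IP):

```bash
# From outside the VNet: connect to the load balancer frontend
curl -v http://<lb-public-ip>/

# Meanwhile on the NVA: check whether the node's port-80 replies
# actually traverse eth0 on their way out
tcpdump -ni eth0 'tcp port 80'
```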

The reason I tested the failing case first is that I had a planned solution: a special exposed/DMZ subnet in front of the NVA, where I create a node pool and put the ingress controller (see the sketch below). That solution works, but everything I know says that putting the controller directly in the default node pool, which sits behind the NVA, should not work... so I wanted to verify my expectations by first putting nginx behind the NVA, and only then placing it "correctly".
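For completeness, the DMZ variant looks roughly like this (a sketch; the pool name, label, and subnet ID are placeholders, and it assumes Azure CNI so the pool can target a specific subnet):

```bash
# Node pool in the exposed/DMZ subnet that sits in front of the NVA
az aks nodepool add -g my-rg --cluster-name my-aks -n dmz \
    --node-count 1 \
    --vnet-subnet-id "/subscriptions/<sub>/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/spoke-vnet/subnets/dmz-subnet" \
    --labels pool=dmz

# Pin the ingress controller pods to that pool
helm install ingress-nginx ingress-nginx/ingress-nginx \
    --set controller.nodeSelector.pool=dmz
```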
