Istio as a ClusterIP service in Google Kubernetes Engine
By default, when running services on Google Kubernetes Engine (GKE), the preferred method for exposing these services to external traffic is using Ingress. Ingress provides a centralized entry point to the cluster and allows for the routing of external traffic to different services based on defined rules.
However, as the number of services and complexity of the system grows, relying solely on Ingress for service exposure can become challenging and less efficient. Each service would require its own Ingress resource, which in turn creates individual load balancers, leading to increased costs and management overhead.
To address these challenges and achieve better scalability and cost-effectiveness, implementing a service mesh such as Istio becomes essential. Istio provides a dedicated infrastructure layer that handles communication between services within the cluster. It enables advanced traffic management, observability, and security features, allowing for better control and management of the services. Istio’s modernized service networking layer provides a transparent, language-independent way to flexibly and easily automate application network functions.
Problem Statement
In our scenario, each service running in the GKE cluster is assigned its own load balancer, which proves to be costly. Therefore, we must implement a service mesh to address this issue and achieve a more cost-effective solution.
There is little information available online about running the Istio ingress gateway with a ClusterIP service type.
The solutions commonly found online are:
- Service type LoadBalancer: Istio’s default profile exposes the ingress gateway as a LoadBalancer service, which on GCP operates at layer 4 (the network layer). As a result, SSL certificates cannot be attached directly in GCP, and the load balancer is difficult to manage.
- Service type NodePort: Another option is to deploy Istio with a NodePort service type, but it has a few limitations:
- You need to track which nodes have pods with exposed ports.
- It only exposes one service per port.
- The ports available to NodePort are limited to the 30000–32767 range.
Solution
This blog describes a solution for running the Istio ingress gateway with a ClusterIP service type, which is a more secure way of implementing a service mesh on a GKE cluster.
Installation
Choose the installation method that best suits your needs and platform.
In our case, we use istioctl to install Istio in the GKE cluster.
$ curl -L https://istio.io/downloadIstio | sh -
$ cd istio-1.17.1
$ export PATH=$PWD/bin:$PATH
$ istioctl install --set profile=default -y
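Once the installation completes, it is worth verifying that the control plane and the ingress gateway are running; a minimal check looks like this:
$ kubectl get pods -n istio-system
$ kubectl get svc istio-ingressgateway -n istio-system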
Istio offers several configuration profiles. These profiles provide pre-canned customizations of the Istio control plane and of the sidecars in the Istio data plane. You can start with one of Istio’s built-in configuration profiles and then tailor the configuration to your specific needs. For the full list of built-in profiles, see the Istio documentation: https://istio.io/latest/docs/setup/additional-setup/config-profiles/
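You can also list the built-in profiles, and inspect exactly what any one of them configures, directly from the CLI:
$ istioctl profile list
$ istioctl profile dump default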
As this blog describes the implementation we used in our project, we will go ahead with the default profile.
Configuration
The configuration file is as follows:
-> Default.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
namespace: istio-system
spec:
hub: docker.io/istio
  tag: 1.17.1
meshConfig:
defaultConfig:
proxyMetadata: {}
enablePrometheusMerge: true
accessLogFile: /dev/stdout
extensionProviders:
- name: otel
envoyOtelAls:
service: opentelemetry-collector.istio-system.svc.cluster.local
port: 4317
# Traffic management feature
components:
base:
enabled: true
pilot:
enabled: true
k8s:
env:
- name: PILOT_TRACE_SAMPLING
value: "100"
resources:
requests:
cpu: 1000m
memory: 4096Mi
# Istio Gateway feature
ingressGateways:
- name: istio-ingressgateway
enabled: true
label:
app: istio-ingressgateway
istio: ingressgateway
k8s:
resources:
requests:
cpu: 100m
memory: 400Mi
service:
type: ClusterIP
ports:
## You can add custom gateway ports in user values overrides, but it must include those ports since helm replaces.
# Note that AWS ELB will by default perform health checks on the first port
# on this list. Setting this to the health check port will ensure that health
# checks always work. https://github.com/istio/istio/issues/12503
- port: 15021
targetPort: 15021
name: status-port
- port: 80
targetPort: 8080
name: http2
- port: 443
targetPort: 8443
name: https
- port: 31400
targetPort: 31400
name: tcp
# This is the port where sni routing happens
- port: 15443
targetPort: 15443
name: tls
serviceAnnotations:
cloud.google.com/backend-config: '{"default": "ingress"}'
cloud.google.com/neg: '{"ingress": true}'
egressGateways:
- name: istio-egressgateway
enabled: true
k8s:
resources:
requests:
cpu: 100m
memory: 400Mi
# Istio CNI feature
cni:
enabled: false
# Remote and config cluster configuration for an external istiod
istiodRemote:
enabled: false
# Global values passed through to helm global.yaml.
# Please keep this in sync with manifests/charts/global.yaml
values:
defaultRevision: ""
global:
istioNamespace: istio-system
istiod:
enableAnalysis: false
logging:
level: "default:info"
logAsJson: false
pilotCertProvider: istiod
jwtPolicy: third-party-jwt
proxy:
image: proxyv2
clusterDomain: "cluster.local"
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 2000m
memory: 1024Mi
logLevel: warning
componentLogLevel: "misc:error"
privileged: false
enableCoreDump: false
statusPort: 15020
readinessInitialDelaySeconds: 1
readinessPeriodSeconds: 2
readinessFailureThreshold: 30
includeIPRanges: "*"
excludeIPRanges: ""
excludeOutboundPorts: ""
excludeInboundPorts: ""
autoInject: enabled
tracer: "datadog" #"zipkin"
proxy_init:
image: proxyv2
resources:
limits:
cpu: 2000m
memory: 1024Mi
requests:
cpu: 10m
memory: 10Mi
# Specify image pull policy if default behavior isn't desired.
# Default behavior: latest images will be Always else IfNotPresent.
imagePullPolicy: ""
operatorManageWebhooks: false
tracer:
datadog: {}
stackdriver: {}
imagePullSecrets: []
oneNamespace: false
defaultNodeSelector: {}
configValidation: true
multiCluster:
enabled: false
clusterName: ""
omitSidecarInjectorConfigMap: false
network: ""
defaultResources:
requests:
cpu: 10m
defaultPodDisruptionBudget:
enabled: true
priorityClassName: ""
useMCP: false
sds:
token:
aud: istio-ca
sts:
servicePort: 0
meshNetworks: {}
mountMtlsCerts: false
base:
enableCRDTemplates: false
validationURL: ""
pilot:
autoscaleEnabled: true
autoscaleMin: 1
autoscaleMax: 5
replicaCount: 1
image: pilot
traceSampling: 100.0
env: {}
cpu:
targetAverageUtilization: 80
nodeSelector: {}
keepaliveMaxServerConnectionAge: 30m
enableProtocolSniffingForOutbound: true
enableProtocolSniffingForInbound: true
deploymentLabels:
podLabels: {}
configMap: true
telemetry:
enabled: true
v2:
enabled: true
metadataExchange:
wasmEnabled: false
prometheus:
wasmEnabled: false
enabled: true
stackdriver:
enabled: false
logging: false
monitoring: false
topology: false
configOverride: {}
istiodRemote:
injectionURL: ""
gateways:
istio-egressgateway:
env: {}
autoscaleEnabled: true
type: ClusterIP
name: istio-egressgateway
secretVolumes:
- name: egressgateway-certs
secretName: istio-egressgateway-certs
mountPath: /etc/istio/egressgateway-certs
- name: egressgateway-ca-certs
secretName: istio-egressgateway-ca-certs
mountPath: /etc/istio/egressgateway-ca-certs
istio-ingressgateway:
autoscaleEnabled: true
type: ClusterIP
name: istio-ingressgateway
env: {}
secretVolumes:
- name: ingressgateway-certs
secretName: istio-ingressgateway-certs
mountPath: /etc/istio/ingressgateway-certs
- name: ingressgateway-ca-certs
secretName: istio-ingressgateway-ca-certs
mountPath: /etc/istio/ingressgateway-ca-certs
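With the file saved, the customized configuration can be applied with istioctl; this installs the control plane with our overrides, or updates an existing installation in place:
$ istioctl install -f Default.yaml -y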
Enabling sidecar injection
Istio automatically injects sidecar containers into application pods launched in any namespace labeled with istio-injection=enabled.
So, we will label the namespace where the application is running to attach the sidecar, which is an Envoy proxy.
$ kubectl label namespace default istio-injection=enabled
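Note that the label only affects pods created after it is set; workloads that were already running must be restarted to pick up the sidecar. A quick way to verify the label and roll a workload (the Deployment name app here is a hypothetical placeholder):
$ kubectl get namespace -L istio-injection
$ kubectl rollout restart deployment app -n default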
Gateways
A gateway controls the flow of traffic into and out of the service mesh. Behind the scenes, a gateway is an Envoy proxy instance deployed in a standalone configuration (not attached to an application container) at the notional boundary of the data plane.
Use cases for gateways revolve around the management of inbound traffic.
Gateways act similarly to regular Kubernetes Ingress resources, but the Istio ingress gateway offers richer features for routing traffic with proper traffic management inside the mesh.
We configure the Gateway as below:
-> Istio-ingress-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: example-gateway # subject to your cluster
namespace: istio-system
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- hosts:
- "*" #subjected to change
port:
number: 80
name: http
protocol: HTTP
tls:
httpsRedirect: true
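The Gateway can be applied with kubectl, but on its own it does not route traffic anywhere; a VirtualService is what binds application routes to it. Below is a minimal sketch, in which the VirtualService name, host, and destination service are hypothetical placeholders for your own application:
$ kubectl apply -f Istio-ingress-gateway.yaml
-> Example-virtualservice.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: example-virtualservice # hypothetical name
  namespace: default
spec:
  hosts:
  - "*" # subject to change, e.g. your domain
  gateways:
  - istio-system/example-gateway # the Gateway defined above
  http:
  - route:
    - destination:
        host: example-service.default.svc.cluster.local # hypothetical application Service
        port:
          number: 80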
The Gateway is configured for traffic coming into the service mesh, and with it, into the cluster. Even so, the ingress gateway is still unreachable from the internet, since its service type is ClusterIP. We will now connect a Google Cloud Load Balancer (GCLB) to make it reachable from the internet.
The load balancer offers a few features that are useful for serving this traffic:
- Anycast IP
- Container-native load balancing via Network Endpoint Groups (NEGs)
- DDoS protection, since it is cloud-managed
Now, we will set up the GCLB in front of the istio-ingressgateway service.
First, we will need a standard Kubernetes Ingress resource and two companion resources: a FrontendConfig and a BackendConfig. The BackendConfig lets us set the GCLB’s health-check configuration.
-> Backend.yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: ingress
namespace: istio-system
spec:
timeoutSec: 120
healthCheck:
checkIntervalSec: 10
timeoutSec: 2
port: 15021
type: HTTP
requestPath: /healthz/ready
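These health-check settings point the GCLB at the ingress gateway’s status port. To confirm locally that the endpoint responds before wiring up the load balancer, a quick sketch (assuming the gateway Deployment is named istio-ingressgateway in istio-system):
$ kubectl -n istio-system port-forward deploy/istio-ingressgateway 15021:15021
$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:15021/healthz/ready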
The FrontendConfig is then used to perform the HTTP-to-HTTPS redirection directly at the GCLB level.
-> Frontend.yaml
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
name: http-redirect
namespace: istio-system
spec:
redirectToHttps:
enabled: true
responseCodeName: PERMANENT_REDIRECT
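Both config resources live in the istio-system namespace and can be applied together:
$ kubectl apply -f Backend.yaml -f Frontend.yaml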
Previously, we performed this redirection on the ingress gateway itself, which was not ideal for our users or for the infrastructure, since it created back-and-forth traffic with no added value. We prefer to manage redirections at the top of the architecture, so an HTTP request does not travel all the way down to the ingress gateway only to be redirected to HTTPS.
Both resources are referenced via annotations: the FrontendConfig in the Ingress resource, and the BackendConfig on the ingress gateway’s Service.
# in the ingress gateway Service resource
cloud.google.com/backend-config: '{"default": "ingress"}'

# in the Ingress resource
networking.gke.io/v1beta1.FrontendConfig: http-redirect
After setting up these two config resources, we will create our Ingress. It is a standard Kubernetes Ingress object with annotations that make it work with GCP.
We use the annotation kubernetes.io/ingress.class: gce; in the GKE cluster, a controller watches for that annotation and creates the GCLB based on the configuration we chose.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: istio-ingress
namespace: istio-system
annotations:
ingress.gcp.kubernetes.io/pre-shared-cert: "crt"
kubernetes.io/ingress.global-static-ip-name: "cluster-ip" # reserve global static IP
kubernetes.io/ingress.class: "gce"
networking.gke.io/v1beta1.FrontendConfig: http-redirect
spec:
# tls:
# - secretName: tls-cert-ingress
# hosts:
# - 'example.com'
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: istio-ingressgateway
port:
number: 80
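Provisioning the GCLB can take several minutes. You can watch the Ingress until the reserved global static IP is attached, and then check the HTTP-to-HTTPS redirect; the domain below is a placeholder:
$ kubectl get ingress istio-ingress -n istio-system -w
$ curl -I http://example.com # expect a 308 Permanent Redirect to HTTPS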
Conclusion
In summary, while GKE uses Ingress for service exposure by default, implementing a service mesh like Istio becomes crucial as the system grows in complexity and scale, in order to optimize resource utilization, simplify management, and control cost. Istio’s ingress gateway offers a centralized entry point for external traffic, eliminating the need for individual load balancers and providing advanced traffic-management capabilities for the services running in the GKE cluster.