One-node cluster configuration¶
Clone repositories¶
BMRA
$ git clone https://github.com/intel/container-experience-kits.git
$ cd container-experience-kits
$ git submodule update --init --recursive
The chosen environment variable for PROFILE is full_nfv.

NOTE: If you create a one-node cluster, it is important to rename the file <your_path>/containers.orchestrators.kubernetes.container-experience-kits/host_vars/node1.yaml to <your_path>/containers.orchestrators.kubernetes.container-experience-kits/host_vars/controller1.yaml.

The network plugin used is Calico 3.21.4 with MTU=1500. You can configure Calico in <your_path>/containers.orchestrators.kubernetes.container-experience-kits/group_vars/all.yml:
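A minimal sketch of the relevant settings in group_vars/all.yml, assuming the Kubespray-style variable names used by CEK (verify the exact names against your checkout):

```yaml
# group_vars/all.yml (fragment; variable names assumed from Kubespray conventions)
kube_network_plugin: calico
calico_mtu: 1500
```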
Also comment out the preflight check in the file <your_path>/containers.orchestrators.kubernetes.container-experience-kits/playbooks/preflight.yml:
# Multus is required for CEK deployment
#- name: assert that Multus is enabled in the config
#  assert:
#    that:
#      - "kube_network_plugin_multus"
#    fail_msg: "Multus must be enabled to have fully functional cluster deployment"
Deployment of Istio is automated and defined in <your_path>/containers.orchestrators.kubernetes.container-experience-kits/group_vars/all.yml (in the BMRA directory):
# Service mesh deployment
# https://istio.io/latest/docs/setup/install/istioctl/
# for all available options, please refer to roles/service_mesh_install/vars/main.yml
service_mesh:
  enabled: true
  profile: default
CPU Power Management¶
There are scripts to manage CPU power and performance settings in the CommsPowerManagement repository. The recommended script is power.py, which can set the uncore frequency, P-state, and frequency governor. Clone the repository to the machines on which you want to set these parameters.
Kubernetes CPU Management Policies¶
The CPU Manager static policy gives pods in the Guaranteed QoS class access to exclusive CPUs on the node. It is also preferable to explicitly reserve cores for the kubelet itself.
This can be configured in the file: /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.5 --max-pods=240 --reserved-cpus=94-95 --cpu-manager-policy=static"
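For context, a pod only lands in the Guaranteed QoS class (and, under the static policy, receives exclusive CPUs) when every container sets integer CPU requests equal to its limits; a minimal illustrative pod spec (the names and image here are placeholders, not from this deployment):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: exclusive-cpu-demo          # hypothetical name
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.5     # placeholder image
    resources:
      requests:
        cpu: "2"                    # integer CPU count -> eligible for exclusive cores
        memory: "1Gi"
      limits:
        cpu: "2"                    # must equal requests for Guaranteed QoS
        memory: "1Gi"
```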
"resources" to 1 CPU and memory: 1 Gi (both are applicable for "limits" and "requests"):
"proxy": {
"autoInject": "enabled",
"clusterDomain": "cluster.local",
"componentLogLevel": "misc:error",
"enableCoreDump": false,
"excludeIPRanges": "",
"excludeInboundPorts": "",
"excludeOutboundPorts": "",
"holdApplicationUntilProxyStarts": false,
"image": "proxyv2",
"includeIPRanges": "*",
"includeInboundPorts": "*",
"includeOutboundPorts": "",
"logLevel": "warning",
"privileged": false,
"readinessFailureThreshold": 30,
"readinessInitialDelaySeconds": 1,
"readinessPeriodSeconds": 2,
"resources": {
"limits": {
"cpu": "1",
"memory": "1Gi"
},
"requests": {
"cpu": "1",
"memory": "1Gi"
}
},
"statusPort": 15020,
"tracer": "zipkin"
},
"proxy_init": {
"image": "proxyv2",
"resources": {
"limits": {
"cpu": "2000m",
"memory": "1024Mi"
},
"requests": {
"cpu": "10m",
"memory": "10Mi"
}
}
},
Verify that the pods are in the Running state. If so, check the QoS class again; it should now be Guaranteed for all Nighthawk server pods.
Cluster configuration¶
The config folder contains several configuration files for the Istio and Nighthawk services used in the measurement. These files can be used to deploy the appropriate configuration. The examples below show the HTTP/1 configuration.
Nighthawk server configmap:
Nighthawk server deployment (replace hostname localhost with your node name in nighthawk-server-deploy.yaml for both keys: kubernetes.io/hostname):
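The two kubernetes.io/hostname keys typically appear as a nodeSelector and a node-affinity term; a sketch of what the relevant fragment of nighthawk-server-deploy.yaml may look like (the actual file layout can differ):

```yaml
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: localhost   # replace with your node name
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - localhost                 # replace with your node name
```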
Nighthawk server service:
Istio gateway and virtual service:
Istio ingress gateway, e.g. running on 8 vCPUs:
Enable sidecar injection:
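Sidecar injection is commonly enabled by labeling the workload namespace with istio-injection=enabled; a sketch assuming the Nighthawk pods run in the default namespace:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: default              # assumed namespace; use the one running the Nighthawk pods
  labels:
    istio-injection: enabled
```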
Change Istio horizontal scaling (targetCPUUtilizationPercentage) from 80% to 100%.
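With the autoscaling/v1 API, this is the targetCPUUtilizationPercentage field of the istio-ingressgateway HorizontalPodAutoscaler; a sketch of the edited object (replica bounds shown are placeholders, keep your existing values):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  minReplicas: 1             # placeholder; keep your existing bounds
  maxReplicas: 5             # placeholder; keep your existing bounds
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: istio-ingressgateway
  targetCPUUtilizationPercentage: 100   # changed from the default 80
```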
An additional port must be added to the istio-ingressgateway service, and its type must be changed to NodePort:
apiVersion: v1
kind: Service
metadata:
  annotations:
    ...
spec:
  ports:
  - name: status-port
    nodePort: 31075
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: http2
    nodePort: 32245
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 31124
    port: 443
    protocol: TCP
    targetPort: 8443
  - name: nighthawk
    nodePort: 32222
    port: 10000
    protocol: TCP
    targetPort: 10000
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}