anan@think:~/works/openshift-versions/works$ cat install-config.yaml.bkup
additionalTrustBundlePolicy: Proxyonly
apiVersion: v1
baseDomain: qe.devcluster.openshift.com
compute:
- architecture: amd64
  hyperthreading: Disabled
  name: worker
  platform: {}
  replicas: 3
controlPlane:
  architecture: amd64
  hyperthreading: Disabled
  name: master
  platform: {}
  replicas: 3
metadata:
  name: weli-test
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-east-1
    vpc: {}
publish: External
Test Case Manual Log
OCP-29648
anan@think:~/works/openshift-versions/works$ cat install-config.yaml.bkup
additionalTrustBundlePolicy: Proxyonly
apiVersion: v1
baseDomain: qe.devcluster.openshift.com
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    aws:
      amiID: ami-01095d1967818437c
  replicas: 3
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform:
    aws:
      amiID: ami-0c1a8e216e46bb60c
  replicas: 3
metadata:
  name: weli-test
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-east-1
    vpc: {}
publish: External

# Check the AMI of the master nodes (should show ami-0c1a8e216e46bb60c)
echo "Master 节点 AMI:"
aws ec2 describe-instances \
--region "${REGION}" \
--filters "Name=tag:kubernetes.io/cluster/${INFRA_ID},Values=owned" \
"Name=tag:Name,Values=*master*" \
"Name=instance-state-name,Values=running" \
--output json | jq -r '.Reservations[].Instances[].ImageId' | sort | uniq
# Check the AMI of the worker nodes (should show ami-01095d1967818437c)
echo "Worker node AMI:"
aws ec2 describe-instances \
--region "${REGION}" \
--filters "Name=tag:kubernetes.io/cluster/${INFRA_ID},Values=owned" \
"Name=tag:Name,Values=*worker*" \
"Name=instance-state-name,Values=running" \
--output json | jq -r '.Reservations[].Instances[].ImageId' | sort | uniq
Master node AMI:
ami-0c1a8e216e46bb60c
Worker node AMI:
ami-01095d1967818437c
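
The queries above assume REGION and INFRA_ID were exported beforehand; a minimal sketch of how they could be set, assuming the installer's metadata.json is still in the assets directory (field names should be double-checked against your own metadata.json):

export REGION=$(jq -r '.aws.region' metadata.json)
export INFRA_ID=$(jq -r '.infraID' metadata.json)
# or, from the running cluster:
export INFRA_ID=$(oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}')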
OCP-21531
Verify the Pull Secret:
anan@think:~/works/openshift-versions/421nightly$ vi ../auth.json
anan@think:~/works/openshift-versions/421nightly$ oc adm release extract --command openshift-install --from=registry.ci.openshift.org/ocp/release:4.21.0-0.nightly-2025-12-22-170804 -a ../auth.json
anan@think:~/works/openshift-versions/421nightly$ du -h openshift-install
654M	openshift-install

Export variables:
anan@think:~/works/openshift-versions/work3$ export OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=registry.ci.openshift.org/ocp/release:4.21.0-0.nightly-2025-12-22-170804
anan@think:~/works/openshift-versions/work3$ export OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE=ami-01095d1967818437c

Use a different installer version to install the cluster:
anan@think:~/works/openshift-versions/work3$ ../421rc0/openshift-install version
../421rc0/openshift-install 4.21.0-rc.0
built from commit 8f88b34924c2267a2aa446dcdc6ccdd5260f9c45
release image quay.io/openshift-release-dev/ocp-release@sha256:ecde621d6f74aa1af4cd351f8b571ca2a61bbc32826e49cdf1b7fbff07f04ede
WARNING Found override for release image (registry.ci.openshift.org/ocp/release:4.21.0-0.nightly-2025-12-22-170804). Release Image Architecture is unknown
release architecture unknown
default architecture amd64

anan@think:~/works/openshift-versions/work3$ ../421rc0/openshift-install create cluster
WARNING Found override for release image (registry.ci.openshift.org/ocp/release:4.21.0-0.nightly-2025-12-22-170804). Release Image Architecture is unknown
INFO Credentials loaded from the "default" profile in file "/home/anan/.aws/credentials"
WARNING Found override for OS Image. Please be warned, this is not advised
INFO Successfully populated MCS CA cert information: root-ca 2035-12-23T03:35:54Z 2025-12-25T03:35:54Z
INFO Successfully populated MCS TLS cert information: root-ca 2035-12-23T03:35:54Z 2025-12-25T03:35:54Z
INFO Credentials loaded from the AWS config using "SharedConfigCredentials: /home/anan/.aws/credentials" provider
WARNING Found override for release image (registry.ci.openshift.org/ocp/release:4.21.0-0.nightly-2025-12-22-170804). Please be warned, this is not advised

Check the installed cluster version and the amiID that was used:
anan@think:~/works/openshift-versions/work3$ export KUBECONFIG=/home/anan/works/openshift-versions/work3/auth/kubeconfig
anan@think:~/works/openshift-versions/work3$ oc get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version   4.21.0-0.nightly-2025-12-22-170804   True   False   71m   Cluster version is 4.21.0-0.nightly-2025-12-22-170804

$ oc get machineset.machine.openshift.io -n openshift-machine-api -o json | \
jq -r '.items[] | .spec.template.spec.providerSpec.value.ami.id'
ami-01095d1967818437c
ami-01095d1967818437c
ami-01095d1967818437c
ami-01095d1967818437c
ami-01095d1967818437c
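
The machineset objects above only cover the worker pools; a sketch of the equivalent check for the control-plane Machines, assuming the standard machine-api role label:

$ oc get machines.machine.openshift.io -n openshift-machine-api \
    -l machine.openshift.io/cluster-api-machine-role=master \
    -o json | jq -r '.items[].spec.providerSpec.value.ami.id'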
OCP-22425
Cluster A:
anan@think:~/works/openshift-versions/work3$ oc get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-106-174.ec2.internal Ready control-plane,master 8h v1.34.2
ip-10-0-157-14.ec2.internal Ready control-plane,master 8h v1.34.2
ip-10-0-30-65.ec2.internal Ready worker 8h v1.34.2
ip-10-0-54-54.ec2.internal Ready worker 8h v1.34.2
ip-10-0-74-122.ec2.internal Ready worker 8h v1.34.2
ip-10-0-76-206.ec2.internal Ready control-plane,master 8h v1.34.2

anan@think:~/works/openshift-versions/work3$ oc get route -n openshift-authentication
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
oauth-openshift oauth-openshift.apps.weli-test.qe.devcluster.openshift.com oauth-openshift 6443 passthrough/Redirect None
anan@think:~/works/openshift-versions/work3$ oc get po -n openshift-apiserver
NAME READY STATUS RESTARTS AGE
apiserver-6b767844c6-2jztv 2/2 Running 0 8h
apiserver-6b767844c6-g4rck 2/2 Running 0 8h
apiserver-6b767844c6-jzv4z 2/2 Running 0 8h

anan@think:~/works/openshift-versions/work3$ oc rsh -n openshift-apiserver apiserver-6b767844c6-2jztv
Defaulted container "openshift-apiserver" out of: openshift-apiserver, openshift-apiserver-check-endpoints, fix-audit-permissions (init)
sh-5.1#

Cluster B:
anan@think:~/works/openshift-versions/works2$ oc get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-122-6.ec2.internal Ready control-plane,master 27m v1.34.2
ip-10-0-134-89.ec2.internal Ready control-plane,master 27m v1.34.2
ip-10-0-141-244.ec2.internal Ready worker 13m v1.34.2
ip-10-0-31-52.ec2.internal Ready worker 21m v1.34.2
ip-10-0-67-21.ec2.internal Ready control-plane,master 27m v1.34.2
ip-10-0-96-196.ec2.internal Ready worker 21m v1.34.2

anan@think:~/works/openshift-versions/works2$ oc get po -n openshift-apiserver
NAME READY STATUS RESTARTS AGE
apiserver-574bdcd758-j85sh 2/2 Running 0 10m
apiserver-574bdcd758-l98ph 2/2 Running 0 10m
apiserver-574bdcd758-p922j 2/2 Running 0 8m8s
anan@think:~/works/openshift-versions/works2$ oc rsh -n openshift-apiserver apiserver-574bdcd758-j85sh
Defaulted container "openshift-apiserver" out of: openshift-apiserver, openshift-apiserver-check-endpoints, fix-audit-permissions (init)
sh-5.1# curl -k https://oauth-openshift.apps.weli-test.qe.devcluster.openshift.com/healthz
ok
sh-5.1#
# OCP-22663 - [ipi-on-aws] Pick instance types for machines per region basis
## Test Case Overview
This test case validates that the OpenShift installer correctly selects instance types for AWS machines based on regional availability. The installer uses a priority-based fallback mechanism to select the best available instance type for each region.
## Current Implementation Behavior
The installer uses the following instance type priority list for AMD64 architecture:
1. `m6i.xlarge` (primary preference)
2. `m5.xlarge` (fallback)
3. `r5.xlarge` (fallback)
4. `c5.2xlarge` (fallback)
5. `m5.2xlarge` (fallback)
6. `c5d.2xlarge` (fallback)
7. `r5.2xlarge` (fallback)

The installer automatically checks instance type availability in the selected region and availability zones and picks the first available type from the priority list (see the sketch below).
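A rough shell emulation of that check, useful for predicting which type the installer will pick in a given region. This is not the installer's code, just the equivalent AWS CLI query; the region is a placeholder:

```bash
REGION=us-east-1   # region under test
for t in m6i.xlarge m5.xlarge r5.xlarge c5.2xlarge m5.2xlarge c5d.2xlarge r5.2xlarge; do
  # Ask EC2 whether this type is offered anywhere in the region
  offered=$(aws ec2 describe-instance-type-offerings \
    --region "${REGION}" --location-type region \
    --filters "Name=instance-type,Values=${t}" \
    --query 'InstanceTypeOfferings[].InstanceType' --output text)
  if [ -n "${offered}" ]; then
    echo "First available type in ${REGION}: ${t}"
    break
  fi
done
```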
## Test Steps
### Test Case 1: Standard Region with m6i Available
Objective: Verify that the installer selects `m6i.xlarge` when it is available in the region.

Prerequisites:
- An AWS region where `m6i` instance types are available (e.g., `us-east-1`, `us-west-2`, `ap-northeast-1`, `eu-west-1`)

Steps:
1. Create the Install Config asset.
2. Modify the `region` field in `install-config.yaml`.
3. Generate the Kubernetes manifests (see the command sketch below).
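
A sketch of the commands behind these steps, assuming a fresh assets directory; the machineset manifest path reflects the usual naming and may differ between releases:

```bash
mkdir tc1 && cd tc1
openshift-install create install-config      # interactive; choose the aws platform
sed -i 's/^\(\s*region:\).*/\1 us-east-1/' install-config.yaml
openshift-install create manifests
# Instance type the installer picked for the worker machinesets
grep -h instanceType openshift/99_openshift-cluster-api_worker-machineset-*.yaml
```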
Expected Result: The generated machine manifests should use `m6i.xlarge` as the instance type.

### Test Case 2: Region Where m6i is Not Available
Objective: Verify that the installer falls back to `m5.xlarge` when `m6i` is not available in the region.

Prerequisites:
- An AWS region where `m6i` instance types are not available (e.g., `eu-north-1`, `eu-west-3`, `us-gov-east-1`)

Steps:
1. Create the Install Config asset.
2. Modify the `region` field in `install-config.yaml`.
3. Generate the Kubernetes manifests (see the command sketch below).
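
The same sketch as in Test Case 1, pointed at a region from the list above (again assuming the default manifest naming):

```bash
mkdir tc2 && cd tc2
openshift-install create install-config
sed -i 's/^\(\s*region:\).*/\1 eu-north-1/' install-config.yaml
openshift-install create manifests
grep -h instanceType openshift/99_openshift-cluster-api_worker-machineset-*.yaml
```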
Expected Result: The installer should detect that `m6i.xlarge` is not available and fall back to `m5.xlarge`.

### Test Case 3: Full Cluster Installation Verification
Objective: Verify that the selected instance type works correctly during actual cluster installation.
Prerequisites:
Steps:
1. Use the install config from Test Case 1 or Test Case 2
2. Launch the cluster:
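
A sketch of the launch plus a post-install check, assuming the assets directory from Test Case 1 and its generated kubeconfig:

```bash
openshift-install create cluster --dir tc1 --log-level info
export KUBECONFIG=$PWD/tc1/auth/kubeconfig
# Instance type actually used by each machine
oc get machines.machine.openshift.io -n openshift-machine-api -o json \
  | jq -r '.items[] | "\(.metadata.name)  \(.spec.providerSpec.value.instanceType)"'
```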
Expected Result: The cluster installs successfully and all machines come up with the instance type selected in the manifests.
## Additional Verification
### Verify Instance Type Selection Logic
To understand why a specific instance type was selected, check the installer logs:
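For example (a sketch; the exact log wording varies between installer releases):

```bash
grep -i "instance type" tc1/.openshift_install.log
```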
Look for log messages related to instance type selection and availability checks.
### Manual Instance Type Availability Check
You can manually verify instance type availability in a region using AWS CLI:
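For instance, to see which availability zones in a region offer a given type (region and type are placeholders):

```bash
aws ec2 describe-instance-type-offerings \
  --region eu-north-1 \
  --location-type availability-zone \
  --filters "Name=instance-type,Values=m6i.xlarge" \
  --query 'InstanceTypeOfferings[].Location' --output text
```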
## Notes
1. Instance Type Availability: Instance type availability can vary by region and availability zone. The installer automatically handles this by checking availability and selecting the best option.
2. Regional Overrides: If specific regions require different instance type priorities, they can be configured in `pkg/types/aws/defaults/platform.go` using the `defaultMachineTypes` map.
3. Architecture Support: This test case focuses on the AMD64 architecture. ARM64 uses different instance types (e.g., `m6g.xlarge`).
4. Version Compatibility: Recent installer releases prefer `m6i.xlarge` and fall back to `m5.xlarge` when `m6i` is not available; earlier releases defaulted to `m5.xlarge`, and older ones to `m4.xlarge`.

## Implementation Details
This section explains how the instance type selection logic works in the codebase, including the key components and their interactions.
### 1. Instance Type Defaults Definition
Location: `pkg/types/aws/defaults/platform.go`

The `InstanceTypes()` function defines the default priority list of instance types based on architecture and topology.

Key Points:
- The per-architecture (and per-region, where overridden) priority lists live in the `defaultMachineTypes` map.

### 2. Instance Type Selection Logic
Location: `pkg/asset/machines/aws/instance_types.go`

The `PreferredInstanceType()` function selects the best available instance type by checking availability in the specified zones. The `getInstanceTypeZoneInfo()` function queries the AWS EC2 API to check instance type availability.
### 3. Master Machine Configuration
Location: `pkg/asset/machines/master.go`

The master machine configuration integrates the instance type selection logic.

Key Points:
- Calls `InstanceTypes()` to get the priority list
- Calls `PreferredInstanceType()` to select the best available type

### 4. Machine Manifest Generation
Location: `pkg/asset/machines/aws/machines.go`

The `Machines()` function generates Kubernetes Machine manifests with the selected instance type. The `provider()` function creates the AWS machine provider configuration.

Key Points:
- The instance type written into each manifest is the one returned by `PreferredInstanceType()`

### Execution Flow Summary
1. User creates install-config → specifies the region (and optionally an instance type)
2. Master machine configuration (`master.go`) → calls `InstanceTypes()` for the priority list and `PreferredInstanceType()` to select the best available type
3. Instance type selection (`instance_types.go`) → checks availability in the target zones via the EC2 API
4. Machine manifest generation (`machines.go`) → writes the selected instance type into the Machine manifests

## Related Code References
- `pkg/types/aws/defaults/platform.go`
- `pkg/asset/machines/aws/instance_types.go`
- `pkg/asset/machines/aws/machines.go`
- `pkg/asset/machines/master.go`