5V0-21.21 Exam Questions (70-79) | vSAN Exam Questions

To study for the VMware HCI Master Specialist exam | vSAN certification exam study notes

74. During a vSAN design workshop, the customer expressed the following requirements:

• The ESXi hosts will have their disk related hardware changed to become vSAN nodes.
• The default storage policy will be based on an FTT=2 (Failures to Tolerate).
• The vSAN Primary level of failures to tolerate (PFTT) will be set to a value of 2.

Based on the request, the architect was shown the following information about the current cluster:

• The number of existing ESXi hosts is 8.
• vSphere High Availability (HA) Admission control set to 37.5%.

How does this information impact the vSAN cluster design?

  • A. Secondary level of failures to tolerate must be defined with a value of 2.
  • B. The current configuration is recommended for the vSAN transformation.
  • C. The vSphere HA Admission Control must be set to a maximum of 25%.
  • D. Proactive HA should be enabled on the vSAN cluster.

Explanation:

To understand how the given information impacts the vSAN cluster design, let’s analyze each point:

  1. ESXi Hosts Becoming vSAN Nodes: This means that the existing hardware will be repurposed for vSAN. This transition requires ensuring that the hardware is compatible with vSAN requirements and that the necessary changes (like adding SSDs or adjusting network configurations) are made.
  2. Default Storage Policy – FTT=2 (Failures to Tolerate): This setting means that the vSAN cluster should be able to tolerate two host or disk failures without data loss. This requirement directly impacts the number of resources (like disk groups and capacity) needed.
  3. vSAN Primary Level of Failures to Tolerate (PFTT) = 2: PFTT indicates how many host, network, or disk group failures the cluster can tolerate. For a RAID-1 (mirrored) object, a value of 2 requires a minimum of 2 × FTT + 1 = 5 hosts (three data replicas plus two witness components, each on a separate host). Given there are 8 existing ESXi hosts, this requirement can be met.
  4. Number of Existing ESXi Hosts is 8: With 8 hosts, the cluster can meet the PFTT=2 requirement. However, the actual capacity and performance will depend on the individual host configurations.
  5. vSphere HA Admission Control Set to 37.5%: This setting reserves a percentage of cluster resources for failover capacity. At 37.5%, the cluster reserves enough capacity to tolerate the failure of three of its eight hosts (37.5% = 3/8). This is more conservative than necessary for a vSAN cluster with PFTT=2, as it reserves more resources than the defined level of failure tolerance requires.
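The host-count and reservation arithmetic above can be sketched in a few lines. This is an illustrative helper only (not a VMware API); it encodes the standard 2 × FTT + 1 rule for RAID-1 mirrored vSAN objects:

```python
def min_hosts_for_mirroring(ftt: int) -> int:
    """Minimum vSAN hosts for a RAID-1 (mirrored) object at a given FTT.

    A mirrored object needs ftt + 1 data replicas plus ftt witness
    components, each placed on a separate host: 2 * ftt + 1 hosts total.
    """
    if ftt < 0:
        raise ValueError("FTT cannot be negative")
    return 2 * ftt + 1

print(min_hosts_for_mirroring(2))  # 5 -> met by the existing 8 hosts
```

With FTT=2 the rule yields 5 hosts, so the customer's 8-host cluster satisfies the placement requirement with room to spare.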

Given these considerations:

  • A. Secondary Level of Failures to Tolerate Must Be Defined With a Value of 2: This isn’t necessarily required by the given information. Secondary level of failures to tolerate (SFTT) is used in vSAN stretched clusters or 2-node configurations, which is not mentioned in the scenario.
  • B. The Current Configuration is Recommended for the vSAN Transformation: The current configuration seems viable, but the HA Admission Control setting might be more conservative than necessary.
  • C. The vSphere HA Admission Control Must Be Set to a Maximum of 25%: This aligns with the PFTT=2 requirement, as it reserves capacity for exactly two host failures (25% = 2/8).
  • D. Proactive HA Should Be Enabled on the vSAN Cluster: Proactive HA is generally recommended for vSAN clusters, but it’s not directly related to the specific requirements provided.

Therefore, the most relevant impact based on the provided information seems to be related to the HA Admission Control setting. Option C, suggesting an adjustment of the vSphere HA Admission Control to a maximum of 25%, appears to be the most aligned with the provided requirements. However, the exact percentage should be calculated based on the specific capacity and performance needs of the vSAN cluster.

