clustering
172 Topics

PowerShell counterpart for Failover Cluster Manager "Live Migration Settings"
In Failover Cluster Manager, there is a "Live Migration Settings" dialog where I can define which cluster networks I want to carry live migration traffic. Even after some research, I cannot find a PowerShell cmdlet that lets me do the same.
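There is no single dedicated cmdlet for that dialog; as far as I know, the setting lives as the MigrationExcludeNetworks parameter on the "Virtual Machine" cluster resource type, so a sketch along these lines should be the PowerShell counterpart (the network name "LiveMigration" is an assumption; substitute your own live migration network name):

```powershell
# Sketch: exclude every cluster network except the one dedicated to live migration.
# "LiveMigration" is a hypothetical network name - replace it with yours.
$excluded = (Get-ClusterNetwork | Where-Object { $_.Name -ne "LiveMigration" }).Id
Get-ClusterResourceType -Name "Virtual Machine" |
    Set-ClusterParameter -Name MigrationExcludeNetworks -Value ([string]::Join(";", $excluded))

# Verify the result
Get-ClusterResourceType -Name "Virtual Machine" |
    Get-ClusterParameter -Name MigrationExcludeNetworks
```

The dialog in Failover Cluster Manager writes to the same parameter, so changes made either way should stay in sync.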
iSCSI Target servers - high availability?

Hello, I have two VMs configured as iSCSI target servers, each with a 600 GB VHDX file, and two more VMs configured as file servers in a failover cluster. The iSCSI servers should serve out the data that is shared by the failover cluster file server. I would also like to configure the iSCSI target servers in a high-availability mode, so that they replicate their data and, if one of the iSCSI target servers goes down, the shares and data are still accessible. How would I go about doing this? So far I have tried setting up Storage Replica, but since I only have one site, it does not let me replicate from the disk that currently has data to the second iSCSI disk. I also tried the iSCSI Target role in Failover Cluster Manager, but that puts me back in the same situation: if the storage server hosting the virtual iSCSI disk goes down, I lose access to all data.
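Storage Replica's server-to-server mode does not actually require multiple sites; it requires an empty destination data volume of equal size plus a log volume on each server. A hedged sketch (all computer, volume, and replication-group names here are hypothetical):

```powershell
# Sketch: server-to-server Storage Replica between the two iSCSI target VMs.
# ISCSI01/ISCSI02, D:/L:, and RG01/RG02 are placeholder names - adjust to your setup.
Test-SRTopology -SourceComputerName ISCSI01 -SourceVolumeName D: -SourceLogVolumeName L: `
    -DestinationComputerName ISCSI02 -DestinationVolumeName D: -DestinationLogVolumeName L: `
    -DurationInMinutes 10 -ResultPath C:\Temp

# If the topology report looks viable, create the partnership
New-SRPartnership -SourceComputerName ISCSI01 -SourceRGName RG01 `
    -SourceVolumeName D: -SourceLogVolumeName L: `
    -DestinationComputerName ISCSI02 -DestinationRGName RG02 `
    -DestinationVolumeName D: -DestinationLogVolumeName L:
```

Note that the destination volume is overwritten and stays dismounted while it is the replica, so failover is a manual or scripted direction switch rather than fully automatic.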
Proactive private share #microsoft

Working with many companies has been part of my job for the last two years. I decided to implement and create a new public community group at Microsoft, with a public share, like a universal patch inside a giant organization and its partner companies, with Microsoft taking the lead position. The proactive plan for the future is to grow a worldwide network with our clients and partners.
BLOG: Windows Server / Azure Local keeps setting Live Migration to 1 - here is why

Affected products: Windows Server 2022, Windows Server 2025; Azure Local 21H2, Azure Local 22H2, Azure Local 23H2; Network ATC

Dear Community,

I have seen numerous reports from customers running Windows Server 2022 or Azure Local (Azure Stack HCI) that the live migration setting is constantly changed back to 1 on each Hyper-V host, as reflected both in PowerShell and in the Hyper-V host settings. One customer had previously set the value to 4 via PowerShell, so he could prove it had been a different value at an earlier time.

At first I did not research intensely why the configuration changed over time, but then I stumbled across the cause quite accidentally while fetching all parameters of Get-Cluster. According to an article, an LCU back in September 2022 changed the default behaviour and allows the number of parallel live migrations to be specified at cluster level. The new live migration default appears to be 1 at cluster level, and this forces the values on the Hyper-V nodes to 1 accordingly. In contrast to the cmdlet documentation, the value is not 2, which would make more sense. The change is quite obscure, as it is not documented in the LCU KB5017381 itself, but only referenced in the documentation for the PowerShell cmdlet Get-Cluster. Frankly, none of these are places that customers or partners check regularly to spot such relevant feature improvements or changes.

"Beginning with the 2022-09 Cumulative Update, you can now configure the number of parallel live migrations within a cluster. For more information, see KB5017381 for Windows Server 2022 and KB5017382 for Azure Stack HCI (Azure Local), version 21H2.

(Get-Cluster).MaximumParallelMigrations = 2

The example above sets the cluster property MaximumParallelMigrations to a value of 2, limiting the number of live migrations that a cluster node can participate in. Both existing and new cluster nodes inherit this value of 2 because it's a cluster property. Setting the cluster property overrides any values configured using the Set-VMHost command."
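The quoted guidance boils down to a one-liner; checking and raising the cluster-wide value looks like this when run on any cluster node:

```powershell
# Read the current cluster-wide live migration limit
(Get-Cluster).MaximumParallelMigrations

# Raise it to 2 (the recommended value); all existing and new nodes inherit it
(Get-Cluster).MaximumParallelMigrations = 2

# Confirm a node has picked it up
Get-VMHost | Select-Object Name, MaximumVirtualMachineMigrations
```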
Network ATC in Azure Local 22H2+ and Windows Server 2025+: When using Network ATC in Windows Server 2025 and Azure Local, it will set live migration to 1 by default and enforce this across all cluster nodes, disregarding the cluster setting above and the local Hyper-V settings. To change the number of live migrations, you can specify a cluster-wide override in Network ATC.

Conclusion: The default values for live migration have been changed. The global cluster setting, or Network ATC, forces these down to the Hyper-V hosts on Windows Server 2022+ / Azure Local nodes and ensures consistency. Previously we thought this happened after using Windows Admin Center (WAC) when opening the WAC cluster settings, but that was not the initial cause.

Finding references: Later in the day, as my interest in this change grew, I found an official announcement. In agreement with another article on optimizing live migrations, the default value should be 2, but for some reason at most customers, even on fresh installations and clusters, it is set to 1.

TLDR:
1. Stop bothering with changing the live migration setting manually, via PowerShell, or via DSC / policy.
2. Today and in future, train your muscle memory to change live migration at cluster level with Get-Cluster, or via Network ATC overrides. These are pushed down to all nodes almost immediately and are automatically corrected if there is any configuration drift on a node.
3. Check and set the live migration value to 2 as the default, and follow these recommendations: Optimizing Hyper-V Live Migrations on an Hyperconverged Infrastructure | Microsoft Community Hub, Optimizing your Hyper-V hosts | Microsoft Community Hub
4. You can stop blaming WAC or overeager colleagues for changing the LM settings to undesirable values over and over. Starting with Windows Admin Center (WAC) 2306, you can set the Live Migration settings at cluster level under Cluster > Settings.

Happy Clustering! 😀
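For the Network ATC path mentioned above, my understanding is that the cluster-wide override is applied roughly as follows (cmdlet names are from the Network ATC module as I know it; verify against your installed module version):

```powershell
# Sketch: override the live migration maximum cluster-wide via Network ATC
$override = New-NetIntentGlobalClusterOverrides
$override.MaximumVirtualMachineMigrations = 2
Set-NetIntent -GlobalClusterOverrides $override

# Network ATC then enforces the value on every node and reverts any drift
```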
Feature Installation Error

I am facing this issue on Windows Server 2019 Standard. I have also tried to solve it by selecting the sources\sxs path from the OS media, but I still get the same error. I mistakenly removed the .NET Framework feature from this server, and I have been facing this issue ever since. Please help me solve it.
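For a removed .NET Framework feature, reinstalling from mounted OS media usually looks like the sketch below. The drive letter D: is an assumption; use wherever your ISO is mounted, and make sure the media matches the server's installed build, since a sources\sxs folder from a different build is a common cause of this exact failure:

```powershell
# Reinstall .NET Framework 3.5 from the OS media (assumes media mounted at D:)
Install-WindowsFeature -Name NET-Framework-Core -Source D:\sources\sxs

# Alternative using DISM
DISM /Online /Enable-Feature /FeatureName:NetFx3 /All /Source:D:\sources\sxs /LimitAccess
```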
ASHCI cluster with different RAM amounts

I have been looking through the server requirements and I cannot find a definitive answer on whether I need the same amount of RAM per ASHCI host. I appreciate it may not be best practice, because of failing over VMs within the cluster and the risk of over-provisioning RAM and ending up in a sticky situation, but given that you can not only set preferred owners but also possible owners, you should be able to account for that.

tldr: In an ASHCI cluster, can I have one node with 4TB of RAM and 5 nodes with 2TB of RAM?
ASHCI cluster different RAM amounts per node

Is it a supported model for ASHCI to have one node in a cluster with a different amount of RAM? I appreciate that you can specify preferred owners and possible owners on VMs to restrict or allow VMs to run on specific hosts, and I understand that under normal circumstances you probably would not want hosts with different amounts of RAM, to avoid over-provisioning and then getting into difficulties upon losing hosts. However, I am looking at making one of my ASHCI hosts a SQL server; I do not want to remove it from the cluster because of the impact on storage, but I need to increase the RAM, and if I can do that on just a single host I would prefer that.
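The preferred/possible-owner mechanics both of these posts rely on can be scripted; a sketch with hypothetical names ("SQL-VM" for the clustered VM group, BigNode for the large-RAM host):

```powershell
# Preferred owners are set per group; the order expresses preference
Set-ClusterOwnerNode -Group "SQL-VM" -Owners BigNode, Node2

# Possible owners are set per resource; restrict the VM resource to the big node
Get-ClusterResource -Name "Virtual Machine SQL-VM" |
    Set-ClusterOwnerNode -Owners BigNode

# Review the result
Get-ClusterOwnerNode -Group "SQL-VM"
```

Keep in mind that pinning a VM to a single possible owner trades away automatic failover for that VM: if BigNode goes down, the VM stays down.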
Failover Cluster Manager error when not running as administrator (on a PAW)

I've finally been trying (hard) to use a PAW, where the user I'm signed into the PAW as does NOT have local admin privileges on that machine, but DOES have admin privileges on the servers I'm trying to manage. The most recent hiccup is that Failover Cluster Manager, aka cluadmin.msc, doesn't seem to work properly if you don't have admin privileges on the machine you're running it from. Obviously, on a PAW your server admin account is NOT supposed to be an admin on the PAW itself; you're just a standard user. The error I get when opening Failover Cluster Manager is as follows:

Error
The operation has failed.
An unexpected error has occurred.
Error Code: 0x800702e4
The requested operation requires elevation.
[OK]

Which is nice. I've never tried to run cluadmin as a non-admin, because historically everyone always just ran everything as a domain admin (right?), so you were an admin on everything. But this is not so in the land of PAW. I've run cluadmin on a different machine where I am a local admin, and it works fine. I do not need to run it elevated to make it work properly; it just works. E.g. open PowerShell, cluadmin <enter>, where PowerShell has NOT been opened via "Run as administrator" (aka UAC). I've tried looking for some kind of access-denied message via procmon but can't see anything obvious (to my eyes anyway). A different person on a different PAW has the same issue. Is anyone successfully able to run Failover Cluster Manager on a machine where you're just a standard user?
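While the cluadmin.msc behaviour is unresolved, the FailoverClusters PowerShell module (part of the RSAT Failover Clustering tools) is a possible workaround, since it talks to the remote cluster under your account's remote admin rights and should not need local elevation on the PAW (worth verifying in your environment). A sketch with a hypothetical cluster name:

```powershell
# Manage the remote cluster from the PAW without cluadmin.msc
# CLUSTER01 is a placeholder - use your cluster's name
Get-Cluster -Name CLUSTER01 | Format-List *
Get-ClusterNode -Cluster CLUSTER01
Get-ClusterGroup -Cluster CLUSTER01 | Sort-Object OwnerNode
```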
The resources:

2 physical Windows Server 2022 Datacenter servers, each with:
1. 2 sockets, 24 physical CPU cores
2. 64 GB RAM
3. 2 x 480 GB SSD in RAID 1 (OS partition)
4. 4 x 3.37 TB SSD in RAID 5, approximately 10 to 11 TB total after RAID
5. 4 x 10 Gb NICs

1 physical Windows Server 2016 Standard server with the DC role

The desired outcome: I want to build a 2-node Hyper-V cluster with S2D, combine the storage of the two servers into one cluster shared volume, store the VMs on it, and get a high-availability solution that fails over automatically when one of the nodes goes down, without using an external storage source such as SAS. Is this scenario possible?
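Broadly yes, with two caveats: S2D requires the capacity disks to be presented without hardware RAID (HBA / pass-through mode, so the RAID 5 set would need to be dissolved), and a two-node cluster needs a witness, which the 2016 DC could host as a file share witness. A sketch of the build with hypothetical names:

```powershell
# Validate and build a two-node S2D cluster (NODE1/NODE2 are placeholders)
Test-Cluster -Node NODE1, NODE2 -Include "Storage Spaces Direct", Inventory, Network, "System Configuration"
New-Cluster -Name S2DCLUSTER -Node NODE1, NODE2 -NoStorage

# Quorum witness on a file share, e.g. hosted by the DC (hypothetical path)
Set-ClusterQuorum -Cluster S2DCLUSTER -FileShareWitness \\DC01\Witness

# Enable S2D and carve a cluster shared volume from the pool
Enable-ClusterStorageSpacesDirect -CimSession S2DCLUSTER
New-Volume -CimSession S2DCLUSTER -FriendlyName "CSV01" -FileSystem CSVFS_ReFS `
    -StoragePoolFriendlyName "S2D*" -Size 5TB
```

With two nodes, S2D uses two-way mirroring, so usable capacity will be roughly half of the raw pool rather than the RAID 5 figure quoted above.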