Hyper-V
323 Topics

Windows Server 2016 | Hyper-V VM Network Adapter Issue
Hello, we have had an issue for the past week with our Hyper-V virtual machines not getting internet access despite being connected to an external Hyper-V network switch. To rule out the hardware, we tried switching the NIC associated with the external switch, but still had no luck. These systems are crucial to everyday company productivity, so we are trying to avoid reinstalling Hyper-V at the risk of losing these VMs: an app server and a SQL Server. Both VMs run Windows Server 2016, as does the domain controller. The computers in the office have no trouble connecting to the domain controller; it is only these VMs they cannot reach, because of the missing network connection. We are getting a new server next week, so any quick help would be appreciated. Thanks!

Hyper-V 2022 - VMSS logs constantly about Hyper-V-VmSwitch
Hi guys, any Hyper-V gurus around? I have a new 2022 host which will be deployed to production soon. I found by chance that the vmms process (Virtual Machine Management Service) constantly logs verbose messages about "Ioctl Begin ioctlCode: 0xD15" and "Ioctl End ioctlCode: 0xD15, delta (100 ns): 80, ntStatus: 0x80000005 (NT=Buffer Overflow)" with Event ID 0 and source Hyper-V-VmSwitch. I've looked around and had no luck finding the cause. It happens even if I stop all the VMs, and I even removed the vSwitch entirely - there is no vSwitch on the host and it still logs like hell. The source is the well-known SID S-1-5-18 (SECURITY_LOCAL_SYSTEM_RID). Has anyone seen this before, or have any idea what the issue could be here? Thanks for any ideas, Martin

Noob needs help with RDP Services
I am new to Windows Server management. I set up a 2019 server in a VM (Hyper-V). After installing the Remote Desktop Services role, I installed the RDP licenses we got from Microsoft. Now I am getting an error that the Remote Desktop licensing mode is not configured; it tells me to use Server Manager to specify the RD Connection Broker. Either I neglected to install it or to configure it, I'm not sure. Articles I find say to go to Server Manager -> Remote Desktop Services -> Overview... BUT that tells me I am logged in with a local account and must use a domain account to manage servers and collections. Again, we are not using a DC: this server is not part of a domain. We do not run AD internally, only Azure AD online. We have one program we still run internally and users RDP to it. Should I remove the service and reinstall? What about the licenses I already added - how do I keep them? Any assistance will be greatly appreciated... J

Feedback on ansible hyperv collection
Hi all, I'm looking for feedback of any sort on an Ansible collection I've started to manage VMs on Hyper-V. The repo is at gocallag/hyperv (github.com), and the collection can be obtained via ansible-galaxy collection install gocallag.hyperv. You can also find it over at Ansible Galaxy as gocallag.hyperv. There are probably still quite a few bugs in it, but I'm happy for any feedback.

No SET-Switch Team possible on Intel X710 NICs?
Hello, we have a lot of servers from different vendors using Intel X710-DA2 network cards. They work fine standalone, and they work fine if we create switch-independent teams using Server Manager, regardless of Dynamic or Hyper-V Port load balancing. But sadly we can't use those teams in Server 2025, because we have to create SET switch teams instead. As soon as we create a Hyper-V SET switch team with X710 cards, they have limited to no network communication: they can still talk to some servers, are slow to some others, and can't reach some at all. In particular, communication to other servers that also use X710 cards with SET switches is zero. SET teams with other cards, like the E810, work just fine. I've read several times that the X710 cards just won't work with SET, even since Server 2016, but I can't really give up on this, since we would have to replace a lot of them. We have tried disabling features like VMQ, RSS, and RSC, but couldn't make it work. Firmware and drivers are the most recent, but it happens with older versions too. Does anyone have a solution? Thank you!
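For reference, a SET team like the one described above is typically created and tuned with the standard Hyper-V cmdlets; a minimal sketch, with "NIC1"/"NIC2" as placeholder adapter names:

```powershell
# Create a SET (Switch Embedded Teaming) vSwitch across two physical ports
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" `
    -EnableEmbeddedTeaming $true -AllowManagementOS $true

# Inspect the team and, if needed, change the load-balancing algorithm
Get-VMSwitchTeam -Name "SETswitch"
Set-VMSwitchTeam -Name "SETswitch" -LoadBalancingAlgorithm HyperVPort
```

Switching the algorithm between Dynamic and HyperVPort is a cheap first test when a SET team misbehaves with a particular NIC model.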
Untagged VLAN - Server 2025 Hyper-V

Hi, I have a strange issue and am not finding a solution. We are using Server 2025 with a two-node Hyper-V cluster. Most of the machines use VLANs, which works fine. Some machines use no VLAN config, which on our switch configuration usually means "Access VLAN 1". With Server 2019 this worked fine. With Server 2025, on the same NIC port and the same server/NIC hardware, untagged VMs don't get any network connection. If I add a second untagged NIC to the VM, that NIC immediately gets an IP address and has a proper connection; if I then remove the first NIC, the second NIC stops working. It looks like something has changed with Server 2025 (maybe already with Server 2022). Do you have any idea what kind of problem I have found? Thanks, Jack

PowerShell counterpart for Failover Cluster Manager "Live Migration Settings"
In Failover Cluster Manager, there is "Live Migration Settings", where I can define which cluster networks should carry live migration traffic. Even after some research, I cannot find a PowerShell cmdlet that lets me do the same...
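For what it's worth, that dialog maps to the MigrationExcludeNetworks private parameter of the "Virtual Machine" cluster resource type rather than to a dedicated cmdlet. A minimal sketch, assuming the FailoverClusters module and a cluster network named "LiveMigration" (a placeholder name) that should carry the traffic:

```powershell
# Show the cluster networks and the current exclusion list
Get-ClusterNetwork | Format-Table Name, Id, Role
Get-ClusterResourceType -Name "Virtual Machine" |
    Get-ClusterParameter -Name MigrationExcludeNetworks

# Exclude every network except "LiveMigration"; the parameter holds a
# semicolon-separated list of IDs of networks that must NOT carry
# live migration traffic
$exclude = (Get-ClusterNetwork | Where-Object Name -ne "LiveMigration").Id -join ";"
Get-ClusterResourceType -Name "Virtual Machine" |
    Set-ClusterParameter -Name MigrationExcludeNetworks -Value $exclude
```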
Hyper-V SMB UNC LocalIP/name avoiding loopback

Hello, I have the following scenario: two standalone servers joined to a domain (I can't set up a failover cluster at the moment). What I want to achieve is to access the storage resources via SMB UNC paths in order to perform a storage migration from both nodes:

\\svhpv1\VMs
\\svhpv2\VMs

But when I try to have svhpv1 access \\svhpv1\VMs, I get an "Access Denied" error. I believe this happens because the access is attempted through the loopback adapter. I'm experiencing the same issue with svhpv2 when trying to access \\svhpv2\VMs. How can I avoid the loopback so I can properly configure my scenario? Any help is appreciated.

NUMA Problems after In-Place Upgrade 2022 to 2025
We have updated several Hyper-V hosts with AMD Milan processors from Windows Server 2022 to Windows Server 2025 using the in-place upgrade method. We are encountering an issue where, after starting about half of the virtual machines, the remaining ones fail to start with a resource-shortage error, even though about 70% of the host's RAM is free. We can only get them to start by enabling the "Allow Spanning" setting, but this reduces performance, and with so many free resources this shouldn't be happening. Has anyone else experienced something similar? What has changed in 2025 to cause this issue? The error is:

Virtual machine 'R*****2' cannot be started on this server. The virtual machine NUMA topology requirements cannot be satisfied by the server NUMA topology. Try to use the server NUMA topology, or enable NUMA spanning. (Virtual machine ID CA*****3-ED0E-4***4-A****C-E01F*********C4).

Event ID: 10002
<EventRecordID>41</EventRecordID>
<Correlation />
<Execution ProcessID="5524" ThreadID="8744" />
<Channel>Microsoft-Windows-Hyper-V-Compute-Admin</Channel>
<Computer>HOST-JLL</Computer>
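When chasing an error like the one above, it can help to compare the NUMA topology the hypervisor reports with what the VMs are asking for; a short sketch using standard Hyper-V cmdlets:

```powershell
# Topology as the hypervisor sees it (memory per NUMA node)
Get-VMHostNumaNode | Format-Table NodeId, MemoryTotal, MemoryAvailable

# Host-level NUMA spanning (the "Allow Spanning" checkbox);
# changing it requires a restart of the VM Management Service
Get-VMHost | Select-Object NumaSpanningEnabled
Set-VMHost -NumaSpanningEnabled $true
Restart-Service vmms

# Per-VM virtual NUMA limits
Get-VM | Get-VMProcessor |
    Format-Table VMName, Count, MaximumCountPerNumaNode, MaximumCountPerNumaSocket
```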
Bypass LBFO Teaming deprecation on Hyper-V and Windows Server 2022

Starting with Windows Server 1903 and 1909, Hyper-V virtual switches bound to an LBFO team are deprecated (see the documentation). The technology remains supported, but it will not evolve; the recommendation is to create a Switch Embedded Teaming (SET) team instead.

In practice

SET is a very interesting technology, but it comes with constraints: the interfaces used must have identical characteristics (manufacturer, model, link speed, configuration). Even if these constraints do not seem huge, we are very far from the flexibility of LBFO teaming, which has no such requirements at all. In practice, SET is recommended with network interfaces of 10 Gb or more, so it misses the typical LBFO use cases (using all the onboard NICs of a motherboard, home labs, refurbished hardware).

If SET cannot be used

As of Windows Server 2022, the Hyper-V Manager console can no longer create a virtual switch on an LBFO team; it shows an error saying that LBFO has been deprecated. However, it is still possible to create this virtual switch with PowerShell. First, create the team from your network cards using Server Manager; in my case the team uses LACP mode and the Dynamic load-balancing mode. Then run the PowerShell command below to create the virtual switch on top of the team created in the previous step:

New-VMSwitch -Name "LAN" -NetAdapterName "LINK-AGGREGATION" -AllowNetLbfoTeams $true -AllowManagementOS $true

In detail:
- The virtual switch will be named "LAN".
- The underlying LBFO team is named "LINK-AGGREGATION".
- The team remains usable to access the Hyper-V host (-AllowManagementOS $true).

You will see your network team up and running on the Hyper-V host. That's it!
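If you prefer to script the whole procedure, the team itself can also be created with PowerShell instead of Server Manager. A minimal sketch, assuming two physical adapters named "NIC1" and "NIC2" (placeholder names) and the same LACP/Dynamic settings as above:

```powershell
# Create the LBFO team (equivalent to the Server Manager step)
New-NetLbfoTeam -Name "LINK-AGGREGATION" -TeamMembers "NIC1","NIC2" `
    -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic -Confirm:$false

# Bind the Hyper-V virtual switch to the team, explicitly allowing LBFO
New-VMSwitch -Name "LAN" -NetAdapterName "LINK-AGGREGATION" `
    -AllowNetLbfoTeams $true -AllowManagementOS $true
```

Note that LACP requires a matching port-channel configuration on the physical switch; use -TeamingMode SwitchIndependent if the switch side is not configured for one.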