
Microsoft TechDay 2015 – Windows Server vNext

Some notes from Microsoft TechDay 2015, dedicated to Windows Server vNext, which took place yesterday at the Nagatino-Club co-working space. Note these are just quick, rough notes accompanied by some pictures from the event, nothing more.
UPDATE (25.06.2015): Video recordings of this event have been published and can be found on Channel 9 – Microsoft TechDay 2015.
01 - vNext Event Banner
Empty hall before beginning:
02 - Hall before beginning
Just a picture taken out of the window of the Nagatino-Club coworking space. If you look carefully enough you may see the towers of the Moscow City commercial district under the red arrow. 🙂 I used to work there for a while and IMHO it bears a striking similarity to Canary Wharf in London, though you won’t notice this looking at the picture below.
03

Session 1 – What’s new in Hyper-V (Mikhail Komarov, Hyper-V MVP)


Virtual TPM, which allows the use of BitLocker inside VMs. Though it’s a bit unclear how useful this is if an attacker can steal the VM files with the vTPM state included inside. Just an extra password/key?
2012 R2 allowed capping IOPS per virtual disk (setting a maximum allowed number) for a VM; 2016 allows doing this per VM, and we can also apply such limits via policy at the failover cluster level. MS calls this Storage Quality of Service (think of network QoS – now you have similar functionality for the storage subsystem).
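As a rough sketch of how this looks in PowerShell (cmdlet names as they appear in the 2016 previews; VM, disk, and policy names here are illustrative, and parameters may differ in your build):

```powershell
# Per-disk cap, available since 2012 R2: limit one virtual disk to 500 IOPS
Set-VMHardDiskDrive -VMName 'App01' -ControllerType SCSI -ControllerNumber 0 `
    -ControllerLocation 0 -MaximumIOPS 500

# Cluster-level Storage QoS policy (2016): create a policy on the cluster
# and attach it to every virtual disk of a VM
$policy = New-StorageQosPolicy -Name 'Silver' -MinimumIops 100 -MaximumIops 500
Get-VM -Name 'App01' | Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId
```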
A cluster running 2012 R2 can be upgraded to 2016 CTP without downtime, though this may not be the case for RTM. This is something interesting to try in a lab environment.
Better handling of storage subsystem failures – VMs are not moved to another cluster node immediately; instead the cluster waits to see whether the delay was actually caused by network latency (a temporary NIC or switch overload, for example). So the overall tendency is to maximize the use of commodity hardware, building redundant and resilient systems out of it with a set of Windows Server technologies. Potentially you can save on expensive servers and SAN devices and instead pay for commodity hardware and Microsoft licenses – in certain scenarios this will be significantly cheaper. There was some skepticism from community members that such a solution can provide data safety and recoverability of the same grade as expensive hardware storage systems (e.g. devices from EMC or IBM).
VHDX files can be resized online, while the VM is running.
Memory can be added without VM downtime, and the same is possible for vNICs (MS implemented this based on high demand from the Hyper-V technical community).
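A minimal sketch of these hot operations, assuming a running VM named App01 and a switch named External (both placeholders; exact parameter behavior may vary between preview builds):

```powershell
# Grow a VHDX attached to a running VM (works for SCSI-attached disks)
Resize-VHD -Path 'C:\VMs\App01\data.vhdx' -SizeBytes 200GB

# Hot-adjust memory and hot-add a network adapter while the VM is running
Set-VMMemory -VMName 'App01' -StartupBytes 8GB
Add-VMNetworkAdapter -VMName 'App01' -SwitchName 'External'
```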
Snapshots. Though these are not recommended for most production scenarios, people use them anyway. MS improved the snapshot process in response with a “production checkpoint” option, which uses VSS (and similar techniques for Linux guests) to make snapshots more appropriate for production VMs.
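Switching a VM to production checkpoints looks roughly like this (VM and checkpoint names are placeholders):

```powershell
# Use VSS-based production checkpoints for this VM; Hyper-V falls back to
# a standard checkpoint if a production one cannot be taken
Set-VM -Name 'App01' -CheckpointType Production

# Take a checkpoint of the running VM
Checkpoint-VM -Name 'App01' -SnapshotName 'Before-patching'
```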
Ability to connect with PowerShell into a guest OS running inside a VM without any need for network connectivity (PowerShell Direct).
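This is done by pointing the usual remoting cmdlets at a VM name instead of a computer name; the session goes over the VMBus, so no guest networking or firewall setup is required. Run from the Hyper-V host (VM name and credentials are illustrative):

```powershell
$cred = Get-Credential   # guest OS credentials

# One-off command inside the guest
Invoke-Command -VMName 'App01' -Credential $cred -ScriptBlock {
    Get-Service | Where-Object Status -eq 'Running'
}

# Interactive alternative
Enter-PSSession -VMName 'App01' -Credential $cred
```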
Hyper-V backup evolution. New architecture: change tracking is now built into the platform and no longer requires the file-system-level filters that were used before.
Hyper-V platform enhancements. New Hypervisor power management modes.
RemoteFX improvements. OpenGL 4.4 and OpenCL 1.1 API support. Larger size of video RAM which can be allocated and configured.
SLAT is a requirement for new Hyper-V both for client and server versions.
Session 2 – Nano Server: the future of Windows Server starts now (Georgiy Gadzhiev)
Session 2 Gadzhiev 1
Problems: numerous reboots, large server OS images, high demand for resources. Answer – Nano Server. Sounds like something we already heard with the introduction of Server Core? Nano Server takes this idea one step further, leaving only the core plus what is necessary for a particular role/workload, i.e. not only is the GUI removed but all unnecessary roles and features too, giving you a sort of container for one particular role only.
So 2008 introduced the Core/Full Server idea, 2012 added the Minimal Server Interface, and 2016 brings Nano Server.
Because of the small size of Nano Server VMs, they move faster between cluster nodes.
Azure/Cloud Platform System
Nano Server allowed removal of previously tightly-coupled components. It includes ASP.NET Core. Now we have containers for running apps isolated from the base OS, somewhat similar to what we have with App-V containers.
We can configure Nano Server with PowerShell Desired State Configuration. Remote management is done with Core PowerShell and WMI. It can be integrated into a DevOps tool set.
PowerShell replaces local management tools for Nano Server.
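A minimal DSC sketch of the kind of configuration you could push to a Nano Server node (node name, paths, and the chosen resource are illustrative assumptions):

```powershell
# Ensure a service is running on the target node; compiling the
# configuration produces a MOF that is pushed over WinRM
Configuration NanoBaseline {
    Node 'nano01' {
        Service WinRM {
            Name  = 'WinRM'
            State = 'Running'
        }
    }
}

NanoBaseline -OutputPath 'C:\DSC'
Start-DscConfiguration -Path 'C:\DSC' -ComputerName 'nano01' -Wait -Verbose
```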
Support for cloud apps: Win32 subset; CoreCLR, PaaS & ASP.NET 5
Development. Windows SDK and Visual Studio 2015 oriented towards Nano Server. Toolset can be loaded from VS collection.
Reverse Forwarders / Backward Compatibility. Nano Server can forward calls to old/removed DLLs to their available substitutes (reverse forwarders), and the call is processed successfully if it falls within the Nano Server SDK.
Even fewer updates, combined with the ability to avoid restarting Nano Server by restarting individual components instead.
Nano Server in Windows Server vNext. The distribution contains a NanoServer folder; there is no option for it in setup. Drivers have to be added using DISM; we can import the Server Core drivers package into Nano Server.
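Injecting drivers offline with DISM looks roughly like this (image index, mount directory, and driver paths are placeholders for your media):

```shell
rem Mount the Nano Server image from the media's NanoServer folder
dism /Mount-Image /ImageFile:D:\NanoServer\NanoServer.wim /Index:1 /MountDir:C:\Mount

rem Add all drivers found under C:\Drivers, then commit and unmount
dism /Add-Driver /Image:C:\Mount /Driver:C:\Drivers /Recurse
dism /Unmount-Image /MountDir:C:\Mount /Commit
```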
Important trend: the move to DevOps affects infrastructure specialists more and more – they now have to serve developers, allowing rapid provisioning of containers and providing self-service, and inevitably they have to learn how to code to some extent.
Session 3 – Containers in Windows Server (Mikhail Voitko)
The Docker project is the open source world’s interpretation of the containers idea.
Data center evolution: hardware servers, with a separate server for each workload; then virtualization arrived, allowing consolidation of workloads within one hardware server; next SAN and network virtualization entered the scene; on top of this the cloud idea came to fruition, with publicly available services (IaaS, PaaS, SaaS) we can use without knowing what is actually happening on the back end.
One of use cases for containers – DevOps.
The Docker project has existed since around 2013. The execution environment for containers can be Windows Server or Linux.
Container images allow injecting code into an image, which can then be saved into an image repository.
Container execution environment:
Old model:
Physical node
OS
Virtualization layer
Guest OSs
Apps inside of VMs
New one (containers):
Physical node
OS
Containers – apps within container
Virtualization layer
Guest OSs
Apps inside of VMs
Apps for containers can be commercially distributed.
Container provisioning is very quick, which makes it useful for development scenarios.
Cloud integration enables storing repository and/or containers in the cloud.
So the main idea: a container is a sort of middle ground between a full OS and VMs – it has VM-like isolation, but with better portability and speed of provisioning. For an IT pro it is just an extra option in addition to virtualization, with faster provisioning times. Some software has so many separate parts that allocating dedicated VMs for all of them is not justified, and containers can be a good fit for this use case.
Containers use cases: application testing, resource management and isolation on container level, rapid provisioning, tiering with containers as additional layer with final target to improve stability; distributed processing, scalability, web apps.
Container OS environments:
Server Core / traditional apps
Nano Server / cloud
Container Windows Server
Container Hyper-V
Container management: Docker, PowerShell, Other
Microservices structure.
The PID contains a part which identifies the container.
Session 4 – Storage Replica in Windows Server vNext (Mikhail Komarov, Hyper-V MVP)
Volume level replication
Block level volume replication
Synchronous replication/Asynchronous replication
Transport SMB 3.1.1
Geo-stretched cluster
Server-Server
DFSR replicates at the file level and cannot replicate files locked by apps; block-level replication ignores app locks.
Synchronous replication uses a journal on the source and destination servers, for which you can allocate an SSD drive. The network can be a bottleneck.
Asynchronous replication can work over a slow network.
SMB 3.1.1
Scalability and performance
SMB Multichannel – allows use of RDMA adapters, which cannot be teamed
SMB Direct (RDMA)
Requirements:
Datacenter edition
AD
Drives: GPT, not MBR. SAS JBOD arrays are supported. Windows allows storage tiering, placing frequently accessed data on SSD drives within a JBOD array.
Performance factors for synchronous replication:
Packet switching latency
Journal (flash drives – SSD, NVRAM)
Storage Replica is not:
a solution for a “shared nothing” cluster
a backup solution
a substitute for DFSR
There is a PowerShell module for Storage Replica.
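A sketch of server-to-server replication with that module (server, replication group, volume, and log names are illustrative; log sizing depends on your workload):

```powershell
# Replicate volume D: on SRV1 to D: on SRV2 synchronously,
# with a dedicated log volume E: on each side
New-SRPartnership -SourceComputerName 'SRV1' -SourceRGName 'RG1' `
    -SourceVolumeName 'D:' -SourceLogVolumeName 'E:' `
    -DestinationComputerName 'SRV2' -DestinationRGName 'RG2' `
    -DestinationVolumeName 'D:' -DestinationLogVolumeName 'E:' `
    -ReplicationMode Synchronous -LogSizeInBytes 8GB

Get-SRGroup   # check replication group status
```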
Azure Site Recovery is supported for Storage Replica.
Session 5 – Storage System based on servers with local disks (Sergey Gruzdov)
Session 5 Gruzdov 2
Storage Spaces Direct
Works only in an AD DS environment
Simple scalability by adding new nodes
Minimum 4, maximum 12 nodes
Demo steps – creating a cluster with Storage Spaces Direct:
test-cluster cmdlet
Minimum requirement – 2 available disks apart from the system disk
new-cluster cmdlet: specify name, nodes and static IP
get-storageenclosure
(get-cluster clustername).DASModeEnabled=1
(get-cluster clustername).DASModeOptimization=1
Get-StorageSubsystem -name clustername | get-physicaldisk |? CanPool -eq $true
New-StoragePool -StorageSubSystemName clustername -friendlyname poolname -writecachesizedefault 0
New-Volume
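The demo steps above fit together roughly as follows (cluster, node, pool, and volume names are placeholders; the DASMode cluster properties are specific to this CTP build and changed in later previews):

```powershell
# Validate and create the cluster
Test-Cluster -Node 'node1','node2','node3','node4'
New-Cluster -Name 'S2DCluster' -Node 'node1','node2','node3','node4' `
    -StaticAddress '10.0.0.50'

# Enable the direct-attached-storage mode on the cluster (CTP-era switches)
(Get-Cluster 'S2DCluster').DASModeEnabled = 1
(Get-Cluster 'S2DCluster').DASModeOptimization = 1

# Find poolable local disks, build a pool, and carve out a volume
Get-StorageSubSystem -Name 'S2DCluster' | Get-PhysicalDisk | ? CanPool -eq $true
New-StoragePool -StorageSubSystemName 'S2DCluster' -FriendlyName 'Pool1' `
    -WriteCacheSizeDefault 0 -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
New-Volume -StoragePoolFriendlyName 'Pool1' -FriendlyName 'Vol1' `
    -FileSystem CSVFS_ReFS -Size 1TB
```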
ReFS – speed and efficiency
Effective checkpoints and backups for VMs
Quick fixed disk creation (VHD/VHDX)
In ReFS we operate with metadata, and this speeds up certain operations (snapshot deletion and fixed disk creation become almost instant).
Session 6 – RDP enhancements (Dmitry Razbornov)
Session 6 Razbornov 1
 
GPU virtualization:
Software GPU WARP
vGPU RemoteFX
RemoteFX vGPU evolution started with a technology acquisition from Calista Technologies.
vNext:
OpenGL and OpenCL API
1 GB of dedicated video memory, independent of monitor count and resolution
4K support
Compliance with industry standards and recommendations
Codec investments – H.264/AVC
QHD (2560×1440), 4K (3840×2160)
Hardware offloading for H.264/AVC available on different devices
The problem here is that this codec was built for video, so the quality of static pictures/text can suffer. This has been improved.
An updated RDP client (v10) on Windows 8 is not enough; you need Windows 10 to use the RemoteFX vGPU improvements.