Tuesday, February 17, 2015

Anglia Ruskin University - Onwards and Upwards for our VDI - What we do now

So, Virtual Desktop Infrastructure.  We've been a pretty early adopter, with an initial implementation for our students and staff starting about four years ago, in 2011.

This blog is intended to provide a basis for discussion within Anglia Ruskin IT Services (and more widely), but also information for suppliers who might be able to help us with our VDI aspirations - more on this to follow.

Below, I've outlined our technology in use at a high level.  The next blog entry will detail what we want to do next with VDI - our future use cases.

Our technology is centred on VMware View (Horizon), HP blade servers, Violin storage and zero clients, augmented by App-V for application virtualisation and ProfileUnity for easy management of profiles and application settings.

It's been tremendously successful, with an appreciable uplift in our student experience.  Essentially, it means that students (and our staff) get a good, consistent experience across our various locations and devices.  Our total combined staff and student user population is about 35,000.

To get a bit more specific:

End devices.

Our end devices are mainly zero clients, with about 950 at our Cambridge campus and 840 in Chelmsford - roughly 1,800 in total.  They're a mixture of Tera 1 and Tera 2 devices supporting PCoIP, our main display protocol.

We've recently been looking at the third-generation LG 23" All-In-One V Series as a possible replacement for some of our older end devices.

Servers.

Our Cambridge Campus

We're using a combination of HP BL460c G7 and Gen 8 servers spread over two HP c7000 blade chassis: 18 blades in total (all with 192GB of memory) - 6 Gen 8 and 12 G7.

Our Gen 8 blades also have a Teradici Apex 2800 (Tera 2) PCoIP offload card installed to provide better performance.

We used a rough-and-ready capacity planning rule of thumb of 50 VMs per host.  That means we ought to have capacity for about 900 concurrent VMs in Cambridge, putting aside any performance uplift we might get from our newest Gen 8 blades and the Apex 2800 PCoIP acceleration cards installed in them.  This ought to make 50 per blade a very safe performance bet, and that has mostly been true.  We do get the occasional slowdown in specific VMs - though it has been hard to establish where the bottleneck might be, despite investigation.  Saying that, the other 99% of the time we have great performance.
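To make the arithmetic explicit, here's a minimal sketch in Python (purely illustrative - the 18-host count and the 50-VMs-per-host figure come from the description above, and the helper name is my own):

    def concurrent_vm_capacity(hosts, vms_per_host=50):
        """Rough-and-ready VDI capacity estimate: desktop hosts x VMs per host."""
        return hosts * vms_per_host

    # Cambridge: 18 BL460c blades in the desktop cluster
    # (the four-blade management cluster is excluded).
    print(concurrent_vm_capacity(18))  # 900 concurrent VMs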

On top of that, we have four servers in our VDI management cluster - all G7s with 192GB of memory.

Our two c7000 Cambridge chassis have 5 spare slots in each - 10 in total.

Our Chelmsford Campus

Chelmsford is pretty similar - but with a slightly lower capacity.

14 servers in total - 5 Gen 8s with the Teradici Apex 2800 offload card and 9 G7s - all HP BL460c blades with 192GB of memory.

Our VDI management cluster comprises three G7s.

Theoretically, using the rough rule of thumb explained earlier, this means we can support about 700 concurrent users in Chelmsford.
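Continuing the illustrative sketch above with the same hypothetical helper gives the Chelmsford figure and a combined headline number:

    # Chelmsford: 14 blades in the desktop cluster.
    print(concurrent_vm_capacity(14))       # 700 concurrent VMs

    # Combined headline capacity across both campuses.
    print(concurrent_vm_capacity(18 + 14))  # 1600 concurrent VMs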

Our two c7000 Chelmsford chassis also have 5 spare slots in each - 10 in total.

Our client VMs.

We run Windows 7 Enterprise SP1 with 4GB of memory per VM, and over 100 applications, mostly streamed using App-V.
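Incidentally, the 4GB figure is a useful sanity check on our 50-VMs-per-host rule of thumb.  A back-of-envelope sketch (assuming, as a simplification, that guest memory is the binding constraint, and ignoring hypervisor overhead and ESXi memory-saving features such as transparent page sharing):

    host_memory_gb = 192  # per BL460c blade, as above
    vm_memory_gb = 4      # per Windows 7 client VM
    print(host_memory_gb // vm_memory_gb)  # 48 - so 50 VMs per host implies
                                           # a slight memory overcommit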

Other software versions.

VMware vSphere 5.1 and VMware Horizon View 5.2.

A note on end user experience.

It's worth saying that our emphasis is on providing a PC-like end user experience.  Our rationale isn't about stuffing our servers with as many VMs as possible to maximise value for money.  Our objectives are far more focused on delivering the best possible experience for our students and staff.  Of course, there are limits to this.  But if we compromised the end user experience too much, we'd be better off providing PCs.
