As I said in my earlier blog, we have VDI aspirations ('Hosted Virtual Desktop' is the term Gartner tends to use).
This particular blog talks about our intention to push our VDI use cases further. That means we need more hardware and software - discussed in outline below.
Our current VDI delivers a large range of applications, but there are gaps. What we do not do terribly well at the moment is provide applications and content that are heavily graphics- and processor-dependent.
As a consequence, there are a number of niche (and not so niche) applications that we can't provide to our students or staff through our VDI environment. That means we are stuck with thick clients for these applications, and students needing to visit specific physical locations to access them. We want to fix this if we can: being able to access applications from almost any device at any time is a significant gain.
We're also the Entrepreneurial University of the Year, and maybe we can put together a package of software that might help budding business entrepreneurs - web site software, maybe some free hosting? What else?
Specifically, we're talking about applications like the following (I'll add a more complete list at the end later):
Google Earth, CryEngine, 3D Studio Max, Autodesk Maya, Adobe After Effects CS6, Adobe Photoshop CS6, Epic, Adobe Premiere Pro CC, Autodesk AutoCAD Architecture 2014, Autodesk Structural Detailing 2015, Autodesk Inventor, ArcGIS 10, WorldWide Telescope, Adobe Audition CC, Adobe Edge Animate CC 2014.1, Adobe Fireworks CS6 (included with CC), Adobe Flash Professional CC 2014, Adobe Illustrator CC 2014.
We probably need more fast storage to run these new applications too but that'll be the topic of a further blog post.
So, we think we need to complement our environment with NVIDIA graphics processors. With the new vGPU profiles supported by VMware, we can hopefully use both Grid K1 and K2 cards. Particular users will be assigned particular vGPU profiles, meaning we can support a terrific range of graphics-heavy applications. The bulk of our users will run on the lesser K1 cards, with the K2s dedicated to heavy-use and specialist users.
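For a feel of what the profile assignment looks like under the covers: when an NVIDIA Grid vGPU shared PCI device is attached to a desktop VM in vSphere, the VM's .vmx file gains entries along these lines (the specific profile names chosen here are illustrative assumptions drawn from the K1/K2 profile ranges, not our final choices):

```
pciPassthru0.present = "TRUE"
pciPassthru0.vgpu = "grid_k140q"
```

A power user's VM would carry a K2 profile such as "grid_k220q" instead; the profile string determines how much of a physical GPU and how much frame buffer each desktop is granted.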
The relevant NVIDIA graphics boards look like this:
A list of certified applications for the Grid cards is here.
Some capacity planning is obviously vital.
Should we host everything in our main Cambridge data centre?
If we did this then we'd cut down on hardware and complexity. The disadvantages are clearly around disaster recovery and, perhaps, end-user performance. But we do have 10Gbit links with about 10ms latency, so it might be fine.
We would engineer our environment in Cambridge to have no single point of failure. If we lost both of our resilient 10Gbit comms links then we'd be in trouble - but only for a limited number of users at Chelmsford. This might be a reasonable risk to accept.
The Capacity Requirements
We think the total user community who will benefit from this new graphics-capable technology is pretty small: in Cambridge, probably about 100 concurrent users; in Chelmsford, probably 50-60 concurrent users. These are deliberately high estimates.
Of the total number who could use the new graphics capability, perhaps 20 users in Cambridge and 10 in Chelmsford would make use of the top-end 'Designer/Power User' capability - i.e. the K2 Grid.
What do we need to do to verify these estimates? Clearly, identify the applications that need the capacity. I imagine we will not really be able to do this until we create a working proof of concept. But note that applications like ArcGIS are certified with shared K2 but not K1.
What would the NVidia Grid capacity look like?
Each K2 card running the K220Q profile supports 16 users (with 512MB of graphics memory each); two cards gives us 32. We can probably get by with two or three K2s, as long as we have at least two separate servers to host them.
Each K1 card running the K140Q profile supports 32 users (again with 512MB of graphics memory each). If we need to support about 120 concurrent users then we need four K1s, with at least three in separate servers.
You'll notice I've assumed we only need 512MB of graphics memory. This is a guess.
That means we need at least five extra servers to support the new cards.
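The card arithmetic above can be sketched as a quick back-of-the-envelope calculation. The per-card user counts and concurrency figures below are the guesses from this post (including the 512MB-per-user assumption), not vendor-certified numbers:

```python
import math

# Assumed figures from the estimates above - guesses, not vendor data.
USERS_PER_K2 = 16   # K2 card, K220Q profile, 512MB per user
USERS_PER_K1 = 32   # K1 card, K140Q profile, 512MB per user

def cards_needed(concurrent_users: int, users_per_card: int) -> int:
    """Round up: a partially used card is still a whole card."""
    return math.ceil(concurrent_users / users_per_card)

# Power users on K2: ~20 in Cambridge + ~10 in Chelmsford.
k2_cards = cards_needed(20 + 10, USERS_PER_K2)   # -> 2

# Everyone else on K1: ~120 concurrent users in total.
k1_cards = cards_needed(120, USERS_PER_K1)       # -> 4
```

Which agrees with the estimates above: a minimum of two K2s and four K1s, spread across at least five servers for resilience.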
Our ideal architecture (if it works) would also put a Teradici APEX 2800 card into each server, to get extra PCoIP acceleration.
What about the servers?
The question is: can we still use our existing chassis-based HP Gen8 (perhaps Gen9) blades?
Our belief is that we at least need a PCI expansion blade to fit the K1/K2s and the APEX 2800 into our existing Gen8 BL460c blades. However, the BL460s are not supported by NVIDIA. More VMware View K1/K2 information here.
The likelihood is that the BL460c is a dead end, and we ought to plan on that basis - running an unsupported VDI environment is not a terribly good idea.
This is where we need help from HP and NVIDIA: can we do this, or do we need to move to another server architecture?
The NVIDIA server compatibility list (click here) suggests only a chassis-based option, the specialist workstation WS460c (I'll discount the SL250 as an HPC-focused solution). VMware certification for the WS460c is here. This would be a pity, as it would undermine our 'standard blade' approach. But they're still blades and we could still utilise our c7000s, so perhaps not so bad. The APEX 2800s are also supported in the WS460s - so this is looking feasible.
But will a K2 (and/or K1) and an APEX card fit in the same server?
So, if it all works out this would mean five new WS460s.
So, HP and NVidia (and helpful solution providers) what's the best approach?
What about our VMware Desktop Virtualisation Software?
All of this will be dependent on an upgrade of our existing VMware Horizon View 5.2 environment to the latest version, Horizon 6.
What have we discovered?
13/3/15 - Good news
Well, happily, we have had the above approach informally validated by a helpful supplier. Even better, they have a hosted POC environment we can use to validate that some of our applications work well. Testing will take place in the next couple of weeks.
Also, there looks to be some good news on the graphics front and more improvements.
So, firstly, the APEX 2800 card can sit alongside a K1 or K2 in the WS460c. Hurrah - we get the benefits of PCoIP acceleration plus graphics from the Grid cards. But we appear to need vSphere ESXi v6 (due out soon). Apparently the associated new vGPU driver claims to double the density of users on the K1 and K2 Grid cards, which would be good news.
One aspect to consider is around suppliers supporting and licensing their software in a virtualised environment. Will they all be happy to do this?
Yes - it's a good point. Things seem to have moved on in the last 2 or 3 years, with suppliers more keen to support virtualisation, but...