North Carolina State University is a land-grant university and a constituent institution of The University of North Carolina

Office of Information Technology
NC STATE UNIVERSITY
Hillsborough Street, Raleigh, North Carolina
September

HPC Partners Program Memorandum of Understanding

General Terms:

The NC State Office of Information Technology (OIT) High Performance Computing (HPC) partners program offers NC State faculty the opportunity to purchase, through OIT, compatible HPC systems and have them housed and maintained by OIT, with the partner receiving first priority access to the quantity of resources purchased. When the resources are not being used by the partner, they will be available for use by the general NC State HPC community.

Currently, the hardware available under the partners program is Lenovo servers. Each server has two Intel Xeon processors with thirty-two processor cores, gigabytes of memory, an internal disk for operating system and swap space, an InfiniBand network interface, and a gigabit/second Ethernet interface. The purchase price of these servers includes five years of hardware maintenance.

OIT will be responsible for all aspects of the operation and maintenance of the servers, with the objective of keeping them available 24 hours per day, 7 days a week, 365 days a year on a best-effort basis. Operating experience with the Lenovo hardware to date has been better than % availability. OIT will provide racks, chassis, networking, and all hardware necessary to operate the servers. The servers will be located in a secure location with appropriate cooling and power, including battery backup and an emergency diesel generator.

The Linux operating system will be installed and maintained on the servers. The servers will use the home file system for the University Linux cluster and will have access to the shared file systems and applications on that cluster. Shared file systems include a TB parallel file system, TB of NFS-mounted scratch space, and a hierarchically managed file system for long-term data storage with more
than TB of disk space and more than TB of existing tape storage, expandable to more than a petabyte. Specific applications needed by the partner can be installed in the shared file space and protected as necessary, provided the applications are available for the Linux operating system and the Intel Xeon processor. Intel and Portland Group C/C++ and Fortran compilers and the Intel Math Library have been licensed for the University Linux cluster and will be available on the partner's servers.

Platform Computing's Load Sharing Facility (LSF) will be used for queuing and scheduling on these servers. A partner's queue will be configured for access to the number of processors purchased, with queue parameters determined in consultation with the partner. It is expected that the partner's queue may provide access to any of the University Linux cluster resources, up to the number of servers purchased. This will provide the fastest access to resources and will insulate the partner from any specific hardware failures. [For example, if the partner purchases two blade servers, then when the partner's job is submitted to two servers, the first two available servers matching the job's resource requirements will be assigned to the partner; so even if any of the servers purchased by the partner are down for some reason, the partner's access to resources will continue.]

All access to the partner's servers must be via LSF. Cluster nodes are connected only to private networks – one dedicated to message passing and the second to job control and storage. Jobs are submitted from cluster login nodes. Access to login nodes is allowed using ssh, scp, or sftp. Static, shared login nodes are available through round-robin DNS, or, for intensive work, a dedicated login node can be reserved through the Virtual Computing Lab (VCL) service.

OIT agrees to house and operate the blade servers for five years. At the end of five years the partner may either take possession of the blade servers or negotiate a new
agreement with OIT.

Specific Details for Department of Statistics Partnership:

Mainline has provided quote MIS--- for four compute nodes with GB memory and quote MIS--- for a compute node with two Nvidia A GPUs. Cost per GB compute node is $, and cost per compute node with GPUs is $,. Five GB nodes and two GPU compute nodes will be added to the HPC cluster, and dedicated partner queues will be created for Statistics users with access to cores and four A GPUs. The partner queues will allow jobs from Statistics students who are members of the Statistics HPC project (and other projects the Statistics Department Head may specify) to have high-priority access to the number of CPU cores and GPUs being added to the cluster. The fair-share priority for the Statistics project will also allow project members to run jobs with higher priority in the generally available queues. Cost for the nodes is $,.

OIT HPC staff, in collaboration with Sciences IT staff, will provide documentation for installing applications (such as and Python) and assistance when issues are encountered with the installation of particular modules. Generally, assistance will consist of providing additional instructions and troubleshooting.

Business contact name _______________
Business contact phone _______________
Account/Project to charge ______________

______________________________  Date ______________
Eric Sills, Assistant Vice Chancellor, Shared Services
Office of Information Technology

______________________________  Date ______________
Sujit Ghosh, Interim Department Head
Department of Statistics