PR110112 - Annex 5 - Tender Evaluation Matrix Notes v3.0 - STFC's ...

PROTECT ‐ CONTRACT

• Response times for hardware diagnosis and replacement
• Responsibility for delivery, fitting and despatch of parts
• Policy on EOL hardware disposal
• Escalation procedure

2.3 System Software Support

The Supplier shall specify the proposed system software support arrangements for the initial 3 years, including:

• A definition of the scope of system software
• Nominated providers of support for all aspects of the system software
• Contact details for the above
• Hours of support throughout a calendar year
• Responsibility and capability for fault diagnosis
• Mechanism for application of fixes and upgrades
• Responsibility for installation of software upgrades
• Escalation procedure
• Tools and interfaces to allow monitoring of the hardware status, including peripherals, and system software

3 ICE‐CSE BASE (M3, M4, D4.1, D4.2, M5, M6, D6.1, M7, D7.1, M8, D8.1, D8.2)

ICE‐CSE BASE is a “conventional” HPC system, on which we want to be able to engage industrial partners and run our current suite of HPC applications.

3.1 Permanent File Store

Whilst the file store requirements are specified in the ICE‐CSE BASE section, it is emphasised that the file store must also support the ICE‐CSE ADVANCE system. It should be sized and configured to support both the ICE‐CSE BASE and ICE‐CSE ADVANCE systems, and will be expected to meet similar performance requirements with each system as detailed in the IO benchmark requirements.

3.1.1 Performance

The assessment of file store performance is based upon the ratio (rp):

rp = aggregate bandwidth / total memory

where
• aggregate bandwidth is in bytes/second
• total memory is in bytes

A minimum rp of 1/250 is mandatory.

The aggregate performance will be measured using IOR, running on all nodes in both systems at the same time. The benchmark file will be provided separately, but will run a set of tests at block sizes from 1MB to 512MB, with transfer sizes from 64k to 1024k.

3.1.1.1 Minimal systems example

Suppose ICE‐CSE BASE has 2048 cores each with 1GB memory and 2 x 256GB large memory nodes. Total memory = 2,560GB.
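As a minimal Python sketch of the mandatory ratio above (the function and variable names are illustrative, not part of the tender), the minimum aggregate bandwidth implied by rp = 1/250 can be computed directly from a system's total memory:

```python
# Sketch of the mandatory file-store performance ratio
#     rp = aggregate bandwidth / total memory  >=  1/250
# Names here are illustrative only; the figures come from the worked examples.

RP_MIN = 1 / 250  # mandatory minimum ratio (bytes/s of bandwidth per byte of memory)

def min_aggregate_bandwidth_gb_s(total_memory_gb, rp=RP_MIN):
    """Minimum aggregate file-store bandwidth (GB/s) implied by rp."""
    return total_memory_gb * rp

# ICE-CSE BASE from the minimal systems example:
# 2048 cores x 1GB plus 2 x 256GB large-memory nodes.
base_memory_gb = 2048 * 1 + 2 * 256       # 2,560 GB

# ICE-CSE ADVANCE from the minimal systems example: 16,384 cores x 1GB.
advance_memory_gb = 16384 * 1             # 16,384 GB

total_memory_gb = base_memory_gb + advance_memory_gb  # 18,944 GB

print(min_aggregate_bandwidth_gb_s(base_memory_gb))   # ~10.24 GB/s for BASE alone
print(min_aggregate_bandwidth_gb_s(total_memory_gb))  # ~75.8 GB/s combined, quoted as 76 GB/s
```

The same helper covers the maximal example: 37,888GB of combined memory gives roughly 152 GB/s.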


Suppose ICE‐CSE ADVANCE has 16,384 cores each with 1GB. Total memory = 16,384GB.

Total memory = 16,384 + 2,560 = 18,944 GB

Minimum combined aggregate performance should be 18,944/250 ≈ 76 GB/s.

Minimum usable storage capacity = 10 x total memory = 10 x 18,944GB = 189,440 GB, which is less than the 200TiB mandatory minimum; therefore usable storage = 200TiB.

3.1.1.2 Maximal systems example (or mandatory plus all desirables)

ICE‐CSE BASE: 3072 x 1GB + 4 x 512GB = 5,120GB memory
ICE‐CSE ADVANCE: 32,768 cores x 1GB/core = 32,768GB memory
Total memory: 37,888GB

Minimum combined aggregate performance should be 37,888GB/250 ≈ 152 GB/s.

Minimum usable storage capacity = 15 x combined system memory = 15 x 37,888GB = 568,320GB.

3.2 Power Consumption

We are keen to minimise operating costs and believe this can be accomplished if we are able to monitor and control power usage down to the level of single parallel jobs. Tools which enable us to perform this task are therefore desired.

4 ICE‐CSE ADVANCE System (M9, D9.1, D9.2, D9.3, D9.4, M10, M10.1, M11, D11.1, D11.2, D11.3, M12)

The ICE‐CSE ADVANCE system should represent “state of the art” technology in the supplier’s roadmap along the path to Exascale computing. We wish to use this system to:

• demonstrate extreme parallel scaling and performance over a large number of nodes,
• optimise existing codes in order to enable them to run on future Exascale systems, and
• develop new codes where existing codes have reached the limit of their capabilities.

5 ICE‐CSE FUTURE (M13, D13.1, D13.2, D13.3)

The reason for establishing the ICE‐CSE collaboration is to give us access to new technology developments when they become available.
We recognise that we cannot pre‐pay for future hardware, and therefore do not require that new hardware be delivered to us on site in Daresbury, but we want to define part of the collaboration as giving us access to this sort of equipment.

We also want to be able to contribute to our partners’ product development plans, so having access to pre‐production equipment and a feedback route back into product development will both help enable this.

6 ICE‐CSE APPS (M14, D14.1, D14.2, D14.3, D14.4, D14.5, D14.6, D14.7, D14.8)

We ask that the supplier prepare a business plan, a governance structure and examples of similar collaborations in which various aspects of the collaboration might be included.

We intend that this collection of requirements is consistent with information CSED has published: the “collaboration” aspect of ICE‐CSE is vitally important to us, and we intend to recognise input in this area significantly in determining our preferred supplier.


The ICE‐CSE collaboration should be viewed as a stepping stone to a longer‐term and larger venture, which we have been discussing for a number of years. We continue to seek additional funding and urge collaborators to think about how the role of the ICE‐CSE collaboration could be expanded with additional funding.

6.1 D14.8

Additional capital and recurrent funds are most likely to come from the Department for Business, Innovation & Skills (BIS). Typically this must be matched by investments from other partners.

Suppliers might like to consider initiatives such as Technology Innovation Centres (TICs) and their funding models.
