PRESS RELEASE: Edinburgh, January 15, 2024. smartR AI™ and EPCC, part of the University of Edinburgh, are partnering on a supercomputer trial project using the Cerebras CS-2 Wafer-Scale Engine (WSE) system.
EPCC is the UK’s leading centre of supercomputing and data science expertise, pushing the boundaries of high-performance computing and data science research. In collaboration with smartR AI, a Scotland-based consultancy specializing in Natural Language Processing (NLP) applications of AI, EPCC set out to assess the real-world performance advantage that its Wafer-Scale Engine CS-2 server can offer to local AI businesses. For its part, smartR AI set out to investigate generative AI capabilities within a business environment.
We compared two advanced hardware setups designed for parallel computation. The first was the Cerebras Wafer-Scale Engine CS-2, notable for its sheer scale: 850,000 AI-optimized cores and 40 GB of on-chip memory with an exceptional 20 PB/s of memory bandwidth, which together address common deep learning bottlenecks. The second was smartR AI’s Alchemist server, which uses an Nvidia RTX 3090 GPU.
This head-to-head analysis sheds light on what such parallel computation makes possible for the evolution of large language models (LLMs) and deep learning applications. The results are staggering: training loss converged 10x faster during GPT-2 pre-training on the CS-2.
In this comparison experiment, we pre-trained a version of the GPT-2 model. Following the original OpenAI GPT-2 paper, which used the WebText corpus, we employed OpenWebText, an open-source recreation of WebText: roughly 38 GB of web content extracted from URLs shared on Reddit with at least three upvotes.
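For readers who want a concrete picture of this kind of workload, the sketch below outlines a comparable GPT-2 pre-training run on OpenWebText using the Hugging Face transformers and datasets libraries. The release does not specify the framework, model size, or hyperparameters used in the trial, so those choices (batch size, step budget, output path) are illustrative assumptions rather than smartR AI’s actual configuration.

```python
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2Config,
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    Trainer,
    TrainingArguments,
)

# GPT-2 tokenizer; GPT-2 has no pad token, so reuse the end-of-text token.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

# OpenWebText: open recreation of WebText (~38 GB of text from Reddit-shared
# URLs with at least three upvotes), as described in the release.
dataset = load_dataset("openwebtext", split="train")

def tokenize(batch):
    # Truncate to GPT-2's 1024-token context; a production run would instead
    # concatenate and chunk documents so no text is discarded.
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Pre-training means starting from a randomly initialised model,
# not from the released GPT-2 weights.
model = GPT2LMHeadModel(GPT2Config())

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt2-openwebtext",   # hypothetical output directory
        per_device_train_batch_size=8,   # placeholder batch size
        max_steps=10_000,                # placeholder step budget
        logging_steps=100,
    ),
    train_dataset=tokenized,
    # Causal (next-token) language modelling objective, i.e. mlm=False.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

On the CS-2, an equivalent run would go through Cerebras’ own software stack rather than this GPU-oriented one; the sketch is only meant to show the shape of the workload being timed.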
Thus far, running the sample models, smartR AI has trained a model from scratch in under 1 hour on the EPCC system with the Cerebras CS-2, compared with the 10 hours the same run took on its own internal system with an Nvidia RTX 3090 GPU. The company’s engineers working on this project are confident they can use more of the resources available on EPCC’s Cerebras system to improve training speeds further, and are excited to undertake a larger-scale project to evaluate the limits of the system. The following graphic shows the results of smartR AI’s performance trial to date.
[Figure: GPT training on the EPCC CS-2 server vs. smartR AI’s internal server]
Julien Sindt, Business Development Manager at EPCC, commented on the results of phase 1 of the project: “These impressive results from smartR AI give clear confirmation of our belief that the Cerebras CS-2 is a game-changer for training large language models. The Cerebras team has recently developed new upgrades for the system which we expect will enable training times to be reduced even further. We look forward to sharing these benefits with our partners.”
Oliver King-Smith, founder and CEO of smartR AI, commented on the collaboration: “We are very fortunate to be able to work with EPCC on this important LLM- and GPT-related performance project, and we look forward to the potential to run similar tests with, for example, EPCC’s new Graphcore system.”
About EPCC, University of Edinburgh
Based at the University of Edinburgh, EPCC provides supercomputing and data services to industry and academia. Since our inception in 1990, we have gained an impressive reputation for leading-edge capability in all aspects of high-performance computing (HPC), data science, and novel computing. This expertise is reinforced by deep ties with industry and academia.
We have a strong track record of working with businesses, leveraging our expertise and facilities to accelerate the adoption, and spread the benefits, of high-performance computing. We operate a remarkable collection of computing and data storage facilities at our Advanced Computing Facility, including hosting the UK National Supercomputing Services ARCHER2 and Cirrus, and have been chosen to host the UK’s first Exascale supercomputer. We are a leading provider of high-performance computing and data science education and training, and conduct research at the leading edge of these fields.
Contact details:
| EPCC | www.epcc.ed.ac.uk |
|---|---|
| Tracy Peet, Communications Officer | t.peet@epcc.ed.ac.uk |
| Address | Bayes Centre, 47 Potterrow, Edinburgh, EH8 9BT |
About smartR AI
smartR AI augments your team with AI expertise. We work closely with all stakeholders to drive enterprise-wide implementation of AI, streamlining workflows. At smartR AI, we take the time to learn about your business and collaborate closely with your teams to develop a customized AI solution unique to you. Our smartR team has years of experience adapting AI solutions to real-world needs. We’ve developed proprietary model building blocks to accelerate the development of your project.
- For business applications we have SCOTi – your loyal AI pal.
- For medical, health and wellbeing applications, we have alertR – a behavioral intelligence-based alerting system.
We specialize in providing safe, private models that manage risk while delivering high reward. As our models are specifically trained for you, they work naturally with people to enhance and optimize productivity, and reveal previously unseen insights from your vast data pools. Most importantly, smartR AI is committed to providing safe AI programs within your own secure and private ecosystems.
We invent tomorrow’s products today by breaking free from pre-programmed rules. As intelligence moves to the edge of the network, smartR AI is all about doing things the smartest way. smartR AI improves your life intelligently by empowering your workforce with actionable insights.
Contact details:
| smartR AI | www.smartr.ai |
|---|---|
| Oliver King-Smith, Founder and CEO | oliverks@smartr.ai |
| Naomi Thomas, PR | n.thomas@smartr.ai |
| Head Office | Thistle Court, Rm 11, 1-2 Thistle Street, Edinburgh EH2 1DD |
| Phone | UK: +44 7950 292 546 |