Please check back on November 4th for links to a YouTube playlist and the individual videos.
Detailed Presenter Info (Listed in Alphabetical Order)
Neil Ashton, AWS – OpenFOAM on AWS with Graviton2
In this talk we will discuss how AWS is helping the CFD community take advantage of on-demand access to Amazon EC2 Graviton2 Arm-based instances. We will focus on recent work to assess the price/performance benefits for the popular open-source code OpenFOAM and show how it offers up to 37% better price/performance than x86-based Amazon EC2 instances.
Speaker bio:
Dr. Neil Ashton is a Principal CFD Specialist Solution Architect at Amazon Web Services and a Visiting Fellow at the University of Oxford. Previous positions were at NASA Ames Research Center and the Renault/Lotus Formula 1 Team, as well as consultancy roles for Formula 1, Audi, British Cycling, Williams Advanced Engineering and others. His expertise is in applying and developing the next generation of Computational Fluid Dynamics approaches, with a particular focus on the interplay between high-fidelity turbulence models and High-Performance Computing (HPC). He has authored over 30 papers in leading international journals and conferences, and has held leadership roles including chair and lead organiser of the ICCFD10 conference and the 1st Automotive CFD Prediction Workshop, and has been a reviewer for the EU Commission as well as a number of leading academic journals.
Ben Bennett, HPE – Catalyst UK – Lessons from deploying Homogeneous Arm-based clusters
As we enter the third and final year of the Catalyst UK program, Dr. Ben Bennett will look at some of the key insights from the Arm system deployments and software evolution, and from the science that has been done on these machines.
Speaker bio:
Ben is an HPC Marketing Strategist working in the EMEA Organization, based out of the UK. In this role Ben focuses on the three-to-five-year horizon of aligning customer requirements to long-range product plans, working with customers and their stakeholders to ensure the benefits and requirements of future technologies are being planned for. Ben also works with HPE’s corporate affairs team to drive HPC content and knowledge into their work with the science, research and industrial components of public bodies at the local, national, and European level. Ben has over 30 years of supercomputing experience in academia and industry, both as a supplier and purchaser of High Performance Computers. Ben holds a Bachelor’s degree in Electrical & Electronic Engineering and a PhD in Control Engineering, looking at data structures in real-time parallel control systems used for flight control.
Bine Brank, Juelich Supercomputing Center – SVE Auto-Vectorization Evaluation
With the first SVE machines becoming available, having code that supports SVE is crucial. In this talk, we will present some of our analysis of the latest compilers and their ability to vectorise code for SVE.
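As a rough illustration of the kind of kernel such compiler studies typically examine (the function name and build lines below are illustrative assumptions, not taken from the talk), a simple scalar loop like the following can be auto-vectorised into vector-length-agnostic SVE code by recent GCC and Arm Compiler releases:

```c
/* saxpy.c - a simple loop that recent compilers can auto-vectorise for SVE.
 * Illustrative build lines (assumed, not from the talk):
 *   gcc -O3 -march=armv8.2-a+sve -fopt-info-vec -c saxpy.c
 *   armclang -O3 -march=armv8-a+sve -Rpass=loop-vectorize -c saxpy.c
 */
void saxpy(float a, const float *restrict x, float *restrict y, long n)
{
    for (long i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];  /* candidate for VL-agnostic SVE vectorisation */
}
```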
Speaker bio:
Bine Brank is a PhD student at the Juelich Supercomputing Centre who specializes in SVE. Parts of his work are done in the context of the Mont-Blanc 2020 project and the European Processor Initiative (EPI).
Andrew Davis, UKAEA – UKAEA use of Arm Catalyst
Speaker bio:
Professor Andrew Davis is part of the United Kingdom Atomic Energy Authority’s Advanced Computing Programme, where he looks at the computational tools needed to simulate fusion reactors and their components in high fidelity.
Cosimo Gianfreda, E4 – E4 and Arm: a long history and a bright future
E4 and ARM began their collaboration in 2011, and E4 presented the first ARM-based server powered by an NVIDIA K20 GPU at SC 2013. Since then, E4 has continuously developed an ARM-based product line to meet the demands of its customers and prospects. The technological leadership of E4’s products, a key feature of E4’s philosophy, has been achieved thanks to constant and effective interaction with the ARM ecosystem, and this interaction enables E4 to keep its technological leadership in the future as an early adopter of new technologies. This video will present how E4 is contributing to the ARM ecosystem and will show ARMIDA (ARM Integrated Development of Applications), the ARM-powered, GPU-accelerated flagship product of E4 for the ARM space.
Speaker bio:
Cosimo Gianfreda, Chief Technology Officer of E4, has been working in ICT since the mid-80s. Since 2002 he has focused on HPC, and he is the designer of all E4 hardware solutions, always looking for the magical point where best performance meets lowest power consumption. https://www.linkedin.com/in/cosimo-damiano-gianfreda-01a83820/
Todd Gamblin, LLNL – What’s New with Arm on Spack?
The Spack ecosystem continues to grow with added support for Arm systems. This talk presents the latest contributions to Spack to support Arm HPC.
Debra Goldfarb, AWS – Arm, HPC, and the AWS Cloud: Technologies at the Crossroads
We’re seeing a growing appreciation in HPC for two areas of technology that have not historically been associated with the industry: cloud computing and Arm. This talk will discuss the rise of both and how you can leverage them in solving your complex science and engineering problems.
Brent Gorda, Arm – Business Overview from Arm
Brent will provide an overview of the HPC business at Arm. With the arrival of Fugaku from Fujitsu, we have an interesting and important data point. There is, of course, much more to come as partners prepare their silicon. Where HPC fits in Arm’s world is equally interesting and will impact the community.
Speaker bio:
Brent Gorda is the Senior Director for the HPC business at Arm. He has a long background in HPC and is an entrepreneur at heart. Brent is eager to enable change across the board: the business model, the technology and the software environment – for the betterment of performance-challenging workloads. Handing advanced computational tools to smart people trying to make breakthrough improvements is a fundamental driver.
Si Hammond, Sandia National Lab – Early Experiences with A64FX
Fugaku, an Arm-based system, is currently the most powerful supercomputer in the world. The architecture used in the machine is interesting for many reasons, but the principal focus for most evaluations is the extremely high memory bandwidth available on the A64FX processor and the availability of HPC-optimized implementations of Arm’s Scalable Vector Extension (SVE). With these reasons in mind, Sandia has deployed Inouye, a small A64FX-based testbed, to allow ASC users to explore the potential of these hardware features on their own codes and benchmarks. In this talk we describe our very early experiences with the A64FX processor and provide an overview of our current porting and optimization efforts, including the development of an ATSE-based curated and optimized HPC stack for our testbed system.
Speaker bio:
Si Hammond is a Principal Member of the Technical Staff in the Scalable Computer Architectures group at Sandia National Laboratories. Over the last ten years, his career has focused on application porting, optimization and performance analysis for high-end HPC systems in support of the Department of Energy’s (DOE) Exascale Computing Project. During that time he was the Hardware Evaluation Technical Lead for the DOE and a technical reviewer for several NNSA/ASC supercomputing deployments, as well as the application deployment lead for Sandia’s Vanguard program, deploying Astra, the world’s first petascale Arm system.
Terje Kvernes, EasyBuild – EasyBuild on Arm
EasyBuild is a staple in the HPC community for managing scientific software in an efficient way. As the Arm architecture makes inroads into the HPC sphere, what challenges will one encounter, and how can EasyBuild ease the transition?
Speaker bio:
Terje Kvernes is the head systems administrator at the Department of Mathematics at the University of Oslo, Norway. He has two decades of experience with large-scale heterogeneous Unix environments, networking, and storage. One of the more recent developments is that his team now provides both hardware and software infrastructure targeted at machine learning for the department and affiliated research institutions.
Youngsu Kwon, PhD, ETRI – The K-Supercomputer CPU’s Design, Perspective, Overview and Applications
The K-Artificial Brain 21 (K-AB21) is the first supercomputer CPU designed by ETRI, Korea, targeting tens of DP-TeraFLOPS with specialized power-efficiency strategies for the Korean supercomputer Supreme-K. The internal architecture of K-AB21 incorporates multiple Zeus (Neoverse V1) cores from Arm and the XPU from ETRI, a multi-purpose, custom instruction-programmable dense-sparse matrix accelerator for HPC and AI. In this talk, we present the design perspective, architectural overview, and future supercomputing applications of K-AB21.
Speaker bio:
Youngsu Kwon received B.S., M.S., and Ph.D. degrees from the Korea Advanced Institute of Science and Technology (KAIST), Republic of Korea in 1997, 1999, and 2004, respectively. He was a Postdoctoral Associate at the Microsystems Technology Laboratory (MTL), Massachusetts Institute of Technology from 2004 to 2005, designing 3-dimensional chips. He has been with the AI SoC Research Department of the Electronics and Telecommunications Research Institute (ETRI), Republic of Korea, since 2005. At ETRI, he is a Director and Principal Research Staff of the AI SoC Research Department, devoted to the design of the high-performance AI processor, AB. He has special interests in many-core architecture, AI processor design, low-power architecture design, computer-aided design, and algorithmic optimizations of circuits and systems. He received the Presidential Prize from the Korean Government in 2016, Official Commendations from the Ministry of Science and ICT as well as the Ministry of Industry in 2016, the Excellent Researcher Award from the Korea Research Council in 2013, the Industrial Contributor Award from the Korean Federation of SMEs in 2013, and medals from Samsung’s Thesis Prizes in 1997 and 1999. He has authored over 30 journal and international conference papers.
Tim Lin, S-Cube – AWS Graviton2 performance on S-Cube’s Finite-Difference PDE workload
S-Cube XWI is a specialized exploration seismology algorithm used to infer wave velocities inside the Earth’s crust at very fine resolution from seismic measurements taken at the surface. The majority of the computational workload of XWI is performed by a finite-difference-based wavefield simulation engine used to iteratively correct initial guesses of the acoustic velocity model. For historical reasons the critical section of this engine has two equivalent hand-tuned implementations, one using SSE and the other using AVX2, which means that we have a native 128-bit-width SIMD implementation suitable for translation to Neon. We have used a combination of SIMDe 0.6.0 and GCC 10.2.0 to make both the SSE and AVX code paths run on Neon-enabled Graviton2 instances, and would like to report some interesting benchmark results compared to the latest Intel Cascade Lake Xeons running these SIMD code paths natively.
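As a minimal sketch of how SIMDe enables this kind of port (the toy kernel below is an assumption for illustration, not S-Cube's code), an existing SSE code path can be compiled for a Neon-only target simply by including the SIMDe headers with native aliases enabled:

```c
/* Toy SSE kernel compiled on AArch64/Neon via SIMDe (illustrative, not XWI code).
 * Example build on a Graviton2 instance: gcc -O3 -I/path/to/simde -c scale.c
 */
#define SIMDE_ENABLE_NATIVE_ALIASES   /* keep the original _mm_* spellings */
#include <simde/x86/sse.h>            /* maps SSE intrinsics onto Neon     */

void scale4(float *data, float factor, long n)
{
    __m128 f = _mm_set1_ps(factor);
    /* Main vector loop; any remainder (n % 4) would be handled in scalar code. */
    for (long i = 0; i + 4 <= n; i += 4) {
        __m128 v = _mm_loadu_ps(&data[i]);
        _mm_storeu_ps(&data[i], _mm_mul_ps(v, f));
    }
}
```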
Speaker bio:
Dr. Tim Lin is the Principal Cloud Developer at S-Cube. He is responsible for designing and maintaining the AWS-based HPC infrastructure at S-Cube, which brings together a large variety of codebases, tools, and programming environments of various vintages into successful commercial operation. Tim holds a PhD in Geophysics from the University of British Columbia.
Simon McIntosh-Smith, University of Bristol – Early results from Isambard2
Isambard became the world’s first Arm-based production supercomputer when it went live in the fall of 2018. Isambard has since gone on to provide many of the first results to show that Arm-based server CPUs are performance competitive with state-of-the-art x86 processors, while providing compelling cost-performance benefits. Isambard 2 is a £6.7M project to build on these successes. The XC50 production system has been doubled in size to 21,000 Arm cores, while the latest Arm technologies have been added in the form of a Fujitsu A64FX system. Isambard 2 also includes the latest CPU and GPU technologies from AMD, Intel and NVIDIA, to enable rigorous cross-architecture comparisons. In this talk we will present the latest performance comparisons from all of these technologies, including the A64FX.
Speaker bio:
Simon McIntosh-Smith is a full Professor of High Performance Computing at the University of Bristol in the UK. He began his career in industry as a microprocessor architect, first at Inmos and STMicroelectronics in the early 1990s, before co-designing the world’s first fully programmable GPU at Pixelfusion in 1999. In 2002 he co-founded ClearSpeed Technology where, as Director of Architecture and Applications, he co-developed the first modern many-core HPC accelerators. He now leads the High Performance Computing Research Group at the University of Bristol, where his research focuses on advanced computer architectures and performance portability. He plays a key role in designing and procuring supercomputers at the local, regional and national level, including the UK’s national HPC service, ARCHER. In 2016 he led the successful bid by the GW4 Alliance, along with the UK’s Met Office and Cray, to design and build ‘Isambard’, the world’s first production ARMv8-based supercomputer. In 2020 Isambard received a major upgrade, making it one of the largest Arm-based systems in the world, while adding the latest Arm processors, Fujitsu’s A64FX, and the latest CPUs and GPUs from AMD, Intel and NVIDIA. He is also a major contributor to the UK’s Exascale initiatives, including ExCALIBUR and ASiMoV.
Nils Meyer, Regensburg University – Lattice QCD on QPACE 4
QPACE 4, the latest member of the QCD PArallel Compute Engine (QPACE) series, was deployed at Regensburg University, Germany, in June 2020. It features 64 Fujitsu A64FX model FX700 CPUs interconnected by InfiniBand EDR. In this presentation we comment on system operation and compiler capabilities, and discuss the implementation of SVE in the Grid Lattice QCD framework. We also show Grid benchmarks on QPACE 4. Watch at night. Use headphones.
Speaker bio:
Nils Meyer is a post-doctoral researcher in the physics department of Regensburg University in Germany.
Ross Miller, Oak Ridge National Lab – Sysadmin’s perspective of Apollo 80 / Apollo 70
The Wombat cluster at Oak Ridge National Lab is an experimental testbed for various technologies related to the ARM architecture. In the most recent upgrade, we’ve added 16 nodes with Fujitsu A64FX processors. These are some of the first processors to implement ARM’s Scalable Vector Extension (SVE). This presentation will briefly discuss the hardware and capabilities of the Wombat cluster and then present some very early performance numbers from the A64FX processors.
Speaker bio:
Ross Miller is a software developer at the National Center for Computational Sciences at the Oak Ridge National Laboratory in Oak Ridge, Tennessee, USA. He has B.S. and M.S. degrees in computer science from Baylor University and has worked on everything from embedded microprocessors for avionics up to the largest supercomputers in the world. He has been working for NCCS for the last 11+ years and has been directly involved in ARM HPC efforts for the last 4.
Andrei Poenaru, University of Bristol – Challenges and Opportunities Around Using SVE in Scientific Applications
The Arm Scalable Vector Extension (SVE) is a modern vector instruction set on track to become the standard in upcoming high-performance Arm processors. Since its announcement in 2016, the High Performance Computing Group at the University of Bristol has been heading several performance-related studies of SVE. This talk will present the latest research around the challenges and opportunities of SVE and will offer some considerations for future generations of vector processors.
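For readers unfamiliar with the programming model behind these studies, here is a minimal vector-length-agnostic loop written with the ACLE SVE intrinsics (an illustrative DAXPY, not code from the Bristol work):

```c
/* Vector-length-agnostic DAXPY using ACLE SVE intrinsics (illustrative).
 * Example build: gcc -O3 -march=armv8.2-a+sve -c daxpy_sve.c
 */
#include <arm_sve.h>
#include <stdint.h>

void daxpy_sve(double a, const double *x, double *y, int64_t n)
{
    svfloat64_t va = svdup_f64(a);
    /* The loop never hard-codes a vector width: svcntd() and the svwhilelt
     * predicate adapt to whatever length the hardware implements. */
    for (int64_t i = 0; i < n; i += svcntd()) {
        svbool_t    pg = svwhilelt_b64(i, n);
        svfloat64_t vx = svld1(pg, &x[i]);
        svfloat64_t vy = svld1(pg, &y[i]);
        svst1(pg, &y[i], svmla_x(pg, vy, vx, va));  /* y[i] += a * x[i] */
    }
}
```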
Speaker bio:
Andrei Poenaru is a PhD Student in the High Performance Computing Group at the University of Bristol. His research is centred around advanced and future architectures for HPC, and he has been involved in several studies of performance and portability across diverse modern architectures. His current projects are focused on vectorisation in the context of Arm SVE and the latest Arm-based high-performance processors.
Craig Prunty, SiPearl – SiPearl – The European High Performance Processor Company
SiPearl is the European High Performance processor company formed out of the European Processor Initiative. We provide an overview of SiPearl, some of the trends we see for Exascale Computing, and some resulting key performance indicators (KPIs).
Speaker bio:
Craig Prunty joined SiPearl in May 2020 as VP Marketing & Business Development. Prior to SiPearl, Craig was Marketing Director for Marvell Semiconductor’s Server Processor Business Unit in Santa Clara, California. Previous experience in the Semiconductor industry includes sales, marketing, and technical roles with Cavium, AppliedMicro (AMCC), Lockheed-Martin, and Unisys. Craig holds a B.S. in Mathematics from Lewis & Clark College in Portland, Oregon, and an MS in Electrical Engineering from San Diego State University.
Guillaume Quintin, Agenium Scale – The NSIMD library, Application to EFISPEC3D, GROMACS and SVE programming
This talk is about the NSIMD library and its application to the GROMACS and EFISPEC3D codebases. We will show that NSIMD does not incur any overhead at runtime and that it can be used to write portable code with the same performance as hand-written intrinsics. We will then see how to write code in an SPMD manner to target CPUs and GPUs with NSIMD.
Speaker bio:
Guillaume Quintin obtained a PhD in computer science after his studies in pure mathematics. He is an expert in software development. He is responsible for the open-source project NSIMD (https://github.com/agenium-scale/nsimd), which specializes in High Performance Computing (HPC). He participates in its development, in particular in GPU abstraction covering NVIDIA and AMD GPUs using CUDA and ROCm respectively. He also manages other software development projects and supervises Agenium Scale’s technical teams, supporting them with his technical expertise.
Guillaume’s experience includes development, optimization and analysis of computation kernels that run on CPUs and GPUs. For example, he has worked for several years on the use of the TensorRT library (NVIDIA’s CUDA-based library for neural network inference) and has created several plugins for it in order to perform 8-bit quantization. Guillaume is also an expert in computer algebra. He has carried out academic work applying formal computation to the fields of error-correcting codes and cryptography. Optimization is at the heart of computer algebra and is divided into two major areas: algorithmic optimization and implementation optimization. The work carried out by Guillaume focuses on these two axes, combining them to provide the maximum possible optimization: an FFT implementation that is faster than the state of the art and uses SIMD, an implementation of list decoding algorithms such as Guruswami-Sudan in the form of a C library, and the development of modular arithmetic algorithms using floating-point numbers.
Roxana Rusitoru, Arm – Deep learning with Arm SVE
Join us for a short overview of deep learning using Arm SVE, showcasing both the architectural extension and the vast software ecosystem available today for machine learning. We also cover a few other ML-friendly hardware features.
Speaker bio:
Roxana Rusitoru is a Staff Research Engineer in Arm’s Research division, working in the Machine Learning group. She has been a part of the Mont-Blanc 1, 2 and 3 projects, leading the Software ecosystem in the latter, in addition to her technical contributions. Her most recent research has been on Arm’s Scalable Vector Extension (SVE), focusing on application analysis and optimization, HPC power management and deep learning training for SVE.
Stephen Sachs, AWS – Move HPC Workloads onto AWS Graviton2
The second-generation Arm-based Graviton2 instances provide a major leap in performance and compute capability, making them a viable competitor for HPC workloads. AWS ParallelCluster provides a familiar HPC cluster environment for the user while leveraging the elasticity of the cloud to provision compute and storage resources. This talk presents a comprehensible on-boarding experience for HPC users to take advantage of these tools and run their workloads on Arm-based instances in the cloud.
Speaker bio:
Dr. Stephen Sachs is a Sr. Application Engineer on the AWS Application Performance Team. He has been working on HPC workloads for the last decade in various positions, ranging from application optimization to customer support, at Amazon Web Services and Cray. Stephen holds a diploma in mathematics and a doctorate in mechanical engineering.
Karl Schulz, OpenHPC, University of Texas – OpenHPC: Project Overview and Recent Updates
Over the last several years, OpenHPC has emerged as a community-driven stack providing a variety of common, pre-built ingredients to deploy and manage HPC Linux clusters. Formed initially in November 2015 and formalized as a Linux Foundation project in June 2016, OpenHPC currently comprises over 35 members from academia, research labs, and industry. To date, the OpenHPC software stack aggregates over 80 components, ranging from administrative tools like bare-metal provisioning and resource management to end-user development libraries that span a range of scientific/numerical uses. OpenHPC adopts a familiar package repository delivery model and now supports multiple base Linux distros and architectures. This presentation includes a short overview of the project and highlights details from the most recent 2.X release.
Speaker bio:
Karl received his Ph.D. from the University of Texas in 1999. After completing a post-doc, he worked in industry for the CD-Adapco group to develop CFD software. Karl returned to UT in 2003, joining the research staff at the Texas Advanced Computing Center. During his tenure at TACC, Karl was actively engaged in HPC research, scientific curriculum development, technology evaluation, and strategic initiatives serving on the Center’s leadership team. Karl also served as the Chief Software Architect for the PECOS Center within the Oden Institute for Computational Engineering and Sciences.
In 2014, Karl joined the Data Center Group at Intel where he led the technical design and release of OpenHPC. He continues to remain engaged in the project and is currently serving as the Project Lead. In 2018, Karl returned to UT as a Research Associate Professor in an interdisciplinary role with the Oden Institute and the Dell Medical School.
Gilad Shainer, Nvidia – UCX on Arm
Speaker bio:
Gilad Shainer serves as senior vice-president of marketing for Mellanox networking at NVIDIA, focusing on high-performance computing, artificial intelligence and InfiniBand technology. Mr. Shainer joined Mellanox in 2001 as a design engineer and has served in senior marketing management roles since 2005. Mr. Shainer serves as the chairman of the HPC-AI Advisory Council organization, the president of the UCF and CCIX consortia, a member of the IBTA and a contributor to the PCISIG PCI-X and PCIe specifications. Mr. Shainer holds multiple patents in the field of high-speed networking. He is a recipient of the 2015 R&D100 award for his contribution to the CORE-Direct In-Network Computing technology and the 2019 R&D100 award for his contribution to the Unified Communication X (UCX) technology. Gilad Shainer holds MSc and BSc degrees in Electrical Engineering from the Technion Institute of Technology in Israel.
Toshiyuki Shimizu, Fujitsu – Supercomputer Fugaku and its processor A64FX CPU
Supercomputer Fugaku, developed by RIKEN and Fujitsu, is the first supercomputer in history to take first place in the three major supercomputer rankings (TOP500, HPCG, and Graph500) at the same time, in June 2020. It also took first place in the HPL-AI ranking. Fugaku uses the Fujitsu-designed Arm CPU, A64FX, which is the first implementation of the Scalable Vector Extension (SVE) for Armv8-A. The A64FX CPU is also used in the Fujitsu commercial supercomputers PRIMEHPC FX1000 and FX700.
In this presentation, our recent efforts and achievements regarding the A64FX CPU and its systems will be presented and discussed.
Speaker bio:
Mr. Toshiyuki Shimizu is Principal Engineer of Platform Development Unit, at Fujitsu Limited. Mr. Shimizu started at Fujitsu Laboratories with the research and development of the AP1000 massively parallel supercomputer system. His primary research interest is in interconnect architecture, most recently culminating in the development of the Tofu interconnect for the K computer. He led the development of Fujitsu’s high-end supercomputer PRIMEHPC series and Supercomputer Fugaku.
Mr. Shimizu received his Master of Computer Science degree from Tokyo Institute of Technology in 1988.
John Stone, University of Illinois – NAMD and VMD Performance on ARM GPU Platforms
This talk will briefly summarize and provide updates on the current state of GPU-accelerated ARM platform support in the NAMD parallel molecular dynamics engine, and VMD, a high-performance molecular modeling environment for preparing, visualizing, and analyzing biomolecular simulations. Using the “Wombat” ARM64 cluster at Oak Ridge National Laboratory, we have recently had the opportunity to benchmark current versions of NAMD and VMD for representative molecular modeling tasks, with particular emphasis on compute-heavy operations that benefit from CUDA GPU-accelerated kernels and heterogeneous computing techniques. We highlight results that demonstrate areas where performance on the Wombat cluster is most comparable to that achieved on high-end Intel x86- and IBM POWER9-based compute nodes, and areas where the individual strengths and weaknesses of the different platforms contribute to particular performance advantages or differences. We identify cases where our current ARM64 developments will benefit from completion of in-progress development of ARM64-specific vectorized kernels. Finally, we present observations from very early experiences with the ARM Scalable Vector Extension (SVE) and vector-length-agnostic programming approaches, as compared with traditional vectorization on fixed-length SIMD hardware architectures.
Speaker bio:
John Stone is the lead developer of VMD, a high performance tool for preparation, analysis, and visualization of biomolecular simulations used by over 100,000 researchers all over the world. Mr. Stone’s research interests include molecular visualization, GPU computing, parallel computing, ray tracing, haptics, virtual environments, and immersive visualization.
Mr. Stone was inducted as an NVIDIA CUDA Fellow in 2010.
Michele Weiland, University of Edinburgh – Lessons learned from the UK Catalyst Programme
In December 2018, EPCC took delivery of a ThunderX2-based Apollo 70 system from HPE, as part of our involvement in the UK Catalyst collaboration. Since then, we have undertaken a significant amount of work on the system – and we have learned a lot in the process. This presentation will show how far we have already come in advancing the Arm ecosystem, and where we still have some ground left to cover – looking at everything from applications and system software to storage.
Speaker bio:
Dr Michèle Weiland is a Senior Research Fellow at EPCC. She specialises in novel technologies for extreme scale parallel computing, leading EPCC’s technical work in the ASiMoV Strategic Prosperity Partnership with Rolls-Royce. She also leads on EPCC’s involvement in the Catalyst UK programme, a partnership with HPE and Arm to accelerate the adoption of the Arm ecosystem. She is the EPCC PI on a number of Exascale-focussed research grants, including the EC Horizon 2020 projects HPC-WE and SAGE2. She is a member of the UK’s EPSRC e-Infrastructure Strategic Advisory Team and Associate Director of the Arm HPC User Group.
Jeff Wittich, Ampere Computing – Running HPC in the Cloud with Ampere Altra processors
Ampere has built the world’s first server-class processor designed from the ground up to meet the requirements of cloud workloads. HPC in the cloud is a recent trend that is forecast to accelerate in the near future, in part due to the virtually unlimited burst capacity and elasticity the cloud offers. This session will showcase the unique features of Ampere Altra that make it the perfect choice for cloud HPC workloads. We will share data on traditional HPC applications like CFD and seismic modeling that demonstrate the industry-leading performance Ampere Altra achieves with very little tuning.
Speaker bio:
Jeff Wittich is the senior vice president of Products at Ampere. Jeff has extensive leadership experience in the semiconductor industry in roles ranging from product and process development to business strategy to marketing. Prior to joining Ampere, he worked at Intel for 15 years in a variety of positions throughout the company. Most recently, he was responsible for the Cloud Service Provider Platform business, driving global market reach, product customization, and ultimately defining the products and platforms being used across the cloud worldwide. While at Intel, Jeff also led a product development team responsible for 5 generations of Xeon processors. Jeff has an MS in Electrical and Computer Engineering from the University of California, Santa Barbara, and a BS in Electrical Engineering from the University of Notre Dame.