Achieve Massively Parallel Acceleration with GPUs

This webinar is available to view on-demand.

ABSTRACT

The past decade has seen a shift from serial to parallel computing. No longer the exotic domain of supercomputing, parallel hardware is ubiquitous and software must follow: a purely serial program will use less than 1% of a modern PC's computational horsepower and less than 4% of a high-end smartphone's. GPUs have proven themselves as world-class, massively parallel accelerators, from supercomputers to gaming consoles to smartphones, and CUDA is the platform best designed to access this power.

In this webinar, we'll cover the many different ways of accelerating your code on GPUs: from GPU-accelerated libraries, to directive-based programming with OpenACC, to writing CUDA code directly in languages such as C/C++, Fortran, or Python. In addition to covering the current state of massively parallel programming with GPUs, we will briefly touch on future challenges and potential research projects. Finally, you will be given a number of resources for trying CUDA yourself and learning more.
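To make these approaches concrete, consider a simple SAXPY operation (y = a*x + y). The sketches below are illustrative examples, not code from the webinar; the function names, array sizes, and launch parameters are our own assumptions. With a GPU-accelerated library such as cuBLAS, SAXPY is a single library call (cublasSaxpy). With OpenACC, one directive asks the compiler to generate GPU code from an ordinary serial loop:

    /* Directive-based approach: an OpenACC-capable compiler
       generates the GPU code from this otherwise serial loop. */
    void saxpy_openacc(int n, float a, const float *restrict x, float *restrict y)
    {
        #pragma acc parallel loop
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

Writing CUDA C directly gives explicit control over thread organization and data movement; a minimal, self-contained version of the same computation might look like this (compile with nvcc, e.g. "nvcc saxpy.cu -o saxpy"):

    #include <stdio.h>
    #include <cuda_runtime.h>

    /* SAXPY kernel: one GPU thread computes one element of y. */
    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        /* Initialize the inputs on the host (CPU). */
        float *h_x = (float *)malloc(bytes);
        float *h_y = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { h_x[i] = 1.0f; h_y[i] = 2.0f; }

        /* Allocate device (GPU) memory and copy the inputs over. */
        float *d_x, *d_y;
        cudaMalloc((void **)&d_x, bytes);
        cudaMalloc((void **)&d_y, bytes);
        cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_y, h_y, bytes, cudaMemcpyHostToDevice);

        /* Launch with 256 threads per block and enough blocks to cover n. */
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);

        /* Copy the result back and spot-check it. */
        cudaMemcpy(h_y, d_y, bytes, cudaMemcpyDeviceToHost);
        printf("y[0] = %f\n", h_y[0]);  /* expect 4.0 = 2.0*1.0 + 2.0 */

        cudaFree(d_x); cudaFree(d_y);
        free(h_x); free(h_y);
        return 0;
    }

The directive version trades fine-grained control for portability and minimal code changes, while the CUDA version makes the parallelism and host-device memory transfers explicit.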

Presenter: Mark Ebersole, CUDA Educator/Developer, NVIDIA
As GPU Programming Educator at NVIDIA, Mark Ebersole teaches developers the benefits of GPU computing using the CUDA parallel computing platform and programming model. With more than 10 years of experience as a systems programmer, Mark has spent much of his time at NVIDIA as a GPU systems diagnostics programmer, developing a tool to test, debug, validate, and verify GPUs from pre-emulation through bring-up and into production. Before joining NVIDIA, he worked at IBM developing Linux drivers for the IBM iSeries server. Mark holds a B.S. in math and computer science from St. Cloud State University.


Moderator: Jeffrey K. Hollingsworth, University of Maryland, College Park; SIGHPC
Jeffrey K. Hollingsworth is a Professor in the Computer Science Department at the University of Maryland, College Park. Dr. Hollingsworth's research seeks to develop a unified framework to understand the performance of large systems and focuses on several areas. First, he developed a new approach, called dynamic instrumentation, to permit the efficient measurement of large parallel applications. Second, he has developed an auto-tuning framework called Active Harmony that can be used to tune kernels, libraries, or full applications. Third, he is investigating the interactions between different layers of software and hardware to understand how they influence performance. He is Editor-in-Chief of the journal Parallel Computing, was general chair of the SC12 conference, and is Vice Chair of ACM SIGHPC.


Offer for Non-Members and Past Members: Save 15% on ACM Professional Membership


NVIDIA is offering all webinar registrants a 20% discount on registration for the upcoming 2014 GTC (GPU Technology Conference) when they use the code GM20ACM. GTC, the world's biggest GPU developer conference, takes place March 24-27 in the heart of Silicon Valley. It offers rare opportunities to learn how to harness the latest GPU technology, along with face-to-face interaction with industry luminaries and NVIDIA technologists.


