The slowdown of Moore’s law, coupled with the surging democratization of machine learning, has spurred the rise of application-driven architectures, as CMOS scaling alone is no longer sufficient to achieve desired performance and power targets. To keep delivering energy-efficiency gains, specialized SoCs are exhibiting skyrocketing design complexity and growing development effort. In this webinar, we will shed light on our agile algorithm-hardware co-design and co-verification methodology powered by High-Level Synthesis (HLS), which enabled us to reduce front-end VLSI design effort by orders of magnitude across the tapeout of three generations of edge AI many-accelerator SoCs. With a particular focus on accelerator design for Natural Language Processing (NLP), we will share proven practices and overall learnings from a high-productivity digital VLSI flow that leverages Catapult HLS to efficiently close the loop between the application’s software model and the hardware implementation. Finally, we will discuss some of the HLS challenges we encountered, offer recommendations drawn from our experience, and highlight internal and external efforts to further improve the HLS user experience and ASIC design productivity.

What you will learn: 

Who should attend: