We are pleased to announce a seminar featuring experts in integrated circuit design and design technology.
Date and time: Monday, June 9, 2025, 13:30-16:00
Venue:
Kyoto University, Yoshida Campus, Research Building No. 9, North Wing
(Building No. 63 on the campus map linked below)
Lecture Room N2, 2nd floor
Yoshida-honmachi, Sakyo-ku, Kyoto
https://www.kyoto-u.ac.jp/ja/access/campus/yoshida/map6r-y
Program
13:30 Opening
13:35 Title: AI-Empowered Heterogeneous Computing for Physical Design Automation towards Timing Closure
Speaker: Prof. Yibo Lin (Peking University)
14:05 Title: Design-Agnostic Bi-Voltage Scaling for Efficient Cryo-CMOS DVFS
Speaker: Prof. Longyang Lin (Southern University of Science and Technology)
14:35 Title: Towards 2.5D/3D Composable Chiplets for AI Computing: Heterogeneous Integration and Design Exploration
Speaker: Prof. Yu Kevin Cao (University of Minnesota)
15:05 Title: Co-Designing Algorithms and Hardware for Efficient Machine Learning System
Speaker: Prof. Caiwen Ding (University of Minnesota)
15:35 Closing
No advance registration required; admission is free.
Contact: contact@easter.kuee.kyoto-u.ac.jp
Organizer: Integrated Systems Engineering Laboratory, Graduate School of Informatics, Kyoto University
Co-sponsor: IEEE CAS Kansai Chapter
Abstracts and Speaker Biographies
Title: AI-Empowered Heterogeneous Computing for Physical Design Automation towards Timing Closure
Abstract:
Physical design stands as a pivotal step in the intricate design flow of contemporary VLSI circuits. It maps a circuit design into a physical layout with manufacturable gates and wires. Modern physical design necessitates numerous iterations across multiple design stages to achieve convergence in timing, power, and area. This iterative process is often exceedingly time-intensive. To tackle such challenges, in this talk, we will introduce how to leverage heterogeneous computing and machine learning for timing analysis, cross-stage prediction, and timing-driven optimization to accelerate design closure. We will also cover recent open-source efforts on public tools and AI-for-EDA datasets, which can facilitate the modeling and optimization tasks in VLSI design automation.
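To give a concrete flavor of the timing computations being accelerated, the toy sketch below (in Python) propagates arrival times over a tiny levelized timing graph. The netlist, delays, and required time are invented for illustration only and are unrelated to the speaker's tools or code.

```python
# Illustrative sketch: levelized arrival-time propagation over a tiny timing
# graph. Real tools (and the GPU-accelerated approaches discussed in the talk)
# operate on millions of nodes; this toy example only shows the basic recurrence
# arrival(v) = max over fan-in edges (arrival(u) + delay(u, v)).
from collections import defaultdict

# Hypothetical netlist: edges (u, v, delay_ns), with u always preceding v.
edges = [("in", "g1", 0.2), ("in", "g2", 0.3),
         ("g1", "g3", 0.5), ("g2", "g3", 0.4), ("g3", "out", 0.1)]

fanin = defaultdict(list)
for u, v, d in edges:
    fanin[v].append((u, d))

# Topological order of the toy graph (assumed known here).
order = ["in", "g1", "g2", "g3", "out"]
arrival = {"in": 0.0}

for v in order[1:]:
    arrival[v] = max(arrival[u] + d for u, d in fanin[v])

required_out = 1.0  # assumed required arrival time at the output (ns)
print({k: round(t, 2) for k, t in arrival.items()})        # latest arrival per node
print(f"slack(out) = {required_out - arrival['out']:.2f}")  # positive => timing met
```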
Biography:
Yibo Lin is an assistant professor in the School of Integrated Circuits at Peking University. He received the B.S. degree in microelectronics from Shanghai Jiao Tong University in 2013, and the Ph.D. degree from the Electrical and Computer Engineering Department of the University of Texas at Austin in 2018. His research interests include physical design, machine learning applications, and GPU/FPGA acceleration. He has received multiple Best Paper Awards at premier venues including DATE 2023, DATE 2022, TCAD 2021, and DAC 2019. He has also served on the Technical Program Committees of many major conferences, including DAC, ICCAD, and DATE.
Title: Design-Agnostic Bi-Voltage Scaling for Efficient Cryo-CMOS DVFS
Abstract:
Cryo-CMOS ICs operating at 4K have emerged as a viable solution for quantum computers, where the power consumption (dominated by digital circuits) is strictly limited by the cooling capacity of dilution refrigerators. Prior work has focused on reducing threshold voltage through technology- or circuit-level optimizations to lower power. However, system-level power savings can be more effectively achieved through dynamic voltage and frequency scaling (DVFS), which dynamically adjusts frequency and voltage (and thus power/energy) in response to workload variations. Conventional DVFS reduces driving current by scaling supply voltage to match slower speed requirements. Yet, at cryogenic temperatures, this approach is suboptimal due to the higher transistor turn-off voltage at 4K, which reveals untapped voltage scaling potential.
In this talk, such unique cryogenic scaling potentials will be identified and fully exploited for maximized DVFS efficiency and power reduction. A design-agnostic, fully automated bi-voltage scaling scheme, enabled by a cryo-CMOS standard-cell library, will be introduced. When applied to a 40nm RISC-V subsystem at 4K, silicon measurement results show that this method achieves a 1.5× improvement in DVFS efficiency and 34% power reduction compared to conventional voltage scaling.
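As a rough, purely illustrative calculation of why this extra scaling headroom matters (not the speaker's measured 40nm/4K silicon results), the sketch below compares conventional supply-only scaling against scaling both the supply and the effective threshold under a simple alpha-power-law delay model; all voltages, the alpha value, and the frequency target are assumptions.

```python
# Back-of-the-envelope model of DVFS power savings when extra voltage-scaling
# headroom is available at cryogenic temperature. All numbers are illustrative
# assumptions, not measured data from the talk.

def delay(vdd, vth, alpha=1.3):
    """Alpha-power-law gate-delay proxy (arbitrary units); requires vdd > vth."""
    return vdd / (vdd - vth) ** alpha

def dyn_power(vdd, freq):
    """Dynamic power proxy: P ~ C * Vdd^2 * f (capacitance folded into units)."""
    return vdd ** 2 * freq

def min_vdd_for_delay(target_delay, vth, lo=0.4, hi=1.2, steps=8000):
    """Smallest supply (by linear search) whose delay meets the target."""
    for i in range(steps + 1):
        v = lo + (hi - lo) * i / steps
        if v > vth and delay(v, vth) <= target_delay:
            return v
    raise ValueError("target not reachable in the given voltage range")

f_target = 0.5                 # workload only needs half of nominal speed
target_delay = 1.0 / f_target  # delay budget in the same arbitrary units

vth_fixed = 0.45   # conventional DVFS: only Vdd scales, Vth stays put
vth_scaled = 0.30  # bi-voltage scaling: a second voltage lowers the effective
                   # threshold too, viable at 4K thanks to the sharper turn-off

v_conv = min_vdd_for_delay(target_delay, vth_fixed)
v_bi = min_vdd_for_delay(target_delay, vth_scaled)
p_conv, p_bi = dyn_power(v_conv, f_target), dyn_power(v_bi, f_target)

print(f"conventional scaling: Vdd={v_conv:.2f} V, relative power={p_conv:.3f}")
print(f"bi-voltage scaling:   Vdd={v_bi:.2f} V, relative power={p_bi:.3f} "
      f"({100 * (1 - p_bi / p_conv):.0f}% lower)")
```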
Biography:
Longyang Lin received the Ph.D. degree from the National University of Singapore, Singapore, in 2018. From 2018 to 2021, he was a Research Fellow with the Electrical and Computer Engineering Department, National University of Singapore. He is currently an Assistant Professor with the School of Microelectronics, Southern University of Science and Technology, Shenzhen, China.
He has authored or co-authored more than 60 publications in journals and conference proceedings. He is a co-author of the book Adaptive Digital Circuits for Power-Performance Range beyond Wide Voltage Scaling (Springer, 2020). His research interests include ultra-low power VLSI circuits, self-powered sensor nodes, widely energy-scalable VLSI systems, compute-in-memory, and cryogenic digital circuits.
He serves as an Associate Editor of the IEEE Transactions on VLSI Systems, and as a Technical Program Committee member and Session Chair of APCCAS 2022 and ICTA 2022-2025. He was the recipient of the Takuo Sugano Award for Outstanding Far-East Paper at ISSCC 2022.
Title: Towards 2.5D/3D Composable Chiplets for AI Computing: Heterogeneous Integration and Design Exploration
Abstract:
Monolithic designs face substantial challenges of fabrication costs and data movement, particularly when dealing with large and complex AI models. Built on advanced packaging, 2.5D/3D chiplet-based heterogeneous architectures have been proposed to overcome these limitations. This talk first introduces HISIM, a new benchmark tool for rapid design space exploration of 2.5D/3D heterogeneous systems. HISIM integrates analytical models to evaluate power, temperature, performance, area, and cost (PTPAC) across different types of computing units, network-on-chip, network-on-package, and memory. It operates 10^6 times faster than current benchmark tools. Utilizing HISIM, this talk further presents a library of tiny chiplets that are composable, scalable, and reusable for a wide range of AI algorithms. Our study demonstrates that 14 tiny chiplets in the library are able to cover more than 36 types of AI algorithms, including CNNs, transformers, and graph models. The code of HISIM is available at https://github.com/mec-UMN/HISIM, helping shed light on both the potential and limitations of 2.5D/3D integration for AI computing.
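For intuition about the style of analytical model such a framework integrates, here is a toy estimate of chiplet-to-chiplet latency and energy on a 2.5D mesh network-on-package. The functions and every parameter are illustrative assumptions; this is not HISIM's actual interface (see the repository linked above for the real tool).

```python
# Toy analytical model of chiplet-to-chiplet communication on a 2.5D mesh
# network-on-package (NoP). All parameters are assumed for illustration.

def nop_hop_count(src, dst):
    """Manhattan hop count between chiplet grid coordinates (x, y)."""
    return abs(src[0] - dst[0]) + abs(src[1] - dst[1])

def nop_latency_ns(bits, src, dst, hop_latency_ns=5.0, link_gbps=64.0):
    """Latency = per-hop router/link latency + serialization time.
    link_gbps in Gb/s equals bits per nanosecond, so bits/link_gbps is in ns."""
    return nop_hop_count(src, dst) * hop_latency_ns + bits / link_gbps

def nop_energy_pj(bits, src, dst, pj_per_bit_per_hop=0.5):
    """Energy grows with both payload size and hop count."""
    return bits * nop_hop_count(src, dst) * pj_per_bit_per_hop

# Example: stream a 1 MB activation tensor between two chiplets on a 4x4 grid.
bits = 1 * 1024 * 1024 * 8
print(f"latency ~ {nop_latency_ns(bits, (0, 0), (3, 2)):.0f} ns")
print(f"energy  ~ {nop_energy_pj(bits, (0, 0), (3, 2)) / 1e6:.2f} uJ")
```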
Biography:
Yu Cao received the Ph.D. degree in electrical engineering from the University of California, Berkeley, in 2002. He is now the Louis John Schnell Professor of Electrical and Computer Engineering at the University of Minnesota (UMN), Minneapolis, Minnesota. Before joining UMN, he was a Professor of Electrical Engineering at Arizona State University (ASU). He has published numerous articles and three books on nano-CMOS modeling and physical design. His research interests include neural-inspired computing, hardware design for on-chip learning, and reliable integration of nanoelectronics. Dr. Cao was a Distinguished Lecturer of the IEEE Circuits and Systems Society. He was a recipient of the 2020 Intel Outstanding Researcher Award, the 2009 ACM SIGDA Outstanding New Faculty Award, the 2006 NSF CAREER Award, the 2006 and 2007 IBM Faculty Awards, and five Best Paper Awards. He is a Fellow of the IEEE.
Title: Co-Designing Algorithms and Hardware for Efficient Machine Learning System
Abstract:
The rapid deployment of ML faces challenges such as prolonged computation and high memory footprints on target systems. In this talk, we will present several ML acceleration frameworks through algorithm-hardware co-design. First, we introduce a fine-grained crossbar-based ML accelerator. Rather than mapping trained positive and negative weights post hoc, we proactively ensure that all weights within the same crossbar column share the same sign, reducing area overhead. Additionally, by dividing the crossbar into sub-arrays, we enable efficient input zero-bit skipping. Next, we focus on co-designing graph neural network (GNN) training. To leverage training sparsity and enhance explainable ML, we propose a hardware-friendly nonlinearity with tailored GPU kernel support. Finally, we explore the use of Large Language Models (LLMs) for AI accelerator design, demonstrating their potential to automate and optimize hardware architectures for ML workloads.
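As a small illustration of the input zero-bit-skipping idea (not the authors' accelerator or code), the numpy sketch below performs a bit-serial matrix-vector product over crossbar sub-arrays and skips cycles whose input bit-slice is all zero; the sizes, bit width, and all-non-negative weights (mirroring the same-sign-per-column constraint) are assumptions for illustration.

```python
# Illustrative numpy sketch of input zero-bit skipping on a crossbar split into
# row sub-arrays. Sizes, bit width, and weight ranges are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)
IN, OUT, BITS, SUB = 16, 4, 8, 4            # inputs, outputs, input bit width, sub-array rows

x = rng.integers(0, 16, size=IN)             # low-magnitude activations: upper bit-planes are zero
w = rng.integers(0, 8, size=(IN, OUT))       # non-negative weights (same sign within each column)

def crossbar_bit_serial(x, w):
    """Bit-serial MVM: one input bit-plane per cycle per sub-array, skipping
    sub-array cycles whose input bit-slice is all zero."""
    acc = np.zeros(w.shape[1], dtype=np.int64)
    cycles = skipped = 0
    for r0 in range(0, len(x), SUB):                 # each row sub-array
        xs, ws = x[r0:r0 + SUB], w[r0:r0 + SUB]
        for b in range(BITS):                        # one bit-plane per cycle
            bit_slice = (xs >> b) & 1
            if not bit_slice.any():                  # zero-bit skipping
                skipped += 1
                continue
            cycles += 1
            acc += (bit_slice @ ws) << b             # crossbar MAC + shift-add
    return acc, cycles, skipped

y, cycles, skipped = crossbar_bit_serial(x, w)
assert np.array_equal(y, x @ w)                      # matches a dense MVM
print(y, f"active cycles={cycles}, skipped={skipped}")
```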
Biography:
Caiwen Ding is an Associate Professor in the Department of Computer Science and Engineering at the University of Minnesota – Twin Cities. From 2019 to 2024, he was an assistant professor in the School of Computing at the University of Connecticut. He received his Ph.D. degree from Northeastern University. His research interests mainly include efficient embedded and high-performance systems for machine learning, and machine learning for hardware design. He is a recipient of the 2024 NSF CAREER Award, the Amazon Research Award, and the Cisco Research Award. He received best paper nominations at DATE 2018 and DATE 2021, the best paper award at the DL-Hardware Co-Design for AI Acceleration (DCAA) workshop at AAAI 2023, an outstanding student paper award at HPEC 2023, a publicity paper at DAC 2022, and the 2021 Excellence in Teaching Award from the UConn Provost. His team won first place in accuracy and fourth place overall at the 2022 TinyML Design Contest at ICCAD, and third place at the 2024 ICCAD Contest on LLM-Assisted Hardware Code Generation. He serves as an Associate Editor in the Neural Networks track of IEEE MWSCAS and for the IEEE TCCPS Newsletter.