RISC-V Vector for Scalable Acceleration of AI and HPC
When: 10/02 (15:00–16:00)
Where: Sala de Grados
Speaker
Description
The rapid growth of data-parallel workloads in artificial intelligence, signal and image processing, multimedia, and scientific computing is driving the need for scalable and energy-efficient computing architectures. Traditional approaches based on frequency scaling, instruction-level parallelism, and fixed-width SIMD are increasingly limited in terms of performance portability and long-term scalability. In this context, our company delivers integrated hardware, software, and system-level solutions for high-performance computing, with a strong focus on vector and matrix acceleration across edge, data center, and next-generation infrastructure.
Leveraging the openness and modularity of the RISC-V ecosystem, we adopt the RISC-V Vector Extension (RVV) as a key building block to address data-level parallelism efficiently and portably. This presentation introduces the RVV architecture and its vector-length agnostic (VLA) programming model, which decouples software from hardware vector width and enables a single binary to scale transparently across different implementations. We discuss architectural design choices, including long vectors, multi-lane vector units, integration with in-order and out-of-order cores, and memory access models, and illustrate how these choices translate into performance, scalability, and energy efficiency for real-world workloads.
We also cover the RVV programming model, compiler support, and comparisons with fixed-width SIMD and GPU-based acceleration, highlighting where RVV-based solutions provide lower latency, better control-flow handling, and reduced system complexity. Finally, we outline our roadmap toward combined vector and matrix units as part of a full-stack approach to accelerating AI, HPC, and system-level workloads.
Registration
This activity requires prior registration for capacity control. Please use this registration form.