Title: Shedding light on LLMs: harnessing photonic neural networks for accelerating LLMs
Authors: Afifi, Salma; Pasricha, Sudeep; Nikdast, Mahdi
Publisher: ACM
Date issued: 2025-04-09
Date available: 2025-09-25
Citation: Salma Afifi, Sudeep Pasricha, and Mahdi Nikdast. 2024. Shedding Light on LLMs: Harnessing Photonic Neural Networks for Accelerating LLMs. In Proceedings of the ACM/IEEE International Conference on Computer-Aided Design (ICCAD '24). Newark, New Jersey, USA, 8 pages. https://doi.org/10.1145/3676536.3697137
Handle: https://hdl.handle.net/10217/242038
Note: At head of title: Invited paper.

Abstract: Large language models (LLMs) are foundational to the advancement of state-of-the-art natural language processing (NLP) and computer vision applications. However, their intricate architectures and the complexity of their underlying neural networks present significant challenges for efficient acceleration on conventional electronic platforms. Silicon photonics offers a compelling alternative. In this paper, we describe our recent efforts in developing a novel hardware accelerator that leverages silicon photonics to accelerate the transformer neural networks integral to LLMs. Our evaluation demonstrates that the proposed accelerator delivers up to 14× higher throughput and 8× greater energy efficiency compared to leading-edge LLM hardware accelerators, including CPUs, GPUs, and TPUs.

Format: born digital
Type: Text; article
Language: English (eng)
Keywords: photonic computing; large language models; inference acceleration; optical computing
Rights: © Salma Afifi, et al. ACM 2025. This is the authors' version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in ICCAD '24, https://dx.doi.org/10.1145/3676536.3697137.
DOI: https://doi.org/10.1145/3676536.3697137