Publications
Browsing Publications by Author "Afifi, Salma, author"
Now showing 1 - 2 of 2
Item (Open Access)
GHOST: a graph neural network accelerator using silicon photonics (Colorado State University. Libraries, 2023-09-09)
Afifi, Salma, author; Sunny, Febin, author; Shafiee, Amin, author; Nikdast, Mahdi, author; Pasricha, Sudeep, author; ACM, publisher

Graph neural networks (GNNs) have emerged as a powerful approach for modelling and learning from graph-structured data. Multiple fields have since benefitted enormously from the capabilities of GNNs, such as recommendation systems, social network analysis, drug discovery, and robotics. However, accelerating and efficiently processing GNNs requires a unique approach that goes beyond conventional artificial neural network accelerators, due to the substantial computational and memory requirements of GNNs. The slowdown of scaling in CMOS platforms also motivates a search for alternative implementation substrates. In this paper, we present GHOST, the first silicon-photonic hardware accelerator for GNNs. GHOST efficiently alleviates the costs associated with both vertex-centric and edge-centric operations. It implements separately, in the optical domain, the three main stages involved in running GNNs, allowing it to be used for the inference of various widely used GNN models and architectures, such as graph convolution networks and graph attention networks. Our simulation studies indicate that GHOST exhibits at least 10.2× better throughput and 3.8× better energy efficiency when compared to GPU, TPU, CPU, and multiple state-of-the-art GNN hardware accelerators.

Item (Open Access)
TRON: transformer neural network acceleration with non-coherent silicon photonics (Colorado State University. Libraries, 2023-06-05)
Afifi, Salma, author; Sunny, Febin, author; Nikdast, Mahdi, author; Pasricha, Sudeep, author; ACM, publisher

Transformer neural networks are rapidly being integrated into state-of-the-art solutions for natural language processing (NLP) and computer vision. However, the complex structure of these models creates challenges for accelerating their execution on conventional electronic platforms. We propose TRON, the first silicon-photonic hardware accelerator for transformer-based models such as BERT and Vision Transformers. Our analysis demonstrates that TRON exhibits at least 14× better throughput and 8× better energy efficiency in comparison to state-of-the-art transformer accelerators.
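
Note: for readers unfamiliar with the vertex- and edge-centric operations the GHOST abstract refers to, the following is a minimal, illustrative Python (NumPy) sketch of generic GNN message passing. It is not taken from the paper and does not reflect GHOST's photonic implementation; the function and variable names are hypothetical, and the three stages shown (neighbor aggregation, feature transformation, nonlinear activation) are the commonly cited stages of GNN inference.

    import numpy as np

    def gnn_layer(adj: np.ndarray, features: np.ndarray, weights: np.ndarray) -> np.ndarray:
        """One generic graph-convolution layer: aggregate -> transform -> activate.

        adj:      (N, N) adjacency matrix (edge-centric structure)
        features: (N, F_in) per-vertex feature matrix (vertex-centric data)
        weights:  (F_in, F_out) learned projection
        """
        # Stage 1: edge-centric aggregation -- each vertex sums its neighbors' features.
        aggregated = adj @ features
        # Stage 2: vertex-centric transformation -- project with learned weights.
        transformed = aggregated @ weights
        # Stage 3: nonlinear activation (ReLU).
        return np.maximum(transformed, 0.0)

    # Tiny usage example on a 3-vertex path graph.
    adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
    x = np.random.rand(3, 4)           # 3 vertices, 4 input features
    w = np.random.rand(4, 2)           # project to 2 output features
    print(gnn_layer(adj, x, w).shape)  # (3, 2)

The edge-centric step (adj @ features) and the vertex-centric step (the weight projection) have very different compute and memory profiles, which is why the abstract treats them as distinct costs to alleviate.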
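Note: similarly, the core computation that makes transformer inference expensive, and that accelerators such as TRON target, is scaled dot-product attention. Below is a minimal NumPy sketch of the standard formulation, softmax(QK^T / sqrt(d))V, for orientation only; it is not TRON's photonic implementation, and the names are hypothetical.

    import numpy as np

    def scaled_dot_product_attention(q, k, v):
        """Standard transformer attention: softmax(Q K^T / sqrt(d)) V."""
        d = q.shape[-1]
        scores = q @ k.T / np.sqrt(d)                 # pairwise similarity, scaled
        scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
        return weights @ v                            # weighted sum of values

    # Usage: 5 tokens with 8-dimensional query/key/value vectors.
    q = np.random.rand(5, 8)
    k = np.random.rand(5, 8)
    v = np.random.rand(5, 8)
    print(scaled_dot_product_attention(q, k, v).shape)  # (5, 8)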