Sustainable carbon-aware and water-efficient LLM scheduling in geo-distributed cloud datacenters

dc.contributor.author  Moore, Hayden, author
dc.contributor.author  Qi, Sirui, author
dc.contributor.author  Hogade, Ninad, author
dc.contributor.author  Milojicic, Dejan, author
dc.contributor.author  Bash, Cullen, author
dc.contributor.author  Pasricha, Sudeep, author
dc.contributor.author  ACM, publisher
dc.date.accessioned  2025-09-25T18:41:06Z
dc.date.available  2025-09-25T18:41:06Z
dc.date.issued  2025-06-29
dc.description.abstract  In recent years, Large Language Models (LLMs) such as ChatGPT, Copilot, and Gemini have been widely adopted across many areas. As the use of LLMs continues to grow, many efforts have focused on reducing the massive training overheads of these models. However, the environmental impact of handling user requests to LLMs is becoming an increasing concern. Recent studies estimate that the costs of operating LLMs in their inference phase can exceed training costs by 25× per year. As LLMs are queried incessantly, the cumulative carbon footprint of the operational phase has been shown to far exceed the footprint of the training phase. Further, estimates indicate that 500 ml of fresh water is expended for every 20-50 requests to LLMs during inference. To address these important sustainability issues with LLMs, we propose a novel framework called SLIT to co-optimize LLM quality of service (time-to-first-token), carbon emissions, water usage, and energy costs. The framework utilizes a machine learning (ML) based metaheuristic to enhance the sustainability of LLM hosting across geo-distributed cloud datacenters. Such a framework will become increasingly vital as LLMs proliferate.
dc.format.medium  born digital
dc.format.medium  articles
dc.identifier.bibliographicCitation  Hayden Moore, Sirui Qi, Ninad Hogade, Dejan Milojicic, Cullen Bash, and Sudeep Pasricha. 2025. Sustainable Carbon-Aware and Water-Efficient LLM Scheduling in Geo-Distributed Cloud Datacenters. In Great Lakes Symposium on VLSI 2025 (GLSVLSI '25), June 30-July 02, 2025, New Orleans, LA, USA. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3716368.3735301
dc.identifier.doi  https://doi.org/10.1145/3716368.3735301
dc.identifier.uri  https://hdl.handle.net/10217/242040
dc.language  English
dc.language.iso  eng
dc.publisher  Colorado State University. Libraries
dc.relation.ispartof  Publications
dc.relation.ispartof  ACM DL Digital Library
dc.rights  © Hayden Moore, et al. ACM 2025. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in GLSVLSI '25, https://dx.doi.org/10.1145/3716368.3735301.
dc.subject  large language model
dc.subject  carbon emissions
dc.subject  water
dc.subject  energy cost
dc.title  Sustainable carbon-aware and water-efficient LLM scheduling in geo-distributed cloud datacenters
dc.type  Text
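
For illustration only: the abstract above describes SLIT as an ML-based metaheuristic that co-optimizes time-to-first-token, carbon emissions, water usage, and energy cost when placing LLM inference work across geo-distributed datacenters. The minimal Python sketch below shows one generic way such a multi-objective placement decision could be scored; it is not the paper's method, and every datacenter name, attribute, weight, and number in it is a hypothetical placeholder.

# Illustrative sketch only, not the SLIT algorithm from the paper: a normalized,
# weighted multi-objective score for routing one LLM inference request across
# geo-distributed datacenters. All names, attributes, and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Datacenter:
    name: str
    ttft_s: float       # expected time-to-first-token (seconds)
    carbon_g: float     # estimated gCO2e per request
    water_ml: float     # estimated water use per request (ml)
    cost_usd: float     # estimated energy cost per request (USD)

def normalize(values):
    """Min-max normalize so objectives with different units are comparable."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def pick_datacenter(candidates, w_ttft=0.4, w_carbon=0.3, w_water=0.2, w_cost=0.1):
    """Return the candidate with the lowest weighted score; weights express the
    operator's QoS vs. sustainability trade-off."""
    ttft   = normalize([d.ttft_s   for d in candidates])
    carbon = normalize([d.carbon_g for d in candidates])
    water  = normalize([d.water_ml for d in candidates])
    cost   = normalize([d.cost_usd for d in candidates])
    scores = [w_ttft * t + w_carbon * c + w_water * w + w_cost * e
              for t, c, w, e in zip(ttft, carbon, water, cost)]
    return min(zip(scores, candidates), key=lambda x: x[0])[1]

if __name__ == "__main__":
    candidates = [
        Datacenter("us-west",  ttft_s=0.35, carbon_g=2.1, water_ml=18.0, cost_usd=0.004),
        Datacenter("eu-north", ttft_s=0.60, carbon_g=0.4, water_ml=9.0,  cost_usd=0.003),
    ]
    print("Route request to:", pick_datacenter(candidates).name)

A real scheduler would replace the fixed weights and per-request estimates with learned or forecast quantities (e.g., grid carbon intensity and water-usage effectiveness over time), which is the kind of decision the paper's framework is designed to make.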

Files

Original bundle

Name: FACF_ACMOA_3716368.3735301.pdf
Size: 1.58 MB
Format: Adobe Portable Document Format
