This article presents a range within which ChatGPT's per-query electricity consumption may fall and compares it to the measured energy consumption of two other large language models (LLMs).
This is an interesting undertaking for two reasons:
First of all, if organizations know how much electricity ChatGPT requires to answer one question, they can approximate the carbon footprint associated with their use of ChatGPT or similar services such as OpenAI’s LLM APIs.
For more than 50,000 European businesses, this may soon become highly relevant, as the upcoming Corporate Sustainability Reporting Directive (CSRD) will likely force them to disclose scope 3 emissions in their management reports. I expect usage of services like ChatGPT to fall under scope 3 because cloud compute is considered to be scope 3. I hope this article can provide inspiration for how to estimate your organization's scope 3 emissions from ChatGPT and similar services.
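Such a scope 3 estimate boils down to simple arithmetic: energy per query times query volume times the carbon intensity of the electricity grid. The sketch below illustrates this; all three input numbers are placeholder assumptions for illustration, not measured values from this article.

```python
# Hypothetical sketch of a scope 3 estimate for an organization's ChatGPT usage.
# All constants below are illustrative assumptions, not measured values.

ENERGY_PER_QUERY_KWH = 0.003      # assumed electricity per query (kWh)
QUERIES_PER_YEAR = 1_000_000      # assumed annual queries by the organization
GRID_INTENSITY_KG_PER_KWH = 0.4   # assumed grid carbon intensity (kg CO2e/kWh)

def estimate_scope3_kg_co2e(energy_per_query_kwh: float,
                            queries: int,
                            grid_intensity: float) -> float:
    """Estimated emissions in kg CO2e: energy per query * queries * intensity."""
    return energy_per_query_kwh * queries * grid_intensity

emissions = estimate_scope3_kg_co2e(
    ENERGY_PER_QUERY_KWH, QUERIES_PER_YEAR, GRID_INTENSITY_KG_PER_KWH
)
print(f"Estimated scope 3 emissions: {emissions:,.0f} kg CO2e")
```

With these assumed inputs, one million queries would correspond to roughly 1,200 kg CO2e; swapping in your own query count and local grid intensity adapts the estimate to your organization.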
Another reason why it’s interesting to look into ChatGPT’s energy use per query is that it’ll enable individuals to come up with their own estimates of ChatGPT’s total electricity consumption or carbon footprint. As such, I hope this blog post will inspire others to publish similar work.
In the remainder of this article, the terms “query” and “request” will be used interchangeably.
In this section, I’ll present the methodology used to produce estimates of ChatGPT’s electricity consumption per query. The estimates rely on two different methods:
- One in which we estimate the total energy consumption of the hardware ChatGPT presumably is running on and divide by the assumed number of daily queries
- One in which we use the reported…
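The first of these methods can be sketched as a back-of-the-envelope calculation: estimate the daily energy draw of the serving hardware and divide by the assumed daily query volume. Every number in the sketch below is an illustrative assumption, not a figure from this article's actual estimates.

```python
# Hypothetical sketch of method 1: total daily hardware energy / daily queries.
# All constants are placeholder assumptions for illustration only.

NUM_SERVERS = 3_500           # assumed number of servers running ChatGPT
SERVER_POWER_KW = 6.5         # assumed average power draw per server (kW)
DAILY_QUERIES = 200_000_000   # assumed number of queries handled per day

def energy_per_query_kwh(num_servers: int,
                         server_power_kw: float,
                         daily_queries: int) -> float:
    """Daily energy of the hardware fleet (kWh) divided by daily queries."""
    daily_energy_kwh = num_servers * server_power_kw * 24  # kW * hours
    return daily_energy_kwh / daily_queries

per_query = energy_per_query_kwh(NUM_SERVERS, SERVER_POWER_KW, DAILY_QUERIES)
print(f"Estimated energy per query: {per_query * 1000:.2f} Wh")
```

Under these assumptions the method yields roughly 2.7 Wh per query; the point of the sketch is the structure of the calculation, since the real uncertainty lies in the hardware and traffic assumptions themselves.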