The nonprofit artificial intelligence research institute Epoch AI recently released a study analyzing the energy consumption of ChatGPT, challenging previous assumptions about its power usage. A widely cited earlier estimate held that answering a single question with ChatGPT consumed approximately 3 watt-hours of electricity, roughly ten times the energy of a Google search. Epoch AI's research casts doubt on that figure.
Using OpenAI's latest default model, GPT-4o, as a reference, Epoch AI found that answering a question with ChatGPT consumes only about 0.3 watt-hours on average, less energy than many household appliances use in a few minutes. This suggests that, for a typical user, ChatGPT's energy consumption is negligible compared to running home appliances, heating or cooling a home, or driving a car.
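To make the 0.3 watt-hour figure concrete, here is a back-of-envelope sketch; the query rate and appliance wattages below are illustrative assumptions, not figures from the study.

```python
# Back-of-envelope comparison built on Epoch AI's 0.3 Wh-per-query estimate.
# The query rate and appliance wattages are illustrative assumptions.

WH_PER_QUERY = 0.3                  # Epoch AI's average estimate for GPT-4o
QUERIES_PER_DAY = 15                # assumed fairly heavy personal use

daily_wh = WH_PER_QUERY * QUERIES_PER_DAY
yearly_kwh = daily_wh * 365 / 1000

led_bulb_hour_wh = 10               # a 10 W LED bulb left on for one hour
microwave_5min_wh = 1000 * 5 / 60   # a 1 kW microwave running for five minutes

print(f"Daily ChatGPT use:         {daily_wh:.1f} Wh")
print(f"Yearly ChatGPT use:        {yearly_kwh:.2f} kWh")
print(f"One hour of an LED bulb:   {led_bulb_hour_wh} Wh")
print(f"Five minutes of microwave: {microwave_5min_wh:.0f} Wh")
# At this rate, a full year of ChatGPT (~1.6 kWh) is roughly what a 1 kW
# microwave consumes in under two hours of operation.
```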
Analysts involved in the study noted that earlier overestimates of ChatGPT's energy consumption likely stemmed from outdated data and assumptions. For instance, some early studies assumed that OpenAI ran its models on older, less efficient chips, which inflated the resulting estimates.
Although Epoch AI's figure of 0.3 watt-hours is a more accurate estimate, it remains an approximation, since OpenAI does not disclose enough information for a precise calculation. The analysis also omits the energy consumed by supplementary features such as image generation within ChatGPT, as well as the cost of processing input data. The analysts acknowledged that queries with long inputs, such as large attached documents, could consume considerably more energy than a standard question.
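The study does not publish per-token numbers, but one can sketch why long inputs matter by assuming energy scales roughly with the number of tokens processed. Every figure below is an illustrative assumption, not Epoch AI data.

```python
# Illustrative only: assumes energy scales roughly linearly with tokens
# processed, and that a "typical" 0.3 Wh query covers about 500 tokens of
# prompt plus answer. Neither assumption comes from the Epoch AI study.

WH_PER_AVG_QUERY = 0.3
TOKENS_PER_AVG_QUERY = 500               # assumed typical prompt + answer

wh_per_token = WH_PER_AVG_QUERY / TOKENS_PER_AVG_QUERY

document_tokens = 50_000                 # roughly a 100-page document
document_query_wh = document_tokens * wh_per_token

print(f"Typical query:       {WH_PER_AVG_QUERY} Wh")
print(f"Long-document query: {document_query_wh:.0f} Wh")   # ~30 Wh
```

Under these assumptions, a single long-document query would use about a hundred times the energy of a typical question, which is why the 0.3 watt-hour average should not be read as a per-query ceiling.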
Despite significant breakthroughs in AI efficiency in recent years, the growing scale of AI deployment is expected to drive a build-out of large-scale, energy-hungry infrastructure. According to a report by the RAND Corporation, AI data centers may need nearly as much electricity within the next two years as California's entire 2022 power capacity (68 gigawatts). By 2030, training a single cutting-edge AI model could demand power output equivalent to eight nuclear reactors (8 gigawatts).
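For a sense of scale, the RAND figures can be lined up against one another; the roughly one-gigawatt-per-reactor equivalence below is implied by the report's own comparison of 8 gigawatts to eight reactors.

```python
# Scale comparison derived from the RAND Corporation figures quoted above.

CALIFORNIA_2022_GW = 68    # California's total power capacity in 2022
TRAINING_2030_GW = 8       # projected draw of one frontier training run by 2030
GW_PER_REACTOR = TRAINING_2030_GW / 8   # implied ~1 GW per nuclear reactor

print(f"Projected AI data center demand: {CALIFORNIA_2022_GW} GW, "
      f"about {CALIFORNIA_2022_GW / GW_PER_REACTOR:.0f} reactors' worth")
print(f"One 2030 training run: {TRAINING_2030_GW} GW, "
      f"or {TRAINING_2030_GW / GW_PER_REACTOR:.0f} reactors")
```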
ChatGPT's rapidly growing user base also translates into substantial server demands. OpenAI and its investment partners plan to spend billions of dollars over the coming years building new AI data centers.
Meanwhile, OpenAI and the broader AI industry are shifting toward reasoning models, which generally handle tasks more capably but require more computation. Unlike models such as GPT-4o, which respond to queries almost instantaneously, reasoning models "think" for seconds to minutes before answering, consuming more computing resources and electricity in the process.
Although OpenAI has started rolling out more energy-efficient reasoning models, such as o3-mini, these efficiency gains currently seem insufficient to offset the extra power demanded by the "thinking" process and the global growth in AI usage.
For those concerned about AI energy consumption, the analysts recommend using AI applications like ChatGPT sparingly or choosing models that minimize computational demands, for example by opting for smaller models and being cautious about processing or generating large amounts of data.