{"doc_desc":{"producers":[{"name":"John Doe"}],"prod_date":"2025-03-14"},"project_desc":{"title_statement":{"title":"Double Jeopardy and Climate Impact in the Use of Large Language Models: Socio-economic Disparities and Reduced Utility for Non-English Speakers","idno":"JD_SCR_001"},"project_website":["https:\/\/github.com\/worldbank\/double-jeopardy-in-llms"],"authoring_entity":[{"name":"Aivin V. Solatorio","affiliation":"World Bank"},{"name":"Gabriel Stefanini Vicente","affiliation":"World Bank"},{"name":"Holly Krambeck","affiliation":"World Bank"},{"name":"Olivier Dupriez","affiliation":"World Bank"}],"abstract":"This work investigates the socio-economic disparities and reduced utility for non-English speakers in the use of large language models (LLMs). We use the FLORES-200, Ethnologue, and World Development Indicators datasets to analyze the socio-economic disparities in the use of LLMs. We also use the OpenAI's GPT-4 API to assess the reduced utility of LLMs for non-English speakers.","geographic_units":[{"name":"World","code":"WLD"}],"datasets":[{"name":"FLORES-200 and FLORES+","note":"A multilingual dataset covering 100 languages, with 1,000 sentences per language. Used for evaluating translation quality and computing the tokenization premium relative to English.","uri":"https:\/\/github.com\/facebookresearch\/flores"},{"name":"Ethnologue","uri":"https:\/\/www.ethnologue.com\/","note":"Provides linguistic data, including the number of speakers, geographic distribution, and writing systems. We use Ethnologue to estimate the number of speakers for each language."},{"name":"World Bank, World Development Indicators (WDI)","note":"Contains socio-economic data at the country level. 
Specifically, we use the GDP per capita in current US$ (NY.GDP.PCAP.CD) and annual population growth rate (SP.POP.GROW) indicators to compute the population-weighted GDP for each language and to align population estimates to 2022 based on historical figures from Ethnologue.","uri":"https:\/\/datacatalog.worldbank.org\/dataset\/world-development-indicators","license":"CC BY 4.0"},{"note":"Used to assess the reduced utility of LLMs for non-English speakers. We applied translation with different prompting methods to generate reference translations for FLORES sentences. The LLM translated non-English sentences into English, with the original English sentences serving as a benchmark for evaluating translation quality.","uri":"https:\/\/openai.com\/api\/","name":"OpenAI GPT-4o and GPT-4 Turbo APIs"}],"software":[{"name":"Python","library":["requests; pandas; docutils; jupyter-book; datasets; tiktoken; fire; openpyxl; tokenizers; ipykernel; transformers; torch; plotly; httpx; joblib; nbformat; openai; ipywidgets; groq; matplotlib; kaleido; scipy; statsmodels"],"version":"3.10"}],"scripts":[{"file_name":"compute-premium-costs.ipynb","description":"Computes the tokenization premium for the FLORES dataset. The population-weighted GDP for each language is also calculated in this notebook.","license":[{"name":"Mozilla Public License"}],"title":"Tokenization of FLORES dataset","format":"Jupyter Python Notebook"},{"file_name":"back-translation-task.ipynb","description":"Generates the back-translation task for the FLORES dataset. The notebook implements the batched translation strategy for the translation task and uses the OpenAI GPT-4o API.","license":[{"name":"Mozilla Public License"}],"title":"Back-translation task for the FLORES dataset","format":"Jupyter Python Notebook"},{"description":"Notebook for additional analysis of the results. 
Key visualizations are generated in this notebook, including the comparison of tokenization premiums between two different tokenizers (GPT-4o vs. GPT-4 Turbo).","file_name":"analysis.ipynb","license":[{"name":"Mozilla Public License"}],"title":"Additional analysis of the results","format":"Jupyter Python Notebook"}],"repository_uri":[{"name":"double-jeopardy-in-llms","uri":"https:\/\/github.com\/worldbank\/double-jeopardy-in-llms\/tree\/main","type":"GitHub"}],"technology_environment":"This work was developed on a MacBook Pro with an M1 Pro processor and 64GB of RAM. No GPU is needed for the computations.","reproduction_instructions":"Some notebooks are not publicly available because they handle proprietary data from Ethnologue. One of these notebooks computes the adjusted population based on historical figures from Ethnologue and the annual population growth rates.\n\nThis repository uses poetry to manage dependencies. To install the dependencies, run the following command:\n`poetry install`\n\nTo review the list of dependencies, please refer to the pyproject.toml file.\n\nVS Code \/ Cursor users can use the Python extension to run the notebooks.\n\nUse the following command to spin up a local Jupyter server:\n\n`poetry run jupyter notebook`\n\nIt is recommended to use a virtual environment to run the code.\n\nAdditionally, the notebooks\/compute-premium-costs.ipynb notebook uses the OpenAI API. To use the API, you need to set the OPENAI_API_KEY environment variable. 
You can create a .env file in the root of the repository and add the following:\n\n`OPENAI_API_KEY=<your-openai-api-key>`","technology_requirements":"Access to the OpenAI API is required.","license":[{"name":"Mozilla Public License","uri":"https:\/\/www.mozilla.org\/en-US\/MPL\/"}],"citation_requirement":"Please cite our paper as follows when referencing this work.\n\n@misc{solatorio2024doublejeopardyclimateimpact,\n      title={Double Jeopardy and Climate Impact in the Use of Large Language Models: Socio-economic Disparities and Reduced Utility for Non-English Speakers},\n      author={Aivin V. Solatorio and Gabriel Stefanini Vicente and Holly Krambeck and Olivier Dupriez},\n      year={2024},\n      eprint={2410.10665},\n      archivePrefix={arXiv},\n      primaryClass={cs.CL},\n      url={https:\/\/arxiv.org\/abs\/2410.10665},\n}","output":[{"type":"Working paper","abstract":"Artificial Intelligence (AI), particularly large language models (LLMs), holds the potential to bridge language and information gaps, which can benefit the economies of developing nations. However, our analysis of FLORES-200, FLORES+, Ethnologue, and World Development Indicators data reveals that these benefits largely favor English speakers. Speakers of languages in low-income and lower-middle-income countries face higher costs when using OpenAI's GPT models via APIs because of how the system processes the input -- tokenization. Around 1.5 billion people, speaking languages primarily from lower-middle-income countries, could incur costs that are 4 to 6 times higher than those faced by English speakers. Disparities in LLM performance are significant, and tokenization in models priced per token amplifies inequalities in access, cost, and utility. Moreover, using the quality of translation tasks as a proxy measure, we show that LLMs perform poorly in low-resource languages, presenting a \"double jeopardy\" of higher costs and poor performance for these users. 
We also discuss the direct impact of fragmentation in tokenizing low-resource languages on climate. This underscores the need for fairer algorithm development to benefit all linguistic groups.","title":"Double Jeopardy and Climate Impact in the Use of Large Language Models: Socio-economic Disparities and Reduced Utility for Non-English Speakers","authors":"Aivin V. Solatorio, Gabriel Stefanini Vicente, Holly Krambeck, Olivier Dupriez","uri":"https:\/\/arxiv.org\/abs\/2410.10665","doi":"https:\/\/doi.org\/10.48550\/arXiv.2410.10665"}],"production_date":"2024-10","language":[{"name":"English","code":"EN"}]},"schematype":"script"}