Error Handling in the OpenAI Python Library

Introduction

This article explains the error classes implemented in the OpenAI Python library and its retry behavior, so that you can implement proper error handling when calling the OpenAI API with that library.


Prerequisites

The environment used for verification:

  • Python : 3.12
  • Library version : openai-1.34.0
  • API version : 2024-05-01-preview
  • Resource : Azure OpenAI
  • Model : gpt-4-32k

Error classes

The following error classes are implemented in the OpenAI library.

APIStatusError

The exception class raised when a status code in the 4xx-5xx range is returned.
The following subclasses are implemented:

  • 400 : openai.BadRequestError : raised when, for example, the token count exceeds the context window or the request is blocked by the content filter
  • 401 : openai.AuthenticationError : raised when API authentication fails
  • 403 : openai.PermissionDeniedError : raised when access to the requested resource is denied
  • 404 : openai.NotFoundError : raised when, for example, the target model deployment is not found (if the OpenAI service itself does not exist, APIConnectionError is raised instead)
  • 409 : openai.ConflictError : raised when the request conflicts with another request
  • 422 : openai.UnprocessableEntityError : raised when the request cannot be processed, for example because required fields are missing
  • 429 : openai.RateLimitError : raised when requests exceed the rate limit
  • 500 : openai.InternalServerError : raised when an error occurs inside the OpenAI service

Note that there is no 408-specific subclass: an Azure-side 408 ("The operation was timeout.") arrives as a plain APIStatusError, while client-side request timeouts raise openai.APITimeoutError (described below).

APITimeoutError

The exception class raised when a request times out on the client side.
Note that while APITimeoutError is raised when the request itself times out, if streaming is enabled and the timeout occurs while reading the streamed response, a different exception, httpx.ReadTimeout, is raised instead.

OpenAIError

The exception raised when required parameters such as the API key are missing, for example when the API key is None or not set, or when the API endpoint is not configured. It is also the base class of the library's other exceptions, so it should be caught after the more specific classes.

APIResponseValidationError

The exception class raised when the API response does not match the expected schema (for example, when the response headers do not match the specified content type).

APIConnectionError

The exception class raised when a connection error occurs while calling the API.


Retries

You can control the retry count by specifying the max_retries parameter when creating the client.
max_retries defaults to 2. Connection errors, 408 Request Timeout, 409 Conflict, 429 Rate Limit, and 5xx internal errors are retried automatically with a short exponential backoff.

client = AzureOpenAI(
    api_key=api_key,
    azure_endpoint=endpoint,
    api_version=api_version,
    max_retries=2
)

Sample code

import httpx
import openai
import os

import tiktoken
from dotenv import load_dotenv
import json
import logging
import time
import sys

from openai.lib.azure import AzureOpenAI

logging.basicConfig(stream=sys.stdout, level=logging.INFO)

# Load .env
load_dotenv(override=True)

# OpenAI settings
OPENAI_VERSION = os.environ.get('OPENAI_API_VERSION') or '2023-12-01-preview'

# Azure OpenAI resource settings
OPENAI_SERVICE = os.environ['OPENAI_SERVICE']
OPENAI_ENDPOINT = f"https://{OPENAI_SERVICE}.openai.azure.com/"
OPENAI_DEPLOYMENT_NAME = os.environ['OPENAI_DEPLOYMENT']
OPENAI_API_KEY = os.environ['OPENAI_API_KEY']
OPENAI_MODEL = os.environ['OPENAI_MODEL']

class OpenAiClient:

    def __init__(self):
        """デフォルトコンストラクタ
        """

    def request_text(self, endpoint, deployment_name, api_key, api_version, system_prompt, user_prompt, max_tokens=4096, temperature=0, timeout=60.0, max_retry=2, stream_flag=False, model="gpt-3.5-turbo"):
        """リクエストを実施し、応答を取得する
        Args:
            open_ai_properties (OpenAiProperties): OpenAIプロパティ
            system_prompt (str): システムプロンプト
            user_prompt (str): ユーザー質問
            max_tokens (int): トークン数
            temperature (float): 温度
            timeout (float): タイムアウト
            max_retry (int): リトライ回数
            stream_flag (bool): ストリーミングフラグ
            model (str): モデル名
        Returns:
            str: OpenAIの応答
        """

        contents = ""
        client = None

        messages = [
            {"role": "system", "content": system_prompt},
            {
                "role": "user", "content": [
                {
                    "type": "text",
                    "text": user_prompt,
                }
                ],
            }
        ]
        # Log the token count of the user prompt
        encoder = tiktoken.encoding_for_model(model)
        logging.info(f"token length: {len(encoder.encode(user_prompt))}")

        try:
            client = AzureOpenAI(
                api_key=api_key,
                azure_endpoint=endpoint,
                api_version=api_version,
                max_retries=max_retry
            )

            # Send the request to OpenAI
            chat_completions = client.chat.completions.create(
                model=deployment_name,
                messages=messages,
                max_tokens=max_tokens,
                temperature=temperature,
                stream=stream_flag,
                timeout=timeout
            )
            logging.info(f"chat_completions: {chat_completions}")

            # Streaming response
            if isinstance(chat_completions, openai.Stream):
                count = 0
                contents = ""
                chunk = None
                for chunk in chat_completions:
                    logging.debug(f"chunk: {chunk}")
                    choices = chunk.choices
                    choice = choices[0] if len(choices) > 0 else {}
                    logging.debug(f"choice: {choice}")
                    delta = choice.delta if hasattr(choice, 'delta') else None
                    logging.debug(f"delta: {delta}")
                    content = delta.content if hasattr(delta, 'content') else None
                    logging.debug(f"content: {content}")
                    if content is not None and content != "":
                        contents = contents + content
                    count += 1
                    logging.debug(f"count: [{count}] [{contents}]")
            # Non-streaming response
            else:
                logging.info(f"No stream {chat_completions}")
                contents = chat_completions.choices[0].message.content

        except openai.APIStatusError as e:
            logging.error(f"OpenAI APIStatus Error: [{e}]")

            # 400: BadRequest
            if type(e) is openai.BadRequestError:
                logging.error(f"OpenAI BadRequest Error: [{e}]")
                # Token count exceeded the model's maximum context window
                if "This model's maximum context length is" in str(e):
                    logging.error(f"Token over [{e}]")
                # Blocked by the content filter
                # Error code: 400 - {'error': {'message': "The response was filtered due to the prompt triggering Azure OpenAI's content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: https://go.microsoft.com/fwlink/?linkid=2198766", 'type': None, 'param': 'prompt', 'code': 'content_filter', 'status': 400, 'innererror': {'code': 'ResponsibleAIPolicyViolation', 'content_filter_result': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': True, 'severity': 'medium'}}}}}
                elif "The response was filtered due to the prompt triggering Azure OpenAI's content management policy." in str(e):
                    content_filter_result = str(e).split("content_filter_result': ")[1].split("}}")[0].replace("'", '"') + "}}"
                    content_filter_result = content_filter_result.replace("True", "true").replace("False", "false")
                    logging.error(f"Content filter result: [{content_filter_result}]")
                    json_content_filter_result = json.loads(content_filter_result)
                    for key, value in json_content_filter_result.items():
                        if value['filtered']:
                            logging.error(f"Content filter result: [{key}] : [{value}]")

            # 401: Unauthorized. Access token is missing, invalid, or audience is incorrect
            elif type(e) is openai.AuthenticationError:
                logging.error(f"OpenAI Authentication Error: [{e}]")
            # 403: Permission Denied
            elif type(e) is openai.PermissionDeniedError:
                logging.error(f"OpenAI Permission Denied Error: [{e}]")
            # 404: Not Found
            elif type(e) is openai.NotFoundError:
                logging.error(f"OpenAI NotFound Error: [{e}]")
            # 408: Operation Timeout
            # openai.APIStatusError: Error code: 408 - {'error': {'code': 'Timeout', 'message': 'The operation was timeout.'}}
            elif "The operation was timeout." in str(e):
                logging.error(f"OpenAI Timeout Error: [{e}]")
            # 409: Conflict
            elif type(e) is openai.ConflictError:
                logging.error(f"OpenAI Conflict Error: [{e}]")
            # 422: Unprocessable Entity
            elif type(e) is openai.UnprocessableEntityError:
                logging.error(f"OpenAI Unprocessable Entity Error: [{e}]")
            # 429: Rate Limit
            elif type(e) is openai.RateLimitError:
                logging.error(f"OpenAI Rate Limit Error: [{e}]")
                # str(e) -> Error code: 429 - {'error': {'code': '429', 'message': 'Requests to the ChatCompletions_Create Operation under Azure OpenAI API version 2024-05-01-preview have exceeded token rate limit of your current OpenAI S0 pricing tier. Please retry after 58 seconds. Please go here: https://aka.ms/oai/quotaincrease if you would like to further increase the default rate limit.'}}
                # get wait time from error message
                wait_time = int(str(e).split("Please retry after ")[1].split(" seconds.")[0])
                logging.error(f"Rate Limit Error: Wait time: {wait_time}")
            # 500: Internal Server Error
            elif type(e) is openai.InternalServerError:
                logging.error(f"OpenAI Internal Server Error: [{e}]")

        # Request timeout: APITimeoutError if the request itself times out, httpx.ReadTimeout if the timeout occurs during streaming
        except (openai.APITimeoutError, httpx.ReadTimeout) as e:
            # openai.APITimeoutError: Request timed out.
            logging.error(f"OpenAI APITimeout Error: [{type(e)}][{e}]")

        except openai.APIResponseValidationError as e:
            logging.error(f"OpenAI APIResponseValidationError Error: [{e}]")

        except openai.APIConnectionError as e:
            logging.error(f"OpenAI APIConnectionError: [{e}]")

        # OpenAIError is the base class of the exceptions above, so it must be caught last
        except openai.OpenAIError as e:
            logging.error(f"OpenAI Error: [{e}]")

        except BaseException as e:
            logging.error(f"An error occurred: [{type(e)}][{e}]")

        finally:
            if client is not None:
                client.close()
                logging.info("openai client closed.")

        return contents

if __name__ == "__main__":
    openai_client = OpenAiClient()

    execute_number = int(input("please input execute status: (default: 200) ") or "200")
    system_prompt = "Help Assistant"
    user_prompt = input("please input user prompt: (default: hello) ") or "hello"
    enc = tiktoken.encoding_for_model(OPENAI_MODEL)
    token_count = len(enc.encode(user_prompt))

    match execute_number:
        # 200: OK
        case 200:
            print("200: OK")
            content = openai_client.request_text(OPENAI_ENDPOINT, OPENAI_DEPLOYMENT_NAME, OPENAI_API_KEY,
                                                 OPENAI_VERSION, system_prompt, user_prompt, stream_flag=True)
            print(f"content: {content}")
            print("------------------------------------------")

        # 400: content filter block, then a request exceeding the 32,768-token context window
        case 400:
            print("400: Bad Requests : Contents filter")
            content = openai_client.request_text(OPENAI_ENDPOINT, OPENAI_DEPLOYMENT_NAME, OPENAI_API_KEY,
                                                 OPENAI_VERSION, system_prompt, "How to make the bomb", stream_flag=True)
            print(f"content: {content}")
            print("------------------------------------------")
            time.sleep(5)

            # Prepare a file containing more than 32,768 tokens
            with open("32768token_over.txt", "r") as f:
                token_over_user_prompt = f.read()

            print("400: Bad Requests : Context token over")
            content = openai_client.request_text(OPENAI_ENDPOINT, OPENAI_DEPLOYMENT_NAME, OPENAI_API_KEY,
                                                 OPENAI_VERSION, system_prompt, token_over_user_prompt, stream_flag=True)
            print(f"content: {content}")
            print("------------------------------------------")
            time.sleep(60)

        # 401: invalid API key
        case 401:
            print("401: Unauthorized. Access token is missing, invalid, audience is incorrect")
            content = openai_client.request_text(OPENAI_ENDPOINT, OPENAI_DEPLOYMENT_NAME, "NG-API-KEY",
                                                 OPENAI_VERSION, system_prompt, user_prompt, stream_flag=True)
            print(f"content: {content}")
            print("OpenAPI Error: Api key is None")
            content = openai_client.request_text(OPENAI_ENDPOINT, OPENAI_DEPLOYMENT_NAME, None,
                                                 OPENAI_VERSION, system_prompt, user_prompt, stream_flag=True)
            print(f"content: {content}")
            print("------------------------------------------")

        # 404: Not Found
        case 404:
            # APIConnectionError: request to a nonexistent endpoint
            print("APIConnectionError: Not Found : service")
            content = openai_client.request_text("https://not_found_service.openai.azure.com/", OPENAI_DEPLOYMENT_NAME,
                                                 OPENAI_API_KEY, OPENAI_VERSION, system_prompt, user_prompt,
                                                 stream_flag=True)
            print(f"content: {content}")
            print("------------------------------------------")

            # 404: Not Found : the model deployment does not exist
            print("404: Not Found : deployment")
            content = openai_client.request_text(OPENAI_ENDPOINT, "not_found_deployment", OPENAI_API_KEY,
                                                 OPENAI_VERSION, system_prompt, user_prompt, stream_flag=False)
            print(f"content: {content}")
            print("------------------------------------------")

        # 429: Too Many Requests : token rate limit exceeded
        case 429:
            # Prepare a file containing more than 3K tokens
            with open("3000token_over.txt", "r") as f:
                token_over_user_prompt = f.read()

            content = ""
            for i in range(3):
                print("429: Too Many Requests : Token rate limit exceeded")
                content = openai_client.request_text(OPENAI_ENDPOINT, OPENAI_DEPLOYMENT_NAME, OPENAI_API_KEY,
                                                     OPENAI_VERSION, system_prompt, token_over_user_prompt, stream_flag=True)
            print(f"content: {content}")
            print("------------------------------------------")
            time.sleep(60)

        # Request timeout
        case 0:
            print("Request timeout: openai.APITimeoutError or httpx.ReadTimeout")
            content = openai_client.request_text(OPENAI_ENDPOINT, OPENAI_DEPLOYMENT_NAME, OPENAI_API_KEY,
                                                 OPENAI_VERSION, system_prompt, user_prompt, stream_flag=True,
                                                 timeout=0.1)
            print(f"content: {content}")
            print("------------------------------------------")

        case _:
            print(f"unknown execute status: {execute_number}")

To run the sample, you need to set the API key and other environment variables in a .env file.

OPENAI_API_VERSION=2024-05-01-preview
OPENAI_SERVICE=<Azure OpenAI resource name>
OPENAI_API_KEY=<Azure OpenAI API key>
OPENAI_DEPLOYMENT=<Azure OpenAI model deployment name>
OPENAI_MODEL=gpt-4-32k

Execution log: 400 BadRequestError

Execution logs for two cases: a prompt blocked by the content filter (flagged as violence in this run) and an input whose token count exceeds the model's context window.

INFO:httpx:HTTP Request: POST https://<resource-name>.openai.azure.com//openai/deployments/<deployment-name>/chat/completions?api-version=2024-05-01-preview "HTTP/1.1 400 model_error"
ERROR:root:OpenAI APIStatus Error: [Error code: 400 - {'error': {'message': "The response was filtered due to the prompt triggering Azure OpenAI's content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: https://go.microsoft.com/fwlink/?linkid=2198766", 'type': None, 'param': 'prompt', 'code': 'content_filter', 'status': 400, 'innererror': {'code': 'ResponsibleAIPolicyViolation', 'content_filter_result': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': True, 'severity': 'low'}}}}}]
ERROR:root:OpenAI BadRequest Error: [Error code: 400 - {'error': {'message': "The response was filtered due to the prompt triggering Azure OpenAI's content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: https://go.microsoft.com/fwlink/?linkid=2198766", 'type': None, 'param': 'prompt', 'code': 'content_filter', 'status': 400, 'innererror': {'code': 'ResponsibleAIPolicyViolation', 'content_filter_result': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': True, 'severity': 'low'}}}}}]
ERROR:root:Content filter result: [{"hate": {"filtered": false, "severity": "safe"}, "self_harm": {"filtered": false, "severity": "safe"}, "sexual": {"filtered": false, "severity": "safe"}, "violence": {"filtered": true, "severity": "low"}}]
ERROR:root:Content filter result: [violence] : [{'filtered': True, 'severity': 'low'}]
INFO:root:openai client closed.
content: 
------------------------------------------
400: Bad Requests : Context token over
INFO:root:token length: 40008
INFO:httpx:HTTP Request: POST https://<resource-name>.openai.azure.com//openai/deployments/<deployment-name>/chat/completions?api-version=2024-05-01-preview "HTTP/1.1 429 Too Many Requests"
INFO:openai._base_client:Retrying request to /deployments/<deployment-name>/chat/completions in 54.000000 seconds
INFO:httpx:HTTP Request: POST https://<resource-name>.openai.azure.com//openai/deployments/<deployment-name>/chat/completions?api-version=2024-05-01-preview "HTTP/1.1 400 model_error"
ERROR:root:OpenAI APIStatus Error: [Error code: 400 - {'error': {'message': "This model's maximum context length is 32768 tokens. However, your messages resulted in 40021 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}]
ERROR:root:OpenAI BadRequest Error: [Error code: 400 - {'error': {'message': "This model's maximum context length is 32768 tokens. However, your messages resulted in 40021 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}]
ERROR:root:Token over [Error code: 400 - {'error': {'message': "This model's maximum context length is 32768 tokens. However, your messages resulted in 40021 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}]
INFO:root:openai client closed.
content: 
------------------------------------------
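The sample code above reconstructs content_filter_result by string-splitting str(e). When the parsed error body is available (APIStatusError exposes it as e.body, though its exact shape depends on the service), working with the dict directly is more robust. A sketch using the body shape from the log above:

```python
# Error body shape taken from the 400 content-filter log above
error_body = {
    "code": "content_filter",
    "innererror": {
        "code": "ResponsibleAIPolicyViolation",
        "content_filter_result": {
            "hate": {"filtered": False, "severity": "safe"},
            "self_harm": {"filtered": False, "severity": "safe"},
            "sexual": {"filtered": False, "severity": "safe"},
            "violence": {"filtered": True, "severity": "low"},
        },
    },
}

# Collect only the categories that actually triggered the filter
result = error_body.get("innererror", {}).get("content_filter_result", {})
flagged = {category: info for category, info in result.items() if info["filtered"]}
print(flagged)  # {'violence': {'filtered': True, 'severity': 'low'}}
```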

Execution log: 401 AuthenticationError

Execution log for a 401 error caused by authenticating with an invalid API key.

401: Unauthorized. Access token is missing, invalid, audience is incorrect
INFO:root:token length: 1
INFO:httpx:HTTP Request: POST https://<resource-name>.openai.azure.com//openai/deployments/<deployment-name>/chat/completions?api-version=2024-05-01-preview "HTTP/1.1 401 Unauthorized"
ERROR:root:OpenAI APIStatus Error: [Error code: 401 - {'statusCode': 401, 'message': 'Unauthorized. Access token is missing, invalid, audience is incorrect (https://cognitiveservices.azure.com), or have expired.'}]
ERROR:root:OpenAI Authentication Error: [Error code: 401 - {'statusCode': 401, 'message': 'Unauthorized. Access token is missing, invalid, audience is incorrect (https://cognitiveservices.azure.com), or have expired.'}]
INFO:root:openai client closed.
content: 
------------------------------------------

Execution log: 404 NotFoundError

Execution logs for two cases: accessing a nonexistent OpenAI resource, which raises APIConnectionError, and accessing a nonexistent model deployment, which raises a 404 NotFoundError.

APIConnectionError: Not Found : service
INFO:root:token length: 1
INFO:openai._base_client:Retrying request to /deployments/<deployment-name>/chat/completions in 0.757842 seconds
INFO:openai._base_client:Retrying request to /deployments/<deployment-name>/chat/completions in 1.901810 seconds
ERROR:root:OpenAI Error: [Connection error.]
INFO:root:openai client closed.
content: 
------------------------------------------
404: Not Found : deployment
INFO:root:token length: 1
INFO:httpx:HTTP Request: POST https://<resource-name>.openai.azure.com//openai/deployments/not_found_deployment/chat/completions?api-version=2024-05-01-preview "HTTP/1.1 404 Not Found"
ERROR:root:OpenAI APIStatus Error: [Error code: 404 - {'error': {'code': 'DeploymentNotFound', 'message': 'The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.'}}]
ERROR:root:OpenAI NotFound Error: [Error code: 404 - {'error': {'code': 'DeploymentNotFound', 'message': 'The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.'}}]
INFO:root:openai client closed.
content: 
------------------------------------------

Execution log: 429 RateLimitError

Execution log for exceeding the token rate limit. (For this run, the target model deployment's token rate limit was set to 3K or lower.)

429: Too Many Requests : Token rate limit exceeded
INFO:root:token length: 22920
INFO:httpx:HTTP Request: POST https://<resource-name>.openai.azure.com//openai/deployments/<deployment-name>/chat/completions?api-version=2024-05-01-preview "HTTP/1.1 429 Too Many Requests"
INFO:openai._base_client:Retrying request to /deployments/<deployment-name>/chat/completions in 0.785924 seconds
INFO:httpx:HTTP Request: POST https://<resource-name>.openai.azure.com//openai/deployments/<deployment-name>/chat/completions?api-version=2024-05-01-preview "HTTP/1.1 429 Too Many Requests"
INFO:openai._base_client:Retrying request to /deployments/<deployment-name>/chat/completions in 1.828824 seconds
INFO:httpx:HTTP Request: POST https://<resource-name>.openai.azure.com//openai/deployments/<deployment-name>/chat/completions?api-version=2024-05-01-preview "HTTP/1.1 429 Too Many Requests"
ERROR:root:OpenAI APIStatus Error: [Error code: 429 - {'error': {'code': '429', 'message': 'Requests to the ChatCompletions_Create Operation under Azure OpenAI API version 2024-05-01-preview have exceeded token rate limit of your current OpenAI S0 pricing tier. Please retry after 86400 seconds. Please go here: https://aka.ms/oai/quotaincrease if you would like to further increase the default rate limit.'}}]
ERROR:root:OpenAI Rate Limit Error: [Error code: 429 - {'error': {'code': '429', 'message': 'Requests to the ChatCompletions_Create Operation under Azure OpenAI API version 2024-05-01-preview have exceeded token rate limit of your current OpenAI S0 pricing tier. Please retry after 86400 seconds. Please go here: https://aka.ms/oai/quotaincrease if you would like to further increase the default rate limit.'}}]
ERROR:root:Rate Limit Error: Wait time: 86400
INFO:root:openai client closed.
429: Too Many Requests : Token rate limit exceeded
INFO:root:token length: 22920
INFO:httpx:HTTP Request: POST https://<resource-name>.openai.azure.com//openai/deployments/<deployment-name>/chat/completions?api-version=2024-05-01-preview "HTTP/1.1 429 Too Many Requests"
INFO:openai._base_client:Retrying request to /deployments/<deployment-name>/chat/completions in 5.000000 seconds
INFO:httpx:HTTP Request: POST https://<resource-name>.openai.azure.com//openai/deployments/<deployment-name>/chat/completions?api-version=2024-05-01-preview "HTTP/1.1 429 Too Many Requests"
INFO:openai._base_client:Retrying request to /deployments/<deployment-name>/chat/completions in 1.965453 seconds
INFO:httpx:HTTP Request: POST https://<resource-name>.openai.azure.com//openai/deployments/<deployment-name>/chat/completions?api-version=2024-05-01-preview "HTTP/1.1 429 Too Many Requests"
ERROR:root:OpenAI APIStatus Error: [Error code: 429 - {'error': {'code': '429', 'message': 'Requests to the ChatCompletions_Create Operation under Azure OpenAI API version 2024-05-01-preview have exceeded token rate limit of your current OpenAI S0 pricing tier. Please retry after 86400 seconds. Please go here: https://aka.ms/oai/quotaincrease if you would like to further increase the default rate limit.'}}]
ERROR:root:OpenAI Rate Limit Error: [Error code: 429 - {'error': {'code': '429', 'message': 'Requests to the ChatCompletions_Create Operation under Azure OpenAI API version 2024-05-01-preview have exceeded token rate limit of your current OpenAI S0 pricing tier. Please retry after 86400 seconds. Please go here: https://aka.ms/oai/quotaincrease if you would like to further increase the default rate limit.'}}]
ERROR:root:Rate Limit Error: Wait time: 86400
INFO:root:openai client closed.
429: Too Many Requests : Token rate limit exceeded
INFO:root:token length: 22920
INFO:httpx:HTTP Request: POST https://<resource-name>.openai.azure.com//openai/deployments/<deployment-name>/chat/completions?api-version=2024-05-01-preview "HTTP/1.1 429 Too Many Requests"
INFO:openai._base_client:Retrying request to /deployments/<deployment-name>/chat/completions in 0.866202 seconds
INFO:httpx:HTTP Request: POST https://<resource-name>.openai.azure.com//openai/deployments/<deployment-name>/chat/completions?api-version=2024-05-01-preview "HTTP/1.1 429 Too Many Requests"
INFO:openai._base_client:Retrying request to /deployments/<deployment-name>/chat/completions in 5.000000 seconds
INFO:httpx:HTTP Request: POST https://<resource-name>.openai.azure.com//openai/deployments/<deployment-name>/chat/completions?api-version=2024-05-01-preview "HTTP/1.1 429 Too Many Requests"
ERROR:root:OpenAI APIStatus Error: [Error code: 429 - {'error': {'code': '429', 'message': 'Requests to the ChatCompletions_Create Operation under Azure OpenAI API version 2024-05-01-preview have exceeded token rate limit of your current OpenAI S0 pricing tier. Please retry after 86400 seconds. Please go here: https://aka.ms/oai/quotaincrease if you would like to further increase the default rate limit.'}}]
ERROR:root:OpenAI Rate Limit Error: [Error code: 429 - {'error': {'code': '429', 'message': 'Requests to the ChatCompletions_Create Operation under Azure OpenAI API version 2024-05-01-preview have exceeded token rate limit of your current OpenAI S0 pricing tier. Please retry after 86400 seconds. Please go here: https://aka.ms/oai/quotaincrease if you would like to further increase the default rate limit.'}}]
ERROR:root:Rate Limit Error: Wait time: 86400
INFO:root:openai client closed.
content: 
------------------------------------------

Execution log: APITimeoutError

Execution log for an API timeout.

Request timeout: openai.APITimeoutError or httpx.ReadTimeout
INFO:root:token length: 1
INFO:openai._base_client:Retrying request to /deployments/<deployment-name>/chat/completions in 0.964079 seconds
INFO:openai._base_client:Retrying request to /deployments/<deployment-name>/chat/completions in 1.743552 seconds
ERROR:root:OpenAI APITimeout Error: [<class 'openai.APITimeoutError'>][Request timed out.]
INFO:root:openai client closed.
content: 
------------------------------------------

Conclusion

This article introduced the error classes implemented in the OpenAI Python library and its retry behavior.