GPTMeta API

GPTs API

A relay (transit) proxy service built on the official API

In this era of openness and sharing, OpenAI is leading a revolution in artificial intelligence. We now fully support all OpenAI models, including GPT-4-ALL, GPT-4 multimodal, and GPT-4-gizmo-*, as well as a wide range of domestic large models. Most excitingly, we have also brought the more powerful and influential GPT-4o online!

Why Choose Us

Multi-model support

Global high-speed access with DCDN acceleration

Fair and transparent billing

Data privacy and security

💰 Affordable

We provide the same quality of service at extremely favorable prices, ensuring every user enjoys the most cost-effective experience.

🚀 Easy to use

There is no need to apply for an OpenAI API account yourself; we provide a one-stop solution, and migration takes only two steps.

🔒 Privacy and security

We strictly adhere to the OpenAI API usage agreement to safeguard user privacy and data security.

🔥 Ultra-high concurrency

We maintain our own account pool and high-concurrency processing capacity, so the service remains stable and efficient even when request volumes peak.

💨 Low network latency

Our reverse-proxy API servers are deployed around the globe, allowing us to select the optimal node to reduce call latency and improve response time.

🛡️ Technical support

Our professional team is always on standby to provide timely answers and technical support whenever you need help, so you can enjoy a more attentive service.

Daily API calls 200K+

In this intelligent world, you have the opportunity to shape your own future, and we look forward to every one of your innovations as we build a new era of artificial intelligence together. We strictly adhere to the official fee schedule, with no additional fees: each request is deducted transparently according to the official standards, so you always know exactly what you are spending. Relax and enjoy the convenience and fun that intelligent technology brings!

Number of Large Models Supported 100+

Here you have a unique opportunity not only to experience the top models developed by the OpenAI team, but also to access domestic large models from leading Chinese tech companies such as Alibaba, Zhipu AI, 360, iFLYTEK, Baidu, and Tencent. These models represent the latest technological breakthroughs in China across fields such as artificial intelligence, natural language processing, and image recognition. They are not only the pinnacle of technology, but also among the best embodiments of intelligent services.

Safeguarding data security

On our Large Language Model (LLM) Aggregation API platform, data security is our top priority. We adopt multi-layered security measures, such as industry-standard data encryption, strict access rights management and multi-factor authentication, to prevent unauthorized access. In addition, we conduct regular security checks and vulnerability assessments to ensure a secure and robust platform. Our goal is to provide users with a secure and reliable service platform that you can use without worry!

Pricing Strategy

💳 Billing strategy

We base our pricing on the complexity of the AI model you choose and the amount of data processed (measured in tokens):

Model name | Input price | Output price | Context limit | Remarks
gpt-3.5-turbo | $0.0015 /1k tokens | $0.002 /1k tokens | 4K tokens | Good for quick answers to simple questions.
gpt-3.5-turbo-16k | $0.003 /1k tokens | $0.004 /1k tokens | 16K tokens | Ideal for quick answers to simple questions with longer context.
gpt-3.5-turbo-1106 | $0.001 /1k tokens | $0.002 /1k tokens | 16K tokens | Newer model, more recent data, lower price.
gpt-3.5-turbo-instruct | $0.0015 /1k tokens | $0.0020 /1k tokens | 8K tokens | Fine-tuned model for special scenarios.
gpt-4 | $0.03 /1k tokens | $0.06 /1k tokens | 8K tokens | The strongest model, with more powerful capabilities.
gpt-4-1106-preview | $0.01 /1k tokens | $0.03 /1k tokens | 4K tokens | Latest model; context window up to 128K, output up to 4K.
gpt-4-all | $0.03 /1k tokens | $0.06 /1k tokens | 32K tokens | Versatile GPT-4 variant with multiple processing capabilities integrated.
gpt-4-gizmo | $0.03 /1k tokens | $0.06 /1k tokens | 32K tokens | Official GPTs models focused on specific application scenarios; plug-ins available for all GPTs.
gpt-4-vision-preview | $0.01 /1k tokens | $0.03 /1k tokens | 8K tokens | Latest model, multimodal.
gpt-4-32K | $0.06 /1k tokens | $0.12 /1k tokens | 32K tokens | The strongest model, with more features and a longer context.
gemini-pro | $0.0015 /1k tokens | $0.003 /1k tokens | 8K tokens | Google's latest model, with more sophisticated language understanding and generation.
gemini-pro-vision | $0.003 /1k tokens | $0.006 /1k tokens | 8K tokens | Google's latest model, with more sophisticated language understanding and generation.
dall-e-3 | ---- | HD 1024x1024: $0.08 / image | ---- | GPT's latest image-generation model (Ultra HD supported).
claude-1.3-100k | $0.008 /1k tokens | $0.008 /1k tokens | 100K tokens | Claude's latest model.
claude-2 | $0.008 /1k tokens | $0.008 /1k tokens | 100K tokens | Official Claude 2 model.
claude-3-opus-20240229 | $0.06 /1k tokens | $0.06 /1k tokens | 100K tokens | Claude 3, the latest official model.
tts-1 | ---- | $0.015 /1K characters | ---- | GPT text-to-speech model.
tts-1-hd | ---- | $0.03 /1K characters | ---- | GPT text-to-speech model, HD.
SparkDesk | ¥0.018 /1k tokens | ¥0.018 /1k tokens | 8K tokens | iFLYTEK Spark v3.1.
ERNIE-Bot | ¥0.012 /1k tokens | ¥0.012 /1k tokens | 8K tokens | Baidu Wenxin Yiyan (ERNIE Bot) model.
ERNIE-Bot-turbo | ¥0.008 /1k tokens | ¥0.008 /1k tokens | 8K tokens | Baidu Wenxin Yiyan (ERNIE Bot) model.
ERNIE-Bot-4 | ¥0.12 /1k tokens | ¥0.12 /1k tokens | 8K tokens | Baidu Wenxin Yiyan (ERNIE Bot) v4.0 model.
chatglm-pro | ¥0.01 /1k tokens | ¥0.01 /1k tokens | 8K tokens | Zhipu ChatGLM model.
chatglm_std | ¥0.005 /1k tokens | ¥0.005 /1k tokens | 8K tokens | Zhipu ChatGLM model.
chatglm-trte | ¥0.004 /1k tokens | ¥0.004 /1k tokens | 8K/32K tokens | Zhipu ChatGLM model.
chatglm-turbo | ¥0.005 /1k tokens | ¥0.005 /1k tokens | 8K tokens | Zhipu ChatGLM model.
hunyuan | ¥0.1 /1k tokens | ¥0.1 /1k tokens | 8K tokens | Tencent Hunyuan large model.
qwen-plus | ¥0.02 /1k tokens | ¥0.02 /1k tokens | 8K tokens | Alibaba Tongyi Qianwen large model.
qwen-max | ¥0.02 /1k tokens | ¥0.02 /1k tokens | 8K tokens | Alibaba Tongyi Qianwen large model.
qwen-turbo | ¥0.008 /1k tokens | ¥0.008 /1k tokens | 8K tokens | Alibaba Tongyi Qianwen large model.
text-embedding-ada-002 | $0.0001 /1K tokens | $0.0001 /1K tokens | 8K tokens | GPT embedding (vector) model, 50,000+ requests per minute.
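
For illustration, here is a small Python sketch estimating the official USD cost of a single chat request. The prices are hardcoded from the table above and the token counts are example values, not output of any live API:

# Illustrative cost estimate using per-1k-token prices from the table above.
# The PRICES_PER_1K dict and the token counts are example values only.
PRICES_PER_1K = {
    # model: (input price USD, output price USD)
    "gpt-3.5-turbo": (0.0015, 0.002),
    "gpt-4":         (0.03,   0.06),
    "gpt-4-32K":     (0.06,   0.12),
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    input_price, output_price = PRICES_PER_1K[model]
    return (prompt_tokens / 1000) * input_price + (completion_tokens / 1000) * output_price

# e.g. a gpt-4 call with 1,200 prompt tokens and 400 completion tokens
print(round(estimate_cost("gpt-4", 1200, 400), 4))  # 0.06  (0.036 + 0.024)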

Documentation

The OpenAI API can be applied to almost any task that involves understanding or generating natural language, code, or images. We offer a range of models with different power levels for different tasks, as well as the ability to fine-tune your own custom models. These models can be used for all tasks from content generation to semantic search and classification.

Getting Started Documentation (docs)

API documentation

Frequently Asked Questions

How it's billed

Token consumption is priced in USD at the official rates, but our top-up exchange rate is very favorable.

Token Price

Input and output token counts follow the official API:

  • Input: official input token count
  • Output: official output token count

GPTMeta API default price: $1 = 500,000 tokens, i.e. $0.002/1k tokens (base price).

Example: gpt-3.5-turbo-1106

It has the following price settings:

  • Base price: $0.002/1k tokens
  • Input price: $0.001/1k tokens
  • Output price: $0.002/1k tokens

Model multiplier = input price / base price = 0.001 / 0.002 = 0.5

Completion multiplier = output price / input price = 0.002 / 0.001 = 2

The official price formula

Input price * number of input tokens + Output price * number of output tokens

Price formula for the GPTMeta API

Model multiplier * (number of prompt tokens + completion multiplier * number of completion tokens)

= model multiplier * number of prompt tokens + model multiplier * completion multiplier * number of completion tokens

= (input price / base price) * number of input tokens + (output price / base price) * number of output tokens

= (input price * number of input tokens + output price * number of output tokens) / base price

As you can see, the two formulas differ only by a fixed factor, the base price; the backend simply multiplies the result by the base price to recover the official cost.
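
As a sanity check, here is a small Python sketch using the gpt-3.5-turbo-1106 numbers above; the token counts are example values, and it shows the multiplier-based quota times the base price reproducing the official cost:

# Sketch of the quota calculation for gpt-3.5-turbo-1106 (values from above).
BASE_PRICE   = 0.002   # $ per 1k tokens
INPUT_PRICE  = 0.001   # $ per 1k tokens
OUTPUT_PRICE = 0.002   # $ per 1k tokens

model_multiplier      = INPUT_PRICE / BASE_PRICE    # 0.5
completion_multiplier = OUTPUT_PRICE / INPUT_PRICE  # 2.0

prompt_tokens, completion_tokens = 3000, 1000       # example token counts

# Official cost, in USD
official = (INPUT_PRICE * prompt_tokens + OUTPUT_PRICE * completion_tokens) / 1000

# GPTMeta formula: quota in base-price units, then converted back to USD
quota = model_multiplier * (prompt_tokens + completion_multiplier * completion_tokens)
gptmeta = quota * BASE_PRICE / 1000

print(official, gptmeta)  # both 0.005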

 

What top-up exchange rates are available, how they differ, and how to get a 1:1 ratio

Default user exchange rate (¥:$) is 2:1: for top-up amounts below $100, the rate is 2:1, i.e. ¥40 corresponds to $20.

Cooperative user exchange rate (¥:$) is 1.5:1: for top-ups above $100 and under $500, the rate is 1.5:1, i.e. ¥300 corresponds to $200.

Corporate SVIP user exchange rate (¥:$) is 1:1: for top-up amounts above $500, the rate is 1:1, i.e. ¥1000 corresponds to $1000.

The Corporate Partner (SVIP) tier exists to prevent misuse of resources and ensure that the majority of users can use the site properly, while encouraging users to top up their accounts. If you are satisfied with our service, please support us. By default your user level is "default"; when your accumulated top-up reaches $500 (¥:$ = 1:1) or more, you can contact the administrator to upgrade to cooperative user status and enjoy the 1 CNY : 1 USD top-up ratio. Cooperative users also get priority access to the newest AI models, priority resource protection, and more relaxed API rate limits!
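
For illustration, a minimal Python sketch of the tiered rates described above; the exact boundary handling at $100 and $500 is an assumption:

def cny_for_topup(usd: float) -> float:
    """Convert a USD top-up amount to CNY using the tiers above (assumed boundaries)."""
    if usd < 100:
        rate = 2.0   # default user
    elif usd < 500:
        rate = 1.5   # cooperative user
    else:
        rate = 1.0   # corporate SVIP
    return usd * rate

print(cny_for_topup(20))    # 40.0   -> ¥40 corresponds to $20
print(cny_for_topup(200))   # 300.0  -> ¥300 corresponds to $200
print(cny_for_topup(1000))  # 1000.0 -> ¥1000 corresponds to $1000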

Is an even lower top-up rate possible?

For large volumes, yes, depending on your daily consumption; contact WeChat f15303420735 for details.

How to recharge

If you are topping up for the first time after registering and the amount is under $100, you stay in the default group; simply log into the console and top up online (¥:$ = 2:1).

If you need a large amount of credit, after registering and logging into the console, please contact customer service to purchase (credit will be issued as a redemption code). Customer service will then adjust your user group to VIP (¥:$ = 1.5:1) or SVIP (¥:$ = 1:1), after which you can top up online.

In what scenarios can it be used?

Any client or application that supports OpenAI's official API calling format will work.

Example

# BASE_URL: replace https://api.openai.com with your proxy domain, e.g. https://www.blueshirtmap.com
# OPENAI_API_KEY: the API key you created in the GPTMeta API console
# model: the model name you want to call (see the pricing table above)
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ]
  }'
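
The same call can be made from the official OpenAI Python SDK. A minimal sketch, assuming an openai>=1.0 client, a proxy exposing the standard /v1 endpoints, and your GPTMeta key stored in the OPENAI_API_KEY environment variable:

# Minimal sketch using the official OpenAI Python SDK (openai>=1.0).
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://www.blueshirtmap.com/v1",  # your proxy domain + /v1
    api_key=os.environ["OPENAI_API_KEY"],        # key from the GPTMeta API console
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any model from the pricing table above
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)
print(response.choices[0].message.content)
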
How to Distribute Products

Partner users (VIP) can apply to become resellers. To make it easier for resellers to distribute and sell keys, users visiting the reseller's domain will see a page for checking the key balance, and all API calls will also go through that proxy domain.

Can I get an invoice?

Invoicing is supported; please contact customer service. The top-up amount goes to your account balance; if you need an invoice, an additional 5% tax surcharge applies.


Copyright © 2021-2024 GPTMeta API. All Rights Reserved.