
BrowseWiz - your AI Assistant, Summarizer and Writer

Free

Best option to try it out.

$0/month
  • Chat with basic AI models
  • Ask the page
  • Summarize videos
  • Write anything
  • Talk with PDF
  • 50,000 daily credits for basic AI models

Pro

Best for regular users.

$14.90/month
  • Chat with basic and advanced AI models
  • Ask the page
  • Summarize videos
  • Write anything
  • Talk with PDF
  • 6,000,000 monthly credits for basic AI models
  • 1,000,000 monthly credits for advanced AI models

What is the value of credit?

Large Language Model (LLM) services used in the BrowseWiz application measure text length in tokens. One token is usually around 4 characters (but in rare cases, it can be as low as 1 character).

The credit cost of sending a message to an LLM service depends on the quantity of input and output tokens, and it is calculated with the following formula:

\(input\_tokens \times input\_token\_value + output\_tokens \times output\_token\_value\)

The table below shows input and output token values for each LLM model.

| Model Name | Model Type | Input Token Value | Output Token Value | Input Token Limit | Output Token Limit |
|---|---|---|---|---|---|
| gemini-1.5-flash | Basic | 1 | 4 | 1,000,000 | 8,192 |
| gemini-1.5-pro | Advanced | 1 | 4 | 1,000,000 | 8,192 |
| gpt-4o-mini | Basic | 1 | 4 | 128,000 | 4,096 |
| gpt-4o | Advanced | 1 | 4 | 128,000 | 4,096 |
| o1-mini | Advanced | 1 | 4 | 128,000 | 64,000 |
| o1-preview | Advanced | 5 | 4 | 128,000 | 32,000 |
| claude-3.5-sonnet | Advanced | 1 | 5 | 200,000 | 8,192 |
| llama-3.1-405b | Advanced | 1 | 1 | 128,000 | 4,096 |
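The formula and the table values above can be combined into a simple calculator. The following Python sketch is illustrative only; the dictionary and function names are not part of the BrowseWiz application:

```python
# Illustrative credit calculator based on the formula and table above.
# Per-model token values: (input_token_value, output_token_value).
TOKEN_VALUES = {
    "gemini-1.5-flash": (1, 4),
    "gemini-1.5-pro": (1, 4),
    "gpt-4o-mini": (1, 4),
    "gpt-4o": (1, 4),
    "o1-mini": (1, 4),
    "o1-preview": (5, 4),
    "claude-3.5-sonnet": (1, 5),
    "llama-3.1-405b": (1, 1),
}

def credit_cost(model: str, input_tokens: int, output_tokens: int) -> int:
    """Credits charged for one message:
    input_tokens * input_token_value + output_tokens * output_token_value."""
    in_value, out_value = TOKEN_VALUES[model]
    return input_tokens * in_value + output_tokens * out_value

print(credit_cost("claude-3.5-sonnet", 10_000, 1_000))  # 15000
```

For example, a message with 10,000 input tokens and 1,000 output tokens on claude-3.5-sonnet costs 10,000 × 1 + 1,000 × 5 = 15,000 credits.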

The input tokens sent with a message include the chosen context (active page text, video transcript, attached file or URL), the additional instructions determined by the chat mode (assistant, summarizer, writer - no more than 500 tokens), the message entered by the user, and all historical messages displayed in the chat.

Example calculation

You are watching a ~20-minute commentary video on YouTube. The transcript has around 30,000 characters. Assuming an average of 4 characters per token, that is roughly 7,500 input tokens.

The response text has around 2,000 characters, which is roughly 500 output tokens.

You are using the gemini-1.5-flash model, so the calculation is as follows:

\(\approx 7500 \times 1 + 500 \times 4 = 9500 \text{ credits}\)

After each query, you can check the number of tokens you used under ⚙️ > "General settings" in the application.

Before sending a query, you can also check the estimated token count of a text snippet at the link (this applies specifically to OpenAI models).