tokencost #
Overview #
Client-side token counting + price estimation for LLM apps and AI agents.
tokencost helps calculate the USD cost of using major Large Language Model (LLM) APIs by estimating the cost of prompts and completions.
Ported from the Python tokencost package; see AgentOps-AI/tokencost.
Features #
LLM price tracking: major LLM providers frequently add new models and update pricing; this repo helps track the latest price changes.
Token counting: accurately count prompt tokens before sending OpenAI requests.
Easy integration: get the cost of a prompt or completion with a single function.
Example usage #
import 'package:tokencost/tokencost.dart';

void main() {
  const model = 'gpt-3.5-turbo';
  const prompt = [
    {
      'role': 'user',
      'content': 'Hello world',
    },
  ];
  const completion = 'How may I assist you today?';

  final promptCost = calculatePromptCost(prompt, model);
  final completionCost = calculateCompletionCost(completion, model);
  print('$promptCost + $completionCost = ${promptCost + completionCost}');
  // $0.00001350 + $0.00001400 = $0.00002750
}
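Under the hood, these figures are just token counts multiplied by a per-token price. Here is a minimal sketch of that arithmetic, assuming gpt-3.5-turbo's prices of $0.0015 per 1K prompt tokens and $0.002 per 1K completion tokens (prices change; tokencost tracks the current table so you don't have to):
// A sketch of the arithmetic only, not tokencost's actual implementation.
void main() {
  const promptTokens = 9; // countMessageTokens of the prompt above
  const completionTokens = 7; // countStringTokens of the completion
  final promptCost = promptTokens * 0.0015 / 1000; // 0.0000135 USD
  final completionCost = completionTokens * 0.002 / 1000; // 0.0000140 USD
  print(promptCost + completionCost); // 2.75e-5
}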
Installation 💻 #
❗ In order to start using tokencost you must have the Dart SDK installed on your machine.
Install via dart pub add:
dart pub add tokencost
Usage #
Cost estimates #
Calculating the cost of prompts and completions from OpenAI requests:
import 'package:dart_openai/dart_openai.dart';
import 'package:tokencost/tokencost.dart';

Future<void> main() async {
  // Assumes OpenAI.apiKey has been set beforehand.
  const model = 'gpt-3.5-turbo';
  const prompt = [
    {'role': 'user', 'content': 'Say this is a test'}
  ];

  final chatCompletion = await OpenAI.instance.chat.create(
    model: model,
    messages: [
      OpenAIChatCompletionChoiceMessageModel(
        role: OpenAIChatMessageRole.user,
        content: [
          OpenAIChatCompletionChoiceMessageContentItemModel.text('Say this is a test'),
        ],
      ),
    ],
  );
  final completion = chatCompletion.choices.first.message.content!.first.text!;
  // This is a test.

  final promptCost = calculatePromptCost(prompt, model);
  final completionCost = calculateCompletionCost(completion, model);
  print('$promptCost + $completionCost = ${promptCost + completionCost}');
  // $0.00001800 + $0.00001000 = $0.00002800
  print('Cost USD: ${promptCost + completionCost}');
  // Cost USD: $2.8e-05
}
Calculating cost using string prompts instead of messages:
const promptString = 'Hello world';
const response = 'How may I assist you today?';
const model = 'gpt-3.5-turbo';
final promptCost = calculatePromptCost(promptString, model);
print('Cost: $promptCost');
// Cost: $3e-06
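For reference, 'Hello world' tokenizes to 2 tokens (see Counting tokens below), so at gpt-3.5-turbo's assumed rate of $0.0015 per 1K prompt tokens the cost works out to 2 × $0.0000015 = $0.000003, matching the output above.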
Counting tokens #
import 'package:tokencost/tokencost.dart';
const messagePrompt = [{'role': 'user', 'content': 'Hello world'}];
// Counting tokens in prompts formatted as message lists
print(countMessageTokens(messagePrompt, 'gpt-3.5-turbo'));
// 9
// Alternatively, counting tokens in string prompts
print(countStringTokens('Hello world', 'gpt-3.5-turbo'));
// 2
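The message count is higher than the raw string count because chat-formatted prompts carry scaffolding tokens on top of their content. Following OpenAI's published counting scheme for gpt-3.5-turbo, each message adds a fixed overhead of 3 tokens plus 1 for the role, and every request is primed with 3 more for the reply: 3 + 1 + 2 + 3 = 9.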
Continuous Integration 🤖 #
tokencost comes with a built-in GitHub Actions workflow powered by Very Good Workflows, but you can also add your preferred CI/CD solution.
Out of the box, on each pull request and push, the CI formats, lints, and tests the code. This ensures the code remains consistent and behaves correctly as you add functionality or make changes. The project uses Very Good Analysis for a strict set of analysis options, and code coverage is enforced using Very Good Workflows.
Running Tests 🧪 #
To run all unit tests:
dart pub global activate coverage 1.2.0
dart test --coverage=coverage
dart pub global run coverage:format_coverage --lcov --in=coverage --out=coverage/lcov.info
To view the generated coverage report you can use lcov.
# Generate Coverage Report
genhtml coverage/lcov.info -o coverage/
# Open Coverage Report
open coverage/index.html
Contributing #
Contributions to TokenCost are welcome! Feel free to create an issue for any bug reports, complaints, or feature suggestions.
License #
TokenCost is released under the MIT License.