Flutter Gemma #
Gemma is a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models.



Bring the power of Google's lightweight Gemma language models directly to your Flutter applications. With Flutter Gemma, you can seamlessly incorporate advanced AI capabilities into your iOS and Android apps, all without relying on external servers.
An example of the plugin in use is shown in the example app.

Features #

Local Execution: Run Gemma models directly on user devices for enhanced privacy and offline functionality.
Platform Support: Compatible with both iOS and Android platforms.
Ease of Use: Simple interface for integrating Gemma models into your Flutter projects.

Installation #


Add flutter_gemma to your pubspec.yaml:
dependencies:
  flutter_gemma: latest_version


Run flutter pub get to install.
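Then import the package where you use it (the import path below follows the standard package naming convention):

import 'package:flutter_gemma/flutter_gemma.dart';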


Setup #

Download Model: Obtain a pre-trained Gemma model (recommended: 2b or 2b-it) from Kaggle.

Optionally, fine-tune a model for your specific use case.


Rename Model: Rename the downloaded file to model.bin.
Integrate Model into Your App:

iOS

Enable file sharing in info.plist:

<key>UIFileSharingEnabled</key>
<true/>

Change the linking type of pods to static: replace use_frameworks! in the Podfile with use_frameworks! :linkage => :static
Transfer model.bin to your device

Connect your iPhone.
Open Finder; your iPhone should appear in the Finder sidebar under "Locations." Click on it.
Access Files: in the button bar, click "Files" to see the apps that can transfer files between your iPhone and Mac.
Drag and Drop or Add Files: drag model.bin directly onto an app under the "Files" section to transfer it. Alternatively, click the "Add" button to browse and select model.bin to upload.



Android

Transfer model.bin to your device (for testing purposes only; network upload will be implemented in future versions)

Install the adb tool if you haven't installed it already
Connect your Android device
Copy model.bin to the output_path folder
Push the content of the output_path folder to the Android device



adb shell rm -r /data/local/tmp/llm/ # Remove any previously loaded models
adb shell mkdir -p /data/local/tmp/llm/
adb push output_path /data/local/tmp/llm/model.bin

If you want to use the GPU to run the model, you need to add OpenCL support in AndroidManifest.xml. If you plan to use only the CPU, you can skip this step.

Add the following to AndroidManifest.xml, just above the closing </application> tag:



<uses-native-library android:name="libOpenCL.so" android:required="false"/>
<uses-native-library android:name="libOpenCL-car.so" android:required="false"/>
<uses-native-library android:name="libOpenCL-pixel.so" android:required="false"/>
Web


The web platform currently works only with GPU backend models; CPU backend models are not supported by MediaPipe yet.


Add the dependencies to the index.html file in the web folder:


<script type="module">
import { FilesetResolver, LlmInference } from 'https://cdn.jsdelivr.net/npm/@mediapipe/tasks-genai';
window.FilesetResolver = FilesetResolver;
window.LlmInference = LlmInference;
</script>

Copy model.bin to your web folder

Usage #

Initialize:

void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  await FlutterGemmaPlugin.instance.init(
    maxTokens: 512,   // maxTokens is optional, by default the value is 1024
    temperature: 1.0, // temperature is optional, by default the value is 1.0
    topK: 1,          // topK is optional, by default the value is 1
    randomSeed: 1,    // randomSeed is optional, by default the value is 1
  );

  runApp(const MyApp());
}
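Initialization depends on model.bin being available on the device; a minimal sketch of guarding the call, assuming init throws an exception when the model file is missing or cannot be loaded:

void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  try {
    await FlutterGemmaPlugin.instance.init(maxTokens: 512);
  } catch (e) {
    // Assumption: init throws when model.bin is missing or incompatible.
    print('Gemma initialization failed: $e');
  }
  runApp(const MyApp());
}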

Generate response

final flutterGemma = FlutterGemmaPlugin.instance;
String response = await flutterGemma.getResponse(prompt: 'Tell me something interesting');
print(response);

Generate response as a stream

final flutterGemma = FlutterGemmaPlugin.instance;
flutterGemma.getAsyncResponse(prompt: 'Tell me something interesting').listen((String? token) => print(token));
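If you need the full text rather than individual tokens, you can also collect the stream (a minimal sketch, assuming the stream emits partial tokens and closes once generation finishes):

final flutterGemma = FlutterGemmaPlugin.instance;
final buffer = StringBuffer();
await for (final token in flutterGemma.getAsyncResponse(prompt: 'Tell me something interesting')) {
  if (token != null) buffer.write(token); // ignore a possible null terminator token
}
print(buffer.toString());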

Generate chat response. This method works properly only for instruction-tuned models.

final flutterGemma = FlutterGemmaPlugin.instance;
final messages = <Message>[];
messages.add(Message(text: 'Who are you?', isUser: true));
String response = await flutterGemma.getChatResponse(messages: messages);
print(response);
messages.add(Message(text: response));
messages.add(Message(text: 'Really?', isUser: true));
response = await flutterGemma.getChatResponse(messages: messages);
print(response);
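A small helper can keep the history in sync between turns (a minimal sketch; sendChatMessage is a hypothetical name, and it assumes getChatResponse expects the full message history on each call, as in the example above):

Future<String> sendChatMessage(List<Message> history, String text) async {
  // Add the user turn, ask the model, then store the reply for the next turn.
  history.add(Message(text: text, isUser: true));
  final reply = await FlutterGemmaPlugin.instance.getChatResponse(messages: history);
  history.add(Message(text: reply));
  return reply;
}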

Generate chat response as a stream. This method works properly only for instruction-tuned models.

final flutterGemma = FlutterGemmaPlugin.instance;
final messages = <Message>[];
messages.add(Message(text: 'Who are you?', isUser: true));
flutterGemma.getAsyncChatResponse(messages: messages).listen((String? token) => print(token));
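To keep a streamed reply in the conversation history, you can buffer the tokens and append the full text once the stream closes (a minimal sketch, assuming the stream completes when the reply is finished):

final buffer = StringBuffer();
flutterGemma.getAsyncChatResponse(messages: messages).listen(
  (String? token) {
    if (token != null) buffer.write(token);
  },
  onDone: () => messages.add(Message(text: buffer.toString())), // store the model reply
);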
You can find the full example in the example folder.
Important Considerations #

Currently, models must be manually transferred to devices for testing. Network download functionality will be included in future versions.
Larger models (like 7b and 7b-it) may be too resource-intensive for on-device use.

Coming Soon #

Network-based model downloading for seamless updates.
