Title: | Interface for 'Google Gemini' API |
---|---|
Description: | Provides a comprehensive interface for Google Gemini API, enabling users to access and utilize Gemini Large Language Model (LLM) functionalities directly from R. This package facilitates seamless integration with Google Gemini, allowing for advanced language processing, text generation, and other AI-driven capabilities within the R environment. For more information, please visit <https://ai.google.dev/docs/gemini_api_overview>. |
Authors: | Jinhwan Kim [aut, cre, cph], Maciej Nasinski [ctb] |
Maintainer: | Jinhwan Kim <[email protected]> |
License: | MIT + file LICENSE |
Version: | 0.7.0 |
Built: | 2024-12-25 16:22:35 UTC |
Source: | https://github.com/jhk0530/gemini.R |
Add history for chatting context
addHistory(history, role = NULL, item = NULL)
history | The chat history to append to |
role | The role of the chat turn: "user" or "model" |
item | The content of the chat turn: the prompt or the model output |
The updated chat history
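The helper above can be sketched in use as follows. This is a not-run sketch: the exact internal structure of the history list is managed by addHistory() and is not documented here.

```r
## Not run:
library(gemini.R)

# Start with an empty history, then append a user turn and a model turn.
history <- list()
history <- addHistory(history, role = "user", item = "Hello, who are you?")
history <- addHistory(history, role = "model", item = "I am a large language model.")

# The accumulated history can then be passed to gemini_chat().
## End(Not run)
```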
Generate text from text with Gemini
gemini(prompt, model = "1.5-flash", temperature = 0.5, maxOutputTokens = 1024)
prompt | The prompt to generate text from |
model | The model to use. Options are '1.5-flash', '1.5-pro', '1.0-pro', and '2.0-flash-exp'. Default is '1.5-flash'. See https://ai.google.dev/gemini-api/docs/models/gemini |
temperature | The temperature to use. Default is 0.5; the value should be between 0 and 2. See https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
maxOutputTokens | The maximum number of tokens to generate. Default is 1024; 100 tokens correspond to roughly 60-80 words. |
Generated text
https://ai.google.dev/docs/gemini_api_overview#text_input
## Not run:
library(gemini.R)
setAPI("YOUR_API_KEY")
gemini("Explain dplyr's mutate function")
## End(Not run)
This function sends audio to the Gemini API and returns a text description.
gemini_audio(
  audio = NULL,
  prompt = "Describe this audio",
  model = "1.5-flash",
  temperature = 0.5,
  maxOutputTokens = 1024
)
audio | Path to the audio file (default: uses a sample file). Must be an MP3. |
prompt | A string describing what to do with the audio. |
model | The Gemini model to use: "1.5-flash", "1.5-pro", or "2.0-flash-exp". Defaults to "1.5-flash". |
temperature | Controls the randomness of the generated text (0-2). Defaults to 0.5. |
maxOutputTokens | The maximum number of tokens in the generated text. Defaults to 1024. |
A character vector containing the Gemini API's response.
## Not run:
library(gemini.R)
setAPI("YOUR_API_KEY")
gemini_audio(audio = system.file("docs/reference/helloworld.mp3", package = "gemini.R"))
## End(Not run)
Generate text in a multi-turn chat with Gemini
gemini_chat(
  prompt,
  history = list(),
  model = "1.5-flash",
  temperature = 0.5,
  maxOutputTokens = 1024
)
prompt | The prompt to generate text from |
history | A history object to keep track of the conversation |
model | The model to use. Options are '1.5-flash', '1.5-pro', '1.0-pro', and '2.0-flash-exp'. Default is '1.5-flash'. See https://ai.google.dev/gemini-api/docs/models/gemini |
temperature | The temperature to use. Default is 0.5; the value should be between 0 and 2. See https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
maxOutputTokens | The maximum number of tokens to generate. Default is 1024; 100 tokens correspond to roughly 60-80 words. |
Generated text, along with the updated conversation history
https://ai.google.dev/docs/gemini_api_overview#chat
## Not run:
library(gemini.R)
setAPI("YOUR_API_KEY")

chats <- gemini_chat("Pretend you're a snowman and stay in character for each")
print(chats$outputs)

chats <- gemini_chat("What's your favorite season of the year?", chats$history)
print(chats$outputs)

chats <- gemini_chat("How do you think about summer?", chats$history)
print(chats$outputs)
## End(Not run)
Generate text from text and image with Gemini
gemini_image(
  image = NULL,
  prompt = "Explain this image",
  model = "1.5-flash",
  temperature = 0.5,
  maxOutputTokens = 1024,
  type = "png"
)
image | The image to generate text from |
prompt | The prompt to generate text. Default is "Explain this image" |
model | The model to use. Options are '1.5-flash', '1.5-pro', and '2.0-flash-exp'. Default is '1.5-flash'. See https://ai.google.dev/gemini-api/docs/models/gemini |
temperature | The temperature to use. Default is 0.5; the value should be between 0 and 2. See https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
maxOutputTokens | The maximum number of tokens to generate. Default is 1024; 100 tokens correspond to roughly 60-80 words. |
type | The type of image. Options are 'png', 'jpeg', 'webp', 'heic', and 'heif'. Default is 'png' |
Generated text
https://ai.google.dev/docs/gemini_api_overview#text_image_input
## Not run:
library(gemini.R)
setAPI("YOUR_API_KEY")
gemini_image(image = system.file("docs/reference/figures/image.png", package = "gemini.R"))
## End(Not run)
Generates Roxygen2 documentation for an R function based on the currently selected code.
gen_docs(prompt = NULL)
prompt | A character string specifying additional instructions for the LLM. Defaults to a prompt requesting Roxygen2 documentation without the original code. |
A character string containing the generated Roxygen2 documentation.
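A not-run sketch of calling gen_docs(): it operates on the code currently selected in the RStudio source editor, so it takes no code argument. The custom prompt shown is illustrative, not a documented default.

```r
## Not run:
library(gemini.R)
setAPI("YOUR_API_KEY")

# With an R function selected in the RStudio source editor:
gen_docs()

# Or with additional instructions for the model:
gen_docs("Write Roxygen2 documentation and include an @examples section")
## End(Not run)
```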
Generates unit test code for an R function.
gen_tests(prompt = NULL)
prompt | A character string specifying the prompt for the Gemini model. If NULL, a default prompt is used. |
A character string containing the generated unit test code.
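As with gen_docs(), a not-run sketch: gen_tests() works on the code currently selected in the RStudio source editor. The custom prompt shown is illustrative, not a documented default.

```r
## Not run:
library(gemini.R)
setAPI("YOUR_API_KEY")

# With an R function selected in the RStudio source editor:
gen_tests()

# Or with a custom prompt:
gen_tests("Write testthat unit tests covering edge cases")
## End(Not run)
```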