
Documentation and examples how to use Vercel AI in React #1650

Closed
ejgutierrez74 opened this issue May 19, 2024 · 3 comments

Comments

@ejgutierrez74

Feature Description

It would be nice to have a tutorial with examples of how to use the Vercel AI SDK in React, with both streaming and "normal" requests. I've seen examples in Next.js but not in React.

Use Case

Examples using Vercel AI in React, connecting to different LLMs

Additional context

No response

@MaxLeiter (Member) commented May 22, 2024

You generally shouldn't use React without a framework, but very little of useChat and useCompletion should be Next.js-specific (minus the boilerplate like an API route). Is there a specific feature you'd like to see or a problem you ran into?
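
For example, useChat can run inside a plain React (e.g. Vite) component; the only framework-specific piece is the backend route it talks to. A minimal sketch, assuming a separate server exposing a streaming POST /api/chat endpoint (the URL here is an assumption, not something the SDK provides):

// Chat.jsx — client side only; pairs with whatever backend route you stand up.
import { useChat } from 'ai/react';

export default function Chat() {
  // Point the hook at the (hypothetical) backend endpoint instead of relying
  // on the default same-origin /api/chat route a Next.js app would provide.
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: 'http://localhost:3000/api/chat',
  });

  return (
    <form onSubmit={handleSubmit}>
      {messages.map((m) => (
        <div key={m.id}>
          {m.role}: {m.content}
        </div>
      ))}
      <input value={input} onChange={handleInputChange} placeholder="Say something..." />
    </form>
  );
}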

@ejgutierrez74 (Author)

Well, the title says it all: I'd need an example or tutorial of how to use the Vercel AI SDK (ideally with two different LLM providers, say Ollama and OpenAI). I use React Chatbotify as the component renderer and call the respective APIs to connect and get the LLM's response. So I don't know how to set up a connection: which options to use, whether to read the streamed results or wait until all the information is received...

This is an example of connection to Gemini:

// useGeminiBot.js
// The useGeminiBot hook encapsulates the logic for interacting with the Gemini API.
// It handles chat-session initialization and message sending.

import { useState, useEffect } from 'react';
import { GoogleGenerativeAI, HarmCategory, HarmBlockThreshold } from "@google/generative-ai";

const useGeminiBot = (modelName = "gemini-1.5-pro-latest") => {

  const apiKey = import.meta.env.VITE_REACT_APP_API_KEY_GEMINI;

  // useState initializes chatSession as null.
  const [chatSession, setChatSession] = useState(null);

 /* useEffect:

    Initializes a GoogleGenerativeAI instance with the API key.
    Configures the generative model and the generation and safety settings.
    Starts a new chat session and stores it in the chatSession state. */

  useEffect(() => {
    const genAI = new GoogleGenerativeAI(apiKey);
    const model = genAI.getGenerativeModel({ model: modelName });

    const generationConfig = {
      temperature: 1,
      topP: 0.95,
      topK: 64,
      maxOutputTokens: 8192,
      responseMimeType: "text/plain",
    };

    const safetySettings = [
      { category: HarmCategory.HARM_CATEGORY_HARASSMENT, threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE },
      { category: HarmCategory.HARM_CATEGORY_HATE_SPEECH, threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE },
      { category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT, threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE },
      { category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT, threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE },
    ];

    const newChatSession = model.startChat({ generationConfig, safetySettings, history: [] });
    setChatSession(newChatSession);

    // Dependency list: [apiKey] would re-run the effect whenever the API key changes
    // (in principle unnecessary, since the key only changes if you edit the .env file).
    // If you know the key won't change, an empty list runs the effect once on mount.
  }, []);

  const sendMessage = async (userInput, streamMessage) => {
    
    if (!chatSession) return await streamMessage("Error: chat session not initialized.");
    try {
      const result = await chatSession.sendMessageStream(userInput);
      let partialResponse = '';
      for await (const chunk of result.stream) {
        const chunkText = chunk.text();
        partialResponse += chunkText;
        await streamMessage(partialResponse);
      }
    } catch (err) {
      console.error(err);
      await streamMessage("Connection error. Did you provide a valid API key?");
    }
  };

  return { sendMessage };
};

export default useGeminiBot;

I also have different providers as custom hooks, so I can manage them and keep the rendering separate from the logic:

// useChatBotProvider.js
import useGeminiBot from './useGeminiBot';
import useOllamaBot from './useOllamaBot';

const useChatBotProvider = (provider, model) => {
  const geminiBot = useGeminiBot(model);
  const ollamaBot = useOllamaBot(model);

  const sendMessage = async (userInput, streamMessage) => {
    if (provider === 'gemini') {
      await geminiBot.sendMessage(userInput, streamMessage);
    } else if (provider === 'ollama') {
      await ollamaBot.sendMessage(userInput, streamMessage);
    }
    // Add logic for other providers here as needed
  };

  let messageHistory = [];
  let updateMessages = () => {};

  if (provider === 'ollama') {
    messageHistory = ollamaBot.messageHistory;
    updateMessages = ollamaBot.updateMessages;
  }

  return { sendMessage, messageHistory, updateMessages };
};

export default useChatBotProvider;
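
For reference, a component consumes this hook roughly like so (a sketch; setBotReply is a placeholder for however React Chatbotify pushes partial replies into the UI):

// Hypothetical call site inside a chat component:
const { sendMessage } = useChatBotProvider('gemini', 'gemini-1.5-pro-latest');

// Later, in the submit handler:
const onUserMessage = async (userInput) => {
  await sendMessage(userInput, async (partial) => setBotReply(partial));
};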

I use custom hooks; you can read the code here: https://github.com/ejgutierrez74/workshop-react-eduardo
I think the Vercel AI SDK already does this, probably more efficiently and with better error handling, but I don't know how to use it...

Thanks

@dayos-rohit commented Jun 10, 2024

hey @MaxLeiter, I would also appreciate a simple example of React with the Vercel AI SDK if possible. I would also like to know the specific limitations of using the AI SDK with React rather than Next.js.
