
Artificial Intelligence - Generative AI

Oodles’ Generative AI Development Solutions enable you to scale your professional capabilities, accelerate creative potential, and consistently deliver high-quality outcomes without compromise, empowering your business to stay ahead of the curve. Powered by advanced AI models such as Stable Diffusion, ChatGPT, and Llama, our custom solutions drive intelligent automation, enhance personalized content generation, and provide real-time data insights, helping you meet rising customer expectations in today’s increasingly demanding business ecosystem.

Transformative Projects


Solution Shorts

Case Studies


Opporture


Bedtime Stories


Shoorah | Mental Health Bot


RAG (Retrieval Augmented Generation) Bot


Bedtime Stories: Personalized Tales for Kids by Oodles AI


Doctor AI


Accusaga

Top Blog Posts
Building a Chatbot with Mistral AI: Step-by-Step Guide

Prerequisites
Before we get started, you'll need the following:
1. A basic understanding of JavaScript and Node.js.
2. Node.js installed on your machine.
3. Familiarity with RESTful APIs.
4. Access to Mistral AI API credentials.

Step 1: Setting Up Your Node.js Project
To begin, create a new Node.js project. Navigate to your workspace in a terminal and run the following commands:

mkdir mistral-chatbot
cd mistral-chatbot
npm init -y

This creates a new directory named mistral-chatbot and initializes a Node.js project with the default settings.

Step 2: Installing Dependencies
Next, install the dependencies the project needs. In this tutorial we use Axios to make HTTP requests and dotenv to load environment variables from the .env file:

npm install axios dotenv

Step 3: Environment Setup
a. Create a file named .env in the root of your project to store environment variables.
b. Add the following variables to the .env file, replacing the placeholders with your actual Mistral API URL and API key:

MISTRAL_API_URL=YOUR_MISTRAL_API_URL
MISTRAL_API_KEY=YOUR_MISTRAL_API_KEY

Step 4: Writing the Chatbot Logic
Now, let's write the logic for our chatbot. Create a new file named chatbot.js in your project directory and add the following code:

require('dotenv').config();
const axios = require('axios');

const MistralChatbot = async (req, res) => {
  try {
    const { message, mistralChatHistory, conversationId } = req.body;
    const history = JSON.parse(mistralChatHistory);

    // Optionally append response-formatting instructions to the last message.
    const format = ''; // e.g. 'Respond in plain text.'
    history[history.length - 1].content += format;

    // Prepare data for the Mistral API request.
    const requestData = {
      model: 'mistral-tiny',
      messages: history,
      temperature: 0,
    };

    // Send the request to the Mistral API.
    const response = await axios.post(process.env.MISTRAL_API_URL, requestData, {
      headers: {
        'Content-Type': 'application/json',
        'Accept': 'application/json',
        'Authorization': `Bearer ${process.env.MISTRAL_API_KEY}`,
      },
    });

    // Extract the assistant's reply and return it to the user.
    const data = response.data.choices[0].message.content;
    return res.status(200).json({ success: true, data });
  } catch (error) {
    console.log(error);
    return res.status(500).json({ success: false, error: 'Failed to process message' });
  }
};

module.exports = MistralChatbot;

Step 5: Testing Your Chatbot
You can now test your chatbot by sending messages to your application and observing the responses generated by Mistral AI. Make sure your Mistral API credentials are valid and that your application can reach the Mistral API endpoint.
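The guide stops short of showing the web server that exposes this handler or what a test request looks like. As a rough, hedged illustration only, the Python sketch below assumes the MistralChatbot handler has been mounted at POST /chat on a local Express app listening on port 3000; the route, port, and payload values are assumptions, not part of the original tutorial.

import json
import requests

# Assumed chat history in the role/content format the handler forwards to Mistral.
chat_history = [
    {"role": "user", "content": "What can you tell me about generative AI?"},
]

payload = {
    "message": chat_history[-1]["content"],
    "mistralChatHistory": json.dumps(chat_history),  # the handler JSON-parses this field
    "conversationId": "demo-1",                      # illustrative ID only
}

# Assumed endpoint: a local Express app exposing the handler at POST /chat on port 3000.
response = requests.post("http://localhost:3000/chat", json=payload, timeout=30)
print(response.json()["data"])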
Area Of Work: Chatbot, Generative AI | Industry: IT/Software
Chat with Your Own Documents Using PrivateGPT

PrivateGPT is a program that lets you ask questions about your documents without an internet connection, thanks to the power of local language models. It is completely private: no data leaves your execution environment at any stage. You can ingest documents and ask questions even while offline.

How PrivateGPT Works
The project is divided into two phases:
1. Ingest phase (ingest.py)
2. Query phase (privateGPT.py)

Ingest Phase (ingest.py)
The ingest script reads the data from the source_documents folder and splits it into overlapping chunks using the RecursiveCharacterTextSplitter class. It then embeds the text with HuggingFaceEmbeddings (SentenceTransformers) and the all-MiniLM-L6-v2 model.

Embedding refers to the process of converting text or words into numerical representations (vectors) that machine learning algorithms can understand and process. These embeddings are essential for many natural language processing tasks because they capture semantic relationships and contextual information between words, enabling machines to interpret the meaning of words and sentences in numerical form. Words with similar meanings or contexts are mapped close together in the vector space, which lets models leverage those similarities and make more informed decisions based on the context in which words appear. Overall, text embeddings bridge the gap between human language and machine learning algorithms, enabling computers to process and understand natural language more effectively.

After creating the embeddings, the script stores them in Chroma DB. Chroma is a powerful toolset that allows developers to store, index, and query embeddings effectively and efficiently, and to perform similarity searches on large-scale datasets.

Query Phase (privateGPT.py)
In this phase, the user's input serves as a query that is passed to the local language model (LLM) for processing. The LLM produces an answer based on context retrieved from the local vector database:

User input: The user provides a natural-language question.
LLM processing: The privateGPT.py script uses a local LLM (GPT4All-J or LlamaCpp) to process the question. The LLM has been pre-trained on a vast corpus of text and can understand natural language and generate relevant responses.
Context retrieval: Using RetrievalQA, the script performs a similarity search against the local vector database (Chroma) to retrieve relevant context from the ingested documents. The context consists of the documents most similar to the user's query.
Answer generation: The LLM generates an answer based on the user's question and the context retrieved from the vector database.
Displaying the answer: The script displays the generated answer along with the four source documents used as context, which helps the user understand how the LLM arrived at the response.

Summary
In summary, the user input is used as a query for the local LLM, and the LLM leverages the vector embeddings of the ingested documents (retrieved from the local vector database) to generate a relevant answer. The process does not directly create embeddings for the user input to answer from; rather, it uses the embeddings of the documents to contextualize the LLM's response to the user's query.
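To make the two phases concrete, here is a minimal, hedged sketch of the same ingest-then-query flow using current LangChain community packages; the file paths, model names, and chunk sizes are illustrative assumptions, and the real ingest.py and privateGPT.py differ in detail.

from langchain_community.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_community.llms import GPT4All
from langchain.chains import RetrievalQA

# Ingest phase: load a document, split it into overlapping chunks, embed, and persist.
docs = TextLoader("source_documents/example.txt").load()          # assumed sample file
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
db = Chroma.from_documents(chunks, embeddings, persist_directory="db")

# Query phase: retrieve the most similar chunks and let a local LLM answer from them.
llm = GPT4All(model="models/ggml-gpt4all-j-v1.3-groovy.bin")       # assumed local model path
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=db.as_retriever(search_kwargs={"k": 4}),             # four source documents, as described above
    return_source_documents=True,
)

result = qa.invoke({"query": "What does the document say about pricing?"})
print(result["result"])
for doc in result["source_documents"]:
    print("Source:", doc.metadata.get("source"))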
Area Of Work: Chatbot, Generative AI
Talk with Your Documents Locally Using LangChain and Llama 2

In recent years the AI industry has grown significantly, and it is still growing rapidly. It is slowly becoming an integral part of our daily lives, boosting productivity while cutting the time and effort required. Whether you need a professional instructor while cooking or want to automate large machines in industry, AI is there to help.

Suppose you have an exam coming up in a week and most of the topics are still left to prepare, or you just want to check whether an answer is correct. Riffling through hundreds or thousands of pages will clearly cost you precious time. Why not simply ask the question to your book and have it give you the answer? That sounds impossible, but not when AI is around. In a few minutes you will be able to do exactly that: we will create an AI tool that lets you feed in your document(s) and ask it questions.

In this post, we cover:
1. A brief overview of Llama 2
2. A brief overview of LangChain
3. A brief overview of Hugging Face
4. How to ingest your local documents
5. How to use Llama 2 to fetch answers from the uploaded documents

1. A brief overview of Llama 2
Llama 2 is an open-source large language model (LLM) provided by Meta for research and commercial use. It comes in three variants:
- Llama 2 7B
- Llama 2 13B
- Llama 2 70B

2. A brief overview of LangChain
LangChain is a software framework for large language models (LLMs) designed to simplify the creation of applications that use AI and LLMs.

3. A brief overview of Hugging Face
Hugging Face is a platform where the machine learning community collaborates on models, datasets, and applications. We are using it to download the Llama-2 7B chat model.

4. How to ingest your local documents
Ingesting documents involves the following steps.

Split the text into chunks. Chunking is the process of breaking large pieces of text into smaller segments:

# splitters for plain text and Python files
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
python_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.PYTHON, chunk_size=880, chunk_overlap=200
)

# create chunks of text
texts = text_splitter.split_documents(text_documents)

Create embeddings. An embedding is a low-dimensional space into which high-dimensional vectors can be translated. Ideally, an embedding captures some of the input's semantics by clustering semantically similar inputs together in the embedding space. Embeddings can be learned and reused across models.

# create embeddings
embeddings = HuggingFaceInstructEmbeddings(
    model_name=EMBEDDING_MODEL_NAME,
    model_kwargs={"device": device_type},
)

Store the embeddings. We are using ChromaDB to store the embeddings:

db = Chroma.from_documents(
    texts,
    embeddings,
    persist_directory=PERSIST_DIRECTORY,
    client_settings=CHROMA_SETTINGS,
)

5. Use Llama 2 to fetch answers from your documents
To run question answering over our documents, we follow these steps.

Load the vectors. Vectors represent text or data in a numerical form that the model can understand and process; this representation is known as an embedding.

# load the vector store
db = Chroma(
    embedding_function=embeddings,
    persist_directory=PERSIST_DIRECTORY,
)
retriever = db.as_retriever()

Create the prompt template:

template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, \
just say that you don't know, don't try to make up an answer.

{context}

{history}
Question: {question}
Helpful Answer:"""

prompt = PromptTemplate(
    input_variables=["history", "context", "question"],
    template=template,
)

Perform QA:

qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",  # chain_type="refine" is an alternative
    retriever=retriever,
    return_source_documents=True,
    chain_type_kwargs={"prompt": prompt, "memory": memory},
)

To perform interactive QA, loop the process:

# interactive questions and answers
while True:
    query = input("\nEnter a query: ")
    if query == "exit":
        break

    # get the answer from the chain
    res = qa(query)
    answer, docs = res["result"], res["source_documents"]

    # print the result
    print("\n\n> Question:")
    print(query)
    print("\n> Answer:")
    print(answer)
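The snippets above reference llm and memory without showing how they are built. As a hedged sketch only, one way to create them is to load the Llama-2 7B chat model through Hugging Face Transformers and wrap it for LangChain; the model ID, dtype, and generation settings below are assumptions rather than the exact configuration used in this project.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain_community.llms import HuggingFacePipeline
from langchain.memory import ConversationBufferMemory

# Assumed model ID; requires accepting Meta's Llama 2 license on Hugging Face.
model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Wrap a text-generation pipeline so LangChain can call the local model.
generator = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.1,
    repetition_penalty=1.15,
)
llm = HuggingFacePipeline(pipeline=generator)

# Memory keyed to match the {history} and {question} variables in the prompt template above.
memory = ConversationBufferMemory(input_key="question", memory_key="history")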
Area Of Work: Generative AI | Industry: IT/Software
DALL-E: Creating Images from Text
DALL-E is an artificial intelligence (AI) program trained to create detailed images from descriptive text. It has already shown promising results, but its failure modes suggest that putting its algorithm into production applications may take some time. AI algorithms tend to be volatile when it comes to image generation because of limitations in the datasets used to train them. Even so, DALL-E produced logical interpretations not only of practical things but also of abstract ideas. For example, given a caption describing a capybara in a field at sunrise, the model showed surprisingly sound reasoning, filling in plausible details that go beyond those specified in the text. It even showed good judgment with whimsical, imaginary ideas, such as creating a snail shaped like a harp by hollowing out part of the snail's shell and skillfully combining the two objects into one. DALL-E tends to struggle with long strings of text, however, becoming less accurate as additional meaning is added. It also inherits biases from its training data, for example rendering a generic prompt about Chinese food as dumplings. Once matured, a tool like this has a range of applications, from marketing and concept design to illustrating written summaries. Perhaps algorithms like DALL-E could eventually become better than humans at drawing, much as AI has begun to outperform us in areas such as simulated air combat. Why it matters: The new models are the latest in a series of ongoing efforts to create machine learning systems that display something like common sense while doing useful work in the real world, without demanding excessive computing power. What's happening: OpenAI has announced two new systems that attempt to do for image generation what its landmark GPT-3 model did last year for text generation. DALL-E is a neural network "that can take any text and make a picture of it," said Ilya Sutskever, OpenAI co-founder and chief scientist. That includes concepts it never encountered in training, such as a drawing of an anthropomorphized baby daikon radish walking a dog. Flashback: DALL-E works similarly to GPT-3, a large transformer model that can produce original passages of text from a short prompt. CLIP, another new neural network, "can take any visual cues and create robust and reliable textual interpretations," says Sutskever, improving on existing computer-vision techniques with minimal additional training and without expensive computing power. They say: "Last year, we were able to make significant progress on text with GPT-3, but the thing is that the world is not just made of text," Sutskever said. "This is a step towards the ultimate goal of building a neural network that can work on both images and text." How it works: DALL-E, a name OpenAI coined as a portmanteau of the surrealist artist Salvador Dali and Pixar's robot WALL-E, is a model that aims to fulfill the Star Trek dream of simply telling a computer, in ordinary language, what you want and having it produce it. For example, given a text prompt describing a green, pentagon-shaped frame, changing any of the three elements (the shape, the color, or the object) produces a different set of images. Source: OpenAI
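The original DALL-E was a research demo without a public API, but text-to-image generation is now available through OpenAI's Images API. As a rough, hedged illustration of the "describe it and get a picture" workflow the article discusses (not the 2021 research model itself), a request might look like the sketch below, assuming the openai Python package and an OPENAI_API_KEY environment variable.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Generate one image from a descriptive caption, echoing the daikon-radish example above.
result = client.images.generate(
    model="dall-e-3",
    prompt="an illustration of a baby daikon radish in a tutu walking a dog",
    n=1,
    size="1024x1024",
)

print(result.data[0].url)  # URL of the generated image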
Area Of Work: Generative AI
Introduction to LangChain

LangChain is a framework for developing applications backed by language models (Llama 2, ChatGPT, Mistral, etc.). It gives you the ability to connect models to any source of context (few-shot examples, PDF files with content, and so on) and to control how they answer based on the provided context and what actions they take. Its libraries come in two languages: Python and JavaScript.

Pro tip: LangChain also provides a ChatGPT 3.5-powered chatbot that answers anything about LangChain's Python documentation: https://chat.langchain.com/

LangChain provides several products that simplify the entire application lifecycle:
- LangChain Libraries
- LangChain Templates
- LangServe
- LangSmith

Development: you can write applications with langchain or langchain.js; LangChain also provides templates for reference.
Production: with LangSmith, you can keep an eye on your chains and deploy your application with ease.
Deployment: with LangServe, you can turn any chain into an API.

Installation
LangChain installation is a piece of cake; run the command that matches your environment.

Pip:
pip install langchain

Conda:
conda install langchain -c conda-forge

Note: this installs only the bare minimum requirements of LangChain. You will need to install the prerequisites for individual integrations separately.

From source:
1. Clone the LangChain repo.
2. Navigate to the cloned directory.
3. Run in a terminal: pip install -e .

LangChain community: this package contains various third-party integrations. It is installed automatically by langchain and can also be installed separately:
pip install langchain-community

LangChain core: contains the base abstractions that the rest of the LangChain ecosystem uses:
pip install langchain-core

LangServe: deploy LangChain runnables as REST APIs:
pip install "langserve[all]"
For client code only: pip install "langserve[client]"
For server code only: pip install "langserve[server]"

LangChain CLI: useful for working with LangChain templates and other LangServe projects:
pip install langchain-cli

LangSmith SDK:
pip install langsmith
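As a quick taste of what a LangChain application looks like, here is a minimal sketch of a prompt-model-parser chain; it assumes the separate langchain-openai integration package and an OPENAI_API_KEY environment variable, and any supported chat model could be substituted.

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI  # assumes: pip install langchain-openai

# Compose a simple chain: prompt template -> chat model -> plain-string output.
prompt = ChatPromptTemplate.from_template("Explain {topic} in one sentence.")
model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
chain = prompt | model | StrOutputParser()

print(chain.invoke({"topic": "retrieval augmented generation"}))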
Area Of Work: Generative AI | Industry: IT/Software | Technology: LangChain

Related Skills

Additional Search Terms

Copilot, LangChain, DALL-E, Mistral, AI Agent, Deep AI, GPT, Llama, Midjourney, AutoGPT, Deepseek, Matplotlib, Zapier, AutoGen, Claude, Generative Adversarial Network (GAN), Generative AI, Large Language Model (LLM), Prompt Engineering, Prompt Generation, Retrieval Augmented Generation (RAG), Stable Diffusion