SEARCH KEYWORD -- LLM



  Private LLM Integration with RAGFlow: A Step-by-Step Guide

If you’ve found your way here, you’re probably already excited about the potential of the RAGFlow project and eager to see it in action. I was in the same position, so I delved into the codebase to see how it could work with custom large language models (LLMs). This post will walk you through my findings and show you how to get RAGFlow running with your own LLM. As of now (November 8, 2024), RAGFlow offers limited support for local LLMs, and vLLM...

   AI,LLM,RAGFLOW,RAG     2024-11-12 08:40:29

  Testing LLM on MacOS with Llama

As a programmer, I always believe that the best way to understand something is to get your hands dirty and test it out yourself. In the past few years, with the introduction of ChatGPT, AI and associated technologies such as LLMs have become a hot topic in the tech industry, and there are lots of platforms and models available for people to try out. In this post I will demonstrate a step-by-step guide on how to run Llama (an LLM from Meta) on a macOS machine with a model. This will j...

   GUIDE,LLAMA,LOCAL,HUGGING FACE,QWEN     2024-11-17 13:36:28

  Flows.network: Writing an LLM Application in Rust

Over the past year, large language models (LLMs) have been booming and developing rapidly. As an enthusiast of data systems, I would feel out of date if I did not explore this hot field at all. This article summarizes my recent practical experiences attempting to write an LLM application using Rust with flows.network. Concepts Related to Large Language Models When talking about large language models, it's impossible not to mention ChatGPT and OpenAI. Although OpenAI recently change...

   LLM,RUST,APPLICATION,DEVELOPMENT     2024-09-30 21:38:04

  My AI Learning Journey: Exploring the Future of Technology

As someone working in software development, primarily focused on building web products, I’ve always been curious about emerging technologies. The explosion of interest in AI, particularly after the release of ChatGPT, sparked my desire to dive deeper into this fascinating field. Here’s how my journey unfolded. I started with a YouTube video (Wolfram’s explanation) that breaks down how ChatGPT predicts the next word in a sentence (if you don't want to watch the video, you can re...

   AI,LLM,CURSOR,WINDSURF     2025-01-17 00:29:03

  Gemini Example with Go

To connect to and use Gemini, Google's LLM, from Go, one can use the official Go SDK. In this post, we will show a simple chat example to demonstrate how to make it work with Go. The example simply asks the model to translate some English into Chinese and prints its output. The code looks like: var client *genai.Client // geminiOnce.Do(func() { client, err = genai.NewClient(ctx, option.WithAPIKey(string(apiKey))) if err != nil { log.Fatal(err) } model := client.Genera...
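The snippet in the excerpt is cut off, so here is a minimal, self-contained sketch of the same flow using the official github.com/google/generative-ai-go/genai SDK; the model name (gemini-1.5-flash), the GEMINI_API_KEY environment variable, and the prompt text are illustrative assumptions, not details taken from the original post.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/google/generative-ai-go/genai"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()

	// Create the Gemini client; GEMINI_API_KEY is an assumed environment variable.
	client, err := genai.NewClient(ctx, option.WithAPIKey(os.Getenv("GEMINI_API_KEY")))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The model name is an assumption; the original post does not say which Gemini model it uses.
	model := client.GenerativeModel("gemini-1.5-flash")

	// Ask the model to translate English into Chinese, as in the post's example.
	resp, err := model.GenerateContent(ctx, genai.Text("Translate the following English to Chinese: Hello, how are you?"))
	if err != nil {
		log.Fatal(err)
	}

	// Print every text part of every candidate response.
	for _, cand := range resp.Candidates {
		if cand.Content == nil {
			continue
		}
		for _, part := range cand.Content.Parts {
			fmt.Println(part)
		}
	}
}
```

The geminiOnce in the excerpt suggests the original code wraps client creation in a sync.Once so one client is reused across requests; that is a reasonable pattern, but the sketch above keeps things simpler by creating the client directly in main.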

   EXAMPLE,GO,TRANSLATION,GOLANG,GEMINI     2024-12-14 19:37:23

  Applying Large Language Models (LLMs) to Solve Cybersecurity Questions

In this document, we will present some tests, experiments, and analysis conclusions about applying Large Language Models (LLMs) to solve cybersecurity questions. Introduction Large Language Models (LLMs) are increasingly used in education and research for tasks such as analyzing program code and error logs, summarizing papers, and improving reports. In this project, we aim to evaluate the effectiveness of LLMs in solving cybersecurity-related questions, such as Capture The Flag (CTF) challenges, ...

       2024-09-08 04:05:07

  Deploying DeepSeek-R1 Locally with a Custom RAG Knowledge Data Base

Project Design Purpose: The primary goal of this article is to explore how to deploy DeepSeek-R1, an open-source large language model (LLM), and integrate it with a customized Retrieval-Augmented Generation (RAG) knowledge base on your local machine (PC/server). This setup enables the model to utilize domain-specific knowledge for expert-level responses while maintaining data privacy and customization flexibility. By doing so, users can enhance the model’s expertise in specific technical ...

   LLM,RAG,DEPLOYMENT     2025-02-10 00:17:37

  Do Not Be Misled by ‘Build an App in 5 Minutes’: In-Depth Practice with Cursor

In August this year, I tried out Cursor and was thoroughly impressed, prompting me to write an introductory article about it. Soon after, I transitioned my daily work environment entirely from GitHub Copilot + JetBrains to the paid version of Cursor. After several months of use, it has felt incredibly smooth. While using it myself, I’ve often recommended Cursor to colleagues and friends. However, many of them still have questions, such as: What advantages does it have over native ChatGPT ...

   ARTIFICIAL INTELLIGENCE,GUIDE,CURSOR,CODE EDITING,WINDSURF,DISCUSSION     2024-12-17 21:30:22