SEARCH KEYWORD -- DEEPSEEK-R1-ZERO



  Use a Simple Web Wrapper to Share the Local DeepSeek-R1 Model Service to LAN Users

In the previous article Deploying DeepSeek-R1 Locally with a Custom RAG Knowledge Data Base, we introduced the detailed steps for deploying DeepSeek-R1:7b locally with a customized RAG knowledge database on a desktop with an RTX3060. Once the LLM deepseek-r1:7b is running on the local GPU-equipped computer, a new challenge emerges: the service is only reachable on the GPU machine itself. What if we want to use it from other devices on the LAN? Is there a way to access it from a mobile device or...

   LLM,DEEPSEEK,LAN     2025-03-03 00:55:05
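The approach the article describes can be pictured with a minimal sketch: a tiny stdlib HTTP wrapper that listens on the LAN and forwards prompts to the local model runtime. The Ollama endpoint URL and model tag below are assumptions for illustration; the handler and port names are hypothetical, not the article's actual code.

```python
# Minimal LAN wrapper sketch (assumes Ollama serving deepseek-r1:7b
# at the default local endpoint; names and port are illustrative).
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

OLLAMA_URL = "http://127.0.0.1:11434/api/generate"  # assumed local API
MODEL = "deepseek-r1:7b"

def build_payload(prompt: str) -> bytes:
    """Wrap a user prompt into a non-streaming generate request."""
    return json.dumps({"model": MODEL, "prompt": prompt,
                       "stream": False}).encode()

class ChatHandler(BaseHTTPRequestHandler):
    """Accept a plain-text POST from any LAN device, relay it to the
    local model, and return the model's answer as plain text."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        prompt = self.rfile.read(length).decode()
        req = urllib.request.Request(
            OLLAMA_URL, data=build_payload(prompt),
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            answer = json.loads(resp.read()).get("response", "").encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(answer)))
        self.end_headers()
        self.wfile.write(answer)

def serve(port: int = 8080) -> None:
    # Binding to 0.0.0.0 (instead of 127.0.0.1) is what exposes the
    # service to other devices on the LAN.
    ThreadingHTTPServer(("0.0.0.0", port), ChatHandler).serve_forever()

# serve()  # uncomment to run; LAN devices then POST to http://<host-ip>:8080
```

The key design choice is the bind address: the model runtime stays private on localhost, while only this thin wrapper is exposed to the network.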

  Deploying DeepSeek-R1 Locally with a Custom RAG Knowledge Data Base

Project Design Purpose: The primary goal of this article is to explore how to deploy DeepSeek-R1, an open-source large language model (LLM), and integrate it with a customized Retrieval-Augmented Generation (RAG) knowledge base on your local machine (PC/server). This setup enables the model to utilize domain-specific knowledge for expert-level responses while maintaining data privacy and customization flexibility. By doing so, users can enhance the model’s expertise in specific technical ...

   LLM,RAG,DEPLOYMENT     2025-02-10 00:17:37
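The core RAG loop the article builds on can be sketched in a few lines: retrieve the knowledge-base snippet closest to the question, then prepend it to the prompt before querying the LLM. Real deployments use vector embeddings from an embedding model; a toy bag-of-words cosine similarity stands in here, and all function names are illustrative.

```python
# Toy sketch of the RAG retrieve-then-prompt idea (bag-of-words
# similarity in place of real embeddings; names are hypothetical).
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, docs: list[str]) -> str:
    """Return the knowledge-base document closest to the question."""
    q = Counter(question.lower().split())
    return max(docs, key=lambda d: cosine(q, Counter(d.lower().split())))

def build_rag_prompt(question: str, docs: list[str]) -> str:
    """Augment the question with retrieved context before it reaches the LLM."""
    context = retrieve(question, docs)
    return f"Context:\n{context}\n\nQuestion: {question}"

kb = [
    "RTX3060 GPUs have 12 GB of VRAM, enough for 7b models.",
    "RAG retrieves domain documents and adds them to the prompt.",
]
print(build_rag_prompt("How does RAG use domain documents?", kb))
```

The augmented prompt is then sent to the locally deployed model, which answers using the retrieved domain context rather than only its training data.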

  DeepSeek-R1: The New AI Model Shaking the World

In December of last year, DeepSeek’s release of DeepSeek-V3 made waves in the global AI field, achieving performance comparable to top models like GPT-4o and Claude Sonnet 3.5 at an extremely low training cost. This time, the newly launched model, DeepSeek-R1, is not only cost-efficient but also brings significant technological advancements. Moreover, it is an open-source model. The new model continues DeepSeek’s reputation for high cost-effectiveness, reaching GPT-4o-level performan...

   GUIDE,DEEPSEEK-R1,DEEPSEEK-R1-ZERO     2025-02-26 03:55:57