
Known Limitations

Please Note

The following limitations are inherent to the on-premises setup due to its reliance on consumer-grade hardware. In contrast, the cloud version of RapidGPT/Magnus operates on enterprise-grade hardware and optimized infrastructure, delivering superior performance, enhanced context management, lower hallucination rates, and more robust feature support. Users who opt for the cloud solution can expect a more seamless and reliable experience.

Warning

Please review this document periodically for the latest information or contact our support team for further assistance.

Shadow RAG is Disabled

Issue: The shadow Retrieval-Augmented Generation (RAG) feature is not enabled on the on-premises workstation.

Recommended Solution: Use the alternative commands: !rag, !ip, and !docs.

Explanation: These commands provide similar contextual and documentation access, ensuring continuity of functionality despite the unavailability of shadow RAG.
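For example, IP documentation or IP context can be attached to a prompt with the documented syntax. The IP name uart_ctrl and the question text below are purely illustrative:

!docs::uart_ctrl Summarize the register map described in this IP's documentation.
!ip::uart_ctrl Provide an instantiation template for this IP.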

Context Length Issues

Issue: The system may struggle to maintain extended conversational context due to hardware constraints.

Recommended Solution: Divide conversations into multiple scopes (for example, by module).

Explanation: Segmenting lengthy interactions helps preserve context and improves coherence, allowing the tool to focus on specific parts of the design more effectively.
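For instance, rather than carrying an entire design discussion through one long conversation, you might keep one scope per module (the module names below are illustrative only):

Scope 1 (uart_ctrl): prompts about the UART controller's FSM, registers, and testbench.
Scope 2 (spi_master): prompts about the SPI master's timing and clock divider.

Each scope then stays short enough for the model to keep the relevant context in view.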

Higher Hallucination Rate

Issue: There is an increased likelihood of generating inaccurate or hallucinated responses.

Recommended Solution: Keep prompts simple and direct; for complex inquiries, split the prompt into several shorter, focused prompts.

Explanation: Simplifying and breaking down complex inputs reduces cognitive load on the AI model, thereby minimizing the risk of hallucinated outputs.
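For example, a broad request such as "Explain this module, fix its reset logic, and add an overflow assertion" (wording illustrative) is more reliable when split into separate prompts:

1. Explain the finite-state machine in this module.
2. Fix the reset logic so the counter clears on reset.
3. Add an assertion that flags counter overflow.

Each prompt targets one task, so an inaccurate answer is easier to spot and retry.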

Inaccurate Code Completion

Issue: The tool might generate incomplete or imprecise code when handling larger modules.

Recommended Solution: Generate smaller code segments and refine them using the AutoReview feature.

Explanation: Working with smaller code blocks enables more accurate completions, and AutoReview helps detect and correct inaccuracies early in the process.
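For example, instead of requesting a complete large module in one step, you might proceed in smaller increments. The steps below sketch one possible workflow, not a prescribed procedure:

1. Request one submodule or block at a time, such as the FIFO pointer logic.
2. Run AutoReview on the generated segment and apply its findings.
3. Move on to the next segment only after the previous one passes review.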

Incorrect Suggestions for Bang Command Usage

Issue: Occasionally, the system may provide the wrong format for bang commands.

Recommended Solution: Use the correct syntax: !ip::<ip-name> or !docs::<ip-name>.

Explanation: Adhering to the proper command format helps the tool interpret the commands correctly and reduces errors.
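For example, with a hypothetical IP named uart_ctrl:

Correct: !ip::uart_ctrl or !docs::uart_ctrl
Incorrect (hypothetical malformed variants; do not use): !ip uart_ctrl or !docs(uart_ctrl)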

API Unresponsiveness After Long Idle Periods

Issue: Extended periods of inactivity may cause the API to become unresponsive.

Recommended Solution: Restart the container by running sudo docker compose restart from the ~/magnus directory.

Explanation: Restarting refreshes system resources and reinitializes processes, mitigating issues that arise from prolonged idleness.
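For example, from a shell on the workstation:

cd ~/magnus
sudo docker compose restart

This restarts the Magnus containers in place; depending on the installation, the services may take a short while to become responsive again.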

AutoReview Performance Degradation

Issue: The AutoReview functionality in the on-premises system is somewhat less effective than its cloud-based counterpart.

Recommended Solution: Use AutoReview with caution and consider supplementing it with manual reviews where necessary.

Explanation: Due to hardware limitations, the on-premises AutoReview may not catch all issues as effectively, so additional manual verification is advised.

Potential Issues with !docs and !ip Commands

Issue: Commands such as !docs and !ip may not always prompt the LLM to incorporate the retrieved material accurately, which can lead to hallucinated responses.

Recommended Solution: If the generated response appears inaccurate, try resending the prompt.

Explanation: Retrieved files that are very small or very large can reduce the accuracy of the response. Resending the prompt often helps the system generate a more precise and complete answer.