
ROCm vLLM Deployment

by alexhegit · 10 votes

# ROCm vLLM Deployment Skill

Production-ready automation for deploying vLLM inference services on AMD ROCm GPUs using Docker Compose.

## Features

- Environment Auto-Check - Detects and repairs miss…

AI Summary

This skill automates the deployment of vLLM inference services on AMD ROCm GPUs using Docker Compose, including environment checks and model parameter detection.
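
For orientation, the single-service launch that such a Docker Compose setup typically wraps looks roughly like the sketch below. This is not the skill's actual compose file: the image tag rocm/vllm:latest, the example model Qwen/Qwen2.5-7B-Instruct, and the port choice are placeholder assumptions layered on the standard ROCm container options.

```sh
# Roughly the docker-run equivalent of a one-service compose file for vLLM
# on an AMD GPU:
#   /dev/kfd and /dev/dri   GPU device nodes the container must see
#   --group-add video       group that owns those device nodes
#   seccomp=unconfined      commonly required for ROCm memory mapping
#   --shm-size              vLLM needs a large shared-memory segment
docker run --rm -it \
  --device=/dev/kfd --device=/dev/dri \
  --group-add video \
  --security-opt seccomp=unconfined \
  --shm-size 8g \
  -p 8000:8000 \
  rocm/vllm:latest \
  vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000
```

If the server starts cleanly, it exposes an OpenAI-compatible HTTP API on the mapped port, so `curl http://localhost:8000/v1/models` makes a quick smoke test.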

Install

claw install alexhegit/rocm-vllm-deployment
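
Before running the skill, it can help to confirm the prerequisites its environment auto-check is described as covering. The commands below are a manual sketch using standard ROCm and Docker tooling, not the skill's own check script:

```sh
# GPU visibility: rocm-smi ships with the ROCm stack and lists detected
# AMD GPUs; failure here usually points at a missing or broken driver install.
rocm-smi

# Device nodes and their group ownership, which the container will need.
ls -l /dev/kfd /dev/dri

# The current user normally needs to be in the video (and often render) group.
groups

# Docker Engine plus the Compose plugin.
docker --version
docker compose version
```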

Security Analysis


Security Score: 6/10
Composite score from AI analysis of code safety, publisher trust, scope clarity, permission surface, and community signals.

Verdict: Review
Derived from the security score: Safe (7+) · Review (5-6) · Suspicious (3-4) · Malicious (1-2).

Risk Level: N/A
Overall risk assessment: Low (safe to use), Medium (review recommended), High (use with caution), Critical (do not use).

Risk Flags

  • community security level
  • relies on user's .bash_profile
  • handles sensitive tokens (see the sketch below)
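
The token flag presumably refers to credentials such as a Hugging Face access token used to pull gated model weights. Under that assumption, a common pattern is sketched below: keep the token in the shell environment or an untracked .env file that Docker Compose interpolates, rather than hard-coding it in .bash_profile or the compose file itself.

```sh
# Placeholder token value; never commit a real token to version control.
export HF_TOKEN="hf_xxxxxxxxxxxxxxxx"

# docker compose reads variables from a .env file next to docker-compose.yml;
# restrict its permissions so other local users cannot read it.
printf 'HF_TOKEN=%s\n' "$HF_TOKEN" > .env
chmod 600 .env
```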

This entry has preliminary scoring. Detailed multi-criteria analysis is in progress.

Repository Insights

  • Contributors: 0
  • Size: 0 KB

Frequently Asked Questions

What is ROCm vLLM Deployment?

This skill automates the deployment of vLLM inference services on AMD ROCm GPUs using Docker Compose, including environment checks and model parameter detection.

Is ROCm vLLM Deployment safe to use?

ROCm vLLM Deployment has been analyzed by ClawGrid's security engine and rated "Review" with a security score of 6/10. See the Security Dashboard for details.

How do I find more AI & LLMs tools?

Browse all AI & LLMs tools on ClawGrid, or explore all skills and agents.
