Databricks MCP Server
Community MCP server for Databricks providing cluster management, job execution, notebook access, DBFS file listing, and SQL execution
Server Info
- Package: databricks-mcp-server
- Registry: pypi
- Repository: JustTryAI/databricks-mcp-server
- Maintainer: Community
- Category: Analytics & Data
- Tags: data-lake, ml, spark
- Last Scanned: 7 Apr 2026
Findings
9 issues

Authentication & Identity
HIGH: No per-request auth - requires instance-per-user
Stdio-only transport via FastMCP's run_stdio_async(). The server authenticates to Databricks using a personal access token (DATABRICKS_TOKEN) sent as a Bearer header to the Databricks API; configuration comes from env vars or a .env file (python-dotenv). There is no MCP-level authentication: any client with stdio access gets full Databricks access. For multi-tenant deployment, the platform must spawn a separate server instance per user.
Add HTTP/SSE transport to accept per-request Authorization headers, or implement the MCP OAuth spec.
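Until per-request auth exists, the only safe multi-tenant pattern is the instance-per-user one described above. A minimal sketch of how a platform might build the per-user launch environment; the module entry point `databricks_mcp_server` and the token/host values are assumptions for illustration:

```python
import os

# Hypothetical helper: build the environment for one per-user server
# process. Because the server has no MCP-level auth, each process gets
# exactly one user's Databricks personal access token.
def per_user_server_env(user_token: str, host: str) -> dict:
    env = dict(os.environ)
    env["DATABRICKS_HOST"] = host          # workspace URL for this user
    env["DATABRICKS_TOKEN"] = user_token   # PAT scoped to this user only
    return env

# Assumed entry point; the real invocation may differ.
cmd = ["python", "-m", "databricks_mcp_server"]
env = per_user_server_env("dapi-example-token",
                          "https://example.cloud.databricks.com")
print(env["DATABRICKS_TOKEN"])
```

The platform would pass `cmd` and `env` to its process spawner (e.g. `subprocess.Popen(cmd, env=env)`), one process per authenticated user.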
Tool Schema Quality
HIGH: 11 of 11 tools have no input schema
No tool has an inputSchema defined. All tools accept a generic Dict[str, Any] 'params' argument. Required parameters and constraints are described only in the tool description string, not enforced by schema. The execute_sql tool accepts any SQL statement (not limited to SELECT). Parameter validation happens at the Databricks API level, not at the MCP layer.
Define JSON Schema with explicit types for all tool parameters.
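As a sketch of that recommendation, here is what an explicit schema for execute_sql could look like, with a minimal stdlib validator covering the required/type/unknown-key checks (the schema is illustrative, not the project's actual code; a real server would use a full JSON Schema library):

```python
# Illustrative JSON Schema for execute_sql, mirroring its documented
# parameters (statement, warehouse_id, catalog, schema).
EXECUTE_SQL_SCHEMA = {
    "type": "object",
    "properties": {
        "statement": {"type": "string", "minLength": 1},
        "warehouse_id": {"type": "string"},
        "catalog": {"type": "string"},
        "schema": {"type": "string"},
    },
    "required": ["statement", "warehouse_id"],
    "additionalProperties": False,
}

def validate(params: dict, schema: dict) -> list[str]:
    """Check only the subset of JSON Schema used above:
    required keys, string types, and no unknown keys."""
    errors = []
    for name in schema["required"]:
        if name not in params:
            errors.append(f"missing required parameter: {name}")
    for name, value in params.items():
        if name not in schema["properties"]:
            errors.append(f"unexpected parameter: {name}")
        elif schema["properties"][name]["type"] == "string" and not isinstance(value, str):
            errors.append(f"{name} must be a string")
    return errors

print(validate({"statement": "SELECT 1"}, EXECUTE_SQL_SCHEMA))
```

With a schema in place, validation errors surface at the MCP layer instead of as opaque Databricks API failures.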
CRITICAL: Dangerous execution surface: execute_sql accepts arbitrary SQL statements with no validation, type constraints, or read-only enforcement
The tool allows raw query execution, which could be exploited via prompt injection.
Use parameterized queries or validated command sets.
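One way to approximate read-only enforcement is a keyword gate in front of execute_sql. This is a sketch only: leading-keyword checks are coarse, and the robust control is running statements under a Databricks identity that holds read-only grants.

```python
import re

# Coarse read-only gate: accept statements that begin with a read
# keyword, reject multi-statement batches. Not a substitute for
# read-only grants at the warehouse level.
READ_ONLY = re.compile(r"^\s*(SELECT|SHOW|DESCRIBE|EXPLAIN|WITH)\b", re.IGNORECASE)

def guard_statement(statement: str) -> str:
    # A semicolon anywhere except the tail suggests a second statement.
    if ";" in statement.rstrip().rstrip(";"):
        raise ValueError("multi-statement input rejected")
    if not READ_ONLY.match(statement):
        raise ValueError("only read-only statements are allowed")
    return statement

print(guard_statement("SELECT * FROM sales LIMIT 10"))
```

Note that `WITH` can prefix writes in some SQL dialects, which is exactly why keyword gating should back up, not replace, permission scoping.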
Permission Granularity
MEDIUM: 1 tool combines read and write operations
Infrastructure management tools (create_cluster, terminate_cluster, start_cluster) are mixed with read tools and cannot be selectively disabled. execute_sql is mixed read/write since it accepts any SQL. run_job can trigger arbitrary workloads. No annotation system for destructive hints. Tool descriptions list required parameters but don't indicate destructiveness.
Split into separate read and write tools.
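The missing annotation system could look like the `readOnlyHint`/`destructiveHint` fields from the MCP tool-annotations spec. A sketch, assuming a gateway that filters the tool list per session (the annotation values shown are my own classification of these tools, not the project's):

```python
# Illustrative annotations for a subset of the server's tools, using
# the MCP spec's readOnlyHint / destructiveHint fields.
TOOL_ANNOTATIONS = {
    "list_clusters":     {"readOnlyHint": True,  "destructiveHint": False},
    "get_cluster":       {"readOnlyHint": True,  "destructiveHint": False},
    "create_cluster":    {"readOnlyHint": False, "destructiveHint": False},
    "terminate_cluster": {"readOnlyHint": False, "destructiveHint": True},
    "run_job":           {"readOnlyHint": False, "destructiveHint": False},
    "execute_sql":       {"readOnlyHint": False, "destructiveHint": True},
}

def allowed_tools(read_only_session: bool) -> list[str]:
    """Return the tool names a session may call under a read-only policy."""
    if not read_only_session:
        return sorted(TOOL_ANNOTATIONS)
    return sorted(n for n, a in TOOL_ANNOTATIONS.items() if a["readOnlyHint"])

print(allowed_tools(read_only_session=True))
```

With hints like these, a client or gateway can disable the write/admin set independently of the read set.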
HIGH: 3 destructive operations not isolated
Admin/delete tools are mixed with regular operations and cannot be independently disabled.
Namespace admin tools separately for independent access control.
LLM Safety
MEDIUM: 2 tool descriptions are too vague
Short or generic descriptions make tool selection unreliable.
Expand descriptions with specific actions, data types, and side effects.
Data Exposure
MEDIUM: 4 list operations lack pagination
list_clusters, list_jobs, list_notebooks, and list_files all return full result sets with no pagination, limit, or offset parameters. export_notebook truncates content at 1000 characters. SQL query results are returned in full. No field selection support on any tool.
Add limit/offset or cursor-based pagination.
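A minimal sketch of cursor-based pagination wrapped around an unpaginated list result (simulated here with a static list; the helper and field names are illustrative):

```python
# Hypothetical pagination wrapper: slice the full result set and hand
# back an opaque-ish cursor (here just an index) for the next page.
def paginate(items: list, cursor: int = 0, limit: int = 2) -> dict:
    page = items[cursor:cursor + limit]
    next_cursor = cursor + limit if cursor + limit < len(items) else None
    return {"items": page, "next_cursor": next_cursor}

clusters = ["c1", "c2", "c3", "c4", "c5"]
first = paginate(clusters)                    # first two items, cursor 2
second = paginate(clusters, cursor=first["next_cursor"])
print(second)
```

A production version would accept `limit` in each tool's input schema and return `next_cursor: null` on the final page so clients know when to stop.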
LOW: No field selection on responses
Responses return full records rather than projected fields.
Implement field selection to return only relevant fields.
Maintenance & Trust
LOW: Community-maintained by JustTryAI
No official vendor backing.
Seek vendor verification.
Tools
11 total

| Name | Description | Risk |
|---|---|---|
| list_clusters | List all Databricks clusters | read |
| create_cluster | Create a new Databricks cluster with parameters: cluster_name (required), spark_version (required), node_type_id (required), num_workers, autotermination_minutes | admin |
| terminate_cluster | Terminate a Databricks cluster with parameter: cluster_id (required) | admin |
| get_cluster | Get information about a specific Databricks cluster with parameter: cluster_id (required) | read |
| start_cluster | Start a terminated Databricks cluster with parameter: cluster_id (required) | admin |
| list_jobs | List all Databricks jobs | read |
| run_job | Run a Databricks job with parameters: job_id (required), notebook_params (optional) | write |
| list_notebooks | List notebooks in a workspace directory with parameter: path (required) | read |
| export_notebook | Export a notebook from the workspace with parameters: path (required), format (optional, one of: SOURCE, HTML, JUPYTER, DBC) | read |
| list_files | List files and directories in a DBFS path with parameter: dbfs_path (required) | read |
| execute_sql | Execute a SQL statement with parameters: statement (required), warehouse_id (required), catalog (optional), schema (optional) | write |