Databricks MCP Server

Community MCP server for Databricks providing cluster management, job execution, notebook access, DBFS file listing, and SQL execution

Overall Score: 44/100

Score Breakdown

Server Info

Package
databricks-mcp-server
Registry
pypi
Maintainer
Community
Category
Analytics & Data
Tags
data-lake, ml, spark
Last Scanned
7 Apr 2026

Findings

9 issues

Authentication & Identity

HIGH: No per-request auth; requires instance-per-user

Stdio-only transport via FastMCP's run_stdio_async(). The server authenticates to Databricks with a personal access token (DATABRICKS_TOKEN) sent as a Bearer header to the Databricks API; configuration comes from environment variables or a .env file (python-dotenv). There is no MCP-level authentication, so any client with stdio access gets full D... For multi-tenant deployment, the platform must spawn a separate server instance per user.

Remediation

Add HTTP/SSE transport to accept per-request Authorization headers, or implement the MCP OAuth spec.
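The remediation can be illustrated with a minimal sketch of per-request bearer-token extraction for a hypothetical HTTP transport. The names `resolve_token` and `databricks_headers` are illustrative, not part of this server's code:

```python
# Hedged sketch: instead of one process-wide DATABRICKS_TOKEN env var,
# pull the caller's own token from each request's Authorization header.
# This is an assumption about how a per-request scheme could look, not
# the server's actual implementation.

def resolve_token(headers: dict) -> str:
    """Return the bearer token for this request, or raise if absent."""
    auth = headers.get("Authorization", "")
    scheme, _, token = auth.partition(" ")
    if scheme.lower() != "bearer" or not token:
        raise PermissionError("Missing or malformed Authorization header")
    return token

def databricks_headers(request_headers: dict) -> dict:
    """Forward the caller's credential to the Databricks REST API."""
    return {"Authorization": f"Bearer {resolve_token(request_headers)}"}
```

With this shape, each client call to the Databricks API carries that client's own credential, so one server instance can serve multiple users without sharing a token.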

Tool Schema Quality

HIGH: 11 of 11 tools have no input schema

No tool defines an inputSchema. All tools accept a generic Dict[str, Any] 'params' argument; required parameters and constraints are described only in the tool description string and are not enforced by schema. The execute_sql tool accepts any SQL statement (not limited to SELECT). Parameter validation happens at the Databricks API level, not at the MCP layer.

Remediation

Define JSON Schema with explicit types for all tool parameters.
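As a sketch of what the remediation could look like for one tool, here is a hypothetical explicit inputSchema for get_cluster plus a tiny stdlib-only validator (real MCP frameworks such as FastMCP can generate the schema from typed function signatures; the `validate` helper below is illustrative only):

```python
# Assumed schema for get_cluster, based on the parameter list in this
# report; not taken from the server's source.
GET_CLUSTER_SCHEMA = {
    "type": "object",
    "properties": {
        "cluster_id": {"type": "string", "minLength": 1},
    },
    "required": ["cluster_id"],
    "additionalProperties": False,
}

def validate(params: dict, schema: dict) -> list:
    """Return a list of violations; an empty list means params are valid."""
    errors = []
    for key in schema["required"]:
        if key not in params:
            errors.append(f"missing required parameter: {key}")
    if not schema.get("additionalProperties", True):
        for key in params:
            if key not in schema["properties"]:
                errors.append(f"unexpected parameter: {key}")
    for key, spec in schema["properties"].items():
        if key in params and spec["type"] == "string" and not isinstance(params[key], str):
            errors.append(f"{key} must be a string")
    return errors
```

Rejecting malformed params at the MCP layer gives the model actionable errors before any Databricks API call is made.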

CRITICAL: Dangerous execution surface: execute_sql accepts arbitrary SQL statements with no validation, type constraints, or read-only enforcement

Tool allows raw code/query execution which could be exploited via prompt injection.

Remediation

Use parameterized queries or validated command sets.
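One possible validated-command approach is a coarse read-only gate in front of execute_sql. This is a defense-in-depth sketch under stated assumptions, not the server's implementation; keyword filters can be bypassed (and can false-positive on keywords inside string literals), so warehouse-level permissions should remain the primary control:

```python
import re

# Write/DDL keywords to reject; illustrative, not exhaustive.
FORBIDDEN = re.compile(
    r"\b(INSERT|UPDATE|DELETE|DROP|ALTER|CREATE|TRUNCATE|GRANT|MERGE)\b",
    re.IGNORECASE,
)

def is_read_only(statement: str) -> bool:
    """Accept only single statements that begin with a read-only verb."""
    stripped = statement.strip().rstrip(";")
    if ";" in stripped:  # reject multi-statement payloads
        return False
    if not re.match(r"(?i)\s*(SELECT|SHOW|DESCRIBE|EXPLAIN)\b", stripped):
        return False
    return not FORBIDDEN.search(stripped)
```

A statement that fails the gate would be refused before reaching the warehouse, which blunts the prompt-injection path described above.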

Permission Granularity

MEDIUM: 1 tool combines read and write operations

Infrastructure management tools (create_cluster, terminate_cluster, start_cluster) are mixed with read tools and cannot be selectively disabled. execute_sql is mixed read/write since it accepts any SQL. run_job can trigger arbitrary workloads. No annotation system for destructive hints. Tool descriptions list required parameters but don't indicate destructiveness.

Remediation

Split into separate read and write tools.

HIGH: 3 destructive operations not isolated

Admin/delete tools are mixed with regular operations and cannot be independently disabled.

Remediation

Namespace admin tools separately for independent access control.
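A minimal sketch of the namespacing idea: tag each tool with a risk namespace so a deployment can disable the admin set wholesale. The tool names mirror this server's tool table; the registry helper itself is hypothetical:

```python
# Assumed mapping based on the risk column in this report's tool table.
TOOL_NAMESPACES = {
    "list_clusters": "read",
    "get_cluster": "read",
    "list_jobs": "read",
    "run_job": "write",
    "execute_sql": "write",
    "create_cluster": "admin",
    "terminate_cluster": "admin",
    "start_cluster": "admin",
}

def allowed_tools(enabled: set) -> list:
    """Return tool names whose namespace is enabled for this client."""
    return sorted(name for name, ns in TOOL_NAMESPACES.items() if ns in enabled)
```

A read-only client would then be registered with `allowed_tools({"read"})` and never see terminate_cluster at all.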

LLM Safety

MEDIUM: 2 tool descriptions are too vague

Short or generic descriptions make tool selection unreliable.

Remediation

Expand descriptions with specific actions, data types, and side effects.
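For illustration, a before/after pair for one description; the "after" wording is a suggestion, not taken from the server's source:

```python
# Vague description (as listed in this report's tool table):
VAGUE = "List all Databricks jobs"

# Expanded suggestion: names the action, the returned data, and the
# absence of side effects, which helps an LLM pick the right tool.
SPECIFIC = (
    "List all Databricks jobs in the workspace. Read-only, no side "
    "effects. Returns each job's job_id, name, and creator."
)
```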

Data Exposure

MEDIUM: 4 list operations lack pagination

list_clusters, list_jobs, list_notebooks, and list_files all return full result sets with no pagination, limit, or offset parameters. export_notebook truncates content at 1000 characters. SQL query results are returned in full. No field selection support on any tool.

Remediation

Add limit/offset or cursor-based pagination.
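A sketch of limit/offset pagination that any of the list_* tools could adopt; the `paginate` helper and its parameter names are illustrative:

```python
def paginate(items: list, limit: int = 50, offset: int = 0) -> dict:
    """Return one page of results plus metadata for fetching the rest."""
    page = items[offset:offset + limit]
    return {
        "items": page,
        "offset": offset,
        "limit": limit,
        "total": len(items),
        "has_more": offset + limit < len(items),
    }
```

Bounded pages keep large cluster or job lists from flooding the model's context window in a single tool response.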

LOW: No field selection on responses

Responses return full records rather than projected fields.

Remediation

Implement field selection to return only relevant fields.
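A minimal sketch of field projection, assuming an optional `fields` parameter is added to each tool (the parameter and helper are hypothetical):

```python
def project(record: dict, fields=None) -> dict:
    """Keep only the requested top-level fields; None returns everything."""
    if fields is None:
        return record
    return {k: v for k, v in record.items() if k in fields}
```

A caller asking for `fields=["cluster_id", "state"]` would then receive two fields instead of the full cluster record.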

Maintenance & Trust

LOW: Community-maintained by JustTryAI

No official vendor backing.

Remediation

Seek vendor verification.

Tools

11 total
Name | Description | Risk
list_clusters | List all Databricks clusters | read
create_cluster | Create a new Databricks cluster with parameters: cluster_name (required), spark_version (required), node_type_id (required), num_workers, autotermination_minutes | admin
terminate_cluster | Terminate a Databricks cluster with parameter: cluster_id (required) | admin
get_cluster | Get information about a specific Databricks cluster with parameter: cluster_id (required) | read
start_cluster | Start a terminated Databricks cluster with parameter: cluster_id (required) | admin
list_jobs | List all Databricks jobs | read
run_job | Run a Databricks job with parameters: job_id (required), notebook_params (optional) | write
list_notebooks | List notebooks in a workspace directory with parameter: path (required) | read
export_notebook | Export a notebook from the workspace with parameters: path (required), format (optional, one of: SOURCE, HTML, JUPYTER, DBC) | read
list_files | List files and directories in a DBFS path with parameter: dbfs_path (required) | read
execute_sql | Execute a SQL statement with parameters: statement (required), warehouse_id (required), catalog (optional), schema (optional) | write

Deploy Databricks MCP Server securely

CompleteFlow adds per-user authentication, permission scoping, and audit logging to any MCP server out of the box.

Deploy on CompleteFlow