
Scoring Transparency

We believe transparency builds trust. Here's exactly how Vidde scores developer autonomy.

How It Works

Every company receives a Dev Autonomy Score (0–100) based on community reviews across 5 scored categories, plus 3 informational fields collected for context.

  • Each review answers 8 structured questions — 5 affect the score, 3 are informational only.
  • Each of the 5 scored categories is rated 0–100 independently against its own criteria.
  • The raw score is the average of the scored categories that were answered.
  • The final score is penalized by coverage: a company with only partial data won't rank as highly as one with complete reviews.
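The steps above can be sketched as a small scoring function. This is an illustrative sketch only: the list-of-optional-scores input and the `None`-for-unanswered convention are assumptions, not Vidde's actual data model.

```python
# Sketch of the Dev Autonomy Score, assuming a review's five scored
# categories arrive as 0-100 values, with None marking an unanswered category.
SCORED_CATEGORIES = 5

def dev_autonomy_score(category_scores):
    """category_scores: list of 0-100 numbers, or None for unanswered."""
    answered = [s for s in category_scores if s is not None]
    if not answered:
        return 0.0
    raw = sum(answered) / len(answered)           # average of answered categories
    coverage = len(answered) / SCORED_CATEGORIES  # penalty for missing data
    return raw * coverage
```

For example, three categories answered at 80 each yields a raw score of 80 but a final score of 48, because coverage is only 3/5.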

Overall Score Labels

  • 80–100 — Excellent: Company invests heavily in developer tools and freedom.
  • 60–79 — Good: Solid tooling culture with minor constraints.
  • 40–59 — Mixed: Tooling varies significantly across categories.
  • 20–39 — Constrained: Significant restrictions on tools and autonomy.
  • 0–19 — Limited: Very restrictive tooling environment.

Scoring Categories

AI Tools

Which AI coding assistants (ChatGPT, Claude, GitHub Copilot, etc.) are permitted, and what is the general token/usage budget?

Base score: Percentage of the 7 tracked tools that are allowed.

Token budget modifier: −10 points if a limited budget applies.

Example: 4 out of 7 tools allowed, unlimited budget → 57 points.
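As a sketch, the rule above might be computed like this; the function name and clamping-at-zero behavior are assumptions for illustration.

```python
# Hypothetical AI Tools category score: percentage of 7 tracked tools allowed,
# minus 10 points when a limited token budget applies.
TRACKED_AI_TOOLS = 7

def ai_tools_score(allowed_count, limited_budget):
    base = allowed_count / TRACKED_AI_TOOLS * 100
    if limited_budget:
        base -= 10
    return max(0, round(base))
```

With 4 of 7 tools allowed and no budget cap, this rounds 57.14 down to the 57 points in the example above.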

Hardware

Which operating systems are developers allowed to use? Do they have admin rights?

Base score (OS options):

  • 100 — All three (macOS, Windows, Linux)
  • 65 — Two OS options
  • 35 — One OS only
  • 0 — None specified

Adjustments:

  • +5 points if admin rights are granted
  • −15 points if admin rights are denied
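The base score and adjustments above can be sketched as follows; the three-valued `admin_rights` argument (True/False/None for unknown) and the 0–100 clamp are assumptions, not documented behavior.

```python
# Hypothetical Hardware category score: base from number of permitted OS
# options, adjusted by admin-rights status, clamped to 0-100.
def hardware_score(os_count, admin_rights):
    """os_count: permitted OS options (0-3); admin_rights: True, False, or None."""
    base = {3: 100, 2: 65, 1: 35, 0: 0}[os_count]
    if admin_rights is True:
        base += 5
    elif admin_rights is False:
        base -= 15
    return max(0, min(100, base))
```

Note the clamp matters at the top end: all three OS options plus admin rights would otherwise exceed 100.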

Network

How restrictive is internet access? Are registries, package managers, or sites blocked? Is VPN mandatory?

Base score (restriction level):

  • 100 — Open (full access)
  • 60 — Some restrictions
  • 30 — Heavily restricted

Adjustments:

  • −5 points per blocked resource (max −30)
  • −10 points if VPN is mandatory
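A minimal sketch of the rules above; the restriction-level labels used as dictionary keys are assumed names, not Vidde's actual field values.

```python
# Hypothetical Network category score: base from restriction level, minus
# 5 per blocked resource (capped at -30), minus 10 if VPN is mandatory.
def network_score(restriction_level, blocked_resources, vpn_mandatory):
    base = {"open": 100, "some": 60, "heavy": 30}[restriction_level]
    base -= min(5 * blocked_resources, 30)  # cap the blocked-resource penalty
    if vpn_mandatory:
        base -= 10
    return max(0, base)
```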

Software Approval

How long does it typically take to get a new tool approved by IT?

Score by speed:

  • 100 — Easy (little to no approval)
  • 40 — Bureaucratic (weeks or more)
  • 0 — Blocked (rarely or never approved)

Compliance

How much day-to-day hassle does security and compliance create? Rated on a 1–5 scale.

Score by overhead level:

  • 1 (No hassle) → 100
  • 2 (Manageable) → 65
  • 3 (Noticeable) → 40
  • 4 (Significant) → 15
  • 5 (Extreme) → 0
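The Software Approval and Compliance categories above are straight table lookups. A minimal sketch, where the string keys are assumed labels rather than Vidde's actual field names:

```python
# Hypothetical lookup tables for the two mapping-style categories.
APPROVAL_SCORES = {"easy": 100, "bureaucratic": 40, "blocked": 0}
COMPLIANCE_SCORES = {1: 100, 2: 65, 3: 40, 4: 15, 5: 0}

def approval_score(speed):
    return APPROVAL_SCORES[speed]

def compliance_score(overhead_level):
    """overhead_level: 1 (no hassle) through 5 (extreme)."""
    return COMPLIANCE_SCORES[overhead_level]
```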

Company Context (Informational Only)

These fields are collected to help developers understand a company's environment, but they do not affect the score. There is no objective quality signal in which cloud provider or communication tool a company uses, or whether they are remote-first or office-first.

Cloud & Infra · info only

Which cloud and infra providers does the company use? Shown on company profiles to help developers understand the tech stack.

Communication · info only

Which communication tools does the company use? Shown on company profiles for context.

Remote Work Policy · info only

What is the company's remote work policy? Shown on company profiles. We do not believe remote-first is objectively better than office-first — this is a preference, not a quality signal.

What We Value

Vidde's scoring system reflects a core belief: developers work best when they have autonomy and the right tools.

  • Developer autonomy: Freedom to choose your OS, tools, and workspace setup matters.
  • Fast tool approval: Waiting months for a tool kills productivity and signals mistrust.
  • Open internet access: Developers need unblocked registries, documentation, and Stack Overflow.
  • Modern AI tools: ChatGPT, Copilot, and Claude are now essential productivity tools for developers.
  • Minimal compliance overhead: Security is important, but excessive processes slow teams down.
  • Transparency about the work environment: Remote policy, cloud stack, and communication tools are surfaced as context — not judged.

Methodology Notes

Aggregation: When multiple reviews exist for a company, we average scores across all reviews for each category. This means every review contributes equally, regardless of company size.

Coverage penalty: A company with only 3 of 5 scored categories answered can't rank the same as one with full coverage. We penalize incomplete data by multiplying the raw score by coverage percentage.

Coverage formula:

Final Score = Raw Score × (Answered Scored Categories / 5)

Example: 80 raw score with 3/5 scored categories answered → 80 × 0.6 = 48 final score
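The aggregation and coverage rules can be sketched end to end; the review dict shape and category keys here are assumptions for illustration, not Vidde's actual schema.

```python
# Sketch of company-level aggregation: average each category across all
# reviews that answered it, average those category scores, then apply the
# coverage penalty (answered categories / 5).
CATEGORIES = ("ai", "hardware", "network", "approval", "compliance")

def company_score(reviews):
    """reviews: list of dicts mapping category name -> 0-100 score."""
    per_category = []
    for cat in CATEGORIES:
        values = [r[cat] for r in reviews if cat in r]
        if values:
            per_category.append(sum(values) / len(values))  # each review counts equally
    if not per_category:
        return 0.0
    raw = sum(per_category) / len(per_category)
    return raw * (len(per_category) / len(CATEGORIES))  # coverage penalty
```

Two reviews that together answer only three categories at 80 each reproduce the worked example above: raw 80, coverage 0.6, final 48.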

Rate limiting: One review per company per IP address per 30 days prevents gaming the system.

No votes or fake reviews: We only count genuine reviews submitted through our review form. Community moderation may flag suspicious patterns, and flagged reviews can be contested through a transparent appeals process.

Help Make This Better

Our scoring system is built by the community. If you find an unfair category, think we're missing something, or want to suggest improvements — reach out.

Questions? We're always open to feedback.