Risk Calculator Methodology

Technical specification of the Risky Plugins security risk measurement system, scoring algorithms, and false positive reduction mechanisms.

Risky Plugins Security Research Team
Version 1.0
#risk-assessment #methodology #scoring #security


Overview

The Risky Plugins platform uses a real-time risk calculator to analyze browser extensions, VS Code extensions, and Microsoft 365 apps. The system applies confidence-weighted scoring and logarithmic damping to identify security threats.

This document details:

  • Criticality measurement and risk score calculation.
  • Definition of "IOCs" within this system.
  • False positive mitigation strategies.
  • Scoring system implementation.

Risk Scoring Logic

High false positive rates cause alert fatigue. This system therefore uses a conservative scoring approach:

  • CRITICAL: Confirmed, high-confidence threats.
  • HIGH: Serious security concerns requiring review.
  • MEDIUM/LOW: Potential issues requiring monitoring.
  • MINIMAL: Clean security assessments.

IOC Definition

Important: "IOC findings" in this system refer to pattern matches, not confirmed compromises.

Traditional vs. Static Analysis IOCs

Traditional IOCs: Artifacts indicating a system breach (malicious IPs, C2 domains, malware hashes).

Risky Plugins IOCs: String pattern matches found during static code analysis:

  • String matching: URLs, IP addresses, domains, file paths.
  • Hash references: SHA256 hashes, API keys, tokens.
  • Network patterns: HTTP/HTTPS endpoints, WebSocket connections.

Adblocker Scenario

Legitimate adblockers contain thousands of domain patterns and hashes.

// Legitimate adblocker code
const adDomains = ['doubleclick.net', 'googlesyndication.com', ...];

Traditional detection flags these as threats. This system uses logarithmic damping and weighting to score such extensions as MEDIUM rather than CRITICAL.

Criticality Measurement

Risk Score Calculation (0-100 Scale)

The risk score combines weighted factors with capping.

1. Finding Severity Base Weights

Severity   Base Points   Cap           Reasoning
Critical   25 points     3 findings    Severe threats (credential theft, malware).
High       10 points     5 findings    Serious issues (exposed secrets).
Medium     3 points      10 findings   Moderate concerns (suspicious patterns).
Low        1 point       20 findings   Minor issues (code quality).
Info       0.1 points    50 findings   Informational notices.

Caps ensure that severity diversity, rather than sheer finding count, drives the score.
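Under the weights and caps above, accumulating the base score can be sketched as follows. The object shape and function name are illustrative assumptions, not the production implementation.

```javascript
// Per-severity base points and finding caps, taken from the table above.
const SEVERITY = {
  critical: { points: 25,  cap: 3 },
  high:     { points: 10,  cap: 5 },
  medium:   { points: 3,   cap: 10 },
  low:      { points: 1,   cap: 20 },
  info:     { points: 0.1, cap: 50 },
};

// counts: finding totals per severity, e.g. { critical: 1, medium: 40 }.
// Each severity contributes at most cap * points, so a flood of one
// severity cannot dominate the score.
function baseScore(counts) {
  let total = 0;
  for (const [severity, n] of Object.entries(counts)) {
    const { points, cap } = SEVERITY[severity];
    total += Math.min(n, cap) * points;
  }
  return total;
}
```

For example, 40 medium findings contribute only 10 × 3 = 30 points, the same as 10 medium findings, because the cap has been reached.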

2. Finding Type Confidence Multipliers

Each finding type carries a multiplier reflecting its typical false positive rate:

  • Malware Signatures: 1.5x (highest confidence).
  • Exposed Secrets: 1.3x (high confidence).
  • Network Activity: 0.8x (often a legitimate necessity).
  • IOC Patterns: 0.3x (high false positive rate).
  • Obfuscation: 0.6x (bundling and minification are standard practice).
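Applying these multipliers to a finding's base points might look like the sketch below; the type keys and function name are assumptions for illustration.

```javascript
// Confidence multipliers per finding type, from the list above.
// Lower multipliers discount finding types that frequently false-positive.
const TYPE_MULTIPLIER = {
  malware_signature: 1.5,
  exposed_secret:    1.3,
  network_activity:  0.8,
  ioc_pattern:       0.3,
  obfuscation:       0.6,
};

// Scale a finding's base points by its type multiplier.
// Unknown types fall back to a neutral 1.0.
function weightedPoints(basePoints, findingType) {
  return basePoints * (TYPE_MULTIPLIER[findingType] ?? 1.0);
}
```

A high-severity IOC match (10 base points) thus contributes only 3 points, while a high-severity malware signature (10 base points) contributes 15.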

3. Logarithmic Damping

Logarithmic damping reduces the impact of high-volume findings.

IOC Count   Raw Score Impact   After Damping   Reduction
10          Low                Low             0%
100         Medium             Low-Medium      ~33%
1,000       High               Medium          ~67%
10,000      Critical           Medium          ~87%
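One way the damping in the table could be realized is to retain a fraction of the raw IOC score that shrinks with each decade of findings. The 10-finding threshold, the per-decade slope, and the 0.13 floor below are illustrative constants chosen to roughly reproduce the table's reductions, not the production values.

```javascript
// Illustrative logarithmic damping: full weight up to 10 findings,
// then the retained fraction drops by 1/3 per decade, floored at 13%.
function dampingFactor(count) {
  if (count <= 10) return 1;
  const decades = Math.log10(count) - 1; // decades beyond the first 10
  return Math.max(0.13, 1 - decades / 3);
}

// Apply damping to a raw finding-derived score.
// e.g. dampingFactor(1000) ≈ 0.33, a ~67% reduction.
function dampedScore(rawScore, count) {
  return rawScore * dampingFactor(count);
}
```

The effect is that 10,000 IOC matches score only modestly higher than 1,000, matching the intent that volume alone should not push an extension into CRITICAL.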

4. YARA Rule Confidence Weighting

Malware detection uses YARA rules with reliability metadata.

Confidence   FP Rate   Weight Applied   Example
≥85%         ≤15%      100%             High-confidence malware.
60-84%       16-30%    50%              Suspicious patterns.
<60%         >30%      25%              Experimental rules.
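The confidence tiers translate directly into a weight lookup. Thresholds come from the table above; the function name is an assumption.

```javascript
// Map a YARA rule's stated confidence (0-100) to the weight applied
// to its matches, using the tiers from the table above.
function yaraWeight(confidencePercent) {
  if (confidencePercent >= 85) return 1.0;  // high-confidence malware rules
  if (confidencePercent >= 60) return 0.5;  // suspicious-pattern rules
  return 0.25;                              // experimental rules
}
```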

Risk Category Classification

Category   Score Range   Criteria                                          User Action
CRITICAL   ≥90           High-confidence malware + severe threats.         Do not install.
HIGH       70-89         Malware OR exposed secrets + critical findings.   Review carefully.
MEDIUM     45-69         Multiple medium findings OR suspicious patterns.  Proceed with caution.
LOW        15-44         Minor security findings.                          Generally safe.
MINIMAL    <15           No significant threats.                           Safe to install.
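The classification above reduces to a simple threshold mapping; the function name is assumed for illustration.

```javascript
// Map a 0-100 risk score to the risk categories defined in the table above.
function riskCategory(score) {
  if (score >= 90) return "CRITICAL";
  if (score >= 70) return "HIGH";
  if (score >= 45) return "MEDIUM";
  if (score >= 15) return "LOW";
  return "MINIMAL";
}
```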

Trust Score

Inverted score for user display:

Trust Score = 100 - Risk Score

False Positive Mitigation

1. Adblocker Extensions

Challenge: Massive lists of domains/hashes.

Mitigation:

  1. IOC multiplier: Lowest weight (0.3x).
  2. Logarithmic damping: Non-linear score increase.
  3. Rule exclusions: Explicit exclusion of adblocker patterns.
  4. Category context: "content_blocker" adjustment.

2. Developer Tools

Challenge: Extensive permissions and network access required.

Mitigation:

  1. Permission scoring: Context-aware based on category.
  2. Network activity: Lower multiplier (0.8x).
  3. Obfuscation tolerance: Lower multiplier (0.6x).
  4. Behavior focus: Anomalous patterns prioritized over permission count.

3. Cryptocurrency & Finance

Challenge: Blockchain API interaction and sensitive data handling.

Mitigation:

  1. Secret filtering: Known API endpoints excluded.
  2. Network whitelisting: Known blockchain nodes weighted lower.
  3. Context differentiation: Private keys vs. API endpoints.

Use Cases

Security Researchers

Malware hunting:

  1. Identify extensions with CRITICAL risk scores.
  2. Filter by malware detection confidence.
  3. Review rule matches.
  4. Validate findings.

Threat intelligence:

  • Track malware families.
  • Identify supply chain compromises.
  • Correlate risk patterns.

Enterprise Security Teams

Approval workflow:

  • CRITICAL/HIGH: Auto-reject.
  • MEDIUM: Manual review.
  • LOW/MINIMAL: Auto-approve.
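The gate above can be sketched as a category-to-action mapping; function and action names are assumptions.

```javascript
// Illustrative enterprise approval gate driven by risk category,
// following the workflow described above.
function approvalDecision(category) {
  if (category === "CRITICAL" || category === "HIGH") return "reject";
  if (category === "MEDIUM") return "manual_review";
  return "approve"; // LOW and MINIMAL
}
```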

Compliance:

  • Generate security posture reports.
  • Track approved/rejected extensions.
  • Monitor supply chain vulnerabilities.

Extension Developers

Improvement workflow:

  1. Review risk assessment.
  2. Address actionable threats.
  3. Reduce permissions.
  4. Resubmit.

End Users

Decision support:

  • Risk Score ≥ 90: Do not install.
  • Risk Score ≥ 70: Review carefully.
  • Risk Score ≥ 45: Proceed with caution.
  • Risk Score < 45: Generally safe.

Comparison with Traditional Tools

Aspect              Traditional Scanners   Risky Plugins Risk Calculator
Scoring Model       Binary or raw counts   Weighted 0-100 scale
IOC Handling        Flag every match       Logarithmic damping + multipliers
False Positives     High                   Low
Confidence Levels   Not considered         Weighted by confidence/FP rate
Context Awareness   Generic                Category- and permission-aware
Actionability       Raw lists              Prioritized threats
Transparency        Black box              Detailed breakdown