On TrustModel for Compliant, Transparent AI Hiring: Q&A with Karl Mehta

Interview by Ramesh Chitor

Q1: Enterprise HR and legal teams are under pressure to adopt AI for talent while avoiding the pitfalls now in the spotlight with examples like Eightfold.AI. Can you give us some business context on why this is important for the board and the CXOs?

Consider a Fortune 500 employer that uses multiple AI tools across the talent lifecycle: programmatic sourcing, resume screening, and internal mobility recommendations. Its CHRO and General Counsel are worried about:

  • Hidden “shadow” profiles and scores created on candidates from scraped public data (social, profiles, web traces) without notice or consent.
  • Allegations that such AI-generated reports function as “consumer reports” under FCRA/CCPA, but are not disclosed, cannot be accessed, and cannot be corrected by job seekers.
  • Reputational damage and regulatory risk illustrated by the class‑action lawsuit against Eightfold.AI for secret scoring and ranking of applicants for companies like Microsoft and PayPal.
  • Lack of auditability: HR cannot answer basic questions like “What data did the model use on this candidate?” or “Can the candidate see and correct errors?”

The board has explicitly asked: “How do we avoid being the next Eightfold headline?”

Q2: What is the problem enterprises face without a trusted third-party evaluation platform like TrustModel?

In the current state:

  • AI vendors generate rich candidate profiles (skills, personality tags like “team player” or “introvert”, education quality, predicted future titles) from large scraped datasets.
  • These outputs influence who is shortlisted, interviewed, or rejected, yet applicants typically never see these reports or even know they exist.
  • There is no centralized governance layer across tools; each vendor has its own opaque scoring logic, retention policies, and limited explainability.
  • Legal and compliance teams struggle to map which AI outputs may be “consumer reports” in the FCRA sense, and where applicant rights (disclosure, access, dispute, correction) must be enforced.

Result: The enterprise has adopted powerful AI, but cannot demonstrate basic privacy, fairness, and due‑process safeguards—leaving it exposed to the same theories now being tested against Eightfold.AI.

TrustModel sits as a governance and assurance layer across all talent AI systems, including external vendors and in‑house models.

  1. Centralized model and data inventory
    • Automatically catalogs all models touching candidate or employee data, including external systems similar to Eightfold (scoring, ranking, and profiling tools).
    • Maps which inputs (publicly scraped data, ATS records, assessments) feed which outputs (scores, risk flags, personality traits).
  2. Policy-aware data controls
    • Encodes enterprise policies for FCRA/CCPA/GDPR and internal AI principles (no undisclosed scoring, no irrevocable decisions on opaque profiles, retention limits).
    • Enforces that any model output that could be a “consumer report” automatically triggers the required workflows: applicant notice, access, and dispute channels (see the sketch after this list).
  3. Transparent candidate interactions
    • Generates human‑readable disclosures explaining when and how AI is used in screening, the categories of data involved, and the candidate’s rights—addressing the “secret dossier” concern raised in the Eightfold litigation.
    • Provides candidate‑facing views of key profile attributes (skills inferences, education quality tags, location‑based inferences) wherever policy requires, with structured paths to request corrections.
  4. Risk and compliance monitoring
    • Flags models or vendors that:
      • Rely heavily on scraped or third‑party web data,
      • Use hidden scores in hiring decisions, or
      • Lack mechanisms for access, correction, and contestation.
    • Produces audit‑ready logs showing what data and model outputs were used at each decision point.
  5. Vendor accountability layer
    • Standardizes privacy, transparency, and dispute‑handling requirements in vendor onboarding.
    • Continuously evaluates vendors against these controls—so an Eightfold‑style tool cannot be used in “black box” mode inside the enterprise.
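
To make the policy-enforcement idea concrete, here is a minimal Python sketch of how a governance layer might classify a model output as “consumer‑report‑like” and decide which workflows to trigger. All names here (ModelOutput, enforce_policy, the workflow labels) are illustrative assumptions for this article, not TrustModel’s actual API.

```python
# Hypothetical sketch: classifying an AI output under an enterprise policy
# and deciding which candidate-rights workflows to trigger. All class and
# function names are illustrative assumptions, not TrustModel's real API.

from dataclasses import dataclass

@dataclass
class ModelOutput:
    """An output produced by a talent AI tool for a specific candidate."""
    candidate_id: str
    vendor: str
    output_type: str         # e.g. "fit_score", "personality_tag"
    data_sources: list[str]  # e.g. ["ats_records", "scraped_web_data"]
    used_in_decision: bool   # does this output influence shortlisting?

# Enterprise policy: outputs that score or profile a candidate and influence
# hiring decisions are treated as "consumer-report-like" (FCRA-style).
CONSUMER_REPORT_TYPES = {"fit_score", "personality_tag", "risk_flag"}

def is_consumer_report_like(output: ModelOutput) -> bool:
    return output.output_type in CONSUMER_REPORT_TYPES and output.used_in_decision

def enforce_policy(output: ModelOutput) -> list[str]:
    """Return the workflows the governance layer must trigger."""
    workflows = ["audit_log"]                    # always log for auditability
    if is_consumer_report_like(output):
        workflows += ["candidate_disclosure",    # notice in the portal
                      "access_request_channel",  # let the candidate view it
                      "dispute_and_correction"]  # structured correction path
    if "scraped_web_data" in output.data_sources:
        workflows.append("vendor_data_provenance_review")
    return workflows

# Example: a vendor fit score built partly on scraped public data.
score = ModelOutput("cand-123", "VendorX", "fit_score",
                    ["ats_records", "scraped_web_data"], used_in_decision=True)
print(enforce_policy(score))
# ['audit_log', 'candidate_disclosure', 'access_request_channel',
#  'dispute_and_correction', 'vendor_data_provenance_review']
```

The design point is that the policy lives in one governed layer rather than inside each vendor’s opaque scoring logic, so the same rules apply across every tool in the inventory.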

Q3: Can you walk us through an example flow from real life?

Example flow

  1. A candidate applies via the company career site.
  2. An external AI tool generates a fit score plus an inferred skills profile using ATS data and public professional data.
  3. TrustModel detects that the output falls under the company’s “consumer‑report‑like” category.
  4. TrustModel automatically (sketched in code below):
    • Logs the report, the data sources, and the relying decision.
    • Triggers a disclosure in the candidate portal describing the use of AI and the candidate’s rights.
    • Offers the candidate a way to see key elements of the profile and submit a correction request, which is routed back into the relevant systems.
  5. Compliance can later demonstrate that, unlike in the Eightfold allegations, the enterprise:
    • Disclosed AI‑driven evaluations,
    • Enabled access and correction, and
    • Applied consistent policy controls across vendors.
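
As an illustration of step 4, the following Python sketch shows what the automatic logging-and-disclosure step could look like. The handle_consumer_report function and the audit-entry fields are hypothetical placeholders, not a documented TrustModel interface.

```python
# Hypothetical sketch of step 4: once an output is classified as
# "consumer-report-like", the governance layer logs the report and its
# data sources, posts a disclosure, and opens a correction channel.
# Everything here is an illustrative placeholder, not a real SDK.

import json
from datetime import datetime, timezone

def handle_consumer_report(candidate_id: str, report: dict) -> dict:
    """Log, disclose, and open a dispute path for one AI-generated report."""
    audit_entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "vendor": report["vendor"],
        "data_sources": report["data_sources"],      # what fed the score
        "decision_point": report["decision_point"],  # e.g. "shortlisting"
        "disclosure_sent": True,           # notice shown in candidate portal
        "dispute_channel_opened": True,    # structured correction request path
    }
    # In practice this would write to an append-only audit store and notify
    # the candidate portal; here we just print and return the entry.
    print(json.dumps(audit_entry, indent=2))
    return audit_entry

handle_consumer_report("cand-123", {
    "vendor": "VendorX",
    "data_sources": ["ats_records", "public_professional_data"],
    "decision_point": "shortlisting",
})
```

An append-only audit record per decision point is what lets compliance later reconstruct exactly which data and model outputs influenced each hiring step.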

Q4: What is the value for the enterprise?

With TrustModel, the enterprise can:

  • Reduce exposure to FCRA/CCPA‑style claims about hidden AI reports and secret scoring.
  • Protect brand trust with candidates by replacing opaque dossiers with governed, explainable, and contestable profiles.
  • Give CHROs and GCs a single pane of glass for AI usage in talent decisions, closing the accountability gaps highlighted by the Eightfold lawsuit.

Q5: Where can I learn more about the use cases and the product?

Learn more at www.trustmodel.ai and lead with #trust, #transparency, and #safety.

Resources

  • www.trustmodel.ai (now on GCP Marketplace)

………………………………………………………….

Karl Mehta is a serial entrepreneur, author, investor, engineer, and civil servant with over 20 years of experience in founding, building, and funding technology companies in the U.S. and international markets. He is currently Founder & CEO of EdCast Inc., an AI-powered knowledge-cloud platform company backed by Stanford University & Softbank Capital. He is a former venture partner at Menlo Ventures, a leading Silicon Valley VC firm with over $4B under management.

Previously, he was the Founder & CEO of PlaySpan Inc., acquired by Visa Inc. (NYSE:V), the world’s largest payment network. Karl also served as a White House Presidential Innovation Fellow, selected by the Obama Administration during the inaugural 2012-13 term. In 2014 he was appointed by Governor Brown to the Workforce Investment Board of the State of California. In 2010, Karl won the “Entrepreneur of the Year” award from Ernst & Young for Northern California. Karl is on the board of Simpa Networks and on the advisory boards of Intel Capital and Chapman University’s Center of Entrepreneurship.

Karl is the founder of several non-profits, including Code For India (http://CodeforINDIA.org) and Grassroots Innovation (http://grassrootsinnovation.org). He is the author of “Financial Inclusion at the Bottom of the Pyramid” (http://www.openfininc.org).

………………………………………………………….

Ramesh Chitor

Ramesh Chitor is a seasoned business leader with over 20 years of experience in the high-tech industry, currently working for Mondee. Ramesh brings a wealth of expertise in strategic alliances, business development, and go-to-market strategies. His background includes senior roles at prominent companies such as IBM, Cisco, Western Digital, and Rubrik, where he served as Senior Director of Strategic Alliances. Ramesh is actively contributing as a Business Fellow for Perplexity.

Ramesh is a value- and data-driven leader known for his ability to drive successful business outcomes by fostering strong relationships with clients, partners, and the broader ecosystem. His expertise in navigating complex partnerships and his forward-thinking approach to innovation will be invaluable assets to Perplexity’s growth and strategic direction. Connect with him on LinkedIn.

Sponsored by Chitor.
