Trust Metrics combines state-of-the-art statistical modeling techniques with proprietary data engineering and unique editorial know-how to produce site and page ratings. Our fully automated ratings are the result of a systematic search for statistical formulas that relate the 14,000+ data measurements we collect to carefully curated, human-generated “expert scores.”
This combination enables Trust Metrics to produce a large number of unique algorithms that effectively and accurately identify many different classifications for advertisers, publishers, networks, and platforms.
Man vs. Trust Metrics
As algorithms are released into production, their accuracy is equal to, and often better than, that of the people who classify sites. There is a fundamental uncertainty in judging a site: two people can maintain a professional degree of disagreement about the correct score for a given site. Trust Metrics quantifies and corrects for that natural human error and provides the most accurate scores possible.
Trust Metrics measures its accuracy by performing cross-validation to ensure that our models are tuned for maximum accuracy without being statistically over-fitted. We double-check performance on “holdout sets” and apply ongoing human quality assurance to every algorithm.
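The cross-validation workflow above can be sketched in a few lines. This is a minimal illustration, not Trust Metrics' actual pipeline: the "model" is just the mean of the training labels, and the function name `k_fold_mae` is ours for the example.

```python
def k_fold_mae(labels, k):
    """k-fold cross-validation for a toy mean-predictor model.

    Returns the mean absolute error on each validation fold, so
    over-fitting would show up as a gap between training and fold error.
    """
    scores = []
    n = len(labels)
    for fold in range(k):
        # Every k-th example is held out as this fold's validation set.
        val = set(range(fold, n, k))
        train = [labels[i] for i in range(n) if i not in val]
        pred = sum(train) / len(train)  # "fit": predict the training mean
        scores.append(sum(abs(labels[i] - pred) for i in val) / len(val))
    return scores

labels = [3.0, 3.2, 2.8, 3.1, 2.9, 3.0]   # hypothetical expert scores
cv_scores = k_fold_mae(labels, k=3)
avg_cv = sum(cv_scores) / len(cv_scores)  # averaged fold error
```

In practice the remaining holdout set is scored only once, after model selection, so it gives an unbiased final estimate.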
Trust Metrics applies multiple statistical and machine-learning methodologies, such as lasso and ridge regression for high-dimensional modeling, and we apply linear or logistic varieties of these techniques to create the best algorithm for any potential rating.
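To make the penalized-regression idea concrete, here is a minimal single-feature ridge sketch (our own illustration, not Trust Metrics' implementation). With one feature and no intercept, the closed-form solution is w = Σxy / (Σx² + λ), so the L2 penalty visibly shrinks the coefficient toward zero.

```python
def ridge_1d(xs, ys, lam):
    """One-feature ridge regression without an intercept.

    lam = 0 recovers ordinary least squares; larger lam shrinks
    the fitted coefficient toward zero to guard against over-fitting.
    """
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.1, 5.9, 8.2]            # roughly y = 2x, with noise

w_ols = ridge_1d(xs, ys, lam=0.0)    # unpenalized least-squares fit
w_ridge = ridge_1d(xs, ys, lam=5.0)  # penalized fit, pulled toward zero
```

Lasso replaces the squared (L2) penalty with an absolute-value (L1) penalty, which can drive coefficients exactly to zero and so performs feature selection in high-dimensional settings; logistic varieties swap the squared-error loss for a classification loss.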
Our computer scientists are currently working on ratings solutions for mobile, video, image processing and multiple languages. If you’d like to be alerted when these new ratings are available, please send us an email. email@example.com