A Pilot Study Using Machine Learning and Domain Knowledge To Facilitate Comparative Effectiveness Review Updating
Contributor(s): Agency for Healthcare Research and Quality (Author), U.S. Department of Health and Human Services (Author)
ISBN: 1483925374     ISBN-13: 9781483925370
Publisher: Createspace Independent Publishing Platform
OUR PRICE: $15.19
Product Type: Paperback
Published: March 2013
Additional Information
BISAC Categories:
- Medical | Research
Physical Information: 0.11" H x 8.5" W x 11.02" L (0.32 lbs) 52 pages
 
Descriptions, Reviews, Etc.
Publisher Description:
Comparative effectiveness reviews need to be updated to maintain their relevance, but these updates are often impeded by the need to screen thousands of citations to locate the 1-10 percent that are included in the final report ("relevant studies"). Such effort may match or exceed that involved in the original review. Prior studies have used machine learning methods to reduce the burden of comparative effectiveness review screening but have not formally simulated updating. We aimed to create a prototype system for assisting researchers with preparing formal updates of comparative effectiveness reviews.

In this report, we describe a pilot study using reviewer decisions from two Agency for Healthcare Research and Quality (AHRQ)-sponsored comparative effectiveness reviews to empirically derive statistical models that predict article relevance to efficacy/effectiveness and adverse effect analyses; we then evaluated these models' performance in identifying relevant articles from the literature searches retrieved for the updated reviews. We based these statistical models on two algorithms: gradient boosting machine (GBM) and generalized linear models with convex penalties (GLMnet). Each model predicted an article's relevance based on how its indexing terms described a small number of key concepts, such as publication type, intervention, and outcome.

The key challenge was accounting for how search strategies, therapies, outcomes, research personnel, and overall objectives may have changed from the original to the updated study. Consistent with an earlier study noting that a high proportion of reviews undergo minor or major changes, both search strategies underwent major revisions. To overcome such challenges (known as "concept drift" in other contexts), we represented specific drugs and outcomes as more abstract concepts such as "intervention" and "outcome," with the hypothesis that this abstraction would improve generalizability between time periods.
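To make the described approach concrete, the sketch below is not drawn from the report itself; it uses scikit-learn stand-ins for the two named algorithms (LogisticRegression with an elastic-net penalty in place of GLMnet, GradientBoostingClassifier in place of GBM), and the indexing terms, the concept-abstraction map, and the toy relevance labels are all hypothetical.

    # Minimal sketch (not the authors' pipeline) of the two ideas in the
    # description: (1) represent each article by its indexing terms, with
    # specific drugs/outcomes abstracted to generic concepts to mitigate
    # concept drift, and (2) fit GBM- and GLMnet-style classifiers that
    # predict relevance from those terms. All term lists and labels are toy.
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical abstraction map: collapse specific drugs/outcomes into
    # broad concepts shared by the original and updated reviews.
    ABSTRACT = {
        "atorvastatin": "INTERVENTION",
        "simvastatin": "INTERVENTION",
        "rosuvastatin": "INTERVENTION",
        "myopathy": "ADVERSE_EFFECT",
        "ldl_reduction": "OUTCOME",
    }

    def abstract_terms(terms):
        """Replace specific terms with generic concept labels."""
        return " ".join(ABSTRACT.get(t, t) for t in terms)

    # Toy training data: indexing terms per article plus a relevance label
    # taken from the original review's screening decisions.
    articles = [
        (["randomized_controlled_trial", "atorvastatin", "ldl_reduction"], 1),
        (["randomized_controlled_trial", "simvastatin", "myopathy"], 1),
        (["editorial", "health_policy"], 0),
        (["case_report", "myopathy"], 0),
    ]
    X_text = [abstract_terms(terms) for terms, _ in articles]
    y = [label for _, label in articles]

    # GLMnet-style model: logistic regression with an elastic-net penalty.
    glmnet_like = make_pipeline(
        CountVectorizer(binary=True),
        LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, max_iter=5000),
    )
    # GBM-style model: gradient-boosted decision trees.
    gbm_like = make_pipeline(
        CountVectorizer(binary=True),
        GradientBoostingClassifier(n_estimators=50),
    )
    for model in (glmnet_like, gbm_like):
        model.fit(X_text, y)

    # Score a citation retrieved for the update; high scores are screened first.
    update = [abstract_terms(["randomized_controlled_trial",
                              "rosuvastatin", "ldl_reduction"])]
    print(glmnet_like.predict_proba(update)[:, 1])
    print(gbm_like.predict_proba(update)[:, 1])

Note how the update-period drug (rosuvastatin, unseen in training) maps to the same INTERVENTION concept as the original review's drugs, so models fitted on the original screening decisions can still score the new citation; that is the abstraction's hedge against concept drift in this toy setting.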